Accessing the FOCI Cluster (Nov 2024)

NOTE: If you're looking to access RPI's Quantum One system via the FOCI Cluster, see the dedicated Quantum One access page instead.


  • The Rensselaer FOCI Cluster (formerly the "IDEA Cluster") is a high-performance computing environment consisting of six virtualized compute servers hosted on two AMD servers in various configurations, ranging from 24-40 cores (48-80 threads) and 256GB-1TB RAM, with up to four GPUs per machine (Nvidia Ampere A100 GPUs). The FOCI Cluster includes one dedicated storage server totaling more than 40TB of usable space. The FOCI Cluster is designed for dedicated data mining, machine learning, and neural computing-intensive jobs using popular toolkits.
  • FOCI Cluster "power" users, please be mindful of how you are consuming limited resources! The FOCI Cluster is NOT intended for large, "production" research computing. For that, RPI provides world-class assets such as those available via the CCI. If you have computing jobs on the Cluster that are preventing other users from getting their work done, you will be asked to migrate your work to the CCI. The FOCI Cluster is ideal for prototyping large jobs whose ultimate destination is the CCI.
     
  • Student Access:
    • Students of FOCI/IDEA-sponsored courses (e.g. MATP-4400, MATP-4910) are automatically added for the term of the course. Class privileges end with the end of the term, but home directories are preserved.
    • The Cluster provides access to: RStudio, Jupyter, Python, MATLAB, GPUs (on some nodes; see Cluster Details), and ample storage and compute.
    • Access via the RPI physical network or RPI VPN required
    • Priority is given to FOCI researchers and students in FOCI/IDEA-sponsored courses (e.g. MATP-4910, MATP-4400)
  • Web links to RStudio and Jupyter GUIs:
  • GUI access to MATLAB is possible via port forwarding; the command line is recommended! (Contact John Erickson)
  • Linux terminal accessible from within RStudio "Terminal" or via ssh (below)
  • Shared Data on the Cluster:
    • All idea_users have access to shared storage via /data ("data" in your home directories)
    • Permissions are best managed via the Linux terminal (see above)
  • Shell access to individual nodes: You must access the "landing pad" first, then a specific compute node:
    • ssh your_rcs@lp01.idea.rpi.edu, then ssh idea-node-XX
    • For example:
      1. ssh erickj4@lp01.idea.rpi.edu
      2. Then, ssh idea-node-02 (to access "Node 02," a GPU node)
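The two-hop login above can be streamlined with OpenSSH's ProxyJump option. A sketch of a ~/.ssh/config entry, assuming the hostnames from the example (your_rcs is the RCS ID placeholder; the "foci-lp" and "foci-node02" aliases are made-up names you can change):

```
# ~/.ssh/config -- one-command access to a compute node through the landing pad
Host foci-lp
    HostName lp01.idea.rpi.edu
    User your_rcs

Host foci-node02
    HostName idea-node-02
    User your_rcs
    ProxyJump foci-lp
```

With this in place, `ssh foci-node02` connects through lp01 automatically, and options such as `-L` port forwarding (useful for the MATLAB GUI mentioned above) can be added to the same command.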

Help for R users new to Linux & github

Help for IDEA Cluster Python Users

Help for General IDEA Cluster Issues

Contact John Erickson for details! erickj4@rpi.edu