These are the questions that the EIT admin staff get asked the most. If you don't find what you're looking for either here or in the documentation section, please contact the FRCE administrators through email or through a Service Now ticket. We are also always open to suggestions on what to add if you feel that something is missing from this site.
Any person in the NCI with an active NIH account can request access to the cluster through this form. An email notification will be sent to you when the account is active.
Access is only through ssh to batch.ncifcrf.gov from within the NIH network. For Windows systems, PuTTY, MobaXterm, and Bitvise are all popular and free. Mac and Linux users can use ssh directly from the command line. You must be on the NIH network, either directly connected or on VPN.
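On Mac or Linux (or Windows with the OpenSSH client), the login looks like the following sketch; "jdoe" is a placeholder, not a real account:

```shell
# Replace "jdoe" with your NIH username. This only works from the NIH
# network, either directly connected or on VPN.
ssh jdoe@batch.ncifcrf.gov
```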
As of 07/17/2023, we are halfway through migrating the compute node operating system from CentOS7 to OEL8.
Since the default partition on FRCE is norm, which consists of CentOS7 nodes, requesting OEL8 nodes requires adding "-p norm-oel8" or "-p gpu-oel8" to the srun or sbatch command line. Alternatively, these options can be added to SLURM job scripts.
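As a sketch, a minimal SLURM job script targeting the OEL8 partition might look like this; the job name, time limit, and test command are illustrative, not required:

```shell
#!/bin/bash
#SBATCH --job-name=oel8-check   # illustrative job name
#SBATCH -p norm-oel8            # request OEL8 nodes (use gpu-oel8 for GPU jobs)
#SBATCH --time=00:05:00         # illustrative time limit

# Print the OS release to confirm which operating system the node runs.
cat /etc/os-release
```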
Though /mnt/nasapps/ on the OEL8 nodes has the same basic file structure as on the CentOS7 nodes, they are two separate volumes. Currently, the head node is still on CentOS7 and can only access the CentOS7 nasapps volume.
The default shell for all users is /bin/bash, and this setting comes from Active Directory. It cannot be changed. However, you can add code to your login init script that will change the shell whenever you log in. Edit .bashrc and add this code at the end of the file.
```shell
if [ "$PS1" ]; then
    export SHELL=/bin/zsh
    exec /bin/zsh --login
fi
```
The if statement is necessary for the change to apply only to interactive logins. If it were not there, the change would prevent some non-interactive applications, like file transfers, from working.
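The guard works because non-interactive shells (scripts, scp/sftp sessions) have PS1 unset, so the guarded block is skipped for them. A small sketch that simulates both cases (the helper function is illustrative, not part of the real snippet):

```shell
#!/bin/bash
# Demonstrates the $PS1 guard by simulating both kinds of shell.
check_shell_switch() {
    # $1 stands in for the value of $PS1
    if [ "$1" ]; then
        echo "interactive: would exec /bin/zsh"
    else
        echo "non-interactive: shell left unchanged"
    fi
}

check_shell_switch '$ '   # interactive shells have a prompt string set
check_shell_switch ''     # scripts and file transfers have PS1 unset
```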
An interactive session can be started with the srun command. srun has to be told to open an interactive terminal and then to use a shell as the command to run. An example is:
srun --export ALL --pty -p short bash
X11 applications should run on the head node so long as you have an X11 server running on your desktop and your ssh client is configured for X11 forwarding. You can confirm that X11 applications will display on your desktop by logging into the head node and running a basic program like xclock.
To run any non-trivial application, the srun command is very similar, but with one extra flag to enable X11 forwarding:
srun --export ALL --pty --x11 -p short bash
It is recommended to request a compute node for any CPU-intensive tasks. GPUs are only available in the gpu and gpuib partitions.
- Request a compute node:
- srun --pty bash
- Or request a gpu node:
- srun --pty --partition gpu --gres=gpu:1 bash
- Run vncserver from compute node
- You will get standard output such as: "Log file is /home/qiangn/.vnc/fsitgl-hpc010p.ncifcrf.gov:1.log".
- The VNC service uses ports starting at 5900, so VNC session :1 listens on port 5901. This is also shown in the log file message above.
- From another terminal on your computer,
- ssh -L 6001:fsitgl-hpc010p:5901 fsitgl-head01p
- The name "hpc010p" should match the node name in the message above. The 6001 port is a local port that you are going to connect to; you can choose a different high port.
- From a vncviewer or screen share client on Mac, connect to localhost:6001.
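The port arithmetic behind these steps is simple: VNC display :N listens on TCP port 5900+N. A small sketch that rebuilds the tunnel command from the example above (the node name, display number, and local port are taken from that example):

```shell
#!/bin/bash
# VNC display :N listens on port 5900+N.
NODE="fsitgl-hpc010p"   # compute node from the vncserver log message
DISPLAY_NUM=1           # the ":1" in the log message
LOCAL_PORT=6001         # any free high port on your own machine

VNC_PORT=$((5900 + DISPLAY_NUM))
echo "ssh -L ${LOCAL_PORT}:${NODE}:${VNC_PORT} fsitgl-head01p"
```

Running this prints the same tunnel command shown in the steps above.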
The port tunneling process is similar to that for Jupyter (https://ncifrederick.cancer.gov/staff/frce/documentation/jupyter), but with different ports to be tunneled.
There is a template sbatch script to set up the VNC server and the tunneling.
- On the FRCE head node, run "module load frce" and then "sbatch_templates.sh".
- You will get a directory ./FRCE_EXAMPLES/slurm/.
- Run "sbatch ./FRCE_EXAMPLES/slurm/vnc.sh"
- While that SLURM job is running, you will get an email containing the command lines to set up the ssh tunnel and the VNC client.
Biowulf is a completely separate HPC system. Information on how to apply for an account is on their accounts page.
Issues concerning Biowulf go through the NIH ticketing system.
Users can email staff@hpc.nih.gov for Biowulf-related information. More contact info can be found on their contact page. The webpage also includes information about making requests through the NIH IT helpdesk.