Biowulf & FRCE differences

Biowulf and FRCE have many features in common, and most programs and scripts written on one will run on the other with only minor changes. Both run operating systems based on Red Hat Linux, and both use Slurm as the resource manager and job scheduler. On both, the module command is used to make specific software packages available. There are, however, some differences.
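
For example, software is enabled in the same way on both clusters with the module command. The samtools module below is only an illustration; the installed modules and versions differ between the two systems.

    # list the software modules installed on this cluster
    module avail
    # load a package and use it (module name is illustrative)
    module load samtools
    samtools --version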

  • Biowulf allows jobs to allocate scratch space that is automatically cleaned up when the job completes. FRCE has a permanent scratch area that the user is responsible for maintaining (see the scratch sketch after this list).
  • Biowulf has a few commands specific to its environment that would be difficult to support on FRCE. Examples include swarm and freen (shown after this list).
  • FRCE is integrated with NCI-F-specific storage designed for both long-term data and high-speed scratch and data access. Biowulf has similar capabilities. However, NCI laboratories can have workflows that push instrument data directly onto the NCI-F storage, and this data is immediately accessible from FRCE.
  • FRCE allows remote systems such as web servers to submit jobs to the cluster using either sbatch commands or the Slurm API (see the remote-submission sketch after this list). Biowulf offers a similar capability through the Palantir service (as does FRCE), but Palantir is more limited than sbatch.
  • Both systems are integrated with Active Directory (AD) for password management. FRCE takes this one step further and allows the use of AD service accounts on the system, which is particularly useful for web server applications. There is, however, a difference in how AD is implemented on the two systems: Biowulf assigns account uids/gids sequentially, while FRCE uses the uids/gids defined in the user's AD profile. Because of this difference, it is not possible to share NFS storage between the two installations (see the id example after this list).
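
As a sketch of the scratch-space difference: a Biowulf job requests job-local scratch at submission time and uses the automatically created (and automatically removed) directory, while an FRCE job writes to the permanent scratch area and must clean up after itself. The FRCE path below is an assumption for illustration only; check the FRCE documentation for the actual location.

    # Biowulf: request 10 GB of job-local scratch at submission time ...
    sbatch --gres=lscratch:10 myjob.sh
    # ... and inside myjob.sh use the auto-cleaned directory
    cd /lscratch/$SLURM_JOB_ID

    # FRCE: inside the job script, use (and later remove) a directory in the
    # permanent scratch area; this path is illustrative only
    mkdir -p /scratch/$USER/$SLURM_JOB_ID
    cd /scratch/$USER/$SLURM_JOB_ID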
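
The Biowulf-specific commands mentioned above look like the following; the swarm file name and resource values are illustrative.

    # run each line of commands.swarm as its own subjob,
    # giving every subjob 4 GB of memory and 2 CPUs
    swarm -f commands.swarm -g 4 -t 2
    # report currently free nodes and cores per partition
    freen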
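
A minimal sketch of remote submission to FRCE is shown below. The hostnames, REST endpoint version, and token handling are assumptions that depend on how the site has configured slurmrestd; only the general pattern is intended.

    # submit over ssh with a plain sbatch command (hostname is illustrative)
    ssh user@frce-login "sbatch /path/to/job.sh"

    # or POST a job to the Slurm REST API; the URL, API version, and JSON
    # fields below are schematic and must be adapted to the local setup
    curl -s -X POST "https://frce-slurm.example/slurm/v0.0.38/job/submit" \
         -H "X-SLURM-USER-NAME: $USER" \
         -H "X-SLURM-USER-TOKEN: $SLURM_JWT" \
         -H "Content-Type: application/json" \
         -d '{"job": {"name": "remote-job", "current_working_directory": "/home/user", "environment": {"PATH": "/bin:/usr/bin"}}, "script": "#!/bin/bash\nhostname"}'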
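
Because the numeric ids come from different sources, the same account typically maps to different uid/gid values on the two clusters, which can be confirmed directly:

    # run on each cluster and compare the numeric uid/gid values; they will
    # generally differ, which is why NFS exports cannot be shared between them
    id $USER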

One final obvious difference: Biowulf is about 25 times the size of FRCE, both in the number of servers and in the total number of cores.

If you are transitioning a workflow from Biowulf to FRCE and need assistance in modifying scripts or storage locations, please contact the HPC staff.