The Williams HPC cluster is a shared Linux computing resource supporting research and teaching. It is available to all faculty, staff, and students who need high-performance computing in their work and study at Williams. To request an account, please email us at firstname.lastname@example.org.
- 1 head node
- 4 compute nodes, for a total of 256 cores and 896 GB RAM:
  - 1 × 64 cores with 128 GB RAM
  - 3 × 64 cores with 256 GB RAM
Queues and Scheduler
The cluster uses TORQUE for resource management and Maui for job scheduling. The scheduling policy combines fair-share priorities with backfill to give all users fair access to cluster resources. The current queue setup:
| name | max nodes / cores | max walltime | base priority | description |
|------|-------------------|--------------|---------------|-------------|
| debug | 2 / 32 | 1 hour | highest | for debugging |
| hpcc | 4 / 256 | 504 hours | normal | for all normal jobs |
| long | 2 / 128 | 505–720 hours | low | for jobs that run between 21 and 30 days |
| matlab | 4 / 96 | 720 hours | normal | dedicated to MATLAB MDCS |
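With TORQUE, the queue is chosen at submission time with `qsub -q`. A minimal sketch (the script name `myjob.sh` is a placeholder, not a file that exists on the cluster):

```shell
# Quick test run in the debug queue (walltime capped at 1 hour)
qsub -q debug myjob.sh

# A job expected to run between 21 and 30 days goes to the long queue
qsub -q long myjob.sh
```

Jobs submitted without `-q` land in the default queue, so very long jobs should name the `long` queue explicitly.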
- MATLAB MDCS
- MrBayes, Topcom, Macaulay2, Polymake, IMa2p, and more…
Feel free to install software for yourself. If you would prefer that we install it for you, contact email@example.com.
Request an Account
The first step in gaining access to the cluster is requesting an account. Please email firstname.lastname@example.org for more information.
Before you begin using the cluster, here are some important guidelines:
- Do not run jobs or do real work on the head node (also known as the login node). Always allocate a compute node and run your programs there.
- Never give your password or ssh key to anyone else.
- Clean up after yourself by releasing unused jobs and removing unneeded files.
hpcc.williams.edu is accessed via a protocol called Secure Shell (SSH). You can use ssh directly from a terminal: on a Mac, use the built-in Terminal; on Windows, you can use PuTTY. If you want to access the cluster from outside Williams, you must first connect to the Williams VPN. For more information on SSH and how to connect to the cluster with your application and operating system of choice, please see getting-started.
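From a Mac or Linux terminal, connecting looks like this (`username` is a placeholder for your own Williams username):

```shell
# Log in to the cluster head node
ssh username@hpcc.williams.edu
```

You will be prompted for your password, after which you land in your home directory on the head node.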
Transfer Your Files
You will likely find it necessary to copy files between your local machine and the cluster. Just as with logging in, there are several ways to do this, depending on your local operating system. We support the SFTP, SSHFS, SCP, and SMB protocols. Please see getting-started for more information.
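As a sketch of the SCP and SFTP options from a Mac or Linux terminal (`username` and the file names are placeholders):

```shell
# Copy a local file to your home directory on the cluster
scp results.csv username@hpcc.williams.edu:~/

# Copy a whole directory from the cluster back to your local machine
scp -r username@hpcc.williams.edu:~/project ./project

# Or open an interactive SFTP session instead
sftp username@hpcc.williams.edu
```

Graphical SFTP clients (e.g. on Windows) use the same host name and credentials.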
To serve the diverse software needs of an HPC environment, we use a module system to manage software. This allows you to swap between different applications, and versions of those applications, with relative ease, so you can focus on getting your work done rather than compiling software. Please see the Software Guide for more information. If you find software that you’d like to use that isn’t available, feel free to contact email@example.com.
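Typical module commands look like the following; the module name `matlab` here is only an example, and the actual names and versions available on the cluster may differ (use `module avail` to see them):

```shell
module avail            # list all software modules installed on the cluster
module load matlab      # add a package to your environment
module list             # show the modules you currently have loaded
module unload matlab    # remove it again when you are done
```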
Schedule a Job
You control your jobs through a job scheduling system that dedicates and manages compute resources for you. This is done in one of two ways. For testing and debugging, you may want to run your job interactively; this lets you interact with the compute node(s) in real time to make sure your code works and your jobs will run as expected. The other way, which is preferred for large and long-running jobs, is to write your job commands in a script and submit it to the job scheduler. Please see getting-started for more information.
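A minimal TORQUE batch script might look like the sketch below; the job name, resource requests, and program are placeholders you would replace with your own:

```shell
#!/bin/bash
#PBS -N example_job          # job name (placeholder)
#PBS -q hpcc                 # queue, chosen from the table above
#PBS -l nodes=1:ppn=8        # request 1 node with 8 cores
#PBS -l walltime=24:00:00    # maximum run time of 24 hours
#PBS -j oe                   # merge stdout and stderr into one output file

cd $PBS_O_WORKDIR            # start in the directory the job was submitted from
./my_program input.dat       # placeholder for your actual program and input
```

Submit it with `qsub job.sh`. For the interactive style, `qsub -I -q debug -l nodes=1:ppn=4` allocates a compute node and gives you a shell on it.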
Current Status of the Cluster
The cluster is monitored using Ganglia, a cluster monitoring system. You can check the status of the cluster and its load live from this link.
New to Linux?
You don’t need to be a Linux expert to use the cluster, but familiarity with basic Linux commands is required for interacting with it. We have a Unix Commands Cheat Sheet that can help you get started.
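A few of the everyday commands you will use on the cluster, shown on hypothetical file names:

```shell
mkdir project              # create a directory
cd project                 # change into it
echo "hello" > notes.txt   # create a small text file
ls                         # list the files here: notes.txt
cat notes.txt              # print the file's contents: hello
cd ..                      # go back up one level
rm -r project              # remove the directory and everything in it
```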