From Console to Clusters: Mastering Slurm, Data Strategies, and High-Performance Computing at Princeton, 2/19 and 2/20
Registration
Registration is now closed (this event already took place).
Details
This two-part workshop introduces the Research Computing ecosystem at Princeton: the computing clusters (Nobel, Adroit, Della, Stellar, Tiger and Traverse), the storage systems, and the data visualization machines. After an overview of the different systems and the sorts of tasks each is geared toward, the course gives users a hands-on introduction to technical topics including how to connect to the clusters; how to manage file storage; how to access or install additional software; and how to launch jobs through our scheduling software (Slurm).
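As an illustration of the kind of Slurm usage the workshop covers, a minimal batch script might look like the sketch below. The job name, resource requests, module, and program are placeholders rather than values specific to Adroit; consult the guide linked under the prerequisites for cluster-specific settings.

    #!/bin/bash
    #SBATCH --job-name=myjob          # short name for the job
    #SBATCH --nodes=1                 # number of nodes
    #SBATCH --ntasks=1                # total number of tasks
    #SBATCH --cpus-per-task=1         # CPU cores per task
    #SBATCH --mem-per-cpu=4G          # memory per CPU core
    #SBATCH --time=00:10:00           # run time limit (HH:MM:SS)

    module purge
    module load anaconda3             # placeholder; run "module avail" to see the exact module names on the cluster

    python myscript.py                # placeholder for your own program

A script like this would be submitted with "sbatch job.slurm" and monitored with "squeue -u <YourNetID>"; commands of this sort are what the Slurm portion of the workshop introduces.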
Workshop format: Interactive presentation, with hands-on activities.
Target audience: New users of the Princeton Research Computing systems and experienced users looking to improve their cluster computing skills.
Knowledge prerequisites: A working facility with the Linux command line. This requirement is covered in "Introduction to the Linux Command Line". If you cannot satisfy this prerequisite, consider attending "Big Data, Easy Access: Exploring Princeton's Clusters for Social Sciences and Humanities", which shows participants how to use the Research Computing clusters using only a web browser.
Hardware/software prerequisites: For this workshop, users must have an account on the Adroit cluster, and they should confirm that they can SSH into Adroit several hours beforehand (a sample connection command appears below). Request an account on Adroit: https://bit.ly/3wicSaH (VPN required if off-campus). Details on all of the above can be found in this guide: https://bit.ly/3QER9Sv.
Learning objectives: Attendees will come away with the basic skills needed to carry out their work on the Research Computing clusters.
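As a quick way to confirm the SSH prerequisite above, connecting to Adroit from a terminal typically looks like the sketch below. Replace <YourNetID> with your Princeton NetID; the hostname shown follows the usual pattern for the Princeton clusters, so confirm it against the guide linked above, and remember that off-campus connections require the VPN.

    # From a terminal on your own machine (VPN required if off-campus)
    ssh <YourNetID>@adroit.princeton.edu

If the connection succeeds, you will land at a Linux prompt on the Adroit login node.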
Speakers
Mattie Niznik
Research Software & Programming Analyst, Princeton Institute for Computational Science and Engineering