What is Slurm and is it Still Relevant for Modern Workloads?
How to Run on the GPUs – High Performance Computing Facility - UMBC
The GPU Clusterware Project
astrobites on Twitter: "Soon, will have both Slurm jobs running continuously and Kubernetes jobs running containerized jobs! The divide will be set by user demand #AAS236 https://t.co/IdTt9X0fqN" / Twitter
Deploy an Auto-Scaling HPC Cluster with Slurm
Technical Support
How do I know which GPUs a job was allocated using SLURM? - Stack Overflow
GPU Cluster Orchestration: Slurm vs Kubernetes [Talk IT Seminar #55, Leaders Systems] - YouTube
Multi-Node GPU Workloads with Unprivileged Containers on Slurm
How to work – Platform "HybriLIT"
Docker DGX-1
[PDF] Partnership for Advanced Computing in Europe: Topologically Aware Job Scheduling for SLURM | Semantic Scholar
Open MPI / srun vs sbatch : r/SLURM
Using SLURM scheduler on Lehigh's HPC clusters
Deploying Rich Cluster API on DGX for Multi-User Sharing | NVIDIA Technical Blog
Defining User Restrictions for GPUs | by Amir Erfan Eshratifar | Towards Data Science
PyTorch on the HPC Clusters | Princeton Research Computing
Introduction to SLURM
Deploying a Burstable and Event-driven HPC Cluster on AWS Using SLURM, Part 1 | AWS Compute Blog