1. What is a container?
Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system - only libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed. More information about containers is available at https://www.docker.com/what-container
2. What is a singularity container?
Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don't have to ask your cluster administrator to install anything for you: you can put it in a Singularity container and run it. More information about Singularity and Singularity containers is available at https://www.sylabs.io/. An introductory presentation on Singularity is also available.
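As a quick sketch of that workflow (the image name and tag below are illustrative, and this assumes a Singularity 2.x installation on the machine where you run the commands), a user can pull an image from Docker Hub and run a command inside it without any administrator help:

```shell
# Pull a stock Ubuntu image from Docker Hub into a local .simg file
# (image name and tag are illustrative)
singularity pull --name ubuntu1604.simg docker://ubuntu:16.04

# Run a single command inside the container's userland
singularity exec ubuntu1604.simg cat /etc/os-release
```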
Singularity containers on the Wilson cluster.
Singularity is installed on the Wilson cluster, and several precompiled Singularity containers are available under /home/singularity. Here is an example of running an interactive shell inside a Singularity container:

    [@tev ~]$ srun --pty -N 1 -p amd32 bash
    srun: job 24670 queued and waiting for resources
    srun: job 24670 has been allocated resources
    [amitoj@tev0509]$ singularity shell /home/singularity/OpenHPC_1.3.3_Centos7/OpenHPC_1.3.3.simg
    Singularity: Invoking an interactive shell within container...
    Singularity OpenHPC_1.3.3.simg:~> cat /etc/redhat-release
    CentOS Linux release 7.4.1708 (Core)
    Singularity OpenHPC_1.3.3.simg:~> exit
    exit

Here is an example of running a Singularity container on the Wilson cluster as a batch job:
    [@tev ~]$ cat runscript.sh
    #!/bin/bash
    singularity exec /home/singularity/OpenHPC_1.3.3_Centos7/OpenHPC_1.3.3.simg cat /etc/redhat-release
    exit
    [@tev ~]$ sbatch -N 1 -p amd32 runscript.sh
    Submitted batch job 24671
    [@tev ~]$ cat slurm-24671.out
    CentOS Linux release 7.4.1708 (Core)

If you wish to use NVIDIA Management Library utilities (such as nvidia-smi), as of May 9, 2018 you will have to bind the host's /usr/bin directory into the container as follows:
    singularity exec --nv -B /usr/bin:/opt /home/singularity/ML/ubuntu1604-cuda-90-ML-tf1.8.simg bash

Inside the container you may then call nvidia-smi as follows:
    $ /opt/nvidia-smi [options]

Here is an example:
    [@tev ~]$ srun --pty -N 1 -p gpu --gres=gpu:p100nvlink:1 bash
    [@gpu3 ~]$ singularity exec --nv -B /usr/bin:/opt /home/singularity/ML/ubuntu1604-cuda-90-ML-tf1.8.simg bash
    [@gpu3 ~]$ /opt/nvidia-smi -L
    GPU 0: Tesla P100-SXM2-16GB (UUID: GPU-ce7c94c8-f84f-5bab-d7d8-f7d1efc45432)
    [@gpu3 ~]$

If you plan to modify the precompiled Singularity containers, read the instructions at https://www.sylabs.io/guides/2.5.1/user-guide/. If you plan to build your own Singularity containers, make sure you have root/superuser access to a computer with Singularity installed; a user's desktop or laptop works well for this. Easy-to-follow steps for creating your own Singularity container are available at https://www.sylabs.io/guides/2.5.1/user-guide/quick_start.html#installation. Our policy is to allow all user-compiled Singularity containers to run on the Wilson cluster, with no restrictions. For containers that will run on GPU hosts, please make sure the GPU host's NVIDIA driver version matches the one inside your container.
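As a sanity check (a sketch assuming the --nv flag and the /usr/bin:/opt bind shown above), you can query the driver version on the GPU host and again inside the container to confirm the host driver libraries are visible there:

```shell
# On the GPU host: report only the NVIDIA driver version
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Inside a container started with --nv and the /usr/bin:/opt bind above,
# the same query should report the same version
/opt/nvidia-smi --query-gpu=driver_version --format=csv,noheader
```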
3. How can I add Python packages into an existing Singularity image?
Since the image itself is read-only for ordinary users, install packages into your home area with pip's --user flag from inside the container:

    pip install --user the-python-package
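For example, to add NumPy (an illustrative package) to your user site-packages with the precompiled OpenHPC image mentioned earlier, and then confirm that the container's Python can import it (this assumes the image provides python and pip):

```shell
# Install into ~/.local, which persists in your home area outside the image
singularity exec /home/singularity/OpenHPC_1.3.3_Centos7/OpenHPC_1.3.3.simg \
    pip install --user numpy

# Verify the package is importable inside the container
singularity exec /home/singularity/OpenHPC_1.3.3_Centos7/OpenHPC_1.3.3.simg \
    python -c "import numpy; print(numpy.__version__)"
```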
4. Where can I find containers related to Machine Learning?
To find Machine Learning containers provided by the DeepLearnPhysics project, visit their GitHub site at https://github.com/DeepLearnPhysics/larcv2-singularity, which links to Singularity container recipes.
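As a sketch of how such a recipe could be used (the recipe filename and image name below are illustrative; consult the project's README for the actual files), a recipe from that repository can be built into an image on a machine where you have root access and then copied to the cluster:

```shell
# Clone the recipe repository (URL from the paragraph above)
git clone https://github.com/DeepLearnPhysics/larcv2-singularity.git
cd larcv2-singularity

# Build an image from a recipe file -- requires root and Singularity >= 2.4
# ("Singularity" is the conventional recipe filename; adjust as needed)
sudo singularity build larcv2.simg Singularity

# Copy the resulting image to your Wilson cluster home area and run it there
scp larcv2.simg <your-wilson-login-host>:
```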
If you still have additional questions regarding Singularity containers on the Wilson cluster, please email us at firstname.lastname@example.org.
Did not find what you need?
If there is a specific software version that you need, please email us at email@example.com; if we support the requested software, we will install it on the cluster.
Contact: Amitoj Singh
Last modified: Nov 6, 2017