Wilson HPC Computing Facility

Compilers:

  • /usr/bin/gcc is gcc 4.4.7
  • /usr/bin/gcc49 is gcc 4.9.2
  • gcc 4.9.2 is available at /usr/local/gcc-4.9.2/bin/gcc
  • gcc 5.1.0 is available at /usr/local/gcc-5.1.0/bin/gcc
  • gcc 6.4.0 is available at /usr/local/gcc-6.4.0/bin/gcc
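
For example, to compile with one of the newer compilers, invoke it by its full path. The following is a minimal sketch; hello.c is a placeholder source file:

[@tev ~]$ /usr/local/gcc-6.4.0/bin/gcc -O2 -o hello hello.c

Alternatively, put the desired compiler first on your PATH:

[@tev ~]$ export PATH=/usr/local/gcc-6.4.0/bin:$PATH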

MPI:

  • /usr/local/openmpi: version 1.10.17 built with gcc for Infiniband and SLURM
  • /usr/local/mvapich2: version 2.2 built with gcc for Infiniband and SLURM
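
To compile against a specific MPI stack without loading a module (see the Modules section below), you can call its compiler wrapper by full path. This sketch assumes the usual bin/ layout under the directories above; mpi_hello.c is a placeholder source file:

[@tev ~]$ /usr/local/openmpi/bin/mpicc -o mpi_hello mpi_hello.c
[@tev ~]$ /usr/local/mvapich2/bin/mpicc -o mpi_hello mpi_hello.c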

Programming Languages:

  • /usr/bin/python is python 2.6.6
  • Python 2.7.9 is available at /usr/local/python-2.7.9
  • Python 3.4.2 is available at /usr/local/python-3.4.2
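
To use one of the newer Python builds, invoke its interpreter directly. This sketch assumes the usual bin/ layout under the install prefixes above; myscript.py is a placeholder:

[@tev ~]$ /usr/local/python-2.7.9/bin/python myscript.py
[@tev ~]$ /usr/local/python-3.4.2/bin/python3 myscript.py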

Using Modules to manage your environment

Modules are used to manage different software environments on the Wilson cluster. module avail lists the modules available on the cluster, and a module can be added to your environment with module load $modulename. As an example, to compile an OpenMPI program:

[@tev ~]$ module load mpi/openmpi
[@tev ~]$ module list

Currently Loaded Modules:
  1) mpi/openmpi/3.0.1

[@tev ~]$ which mpiexec
/usr/local/openmpi-3.0.1/bin/mpiexec
[@tev ~]$ mpicc -o hello hello_world.c
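
Once the module is loaded and the program is built, it can be launched through SLURM. The following is an illustrative sketch only; the node/task counts and the amd32 partition are borrowed from the examples further down this page, and depending on how the MPI library was built you may instead launch with mpiexec inside an allocation:

[@tev ~]$ srun -N 1 -n 4 -p amd32 ./hello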

Singularity Containers

    1. What is a container?

    Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system - only libraries and settings required to make the software work are needed. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed. More information about containers is available at https://www.docker.com/what-container

    2. What is a singularity container?

    Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don't have to ask your cluster admin to install anything for you; you can put it in a Singularity container and run it. More information about Singularity and Singularity containers is available at https://www.sylabs.io/. An introductory presentation on Singularity is available here.

    Singularity containers on the Wilson cluster.

    Singularity is installed on the Wilson cluster, and several precompiled Singularity containers are available under the /home/singularity directory. Here is an example of running a Singularity container on the Wilson cluster as an interactive job.

    [@tev ~]$ srun --pty -N 1 -p amd32 bash
    srun: job 24670 queued and waiting for resources
    srun: job 24670 has been allocated resources
    [amitoj@tev0509]$ singularity shell /home/singularity/OpenHPC_1.3.3_Centos7/OpenHPC_1.3.3.simg
    Singularity: Invoking an interactive shell within container...
    
    Singularity OpenHPC_1.3.3.simg:~> cat /etc/redhat-release
    CentOS Linux release 7.4.1708 (Core)
    Singularity OpenHPC_1.3.3.simg:~> exit
    exit
    
    Here is an example of running a Singularity container on the Wilson cluster as a batch job.
    [@tev ~]$ cat runscript.sh
    #!/bin/bash
    
    singularity exec /home/singularity/OpenHPC_1.3.3_Centos7/OpenHPC_1.3.3.simg cat /etc/redhat-release
    exit
    
    [@tev ~]$ sbatch -N 1 -p amd32 runscript.sh
    Submitted batch job 24671
    
    [@tev ~]$ cat slurm-24671.out
    CentOS Linux release 7.4.1708 (Core)
    
    If you wish to use NVIDIA Management Library utilities (such as nvidia-smi), as of May 9, 2018 you will have to bind the host's /usr/bin directory into the container as follows:

    singularity exec --nv -B /usr/bin:/opt /home/singularity/ML/ubuntu1604-cuda-90-ML-tf1.8.simg bash
    
    and inside the container you may call nvidia-smi as follows:

    $ /opt/nvidia-smi [options]
    
    Here is an example:
    [@tev ~]$ srun --pty -N 1 -p gpu --gres=gpu:p100nvlink:1  bash
    [@gpu3 ~]$ singularity exec --nv -B /usr/bin:/opt /home/singularity/ML/ubuntu1604-cuda-90-ML-tf1.8.simg bash
    [@gpu3 ~]$ /opt/nvidia-smi -L
    GPU 0: Tesla P100-SXM2-16GB (UUID: GPU-ce7c94c8-f84f-5bab-d7d8-f7d1efc45432)
    [@gpu3 ~]$
    
    If you plan to modify the precompiled Singularity containers, read the instructions at https://www.sylabs.io/guides/2.5.1/user-guide/. If you plan to build your own Singularity containers, make sure you have root/superuser access to a computer with Singularity installed; a user's desktop or laptop works well for this. Easy-to-follow steps for creating your own Singularity container are available at https://www.sylabs.io/guides/2.5.1/user-guide/quick_start.html#installation. Our policy is to allow all user-compiled Singularity containers to run on the Wilson cluster (no restrictions). For containers that will run on GPU hosts, please make sure the NVIDIA driver version on the GPU host matches the one inside your container.
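
    For example, on your own machine where you have root access and Singularity 2.5 installed, a container can be built from a recipe file roughly as follows (mycontainer.simg and Singularity.recipe are placeholder names):

    $ sudo singularity build mycontainer.simg Singularity.recipe

    The resulting .simg file can then be copied to the Wilson cluster and run with singularity shell or singularity exec as shown above.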

    3. How can I add Python packages into an existing Singularity image?

    pip install --user the-python-package
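
    For example, the package can be installed from inside one of the precompiled images; it lands under your home directory, which Singularity mounts into the container by default. This sketch assumes pip is available inside the image, and the package name is a placeholder:

    [@tev ~]$ singularity exec /home/singularity/ML/ubuntu1604-cuda-90-ML-tf1.8.simg pip install --user the-python-package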
    

    4. Where can I find containers related to Machine Learning?

    To find Machine Learning containers provided by the DeepLearnPhysics project, visit their GitHub site at https://github.com/DeepLearnPhysics/larcv2-singularity, which links to Singularity container recipes.
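
    If you would rather pull a ready-made image than build from a recipe, Singularity 2.5 can also pull containers directly from Docker Hub or Singularity Hub. The image URI below is purely illustrative:

    $ singularity pull docker://tensorflow/tensorflow:1.8.0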

    If you have additional questions regarding Singularity containers on the Wilson cluster, please email us at tev-admin@fnal.gov.

    Did not find what you need?

    If there is a specific version of software that you need, please email us at tev-admin@fnal.gov; if we support the requested software, we will install it on the cluster.

Contact: Amitoj Singh
Last modified: Nov 6, 2017