Introduction to HEP/LCRC resources
In order to use HEP/LCRC computing resources, you will need an account in the ATLAS-g group. Look at the page: https://accounts.lcrc.anl.gov/ You will need to ask to join the “ATLAS-HEP-group” by providing your ANL user name. Users are usually placed into the “g-ATLAS” group.
Please also look at the description https://collab.cels.anl.gov/display/HEPATLASLCRC
The description here uses the bash shell. Go to https://accounts.lcrc.anl.gov/account.php to change your shell if needed.
At this moment (Jan 2018), this resource cannot replace the ATLAS cluster and has several limitations:
- LCRC resources are under maintenance on Monday (each week?)
- the HOME directory is small (100 GB), and you have to use some tricks to deal with it (not well tested)
- logins are done using ssh keys (changing a key often takes a day with LCRC support)
- the file systems cannot be mounted on desktops
Available resources
The following interactive nodes can be used:
heplogin.lcrc.anl.gov   # login at random to either hepd-0003 or hepd-0004
heplogin1.lcrc.anl.gov  # login directly to hepd-0003
heplogin2.lcrc.anl.gov  # login directly to hepd-0004
Each node has 72 CPUs and a large amount of memory. After login, you will end up in a rather small “home” space, which has a limit of 500 GB:
/home/[USER]
You can use this location to keep code etc. (but not data):
/lcrc/group/ATLAS/users/[USER]
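One common trick for living with the small home quota is to keep large directories in the group area and symlink them from home. The sketch below illustrates the idea with throwaway directories (the directory name `bigdata` is just an example; on LCRC you would use your real `/home/[USER]` and `/lcrc/group/ATLAS/users/[USER]` paths):

```shell
#!/bin/sh
# Stand-ins for the real LCRC paths, so this sketch can run anywhere:
HOME_DIR=$(mktemp -d)    # stands in for /home/[USER]
GROUP_DIR=$(mktemp -d)   # stands in for /lcrc/group/ATLAS/users/[USER]

# Create the real directory in the large group area and symlink it from home:
mkdir -p "$GROUP_DIR/bigdata"
ln -s "$GROUP_DIR/bigdata" "$HOME_DIR/bigdata"

# Files written "in home" actually land in the group area:
echo test > "$HOME_DIR/bigdata/file.txt"
cat "$GROUP_DIR/bigdata/file.txt"
```

With this layout, tools that expect paths under home keep working while the bytes are counted against the larger group quota.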
Updating the password
LCRC is not a user-friendly system when it comes to logins and passwords. Changing a password is a “process”. After you have gone through all the steps of changing your ANL domain password, you will need to create an ssh public key, upload it to your account at https://accounts.lcrc.anl.gov/account.php, and then send email to [email protected] asking them to update the public key on your account. Since this involves manual work by LCRC staff, do not expect to be able to log in to LCRC the same day.
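Generating the key pair itself is standard `ssh-keygen` usage; a minimal sketch (the file name `id_rsa_lcrc` is just an example, and a real passphrase is recommended in practice):

```shell
# Create ~/.ssh if it does not exist yet, with the permissions sshd expects
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"

# Generate a 4096-bit RSA key pair into a dedicated file
# (-N '' means no passphrase, for a non-interactive example only)
ssh-keygen -t rsa -b 4096 -N '' -q -f "$HOME/.ssh/id_rsa_lcrc"

# This is the public key to paste into https://accounts.lcrc.anl.gov/account.php
cat "$HOME/.ssh/id_rsa_lcrc.pub"
```

The private key (`id_rsa_lcrc`, without `.pub`) stays on your machine and is never uploaded.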
Setting up local HEP software
You can set up some pre-defined HEP software with:
source /soft/hep/hep_setup.sh
It will set up gcc 7.1, ROOT, FastJet, Pythia8, LHAPDF, etc. Look at the directory “/soft/hep/”. Note that it sets up ROOT 6.12 with all plugins included. PyROOT uses Python 2.7 compiled with gcc 7.1.
If you need to set up the new TeX Live 2016, use:
source /soft/hep/hep_texlive.sh
Setting up LCRC software
You can setup more software packages using Lmod. Look at https://www.lcrc.anl.gov/for-users/software/.
Setting up ATLAS software
You can set up the ATLAS software with:
source /soft/hep/setupATLAS
or if you need RUCIO and the grid, use:
source /soft/hep/setupATLASg
In this case you need to put the grid certificate as described in Grid certificate.
Data storages
Significant data from the grid should be put to the following locations:
/lcrc/project/ATLAS/data/                # for the group
/lcrc/group/ATLAS/atlasfs/local/[USER]   # for users
Allocating interactive nodes
If you need to run a job interactively, you can allocate a node to do this. Try this command:
srun --pty -p bdwall -t 24:00:00 /bin/bash
It will allocate a new node (running bash) for 24 hours. These nodes use the Xeon(R) CPU E5-2695 v4 @ 2.10GHz (36 CPUs per node). More info about this can be found in Running jobs on BeBob. Note that you should keep the terminal open while your jobs are running.
When you use the bdwall partition, your jobs will be accounted against the default CPU allocation (100k per 4 months). Therefore, when possible, please use the “hepd” partition. See the next section.
Running batch jobs on HEP resources
srun --pty -p hepd -t 24:00:00 /bin/bash
module load StdEnv   # important to avoid a slurm bug
Then you can set up ROOT etc. with “source /soft/hep/setup.sh”.
SLURM is used as the batch system. It schedules whole nodes, not individual cores! If you run a single-core job, your allocation will still be charged for all 36 cores of the node. Please see this page for details on how to use SLURM on LCRC: http://www.lcrc.anl.gov/for-users/using-lcrc/running-jobs/running-jobs-on-bebop/
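Because scheduling is per node, a single-core payload wastes the other 35 cores you are being charged for. A simple pattern is to background one process per core and wait for them all; the sketch below uses a toy `echo` payload and 4 tasks (on a real node you would run your analysis executable and loop up to 36):

```shell
#!/bin/sh
# Toy stand-in for "one task per core": launch NCORES background
# processes, then block until every one of them has finished.
NCORES=4   # use 36 on a real hepd/bdwall node
for i in $(seq 1 "$NCORES"); do
    ( echo "task $i done" > "task_$i.log" ) &   # one process per core
done
wait    # do not exit (and release the node) before the tasks finish
ls task_*.log | wc -l
```

The final `wait` matters: if the batch script exits early, SLURM kills the remaining background tasks.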
The partition for the HEP nodes is hepd.
To run on non-HEP nodes, use the bdwall partition with the account ATLAS-HEP-group.
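For non-interactive work, the same partition and account choices can go into a SLURM batch script submitted with `sbatch`. This is only a sketch: the job name, time limit, and the `./myjob.sh` payload are hypothetical placeholders.

```shell
#!/bin/bash
#SBATCH --job-name=myjob            # example job name
#SBATCH --partition=hepd            # HEP-owned nodes; prefer this when possible
##SBATCH --partition=bdwall         # alternative: general nodes (counts against
##SBATCH --account=ATLAS-HEP-group  # the default allocation; account needed here)
#SBATCH --time=24:00:00
#SBATCH --nodes=1

module load StdEnv                  # important to avoid a slurm bug
source /soft/hep/hep_setup.sh       # gcc, ROOT, FastJet, Pythia8, LHAPDF, ...
./myjob.sh                          # your payload (hypothetical)
```

Remember that scheduling is per node, so the payload should keep all 36 cores busy.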
Using Singularity
Running jobs on all LCRC resources with the ATLAS analysis base requires Docker/Singularity. Yiming (Ablet) Abulaiti created a tutorial on how to do this. Read this
Here are the suggested steps for the 21.2.51 release.
docker pull atlas/analysisbase:21.2.51
Then make a Singularity image:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:/output --privileged -t --rm singularityware/docker2singularity:v2.3 atlas/analysisbase:21.2.51
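docker2singularity writes the resulting image into the mounted output directory (the current directory above) under a generated name. You can then run the release through Singularity; the image file name below is illustrative, since the actual name includes a creation date stamp:

```shell
# Start an interactive shell inside the AnalysisBase release
# (replace the image name with the file docker2singularity produced)
singularity exec atlas-analysisbase-21.2.51.img /bin/bash
```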
— Sergei&Doug 2018/01/04 13:36