The description here uses the bash shell. Please go to https://
At this very moment (Jan 2024), this resource cannot replace the ATLAS cluster and has several caveats:
  * LCRC resources are under maintenance on Mondays (each week?)
Each node has 72 CPUs and a lot of memory. After login, you will end up in a rather small "

You cannot log in to these servers directly (since Aug 2024). First log in to:

<code>
ssh -i $HOME/
</code>

then ssh to hep1/hep2:

<code>
ssh -i $HOME/
</code>

or

<code>
ssh -i $HOME/
</code>

<code>
/
</code>
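
Since Aug 2024 the login is therefore a two-hop ssh. OpenSSH's jump-host option can do it in one command; a minimal sketch, with the key path and the exact host names (which are cut off above) left as placeholders:

<code bash>
# one-step login to a HEP node through the LCRC login node (names and key are placeholders)
ssh -i $HOME/.ssh/<key> -J <user>@<lcrc-login-node> <user>@hep1
</code>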
<code bash>
srun --pty -p bdwall
</code>
To run on non-HEP nodes, use the bdwall partition with the account ATLAS-HEP-group.
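
For batch work on those nodes, the same partition and account go into the submission script. A minimal sketch (job name, time limit, and the command are placeholders, not taken from this page):

<code bash>
#!/bin/bash
#SBATCH --job-name=atlas_test        # placeholder name
#SBATCH --partition=bdwall           # non-HEP partition mentioned above
#SBATCH --account=ATLAS-HEP-group    # account mentioned above
#SBATCH --nodes=1
#SBATCH --time=01:00:00              # placeholder time limit

echo "Running on $(hostname)"        # replace with your actual commands
</code>

Submit it with sbatch and watch it with squeue -u $USER.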

==== Using interactive jobs ====

First, allocate a HEP node:

<code bash>
salloc -N 1 -p hepd -A condo -t 00:30:00
</code>

This allocates the node for 30 minutes, but you can allocate it for up to 7 days.
You can also allocate a node on bebop:

<code bash>
salloc -N 1 -p bdwall --account=ATLAS-HEP-group -t 00:30:00
</code>
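
Slurm time limits for multi-day allocations use the days-hours:minutes:seconds format; for example, a sketch of a 7-day request on the HEP partition (7 days being the maximum mentioned above):

<code bash>
salloc -N 1 -p hepd -A condo -t 7-00:00:00
</code>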

This does not log you in!
Check which node you were allocated:

<code bash>
squeue -u user
</code>

Now you know the node. Then log in to bebop (first!) and then ssh to this node.
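
That lookup can be scripted; a small sketch using standard squeue output options (run from a bebop login node):

<code bash>
# grab the node name of your allocation and ssh to it
NODE=$(squeue -u $USER -h -o "%N" | head -n 1)
ssh $NODE
</code>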

Another method is to use

<code bash>
srun --pty -p bdwall
</code>
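
The srun line above is truncated; a complete interactive variant might look like the sketch below, where the time limit and shell are assumptions and the partition/account are the ones used elsewhere on this page:

<code bash>
# interactive shell on one bdwall node for two hours
srun --pty -p bdwall --account=ATLAS-HEP-group -N 1 -t 02:00:00 /bin/bash
</code>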


=== Running long interactive jobs ===

See more description in: https://

You should be able to do, for example:

<code>
- ssh bebop
- screen
- salloc -N 1 -p hepd -A condo -t 96:00:00
- ssh <node>
- Work on the interactive job for x amount of time...
- Disconnect from screen (different from exit, see the documentation)
- Logout
</code>

To reconnect later:

<code>
- Login to the same login node screen was started on
- screen -ls
- Connect to the screen session
- Continue where you left off (if the allocation is still active)
</code>
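
For the reconnect steps, a short sketch (the session name is only an example of what screen -ls prints):

<code bash>
screen -ls                    # list detached sessions on this login node
screen -r 12345.pts-0.login   # reattach to the session reported above
</code>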

See below for more details:

https://

https://


====== CVMFS repositories ======
Mounted CVMFS repositories on Bebop and Swing computing nodes:

<code>
/cvmfs/
</code>

Note that they are not mounted on login nodes.
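
With the ATLAS repository mounted on the computing nodes, the usual ATLASLocalRootBase setup should work there; a minimal sketch, assuming /cvmfs/atlas.cern.ch is among the mounted repositories:

<code bash>
# run on a computing node, since /cvmfs is not mounted on login nodes
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
</code>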
====== Using Singularity ======

<code bash>
docker pull atlas/
</code>

Then make a Singularity image:
<code bash>
docker run -v /
</code>
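
The conversion command above is cut off; one common pattern is the docker2singularity helper container, sketched below with the output directory and ATLAS image tag as placeholders:

<code bash>
docker run -v /var/run/docker.sock:/var/run/docker.sock \
           -v /path/to/output:/output \
           --privileged -t --rm \
           quay.io/singularity/docker2singularity \
           atlas/<image>:<tag>
</code>

Newer Singularity/Apptainer releases can also build a SIF file directly from a docker:// or docker-daemon source with singularity build.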


Currently, the image for AtlasBase 2.2.51 is located here:

<code>
/
</code>

You can go inside this image as:

<code>
singularity exec /
</code>
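
The exec line is truncated above; a minimal sketch with a placeholder image path:

<code bash>
# start an interactive shell inside the image
singularity exec /path/to/atlas_image.sif /bin/bash
</code>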

====== Using Singularity for cvmfsexec ======

One can also set up CVMFS on any LCRC node like this:
<code>
source /
</code>

Then check:
<code>
ls /cvmfs/
</code>

You will see the mounted directories (SL7):
<code>
atlas-condb.cern.ch/
atlas-nightlies.cern.ch/
</code>