To run on non-HEP nodes, use the ''bdwall'' partition with the account ''ATLAS-HEP-group''.
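For batch work the same partition and account apply. A minimal submission-script sketch (the job name, time limit, and script body here are illustrative assumptions, not site-mandated values):

<code bash>
#!/bin/bash
#SBATCH -J myjob                 # hypothetical job name
#SBATCH -p bdwall                # non-HEP partition, as above
#SBATCH -A ATLAS-HEP-group       # account from the text
#SBATCH -N 1                     # one node
#SBATCH -t 01:00:00              # 1 hour; adjust as needed

echo "Running on $(hostname)"
</code>

Submit it with ''sbatch myjob.sh''.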

==== Using interactive jobs ====

First, allocate a HEP node:

<code bash>
salloc -N 1 -p hepd -A condo -t 00:30:00
</code>

This allocates the node for 30 minutes, but you can request up to 7 days.
You can also allocate a node on Bebop's general ''bdwall'' partition:

<code bash>
salloc -N 1 -p bdwall --account=ATLAS-HEP-group -t 00:30:00
</code>
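The time limits above use the HH:MM:SS form; for multi-day requests Slurm also accepts a days-hours form, ''D-HH:MM:SS''. A sketch of a maximum-length 7-day request, reusing the ''hepd''/''condo'' settings shown earlier:

<code bash>
# Request a HEP condo node for 7 days (time format D-HH:MM:SS)
salloc -N 1 -p hepd -A condo -t 7-00:00:00
</code>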

Note that ''salloc'' only allocates the node; it does not log you in.
Check which node you were allocated:

<code bash>
squeue -u $USER
</code>
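The node name appears in the ''NODELIST'' column of the ''squeue'' output, so it can be pulled out with ''awk''. A sketch against a made-up output line (the job ID, user, and node name ''bdw-0123'' are hypothetical):

<code bash>
# Hypothetical `squeue -u $USER` output, captured here as a string
sample='JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
123456      hepd interact    alice  R       0:05      1 bdw-0123'

# The node name is the last field of the job line (line 2)
node=$(echo "$sample" | awk 'NR==2 {print $NF}')
echo "$node"    # prints bdw-0123
</code>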

Now you know the node name. Log in to Bebop first, and then ssh from the login node to the allocated node.

Another method is to use ''srun'' with a pseudo-terminal:

<code bash>
srun --pty -p bdwall
</code>
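A fuller ''srun'' invocation would typically also pass the account, a time limit, and the shell to start. A sketch, assuming the same ''bdwall''/''ATLAS-HEP-group'' settings used earlier:

<code bash>
# Start an interactive shell directly on a compute node
# (unlike salloc, this logs you in to the node)
srun --pty -p bdwall --account=ATLAS-HEP-group -N 1 -t 00:30:00 /bin/bash
</code>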

=== Running long interactive jobs ===

See more details in: https://

You should be able to do, for example:

<code>
- ssh bebop
- screen
- salloc -N 1 -p hepd -A condo -t 96:00:00
- ssh <node>
- Work on the interactive job for some amount of time...
- Disconnect from screen (different from exit; see the screen documentation)
- Log out
</code>

<code>
- Log in to the same login node the screen session was started on
- screen -ls
- Connect to the screen session
- Continue where you left off (if the allocation is still active)
</code>
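The detach/reattach steps above map onto these ''screen'' commands (the session name ''alloc'' is just an illustrative choice):

<code bash>
screen -S alloc     # start a named session before running salloc
# ... inside the session: salloc, ssh to the node, work ...
# press Ctrl-a d to detach; the session keeps running on the login node

screen -ls          # later, on the same login node: list sessions
screen -r alloc     # reattach and continue where you left off
</code>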

See the following for more details:

https://

https://

====== CVMFS repositories ======

The following CVMFS repositories are mounted on Bebop and Swing compute nodes.

<code>
/
/
/
/
/
/
/
/
/
/
/
/
/
/
</code>

Note that they are not mounted on the login nodes.
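Since the mounts exist only on compute nodes, one quick way to inspect them is to run the listing through the batch system. A sketch, assuming the ''bdwall''/''ATLAS-HEP-group'' settings from above:

<code bash>
# List the mounted CVMFS repositories from a compute node
srun -p bdwall --account=ATLAS-HEP-group -N 1 -t 00:05:00 ls /cvmfs
</code>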
====== Using Singularity ======
</code>
====== Using Singularity for cvmfsexec ======

One can also set up CVMFS on any LCRC node like this:

<code bash>
source /
</code>

Then check:

<code bash>
ls /cvmfs/
</code>

You will see the mounted directories (SL7):

<code>
atlas-condb.cern.ch/
atlas-nightlies.cern.ch/
</code>

--- //