  
<code bash>
srun --pty -p bdwall -A condo -t 24:00:00 /bin/bash
</code>
  
To run on non-HEP nodes, use partition bdwall with account ATLAS-HEP-group.
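For a non-interactive batch job on those nodes, a minimal submission script could look like the sketch below (the job name and payload are placeholders):

<code bash>
#!/bin/bash
#SBATCH -p bdwall
#SBATCH --account=ATLAS-HEP-group
#SBATCH -N 1
#SBATCH -t 01:00:00
#SBATCH -J myjob          # hypothetical job name

# replace with your actual workload
hostname
</code>

Submit the script with sbatch.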
  
==== Using interactive jobs ====

First, allocate a HEP node:

<code bash>
salloc -N 1 -p hepd -A condo -t 00:30:00
</code>

This allocates the node for 30 minutes, but you can request up to 7 days.
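For example, to request the seven-day maximum (Slurm accepts a days-hours:minutes:seconds time format):

<code bash>
salloc -N 1 -p hepd -A condo -t 7-00:00:00
</code>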
You can also allocate a node on Bebop:

<code bash>
salloc -N 1 -p bdwall --account=ATLAS-HEP-group -t 00:30:00
</code>

Note that salloc does not log you in to the allocated node.
Check which node you were allocated:

<code bash>
squeue -u $USER
</code>
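If you want only the node name, squeue's format option can print just the node list (a sketch; %N is the Slurm format code for the nodes assigned to a job):

<code bash>
squeue -u $USER -h -o "%N"   # -h suppresses the header line
</code>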

Now that you know the node name, log in to Bebop first, then ssh to that node.
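For example (<nodename> is the node reported by squeue):

<code bash>
ssh bebop
ssh <nodename>
</code>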

Another method is to request an interactive shell directly with srun:

<code bash>
srun --pty -p bdwall --account=ATLAS-HEP-group -t 00:30:00 /bin/bash
</code>


=== Running long interactive jobs ===

See the LCRC documentation for a fuller description: https://www.lcrc.anl.gov/for-users/using-lcrc/running-jobs/running-jobs-on-bebop/

For example, you should be able to do the following:

<code bash>
ssh bebop
screen
salloc -N 1 -p hepd -A condo -t 96:00:00
ssh <nodename>
# work on the interactive job for as long as needed
# detach from screen (different from exiting; see the screen documentation)
# log out
</code>

Later, to resume:

<code bash>
# log in to the same login node where screen was started
screen -ls      # list your screen sessions
screen -r       # reattach to the session
# continue where you left off (if the allocation is still active)
</code>

See the following for more details on screen and tmux:

https://www.gnu.org/software/screen/

https://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux/
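If you prefer tmux, the equivalent workflow is sketched below (the session name hep is arbitrary):

<code bash>
tmux new -s hep       # start a named session on the login node
# run salloc and ssh to the node as above, then detach with Ctrl-b d
tmux attach -t hep    # later, reattach from the same login node
</code>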


====== CVMFS repositories ======

The following CVMFS repositories are mounted on the Bebop and Swing compute nodes:

<code>
/cvmfs/atlas.cern.ch
/cvmfs/atlas-condb.cern.ch
/cvmfs/grid.cern.ch
/cvmfs/oasis.opensciencegrid.org
/cvmfs/sft.cern.ch
/cvmfs/geant4.cern.ch
/cvmfs/spt.opensciencegrid.org
/cvmfs/dune.opensciencegrid.org
/cvmfs/larsoft.opensciencegrid.org
/cvmfs/config-osg.opensciencegrid.org
/cvmfs/fermilab.opensciencegrid.org
/cvmfs/icarus.opensciencegrid.org
/cvmfs/sbn.opensciencegrid.org
/cvmfs/sw.hsf.org
</code>

Note that they are not mounted on the login nodes.
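On a compute node you can then use the repositories directly, for example to set up the ATLAS environment (a sketch, assuming the standard ATLASLocalRootBase layout in /cvmfs/atlas.cern.ch):

<code bash>
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
</code>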

====== Using Singularity ======
  
Running jobs on all LCRC resources with the ATLAS AnalysisBase release requires Docker/Singularity.
Yiming (Ablet) Abulaiti created a tutorial on how to do this. {{:lcrc:analysisbaselcrc.pdf|Read this}}

Here are the suggested steps for the 21.2.51 release. First, pull the Docker image:

<code bash>
docker pull atlas/analysisbase:21.2.51
</code>
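You can verify that the image was pulled:

<code bash>
docker images atlas/analysisbase
</code>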

Then convert it to a Singularity image:

<code bash>
docker run -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:/output --privileged -t --rm singularityware/docker2singularity:v2.3 atlas/analysisbase:21.2.51
</code>

Currently, the image for AnalysisBase 21.2.51 is located here:

<code>
/soft/hep/atlas.cern.ch/repo/containers/images/singularity/atlas_analysisbase_21.2.51-2018-11-04-01795eabe66c.img
</code>

You can open a shell inside this image with:

<code bash>
singularity exec /soft/hep/atlas.cern.ch/repo/containers/images/singularity/atlas_analysisbase_21.2.51-2018-11-04-01795eabe66c.img bash -l
</code>
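If your working area is not visible inside the container, you can bind host directories in with Singularity's -B option (a sketch; the bind paths are examples):

<code bash>
singularity exec -B /lcrc/project,/soft \
  /soft/hep/atlas.cern.ch/repo/containers/images/singularity/atlas_analysisbase_21.2.51-2018-11-04-01795eabe66c.img bash -l
</code>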

====== Using Singularity for cvmfsexec ======

One can also set up CVMFS on any LCRC node like this:

<code bash>
source /soft/hep/CVMFSexec/setup.sh
</code>

Then check:

<code bash>
ls /cvmfs/
</code>

You will see the mounted repositories (SL7):

<code>
atlas-condb.cern.ch/      atlas.cern.ch/  cvmfs-config.cern.ch/  sft-nightlies.cern.ch/  sw.hsf.org/
atlas-nightlies.cern.ch/  cms.cern.ch/    projects.cern.ch/      sft.cern.ch/            unpacked.cern.ch/
</code>

 --- //[[Sergei&Doug&Rui]] 2018/01/04 13:36//