To run on non-HEP nodes, use the bdwall partition with account ATLAS-HEP-group.

==== Using interactive jobs ====

First, allocate a HEP node:

<code bash>
salloc -N 1 -p hepd -A condo -t 00:30:00
</code>

This requests the node for 30 minutes; allocations of up to 7 days are allowed.
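
For example, to request the maximum (a minimal sketch; 7-00:00:00 is the standard Slurm days-hours:minutes:seconds syntax):

<code bash>
# request a HEP node for the 7-day maximum walltime
salloc -N 1 -p hepd -A condo -t 7-00:00:00
</code>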

You can also allocate a node on Bebop:

<code bash>
salloc -N 1 -p bdwall --account=ATLAS-HEP-group -t 00:30:00
</code>

Note that salloc does not log you in to the allocated node.
Check which node was assigned to you:

<code bash>
squeue -u $USER
</code>
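
If you want just the node name, standard squeue format options can print it directly (a small sketch; -h suppresses the header, %N prints the allocated node list):

<code bash>
# print only the node names of your running jobs
squeue -u $USER -h -o "%N"
</code>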

Once you know the node name, log in to Bebop first, then ssh from the login node to the allocated node, as sketched below.
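
A minimal sketch (bdw-0123 is a hypothetical node name; substitute the one squeue reports):

<code bash>
ssh bebop      # first, log in to a Bebop login node
ssh bdw-0123   # then hop to your allocated compute node (hypothetical name)
</code>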

Another method is to use srun, which opens an interactive shell directly on the allocated node:

<code bash>
srun --pty -p bdwall --account=ATLAS-HEP-group -t 00:30:00 /bin/bash
</code>

=== Running long interactive jobs ===

See the full description at https://www.lcrc.anl.gov/for-users/using-lcrc/running-jobs/running-jobs-on-bebop/

For example, you should be able to do the following:

<code bash>
ssh bebop                                  # log in to a Bebop login node
screen                                     # start a screen session
salloc -N 1 -p hepd -A condo -t 96:00:00   # allocate a node for 96 hours
ssh <nodename>                             # connect to the allocated node
# ... work on your interactive job for as long as needed ...
# detach from screen (different from exit; see the screen documentation)
# log out
</code>

To resume later:

<code bash>
# log in to the same login node where screen was started
screen -ls   # list your screen sessions
screen -r    # reattach to the session
# continue where you left off (if the allocation is still active)
</code>

See the following for more details:

https://www.gnu.org/software/screen/

https://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux/

====== CVMFS repositories ======

The following CVMFS repositories are mounted on Bebop and Swing compute nodes:

<code>
/cvmfs/atlas.cern.ch
/cvmfs/atlas-condb.cern.ch
/cvmfs/grid.cern.ch
/cvmfs/oasis.opensciencegrid.org
/cvmfs/sft.cern.ch
/cvmfs/geant4.cern.ch
/cvmfs/spt.opensciencegrid.org
/cvmfs/dune.opensciencegrid.org
/cvmfs/larsoft.opensciencegrid.org
/cvmfs/config-osg.opensciencegrid.org
/cvmfs/fermilab.opensciencegrid.org
/cvmfs/icarus.opensciencegrid.org
/cvmfs/sbn.opensciencegrid.org
/cvmfs/sw.hsf.org
</code>

Note that these repositories are not mounted on login nodes.
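
To confirm access from a compute node, you can run a quick check through Slurm (a minimal sketch reusing the partition and account from above):

<code bash>
# list a CVMFS repository from a compute node; this would fail on a login node
srun -N 1 -p bdwall --account=ATLAS-HEP-group -t 00:05:00 ls /cvmfs/sft.cern.ch
</code>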

====== Using Singularity ======

====== Using Singularity for cvmfsexec ======

One can also set up CVMFS on any LCRC node like this:

<code>
source /soft/hep/CVMFSexec/setup.sh
</code>

Then check:

<code>
ls /cvmfs/
</code>
 + 
 +You will see the mounted directory (SL7): 
 +<code> 
 +atlas-condb.cern.ch/      atlas.cern.ch/  cvmfs-config.cern.ch/  sft-nightlies.cern.ch/  sw.hsf.org/ 
 +atlas-nightlies.cern.ch/  cms.cern.ch/    projects.cern.ch/      sft.cern.ch/            unpacked.cern.ch/ 
 +</code> 
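
After the mount, software from these repositories can be browsed as usual; for example (a sketch; the lcg directory layout under sft.cern.ch is an assumption, adjust to what you find):

<code bash>
source /soft/hep/CVMFSexec/setup.sh
ls /cvmfs/sft.cern.ch/lcg   # browse LCG software releases (assumed layout)
</code>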

 --- //[[Sergei&Doug&Rui]] 2018/01/04 13:36//