The description here uses the bash shell. Please go to https://accounts.lcrc.anl.gov/account.php to change your shell.
  
At this very moment (Jan 2024), this resource cannot replace the ATLAS cluster and has several features:
  
  * LCRC resources are under maintenance on Monday (each week?)
heplogin2.lcrc.anl.gov  # login directly to hepd-0004
</code>
Each node has 72 CPUs and a lot of memory. After login, you will end up in a rather small "home" space, which has a limit of 500 GB.

You cannot log in to these servers directly (since Aug 2024). First log in to:
  
<code>
ssh -i $HOME/.ssh/YOURKEY [USER]@bebop.lcrc.anl.gov -X
</code>
  
Then ssh to heplogin1 or heplogin2:
  
<code>
ssh -i $HOME/.ssh/YOURKEY heplogin1.lcrc.anl.gov -X
</code>
  
or
  
<code>
ssh -i $HOME/.ssh/YOURKEY heplogin2.lcrc.anl.gov -X
</code>

After login you end up in your home directory:

<code>
/home/[USER]
</code>
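If you use this route often, the two hops can be combined into a single command with OpenSSH's -J (jump host) option. This is only a sketch: it assumes a reasonably recent OpenSSH client and that the same key is accepted on both hosts; replace YOURKEY and [USER] as before.

<code bash>
# one-step login to heplogin1 via the bebop jump host
ssh -i $HOME/.ssh/YOURKEY -J [USER]@bebop.lcrc.anl.gov [USER]@heplogin1.lcrc.anl.gov -X
</code>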

You can use this location to keep code etc. (but not data):

<code>
/lcrc/group/ATLAS/users/[USER]
</code>
  
  
  
<code bash>
srun --pty -p bdwall -A condo -t 24:00:00 /bin/bash
</code>
  
  
<code bash>
srun --pty -p hepd -t 24:00:00 /bin/bash
module load StdEnv            # important to avoid a SLURM bug
</code>

Then you can set up ROOT etc. with "source /soft/hep/setup.sh".
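As a quick sanity check (a sketch; it assumes the setup script puts ROOT on your PATH), you can verify the environment afterwards:

<code bash>
source /soft/hep/setup.sh     # HEP software setup mentioned above

# verify that ROOT is now visible
which root
root-config --version
</code>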
  
SLURM is used as the batch system. It does whole-node scheduling (not "core scheduling")! If you run a single-core job, your allocation will still be charged for all 36 cores of the node.

To run on non-HEP nodes, use the bdwall partition with the account ATLAS-HEP-group.
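For non-interactive work, a minimal batch-script sketch is shown below. The partition, account, and core count come from the text above; the job name, time limit, and the analysis command are placeholders. Since scheduling is per node, the sketch tries to keep all 36 cores busy.

<code bash>
#!/bin/bash
#SBATCH -J my_atlas_job                 # placeholder job name
#SBATCH -p bdwall                       # partition (see above)
#SBATCH --account=ATLAS-HEP-group       # account (see above)
#SBATCH -N 1                            # whole-node scheduling
#SBATCH --ntasks-per-node=36            # 36 cores per node
#SBATCH -t 04:00:00                     # placeholder time limit

# keep the whole node busy with 36 independent single-core tasks
# ("./my_analysis" and the input files are placeholders)
for i in $(seq 36); do
    ./my_analysis input_${i}.root > log_${i}.txt &
done
wait
</code>

Submit it with "sbatch myjob.sh" and monitor it with "squeue -u $USER".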
  
==== Using interactive jobs ====


First, allocate a HEP node:

<code bash>
salloc -N 1 -p hepd -A condo -t 00:30:00
</code>

This allocates the node for 30 minutes, but you can allocate it for up to 7 days.
You can also allocate a node on bebop:

<code bash>
salloc -N 1 -p bdwall --account=ATLAS-HEP-group -t 00:30:00
</code>

Note that salloc does not log you in to the allocated node!
Check which node you were allocated:

<code bash>
squeue -u $USER
</code>


Once you know the node name, log in to bebop (first!) and then ssh to that node.
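Putting this together, a typical session could look like this (the <nodename> placeholder stands for whatever node squeue reports):

<code bash>
# on a bebop login node
salloc -N 1 -p hepd -A condo -t 00:30:00   # request the allocation
squeue -u $USER                            # the NODELIST column shows the node name
ssh <nodename>                             # then ssh to that node and work there
</code>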

Another method is to use:

<code bash>
srun --pty -p bdwall --account=ATLAS-HEP-group -t 00:30:00 /bin/bash
</code>


=== Running long interactive jobs ===

See the full description at https://www.lcrc.anl.gov/for-users/using-lcrc/running-jobs/running-jobs-on-bebop/

For example, you should be able to do:

<code>
- ssh bebop
- screen
- salloc -N 1 -p hepd -A condo -t 96:00:00
- ssh <nodename>
- Work on the interactive job for as long as needed...
- Disconnect from screen (different from exit, see the documentation)
- Logout
</code>

<code>
- Login to the same login node where screen was started
- screen -ls
- Connect to the screen session
- Continue where you left off (if the allocation is still active)
</code>
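As a concrete illustration of the screen part (the session name "hepjob" and <nodename> are just examples):

<code bash>
screen -S hepjob                            # start a named screen session on the login node
salloc -N 1 -p hepd -A condo -t 96:00:00    # allocate the node inside screen
ssh <nodename>                              # work interactively on the allocated node
# detach from screen with Ctrl-a d; the allocation keeps running

# later, on the same login node:
screen -ls                                  # list your screen sessions
screen -r hepjob                            # reattach and continue
</code>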

See the following for more details:

https://www.gnu.org/software/screen/

https://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux/


====== CVMFS repositories ======
The following CVMFS repositories are mounted on the Bebop and Swing compute nodes:

<code>
/cvmfs/atlas.cern.ch
/cvmfs/atlas-condb.cern.ch
/cvmfs/grid.cern.ch
/cvmfs/oasis.opensciencegrid.org
/cvmfs/sft.cern.ch
/cvmfs/geant4.cern.ch
/cvmfs/spt.opensciencegrid.org
/cvmfs/dune.opensciencegrid.org
/cvmfs/larsoft.opensciencegrid.org
/cvmfs/config-osg.opensciencegrid.org
/cvmfs/fermilab.opensciencegrid.org
/cvmfs/icarus.opensciencegrid.org
/cvmfs/sbn.opensciencegrid.org
/cvmfs/sw.hsf.org
</code>

Note that they are not mounted on the login nodes.
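For example, once /cvmfs is visible on a compute node, you can set up the standard ATLAS environment. This is a sketch using the usual ATLASLocalRootBase location on atlas.cern.ch; verify the path on your node before relying on it.

<code bash>
# on a Bebop/Swing compute node with /cvmfs mounted
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
</code>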

====== Using Singularity ======


Running jobs on LCRC resources with the ATLAS AnalysisBase releases requires Docker/Singularity.
Yiming (Ablet) Abulaiti created a tutorial on how to do this. {{:lcrc:analysisbaselcrc.pdf|Read this}}

Here are the suggested steps for the 21.2.51 release.

<code>
docker pull atlas/analysisbase:21.2.51
</code>

Then make a Singularity image:
<code bash>
docker run -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:/output --privileged -t --rm singularityware/docker2singularity:v2.3 atlas/analysisbase:21.2.51
</code>


Currently, the image for AnalysisBase 21.2.51 is located here:

<code>
/soft/hep/atlas.cern.ch/repo/containers/images/singularity/atlas_analysisbase_21.2.51-2018-11-04-01795eabe66c.img
</code>

You can go inside this image as:

<code>
singularity exec /soft/hep/atlas.cern.ch/repo/containers/images/singularity/atlas_analysisbase_21.2.51-2018-11-04-01795eabe66c.img bash -l
</code>
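To do useful work inside the container you will usually want to bind your work area into it. The sketch below assumes your code lives in the ATLAS users area and that the image ships the conventional "/release_setup.sh" script of ATLAS AnalysisBase images; check both and adjust the bind path if needed.

<code bash>
IMG=/soft/hep/atlas.cern.ch/repo/containers/images/singularity/atlas_analysisbase_21.2.51-2018-11-04-01795eabe66c.img

# bind your work area into the container and start a login shell
singularity exec -B /lcrc/group/ATLAS/users/$USER $IMG bash -l

# then, inside the container (assumption: the image provides this script):
#   source /release_setup.sh
</code>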

====== Using Singularity for cvmfsexec ======

One can also set up CVMFS on any LCRC node like this:
<code>
source /soft/hep/CVMFSexec/setup.sh
</code>

Then check:
<code>
ls /cvmfs/
</code>

You will see the mounted directories (SL7):
<code>
atlas-condb.cern.ch/      atlas.cern.ch/  cvmfs-config.cern.ch/  sft-nightlies.cern.ch/  sw.hsf.org/
atlas-nightlies.cern.ch/  cms.cern.ch/    projects.cern.ch/      sft.cern.ch/            unpacked.cern.ch/
</code>
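With sft.cern.ch mounted you can, for instance, pick up an LCG software stack. The view name and platform below are only examples of the naming scheme; list the views directory to see what is actually available.

<code bash>
ls /cvmfs/sft.cern.ch/lcg/views/            # see which LCG releases are available
# example view; adjust the release and platform to one that exists
source /cvmfs/sft.cern.ch/lcg/views/LCG_104/x86_64-centos7-gcc11-opt/setup.sh
root --version
</code>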


 --- //[[Sergei&Doug&Rui]] 2018/01/04 13:36//