  
  
===== Setting up some software =====
  
  
You can set up some software (ROOT, FASTJET, PYTHIA, a recent LaTeX) like this:
  
<code bash>
source /users/admin/share/sl7/setup.sh
</code>
This setup uses the native Python 2 that ships with SL7.
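
To confirm that the environment is active, you can query the tools it puts on your PATH (a minimal check; root-config and fastjet-config are the standard helper scripts shipped with ROOT and FASTJET, assuming the setup script exposes them):

<code bash>
# print the versions picked up from the setup script
root-config --version      # ROOT
fastjet-config --version   # FASTJET
python --version           # the native SL7 Python 2
</code>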
  
  
  
You can also set up the basic programs using Python 3. Create a setup file "setup.sh" like this:
<code bash>
#!/bin/bash
# set up ROOT, PyROOT and TensorFlow from the LCG release area on CVMFS
echo "Setting up ROOT, PyROOT, tensorflow"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
lsetup "views LCG_104 x86_64-centos7-gcc11-opt"
</code>
  
Then you can set up many LCG packages with:

<code>
source setup.sh
</code>
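
After sourcing the file, a quick way to verify the LCG view is to import the advertised packages from Python 3 (a minimal sanity check; the versions printed depend on the LCG_104 release):

<code bash>
# both imports come from the LCG_104 view set up above
python3 -c 'import ROOT; print("ROOT", ROOT.gROOT.GetVersion())'
python3 -c 'import tensorflow as tf; print("TensorFlow", tf.__version__)'
</code>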
  
Please read the section [[asc:workbook_data|Working with data at ANL Tier3]] for details on how to store and process your data.
  
  
  
Check the setup by running "root" or "condor_status". You can also set the environment variables automatically: create a '.bash_profile' file if you do not have one yet, and add these lines:
<code bash>
    alias ls='ls -F --color=auto'
</code>
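
If you also want the software environment loaded at every login, you can source the SL7 setup script from the same file (a sketch, assuming the /users/admin/share/sl7/setup.sh script shown above):

<code bash>
# ~/.bash_profile: executed by login shells
alias ls='ls -F --color=auto'
# load the software environment (ROOT, FASTJET, PYTHIA, LaTeX) automatically
source /users/admin/share/sl7/setup.sh
</code>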
  