===== Interactive nodes and desktops =====
  
To log in to the ATLAS clusters (not LCRC!), use these login nodes:
  
<code>
alogin1.hep.anl.gov
alogin2.hep.anl.gov
</code>
  
Use your ANL domain user name and password. All other computers can be accessed from these login nodes; they require the same ANL domain credentials.
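For example, a login session can be opened like this ("myname" is a placeholder, not a real account):

<code bash>
# replace "myname" with your ANL domain user name
ssh myname@alogin1.hep.anl.gov
</code>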
  
  
  
===== Data storage =====
  
<code bash>
/data/atlasfs02/c/users/
</code>
  
===== Setting up some software =====
  
You can set up some software (ROOT, FASTJET, PYTHIA, a newer LaTeX) like this:
  
  
<code bash>
source /users/admin/share/sl7/setup.sh
</code>
This setup uses the native Python 2 from SL7.
  
Check this by running "root" or "condor_status". You can also set the environment variables automatically: create a '.bash_profile' file if it does not exist yet, and put in these lines:
  
<code bash>
alias ls='ls -F --color=auto'
source /users/admin/share/sl7/setup.sh
</code>
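The check mentioned above can be done directly from the command line (this assumes ROOT ships its usual "root-config" helper and that the node runs Condor):

<code bash>
# both commands should work once the setup script has been sourced
root-config --version   # print the ROOT version provided by the setup
condor_status           # summary of Condor slots, if Condor is configured
</code>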
  
At this point, no ATLAS software is installed. Note that the same setup script also sets up FASTJET and LHAPDF. Check this with:
  
<code bash>
echo $FASTJET
echo $LHAPDF
echo $PROMC
</code>
  
  
== Python3 from LCG ==
  
You can also set up basic programs using Python 3. Create a setup file "setup.sh" like this:
<code bash>
#!/bin/bash
echo "Setup ROOT, PyROOT, tensorflow"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
lsetup "views LCG_104 x86_64-centos7-gcc11-opt"
</code>
  
Then you can set up many LCG packages with:
  
<code>
source setup.sh
</code>
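If the LCG view provides PyROOT (as the setup message above suggests), a quick check of the Python 3 environment might look like this; the exact package list depends on the LCG_104 view:

<code bash>
# assumes the LCG_104 view has been set up via "source setup.sh"
python3 -c "import ROOT; print(ROOT.gROOT.GetVersion())"
</code>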
  
Please read the section [[asc:workbook_data|Working with data at ANL Tier3]] for details on how to store and process your data.
  
Before compiling any package, please check this directory. Note that you can also use "localSetupSFT" from cvmfs:
  
<code bash>
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
localSetupSFT --help
</code>
This prints the available (non-ATLAS) software.
  
===== Setting up ATLAS Software =====
  
To set up ATLAS software, copy and save these lines in a file, say "set.sh":
  
<code bash setup.sh>
  
 Then "source set.sh" every time you login on atlas16-28 computers. Note: you should use the bash shell for this setup. If you are happy with this, one can put this line in .bash_profile or .bashrc files (if you are using bash shell and want to set up the ATLAS staff every time you login). You can change the ATLAS release and testarea by changing "AVERS" and "TEST_AREA" variables. Then "source set.sh" every time you login on atlas16-28 computers. Note: you should use the bash shell for this setup. If you are happy with this, one can put this line in .bash_profile or .bashrc files (if you are using bash shell and want to set up the ATLAS staff every time you login). You can change the ATLAS release and testarea by changing "AVERS" and "TEST_AREA" variables.
  
  
Read more [[https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase | about ATLASLocalRootBase]].
  
===== COOL database =====
===== Cleaning environmental variables =====
  
To remove the ATLAS release and all associated environment variables, and set only the ASC ANL environment, use:
<code>
source /share/grid/app/asc_app/asc_rel/1.0/setup-script/set_asc.sh
</code>
  
asc/workbook_introduction.1392392403.txt.gz · Last modified: 2014/02/14 15:40 by asc