[[asc:workbook|< back]]

The interactive login nodes are:

  * alogin1.hep.anl.gov
  * alogin2.hep.anl.gov

Use your ANL domain user name and password to log in. All other computers can be accessed from the nodes above; they also require the ANL domain user name and password.

===== Data storage =====

Every interactive node mounts this central storage:

<code>
/data/atlasfs02/c/users/
</code>

===== Setting up some software =====

You can set up some software (ROOT, FASTJET, PYTHIA, a new LaTeX) like this:

<code bash>
source /users/admin/share/sl7/setup.sh
</code>

This setup uses the native Python 2 from SL7. Check it by running "root" or "condor_status".

You can also set these environment variables automatically at login. Create a '.bash_profile' file, if you do not have one yet, and put the "source" line above in it, together with any convenient aliases, for example:

<code bash>
alias ls='ls -F --color=auto'
</code>

This will set up the recent version of ROOT with the native Python 2 from SL7. Alternatively, put the above lines in the file '.bashrc'. At this point, no ATLAS software is installed.

Note that the same setup script also sets up FASTJET and LHAPDF. Check this with:

<code bash>
echo $FASTJET
echo $LHAPDF
echo $PROMC
</code>

== Python3 from LCG ==

You can also set up basic programs using Python 3. Create a setup file "setup.sh" like this:

<code bash>
#!/bin/bash
echo "Setup ROOT, PyROOT, tensorflow"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
lsetup "views LCG_104 x86_64-centos7-gcc11-opt"
</code>

Then you can set up many LCG packages as:

<code bash>
source setup.sh
</code>

Please read the section [[asc:workbook_data|Working with data at ANL Tier3]] for details on how to store and process your data. Before compiling any package yourself, please check whether it is already available (see below).

Note that you can also use the cvmfs command "localSetupSFT":

<code bash>
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
localSetupSFT --help
</code>

This prints the available (non-ATLAS) software.

===== Setting up ATLAS Software =====

To set up the ATLAS software, copy and save these lines in a file, say "set.sh":

<code bash>
export AVERS=17.8.0
export TEST_AREA=$HOME/testarea
# ANL local setup for the Frontier conditions database
export ALRB_localConfigDir="/share/sl6/cvmfs/localConfig"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
asetup --release=$AVERS --testarea=$TEST_AREA
</code>

Then run "source set.sh" every time you log in on the atlas16-28 computers. Note: you should use the bash shell for this setup. If you are happy with it, you can put this line in your '.bash_profile' or '.bashrc' file (if you use the bash shell and want to set up the ATLAS software every time you log in). You can change the ATLAS release and test area by changing the "AVERS" and "TEST_AREA" variables. Read more [[https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase|about ATLASLocalRootBase]].

===== COOL database =====

The COOL database is mirrored at ANL. After setting up an ATLAS release, check the variable $ATLAS_POOLCOND_PATH. Normally, you do not need to do anything, since athena should find this path.
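For example, a quick sanity check of the conditions setup (a minimal sketch; the exact contents of the conditions area depend on the release you set up) can be done as:

<code bash>
# after "source set.sh" (asetup), verify that the conditions path is defined
echo "Conditions path: $ATLAS_POOLCOND_PATH"
# a read-only look at the locally mirrored conditions files
ls "$ATLAS_POOLCOND_PATH" | head
</code>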
===== Database releases =====

If you need a different database release (rather than the one included in the current athena release), put these lines in your setup:

<code bash>
export DBRELEASE_INSTALLDIR="/share/grid/DBRelease"
export DBRELEASE_VERSION="9.6.1"
export ATLAS_DB_AREA=${DBRELEASE_INSTALLDIR}
export DBRELEASE_OVERRIDE=${DBRELEASE_VERSION}
</code>

Read more details [[https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasDBRelease|here]].

===== Cleaning environmental variables =====

To remove the ATLAS release and all associated environment variables, and to set up the ASC ANL environment only, use:

<code bash>
source /share/grid/app/asc_app/asc_rel/1.0/setup-script/set_asc.sh
</code>

After executing this script, you will have access to the most recent self-contained ROOT installation and all other variables necessary to work at ASC (CVS, firefox, etc.). You can put this command in a file, say "clean":

<code bash>
#!/bin/bash
source /share/grid/app/asc_app/asc_rel/1.0/setup-script/set_asc.sh
</code>

so that when you want to clear the shell from the ATLAS release, just type "source clean".

===== ATLAS Event display =====

Set up an ATLAS release and do:

<code bash>
cd $ATLANTISJAVA_HOME
/usr/bin/java -jar atlantis.jar
</code>

Here we assume release 15.6.1 and the atlas16/17 nodes.

Running VP1: set up an ATLAS release, go to testarea/[release] and type "vp1".

===== Using CVMFS file system =====

This is an alternative way to set up ATLAS releases and the grid. It uses a network-based file system from CERN and closely follows the lxplus setup. This setup can only work on the interactive nodes **atlas1,2,16,18**.

Log in on atlas2.hep.anl.gov and do:

<code bash>
export ALRB_localConfigDir="/share/sl6/cvmfs/localConfig"  # ANL-local config files (Frontier conditions database)
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
</code>

Then follow the instructions. Typically, you can set up an ATLAS release as:

<code bash>
asetup --release=17.8.0 --testarea=/users/<username>/testarea
</code>

assuming that the directory ~/testarea/AtlasOffline-17.8.0 exists. You can run

<code bash>
showVersions --show=athena
</code>

to list athena versions (and similarly "showVersions --show=dbrelease" for database releases). Note that the conditions pool file catalog is set up once "setupATLAS" is done. If cvmfs is available, its Athena/DBRelease versions will also be listed; otherwise, only the versions on local disk are listed.

When setting up "pathena", avoid using the DQ2 setup:

<code bash>
asetup 17.8.0,slc6,gcc47
localSetupPandaClient
</code>

Use the DQ2 setup in a different window:

<code bash>
localSetupDQ2Client
</code>

Your typical setup script may look like this:

<code bash>
#!/bin/bash
export AVERS=17.8.0
export TEST_AREA=$HOME/testarea
export ALRB_localConfigDir="/share/sl6/cvmfs/localConfig"  # ANL-local config files (Frontier conditions database)
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
asetup --release=$AVERS --testarea=$TEST_AREA
</code>

Then do "source setup.sh" to use it.

==== Using DQ2 with CVMFS ====

After running the setup shown above, execute:

<code bash>
localSetupDQ2Client --skipConfirm
</code>

You will get a banner; then type your password and answer "yes". It is safest to dedicate a separate window to DQ2, or to log out and back in after using DQ2 if you want to use Athena. Then type:

<code bash>
voms-proxy-init -voms atlas -valid 96:00
</code>

Read more about dq2-get and other grid services [[asc:workbook_grid|here]]. Additional information on how to use [[https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase|ATLASLocalRootBase]] is also available.
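As an illustration, a complete data-transfer session in a dedicated terminal might look as follows (a sketch only; the dataset name is a placeholder, and the exact dq2 options depend on your grid configuration):

<code bash>
#!/bin/bash
# dedicated terminal for data transfers; do not mix this with the Athena shell
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
localSetupDQ2Client --skipConfirm
# create a grid proxy valid for 96 hours
voms-proxy-init -voms atlas -valid 96:00
# check that the dataset exists, then download it (the name below is hypothetical)
dq2-ls  user.yourname.test.dataset/
dq2-get user.yourname.test.dataset/
</code>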
===== Using XROOTD =====

You may use xrootd on the interactive nodes. Log in to atlas16 or atlas18 and try to copy a file to the XROOTD user space on the farm:

<code bash>
xrdcp test.txt xroot://atlashn1.hep.anl.gov:1094//atlas/USER/test/text.txt
</code>

(replace USER with your user name). Similarly, you can use other XROOTD commands. For example, to remove a file:

<code bash>
rmdir xroot://atlashn1.hep.anl.gov:1094//atlas/USER/test.txt
</code>

On atlas16 and atlas18 you also have a "common" data space. Check the directory:

<code>
/atlasfs/atlas/local/
</code>

See also the link [[asc:workbook_xrootd]].

====== Working with the data ======

Please do not keep data on NFS, where your home directory is (/users/). There is a significant performance penalty when running your jobs on it, and it is impossible to back up your data. Please read the section [[asc:workbook_data|Working with the data]].

--- //[[chekanov@gmail.com|Sergei Chekanov]] 2011/03/09 17:17//