
How to become a member

To start using this wiki, the forum, and the download area, you should become a member. To do so, register here.

Available computers

Below we describe the computing environment of the ANL ATLAS cluster. Some users, especially visitors, are welcome to use the ATLAS Support Center cluster, which has additional resources. The instructions are available here.

Interactive nodes and desktops

To log in to the ATLAS clusters (not LCRC!), use these login computers:

alogin1.hep.anl.gov
alogin2.hep.anl.gov

Use your ANL domain user name and password. All other computers can be accessed from the login nodes above; they also require the ANL domain user name and password.
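
For example (replace <user> with your ANL domain user name):

# log in to one of the login nodes with your ANL domain credentials
ssh <user>@alogin1.hep.anl.gov
# other cluster machines can then be reached from the login node, e.g.
ssh atlas16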

Data storage

Every interactive node mounts this central storage:

/data/atlasfs02/c/users/

Computer farm

The computer farm consists of 16 servers, atlas50-65. Each server has two CPUs with 8 cores and 5 TB of local disk space. The farm is based on SL6.

The PC farm uses Arcond for job submission, which allows the resources to be shared and simplifies data discovery. The backend of this farm is Condor. You can use Condor directly as well, as long as you restrict your jobs to atlas50-65 and use 4 cores per server. If you have long jobs, please use the old farm. Logging in to the farm nodes is not allowed.
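
As an illustration, a minimal Condor submit file that keeps a job on the farm nodes could look like this (a sketch; the executable name, the requirements expression, and the file names are assumptions, not part of the Arcond configuration):

# myjob.sub: sketch of a Condor submit file for the atlas50-65 farm
universe     = vanilla
executable   = myjob.sh
# keep the job on the farm nodes atlas50-65 (hostname pattern is an assumption)
requirements = regexp("atlas(5[0-9]|6[0-5])", Machine)
# stay within 4 cores per server, as requested above
request_cpus = 4
output       = myjob.out
error        = myjob.err
log          = myjob.log
queue

Submit it with "condor_submit myjob.sub" and check its status with "condor_q".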

You can see the current CPU usage on the ANL-ATLAS computer farm monitor web page (accessible from inside ANL, or via a proxy after an ssh to atlas1). Alternatively, look at the Condor job monitor: ANL-ATLAS condor monitor.

You can run the JCondor monitoring GUI to monitor the Condor cluster:

source /share/sl6/set_asc.sh
cp -rf /share/sl6/jcondor .
cd jcondor
java -jar jcondor.jar

After initial login

About the scratch disk space: it is available on atlas1,2,16,17,18 (5 TB per server) and on every desktop. Every user has a directory under /data1 and /data2. The scratch disks are not backed up. You should use these disks if you want to gain speed when running/compiling your programs.
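
For example (assuming your scratch directory is named after your user name):

# check free space on the local scratch disks and work from your directory there (no backup!)
df -h /data1 /data2
cd /data1/$USER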

In some cases, you will need to access data from all computers. If the data size is not large (no more than 10 GB), you can put the data on NFS: /data/nfs/users/. If you need space for a larger sample, contact the administrator.
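
For example (the per-user subdirectory under /data/nfs/users/ is an assumption):

# copy a small sample (up to about 10 GB) to NFS so it is visible from all nodes
cp -r mysample /data/nfs/users/$USER/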

Please read the section Working with data at ANL Tier3 for details on how to store and process your data.

Default shell and environment

The default shell for the ANL ASC is bash. To set up the necessary environment (ROOT, SVN, etc.), run this script:

source.sh
source /share/sl6/set_asc.sh

Check this by running “root” or “condor_status”. You can also set the environment variables automatically: create a '.bash_profile' file if you do not have one yet, and put these lines in it:

    alias ls='ls -F --color=auto'
    source /share/sl6/set_asc.sh
 

This will set up a recent version of ROOT with the native PYTHON 2.4 from SL5.3. Alternatively, put the above lines in the file '.bashrc', or run the setup command by hand after each login.

At this point, no ATLAS software is installed yet. Note that the same setup script also sets up FASTJET, LHAPDF, and PROMC. Check this with:

echo $FASTJET
echo $LHAPDF
echo $PROMC
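
As an example, you can compile a standalone program against this FastJet installation (a sketch; it assumes $FASTJET points to an installation prefix containing bin/fastjet-config, and myanalysis.cc is a placeholder):

# build a test program using the FastJet installation pointed to by $FASTJET
g++ myanalysis.cc -o myanalysis $($FASTJET/bin/fastjet-config --cxxflags --libs)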

All precompiled software is located here:

/share/sl6/

Before compiling any package, please check this directory. Note that you can also use the CVMFS command “localSetupSFT”:

export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
localSetupSFT --help

This prints the available (non-ATLAS) software.

Setting up ATLAS Software

To set up the ATLAS software, copy and save these lines in a file, say “set.sh”:

set.sh
export AVERS=17.8.0
export TEST_AREA=$HOME/testarea
# ANL local setup for the Frontier conditions database
export ALRB_localConfigDir="/share/sl6/cvmfs/localConfig"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
asetup --release=$AVERS --testarea=$TEST_AREA

Then run “source set.sh” every time you log in on the atlas16-28 computers. Note: you should use the bash shell for this setup. If you are happy with it, you can put this line in your .bash_profile or .bashrc file (if you are using the bash shell and want to set up the ATLAS environment every time you log in). You can change the ATLAS release and test area by changing the “AVERS” and “TEST_AREA” variables.

You can also use packages compiled with RootCore. For SL6, they are located here:

 /share/sl6/AtlasRootCoreLib

They were compiled against the native SL6 gcc (/usr/bin/gcc), the native python (/usr/bin/python), and ROOT (source /share/grid/app/asc_app/asc_rel/1.0/setup-script/set_asc.sh). If you want to recompile these packages using a different ROOT, simply copy this directory to your own area and run “A_COMPILE.sh” to recompile all of them.
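
For example (a sketch; it assumes A_COMPILE.sh sits at the top of the copied directory):

# copy the precompiled RootCore packages and rebuild them with your own ROOT setup
cp -r /share/sl6/AtlasRootCoreLib $HOME/
cd $HOME/AtlasRootCoreLib
./A_COMPILE.sh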

If you need to add more packages, get them with svn:

export SVNOFF=svn+ssh://svn.cern.ch/reps/atlasoff
svn co $SVNOFF/Reconstruction/Jet/ApplyJetCalibration/tags/ApplyJetCalibration-00-03-15 ApplyJetCalibration

and run the script “A_COMPILE.sh” again.

Read more about ATLASLocalRootBase.

COOL database

The COOL database is mirrored at ANL. After setting up an ATLAS release, check $ATLAS_POOLCOND_PATH. Normally, you do not need to do anything, since Athena should find this path.
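
You can verify this with:

# check that the locally mirrored conditions area is picked up
echo $ATLAS_POOLCOND_PATH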

Database releases

If you need a different database release (rather than the one included in the current Athena release), put these lines in your setup:

export DBRELEASE_INSTALLDIR="/share/grid/DBRelease"
export DBRELEASE_VERSION="9.6.1"
export ATLAS_DB_AREA=${DBRELEASE_INSTALLDIR}
export DBRELEASE_OVERRIDE=${DBRELEASE_VERSION}

Read more details here

Cleaning environmental variables

To remove the ATLAS release and all associated environment variables, and set up only the ANL ASC environment, use:

   source /share/grid/app/asc_app/asc_rel/1.0/setup-script/set_asc.sh

After executing this script, you will have access to the most recent self-contained ROOT installation and all other variables necessary to work at the ASC (CVS, firefox, etc.). You can put this line in a file, say “clean”:

clean
#!/bin/bash
source /share/grid/app/asc_app/asc_rel/1.0/setup-script/set_asc.sh

so that when you want to clear the ATLAS release from your shell, just type “source clean”.

ATLAS Event display

Set up an ATLAS release and do:

cd $ATLANTISJAVA_HOME
/usr/bin/java -jar atlantis.jar

Here we assume release 15.6.1 and atlas16/17.

Running VP1:

Set up an ATLAS release, go to testarea/[release], and type “vp1”.
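
For example, using the AVERS and TEST_AREA variables from the setup script above (a sketch; the directory layout under the test area may differ):

# go to the test area of the configured release and start VP1
cd $TEST_AREA/$AVERS
vp1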

Using CVMFS file system

This is an alternative way to set up ATLAS releases and the grid tools. This setup uses a network-based file system from CERN and closely follows the lxplus setup.

This setup can only work on interactive nodes: atlas1,2,16,18

Log in to atlas2.hep.anl.gov and do:

setup.sh
export ALRB_localConfigDir="/share/sl6/cvmfs/localConfig"  # ANL-local config files (Frontier conditions database)
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh 

Then follow the instructions. Typically, you can set up an ATLAS release as:

asetup --release=17.8.0 --testarea=/users/<user>/testarea

assuming that the directory ~<user>/testarea/AtlasOffline-17.8.0 exists.
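
If the test area does not exist yet, you can create it first (a sketch following the naming above):

# create the test area directory that asetup expects
mkdir -p /users/<user>/testarea/AtlasOffline-17.8.0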

You can do

showVersions --show=athena 

to list Athena versions (similarly, showVersions --show=dbrelease). Note that the conditions pool file catalog is set up once setupATLAS has been run. If CVMFS is available, its Athena/DBRelease versions will also be listed; otherwise, only the local-disk versions are listed.

When setting up “pathena”, avoid using the DQ2 setup:

asetup 17.8.0,slc6,gcc47
localSetupPandaClient

Set up DQ2 (for dq2-get) in a different window!:

localSetupDQ2Client

Your typical setup script may look like this:

setup.sh
#!/bin/bash
export AVERS=17.8.0
export TEST_AREA=$HOME/testarea
export ALRB_localConfigDir="/share/sl6/cvmfs/localConfig"  # ANL-local config files (Frontier conditions database)
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
asetup --release=$AVERS --testarea=$TEST_AREA

then do “source setup.sh” to set it up.

Using DQ2 with CVMFS

After running the setup shown above, execute

localSetupDQ2Client --skipConfirm

You will get a banner; say “yes” and then type the password. It is safest to dedicate a window to DQ2, or to log out and back in after using DQ2 if you want to use Athena.

Then type:

voms-proxy-init -voms atlas -valid 96:00
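
With the proxy in place you can, for example, list and fetch a dataset with the DQ2 tools (the dataset name below is only a placeholder):

# check the grid proxy, then list and download a dataset
voms-proxy-info -all
dq2-ls user.myname.mydataset/
dq2-get user.myname.mydataset/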

Read more about dq2-get and other grid services here.

Read additional information on how to use ATLASLocalRootBase

Using XROOTD

You may use xrootd on the interactive nodes. Log in to atlas16 or atlas18 and try to copy a file to the XROOTD user space on the farm:

xrdcp test.txt  xroot://atlashn1.hep.anl.gov:1094//atlas/USER/test/test.txt

(replace USER with your user name).

Similarly, you can use other XROOTD commands. For example, remove this file:

xrdfs atlashn1.hep.anl.gov:1094 rm /atlas/USER/test/test.txt

On atlas16 and atlas18 you have a “common” data space. Check the directory:

/atlasfs/atlas/local/

See also the link workbook_xrootd.

Working with the data

Please do not keep data on NFS where your home directory is (/users/). There is a significant performance penalty when running your jobs over NFS, and it is not possible to back up your data.

Please read the section Working with the data.

Sergei Chekanov 2011/03/09 17:17
