====== Lesson 5: Running on multiple cores ======
  
This example is not needed for ATLAS Connect. If you still want to know how to run an ATLAS analysis job on several cores of your desktop, look at [[asc:tutorials:2014october#lesson_5running_a_job_on_multiple_cores]].

====== Lesson 6: Using HTCondor and Tier2 ======
  
Working on a Tier3 farm (Condor queue)
  
In this example we will use the HTCondor workload management system to send the job for execution in a queue on a Tier3 farm. We will start from the Lesson 4 directory, so if you have not done Lesson 4 yet, please do it first and verify that your code runs locally.

Start from a new shell and set up the environment, then create this shell script, which will be executed at the beginning of each job on each farm node:

<file bash startJob.sh>
#!/bin/bash
export RUCIO_ACCOUNT=YOUR_CERN_USERNAME
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
...
testRun submitDir
echo "enddate $(date)"
</file>
  
Make sure the RUCIO_ACCOUNT variable is properly set. Make this file executable, and create the file that describes our job requirements, which we will give to Condor:
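The "make this file executable" step is a plain chmod. A self-contained sketch in a scratch directory (the demo script below stands in for your real startJob.sh):

<code bash>
cd "$(mktemp -d)"                              # scratch directory for the demo
printf '#!/bin/bash\necho hi\n' > startJob.sh  # stand-in for the real script
chmod 755 startJob.sh                          # owner rwx, group/other rx
ls -l startJob.sh                              # mode now reads -rwxr-xr-x
./startJob.sh                                  # prints: hi
</code>

`chmod +x startJob.sh` works equally well if you only need the execute bits added.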
  
<file bash job.sub>
Jobs=10
getenv         = False
...
#Requirements   = HAS_CVMFS =?= True
queue $(Jobs)
</file>
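With `queue $(Jobs)` Condor submits 10 copies of the job; each copy receives a distinct $(Process) number (0 through 9 here), which is commonly used to keep per-job output apart. A sketch of that idiom (the output/error/log names below are illustrative, not taken from the tutorial's job.sub):

<code>
output = job.$(Process).out
error  = job.$(Process).err
log    = job.log
queue $(Jobs)
</code>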
  
To access files using FAX, the jobs need a valid grid proxy; that is why we send it along with each job. The proxy is the file whose name starts with "x509up", so in both job.sub and startJob.sh you should replace "x509up_u21183" with the name of your own grid proxy file. You can find the filename in the environment variable $X509_USER_PROXY.
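A quick way to check which filename to use (the fallback path is the usual Grid convention of /tmp/x509up_u&lt;uid&gt;, an assumption rather than something this page states):

<code bash>
# Print the proxy filename; fall back to the conventional default location
# /tmp/x509up_u<uid> when $X509_USER_PROXY is not set.
proxyfile=${X509_USER_PROXY:-/tmp/x509up_u$(id -u)}
echo "grid proxy file: $(basename "$proxyfile")"
</code>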
Line 265: Line 261:
You need to pack all of the working directory into a payload.zip file:
  
<code bash>
startJob.sh
rc clean
rm -rf RootCoreBin
zip -r payload.zip *
</code>
  
Now you may submit your task for execution and follow its status in this way:
<code>
chmod 755 ./startJob.sh; ./startJob.sh
</code>
  
<code bash>
asc/tutorials/2014october_connect.1412883063.txt.gz · Last modified: 2014/10/09 19:31 by asc