===== Lesson 6: Using HTCondor and Tier2 =====
In this example we will use the HTCondor workload management system to send the job for execution in a queue at a Tier3 farm. We will start from the lesson 4 directory, so if you did not do lesson 4, please do it first and verify that your code runs locally.
Start from a new shell and set up the environment; a minimal sketch of the setup is shown below.
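The setup itself is the one from the earlier lessons; this is only a sketch, assuming the standard ATLASLocalRootBase tools, so adjust it to whatever you actually used:

<code bash>
# Assumed environment setup from the earlier lessons (adjust to your site):
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
localSetupFAX                  # FAX tools for remote data access
voms-proxy-init -voms atlas    # grid proxy for the ATLAS VO
</code>

Then create this shell script, which will be executed at the beginning of each job on each farm node: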
  
<file bash startJob.sh>
#!/bin/bash
export RUCIO_ACCOUNT=YOUR_CERN_USERNAME
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
# ... the rest of startJob.sh continues here ...
</file>
Make sure the RUCIO_ACCOUNT variable is properly set. Make this file executable, then create the file that describes our job's needs and that we will give to condor:
  
<file bash job.sub>
Jobs=10
getenv         = False
# ... the rest of the submit description continues here ...
#Requirements   = HAS_CVMFS =?= True
queue $(Jobs)
</file>
  
To access files using FAX, the jobs need a valid grid proxy; that is why we send it along with each job. The proxy is the file whose name starts with "x509up", so in both job.sub and startJob.sh you should replace "x509up_u21183" with the name of your own grid proxy file. You can find the filename in the environment variable $X509_USER_PROXY.
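If you are not sure what your proxy file is called, the standard grid proxy tools will tell you. A quick check, assuming the VOMS clients from the earlier lessons are set up:

<code bash>
voms-proxy-init -voms atlas    # create a proxy if you do not have a valid one
echo $X509_USER_PROXY          # the proxy filename used by the grid tools
voms-proxy-info -all           # proxy path, owner and remaining lifetime
</code>

By default the proxy is created as /tmp/x509up_u<your numeric user id>, which is where a name like "x509up_u21183" comes from.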
You need to pack the whole working directory into a payload.zip file:
  
<code bash>
rc clean
rm -rf RootCoreBin
zip -r payload.zip *
</code>
  
Now make the wrapper script executable and give it a quick local run; then submit your task and follow its status in this way:

<code>
chmod 755 ./startJob.sh; ./startJob.sh
</code>
  
<code bash>
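# Submit the jobs and watch the queue (standard HTCondor
# commands, assumed here):
condor_submit job.sub
condor_q
</code>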