====== Lesson 5: Running on multiple cores ======
This example is not needed for ATLAS Connect. Please
look at [[asc:
====== Lesson 6: Using HTCondor and Tier2 ======

Working on a Tier3 farm (Condor queue)
In this example we will use the HTCondor workload management system to send a job to be executed in a queue on a Tier3 farm. We will start from the lesson 4 directory, so if you have not done lesson 4, please do it first and verify that your code runs locally.
Start from a new shell and set up the environment. Then create the wrapper script startJob.sh:

<file bash startJob.sh>
#!/bin/bash
export RUCIO_ACCOUNT=YOUR_CERN_USERNAME
export ATLAS_LOCAL_ROOT_BASE=/
testRun submitDir
echo "
</file>
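Because the listing above is truncated in this copy of the page, here is a rough sketch of what such a wrapper script typically contains. The cvmfs path, the RootCore release number, and the helper names (`localSetupFAX`, `rcSetup`) are assumptions based on standard ATLASLocalRootBase/RootCore usage of that era, not the author's exact file:

<code bash>
#!/bin/bash
# Hypothetical sketch of a Condor wrapper script; adjust paths,
# release numbers, and helper names to your own setup.
export RUCIO_ACCOUNT=YOUR_CERN_USERNAME
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
localSetupFAX                    # FAX for remote file access
export X509_USER_PROXY=$PWD/$1   # proxy file shipped with the job (assumed argument)
unzip payload.zip                # unpack the working directory
rcSetup Base,2.0.12              # hypothetical RootCore release
rc find_packages && rc compile
testRun submitDir
echo "Job done"
</code>

This script only runs inside a grid-enabled worker node with cvmfs mounted, so treat it as a template rather than something to execute locally.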
Make sure the RUCIO_ACCOUNT variable is set properly. Make this file executable, then create the submit description file that describes the job's needs and that we will give to condor:
<file bash job.sub>
Jobs=10
getenv
#
queue $(Jobs)
</file>
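Since the job.sub listing above is truncated in this copy, here is a minimal hypothetical HTCondor submit file of the same shape. The file names and the proxy path are illustrative assumptions, not the author's exact values:

<code>
# Hypothetical HTCondor submit file; names are placeholders.
Jobs = 10
getenv = True
universe = vanilla
executable = startJob.sh
output = job.$(Process).out
error = job.$(Process).err
log = job.$(Process).log
transfer_input_files = payload.zip,/tmp/x509up_uNNNNN
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
queue $(Jobs)
</code>

The `Jobs` macro is expanded by `$(Jobs)` in the `queue` statement, and `$(Process)` numbers the output files of the ten jobs 0 through 9.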
To access files using FAX, the jobs need a valid grid proxy; that is why we send one with each job. The proxy is the file whose name starts with "
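If you do not already have a proxy, one can typically be created with the VOMS client tools. This is a sketch that requires a grid-enabled environment with your certificate installed, so it will not run elsewhere:

<code bash>
# Create a VOMS proxy for the ATLAS VO (requires grid credentials)
voms-proxy-init -voms atlas
# Print where the proxy file lives (by default /tmp/x509up_u<uid>)
voms-proxy-info -path
</code>

The path printed by the second command is the file to list in the submit description so that it is transferred with each job.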
You need to pack all of the working directory into a payload.zip file:
<code bash>
rc clean
rm -rf RootCoreBin
zip -r payload.zip *
</code>
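As an optional sanity check before submitting, you can list the archive contents to confirm the wrapper script and your packages were included (assumes the `unzip` tool is available):

<code bash>
unzip -l payload.zip | head   # list the first entries of the archive
</code>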
Now you can submit your task for execution and follow its status as follows:
<code bash>
chmod 755 ./
</code>

<code bash>
~> condor_submit job.sub
Submitting job(s)..........
</code>
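Once submitted, the standard HTCondor command-line tools can be used to monitor and manage the jobs. This is a sketch; the cluster and job IDs shown are hypothetical:

<code bash>
condor_q                  # list your queued and running jobs
condor_q -analyze 123.0   # explain why a job (hypothetical ID) is idle
condor_history            # show recently completed jobs
condor_rm 123             # remove a whole cluster (hypothetical ID)
</code>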
asc/tutorials/2014october_connect.1412883028.txt.gz · Last modified: 2014/10/09 19:30 by asc