# Commit 6fb40d9e (UHH ML PS et al. / Experimental / Dataset production)

Authored 4 years ago by Patrick L.S. Connor (parent: 6a10d1c1)

Commit message:

> submitting 1M events for both FullSim and FastSim

4 changed files, with 41 additions and 14 deletions:

- `Full/run`: 8 additions, 3 deletions
- `README.md`: 22 additions, 2 deletions
- `parallel`: 4 additions, 3 deletions
- `submit`: 7 additions, 6 deletions
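The "1M events" of the commit message is consistent with the numbers hardcoded in this commit: 1000 events per job in `Full/run` and `NJOBS=1000` jobs in `submit`. A quick sanity check:

```shell
# 1000 HTCondor jobs x 1000 events per job (values taken from this commit)
njobs=1000
nevents=1000
echo $((njobs * nevents))   # → 1000000
```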
## Full/run (+8 −3)

```diff
 #!/bin/zsh
-nevents=$1
-id=$2
+nevents=1000
+id=$1
 cd $id
@@ -13,10 +13,15 @@ py=${cfi}_GEN_SIM.py
 ls $py
-echo "process.RandomNumberGeneratorService.generator.initialSeed = cms.untracked.uint32($id)" >> $py
+echo "process.RandomNumberGeneratorService.generator.initialSeed = cms.untracked.uint32($((id+1)))" >> $py
 cmsRun $py
-cmsDriver.py step2 --conditions auto:run2_mc -s DIGI:pdigi_valid,L1,DIGI2RAW,HLT:@relval2016 --datatier GEN-SIM-DIGI-RAW-HLTDEBUG -n $nevents --era Run2_2016 --eventcontent FEVTDEBUGHLT --filein file:step1.root --fileout file:step2.root #> step2_TTbar_13+TTbar_13+DIGIUP15+RECOUP15+HARVESTUP15+ALCATTUP15.log 2>&1
+cmsDriver.py step2 --conditions auto:run2_mc -s DIGI:pdigi_valid,L1,DIGI2RAW,HLT:@relval2016 --datatier GEN-SIM-DIGI-RAW-HLTDEBUG -n 1000 --era Run2_2016 --eventcontent FEVTDEBUGHLT --filein file:step1.root --fileout file:step2.root > step2_TTbar_13+TTbar_13+DIGIUP15+RECOUP15+HARVESTUP15+ALCATTUP15.log 2>&1
 cmsDriver.py step3 --runUnscheduled --conditions auto:run2_mc -s RAW2DIGI,L1Reco,RECO,RECOSIM,EI,PAT,VALIDATION:@standardValidation+@miniAODValidation,DQM:@standardDQM+@ExtraHLT+@miniAODDQM --datatier GEN-SIM-RECO,AODSIM,MINIAODSIM,DQMIO -n $nevents --era Run2_2016 --eventcontent RECOSIM,AODSIM,MINIAODSIM,DQM --filein file:step2.root --fileout file:step3.root #> step3_TTbar_13+TTbar_13+DIGIUP15+RECOUP15+HARVESTUP15+ALCATTUP15.log 2>&1
```
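The seed line that `run` appends to the generated Python config is worth unpacking: the job directories are numbered from 0 in this commit, and the script writes seed `id+1`, presumably so that the job in directory 0 does not get seed 0. A minimal sketch of that append step, using a made-up file name in place of `${cfi}_GEN_SIM.py`:

```shell
# Hypothetical demo of the seed-appending step in Full/run.
# Directories are 0-based, so the seed written is id+1.
id=0
py=demo_GEN_SIM.py   # stand-in for ${cfi}_GEN_SIM.py
: > "$py"            # start from an empty file for this demo
echo "process.RandomNumberGeneratorService.generator.initialSeed = cms.untracked.uint32($((id+1)))" >> "$py"
cat "$py"            # → ...initialSeed = cms.untracked.uint32(1)
```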
## README.md (+22 −2)

````diff
@@ -17,9 +17,17 @@
 Finally clone this repository in `$CMSSW_BASE/src`.
+
+### In case of change of CMSSW version
+The version of CMSSW may change in the future.
+To track down the releases for the local architecture, just enter:
+```
+ls -1 -d /cvmfs/cms.cern.ch/*/cms/cmssw/CMSSW_10_6_22/src
+```
+
 ## Execution
-Load the environment (necessary each time you open a new shell):
+Start a new shell (to avoid possible conflicts), and load the environment:
 ```
 source init
 ```
@@ -46,7 +54,8 @@ Just do:
 ./parallel
 ```
 No option is necessary.
-You can change the number of events
+You can change the number of events in the script itself.
+Beware that each time you run this command, the former root files are removed.
 This approach can be useful to ensure that different seeds are used for each job (the second option of `run`, which we previously ignored).
 Always check the occupancy of the local machine with `htop`, and don't go for this option if too many people are on this machine.
@@ -58,6 +67,9 @@ Similar, just another script:
 ./submit
 ```
 This should be the privileged approach for large-scale production.
+Here too, remember that each time you rerun the command, you actually remove the former run.
+In case you want to extend the statistics of some existing sample, just clone this repo and run it from scratch.
+
 #### Troubleshooting
@@ -70,8 +82,16 @@ If your job is on hold and you want to know more:
 ```
 condor_q -global -better-analyze JOBID
 ```
 (You get the job id when running `condor_q`.)
+If one or several jobs were put on hold, and you have fixed the issue, you can release them as follows:
+```
+condor_release -all
+```
+If you want to kill all your jobs:
+```
+condor_rm -all
+```
 Otherwise, consult the [official documentation](https://htcondor.readthedocs.io/en/latest/man-pages/index.html).
````
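The `ls -1 -d /cvmfs/cms.cern.ch/*/cms/cmssw/CMSSW_10_6_22/src` line added to the README works because the shell expands the `*` (one match per SCRAM architecture) before `ls -d` lists the matching directories. A throwaway sketch of the same glob pattern, with invented architecture names instead of the real /cvmfs tree:

```shell
# Hypothetical mirror of the cvmfs layout: <arch>/cms/cmssw/<release>/src
root=$(mktemp -d)
mkdir -p "$root/slc7_amd64_gcc700/cms/cmssw/CMSSW_10_6_22/src"
mkdir -p "$root/slc7_amd64_gcc820/cms/cmssw/CMSSW_10_6_22/src"
ls -1 -d "$root"/*/cms/cmssw/CMSSW_10_6_22/src   # one line per architecture
```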
## parallel (+4 −3)

```diff
@@ -4,8 +4,9 @@ export NJOBS=2
 for i in {1..$NJOBS}
 do
-    rm -rf $i
-    mkdir $i
-    ./run $i &
+    j=$((i-1))
+    rm -rf $j
+    mkdir $j
+    ./run $j &
 done
 wait
```
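The new loop body shifts the index down by one, so the job directories run from 0 to NJOBS−1 while the zsh range `{1..$NJOBS}` still starts at 1. The index arithmetic in isolation (written with `seq` here so the sketch also runs under plain sh, which does not expand `{1..$NJOBS}`):

```shell
# Directories created by the loop for NJOBS=3: 0, 1, 2
NJOBS=3
for i in $(seq 1 "$NJOBS"); do
  j=$((i-1))
  echo "$j"
done
```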
## submit (+7 −6)

```diff
 #!/bin/zsh
-eval `/usr/bin/modulecmd zsh use -a /afs/desy.de/group/cms/modulefiles/`
-eval `/usr/bin/modulecmd zsh load cmssw`
-eval `scramv1 runtime -sh`
+#eval `/usr/bin/modulecmd zsh use -a /afs/desy.de/group/cms/modulefiles/`
+#eval `/usr/bin/modulecmd zsh load cmssw`
+#eval `scramv1 runtime -sh`
 export LD_LIBRARY_PATH_STORED=$LD_LIBRARY_PATH
 export NJOBS=1000
 for i in {1..$NJOBS}
 do
-    rm -rf $i
-    mkdir $i
+    j=$((i-1))
+    rm -rf $j
+    mkdir $j
 done
-condor_submit job
+condor_submit -batch-name ${PWD##*/} job
```
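The `-batch-name ${PWD##*/}` added to the `condor_submit` call uses POSIX parameter expansion: `##*/` deletes the longest prefix matching `*/`, leaving only the last path component, so the batch is named after the working directory. For example, with a made-up path:

```shell
# ${var##*/} strips everything up to and including the last slash.
dir=/nfs/dust/cms/user/example/Dataset-production   # hypothetical path
echo "${dir##*/}"   # → Dataset-production
```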