old model:
no assimilator
output files live in upload hierarchy w/ physical names
WUs are marked as assimilated when the batch is retired;
file_deleter deletes output files after that.
new model:
assimilator (e.g. sample_assimilate.py and sample_assimilator.cpp)
moves output files of canonical results to project/results/<batch_id>,
with names that include the batch name and the logical name.
WU is marked as assimilated; file_deleter deletes
output files of non-canonical results.
advantages of new model:
can see all output files of a batch on cmdline with ls
can zip output files of a batch without copying them
unified naming scheme for output files that encodes
the batch, the job (e.g. the BUDA job dir name)
and the logical name of the file.
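The new-model file move can be sketched as follows. This is a minimal illustration, not the actual sample_assimilate.py: the helper name and the exact destination naming scheme (batch name, job name, and logical name joined with separators) are assumptions; only the results/<batch_id> layout comes from the notes above.

```python
import os
import shutil

def assimilate_canonical(project_dir, batch_id, batch_name, job_name, output_files):
    """Move the output files of a canonical result into
    <project_dir>/results/<batch_id>/.

    output_files maps each logical name to the file's current physical
    path in the upload hierarchy.  The destination name encodes the
    batch name, the job name, and the logical name (illustrative
    scheme; the real assimilator's naming may differ).
    """
    dest_dir = os.path.join(project_dir, 'results', str(batch_id))
    os.makedirs(dest_dir, exist_ok=True)
    moved = []
    for logical_name, phys_path in output_files.items():
        dest = os.path.join(dest_dir, f'{batch_name}__{job_name}__{logical_name}')
        shutil.move(phys_path, dest)
        moved.append(dest)
    return moved
```

With this layout, `ls project/results/<batch_id>` shows every output file of the batch, and the directory can be zipped in place.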
------------
script assimilator: pass logical names to the script
Support both models. Choice of model is per app.
The project.inc file says which app uses which model.
BUDA lets you run arbitrary Docker applications using a single
BOINC app (and one app version per platform).
The Dockerfile and science executables are in the workunit.
The script tools/submit_buda lets you test this.
Basic tests were successful.
2) Unify the scripts for testing job submission
The 'test' assimilators (sample_assimilate.py and sample_assimilator.cpp)
do the same thing: they copy the result's output files to
<proj_dir>/results/<batch_id>/<wu_name> (if there is 1 output file)
<proj_dir>/results/<batch_id>/<wu_name>_<i> (if there are >1 output files)
where <batch_id> is 0 if the WU is not in a batch,
and they write the error code to <wu_name>_error if the WU errored out.
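The destination-path rules above can be captured in a small helper. This is a sketch of the naming convention only (the function name is made up, and the exact `_<i>` suffix format is as described in the notes, not verified against the scripts):

```python
import os

def result_dest_paths(proj_dir, batch_id, wu_name, n_outputs, errored=False):
    """Compute where the test assimilators put a workunit's files.

    Files go under <proj_dir>/results/<batch_id>/ (batch_id is 0 when
    the WU is not in a batch): <wu_name> for a single output file,
    <wu_name>_<i> for multiple output files, and <wu_name>_error
    when the WU errored out.
    """
    d = os.path.join(proj_dir, 'results', str(batch_id))
    if errored:
        return [os.path.join(d, wu_name + '_error')]
    if n_outputs == 1:
        return [os.path.join(d, wu_name)]
    return [os.path.join(d, f'{wu_name}_{i}') for i in range(n_outputs)]
```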
Scripts to submit jobs:
submit_job
submit_batch
submit_buda
Script to query jobs:
query_job
query_job works for either single jobs or batches,
as long as the app uses one of the above assimilators.
3) Add plan class 'docker' to plan_class_spec.xml.sample
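A minimal entry for such a plan class might look like the following. This is a guess at the shape of the entry, not the actual contents of plan_class_spec.xml.sample; in particular the <docker/> bool tag is an assumption.

```xml
<plan_class>
    <!-- match hosts that can run Docker jobs (tag name assumed) -->
    <name>docker</name>
    <docker/>
</plan_class>
```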