This lets you run arbitrary Docker applications using a single
BOINC app (and one app version per platform).
The Dockerfile and science executables are in the workunit.
The script tools/submit_buda lets you test this.
Basic tests were successful.
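For illustration, here is a conceptual Python sketch of what a
client-side wrapper for such jobs might do, assuming the job's
working directory already holds the workunit's Dockerfile and
science executables. This is my sketch, not the actual BOINC wrapper:

    import subprocess

    def run_docker_job(job_dir, image_tag="boinc_job"):
        # Build an image from the Dockerfile shipped with the workunit.
        # job_dir must be an absolute path (required by "docker -v").
        subprocess.run(["docker", "build", "-t", image_tag, job_dir], check=True)
        # Run it with the job dir mounted so the science executables
        # can read input files and write output files in place.
        subprocess.run(
            ["docker", "run", "--rm",
             "-v", "%s:/work" % job_dir, "-w", "/work", image_tag],
            check=True,
        )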
2) Unify the scripts for testing job submission
The 'test' assimilators (sample_assimilate.py and sample_assimilator.cpp)
do the same thing: they copy result files to
<proj_dir>/results/<batch_id>/<wu_name> (if 1 output file)
<proj_dir>/results/<batch_id>/<wu_name>_i (if >1 output file, where i is the file index)
where <batch_id> is 0 if the WU is not in a batch,
and they write the error code to <wu_name>_error if the WU errored out.
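In outline, the shared copy logic looks like this (a Python sketch;
the function name and signature are mine, not the actual sample code):

    import os
    import shutil

    def assimilate(proj_dir, batch_id, wu_name, output_files, error_code=0):
        # batch_id is 0 if the WU is not part of a batch
        dest = os.path.join(proj_dir, "results", str(batch_id))
        os.makedirs(dest, exist_ok=True)
        if error_code:
            # the WU errored out: record the code in <wu_name>_error
            with open(os.path.join(dest, wu_name + "_error"), "w") as f:
                f.write("%d\n" % error_code)
        elif len(output_files) == 1:
            shutil.copy(output_files[0], os.path.join(dest, wu_name))
        else:
            # several output files: append the file index to the WU name
            for i, path in enumerate(output_files):
                shutil.copy(path, os.path.join(dest, "%s_%d" % (wu_name, i)))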
Scripts to submit jobs:
submit_job
submit_batch
submit_buda
Script to query jobs:
query_job
This works for either single jobs or batches,
as long as the app uses one of the above assimilators.
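That works because a job's state can be read straight off the
directory layout above; a hypothetical check along these lines
(not the actual query_job code):

    import os

    def job_state(proj_dir, batch_id, wu_name):
        # Hypothetical helper: infer a job's state from the files the
        # assimilators above leave in the results directory.
        d = os.path.join(proj_dir, "results", str(batch_id))
        if os.path.exists(os.path.join(d, wu_name + "_error")):
            return "error"
        if os.path.exists(os.path.join(d, wu_name)) \
                or os.path.exists(os.path.join(d, wu_name + "_0")):
            return "done"
        return "in progress"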
3) Add plan class 'docker' to plan_class_spec.xml.sample
You can now pre-assign a job's credit, as described here:
https://boinc.berkeley.edu/trac/wiki/CreditOptions
Note: this feature was originally available via an
--additional_xml "<credit>xx</credit>" arg to create_work.
That was an ugly kludge; I removed it.
In fact, the --additional_xml arg itself should be removed at some point.
Also: change stage_file so that it cd's to html/bin when including other files;
this is needed since util_basic.inc now includes something else.
In some cases of file staging (both remote and via stage_file)
we'd do the following:
1) create the .md5 file (in check_download_file())
2) move or copy the file into the download dir
This can result in the file having a later mod time than the .md5 file,
which causes process_input_template() to reject the .md5 file.
Solution: touch the .md5 file after the move or copy
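In Python terms, the fixed sequence looks like this (a sketch;
the real code lives in check_download_file() and the staging code):

    import hashlib
    import os
    import shutil

    def stage_into_download_dir(src, download_dir):
        # 1) create the .md5 file (it holds the file's MD5 and size)
        md5_path = os.path.join(download_dir, os.path.basename(src) + ".md5")
        with open(src, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        with open(md5_path, "w") as f:
            f.write("%s %d\n" % (digest, os.path.getsize(src)))
        # 2) move or copy the file into the download dir
        shutil.copy(src, download_dir)
        # The copy can leave the data file with a later mod time than
        # the .md5 file, which makes process_input_template() reject it.
        # The fix: touch the .md5 file afterwards.
        os.utime(md5_path, None)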
Files in the download dir can have accompanying ".md5" files
containing their MD5 and size.
This eliminates the need to calculate these when creating a job that uses the file.
The .md5 files were being created by stage_file (local staging)
but not by remote file management.
In fact, the latter wasn't checking for file immutability violations.
I changed remote file management to add this check,
and to create the .md5 file.
The .md5 creation is done in a new function shared with stage_file.
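A sketch of what that shared function might look like (the name,
signature, and error handling are mine):

    import hashlib
    import os

    def write_md5_file(path, md5_path):
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        size = os.path.getsize(path)
        if os.path.exists(md5_path):
            # Immutability check: a file name must never be reused
            # for different contents.
            old_digest, old_size = open(md5_path).read().split()
            if old_digest != digest or int(old_size) != size:
                raise RuntimeError("immutability violation: %s" % path)
        with open(md5_path, "w") as f:
            f.write("%s %d\n" % (digest, size))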
Previously if you wanted to create lots of jobs from a script (e.g. PHP)
you had to run create_work once per job.
With the --stdin option you run it once,
passing it a file (via stdin) with one line per job.
Each line can specify a command line and/or a set of input files.
On my server this gives a throughput of about 1000 jobs per minute,
which is less than I would have expected,
but all the time is spent doing MySQL inserts,
so that's as good as we can do for now.
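For example, a hypothetical driver (the per-line syntax shown, a
--command_line option followed by input file names, is my assumption;
check the create_work docs for the exact format):

    import subprocess

    # Build one line per job and feed them all to a single create_work
    # run, instead of forking create_work once per job.
    lines = []
    for i in range(1000):
        lines.append('--command_line "--seq %d" input_%d' % (i, i))

    subprocess.run(
        ["bin/create_work", "--appname", "myapp", "--stdin"],
        input="\n".join(lines).encode(),
        check=True,
    )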
Also fix a bug in stage_file.
The docs said that putting <gzip/> for a file in your input template
would cause it to be transferred in gzip form.
But most of the server-side implementation was missing.
- in process_input_template(), parse <gzip/>,
and add <gzipped_url> elements to the output
(sketched below, after this list).
- stage_file was generating MD5 cache files containing only the MD5,
but process_input_template() expected them to contain file size as well.
Change stage_file to write both,
and change process_input_template() to write an error message
if it finds a bad MD5 file.
- remove code from process_input_template() related to
"generated_locally", a feature that no longer exists.