This lets you run arbitrary Docker applications using a single
BOINC app (and one app version per platform).
The Dockerfile and science executables are in the workunit.
The script tools/submit_buda lets you test this.
Basic tests were successful.
2) Unify the scripts for testing job submission
The 'test' assimilators (sample_assimilate.py and sample_assimilator.cpp)
do the same thing: they copy result files to
<proj_dir>/results/<batch_id>/<wu_name> (if 1 output file)
<proj_dir>/results/<batch_id>/<wu_name>_i (if >1 output file)
where <batch_id> is 0 if WU is not in a batch
and they write the error code to <wu_name>_error if the WU errored out
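For reference, here is a rough C++ sketch of that copy convention
(the function name, parameters, and use of std::filesystem are illustrative
assumptions; the actual logic lives in sample_assimilate.py and
sample_assimilator.cpp):

    #include <filesystem>
    #include <string>
    #include <vector>

    namespace fs = std::filesystem;

    // Copy a job's output files to <proj_dir>/results/<batch_id>/<wu_name>
    // (one output file) or <wu_name>_i (several output files),
    // using batch ID 0 for jobs that are not part of a batch.
    void copy_outputs(
        const std::string& proj_dir,
        const std::string& wu_name,
        int batch_id,                              // 0 if the WU is not in a batch
        const std::vector<std::string>& outputs    // paths of the result's output files
    ) {
        fs::path dest_dir = fs::path(proj_dir) / "results" / std::to_string(batch_id);
        fs::create_directories(dest_dir);
        if (outputs.size() == 1) {
            fs::copy_file(outputs[0], dest_dir / wu_name,
                fs::copy_options::overwrite_existing);
        } else {
            int i = 0;
            for (const auto& f : outputs) {
                fs::copy_file(f, dest_dir / (wu_name + "_" + std::to_string(i++)),
                    fs::copy_options::overwrite_existing);
            }
        }
    }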
Scripts to submit jobs:
submit_job
submit_batch
submit_buda
Script to query jobs:
query_job
This works for either jobs or batches,
as long as the app uses one of the above assimilators
3) Add plan class 'docker' to plan_class_spec.xml.sample
script_assimilator:
in error case, pass WU name, ID, and batch ID (see the sketch after this list)
if verbose, show script cmdline
sample_assimilate.py:
write to ../sample_results/x
use batch ID in error case
show cmdlines if os.system() fails
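An illustrative C++ sketch of building and logging such an error-case
command line follows. The option name and argument order are made up for
illustration; see script_assimilator.cpp for the real command format:

    #include <cstdio>
    #include <cstdlib>
    #include <string>

    // Illustrative only: in the error case, pass the WU name, WU ID, and
    // batch ID to a project-supplied script.  The "--error" option and the
    // argument order are assumptions, not the actual script_assimilator format.
    int run_error_script(
        const std::string& script,   // path of the project's handler script (hypothetical)
        const std::string& wu_name,
        long wu_id,
        int batch_id,
        int error_code,
        bool verbose
    ) {
        char cmd[1024];
        snprintf(cmd, sizeof(cmd), "%s --error %d %s %ld %d",
            script.c_str(), error_code, wu_name.c_str(), wu_id, batch_id
        );
        if (verbose) {
            fprintf(stderr, "running: %s\n", cmd);   // show script cmdline if verbose
        }
        int retval = system(cmd);
        if (retval) {
            fprintf(stderr, "command failed (%d): %s\n", retval, cmd);
        }
        return retval;
    }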
Projects that have their own assimilator need to implement the assimilate_handler_usage() function, which should printf() a description of their custom options. For examples, see sample_assimilator.cpp or script_assimilator.cpp.
The assimilator handler can now pass unknown arguments to the project-specific handler.
Projects that have their own assimilators need to implement the assimilate_handler_init() function and handle project-specific arguments there. For examples, see sample_assimilator.cpp or script_assimilator.cpp.
I also added some checks for missing arguments and clarified some error messages.
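A minimal sketch of what a project-specific implementation of these two
hooks might look like, assuming the signatures suggested by the description
above (check sample_assimilator.cpp for the authoritative form; the
--outdir option is a made-up example):

    #include <cstdio>
    #include <cstring>

    // Hypothetical project-specific setting controlled by a custom option.
    static const char* outdir = "results";

    // Handle project-specific arguments.  The generic assimilator now
    // forwards arguments it doesn't recognize, so unknown options end up here.
    int assimilate_handler_init(int argc, char** argv) {
        for (int i = 0; i < argc; i++) {
            if (!strcmp(argv[i], "--outdir")) {
                if (i + 1 >= argc) {
                    fprintf(stderr, "--outdir requires a value\n");
                    return 1;
                }
                outdir = argv[++i];
            }
        }
        return 0;
    }

    // Print a description of the custom options.
    void assimilate_handler_usage() {
        printf(
            "    Custom options:\n"
            "    --outdir dir    where to put copied output files (made-up example)\n"
        );
    }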
The SETI@home result table is about to run out of 32-bit IDs,
so we need to move to 64-bit result IDs.
This will happen to the workunit table at some point too.
I changed the server C++ code to use the "long" type for all DB IDs
(and to use appropriate conversion codes like %lu).
"long" is 64 bit on 64-bit machines.
For uniformity I did this for all tables,
even ones (like app) that will never get big.
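A minimal sketch of the kind of change this involves (the struct and query
below are illustrative, not actual BOINC code):

    #include <cstdio>

    // Illustrative only -- not the actual BOINC DB structs.
    struct EXAMPLE_RESULT {
        long id;            // was "int id"; long is 64 bits on 64-bit machines
        long workunitid;    // IDs that reference other tables widen too
    };

    void make_lookup_query(char* buf, int len, const EXAMPLE_RESULT& r) {
        // the conversion code must match the wider type: %ld/%lu instead of %d/%u
        snprintf(buf, len, "select * from result where id=%ld", r.id);
    }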
I chose NOT to change the DB schema for now.
The new code will work with 32-bit ID fields in the DB.
As projects approach the 32-bit limit on a table they can change
its ID field, and fields that reference this table, to BIGINT.
This is likely to happen only on the result and workunit tables.
I put functions in html/ops/db_update.php
to change the IDs of these tables.