Instead: declare a static const instance (whose data members are zero)
and copy that.
This avoids the error-prone need to assign each member,
and it works even if there are virtual function tables.
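A minimal sketch of the pattern (the struct and its members are made up for illustration, not an actual BOINC type):

    // Illustrative struct. The static const instance below has all data
    // members zero; assigning from it resets an object without memset(),
    // which would clobber the vtable pointer.
    struct TASK_STATS {
        int njobs;
        double elapsed_time;
        virtual ~TASK_STATS() {}    // class has a vtable

        void clear() {
            static const TASK_STATS zero = {};  // value-initialized: members are 0
            *this = zero;                       // copies data members, not the vptr
        }
    };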
Change the submit_batch RPC and its various bindings (PHP, Python, C++)
to support an app_version_num arg,
specifying which app version number should be used to process the jobs in the batch.
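As a rough sketch of what the C++ binding might look like with the new argument (the signature, names, and default semantics here are hypothetical, not the actual API):

    #include <string>
    #include <vector>

    struct JOB_DESC {
        std::string name;
        // ... input files, command line, etc.
    };

    // Hypothetical wrapper around the submit_batch web RPC.
    // Presumably app_version_num == 0 (or omitted) means "let the
    // server choose"; a nonzero value means "process all jobs in this
    // batch with that app version number".
    int submit_batch(
        const std::string& project_url,
        const std::string& authenticator,
        const std::string& app_name,
        const std::vector<JOB_DESC>& jobs,
        int app_version_num = 0
    );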
The content-based file management system used by the Condor GAHP
used a file's MD5 hash as its physical name.
This caused problems for Einstein@home because their app
expected filenames with particular extensions.
So I:
- changed the docs and param names of the C++ interface
to the file management system (query_files() and upload_files())
to clarify that the physical file names can include additional text
besides the MD5 hash.
- changed the BOINC GAHP to look for filename extensions,
and append them to the physical filenames
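A sketch of the resulting naming scheme (the helper below is illustrative, not the actual GAHP code):

    #include <string>

    // Illustrative: physical name = MD5 hash plus whatever extension the
    // logical name carries, so apps that expect e.g. ".zip" or ".dat"
    // suffixes still see them.
    std::string physical_name(const std::string& md5, const std::string& logical_name) {
        std::string ext;
        size_t dot = logical_name.rfind('.');
        if (dot != std::string::npos) {
            ext = logical_name.substr(dot);   // includes the '.'
        }
        return md5 + ext;                     // e.g. "<md5 hash>.zip"
    }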
The batch query call used by Condor (query_batch_set(), in the C++ API)
returned info about all the jobs in the set of batches,
even those that hadn't changed.
This is potentially inefficient - a query might return info
about 10,000 jobs, only a few (or none) of which have changed state
since the last call.
Solution: add a "min_mod_time" parameter to the call.
Only jobs that have changed state since that time are reported.
Also, add a "server_time" field to the return,
giving the current time on the server
(in case there's clock skew between client and server)
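Roughly how a caller can use the two new fields to poll incrementally (the types below are illustrative, and the real query_batch_set() also takes the set of batches to query):

    #include <vector>

    struct JOB_STATE { int job_id; int state; };

    struct QUERY_REPLY {
        double server_time;             // current time on the server
        std::vector<JOB_STATE> jobs;    // only jobs changed since min_mod_time
    };

    // Stand-in for the real RPC wrapper.
    int query_batch_set(double min_mod_time, QUERY_REPLY& reply) {
        (void)min_mod_time;
        // ... issue the web RPC, parse the reply into 'reply' ...
        reply.server_time = 0;
        return 0;
    }

    void poll() {
        static double last_server_time = 0;
        QUERY_REPLY reply;
        if (query_batch_set(last_server_time, reply)) return;  // nonzero = error
        // ... handle reply.jobs; typically few or none have changed ...
        // Use the server's clock for the next poll, so client/server
        // clock skew doesn't cause missed or duplicated updates.
        last_server_time = reply.server_time;
    }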
Also, fix some text scrambling introduced in a previous checkin;
there must have been a gremlin in my vim.
On Windows, the working-set size reported by the OS for VM apps is too low.
Apparently the RAM usage is in fact roughly the VM size.
This can lead to running multiple VM apps
that together use more RAM than is available, causing performance problems.
Solution: use workunit.rsc_memory_bound as the working set size for VM apps.
(Note: for now, a VM app is one whose plan class includes "vbox").
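In rough terms the check amounts to the following (names are illustrative, not the actual client code):

    #include <cstring>

    // Illustrative: a VM app is (for now) one whose plan class contains
    // "vbox"; for those, report rsc_memory_bound instead of the too-low
    // working-set figure the OS gives on Windows.
    double working_set_size(
        const char* plan_class,
        double os_working_set,
        double rsc_memory_bound     // workunit.rsc_memory_bound
    ) {
        if (strstr(plan_class, "vbox")) {
            return rsc_memory_bound;
        }
        return os_working_set;
    }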
- Batches now have an optional "expire time".
If this time passes and the batch is not retired, abort and retire it.
- Add script "expire_batches", which enforces the above
(see the sketch after this list). Run it as a periodic task.
- Add a web RPC for setting the expire time of a batch
(it can be changed multiple times)
- Add a C++ interface for this RPC
- Add a BOINC_SET_LEASE command to the BOINC GAHP
("lease" is the Condor term for expire time)
BOINC_QUERY_BATCHES now prints, for each queried batch,
a count of the jobs followed by the jobs themselves.
BOINC_ABORT_JOBS takes a list of jobs, which may belong to different batches.
The handler for this looks up the batches and makes sure
the jobs belong to the user.
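A sketch of what the expire_batches pass does (the actual script, batch fields, and DB access differ; the stubs stand in for real queries):

    #include <ctime>
    #include <vector>

    struct BATCH {
        int id;
        bool retired;
        double expire_time;     // 0 = no lease set
    };

    // Stubs standing in for the real DB queries and batch operations.
    std::vector<BATCH*> get_unretired_batches() { return {}; }
    void abort_batch(BATCH&) {}     // abort any unfinished jobs
    void retire_batch(BATCH&) {}    // mark the batch as retired

    // Run as a periodic task: any batch whose lease has passed and
    // which isn't retired yet gets aborted and retired.
    void expire_batches() {
        double now = (double)time(0);
        for (BATCH* b : get_unretired_batches()) {
            if (b->expire_time && b->expire_time < now && !b->retired) {
                abort_batch(*b);
                retire_batch(*b);
            }
        }
    }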
- remote job submission:
- prefix error messages with "BOINC server:"
so higher levels can tell where the error is coming from
- "get templates" RPC can take job name instead of app name
- Condor interface:
- add BOINC_SELECT_PROJECT function
- BOINC_SUBMIT no longer has info about output files
- change BOINC_FETCH_OUTPUT semantics
- change "query_batch" to "query_batches"; allow multiple batches
- add "ping server" web RPC and GAHP function
- change BoincDb::get() so that it generates XML error message if needed
- add a web RPC for querying a completed job (returns status,
stderr output, run times, etc.); see the sketch after this list
- support Jaime's changes to the GAHP protocol
- support for zipped output files
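For the completed-job query mentioned above, the C++ side might look roughly like this (the struct, field, and function names are guesses for illustration, not the actual interface):

    #include <string>

    // Hypothetical record for a completed job.
    struct COMPLETED_JOB {
        int exit_status;
        std::string stderr_out;     // stderr output of the job
        double elapsed_time;        // wall-clock run time, seconds
        double cpu_time;            // CPU time, seconds
    };

    // Hypothetical wrapper for the web RPC: fills in 'job' for the
    // named (completed) job; returns nonzero on error.
    int query_completed_job(
        const std::string& project_url,
        const std::string& authenticator,
        const std::string& job_name,
        COMPLETED_JOB& job
    );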