The SETI@home result table is about to run out of 32-bit IDs,
so we need to move to 64-bit result IDs.
This will happen to the workunit table at some point too.
I changed the server C++ code to use the "long" type for all DB IDs
(and to use appropriate conversion codes like %lu).
"long" is 64 bit on 64-bit machines.
For uniformity I did this for all tables,
even ones (like app) that will never get big.
I chose NOT to change the DB schema for now.
The new code will work with 32-bit ID fields in the DB.
As projects approach the 32-bit limit on a table, they can change
its ID field, and the fields that reference it, to BIGINT.
This is likely to happen only on the result and workunit tables.
I put functions in html/ops/db_update.php
to change the IDs of these tables.
The Python version was failing cryptically.
For scripts that access the DB, I'd like to move from Python to PHP,
so that we have one less DB layer to maintain.
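For reference, the ID migration described above amounts to ALTER TABLE
statements on the big tables and on the fields that reference them.
Here is a minimal PHP sketch of such a function; it is not the actual
db_update.php code, it assumes the standard schema names
(workunit.id, result.workunitid), and it uses placeholder DB credentials.

    <?php
    // Sketch only: widen workunit IDs to 64 bits by changing the
    // workunit table's own ID and result.workunitid, which references it.
    function update_workunit_ids_to_bigint($db) {
        $queries = array(
            "alter table workunit change column id id bigint not null auto_increment",
            "alter table result change column workunitid workunitid bigint not null",
        );
        foreach ($queries as $q) {
            if (!$db->query($q)) {
                die("query failed: $q: ".$db->error."\n");
            }
        }
    }

    // placeholder credentials; a real script reads them from the project config
    $db = new mysqli("localhost", "boincadm", "password", "project_db");
    if ($db->connect_error) die("connect failed: ".$db->connect_error."\n");
    update_workunit_ids_to_bigint($db);
    ?>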
Normally update_version gets the API version from the executable,
by scanning for the string API_VERSION.
However, if the project compresses executables this won't work.
So let it be specified in version.xml.
We were creating the workunit and then updating its transitioner_flags field.
If the transitioner ran in between,
it would (incorrectly) create results for the workunit.
Solution: set transitioner_flags in the insert itself.
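A SQL-level sketch of the before/after pattern; the actual change is in
the C++ server code, the column lists are simplified, and the flag value
1 is a placeholder:

    <?php
    // placeholder credentials and simplified column lists
    $db = new mysqli("localhost", "boincadm", "password", "project_db");

    // Racy version: the transitioner can pick up the workunit between
    // the insert and the update, and create results for it.
    $db->query("insert into workunit (name, appid) values ('wu_1', 1)");
    $db->query("update workunit set transitioner_flags=1 where id=".$db->insert_id);

    // Fixed version: the flag is part of the insert itself, so the
    // transitioner never sees the workunit without it.
    $db->query("insert into workunit (name, appid, transitioner_flags) values ('wu_2', 1, 1)");
    ?>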
A "remote input file" is located on a data server other than the project server.
Previously these could be specified only in the input template,
which was of limited utility.
Add new ways of specifying remote input files:
1) in the create_work program, a remote input file can be specified
with the command-line args
--remote_file name URL nbytes MD5
or with the same syntax on stdin when creating multiple jobs
(see the sketch after this list)
2) add a variant of create_work() called create_work2(),
which takes a vector of INFILE_DESC structures that can specify
either local or remote files
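Here is a sketch of method 1 as a PHP submission script might invoke it;
the app name, URL, size, and MD5 are placeholders, and the full set of
create_work options should be checked against its documentation:

    <?php
    // Sketch: create one job with a remote input file.
    // --remote_file takes name, URL, nbytes, MD5, matching the syntax above.
    $cmd = "bin/create_work"
        ." --appname myapp"
        ." --wu_name job_123"
        ." --remote_file in.dat http://data.example.com/in.dat 1048576"
        ." 0123456789abcdef0123456789abcdef";
    system($cmd, $ret);
    if ($ret) die("create_work failed\n");
    ?>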
The job submission RPC handler (PHP) originally ran the
create_work program once per job.
This took about 1.5 minutes to create 1000 jobs.
Recently I changed this so that create_work is run only once;
it does one SQL insert per job.
Disappointingly, this was only slightly faster: 1 min per 1000 jobs.
This commit changes create_work to create multiple jobs per SQL insert
(as many as will fit in a 1 MB query, which is the default limit).
This speeds things up by a factor of 100: 1000 jobs in 0.5 sec.
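The batching is the standard multi-row INSERT technique.
A PHP-level sketch of the idea (the real code is in the C++ create_work
program, and the column list is simplified); the byte cap corresponds to
the 1 MB default limit mentioned above:

    <?php
    // Pack as many rows as fit under the query-size cap into one INSERT.
    function insert_jobs_batched($db, $jobs, $max_query_bytes = 1000000) {
        $prefix = "insert into workunit (name, appid, transitioner_flags) values ";
        $values = array();
        $len = strlen($prefix);
        foreach ($jobs as $job) {
            $v = sprintf("('%s', %d, %d)",
                $db->real_escape_string($job['name']), $job['appid'], $job['flags']
            );
            // flush the current batch if this row would push us over the cap
            if ($values && $len + strlen($v) + 1 > $max_query_bytes) {
                $db->query($prefix.implode(",", $values)) or die($db->error);
                $values = array();
                $len = strlen($prefix);
            }
            $values[] = $v;
            $len += strlen($v) + 1;
        }
        if ($values) {
            $db->query($prefix.implode(",", $values)) or die($db->error);
        }
    }
    ?>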
Previously if you wanted to create lots of jobs from a script (e.g. PHP)
you had to run create_work once per job.
With the --stdin option you run it once,
passing it a file (via stdin) with one line per job.
Each line can specify a command line and/or a set of input files.
On my server this gives a throughput of about 1000 jobs per minute,
which is less than I would have expected,
but all the time is spent doing MySQL inserts,
so that's as good as we can do for now.
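For example, a submission script can pipe the job lines straight into
create_work; the per-line options shown here are illustrative, so check
the create_work documentation for the exact per-line syntax:

    <?php
    // Sketch: create 1000 jobs with a single create_work invocation.
    $h = popen("bin/create_work --appname myapp --stdin", "w");
    for ($i = 0; $i < 1000; $i++) {
        // one line per job: job-specific options and input file names
        fwrite($h, "--wu_name job_$i --command_line '--seed $i' input_$i.dat\n");
    }
    pclose($h);
    ?>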
Also fix a bug in stage_file.
The docs said that putting <gzip/> for a file in your input template
would cause it to be transferred in gzip form.
But most of the server-side implementation was missing.
- in process_input_template(), parse <gzip/>,
and add <gzipped_url> elements to the output
(see the template excerpt after this list).
- stage_file was generating MD5 cache files containing only the MD5,
but process_input_template() expected them to contain file size as well.
Change stage_file to write both,
and change process_input_template() to write an error message
if it finds a bad MD5 file.
- remove stuff from process_input_template() related to
"generated_locally", a feature that doesn't exist anymore.
- add delete button
- show # of badges assigned
- don't use separate head/tail functions for admin pages
- add badge assign task for new projects
- turn on db_dump, update_uotd, update_forum_activities,
update_profile_pages, notify for new projects
Add script (compare_pot) for comparing templates modulo comments,
and sending reminder emails if they differ.
Add script (build_pos) for generating all templates and comparing
them with current versions.