Use snprintf() instead of sprintf() in a few places
(should eventually do it everywhere)
Notes:
(void)x;
seems to work (cross-platform) for suppressing unused arg warnings.
int n;
n = snprintf(buf, sizeof(buf), ...);
(void)n;
works in some cases for suppressing buffer-size warnings,
but not in all cases; not sure why.
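A minimal sketch of both idioms (the function, buffer, and format
string are placeholders):

    #include <cstdio>

    void log_name(int unused_arg, const char* name) {
        (void)unused_arg;    // suppresses unused-argument warnings
        char buf[256];
        int n = snprintf(buf, sizeof(buf), "name: %s", name);
        (void)n;             // sometimes silences truncation warnings
        puts(buf);
    }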
If you specify the input or output templates,
the standard ones (appname_in, appname_out) don't need to exist.
Also, in create_work():
- don't read the output template; just check that it exists.
- deprecate the result_template_filepath arg; it's redundant
This supports the TACC use case,
in which the jobs in a batch can use different Docker images
and different input and output file signatures,
none of which are known in advance.
Python API binding:
- A JOB_DESC object can optionally contain wu_template and result_template
elements, which are the templates (the actual XML) to use for that job.
Add these to the XML request message if present.
- Added the same capability to the PHP binding, but not C++.
- Added and debugged test cases for both languages.
Also, submit_batch() can take either a batch name (in which case
the batch is created) or a batch ID
(in which case the batch was created earlier, prior to remotely staging files).
RPC handler:
- in submit_batch(), check for jobs with templates specified
and store them in files.
For input templates (which are deleted after creating jobs)
we put them in /tmp,
and use a map so that if two templates are the same we use one file.
For output templates (which have to last until all jobs are done)
we put them in templates/tmp, with content-based filenames
to economize (see the sketch after this list).
- When creating jobs, or generating SQL strings for multiple jobs,
use these names as --wu_template_filename
and --result_template_filename args to create_work
(either cmdline args or stdin args)
- Delete WU templates when done
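The handler itself is PHP; as a language-neutral illustration of the
content-based naming idea, here is a C++ sketch (std::hash stands in
for whatever content hash is actually used):

    #include <functional>
    #include <map>
    #include <string>

    // Map template contents -> filename, so identical templates
    // share one file. A real implementation would use a proper
    // content hash (e.g. MD5) rather than std::hash.
    std::string template_filename(
        std::map<std::string, std::string>& seen,
        const std::string& contents
    ) {
        auto it = seen.find(contents);
        if (it != seen.end()) return it->second;  // reuse existing file
        std::string name = "templates/tmp/wu_"
            + std::to_string(std::hash<std::string>{}(contents));
        seen[contents] = name;
        // ... write contents to `name` here ...
        return name;
    }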
create_work.cpp:
handle per-job --wu_template and --result_template args in stdin job lines
(the names of per-job WU and result templates).
Maintain a map from WU template name to contents,
to avoid repeatedly reading the template files (see the sketch below).
For jobs that don't specify templates, use the ones specified
at the batch level, or the defaults.
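A hedged sketch of that cache (simplified; the real code uses BOINC's
file utilities and error handling):

    #include <fstream>
    #include <map>
    #include <sstream>
    #include <string>

    // Cache of WU template name -> contents, so each template file
    // is read at most once across all the jobs given on stdin.
    std::map<std::string, std::string> template_cache;

    const std::string& get_template(const std::string& name) {
        auto it = template_cache.find(name);
        if (it == template_cache.end()) {
            std::ifstream f(name);
            std::ostringstream ss;
            ss << f.rdbuf();                 // read the whole file
            it = template_cache.emplace(name, ss.str()).first;
        }
        return it->second;
    }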
Otherwise, result file names can be inferred from result names.
An attacker with task A could find the name of the "wingman" task B,
upload fake files as B's output files,
upload the same files as A's output files,
report A as completed, and get unearned credit.
The SETI@home result table is about to run out of 32-bit IDs,
so we need to move to 64-bit result IDs.
This will happen to the workunit table at some point too.
I changed the server C++ code to use the "long" type for all DB IDs
(and to use appropriate conversion codes like %lu).
"long" is 64 bit on 64-bit machines.
For uniformity I did this for all tables,
even ones (like app) that will never get big.
I chose NOT to change the DB schema for now.
The new code will work with 32-bit ID fields in the DB.
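As a sketch of the pattern (the typedef and values are illustrative,
not the actual BOINC declarations):

    #include <cstdio>

    typedef long DB_ID_TYPE;    // 64 bits under LP64

    int main() {
        DB_ID_TYPE result_id = 5000000000L;  // past the 32-bit limit
        char query[256];
        snprintf(query, sizeof(query),
            "select * from result where id=%lu",
            (unsigned long)result_id);
        printf("%s\n", query);
        return 0;
    }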
As projects approach the 32-bit limit on a table they can change
its ID field, and fields that reference this table, to BIGINT.
This is likely to happen only on the result and workunit tables.
I put functions in html/ops/db_update.php
to change the IDs of these tables.
We were creating the workunit, then updating its transitioner_flags field.
If the transitioner ran in between,
it would (incorrectly) create results for the workunit.
Solution: set transitioner_flags during insert.
A "remote input file" is located on a data server other than the project server.
Previously these could be specified only in the input template,
which was of limited utility.
Add new ways of specifying remote input files:
1) in the create_work program, a remote input file can be specified
with command-line args
--remote_file name URL nbytes MD5
or by the same syntax in stdin when creating multiple jobs
2) add a variant of create_work() called create_work2(),
which takes a vector of INFILE_DESC structures that can specify
either local or remote files
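A hedged sketch of the new call; INFILE_DESC's field names here are
illustrative, not the exact declarations:

    #include <string>
    #include <vector>

    struct INFILE_DESC {        // illustrative fields only
        bool is_remote;
        std::string name;       // local case: logical file name
        std::string url;        // remote case
        double nbytes;
        std::string md5;
    };

    void example() {
        std::vector<INFILE_DESC> infiles(2);
        infiles[0].is_remote = false;
        infiles[0].name = "input_local.dat";
        infiles[1].is_remote = true;
        infiles[1].url = "http://example.com/input_remote.dat";
        infiles[1].nbytes = 1048576;
        infiles[1].md5 = "0123456789abcdef0123456789abcdef";
        // create_work2(wu, ..., infiles);   // per this commit
    }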
The job submission RPC handler (PHP) originally ran the
create_work program once per job.
This took about 1.5 minutes to create 1000 jobs.
Recently I changed this so that create_work only is run once;
it does one SQL insert per job.
Disappointingly, this was only slightly faster: 1 min per 1000 jobs.
This commit changes create_work to create multiple jobs per SQL insert
(as many as will fit in a 1 MB query, which is the default limit).
This speeds things up by a factor of 100: 1000 jobs in 0.5 sec.
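A sketch of the batching logic (names are illustrative; the real code
builds each VALUES tuple from the job's parameters, and assumes any
single tuple fits under the cap):

    #include <string>
    #include <vector>

    // Pack as many VALUES tuples as fit under the size cap into
    // each INSERT statement.
    std::vector<std::string> batch_inserts(
        const std::string& prefix,               // "insert into ... values "
        const std::vector<std::string>& tuples,  // "(...)" per job
        size_t max_bytes = 1000000               // ~1 MB MySQL default
    ) {
        std::vector<std::string> queries;
        std::string cur;
        for (const auto& t : tuples) {
            if (cur.empty()) {
                cur = prefix + t;
            } else if (cur.size() + 1 + t.size() > max_bytes) {
                queries.push_back(cur);          // start a new statement
                cur = prefix + t;
            } else {
                cur += "," + t;
            }
        }
        if (!cur.empty()) queries.push_back(cur);
        return queries;
    }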
See http://boinc.berkeley.edu/trac/wiki/MultiSize
The components of this include:
- DB changes:
add size_class to workunit and result,
and n_size_classes to app; >1 means multi-size
- size_regulator daemon program: change results states
from INACTIVE to UNSENT carefully
- size_census program; writes quantile info in flat files
- transitioner: when creating results for multi-size apps,
set server state to INACTIVE
- sched shmem (feeder): read quantile info from flat files,
store in shared memory
- scheduler (score-based scheduling): for multi-size apps,
add a component to the score function for size class
(see the sketch after this list).
- show_shmem: show result size class
- make_work (and other callers of count_unsent_results()):
count both INACTIVE and UNSENT
- create_work: add --size_class cmdline option
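The commit doesn't give the scoring formula; purely as an
illustration, the size-class component might reward matching classes:

    // Illustrative only; not the actual BOINC scoring term.
    double size_class_bonus(int host_size_class, int job_size_class) {
        int d = host_size_class - job_size_class;
        if (d < 0) d = -d;
        return (d == 0) ? 1.0 : -0.5*d;   // exact match scores best
    }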
Also:
- if we get MySQL errors during the upgrade, don't rewrite db_version
to account for systematic errors in FLOP count
- adjust_user_priority: get total project RAC by summing RAC
of app versions where RAC has been updated in past week
- feeder: add --priority_asc option
(for when wu.priority is a logical time)
This now supports two main use cases:
1) There's a job that you want to run once on all hosts,
present and future
(or all hosts belonging to a user, or to a team).
The job is never transitioned, validated, or assimilated.
2) There's a normal job for which you want to use only
hosts belonging to a specific user (e.g. cluster or cloud hosts).
This restriction can be made either when the job is created,
or on the fly,
e.g. as part of a scheme for accelerating batch completion.
For the latter purposes we now provide a function
restrict_wu_to_user(DB_WORKUNIT&, int userid);
The job goes through the standard
transitioner/validator/assimilator path.
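A hedged usage sketch (assumes the BOINC server headers and an open
database connection; error handling trimmed):

    #include "boinc_db.h"   // DB_WORKUNIT

    int restrict_example(long wu_id, int userid) {
        DB_WORKUNIT wu;
        int retval = wu.lookup_id(wu_id);   // fetch the job
        if (retval) return retval;
        // from here on, only hosts belonging to userid get this job
        return restrict_wu_to_user(wu, userid);
    }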
These cases are enabled by config flags
<enable_assignment_multi/>
<enable_assignment/>
respectively.
Assignments of type 2) are no longer stored in shared mem,
so there is no limit on their number.
There is no longer a rule that assigned job names must contain "asgn".
NOTE: this requires a database update.
svn path=/trunk/boinc/; revision=25169
- added program cancel_jobs for canceling jobs
- DB interface: it's not an error if update_fields_noid()
affects != 1 rows
svn path=/trunk/boinc/; revision=24413
Add parsed_tag and is_tag to the class,
so that parsing functions don't need to declare them
and pass them around.
- Complete the task of using XML_PARSER as the argument
to all parsing functions.
(Internally, many of these functions still use the old XML parser;
that's the next step.)
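A self-contained sketch of the refactor's effect (simplified types,
not BOINC's actual declarations):

    #include <string>

    struct PARSER_SKETCH {       // stand-in for XML_PARSER
        std::string parsed_tag;  // now a member, not a caller local
        bool is_tag = false;     // ditto
        bool get_tag() {
            // read the next token, fill parsed_tag/is_tag;
            // return false at end of input
            return false;
        }
    };

    void parse(PARSER_SKETCH& xp) {
        while (xp.get_tag()) {
            if (xp.parsed_tag == "workunit") {
                // handle element; no local tag buffer needed
            }
        }
    }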
svn path=/trunk/boinc/; revision=23978
are not processed correctly
- remote job submission: debug
- create_work: --rsc_fpops_est etc. should override the template file
svn path=/trunk/boinc/; revision=23942
- don't create result records for uploads and downloads.
Just create a msg_to_client record.
- the scheduler handles file-transfer results specially;
it makes a vector of them, then calls a project-supplied function
handle_file_xfer_results()
- change the interface and implementation of put_file and get_file
- client: write project sched priority in GUI RPC replies,
but not to the state file
svn path=/trunk/boinc/; revision=23857
to mess up input templates containing
<copy_file/> or other attribute tags.
XML_PARSER now contains a member element() for when
you want to copy an element without knowing its structure.
svn path=/trunk/boinc/; revision=23790
(so that they don't have to be 1 element/line)
and also allow optional <input_template> root element
- fix bug in WORKUNIT DB interface
svn path=/trunk/boinc/; revision=23648
fix bug that corrupted WU command lines.
The problem: we were using strcpy(p, p+n) to delete the
first n characters of p.
This is incorrect - the behavior of strcpy() is undefined
if its args overlap.
On some systems (e.g. AQUA's server) it does wacky things.
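One standard remedy, sketched here (the commit doesn't show the exact
replacement): memmove() is defined for overlapping regions.

    #include <cstring>

    // delete the first n characters of NUL-terminated p
    void delete_prefix(char* p, size_t n) {
        // wrong: strcpy(p, p + n);  // undefined, args overlap
        memmove(p, p + n, strlen(p + n) + 1);  // +1 copies the NUL
    }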
svn path=/trunk/boinc/; revision=23007
This allows a user to delete all traces of themselves from a project.
Namely:
- clear fields of user record: email_addr, authenticator,
name, country, postal_code
Note: record is not deleted
- clear the domain_name and last_ip_addr fields of hosts
Note: records are not deleted
- quit team
- delete private messages sent and received
- delete forum posts, subscriptions, and forum prefs
- delete profile and associated images
- server: compile fix
svn path=/trunk/boinc/; revision=23006
URL, size, and MD5 of input files.
This supports "non-local" input files,
i.e. files not present on the project server.
NOTE: as implemented,
this requires a separate input template for each job.
It would be slightly better to let you specify the
URL/size/MD5 in the create_work() call.
From Zoltan Farkas (SZTAKI)
svn path=/trunk/boinc/; revision=22320