(reported by Kevin Reed).
The problem: cache inconsistency.
If there are 2 results for the same WU in shared mem,
and 2 scheduler instances get them around the same time,
they can send them with different app versions.
We already fixed this problem for HR by
1) rereading the relevant WU fields while deciding
whether to send the result, and
2) doing a "careful update" of the WU fields using a WHERE clause
to make sure they weren't modified in the (short) interval
since rereading them.
I fixed the HAV problem in the same way,
and merged the two mechanisms to combine the DB queries.
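For illustration, a minimal sketch of the "careful update" pattern,
with a hypothetical db_query()/db_affected_rows() pair and a
stripped-down WU struct standing in for the real DB layer
(the actual code is in update_wu_on_send()):

    #include <cstdio>

    struct WU { long id; int hr_class; long app_version_id; };

    extern int db_query(const char* q);    // hypothetical: run an SQL statement
    extern long db_affected_rows();        // hypothetical: rows changed by it

    // The WHERE clause repeats the values we just reread, so the update
    // silently does nothing if another scheduler instance changed them
    // in the meantime; we detect that and don't send the result.
    int careful_update_wu(const WU& wu, int new_hr_class, long new_av_id) {
        char query[512];
        snprintf(query, sizeof(query),
            "update workunit set hr_class=%d, app_version_id=%ld "
            "where id=%ld and hr_class=%d and app_version_id=%ld",
            new_hr_class, new_av_id,
            wu.id, wu.hr_class, wu.app_version_id
        );
        int retval = db_query(query);
        if (retval) return retval;
        if (db_affected_rows() == 0) return -1;   // lost the race; don't send
        return 0;
    }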
Also:
- The rereads are done in slow_check() (see below).
- The careful updates are done in update_wu_on_send(),
and this is called *before* doing careful updates on result fields.
That way, if the WU updates fail, we don't have orphaned results.
- already_sent_to_different_platform_careful() (sic)
no longer does DB stuff, so it's merged with
already_send_to_different_hr_class() (better name)
NOTE: slow_check() is used in array scheduling only.
Score-based scheduling uses other code,
in which this bug is not yet fixed.
Locality scheduling doesn't support HR or HAV at all.
This should be unified.
svn path=/trunk/boinc/; revision=24484
Lets you specify, on a per-app basis,
that all instances should be done using the same app version.
This is for validation in the presence of GPUs.
- scheduler: code cleanup
- Instead of adding a bunch of non-DB fields to RESULT,
use a derived class SCHED_DB_RESULT.
- Instead of storing a pointer to BEST_APP_VERSION in RESULT,
store the structure itself.
This simplifies the memory allocation situation
(see the sketch after this list).
- client: condition "Got server request to delete file" messages
on <file_xfer_debug>
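A rough sketch of the RESULT cleanup above (the class names are real,
the members shown are only illustrative):

    struct BEST_APP_VERSION {
        long app_version_id = 0;
        double expected_flops = 0;
        // ...
    };

    // Base class: mirrors the DB row only.
    struct RESULT {
        long id = 0;
        long workunitid = 0;
        // ... other DB fields
    };

    // Scheduler-only state lives in the derived class, so RESULT stays a
    // pure DB image; BEST_APP_VERSION is embedded by value, so there is
    // no separate allocation or pointer lifetime to manage.
    struct SCHED_DB_RESULT : RESULT {
        BEST_APP_VERSION bav;
        bool infeasible = false;    // illustrative non-DB field
    };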
svn path=/trunk/boinc/; revision=23636
My change of 1 Oct ([22440]) required that such jobs
be processed with 64-bit apps,
on the assumption that 32-bit apps have a 2 GB user address space limit.
However, it turns out this limit applies only to Windows
(kernel and user mode share the 4 GB address space; each gets half).
On Linux, the split is 3 GB user / 1 GB kernel.
On Mac OS X, user mode and kernel mode have separate address spaces,
each of them 4 GB.
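A hypothetical sketch of how the per-OS limits above could translate
into a check (the function and parameter names are assumptions, not the
scheduler's actual code):

    #include <cstring>

    // Approximate user address space available to a 32-bit app, in bytes.
    double user_addr_space_32bit(const char* os_name) {
        if (strstr(os_name, "Windows")) return 2e9;  // 2 GB user / 2 GB kernel
        if (strstr(os_name, "Linux"))   return 3e9;  // 3 GB user / 1 GB kernel
        if (strstr(os_name, "Darwin"))  return 4e9;  // separate 4 GB user space
        return 2e9;                                  // conservative default
    }

    // A 32-bit app version can handle the job only if its memory bound fits.
    bool fits_in_32bit(const char* os_name, double rsc_memory_bound) {
        return rsc_memory_bound < user_addr_space_32bit(os_name);
    }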
svn path=/trunk/boinc/; revision=22599
That produced a messed-up query that assigned garbage values to:
host_app_version.turnaround_var
host_app_version.turnaround_q
host_app_version.max_jobs_per_day
host_app_version.consecutive_valid
To repair these:
- set turnaround_var and turnaround_q to zero
- if max_jobs_per_day is outside of
(0..config.daily_result_quota)
set it to config.daily_result_quota
- if consecutive_valid is outside (0..1000), set it to zero
I added a script, html/ops/repair_21812.php, that does this;
if you ran server code between [21181] and [21812], run this script.
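For reference, an illustrative C++ version of the repair logic
(the actual repair is the PHP script above; the struct here just stands
in for a host_app_version row):

    struct HAV_ROW {
        double turnaround_var;
        double turnaround_q;
        int max_jobs_per_day;
        int consecutive_valid;
    };

    void repair_row(HAV_ROW& hav, int daily_result_quota) {
        hav.turnaround_var = 0;
        hav.turnaround_q = 0;
        // outside (0..config.daily_result_quota): reset to the quota
        if (hav.max_jobs_per_day <= 0 || hav.max_jobs_per_day > daily_result_quota) {
            hav.max_jobs_per_day = daily_result_quota;
        }
        // outside (0..1000): reset to zero
        if (hav.consecutive_valid < 0 || hav.consecutive_valid > 1000) {
            hav.consecutive_valid = 0;
        }
    }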
- scheduler/transitioner: add <debug_quota> log flag
- changed the build system to always use -Wall
(if we'd done this before, this bug wouldn't have happened)
- fixed a bunch of other compile warnings
svn path=/trunk/boinc/; revision=21812
"min # of GPU processors" attribute (stored in batch)
and are sent only to hosts whose GPUs have at least this #.
The logical place for this is in the scoring function, JOB::get_score().
I added a clause (#ifdef'd out) that does this.
It rejects the WU if #procs is too small,
otherwise it adds min/actual to the score.
This favors sending jobs that need lots of procs to GPUs that have them.
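An illustrative standalone version of that clause (the parameter names
are assumptions; in the scheduler it sits, #ifdef'd out, in
JOB::get_score()):

    // Returns false if the host's GPU has too few processors for the WU;
    // otherwise bumps the score by min/actual, so jobs that need many
    // processors gravitate toward GPUs that have them.
    bool apply_gpu_proc_score(int wu_min_gpu_procs, int host_gpu_procs, double& score) {
        if (wu_min_gpu_procs == 0) return true;               // no requirement
        if (host_gpu_procs < wu_min_gpu_procs) return false;  // reject the WU
        score += (double)wu_min_gpu_procs / host_gpu_procs;   // in (0, 1]
        return true;
    }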
svn path=/trunk/boinc/; revision=18764
The limit on jobs in progress is now
max_wus_in_progress * NCPUS
+ max_wus_in_progress_gpu * NGPUS
where NCPUS and NGPUS reflect prefs and are capped.
Furthermore: if the client reports plan class for in-progress jobs
(see checkin of 31 May 2009)
then these limits are enforced separately;
i.e. the # of in-progress CPU jobs is <= max_wus_in_progress*NCPUS,
and the # of in-progress GPU jobs is <= max_wus_in_progress_gpu*NGPUS
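A minimal sketch of the limit arithmetic, assuming NCPUS/NGPUS are
already capped per prefs (the struct and function names are
illustrative):

    struct InProgressConfig {
        int max_wus_in_progress;       // per CPU
        int max_wus_in_progress_gpu;   // per GPU
    };

    // Combined limit, used when the client doesn't report plan classes.
    int total_in_progress_limit(const InProgressConfig& c, int ncpus, int ngpus) {
        return c.max_wus_in_progress * ncpus
             + c.max_wus_in_progress_gpu * ngpus;
    }

    // Separate enforcement when the client does report plan classes.
    bool over_in_progress_limit(
        const InProgressConfig& c, int ncpus, int ngpus,
        int cpu_jobs_in_progress, int gpu_jobs_in_progress
    ) {
        return cpu_jobs_in_progress > c.max_wus_in_progress * ncpus
            || gpu_jobs_in_progress > c.max_wus_in_progress_gpu * ngpus;
    }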
- scheduler config: rename <cuda_multiplier> to <gpu_multiplier>
- scheduler: <max_wus_to_send> is now scaled by
(NCPUS + gpu_multiplier*NGPUS)
- scheduler: don't keep scanning array if !work_needed()
- scheduler: moved array-scan logic from sched_send.cpp to sched_array.cpp
- scheduler: don't say "no work available" if jobs are available
but work_needed() is initially false
svn path=/trunk/boinc/; revision=18255