"min # of GPU processors" attribute (stored in batch)
and are sent only to hosts whose GPUs have at least this #.
The logical place for this is in the scoring function, JOB::get_score().
I added a clause (#ifdef'd out) that does this.
It rejects the WU if #procs is too small,
otherwise it adds min/actual to the score.
This favors sending jobs that need lots of procs to GPUs that have them.
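A minimal sketch of that clause, under assumed names
(apply_min_gpu_procs, min_gpu_procs, gpu_proc_count); it is not the
actual JOB::get_score() code:

    #include <cstdio>

    // Sketch only: reject a WU whose GPU-processor requirement exceeds
    // the host's GPU; otherwise add min/actual to the job score, so jobs
    // that need many processors favor GPUs that have them.
    static bool apply_min_gpu_procs(
        int min_gpu_procs,    // WU's "min # of GPU processors" (from batch)
        int gpu_proc_count,   // processors on the host's GPU
        double& score         // job score, updated in place
    ) {
        if (gpu_proc_count < min_gpu_procs) {
            return false;     // reject the WU for this host
        }
        score += (double)min_gpu_procs / (double)gpu_proc_count;
        return true;
    }

    int main() {
        double score = 0;
        if (apply_min_gpu_procs(240, 480, score)) {
            printf("accepted; score bump = %.2f\n", score);   // 0.50
        } else {
            printf("rejected\n");
        }
        return 0;
    }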
svn path=/trunk/boinc/; revision=18764
- scheduler: the limit on jobs in progress is now
max_wus_in_progress * NCPUS
+ max_wus_in_progress_gpu * NGPUS
where NCPUS and NGPUS reflect prefs and are capped.
Furthermore: if the client reports plan class for in-progress jobs
(see checkin of 31 May 2009)
then these limits are enforced separately;
i.e. the # of in-progress CPU jobs is <= max_wus_in_progress*NCPUS,
and the # of in-progress GPU jobs is <= max_wus_in_progress_gpu*NGPUS (see the first sketch after this list).
- scheduler config: rename <cuda_multiplier> to <gpu_multiplier>
- scheduler: <max_wus_to_send> is now scaled by
(NCPUS + gpu_multiplier*NGPUS); see the scaling sketch after this list
- scheduler: don't keep scanning array if !work_needed()
- scheduler: moved array-scan logic from sched_send.cpp to sched_array.cpp
- scheduler: don't say "no work available" if jobs are available
but work_needed() is initially false
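A sketch of the in-progress limits described above; max_wus_in_progress
and max_wus_in_progress_gpu are the config values named in this checkin,
everything else (struct and function names, the stub counts) is assumed:

    #include <cstdio>

    struct Limits {
        int max_wus_in_progress;      // per-CPU limit from scheduler config
        int max_wus_in_progress_gpu;  // per-GPU limit from scheduler config
    };

    // Separate enforcement, used when the client reports plan classes
    // for its in-progress jobs.
    static bool may_send(const Limits& cfg, int ncpus, int ngpus,
                         int cpu_jobs, int gpu_jobs, bool is_gpu_job) {
        if (is_gpu_job) {
            return gpu_jobs < cfg.max_wus_in_progress_gpu * ngpus;
        }
        return cpu_jobs < cfg.max_wus_in_progress * ncpus;
    }

    // Combined limit, used when plan classes are not reported.
    static bool may_send_combined(const Limits& cfg, int ncpus, int ngpus,
                                  int jobs_in_progress) {
        return jobs_in_progress < cfg.max_wus_in_progress * ncpus
                                + cfg.max_wus_in_progress_gpu * ngpus;
    }

    int main() {
        Limits cfg{6, 6};
        // host with 4 capped CPUs and 1 GPU, 20 CPU jobs and 3 GPU jobs
        printf("CPU ok: %d\n", (int)may_send(cfg, 4, 1, 20, 3, false));      // 20 < 24
        printf("GPU ok: %d\n", (int)may_send(cfg, 4, 1, 20, 3, true));       // 3 < 6
        printf("combined ok: %d\n", (int)may_send_combined(cfg, 4, 1, 23));  // 23 < 30
        return 0;
    }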
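And a sketch of the <max_wus_to_send> scaling; the function name is
illustrative, the formula is the one stated above:

    #include <cstdio>

    // Effective per-request cap on jobs sent, after scaling
    // <max_wus_to_send> by the host's processor counts.
    static int effective_max_wus_to_send(int max_wus_to_send, int gpu_multiplier,
                                         int ncpus, int ngpus) {
        return max_wus_to_send * (ncpus + gpu_multiplier * ngpus);
    }

    int main() {
        // e.g. max_wus_to_send=10, gpu_multiplier=8, 4 CPUs, 1 GPU -> 120
        printf("%d\n", effective_max_wus_to_send(10, 8, 4, 1));
        return 0;
    }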
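Finally, a sketch of the array-scan behavior: stop scanning once
work_needed() is false, and only report "no work available" after a full
scan that found nothing. Job, send(), the stubs, and scan_array() are
illustrative names, not the sched_array.cpp code:

    #include <cstdio>
    #include <vector>

    struct Job { bool available; };

    // Stubs standing in for the scheduler's real state (illustrative).
    static int slots_left = 2;
    static bool work_needed() { return slots_left > 0; }
    static void send(const Job&) { slots_left--; }

    enum ScanResult {
        SCAN_SENT_WORK,   // at least one job was sent
        SCAN_NO_JOBS,     // full scan, nothing available: may say "no work available"
        SCAN_NOT_NEEDED   // stopped early because work_needed() was false;
                          // don't claim no work was available
    };

    static ScanResult scan_array(const std::vector<Job>& jobs) {
        bool sent = false;
        for (const Job& job : jobs) {
            if (!work_needed()) {
                // don't keep scanning; since the scan is incomplete,
                // don't conclude that no jobs were available
                return sent ? SCAN_SENT_WORK : SCAN_NOT_NEEDED;
            }
            if (!job.available) continue;
            send(job);
            sent = true;
        }
        return sent ? SCAN_SENT_WORK : SCAN_NO_JOBS;
    }

    int main() {
        std::vector<Job> jobs = {{true}, {false}, {true}, {true}};
        printf("result=%d, slots_left=%d\n", (int)scan_array(jobs), slots_left);
        return 0;
    }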
svn path=/trunk/boinc/; revision=18255