worked in the presence of coprocessors.
The simulator maintained per-project queues of pending jobs.
When a job finished (in the simulation), the simulator would start
one or more jobs from that project's pending queue.
The problem: this could cause "holes" in the scheduling of GPUs,
and produce an erroneous nonzero shortfall for GPUs,
leading to infinite work fetch.
The solution: maintain a separate (per-resource, not per-project)
queue of pending coprocessor jobs.
When a coprocessor job finishes,
start pending jobs from the queue for that resource.
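
A minimal sketch of the queue change in C++ (the structure and function
names here are illustrative assumptions, not the actual simulator code):

    #include <deque>
    #include <map>
    #include <string>
    #include <vector>

    // Sketch only: pending coprocessor jobs are queued per resource type
    // rather than per project.
    struct SIM_JOB {
        int project_id;
        double coproc_usage;   // instances used, e.g. 1.0 = one CUDA device
    };

    // key: resource name ("CUDA"), value: jobs waiting for that resource
    std::map<std::string, std::deque<SIM_JOB>> pending_coproc_jobs;

    // When a simulated coprocessor job finishes, start pending jobs from
    // the queue for that resource (not from the finishing job's project),
    // so no instance is left idle while work for it is still queued.
    void coproc_job_finished(const std::string& rsc, double freed,
                             double& free_instances,
                             std::vector<SIM_JOB>& now_running) {
        free_instances += freed;
        std::deque<SIM_JOB>& q = pending_coproc_jobs[rsc];
        while (!q.empty() && free_instances > 0) {
            free_instances -= q.front().coproc_usage;
            now_running.push_back(q.front());
            q.pop_front();
        }
    }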
Another change: the simulator did strict reservation of coprocessors:
if there were 2 CUDA instances and a 1-instance job was running in the simulation,
it wouldn't start an additional 2-instance job.
This can also cause erroneous nonzero shortfalls.
So instead, schedule coprocessors like CPUs, i.e. saturate them.
This can cause distorted completion time estimates,
but it's better than infinite work fetch.
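
Roughly, the difference between the two policies looks like this
(a hedged sketch; the function names are made up for illustration):

    // Strict reservation (old behavior): a 2-instance job can't start while
    // only 1 of 2 instances is free, leaving a scheduling "hole" that shows
    // up as a bogus nonzero shortfall.
    bool can_start_strict(double usage, double free_instances) {
        return usage <= free_instances;
    }

    // Saturation (new behavior, like CPU scheduling): start a job whenever
    // any instance is free, even if that over-commits the device.  Completion
    // time estimates may be distorted, but the shortfall stays zero while
    // the coprocessor is actually saturated.
    bool can_start_saturated(double free_instances) {
        return free_instances > 0;
    }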
svn path=/trunk/boinc/; revision=17093
(per-project or overall) if there are no pending tasks.
This is needed when there are coproc (i.e. CUDA) jobs;
CPUs may be idle because pending jobs are waiting for active jobs
to release coprocs.
In this situation the CPU idleness should not be counted as shortfall;
otherwise (if there are only coproc jobs) there will always be a shortfall,
and the client will fetch infinite work.
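
A rough sketch of the accounting rule (variable names and the exact
condition are assumptions for illustration):

    // Accumulate CPU shortfall over a simulated interval dt only when idle
    // CPUs coexist with pending tasks that could actually use them.  If the
    // only pending work is waiting on a coprocessor, the idle CPU time is
    // not counted, so a GPU-only workload no longer produces a perpetual
    // CPU shortfall and endless work fetch.
    void accumulate_cpu_shortfall(double dt, int ncpus, int nrunning,
                                  bool pending_cpu_work, double& cpu_shortfall) {
        int idle = ncpus - nrunning;
        if (idle > 0 && pending_cpu_work) {
            cpu_shortfall += dt * idle;
        }
    }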
svn path=/trunk/boinc/; revision=16545