NOT when its first upload starts.
This prevents a scheduler RPC from happening between the two.
This was a bug in [22841]
svn path=/trunk/boinc/; revision=22896
for RR sim's pending-job lists.
Erasing the head of a vector is slow.
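For context: std::vector::erase at the front shifts every remaining element, so repeatedly removing the head costs O(n) per removal, whereas a container such as std::deque can pop its front in O(1). A generic illustration (plain ints, not the simulator's actual job types):

    #include <deque>
    #include <vector>

    // Removing the head of a vector shifts all remaining elements: O(n) each time.
    void drain_vector(std::vector<int>& jobs) {
        while (!jobs.empty()) {
            jobs.erase(jobs.begin());
        }
    }

    // A deque removes its front element in constant time.
    void drain_deque(std::deque<int>& jobs) {
        while (!jobs.empty()) {
            jobs.pop_front();
        }
    }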
- lib: allow GPU peak FLOPS to be specified in XML (for simulator)
- simulator work
- client: old work fetch policy: projects may need enough jobs
for all device instances, not just resource_share*ninst.
E.g. a project that has only CPU jobs in a CPU/GPU client
(see the sketch after this list).
- client: with REC scheduling, don't ask for work for
secondary resources if project has negative priority.
- client: in RR sim, make sure we saturate devices if possible.
Otherwise we may report a shortfall incorrectly.
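A rough numeric sketch of the first point; the numbers and names are made up for illustration and are not the client's actual fields:

    #include <cstdio>

    int main() {
        int ncpus = 4;                 // CPU instances on the host (hypothetical)
        double share_fraction = 0.5;   // this project's fraction of total resource share

        // Old policy: queue enough jobs for resource_share * ninst instances.
        double old_target = share_fraction * ncpus;   // 2 jobs

        // If no other project supplies CPU jobs, this project must be able to
        // cover every CPU instance, or devices sit idle.
        int needed = ncpus;                           // 4 jobs

        printf("old target: %.1f jobs; needed to keep all CPUs busy: %d jobs\n",
               old_target, needed);
        return 0;
    }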
svn path=/trunk/boinc/; revision=22894
- manager: change "add account manager" to "use account manager".
"Add" is confusing, because you can't add multiple account managers
like you add projects.
- client: recognize a few new ATI GPU models
From Robert Kreß
svn path=/trunk/boinc/; revision=22843
(either at startup or during execution)
reset a number of "wait until X" variables;
otherwise we might wait years to contact a project, restart a file xfer, etc.
Notes:
- there is no problem setting clocks forward; things just happen prematurely
- some variables (e.g. task deadlines) are not reset,
because it's not clear what to set them to
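A minimal sketch of the idea, with an invented variable and constant (the client resets several such timers; this is not its actual code):

    // Hypothetical example of clamping one "wait until X" variable after the
    // system clock is found to have moved backwards.
    void clamp_wait_until(double& wait_until, double now, double max_delay) {
        // After a backwards clock jump, a previously reasonable wakeup time
        // can lie years in the future; pull it back to something sane.
        if (wait_until > now + max_delay) {
            wait_until = now + max_delay;
        }
    }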
- sched: remove ati_opencl plan class until we understand what it is
svn path=/trunk/boinc/; revision=22842
and an upload started in the last 5 min, don't fetch work from it.
The goal is to merge the 2 scheduler RPCs
(fetch work, report completed tasks) into a single RPC.
Note: this may result in idleness in some cases.
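A sketch of that deferral rule with invented names (the real client tracks this state differently):

    // Skip work fetch for a project whose upload began within the last
    // 5 minutes, so the work request can be merged with reporting the
    // finished task in one scheduler RPC.
    const double RECENT_UPLOAD_PERIOD = 300;   // 5 minutes, per the note above

    bool defer_work_fetch(bool upload_in_progress, double upload_start_time, double now) {
        return upload_in_progress
            && (now - upload_start_time < RECENT_UPLOAD_PERIOD);
    }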
- scheduler: if client doesn't handle plan class (pre-5.10),
check plan-class app versions anyway,
but only use such a version if it's a single-CPU app.
This allows single-CPU app versions with specific requirements
(like SSE) to be issued to old clients.
From Bernd Machenschalk
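A sketch of the described check; the struct and field names here are stand-ins, not the scheduler's real types:

    struct APP_VERSION_INFO {
        bool has_plan_class;   // version belongs to a plan class (e.g. an SSE-only build)
        double avg_ncpus;      // CPUs the version uses
    };

    bool usable_by_client(const APP_VERSION_INFO& av, bool client_handles_plan_class) {
        if (!av.has_plan_class) return true;
        if (client_handles_plan_class) return true;
        // Pre-5.10 client: still consider plan-class versions, but only
        // send one that behaves like a plain single-CPU app.
        return av.avg_ncpus <= 1;
    }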
svn path=/trunk/boinc/; revision=22841
Old: enforce_schedule() won't run an active job if its
working set size exceeds remaining available RAM.
Problem: there may be a lot of similar jobs.
The client starts one, finds that its working set is too large,
starts the second, and so on.
Solution: if J is an unstarted job,
and there are started jobs using the same app version,
consider J's working set size to be the largest of
the working sets of those jobs.
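A sketch of that rule using simplified stand-in types rather than the client's actual task and result structures:

    #include <algorithm>
    #include <vector>

    struct JOB {
        int app_version_id;
        bool started;
        double working_set_size;   // measured if started, estimated otherwise
    };

    // For an unstarted job, assume its working set will be at least as large as
    // the biggest working set among started jobs of the same app version.
    double effective_wss(const JOB& j, const std::vector<JOB>& jobs) {
        double wss = j.working_set_size;
        if (!j.started) {
            for (const JOB& other : jobs) {
                if (other.started && other.app_version_id == j.app_version_id) {
                    wss = std::max(wss, other.working_set_size);
                }
            }
        }
        return wss;
    }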
- client: fix an apparent bug that could oversaturate
the CPUs with single-thread jobs.
svn path=/trunk/boinc/; revision=22840
because invalid jobs were counted as successful.
How could this bug possibly have survived this long?
From TJM (thanks -- who are you?)
Fixes #1029
svn path=/trunk/boinc/; revision=22839
p_model as well as p_features;
pre-6.x clients report them in p_model.
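A sketch of checking both fields; p_features and p_model are host attributes, but this helper (and its plain substring match) is a simplification:

    #include <cstring>

    // Look for a CPU feature (e.g. "sse") in p_features, and also in p_model,
    // since pre-6.x clients report features there.
    bool host_has_feature(const char* p_features, const char* p_model, const char* feature) {
        if (p_features && strstr(p_features, feature)) return true;
        if (p_model && strstr(p_model, feature)) return true;
        return false;
    }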
- client: fix bug where "reread config file" didn't update
the max log file sizes
svn path=/trunk/boinc/; revision=22838
of the performance of an app version on a host.
It will then stop using that app version,
so the estimate never has a chance to converge to its correct value.
Fix: multiply by a random factor (mean 1, stddev .1)
when comparing the FLOPS estimates of app versions.
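A sketch of that comparison using <random>; the function names are invented and the actual RNG and selection logic differ:

    #include <random>

    // Jitter a FLOPS estimate by a normally distributed factor (mean 1,
    // stddev 0.1) so an underestimated app version still gets chosen
    // occasionally and its estimate has a chance to converge.
    double jittered_flops(double est_flops, std::mt19937& rng) {
        std::normal_distribution<double> factor(1.0, 0.1);
        return est_flops * factor(rng);
    }

    bool prefer_a_over_b(double flops_a, double flops_b, std::mt19937& rng) {
        return jittered_flops(flops_a, rng) > jittered_flops(flops_b, rng);
    }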
svn path=/trunk/boinc/; revision=22837