Don't keep pointers to dynamically allocated COPROC-derived objects;
just have the objects themselves.
Dynamic allocation should be avoided at all costs.
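As a sketch of the intended layout (illustrative names, not the actual
BOINC declarations):

    // Before: COPROCS held pointers to heap-allocated COPROC-derived
    // objects, inviting leaks and shallow-copy bugs.
    // After: hold the derived objects directly as members.
    struct COPROC {
        int count;      // number of devices
        double used;    // instances in use
    };
    struct COPROC_CUDA : public COPROC { /* CUDA-specific fields */ };
    struct COPROC_ATI  : public COPROC { /* ATI-specific fields */ };

    struct COPROCS {
        COPROC_CUDA cuda;   // the objects themselves; no new/delete
        COPROC_ATI ati;
    };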
svn path=/trunk/boinc/; revision=21564
Add more info to "project in-progress job list".
Old: entries included only job name and app plan class;
this was used to resend lost jobs,
and to count the # of CPU and GPU jobs.
But it's not usable, e.g., for per-app in-progress limits.
New: send the client's app versions (including usage info)
and for each in-progress job, which app version it uses.
(This reduces request-message size compared with sending
usage info and app name per job).
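A rough sketch of the new layout, as C++ structures (names are
illustrative, not the actual scheduler-RPC schema):

    #include <string>
    #include <vector>

    // App versions are sent once, each with its usage info...
    struct APP_VERSION_DESC {
        std::string app_name;
        std::string plan_class;
        double avg_ncpus;   // usage info
        double ncudas;
        double natis;
    };

    // ...and each in-progress job just references one by index,
    // instead of repeating app name and usage info per job.
    struct IN_PROGRESS_JOB {
        std::string name;
        int app_version_index;
    };

    struct WORKLOAD_DESC {
        std::vector<APP_VERSION_DESC> app_versions;
        std::vector<IN_PROGRESS_JOB> in_progress_jobs;
    };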
- client and scheduler RPC:
Add more info to "all in-progress job list", and make it optional.
This list is used by schedulers that do deadline checks
using EDF workload simulation.
Old: the list is always sent, and it contains no info
about job resource usage.
New: the list is sent only if the scheduler asked for it
in a previous reply,
and each entry now contains resource usage (CPU, GPUs).
Note: the scheduler's EDF simulator is outdated;
it doesn't know about GPU jobs.
But we may as well get the info in place.
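A hedged sketch of the opt-in handshake (the flag name is an
assumption):

    // The scheduler's previous reply said whether it wants the full list.
    struct PROJECT_STATE {
        bool send_full_workload = false;  // assumed; set from a reply element
    };

    // Client side: include the all-in-progress list only if asked,
    // attaching per-job resource usage for the EDF workload simulation.
    void maybe_write_workload(const PROJECT_STATE& p) {
        if (!p.send_full_workload) return;
        // for each in-progress job J, write:
        //   J.name, J.deadline, J.avg_ncpus, J.ncudas, J.natis
    }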
svn path=/trunk/boinc/; revision=21513
There are now separate flags for
"file_xfers_suspended": don't do file transfers
"network_suspended": don't do any network comm
(scheduler RPCs, RSS fetch, master fetch, etc.)
The policy:
if preferences/settings say no network
(quota exceeded, no-network mode, user active, time, excl. app)
then:
file_xfers_suspended = true
if (no recent network-related RPC) network_suspended = true
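The same policy as a compact C++ sketch (variable names assumed):

    bool file_xfers_suspended = false;
    bool network_suspended = false;

    // prefs_deny_network: quota exceeded, no-network mode, user active,
    // time-of-day restriction, or an exclusive app is running.
    // recent_network_rpc: a network-related RPC was made recently.
    void apply_network_policy(bool prefs_deny_network, bool recent_network_rpc) {
        file_xfers_suspended = false;
        network_suspended = false;
        if (prefs_deny_network) {
            file_xfers_suspended = true;
            if (!recent_network_rpc) network_suspended = true;
        }
    }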
- user web: code cleanup for project prefs
svn path=/trunk/boinc/; revision=21299
and is the same, don't copy (it might not be writeable)
- client: change "result" to "task" in user-visible messages
svn path=/trunk/boinc/; revision=20785
Old: STD is based entirely on CPU time.
So a GPU project, whose app uses only a fraction
of a CPU, accrues positive debt.
This is OK if the project has only GPU apps,
since STD is not (currently) used for GPU scheduling.
But some projects have both CPU and GPU apps.
New: STD is based on total processing.
It has terms for each resource type.
The notion of "runnable resource share" is specific to a type.
Note: the notion of "resource share fraction" appears in
a couple of other places:
- it's passed to apps in app_init_data.xml
- it's passed in scheduler requests.
It should be broken down by resource type in these cases too.
Note to self: do this later.
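A minimal sketch of per-resource debt terms, under assumed names (the
real client state is more involved):

    #include <map>
    #include <string>

    // STD now has one term per resource type ("cpu", "cuda", "ati").
    struct PROJECT_DEBT {
        std::map<std::string, double> std_by_rsc;
    };

    // Debt grows with the project's runnable resource share for that
    // type, and shrinks with the processing the project actually got.
    void update_std(PROJECT_DEBT& p, const std::string& rsc,
                    double runnable_share_frac, double dt, double work_done) {
        p.std_by_rsc[rsc] += runnable_share_frac * dt - work_done;
    }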
svn path=/trunk/boinc/; revision=19762
didn't work due to a typo.
- client: if <ncpus> is present in cc_config.xml,
we're supposed to act as if there were that many physical CPUs.
In particular, we need to set host_info.p_ncpus to that value,
since that's what is reported in scheduler requests.
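A sketch of the override (the <ncpus> element is real; the code around
it is illustrative):

    struct HOST_INFO { int p_ncpus; };

    // After parsing cc_config.xml, apply the CPU-count override so that
    // local scheduling and scheduler requests see the same value.
    void apply_ncpus_override(int config_ncpus, HOST_INFO& host_info) {
        if (config_ncpus > 0) {
            host_info.p_ncpus = config_ncpus;  // reported in scheduler requests
        }
    }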
svn path=/trunk/boinc/; revision=19522
1) if an APP_VERSION is missing a coprocessor,
don't delete it and its files.
(If the coprocessor returns, we won't need to re-download)
2) if a RESULT uses an app version that is missing a coprocessor,
abort it (rather than deleting it).
The client will report the result on the next scheduler RPC,
and the server will make a new instance.
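In sketch form (simplified; abort_result() here is a stand-in for the
client's real abort path):

    struct APP_VERSION { bool missing_coproc = false; };
    struct RESULT { bool aborted = false; APP_VERSION* avp = nullptr; };

    void abort_result(RESULT& r) { r.aborted = true; }  // stand-in

    // Case 2: abort rather than delete, so the result is reported on
    // the next scheduler RPC and the server makes a new instance.
    // Case 1 is implicit: app versions with missing_coproc set are
    // skipped by garbage collection, keeping their files.
    void check_result(RESULT& r) {
        if (r.avp && r.avp->missing_coproc) abort_result(r);
    }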
svn path=/trunk/boinc/; revision=19235
Add <cuda_backoff> and <ati_backoff> elements to scheduler reply.
These specify backoffs for the resource types,
overriding the existing backoff mechanism.
Projects can supply these if they don't have apps of a particular type
and don't want to get periodic requests for them.
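A minimal client-side sketch of honoring such an element (names
assumed):

    // An explicit per-resource backoff from the reply overrides the
    // client's own exponential backoff state for that resource.
    struct RSC_BACKOFF { double until = 0; };  // when requests may resume

    void apply_reply_backoff(RSC_BACKOFF& b, double backoff_secs, double now) {
        if (backoff_secs > 0) b.until = now + backoff_secs;
    }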
svn path=/trunk/boinc/; revision=19059
If you have 2 CPUs and a 1-day job in EDF mode,
the busy time should be zero, not 0.5 days.
Add a class BUSY_TIME_ESTIMATOR that makes a somewhat better
(though still fairly crude) estimate.
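The idea, as a sketch (not the actual BUSY_TIME_ESTIMATOR): track
per-instance busy time and report the minimum, so a free instance
yields zero.

    #include <algorithm>
    #include <vector>

    class BUSY_TIME_ESTIMATOR {
        std::vector<double> busy;   // per-instance busy time, in days
    public:
        explicit BUSY_TIME_ESTIMATOR(int ninstances) : busy(ninstances, 0) {}
        // Greedily place each job on the least-busy instance.
        void add_job(double duration) {
            *std::min_element(busy.begin(), busy.end()) += duration;
        }
        // A new job waits only for the least-busy instance.
        double busy_time() const {
            return *std::min_element(busy.begin(), busy.end());
        }
    };

With 2 CPUs and one 1-day job this yields zero, matching the example
above.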
svn path=/trunk/boinc/; revision=19003
with a GPU request if project is anonymous platform
AND it has an app for that GPU type
- client: report overall work request as well as per-resource-type requests
svn path=/trunk/boinc/; revision=18994
- client: set the overall work request
to the max of the requests for different resource types.
Otherwise projects with old schedulers won't send us work.
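In sketch form:

    #include <algorithm>

    // Old schedulers look only at the overall request, so make it
    // cover the largest per-resource request.
    double overall_work_req(double cpu_req, double cuda_req, double ati_req) {
        return std::max({cpu_req, cuda_req, ati_req});
    }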
svn path=/trunk/boinc/; revision=18945
Support apps that use a fractional number of GPUs,
e.g. the Milkyway@home ATI app, of which we can typically run
2 or 3 instances at once on a GPU.
Changes include:
- In APP_VERSION, don't use a COPROCS to represent the GPU
requirements; just use doubles ncudas and natis.
- sufficient_coprocs() etc. are no longer members of COPROCS
- in HOST_USAGE, ncudas and natis are doubles
- in scheduler request, req_instances is now a double
This checkin doesn't include the job scheduling logic,
i.e. assigning jobs to GPUs. That will follow.
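The shape of the change, sketched from the items above (simplified):

    #include <string>

    // GPU requirements are plain doubles, so an app version can declare
    // e.g. ncudas = 0.5, meaning two instances can share one GPU.
    struct HOST_USAGE {
        double avg_ncpus;
        double ncudas;   // was an integer COPROC count
        double natis;
    };

    // Enough free GPU capacity for one more instance?
    bool sufficient_coprocs(const HOST_USAGE& hu,
                            double cuda_free, double ati_free) {
        return hu.ncudas <= cuda_free && hu.natis <= ati_free;
    }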
svn path=/trunk/boinc/; revision=18868
We need to estimate 2 different delays for each resource type:
1) "saturated time": the time the resource will be fully utilized
(new name for the old "estimated delay").
This is used to compute work requests.
2) "busy time": the time a new job would have to wait
to start using this resource.
This is passed to the scheduler and used for a crude deadline check.
Note: this is ill-defined; a single number doesn't suffice.
But as a very rough estimate, I'll use the sum of
(J.duration * J.ninstances)/ninstances
over all jobs that miss their deadline under RR sim.
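As a sketch (RR-sim details omitted; the set of deadline-missed jobs
is an input here):

    #include <vector>

    struct SIM_JOB {
        double duration;    // remaining runtime under RR simulation
        double ninstances;  // instances of this resource the job uses
    };

    // Rough busy time: total deadline-missed work for this resource,
    // spread across all its instances.
    double busy_time(const std::vector<SIM_JOB>& deadline_misses,
                     double ninstances) {
        double sum = 0;
        for (const SIM_JOB& j : deadline_misses) {
            sum += j.duration * j.ninstances;
        }
        return sum / ninstances;
    }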
svn path=/trunk/boinc/; revision=18629
- client: show times correctly in rr_sim debug msgs
- client: in "requesting new tasks" msg,
say what resources we're requesting (if there's more than CPU)
- client: estimated delay was possibly being calculated incorrectly
because of roundoff error
svn path=/trunk/boinc/; revision=18269
- client: Haiku support (from Urias McCullough)
- client: include plan class in other_result list in sched request
(for resource-specific jobs-in-progress limit)
svn path=/trunk/boinc/; revision=18250
Instead, write the info into a file in the slot directory,
and check for these files on startup.
This should reduce the overhead of state-file writing
on machines with lots of cores.
There will still be a flurry of writes each time a job finishes,
but reducing that overhead would be a larger job.
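A sketch of the per-slot bookkeeping (file name and format are
assumptions):

    #include <cstdio>
    #include <string>

    // Record checkpoint info in the job's slot directory instead of
    // rewriting the whole client state file.
    void write_slot_checkpoint(int slot, double checkpoint_cpu_time) {
        std::string path = "slots/" + std::to_string(slot) + "/task_state.xml";
        if (std::FILE* f = std::fopen(path.c_str(), "w")) {
            std::fprintf(f, "<checkpoint_cpu_time>%f</checkpoint_cpu_time>\n",
                         checkpoint_cpu_time);
            std::fclose(f);
        }
    }
    // On startup, the client scans slot directories for these files and
    // merges the info into its in-memory state.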
- client: make sure we write the state file after a failed RPC
svn path=/trunk/boinc/; revision=17814
CPU time is visible in task Properties.
- Manager: in task Properties, show final CPU and elapsed times
if job is finished
- client: honor backoff for account-manager-requested scheduler RPCs
- client: keep track of final elapsed time for results
- GUI RPC: report final elapsed time
svn path=/trunk/boinc/; revision=17588
app versions in scheduler reply
- client: when reporting anonymous platform apps in sched request,
don't include <file_info>s (not relevant to server)
svn path=/trunk/boinc/; revision=17507