Change these to the GitHub wiki.
Web: change a couple of links from Trac to the GitHub wiki.
text_transform.inc: the [github]wiki:xxx[/github] tag linked
to a non-existent boinc-dev-doc repo.
Link to the GitHub wiki instead.
The round-robin simulation would stop simulating jobs for a project
once a max concurrent limit (app or project) was reached.
As a result it would decide there was a shortfall,
and keep requesting work up to the limit of 1000 jobs.
To fix this:
1) keep simulating a project after an MCL is reached
2) for each (project, resource) pair, keep track of the latest
simulation time T when an MCL was reached.
3) for such a project, don't request work for a resource if
T > now + work buf size
This allows us, e.g., to request GPU jobs from a project
even if its CPU jobs (taking MCL into account) fill the buffer.
This works in the simulation case that showed the problem (#192).
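A minimal sketch of the check in step 3, assuming a hypothetical
per-(project, resource) field (max_concurrent_t is illustrative,
not the actual BOINC member):

    // Latest simulation time at which a max concurrent limit was reached.
    struct RSC_PROJECT_INFO {
        double max_concurrent_t = 0;
    };

    // Don't fetch work of this type from the project if, in the RR sim,
    // its MCL-limited jobs already cover the whole work buffer period.
    bool mcl_blocks_fetch(const RSC_PROJECT_INFO& rpi, double now, double work_buf_sec) {
        return rpi.max_concurrent_t > now + work_buf_sec;
    }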
Also: add a bit more logging, and improve names
Instead: declare a static const instance (whose data members are zero)
and copy that.
This avoids the error-prone need to assign each member,
and it works even if the class has a virtual function table.
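A hedged sketch of the pattern (the struct and its members are
illustrative, not an actual BOINC class):

    struct SCHED_STATE {
        double debt = 0;
        int njobs = 0;
        virtual ~SCHED_STATE() {}

        // Reset all data members without memset() (which would clobber
        // the vtable pointer) and without assigning each member by hand.
        void clear() {
            static const SCHED_STATE zero{};  // value-initialized: all zero
            *this = zero;
        }
    };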
For projects P with MC restrictions, during RR simulation,
we keep track of the max # of instances used by P,
subject to the restrictions, and use that to calculate its "MC shortfall".
Problem: if P doesn't have any jobs, the max # instances is zero,
so MC shortfall is zero, so we erroneously don't request work for P.
Solution: initialize max # of instances to the min of the restrictions;
we'll always be able to use at least that many instances.
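A minimal sketch of that initialization (names are illustrative;
0 means "no limit"):

    #include <algorithm>

    // app_mcl and project_mcl are the app- and project-level
    // max concurrent limits; ninstances is the device count.
    int initial_mc_max_used(int app_mcl, int project_mcl, int ninstances) {
        int m = ninstances;
        if (app_mcl) m = std::min(m, app_mcl);
        if (project_mcl) m = std::min(m, project_mcl);
        return m;   // the project can always use at least this many
    }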
Synopsis: max concurrent was being enforced in the last stage of CPU sched,
but not in earlier stages, or in work fetch.
This caused starvation in some cases.
Fix this by modeling max concurrent in RR sim and make_run_list().
- CPU sched: model and enforce max concurrent limits when building the run list
for CPU jobs; otherwise the list has jobs we can't actually run
(see the sketch after this list)
- RR simulation: model and enforce max concurrent limits
- RR sim: fix bug in calculation of # idle instances
- RR sim: model unavailability of GPUs
e.g. if we can't run GPU jobs we can potentially run more CPU jobs
- work fetch: if a project is at a max concurrent limit,
don't fetch work from it.
The jobs we'd get might not be runnable.
NOTE: we currently provide max concurrent limits
at both project and app level.
The problem with app-level limits is that an app can have versions
that use different resources.
It would be better to have limits at the resource level instead.
- In many cases (e.g. job completion), CPU sched and work fetch are done
back to back, and each of them does an RR simulation.
We only need to do the simulation once (efficiency).
- Show max concurrent settings in startup messages
- Make max runnable jobs (1000) into a #define
- Fix removal of "can't fetch work" notices
- Make "can't fetch work" notices resource-specific;
the reasons may differ between resources
- Get rid of WF_DEBUG macro;
just print everything if log_flags.work_fetch_debug is set.
- Change project- and resource-level work-fetch reason codes
(DONT_FETCH_PREFS etc.) from #defines to enums,
and give them prefixes RSC_REASON and PROJECT_REASON
- Fix a bug where the return value of compute_project_reason() wasn't
actually being stored in project.work_fetch.
- Add work-fetch reason MAX_CONCURRENT (project is at max concurrent limit)
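A hedged sketch of the run-list change above; only the project-level
limit is shown, and the structures are illustrative, not the actual
BOINC classes:

    #include <vector>

    struct PROJECT {
        int max_concurrent;   // 0 = no limit
        int n_concurrent;     // jobs added so far in this pass
    };
    struct RESULT {
        PROJECT* project;
    };

    void make_run_list(std::vector<RESULT*>& runnable, std::vector<RESULT*>& run_list) {
        for (RESULT* rp : runnable) {
            PROJECT* p = rp->project;
            // Skip jobs from projects already at their max concurrent
            // limit; otherwise the list has jobs we can't actually run.
            if (p->max_concurrent && p->n_concurrent >= p->max_concurrent) continue;
            p->n_concurrent++;
            run_list.push_back(rp);
        }
    }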
The logic for backup projects (fetch for a resource only if it has an idle instance)
was skipped in the case of GPU exclusions in my checkin of 10/10/2014.
I'm not sure why I did this, and it allows incorrect work fetch
in some cases, so I'm taking it out.
A while back I changed the job sched and work fetch policies to use
REC-based project priority.
The work fetch logic sorts the project list (in CLIENT_STATE::projects)
by descending priority.
This causes two problems:
- If you have a lot of projects, it's hard to find a particular one
in the event log, e.g. in work_fetch_debug output.
- In the manager's Statistics tab, the selected project can change
unexpectedly since we identify it by array index,
and the array order may change.
Solution: sort CLIENT_STATE::projects alphabetically (case insensitive).
In WORK_FETCH, copy this array to a separate array,
that is then sorted by decreasing priority.
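A hedged sketch of the two orderings (types and helpers are
illustrative; strcasecmp() is POSIX, use _stricmp() on Windows):

    #include <algorithm>
    #include <strings.h>
    #include <vector>

    struct PROJECT {
        const char* name;
        double sched_priority;
    };

    static bool by_name(const PROJECT* a, const PROJECT* b) {
        return strcasecmp(a->name, b->name) < 0;      // case-insensitive
    }
    static bool by_priority(const PROJECT* a, const PROJECT* b) {
        return a->sched_priority > b->sched_priority; // descending
    }

    void sort_projects(std::vector<PROJECT*>& projects, std::vector<PROJECT*>& wf_projects) {
        // CLIENT_STATE::projects: stable alphabetical order, so event-log
        // output and the manager's Statistics tab stay predictable.
        std::sort(projects.begin(), projects.end(), by_name);
        // Work fetch keeps its own copy, sorted by decreasing priority.
        wf_projects = projects;
        std::sort(wf_projects.begin(), wf_projects.end(), by_priority);
    }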
We were using an int bitmap to store flags for the instances of a coproc.
Furthermore, because of the use of 2^n-1 to generate a bitmap of 1s,
the limit on instances was 31.
Use a long long for the bitmap instead, and don't use 2^n-1.
This increases the limit to 64.
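A sketch of the change (names illustrative):

    // Old: int usage = (1 << n) - 1;   // overflows for n >= 31
    // New: 64-bit bitmap, mask built bit by bit.
    typedef long long instance_bitmap_t;

    instance_bitmap_t all_instances_mask(int n) {   // n <= 64
        instance_bitmap_t mask = 0;
        for (int i = 0; i < n; i++) {
            mask |= (instance_bitmap_t)(1ULL << i);
        }
        return mask;
    }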
In work fetch setup, we were computing rsc_project_reason
before doing the round-robin simulation.
It needs to be done after, because it uses the # of idle devices,
which is computed by the simulation.
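The corrected ordering, as a hedged sketch (names illustrative):

    #include <vector>

    struct PROJECT { /* ... */ };
    static std::vector<PROJECT*> projects;
    static int n_idle_devices = 0;

    void rr_simulation() {
        // ... simulate execution; as a side effect, compute n_idle_devices
    }
    void compute_rsc_project_reason(PROJECT*) {
        // ... reads n_idle_devices, so it must run after rr_simulation()
    }

    void work_fetch_setup() {
        rr_simulation();                    // first: computes idle devices
        for (PROJECT* p : projects) {
            compute_rsc_project_reason(p);  // then: per-project reasons
        }
    }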
- Remove code that tries to keep track of available GPU RAM
and defer jobs that don't fit.
This never worked: it relied on project estimates of RAM usage.
It's been replaced by having the app do a temporary exit
if allocation fails.
- Move the logic for checking for deferred jobs from CPU sched
to work fetch.
- Rename rsc_defer_sched to has_deferred_job,
and move it from PROJECT to RSC_PROJECT_WORK_FETCH
- tweak work_fetch_debug output
The logic for deciding whether to fetch work for a project
or a (project, resource type) pair
was scattered among several functions, with confusing names.
Consolidate this logic, and use consistent names.
We weren't copying the request fields from RSC_WORK_FETCH to COPROC.
Do this, and clean up the code a bit.
Note: the arrays that parallel the COPROCS::coprocs array
are a bit of a kludge; that stuff logically belongs in COPROC.
But it's specific to the client, so I can't put it there.
Maybe I could do something fancy with derived classes, not sure.
My commit of Feb 7 caused work fetch to project P
to be deferred for up to 5 min if an upload to P is active,
even if some instances are idle.
This was to deal with a case where the idleness was caused
by a jobs-in-progress limit imposed by P,
and work requests led to long backoffs.
However, this can cause instances to be idle unnecessarily.
I changed things so that, if instances are idle,
a work fetch can happen even during upload.
But only one such fetch will be done.
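A hedged sketch of the rule; the fields are illustrative, and the
reset-on-upload-completion detail is an assumption of the sketch:

    struct PROJECT {
        bool uploading = false;         // an upload to P is active
        bool fetched_while_uploading = false;
    };

    bool ok_to_fetch(PROJECT& p, bool have_idle_instances) {
        if (!p.uploading) {
            p.fetched_while_uploading = false;  // reset once uploads finish
            return true;
        }
        // During an upload, allow a single fetch, and only if instances
        // would otherwise sit idle.
        if (have_idle_instances && !p.fetched_while_uploading) {
            p.fetched_while_uploading = true;
            return true;
        }
        return false;   // otherwise defer, as before
    }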
The basic problem: the way we assign GPU instances when creating
the "run list" is slightly different from the way we assign them
when we actually run the jobs;
the latter assigns a running job to the instance it's using,
but the former doesn't.
Solution (kludge): when building the run list,
don't reserve instances for currently running jobs.
This will result in more jobs in the run list, and avoid starvation.
For efficiency, do this only if there are exclusions for this type.
Comment: this is yet another complexity that would be eliminated
if GPU instances were modeled separately.
I wish I had time to do that.
- client emulator: change default latency bound from 1 day to 10 days
This gives you a way to simulate the effects of app_config.xml
- client: piggyback requests for resources even if we're backed off from them
- client: change resource backoff logic (see the sketch after this list)
Old: if we requested work and didn't get any,
back off from resources for which we requested work
New: for each resource type T:
if we requested work for T and didn't get any, back off from T
Also, don't back off if we're already backed off
(i.e. if this is a piggyback request)
Also, only back off if the RPC was due to an automatic
and potentially rapid source
(namely: work fetch, result report, trickle up)
- client: fix small work fetch bug
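A hedged sketch of the resource backoff change above; all names are
illustrative, not the actual BOINC code:

    enum { NRSC = 3 };   // e.g. CPU, NVIDIA GPU, AMD GPU

    struct RSC_BACKOFF {
        double interval = 0;
        void incr()  { interval = interval ? interval * 2 : 60; }
        void reset() { interval = 0; }
    };
    static RSC_BACKOFF backoff[NRSC];

    // requested[t]: we asked for work of type t; got[t]: the reply had some.
    // piggybacked[t]: we were already backed off from t and merely
    // piggybacked the request.  'automatic' is true if the RPC came from
    // work fetch, result report, or trickle up.
    void update_backoffs(
        const bool requested[], const bool got[], const bool piggybacked[],
        bool automatic
    ) {
        for (int t = 0; t < NRSC; t++) {
            if (!requested[t]) continue;
            if (got[t]) {
                backoff[t].reset();
            } else if (automatic && !piggybacked[t]) {
                backoff[t].incr();   // asked for type t, got none: back off
            }
        }
    }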
by Jacob Klein.
The new policy is roughly as follows:
- find the highest-priority project P that is allowed
to fetch work for a resource below buf_min
- Ask P for work for all resources R below buf_max
for which it's allowed to fetch work,
unless there's a higher-priority project allowed
to request work for R.
If we're going to do an RPC to P for reasons other than work fetch,
the policy is:
- for each resource R for which P is the highest-priority project
allowed to fetch work, and R is below buf_max,
request work for R.
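A hedged pseudocode rendering of the policy above; the helpers are
stubs standing in for the real buffer and permission checks, not
actual BOINC functions:

    #include <vector>

    enum { NRSC = 3 };   // e.g. CPU, NVIDIA GPU, AMD GPU
    struct PROJECT { double priority; };

    bool below_buf_min(int) { return true; }        // stub
    bool below_buf_max(int) { return true; }        // stub
    bool can_fetch(PROJECT*, int) { return true; }  // stub

    // by_prio: projects in descending priority order.
    PROJECT* find_fetch_project(const std::vector<PROJECT*>& by_prio) {
        for (PROJECT* p : by_prio) {
            for (int r = 0; r < NRSC; r++) {
                if (below_buf_min(r) && can_fetch(p, r)) return p;
            }
        }
        return nullptr;
    }

    // Resources to request from the chosen project P.
    std::vector<int> resources_to_request(const std::vector<PROJECT*>& by_prio, PROJECT* P) {
        std::vector<int> req;
        for (int r = 0; r < NRSC; r++) {
            if (!below_buf_max(r) || !can_fetch(P, r)) continue;
            // Skip R if a higher-priority project may fetch it.
            bool higher = false;
            for (PROJECT* p : by_prio) {
                if (p == P) break;
                if (can_fetch(p, r)) { higher = true; break; }
            }
            if (!higher) req.push_back(r);
        }
        return req;
    }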
- scale amount of work request by
(# non-excluded instances)/#instances
- change policy:
old: don't fetch work if #jobs > #non-excluded instances
new: don't fetch work if # of instance-seconds used in RR sim
> work_buf_min * (# non-excluded instances)/#instances
Note: this fixes a major problem (starvation)
with project-level GPU exclusion.
However, project-level GPU exclusion interferes with most of
the client's scheduling policies.
E.g., round-robin simulation doesn't take GPU exclusion into account,
and the resulting completion estimates and device shortfalls
can be wrong by an order of magnitude.
The only way I can see to fix this would be to model each
GPU instance as a separate resource,
and to associate each job with a particular GPU instance.
This would be a sweeping change in both client and server.
initial work request to a project
- client: put some casts to double in NVIDIA detect code.
Shouldn't make any difference.
- volunteer storage: truncate file to right size after retrieval
svn path=/trunk/boinc/; revision=26051
allow it to fetch work of that type if the # of runnable
jobs is <= the # of non-excluded instances (rather than 0).
svn path=/trunk/boinc/; revision=26045
for a reason other than work fetch,
and we're deciding whether to piggyback a work request,
skip the checks for hysteresis (buffer < min)
and for per-resource backoff time.
These checks are there only to limit the rate of RPCs,
which is not relevant since we're doing one anyway.
This fixes a bug affecting projects with sporadic jobs that specify
a next_rpc_delay to ensure regular polling from clients:
when these polls occur, they should request work regardless of backoff.
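A hedged sketch of the decision (names illustrative):

    // work_fetch_rpc: this RPC exists only to fetch work.
    bool should_request_work(
        bool work_fetch_rpc,
        double buf, double buf_min,      // buffered work vs. min buffer
        double backoff_until, double now
    ) {
        if (work_fetch_rpc) {
            // Normal work fetch: the rate-limiting checks apply.
            if (buf >= buf_min) return false;        // hysteresis
            if (now < backoff_until) return false;   // per-resource backoff
        }
        // RPC for another reason (e.g. a next_rpc_delay poll): we're
        // doing it anyway, so skip both checks and piggyback the request.
        return true;
    }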
svn path=/trunk/boinc/; revision=26002
and there's a simple reason
(e.g. the project is suspended, no-new-tasks, downloads stalled, etc.),
show it in the event log.
If the reason is more complex, don't try to explain it.
svn path=/trunk/boinc/; revision=25827
if we're making a scheduler RPC to a project for reasons
other than work fetch,
and we're deciding whether to ask for work, ignore hysteresis;
i.e. ask for work even if we're above the min buffer
(idea from John McLeod).
svn path=/trunk/boinc/; revision=25291
work fetch (e.g. to report completed jobs)
only request work if it's the project we would have chosen
if we were fetching work.
- client: the way in which project priorities were adjusted
in work fetch to reflect currently queued work was wrong.
- client: fix bug in the way project priorities are adjusted
in RR simulator
- client emulator: if there are results in the state file
with states DOWNLOADING or UPLOADING,
change them to DOWNLOADED or UPLOADED.
Otherwise they're stuck.
svn path=/trunk/boinc/; revision=24737
reduce its runtime from O(N^2) to O(N),
where N is the number of runnable jobs
(which can be in the thousands).
This will make the client emulator run a lot faster,
and will reduce the client CPU overhead a bit.
- API: change boinc_get_opencl_ids() so that it returns
a BOINC error code (< -100) if the app_init.xml is
missing or bad (i.e. we're running standalone),
and an OpenCL error code (> -100) if an OpenCL call failed.
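A hedged usage sketch of that convention (boinc_get_opencl_ids() is
the BOINC API call; the header name and two-argument form are
assumptions that may differ by version):

    #include "boinc_opencl.h"

    int get_opencl_device(cl_device_id& device, cl_platform_id& platform) {
        int retval = boinc_get_opencl_ids(&device, &platform);
        if (retval < -100) {
            // BOINC error: init file missing/bad (running standalone)
        } else if (retval) {
            // OpenCL error code from a failed OpenCL call
        }
        return retval;
    }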
svn path=/trunk/boinc/; revision=24469