CPU scheduling policy
Of the results that are runnable, which ones should be executed? BOINC will generally execute NCPUS results at once, where NCPUS is the minimum of the number of CPUs (counting hyperthreaded virtual CPUs) and the user's 'max_cpus' general preference. A sketch of this calculation appears after these definitions.
CPU scheduling enforcement
When should the schedule actually be enforced (by preempting running applications)? It is sometimes preferable to delay preempting an application until it checkpoints.
Work-fetch policy
When should the core client ask a project for more work, which project should it ask, and how much work should it ask for?
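
As a concrete illustration of the NCPUS calculation mentioned above, here is a minimal C++ sketch; the function and parameter names are assumptions, not the actual BOINC client API, and the floor of one CPU is added for robustness.

    #include <algorithm>

    // Number of results to execute at once: the lesser of the CPUs detected
    // (counting hyperthreaded virtual CPUs) and the user's 'max_cpus'
    // preference. Names are illustrative; the floor of 1 is an assumption.
    int effective_ncpus(int cpus_detected, int max_cpus_pref) {
        return std::max(1, std::min(cpus_detected, max_cpus_pref));
    }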

The goals of the CPU scheduler and work-fetch policies are (in descending priority):

In previous versions of BOINC, the core client attempted to maintain at least one result for each attached project, and would do weighted round-robin CPU scheduling among all projects. In some scenarios (any combination of slow computer, lots of projects, and tight deadlines) a computer could miss the deadlines of all its results. The new policies solve this problem as follows:

Concepts and terms

Wall CPU time

A result's wall CPU time is the amount of wall-clock time its process has been runnable at the OS level. The actual CPU time may be less than this, e.g. if the process does a lot of paging, or if other (non-BOINC) compute-intensive processes run at the same time.

BOINC uses wall CPU time as the measure of how much resource has been given to each project. Why not use actual CPU time instead?

Result states

R is runnable if

Project states

P is runnable if

P is downloading if

P is fetchable (i.e. the work-fetch policy allows work to be fetched from it) if

P is latency-limited if

This means that P has work available, but didn't send any because the work's deadlines couldn't be met given the existing work queue.

P is potentially runnable if

This means that, to the best of the client's knowledge, it could do work for P if it wanted to.

Debt

Intuitively, a project's 'debt' is how much work is owed to it, relative to other projects. BOINC uses two types of debt; each is defined for a set S of projects, and each is recalculated periodically.

Short-term debt is used by the CPU scheduler. It is adjusted over the set of runnable projects, and is normalized so that the minimum short-term debt is zero and the maximum short-term debt is no greater than 86,400 seconds (one day).

Long-term debt is used by the work-fetch policy. It is defined for all projects, and adjusted over the set of potentially runnable projects. It is normalized so that the average long-term debt, over all projects, is zero.
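
The recalculation steps themselves are not spelled out here; the following C++ sketch shows one plausible rendering of the debt update and the two normalizations just described. The update rule (wall CPU time owed minus wall CPU time received) and all type and field names are assumptions for illustration, not the actual BOINC client code.

    #include <algorithm>
    #include <vector>

    // Illustrative debt bookkeeping; fields and the update rule are assumptions.
    struct Project {
        double resource_share;
        double short_term_debt = 0;
        double long_term_debt = 0;
        double wall_cpu_this_period = 0;  // wall CPU time received this period
    };

    const double SECONDS_PER_DAY = 86400;

    // Adjust short-term debt over the runnable projects, then normalize so
    // the minimum is zero and the maximum is no greater than one day.
    void update_short_term_debt(std::vector<Project*>& runnable) {
        if (runnable.empty()) return;
        double total_share = 0, total_wall = 0;
        for (auto* p : runnable) {
            total_share += p->resource_share;
            total_wall += p->wall_cpu_this_period;
        }
        if (total_share == 0) return;
        for (auto* p : runnable) {
            // Assumed rule: debt grows by the wall CPU time the project was
            // owed (per its fractional share) minus what it actually received.
            double owed = (p->resource_share / total_share) * total_wall;
            p->short_term_debt += owed - p->wall_cpu_this_period;
        }
        double min_debt = runnable[0]->short_term_debt;
        for (auto* p : runnable)
            min_debt = std::min(min_debt, p->short_term_debt);
        for (auto* p : runnable) {
            p->short_term_debt -= min_debt;  // minimum becomes zero
            p->short_term_debt = std::min(p->short_term_debt, SECONDS_PER_DAY);
        }
    }

    // Normalize long-term debt so it averages to zero over the potentially
    // runnable projects.
    void normalize_long_term_debt(std::vector<Project*>& potentially_runnable) {
        if (potentially_runnable.empty()) return;
        double sum = 0;
        for (auto* p : potentially_runnable) sum += p->long_term_debt;
        double avg = sum / potentially_runnable.size();
        for (auto* p : potentially_runnable) p->long_term_debt -= avg;
    }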

CPU scheduling policy

The CPU scheduler uses an earliest-deadline-first (EDF) policy for results that are in danger of missing their deadline, and weighted round-robin among other projects if additional CPUs exist. This allows the client to meet deadlines that would otherwise be missed, while honoring resource shares over the long term.

The scheduler starts by doing a simulation of round-robin scheduling applied to the current work queue. This produces the following outputs, which are used by the policies below:

    deadlines_missed(P): the number of P's results projected to miss their deadlines;
    work_before_minq(P): the amount of P's work projected to run within the next min_queue seconds;
    total_work_before_minq: the sum of work_before_minq(P) over all projects.
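
As a rough illustration, here is a deliberately simplified C++ sketch of such a simulation. It assumes each project's results run one after another at the project's steady-state share of the CPUs; the client's actual simulation is more detailed, and all names here are illustrative.

    #include <algorithm>
    #include <vector>

    struct SimResult { double work_left; double deadline; };  // seconds

    struct SimProject {
        double resource_share;
        std::vector<SimResult> results;   // in queue order
        int deadlines_missed = 0;         // output
        double work_before_minq = 0;      // output: work done in next min_queue s
    };

    // Returns total_work_before_minq, summed over all projects.
    double simulate_round_robin(std::vector<SimProject>& projects,
                                int ncpus, double min_queue) {
        double total_share = 0;
        for (auto& p : projects) total_share += p.resource_share;
        double total_work_before_minq = 0;
        for (auto& p : projects) {
            // Simplification: P runs at its steady-state round-robin rate.
            double rate = ncpus * p.resource_share / total_share;
            if (rate <= 0) continue;
            double t = 0;  // simulated wall-clock time
            for (auto& r : p.results) {
                double finish = t + r.work_left / rate;
                if (finish > r.deadline) p.deadlines_missed++;
                // Credit the portion of this result run before min_queue.
                double end = std::min(finish, min_queue);
                if (end > t) p.work_before_minq += (end - t) * rate;
                t = finish;
            }
            total_work_before_minq += p.work_before_minq;
        }
        return total_work_before_minq;
    }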

The scheduling policy is as follows (a code sketch appears after the list):
  1. Let P be the project with the earliest-deadline runnable result among projects with deadlines_missed(P) > 0, and let R be P's earliest-deadline runnable result that has not yet been scheduled. Tiebreaker: least index in the result array.
  2. If such an R exists, schedule R and decrement deadlines_missed(P).
  3. If there are unscheduled CPUs, and projects with deadlines_missed(P) > 0, go to step 1.
  4. If all CPUs are scheduled, stop.
  5. Set the 'anticipated debt' of each project to its short-term debt.
  6. Find the project P with the greatest anticipated debt, select one of P's runnable results (preferring one that is already running, else the one with the earliest deadline), and schedule that result.
  7. Decrement P's anticipated debt by the 'expected payoff' (the scheduling period divided by NCPUS).
  8. Repeat steps 6 and 7 for any remaining CPUs.
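
Here is a minimal C++ sketch of this pass, under the assumption that the round-robin simulation has already filled in deadlines_missed for each project. All type and field names are illustrative rather than the actual BOINC client code, and the result-array-index tiebreaker is omitted for brevity.

    #include <vector>

    struct Project;

    struct Result {
        Project* project;
        double deadline;      // seconds from now
        bool running;         // currently executing on a CPU
        bool scheduled;       // chosen by this scheduling pass
    };

    struct Project {
        double short_term_debt;
        double anticipated_debt;
        int deadlines_missed;              // from the round-robin simulation
        std::vector<Result*> runnable_results;
    };

    void schedule_cpus(std::vector<Project*>& projects, int ncpus,
                       double sched_period) {
        int cpus_left = ncpus;

        // Steps 1-4: EDF among projects the simulation says miss deadlines.
        while (cpus_left > 0) {
            Result* best = nullptr;
            for (Project* p : projects) {
                if (p->deadlines_missed <= 0) continue;
                for (Result* r : p->runnable_results) {
                    if (r->scheduled) continue;
                    if (!best || r->deadline < best->deadline) best = r;
                }
            }
            if (!best) break;          // no deadline-trouble project remains
            best->scheduled = true;
            best->project->deadlines_missed--;
            cpus_left--;
        }

        // Step 5: seed anticipated debt with short-term debt.
        for (Project* p : projects) p->anticipated_debt = p->short_term_debt;

        // Steps 6-8: fill remaining CPUs by greatest anticipated debt.
        while (cpus_left > 0) {
            Project* best_p = nullptr;
            for (Project* p : projects) {
                bool has_candidate = false;
                for (Result* r : p->runnable_results)
                    if (!r->scheduled) { has_candidate = true; break; }
                if (!has_candidate) continue;
                if (!best_p || p->anticipated_debt > best_p->anticipated_debt)
                    best_p = p;
            }
            if (!best_p) break;        // nothing left to schedule
            // Prefer a result already running; else earliest deadline.
            Result* pick = nullptr;
            for (Result* r : best_p->runnable_results) {
                if (r->scheduled) continue;
                if (r->running) { pick = r; break; }
                if (!pick || r->deadline < pick->deadline) pick = r;
            }
            pick->scheduled = true;
            best_p->anticipated_debt -= sched_period / ncpus;  // 'expected payoff'
            cpus_left--;
        }
    }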

The CPU scheduler runs when a result is completed, when the end of the user-specified scheduling period is reached, when new results become runnable, or when the user performs a UI interaction (e.g. suspending or resuming a project or result).

CPU scheduling enforcement

The CPU scheduler decides what result should run, but it doesn't enforce this decision (by preempting, resuming and starting applications). This enforcement is done by a separate function, which runs periodically, and is also called by the CPU scheduler at its conclusion. The following rules apply to application preemption:
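
As one illustration of the checkpoint consideration mentioned in the introduction, here is a hypothetical C++ sketch of a 'preempt now or wait for the next checkpoint' decision; the structure, helpers, and grace-period parameter are assumptions, not the actual enforcement rules.

    #include <ctime>

    // Hypothetical decision: should a running task that lost its slot in
    // the new schedule be preempted immediately, or left to run until it
    // writes its next checkpoint? All names are assumptions.
    struct RunningTask {
        time_t last_checkpoint;   // when the app last checkpointed
        bool still_scheduled;     // the new schedule keeps this task running
    };

    bool should_preempt_now(const RunningTask& t, bool cpu_needed_for_deadline,
                            time_t now, int checkpoint_grace_period) {
        if (t.still_scheduled) return false;        // nothing to enforce
        if (cpu_needed_for_deadline) return true;   // an EDF result is waiting
        // Otherwise preempt only if little work would be lost, i.e. the task
        // has checkpointed recently; else let it run until it checkpoints.
        return (now - t.last_checkpoint) <= checkpoint_grace_period;
    }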

Work-fetch policy

When a result runs in EDF mode, its project may get more than its share of CPU time. The work-fetch policy is responsible for ensuring that this doesn't happen repeatedly. It does this by suppressing work fetch for the project.

A project P is overworked if

This condition can occur if P's results run in EDF mode (and, in extreme cases, when a project with a large negative long-term debt is detached). The work-fetch policy avoids fetching work from overworked projects. This prevents a situation where a project with short deadlines monopolizes the CPU.
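
A minimal sketch of such a test follows; the one-day threshold is purely an assumption for illustration, since the actual cutoff is not given here.

    // Hypothetical overworked test: skip work fetch for a project whose
    // long-term debt has gone sufficiently negative. The one-day threshold
    // is an assumption, not a documented constant.
    bool is_overworked(double long_term_debt) {
        const double threshold = -86400;  // assumed: one day, in seconds
        return long_term_debt < threshold;
    }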

The work-fetch policy uses the following functions:

frs(project P)
P's fractional resource share among fetchable projects.
work_to_fill_buf(P)
The amount of work needed to keep P busy for the next min_queue seconds, namely:
    y = min_queue*ncpus - work_before_minq(P)
    if (y <= 0) return 0
    return y/frs(P)

The work-fetch policy function is called every few minutes (or as needed) by the scheduler RPC polling function. It sets the variable work_request_size(P) for each project P, which is the number of seconds of work to request if we do a scheduler RPC to P. This is computed as follows:

for each project P
    if P is suspended, deferred, overworked, or no-new-work
        P.work_request_size = 0
    else
        P.work_request_size = work_to_fill_buf(P)

if min_queue*ncpus > total_work_before_minq
    if P.work_request_size==0 for all P
        for each project P
            if P is suspended, deferred, overworked, or no-new-work
                continue
            P.work_request_size = 1

    if P.work_request_size==0 for all P
        for each project P
            if P is suspended, deferred, or no-new-work
                continue
            P.work_request_size = 1

    if P.work_request_size>0 for some P
        Normalize P.work_request_size so that they
        sum to min_queue*ncpus - total_work_before_minq
        and are proportional to P.resource_share
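
The following C++ sketch renders the pseudocode above in a runnable form; the Project fields and function names are assumptions for illustration, not the BOINC client's actual API.

    #include <vector>

    struct Project {
        double resource_share;
        double work_before_minq;    // from the round-robin simulation
        double work_request_size;   // output: seconds of work to request
        bool suspended = false, deferred = false,
             overworked = false, no_new_work = false;
    };

    static bool excluded(const Project& p, bool include_overworked) {
        return p.suspended || p.deferred || p.no_new_work ||
               (!include_overworked && p.overworked);
    }

    void set_work_request_sizes(std::vector<Project>& ps, double min_queue,
                                int ncpus, double total_work_before_minq) {
        // frs(P) is computed over the fetchable (non-excluded) projects.
        double fetchable_share = 0;
        for (auto& p : ps)
            if (!excluded(p, false)) fetchable_share += p.resource_share;

        for (auto& p : ps) {
            if (excluded(p, false) || fetchable_share == 0) {
                p.work_request_size = 0;
            } else {
                // work_to_fill_buf(P), inlined.
                double y = min_queue * ncpus - p.work_before_minq;
                double frs = p.resource_share / fetchable_share;
                p.work_request_size = (y <= 0) ? 0 : y / frs;
            }
        }

        double shortfall = min_queue * ncpus - total_work_before_minq;
        if (shortfall <= 0) return;

        // Fallback passes: if nothing was requested, ask each eligible
        // project for a token second of work -- first excluding overworked
        // projects, then including them.
        for (bool include_overworked : {false, true}) {
            bool any_request = false;
            for (auto& p : ps)
                if (p.work_request_size > 0) any_request = true;
            if (any_request) break;
            for (auto& p : ps)
                if (!excluded(p, include_overworked)) p.work_request_size = 1;
        }

        // Normalize nonzero requests so they sum to the shortfall and are
        // proportional to resource share.
        double share_sum = 0;
        for (auto& p : ps)
            if (p.work_request_size > 0) share_sum += p.resource_share;
        if (share_sum == 0) return;
        for (auto& p : ps)
            if (p.work_request_size > 0)
                p.work_request_size = shortfall * p.resource_share / share_sum;
    }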

The scheduler RPC mechanism may select a project to contact because of a user request, an outstanding trickle-up message, or a result that is overdue for reporting. If it does so, it will also request work from that project. Otherwise, the RPC mechanism chooses the project P for which

    P.work_request_size > 0, and
    P.long_term_debt + work_to_fill_buf(P) is greatest,

and gets work from that project.
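
A minimal sketch of this choice, assuming the per-project quantities have already been computed (field names are illustrative):

    #include <vector>

    struct FetchCandidate {
        double long_term_debt;
        double work_to_fill_buf;   // seconds needed to fill the buffer
        double work_request_size;  // from the request-size pass above
    };

    // Index of the project with the greatest long_term_debt +
    // work_to_fill_buf among those with a positive work request,
    // or -1 if none qualifies.
    int choose_rpc_target(const std::vector<FetchCandidate>& c) {
        int best = -1;
        for (int i = 0; i < (int)c.size(); i++) {
            if (c[i].work_request_size <= 0) continue;
            double key = c[i].long_term_debt + c[i].work_to_fill_buf;
            if (best < 0 ||
                key > c[best].long_term_debt + c[best].work_to_fill_buf)
                best = i;
        }
        return best;
    }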

Scheduler work-send policy

NOTE: the following has not been implemented, and is independent of the above policies.

The scheduler should avoid sending results whose deadlines are likely to be missed, or which are likely to cause existing results to miss their deadlines. This will be accomplished as follows:
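
As an illustration of the stated goal (not the planned design, which is unspecified here), a scheduler could run an EDF feasibility test over the host's reported work queue plus the candidate result, assuming a single effective CPU rate:

    #include <algorithm>
    #include <vector>

    // Hypothetical feasibility test: simulate EDF on the host's reported
    // queue plus the candidate result, and refuse to send it if any
    // deadline would be missed. A single effective CPU rate is assumed.
    struct QueuedJob { double cpu_seconds_left; double deadline_seconds; };

    bool can_send(std::vector<QueuedJob> queue, QueuedJob candidate,
                  double effective_cpu_rate) {
        queue.push_back(candidate);
        std::sort(queue.begin(), queue.end(),
                  [](const QueuedJob& a, const QueuedJob& b) {
                      return a.deadline_seconds < b.deadline_seconds;
                  });
        double t = 0;  // simulated completion time
        for (const auto& j : queue) {
            t += j.cpu_seconds_left / effective_cpu_rate;
            if (t > j.deadline_seconds) return false;  // deadline missed
        }
        return true;
    }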


Describing scenarios

We encourage the use of the following notation for describing scheduling scenarios (times are given in hours):

P(C, D, R)

This describes a project with C hours of work remaining, a deadline D hours in the future, and resource share R.

A scenario is described by a list of projects, plus optional parameters such as NCPUS. A typical scenario description is:
(P1(1000, 2000, .5), P2(1, 10, .5), NCPUS=4)
"; page_tail(); ?>