<?php
require_once("docutil.php");
page_head("Client scheduling policies");
echo "
This document describes two interrelated scheduling policies
in the BOINC client:
<ul>
<li> <b>CPU scheduling policy</b>: which results to run, and when.
<li> <b>Work fetch policy</b>: when to contact scheduling servers,
which projects to contact, and how much work to ask for.
</ul>
<h2>CPU scheduling</h2>
<p>The CPU scheduling policy aims to achieve the following goals
(in decreasing priority):</p>
<ol>
<li>
<b>Maximize CPU utilization</b>.
<li>
<b>Enforce resource shares.</b>
The ratio of CPU time allocated to projects that have work,
in a typical period of a day or two,
should be approximately the same as the ratio of
the user-specified resource shares.
If a project has no work for some period,
it does not accumulate a 'debt' of work.
<li>
<b>Satisfy result deadlines if possible.</b>
<li>
<b>Reschedule CPUs periodically.</b>
This goal stems from the large differences in duration of
results from different projects.
Participants in multiple projects
will expect to see their computers do work on each of these projects
within a reasonable time period.
<li>
<b>Minimize mean time to completion.</b>
For example, it's better to have one result from
a project complete in time T than to have two results
simultaneously complete in time 2T.
</ol>
<p>
A result is 'active' if there is a slot directory for it.
There can be more active results than CPUs.
<h3>Debt</h3>
<p>
The notion of 'debt' is used to respect the resource share allocation
for each project.
The debt to a project represents the amount of work
(in CPU time) we owe it.
Debt is decreased when CPU time is devoted to a project.
We increase the debt to a project according to the
total amount of work done in a time period scaled by the project's
resource share.
<p>
For example, consider a system participating in two projects, A and B,
with resource shares 75% and 25%, respectively.
Suppose that in some time period, the system devotes 25 minutes of CPU time to project A
and 15 minutes of CPU time to project B.
We decrease the debt to A by 25 minutes and increase it by 30 minutes
(75% of the 40 minutes of total work done).
So A's debt increases overall.
This makes sense because project A was expected to get a
larger share of the CPU time than it actually got.
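<p>
The same bookkeeping, written out as code, looks roughly like the
following C++ sketch; the type and function names here are illustrative,
not the actual BOINC client code.
<pre>
// One debt update at the end of a scheduling period.
// With shares 0.75/0.25 and 25 + 15 = 40 minutes of total work,
// A's debt changes by 0.75*40 - 25 = +5 and B's by 0.25*40 - 15 = -5.
struct Project {
    double resource_share;         // fraction of the total, e.g. 0.75
    double debt;                   // CPU time owed to the project
    double work_done_this_period;  // CPU time it actually received
};

void update_debts(Project projects[], int nprojects, double total_work) {
    for (int i = 0; i < nprojects; i++) {
        projects[i].debt +=
            projects[i].resource_share * total_work
            - projects[i].work_done_this_period;
    }
}
</pre>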
<p>
The choice of projects for which to start result computations
can simply follow the debt ordering of the projects.
The algorithm computes the 'anticipated debt' to a project
(the debt we expect to owe after the time period expires)
as it chooses result computations to run.
<h3>A sketch of the CPU scheduling algorithm</h3>
<p>
This algorithm is run:
<ul>
<li> Whenever a CPU is free
<li> Whenever a new result arrives (via scheduler RPC)
<li> Whenever it hasn't run for MV seconds, where MV is the scheduling period
</ul>
<p>
We will attempt to minimize the number of active result
computations for a project by dynamically choosing results to compute
from a global pool.
When we allocate CPU time to a project,
we will choose already running tasks first,
then preempted tasks, and only start a new result
computation as a last resort.
This does not guarantee the above
property, but we hope it will come close to achieving it.
<ol>
<li>Decrease debts to projects according to the amount of work done for
the projects in the last period.
<li>Increase debts to projects according to the projects' resource shares.
<li>Let the anticipated debt for each project be initialized to
its current debt.
<li>Repeat until we decide on a result to compute for each processor:
<ol>
<li>Choose the project that has the largest anticipated debt and a
ready-to-compute result.
<li>Decrease the anticipated debt of the project by the expected amount of CPU time.
</ol>
<li>Preempt current result computations, and start new ones.
</ol>
<h3>Pseudocode</h3>
<pre>
data structures:
ACTIVE_TASK:
    double cpu_time_at_last_sched
    double current_cpu_time
    scheduler_state:
        PREEMPTED
        RUNNING
    next_scheduler_state    // temp
PROJECT:
    double work_done_this_period    // temp
    double debt
    double anticipated_debt    // temp
    RESULT next_runnable_result

schedule_cpus():

foreach project P
    P.work_done_this_period = 0

total_work_done_this_period = 0
foreach task T that is RUNNING:
    x = T.current_cpu_time - T.cpu_time_at_last_sched
    T.project.work_done_this_period += x
    total_work_done_this_period += x

foreach P in projects:
    P.debt += P.resource_share * total_work_done_this_period
              - P.work_done_this_period

expected_pay_off = total_work_done_this_period / num_cpus

foreach P in projects:
    P.anticipated_debt = P.debt

foreach task T
    T.next_scheduler_state = PREEMPTED

do num_cpus times:
    // choose the project with the largest anticipated debt
    P = argmax { P.anticipated_debt } over all P in projects with a runnable result
    if none:
        break
    if (some T (not already scheduled to run) for P is RUNNING):
        T.next_scheduler_state = RUNNING
        P.anticipated_debt -= expected_pay_off
        continue
    if (some T (not already scheduled to run) for P is PREEMPTED):
        T.next_scheduler_state = RUNNING
        P.anticipated_debt -= expected_pay_off
        continue
    if (some R in results is for P, not active, and ready to run):
        Choose R with the earliest deadline
        T = new ACTIVE_TASK for R
        T.next_scheduler_state = RUNNING
        P.anticipated_debt -= expected_pay_off

foreach task T
    if scheduler_state == PREEMPTED and next_scheduler_state == RUNNING
        unsuspend or run
    if scheduler_state == RUNNING and next_scheduler_state == PREEMPTED
        suspend (or kill)

foreach task T
    T.cpu_time_at_last_sched = T.current_cpu_time
</pre>
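<p>
For concreteness, here is a small, self-contained C++ sketch of the
selection loop above.
The type names, fields, and helper function are simplified assumptions
for this sketch, not the actual BOINC client data structures,
and the task-selection preference (a running task, then a preempted one,
then a new result) is reduced to a comment.
<pre>
struct PROJECT {
    double debt;
    double anticipated_debt;
    bool has_runnable_result;
};

// Index of the project with the largest anticipated debt among those
// with a runnable result, or -1 if there is none.
int largest_debt_project(PROJECT projects[], int nprojects) {
    int best = -1;
    for (int i = 0; i < nprojects; i++) {
        if (!projects[i].has_runnable_result) continue;
        if (best < 0 || projects[i].anticipated_debt > projects[best].anticipated_debt) {
            best = i;
        }
    }
    return best;
}

// Greedy CPU assignment driven by anticipated debt.
void schedule_cpus(PROJECT projects[], int nprojects, int num_cpus,
                   double expected_pay_off) {
    for (int i = 0; i < nprojects; i++) {
        projects[i].anticipated_debt = projects[i].debt;
    }
    for (int cpu = 0; cpu < num_cpus; cpu++) {
        int p = largest_debt_project(projects, nprojects);
        if (p < 0) break;
        // Here the real policy marks one of the project's tasks RUNNING:
        // it prefers a task that is already running, then a preempted one,
        // and only as a last resort creates a new ACTIVE_TASK for the
        // result with the earliest deadline.
        projects[p].anticipated_debt -= expected_pay_off;
    }
}
</pre>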
<h2>Work fetch policy</h2>
<p>
The work fetch policy has the following goals:
<ul>
<li>
Maintain sufficient work so that the CPU scheduler
is never 'starved' (i.e., a result is available for
a particular project when one is needed).
<li>
More specifically:
given a 'connection period' parameter T (days),
always maintain sufficient work so that the CPU scheduler will
not be starved for T days,
given the average work processing rate.
The client should contact scheduling servers only about every T days.
<li>
Don't fetch more work than necessary, given the above goals.
Thus, try to maintain enough work that starvation will occur
between T and 2T days from now.
</ul>
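<p>
For example, with T = 2 days, the client tries to hold enough work
that it would run dry somewhere between 2 and 4 days from now,
and so contacts each project's scheduling server roughly every 2 days.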
<h3>When to get work</h3>
<p>
At a given time, the CPU scheduler may need as many as
<blockquote>
min_results(P) = ceil(ncpus * P.resource_share)
</blockquote>
<p>
results for project P to avoid starvation.
The client can estimate the amount of time that will elapse until
the number of runnable results falls below min_results(P)
for some project P.
When this length of time is less than T,
it is time to get more work for project P.
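<p>
For example, on a host with 2 CPUs, a project with a 60% resource share
has min_results = ceil(2 * 0.6) = ceil(1.2) = 2,
so the client should keep at least two runnable results on hand
for that project.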
<h3>A sketch of the work fetch algorithm</h3>
The algorithm sets P.work_request for each project P
and returns an 'urgency level':
<pre>
NEED_WORK_IMMEDIATELY
    CPU scheduler is starved right now (though there may not be an idle CPU)
NEED_WORK
    Will starve within T days
DONT_NEED_WORK
    otherwise
</pre>
It can be called whenever the client can make a scheduler RPC.
<p>
<ol>
<li>
For each project P:
<ol>
<li>
Let R0...Rn-1 be P's runnable results, ordered by decreasing deadline.
<li>
Let S be the sum of the estimated durations E(R) of R(min_results-1),...,Rn-1,
where E(R) is R's FLOPS estimate
divided by (this host's average processing rate times P.resource_share).
(That is, S is the expected time until starvation for P.)
<li>
If S < T:
<ol>
<li>If S == 0: urgency = NEED_WORK_IMMEDIATELY
<li>P.work_request = (2T - S) * P.resource_share
<li>urgency = max(urgency, NEED_WORK)
</ol>
<li>
Else, P.work_request = 0
</ol>
<li>
Return urgency
</ol>
<p>
The mechanism for actually getting work checks if a project has a
non-zero work request and if so, makes the scheduler RPC call to
request the work.

<h3>Pseudocode</h3>
<pre>
data structures:
PROJECT:
    double work_request

compute_work_request():

urgency = DONT_NEED_WORK

foreach project P:
    work_remaining = 0
    results_to_skip = min_results(P) - 1
    P.work_request = 0

    foreach result R for P in order of decreasing deadline:
        if results_to_skip > 0:
            results_to_skip--
            continue
        work_remaining += E(R)

    if work_remaining / SECONDS_PER_DAY < T:
        if work_remaining == 0:
            urgency = NEED_WORK_IMMEDIATELY
        P.work_request =
            (2*T - work_remaining / SECONDS_PER_DAY) * P.resource_share
        urgency = max(NEED_WORK, urgency)

return urgency
</pre>
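<p>
As with the CPU scheduler, this translates fairly directly into code.
The following C++ sketch is illustrative only: the types, the
estimated_duration field, the min_results() helper, and the assumption
that each project's runnable results are already sorted by decreasing
deadline are all simplifications, not the actual client implementation.
<pre>
enum urgency_t { DONT_NEED_WORK, NEED_WORK, NEED_WORK_IMMEDIATELY };

const double SECONDS_PER_DAY = 86400;

struct RESULT {
    double estimated_duration;    // E(R), seconds of CPU time on this host
};

struct PROJECT {
    double resource_share;        // fraction of the total, in [0,1]
    double work_request;          // output: requested work, in days
    int nresults;                 // number of runnable results
    RESULT* results;              // runnable results, by decreasing deadline
};

// ceil(ncpus * resource_share), avoiding a math.h dependency
int min_results(double resource_share, int ncpus) {
    double x = ncpus * resource_share;
    int m = (int) x;
    if (m < x) m++;
    return m;
}

urgency_t compute_work_request(PROJECT projects[], int nprojects, int ncpus, double T) {
    urgency_t urgency = DONT_NEED_WORK;
    for (int i = 0; i < nprojects; i++) {
        PROJECT* p = projects + i;
        double work_remaining = 0;
        int results_to_skip = min_results(p->resource_share, ncpus) - 1;
        p->work_request = 0;
        for (int j = 0; j < p->nresults; j++) {
            if (results_to_skip > 0) {
                results_to_skip--;
                continue;
            }
            work_remaining += p->results[j].estimated_duration;
        }
        if (work_remaining / SECONDS_PER_DAY < T) {
            if (work_remaining == 0) urgency = NEED_WORK_IMMEDIATELY;
            p->work_request =
                (2*T - work_remaining / SECONDS_PER_DAY) * p->resource_share;
            if (urgency < NEED_WORK) urgency = NEED_WORK;
        }
    }
    return urgency;
}
</pre>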
";
page_tail();
?>