The old policy avoided running an N-CPU job unless N CPUs were free.
This could result in idle CPUs for long periods; for example:
on a 4-CPU machine, suppose you have a long 1-CPU job running in EDF mode
and some 4-CPU jobs queued.
Three CPUs will be idle until the 1-CPU job finishes.
Furthermore, the work fetch mechanism won't try to get
jobs (possibly non-MT) from other projects,
because the RR simulation doesn't reflect the scheduling
policy's exclusion principle.
The change: schedule jobs until ncpus_used >= ncpus.
E.g. in the above situation run the 1- and 4-CPU jobs together.
In extreme cases we might run three 1-CPU jobs and the 4-CPU job.
This will degrade the performance of the 4-CPU job,
but that's probably better than having idle CPUs.
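A minimal sketch of the new rule, with simplified stand-ins for the
client's actual structures (JOB and the priority ordering here are
assumptions):

    #include <vector>

    struct JOB {
        double ncpus;      // CPUs this job uses (> 1 for multi-threaded jobs)
        bool scheduled;
    };

    // Schedule jobs in priority order until the CPUs are saturated.
    // The old policy instead required ncpus_used + job->ncpus <= ncpus,
    // which left CPUs idle when the next job was a wide MT job.
    void schedule_cpus(std::vector<JOB*>& runnable, int ncpus) {
        double ncpus_used = 0;
        for (JOB* job : runnable) {         // assumed sorted by priority
            if (ncpus_used >= ncpus) break;
            job->scheduled = true;
            ncpus_used += job->ncpus;
        }
    }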
svn path=/trunk/boinc/; revision=25312
in which the tiebreaker is MD5 of name.
That way the order is stable
(it doesn't change from one run of the client to the next)
and it doesn't group results with similar names
(and hence for the same app).
This ordering is used for
1) the order of display in the manager
2) the job scheduler's notion of FIFO
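A sketch of such a comparator; the fields are simplified, and
std::hash stands in for the MD5 used by the client:

    #include <functional>
    #include <string>

    struct RESULT {
        std::string name;
        double received_time;   // stand-in for whatever defines FIFO order
    };

    // Tiebreak by a hash of the name: MD5 makes the order stable
    // across runs and keeps similarly-named (same-app) jobs from
    // clustering; std::hash here just keeps the sketch self-contained.
    bool fifo_order(const RESULT* a, const RESULT* b) {
        if (a->received_time != b->received_time) {
            return a->received_time < b->received_time;
        }
        std::hash<std::string> h;
        return h(a->name) < h(b->name);
    }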
svn path=/trunk/boinc/; revision=25300
old: RR simulation marks some jobs as missing their deadline,
and the job scheduler runs those jobs as "high priority".
problem: those generally aren't the ones we should run.
E.g. if the client has a lot of jobs from a project,
typically the ones with later deadlines are the ones
whose deadlines are missed in the simulation.
But in this case the EDF policy says we should run
the ones with earliest deadlines.
new: if a project has N deadline misses,
run its N earliest-deadline jobs,
regardless of whether they missed their deadline in the sim.
Note: this is how it used to be (as designed by John McLeod).
I attempted to improve it, and got it wrong.
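A sketch of the restored rule (structures simplified; deadline_misses
is assumed to come from the RR simulation):

    #include <algorithm>
    #include <vector>

    struct RESULT {
        double deadline;
        bool edf_high_priority;
    };

    struct PROJECT {
        int deadline_misses;            // from RR simulation
        std::vector<RESULT*> results;
    };

    // If the simulation says a project misses N deadlines, mark its N
    // earliest-deadline jobs to run in EDF mode, regardless of which
    // jobs actually missed in the simulation.
    void mark_edf_jobs(PROJECT& p) {
        std::sort(p.results.begin(), p.results.end(),
            [](const RESULT* a, const RESULT* b) {
                return a->deadline < b->deadline;
            }
        );
        size_t n = std::min((size_t)p.deadline_misses, p.results.size());
        for (size_t i = 0; i < n; i++) {
            p.results[i]->edf_high_priority = true;
        }
    }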
svn path=/trunk/boinc/; revision=25188
Report it (along with disk usage) in scheduler request messages.
This will allow the scheduler to send file-delete commands
if the project is using more than its share.
- client: add <disk_usage_debug> log flag
- create_work: add --help, show --command_line option
svn path=/trunk/boinc/; revision=24968
work fetch (e.g. to report completed jobs),
only request work if it's the project we would have chosen
if we were fetching work.
- client: the way in which project priorities were adjusted
in work fetch to reflect currently queued work was wrong.
- client: fix bug in the way project priorities are adjusted
in RR simulator
- client emulator: if there are results in the state file
with states DOWNLOADING or UPLOADING,
change them to DOWNLOADED or UPLOADED.
Otherwise they're stuck.
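A sketch of the fix-up; the state constants mirror the client's result
states (values here are illustrative):

    enum RESULT_STATE {
        RESULT_FILES_DOWNLOADING = 1,
        RESULT_FILES_DOWNLOADED  = 2,
        RESULT_FILES_UPLOADING   = 4,
        RESULT_FILES_UPLOADED    = 5
    };

    struct RESULT { int state; };

    // The emulator does no actual file transfers, so in-progress
    // transfer states would never complete; promote them.
    void fix_transfer_state(RESULT& r) {
        if (r.state == RESULT_FILES_DOWNLOADING) r.state = RESULT_FILES_DOWNLOADED;
        if (r.state == RESULT_FILES_UPLOADING) r.state = RESULT_FILES_UPLOADED;
    }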
svn path=/trunk/boinc/; revision=24737
If we're contacting a project to report results,
only piggyback work requests for resources for which
that project is the highest-priority project that may have work
(see the sketch below).
- client: compute result.not_started more efficiently
TODO: continue efficiency work. There's still some quadratic stuff
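A sketch of the piggyback rule above (project and resource fields are
simplified assumptions):

    #include <vector>

    const int NRSC = 2;   // e.g. CPU and one GPU type; illustrative

    struct PROJECT {
        double sched_priority;
        bool may_have_work[NRSC];
    };

    // Piggyback a work request for resource rsc onto a report-results
    // RPC only if this project is the highest-priority project that
    // may have work for that resource.
    bool should_piggyback(PROJECT* p, int rsc, const std::vector<PROJECT*>& projects) {
        if (!p->may_have_work[rsc]) return false;
        for (PROJECT* q : projects) {
            if (q != p && q->may_have_work[rsc]
                && q->sched_priority > p->sched_priority) {
                return false;   // some other project should get the request
            }
        }
        return true;
    }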
svn path=/trunk/boinc/; revision=24523
(e.g. when editing it via the Manager).
Include only the GPUs that were specified in the original cc_config.xml,
not those detected by the client.
- client: fix bug that failed to require authorization for
GUI RPCs that are supposed to be authorized
- client: report parse errors in acct_mgr_url.xml and acct_mgr_login.xml
- fix compile warnings
- user web: in sample project_specific_prefs.inc,
get app names from the DB instead of listing them in the PHP code.
svn path=/trunk/boinc/; revision=24518
so that they do what they're supposed to
(i.e. enforce resource shares)
- client: change log flag <debt_debug> to <priority_debug>
- client simulator: update REC even with large delta-t.
- client simulator: handle "no new work" apps correctly
svn path=/trunk/boinc/; revision=24429
- Job scheduling: the baseline policy is to schedule based on "project priority",
which is how much processing a project should receive based on its resource share
minus how much it actually has received recently.
This policy tends to run jobs from the same project together,
so we modified it by adding a priority adjustment as jobs are scheduled.
The idea is that if 2 projects have about the same priority
they should split the processors.
The problem: the adjustment was too large on hosts that are on
only a small fraction of the time,
so we tended to run one job from each project, regardless of priority.
Solution: make an adjustment that reflects the host's actual throughput.
See adjust_rec_sched() for details; a sketch follows this list.
- Work fetch: similar situation.
We were making an adjustment based on how much work the project currently has queued,
but the adjustment drowned out the project priority,
so we'd tend to always get work from the project that has least work queued.
Solution: make a smaller adjustment, in the range -0.3 to 0.3
(see the sketch below).
- client: in message announcing app start, show the plan class
- client: don't show "unrecognized XML" messages for account files.
It's typically project-specific prefs that the client doesn't know about.
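A sketch of the two adjustments described above; the exact formulas
are assumptions (the real logic is in adjust_rec_sched() and the
work-fetch code):

    #include <algorithm>

    struct PROJECT {
        double sched_priority;   // resource share minus recent usage (REC)
        double queued_flops;     // estimated work currently queued
    };

    // Job scheduling: as each job is scheduled, lower the project's
    // priority by an amount scaled to the host's actual throughput,
    // so mostly-off hosts aren't over-penalized.
    void adjust_on_schedule(PROJECT& p, double job_flops, double host_flops_per_period) {
        p.sched_priority -= job_flops / host_flops_per_period;
    }

    // Work fetch: bias priority by queued work, but clamp the bias to
    // [-0.3, 0.3] so it can't drown out the base priority.
    double work_fetch_priority(const PROJECT& p, double total_queued_flops) {
        double adj = 0;
        if (total_queued_flops > 0) {
            adj = 0.3 - 0.6 * (p.queued_flops / total_queued_flops);
        }
        return p.sched_priority + std::max(-0.3, std::min(0.3, adj));
    }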
svn path=/trunk/boinc/; revision=24403
- client: if an app version can't be used because the GPUs it needs
are all excluded, mark it and all its results as "coproc missing"
so that they won't be looked at in scheduling logic.
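A sketch of the marking pass (field names are assumptions):

    #include <vector>

    struct APP_VERSION {
        bool needs_gpu;
        bool all_needed_gpus_excluded;   // assumed precomputed from exclusions
        bool coproc_missing;
    };

    struct RESULT {
        APP_VERSION* avp;
        bool coproc_missing;
    };

    // Flag app versions whose required GPUs are all excluded, and all
    // results using them, so scheduling logic can skip them outright.
    void mark_coproc_missing(std::vector<APP_VERSION*>& avs, std::vector<RESULT*>& results) {
        for (APP_VERSION* av : avs) {
            if (av->needs_gpu && av->all_needed_gpus_excluded) {
                av->coproc_missing = true;
            }
        }
        for (RESULT* r : results) {
            if (r->avp && r->avp->coproc_missing) r->coproc_missing = true;
        }
    }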
svn path=/trunk/boinc/; revision=24317
in the presence of GPU exclusions.
The problem was in the job-selection phase,
which picks enough jobs to use all devices.
It was ignoring GPU exclusions, so for example on
a 2-GPU system it could pick 2 jobs from a project
for which 1 GPU is excluded,
and as a result 1 GPU would be idle.
Solution: during job selection,
keep track of GPU usage on a per-instance basis.
Select a job only if it can run on a non-excluded GPU
(see the sketch below).
- client: in computing ncprocs_excluded (which is used in
work fetch policy) don't count exclusions of non-existent devices
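A sketch of the per-instance bookkeeping during job selection
(exclusion data is assumed precomputed per job/project):

    #include <vector>

    struct JOB {
        // GPU instances this job's project may not use (<exclude_gpu>)
        std::vector<bool> excluded_instance;
    };

    // Pick a job only if some free, non-excluded GPU instance can run
    // it, and commit that instance so later picks see it as busy.
    bool try_assign_gpu(JOB& job, std::vector<bool>& instance_busy) {
        for (size_t i = 0; i < instance_busy.size(); i++) {
            if (instance_busy[i] || job.excluded_instance[i]) continue;
            instance_busy[i] = true;
            return true;
        }
        return false;   // all usable instances busy or excluded; skip job
    }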
svn path=/trunk/boinc/; revision=24316
- measure the available RAM of each GPU when BOINC starts up.
If this fails, set available = physical.
Show available RAM in startup messages.
- use available RAM rather than physical RAM in selecting
the "best" GPU instance
- report available RAM to the scheduler
TODO: change the scheduler to use available rather than physical
if it's reported
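A sketch of the startup measurement and fallback; query_free_ram() is
a hypothetical stand-in for the vendor-specific call:

    struct COPROC {
        double ram;             // physical RAM, from device enumeration
        double available_ram;   // measured at startup
    };

    // Stand-in for the vendor API (e.g. a free-memory query on the
    // device); returns false if the query fails.
    static bool query_free_ram(COPROC&, double& free_bytes) {
        free_bytes = 0;
        return false;   // pretend the query failed in this sketch
    }

    void measure_available_ram(COPROC& c) {
        double free_bytes;
        if (query_free_ram(c, free_bytes)) {
            c.available_ram = free_bytes;
        } else {
            c.available_ram = c.ram;   // fall back to physical RAM
        }
    }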
svn path=/trunk/boinc/; revision=24210
by simulating time-slicing explicitly.
Also simulate changes in project REC
and hence in scheduling priority.
- client: add a log flag "rrsim_detail" that prints
time-slice-level info.
svn path=/trunk/boinc/; revision=24161
- client: extend <exclude_gpu> option so that if <device_num> is omitted,
all GPUs of the given type are excluded.
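For example, something like the following (the <type> element and the
URL are illustrative) would exclude all NVIDIA GPUs for that project:

    <exclude_gpu>
        <url>http://project_url.com/</url>
        <type>NVIDIA</type>
    </exclude_gpu>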
svn path=/trunk/boinc/; revision=23902
- client: cc_config.xml: if <device_num> is omitted from an <exclude_gpu> element,
it means exclude all instances of that GPU type
- client: if all instances of a GPU type are excluded for a project,
don't ask the project for jobs of that type
svn path=/trunk/boinc/; revision=23898
adjust project REC by the amount of work queued, to increase variety
NOTE: at some point I think I had a reason not to do this,
but I can't remember what it was.
- client, job scheduling policy: fix how project REC is adjusted
svn path=/trunk/boinc/; revision=23838
as non-high-priority
- client: don't print spurious "domino prevention"
and "thrashing prevention" messages
- manager: show project descriptions in same size font
as the rest of the dialog
svn path=/trunk/boinc/; revision=23831
If you put an element of the form
<exclude_gpu>
<url>http://project_url.com/</url>
<device_num>1</device_num>
</exclude_gpu>
in your cc_config.xml, that GPU won't be used for that project
svn path=/trunk/boinc/; revision=23774
- client: allow "non_cpu_intensive" to be specified independently
for different apps in a project.
This is intended to support projects that use the
Attic file distribution system,
which needs to have a daemon running.
svn path=/trunk/boinc/; revision=23610
If set, and a WU has a nonzero batch,
the batch value is interpreted as a user ID,
and the job will be sent only to hosts with that user ID.
Note: the use of workunit.batch is arbitrary;
we could also use workunit.opaque or some other deprecated field.
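A sketch of the scheduler-side check (the flag and field names are
assumptions):

    struct WORKUNIT { int batch; };
    struct USER { int id; };

    // If the option is enabled and the workunit has a nonzero batch,
    // interpret batch as a user ID and send the job only to that
    // user's hosts.
    bool job_allowed_for_user(const WORKUNIT& wu, const USER& user, bool user_filter) {
        if (!user_filter || wu.batch == 0) return true;
        return wu.batch == user.id;
    }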
svn path=/trunk/boinc/; revision=23556
otherwise non-ASCII characters in client_state.xml
make it invalid XML
- client: fix (I think) to scheduling logic.
A job is preemptable if it has finished its time slice and:
Old: has checkpointed in the last 10 seconds
New: has checkpointed since the end of the time slice
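A sketch of the old vs. new test (field names are assumptions):

    struct TASK {
        bool time_slice_over;
        double time_slice_end;        // wall time when the slice ended
        double last_checkpoint_time;  // wall time of latest checkpoint
    };

    // Old rule: preemptable only if checkpointed in the last 10 seconds.
    bool preemptable_old(const TASK& t, double now) {
        return t.time_slice_over && (now - t.last_checkpoint_time < 10);
    }

    // New rule: preemptable if it has checkpointed at any point since
    // the slice ended, so preemption loses no work from the slice.
    bool preemptable_new(const TASK& t) {
        return t.time_slice_over
            && (t.last_checkpoint_time >= t.time_slice_end);
    }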
svn path=/trunk/boinc/; revision=23551
Old:
If the AM sends us a project we're already attached to,
and the authenticator is different,
print an error message and don't change anything.
Problem:
If the AM is using weak authenticators,
and the user has changed their password,
the weak authenticator changes.
In this case the AM will send the new weak auth,
the client will ignore it,
and all subsequent scheduler RPCs will fail
until the user removes/adds the project.
Solution:
If the AM sends us a new auth for a project, use it.
Note:
From the time the password is changed on the project
to the next AM RPC,
the client will have a bad weak auth and scheduler RPCs will fail.
That's OK.
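A sketch of the new behavior (simplified from acct_mgr.cpp):

    #include <string>

    struct PROJECT {
        std::string authenticator;
    };

    // If the AM sends a different authenticator for an attached
    // project (e.g. a new weak auth after a password change), adopt it.
    // The old code printed an error and left the project unchanged.
    void handle_am_auth(PROJECT& p, const std::string& am_auth) {
        if (!am_auth.empty() && am_auth != p.authenticator) {
            p.authenticator = am_auth;
        }
    }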
client/
acct_mgr.cpp
svn path=/trunk/boinc/; revision=23479
- new GPU types can be added easily
- users can specify GPUs in cc_config.xml,
referred to by app_info.xml,
and they will be scheduled by BOINC
and passed --device N options
Note: the parsing of cc_config.xml is not done yet.
- RPC protocols (account manager and scheduler)
can now specify GPU types in separate elements
rather than embedding them in tag names
e.g. <no_rsc>NVIDIA</no_rsc> rather than <no_cuda/>
- client: in account manager replies, parse elements of the form
<no_rsc>NAME</no_rsc>
indicating the GPUs of type NAME should not be used.
This allows account managers to control GPU types
not hardwired into the client.
Note: <no_cuda/> and <no_ati/> will continue to be supported.
- scheduler RPC reply: add
<no_rsc_apps>NAME</no_rsc_apps>
(NAME = GPU name)
to indicate that the project has no jobs for the indicated GPU type.
<no_cuda_apps> etc. are still supported
- client/lib: remove set_debts() GUI RPC
- client/scheduler RPC
remove <cuda_backoff> etc. (superseded by <no_rsc_apps>)
Exception: <ip_result> elements in sched request
still have <ncudas> and <natis>.
Fix this later.
Implementation notes:
- client/lib: change "CUDA" to "NVIDIA" in type/variable names, and in XML
Continue to recognize "CUDA" for compatibility
- host_info.coprocs no longer used within the client;
use a global var (COPROCS coprocs) instead.
COPROCS now has an array of COPROCs;
GPU types are identified by the array index.
Index zero means CPU.
- a bunch of other resource-specific structs (like RSC_WORK_FETCH)
are now stored in arrays, with same indices as COPROCS
(i.e. index 0 is CPU)
- COPROCS still has COPROC_NVIDIA and COPROC_ATI structs to hold vendor-specific info
- APP_VERSION now has a struct GPU_USAGE to describe its GPU usage
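A sketch of the indexing scheme (simplified from the real COPROCS and
RSC_WORK_FETCH structures):

    #include <string>

    #define MAX_RSC 8

    struct COPROC {
        std::string type;   // e.g. "NVIDIA", "ATI"
        int count;
    };

    // Index 0 is the CPU; GPU types are identified by array index, so
    // adding a new GPU type means adding an entry, not new fields.
    struct COPROCS {
        int n_rsc;
        COPROC coprocs[MAX_RSC];
    };

    COPROCS coprocs;   // global var, replacing host_info.coprocs

    // Per-resource state such as RSC_WORK_FETCH lives in parallel
    // arrays using the same indices.
    struct RSC_STATE {
        double shortfall;
        double nidle_now;
    };
    RSC_STATE rsc_state[MAX_RSC];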
svn path=/trunk/boinc/; revision=23253
When we first send a job, we pick an app version,
then call wu_is_infeasible_fast()
to see if the host is able to run the job with that app version.
In addition to checking disk space etc.,
this calls wu_is_infeasible_custom() to do project-specific checks
(e.g. for SETI@home: don't use GPUs for VLAR jobs).
However, when we resend a job, we pick an app version
(possibly different from the original one)
and send the job without any checking.
So, for example, we might send a VLAR job to a GPU,
or send a job to a host with insufficient disk space
(because free space has changed since original send).
Solution: call wu_is_infeasible_fast() before resending a job,
and if it returns true, mark the job as done and don't resend it.
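A sketch of the resend path with the added check; infeasible() is a
simplified stand-in for wu_is_infeasible_fast(), which takes the
workunit, chosen app version, and host info:

    struct JOB { bool done; };

    // Nonzero means the host can't run the job with this app version
    // (insufficient disk, project-specific rules, etc.).
    static int infeasible(const JOB&, int /*app_version*/) {
        return 0;   // assume feasible in this sketch
    }

    // On resend, re-check feasibility against the newly chosen app
    // version; if infeasible, mark the job done instead of resending.
    bool maybe_resend(JOB& job, int app_version) {
        if (infeasible(job, app_version)) {
            job.done = true;
            return false;
        }
        return true;
    }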
svn path=/trunk/boinc/; revision=23098