on each request.
- client: when showing how much work a scheduler request returned,
scale by availability (as is done when showing the amount requested)
- client: in account manager requests, <not_started_dur> and
<in_progress_dur> are in wall time, not run time
(i.e. scale them by availability)
Note: there's some confusion in the code between runtime and wall time,
where in general wall time = runtime / availability.
New convention: let's use "runtime" for the former,
and "duration" for the latter.
svn path=/trunk/boinc/; revision=25597
in plan_class_spec by using coproc_pref() and capped_host_fpops()
(moved coproc_perf() to sched_customize.h to make it available
in plan_class_spec.cpp, and cleaned up includes)
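For reference, the performance model behind coproc_perf() is roughly the following (a sketch, not the exact code in sched_customize.h):

    // sketch of the coproc_perf() model: given host CPU FLOPS, (scaled) GPU
    // FLOPS, and the fraction of the app's work done on the CPU, estimate
    // the app version's overall FLOPS and the average number of CPUs it uses
    void coproc_perf_sketch(
        double cpu_flops, double gpu_flops, double cpu_frac,
        double& projected_flops, double& avg_ncpus
    ) {
        // time per unit of work = CPU portion + GPU portion
        double t = cpu_frac/cpu_flops + (1 - cpu_frac)/gpu_flops;
        projected_flops = 1/t;
        avg_ncpus = cpu_frac * projected_flops / cpu_flops;
    }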
svn path=/trunk/boinc/; revision=25467
- scheduler: parse d_project_share
- scheduler: if vbox and vbox_mt are both available,
use vbox for a 1-CPU machine
svn path=/trunk/boinc/; revision=25176
of the full 2 CPUs. Vboxwrapper uses ceil() to allocate enough
whole CPUs for VirtualBox. Ideally this will cause the BOINC
client-side scheduler to use the remaining fraction of the CPU
for GPU data transfer, which will then free up one whole CPU for
another job. All without over-committing anything.
sched/
sched_customize.cpp
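A worked example of the intent (numbers are illustrative): if the plan class assigns a fractional CPU count, vboxwrapper rounds it up so VirtualBox gets whole CPUs, while the client still budgets only the fraction:

    #include <cmath>
    // illustrative: map a fractional CPU allocation to whole VM CPUs
    int vm_cpu_count(double avg_ncpus) {
        return (int)std::ceil(avg_ncpus);
    }
    // e.g. avg_ncpus = 1.5 -> the VM gets 2 CPUs, the client accounts for
    // 1.5, and the leftover 0.5 CPU can cover GPU data transfer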
svn path=/trunk/boinc/; revision=25120
Some credit cheats (e.g. with credit_by_runtime) can be done
by reporting a huge value.
Fix this by capping the value at 1.1 times the 95th percentile
of host.p_fpops, taken over active hosts.
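A sketch of the capping rule described above (names here are illustrative; the percentile value itself would come from a periodic query over active hosts):

    // illustrative: clamp a host's reported p_fpops to 1.1x the 95th
    // percentile of p_fpops over active hosts
    double cap_reported_fpops(double reported_fpops, double fpops_95th_percentile) {
        double cap = 1.1 * fpops_95th_percentile;
        return (reported_fpops > cap) ? cap : reported_fpops;
    }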
svn path=/trunk/boinc/; revision=25017
depending on how many the host has,
and whether CPU VM extensions are present
(this reflects the requirements of CernVM).
svn path=/trunk/boinc/; revision=25009
If found, set HOST_INFO::p_vm_extensions_disabled,
and pass this to the scheduler.
- scheduler (VBox app plan function) if a host has p_vm_extensions_disabled
set, don't send it multicore VBox jobs.
Note: if you have a host with VM extensions, and they're disabled
in the BIOS, and you enable them, you can remove the
<p_vm_extensions_disabled> line from client_state.xml
and you'll be eligible to get multicore VM jobs again.
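A sketch of the plan-function check (the flag name is from this commit; the surrounding code is illustrative):

    // illustrative: in the multicore VBox app plan function, refuse hosts
    // that reported hardware VM extensions disabled in the BIOS
    bool can_get_multicore_vbox_jobs(bool p_vm_extensions_disabled) {
        return !p_vm_extensions_disabled;
    }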
svn path=/trunk/boinc/; revision=24944
Tells multicore apps how many cores to use.
The --nthreads command line arg to the app is now deprecated,
though we'll keep it around for the time being.
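As a sketch of what a multicore app does instead of parsing --nthreads (assuming the value arrives in APP_INIT_DATA; the ncpus field name is an assumption here):

    #include "boinc_api.h"
    #include "app_ipc.h"

    // illustrative: read the allotted core count from the BOINC init data
    int get_nthreads() {
        APP_INIT_DATA aid;
        boinc_get_init_data(aid);
        int n = (int)aid.ncpus;   // field name assumed; see lead-in
        return n > 0 ? n : 1;     // fall back to a single thread
    }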
svn path=/trunk/boinc/; revision=24708
are assumed to be for NVIDIA GPU apps;
plan class names containing 'ati' are assumed to be for AMD GPU apps.
Clauses for 'nvidia' were missing in a couple of places.
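A sketch of the naming convention (the 'nvidia' and 'ati' substrings are from this commit; treating 'cuda' as an NVIDIA marker is an assumption):

    #include <cstring>
    // illustrative: classify a plan class by substrings in its name
    bool plan_class_is_nvidia(const char* name) {
        return strstr(name, "nvidia") != NULL || strstr(name, "cuda") != NULL;
    }
    bool plan_class_is_ati(const char* name) {
        return strstr(name, "ati") != NULL;
    }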
svn path=/trunk/boinc/; revision=24512
(in sched_customize.cpp)
the flops_scale argument is intended to express the
GPU efficiency (actual/peak).
Pass appropriate values.
svn path=/trunk/boinc/; revision=24405
The problem: the choice of app version was based on
the "projected FLOPS" return by estimate_flops(av).
If usage stats exist for the host / app version,
this returns a number X such that
WU.rsc_fpops_est/X approximates the runtime of a job
using the given app version.
(If WU.rsc_fpops_est is way off, this will be correspondingly way off
from the actual FLOPS the app version will get.)
However, if there are no usage stats,
it returns an estimate based on host hardware speed,
which might be 100X less.
Hence, in some cases a new app version would never get used.
Solution: choose app versions based on the values
returned by the app plan functions.
Use estimate_flops() AFTER choosing the version.
- scheduler: improve the accuracy of FLOPS estimation for GPU apps.
The "flops_scale" argument to coproc_perf
(which expresses the difference between peak GPU FLOPS
and actual FLOPS) should be used to scale GPU FLOPS
prior to calling coproc_perf(),
rather than scaling the estimate returned by coproc_perf() (see the sketch below).
- show_shmem: show have_X_apps flags
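A sketch of the intended ordering (coproc_perf_sketch() is the model sketched earlier in this log; the exact signature of the real coproc_perf() may differ):

    // illustrative: apply flops_scale (actual/peak efficiency) to peak GPU
    // FLOPS before the coproc_perf() call, not to the value it returns
    void estimate_gpu_app_perf(
        double cpu_flops,        // host CPU FLOPS (after capping)
        double peak_gpu_flops,   // GPU peak FLOPS from device properties
        double flops_scale,      // e.g. 0.2 if the app reaches ~20% of peak
        double cpu_frac,         // fraction of the app's work done on the CPU
        double& projected_flops, double& avg_ncpus
    ) {
        coproc_perf_sketch(
            cpu_flops,
            peak_gpu_flops * flops_scale,   // scale first...
            cpu_frac,
            projected_flops, avg_ncpus
        );
        // ...rather than doing projected_flops *= flops_scale afterwards
    }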
svn path=/trunk/boinc/; revision=24385
to match those in the clGetDeviceInfo() calls.
Principles:
- if there's already a name for something, use it.
- follow case conventions
svn path=/trunk/boinc/; revision=24344
plan classes after all.
Otherwise (since app_plan() is not passed an app version)
there's no way to enforce that 64 bit hosts are sent
only the 64 bit version (which is necessary because
of the split-registry scheme).
svn path=/trunk/boinc/; revision=23935
which prevented the client from cleaning up
subprocesses of misbehaving multiprocess apps.
- remote job submission system:
assign physical names to input files (based on their MD5)
rather than having the user provide physical names
- VM apps: eliminate vbox64 plan class. Only vbox.
svn path=/trunk/boinc/; revision=23923
- don't create result records for uploads and downloads.
Just create a msg_to_client record.
- the scheduler handles file-transfer results specially;
it makes a vector of them, then calls a project-supplied function
handle_file_xfer_results() (sketched below)
- change the interface and implementation of put_file and get_file
- client: write project sched priority in GUI RPC replies,
but not to the state file
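A sketch of the project hook (the function name is from this commit; the element type and signature are assumptions):

    #include <vector>

    // illustrative stand-in for whatever the scheduler stores per transfer
    struct FILE_XFER_RESULT { /* name, status, ... */ };

    // illustrative: the scheduler collects file-transfer results into a
    // vector and calls this project-supplied function instead of running
    // them through normal result handling
    void handle_file_xfer_results(std::vector<FILE_XFER_RESULT>& results) {
        for (size_t i = 0; i < results.size(); i++) {
            // project-specific bookkeeping for each upload/download
        }
    }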
svn path=/trunk/boinc/; revision=23857
Win: enumerate all descendants, and kill them all with TerminateProcess().
Unix:
send the main process a SIGTERM.
Check once a sec for the existence of descendants.
If none, we're done.
If any still exist after 10 sec, kill all descendants
(see the sketch below).
- wrapper: fix a bug in the Win env var handling
- scheduler: check for VBox version 3.2+ in app_plan()
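A sketch of the Unix-side sequence (get_descendant_pids() is a hypothetical helper standing in for whatever enumerates the process tree):

    #include <signal.h>
    #include <unistd.h>
    #include <vector>

    // hypothetical helper: PIDs of all living descendants of a process
    std::vector<pid_t> get_descendant_pids(pid_t pid);

    // illustrative: terminate a misbehaving multiprocess app on Unix
    void kill_app_process_tree(pid_t main_pid) {
        kill(main_pid, SIGTERM);               // ask the main process to exit
        for (int i = 0; i < 10; i++) {         // check once a second, up to 10 sec
            sleep(1);
            if (get_descendant_pids(main_pid).empty()) return;   // all gone
        }
        std::vector<pid_t> survivors = get_descendant_pids(main_pid);
        for (size_t i = 0; i < survivors.size(); i++) {
            kill(survivors[i], SIGKILL);       // force-kill whatever remains
        }
    }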
svn path=/trunk/boinc/; revision=23085
(either at startup or during execution)
reset a number of "wait until X" variables;
otherwise we might wait years to contact a project, restart a file xfer, etc. (see the sketch below)
Notes:
- there is no problem setting clocks forward; things just happen prematurely
- some variables (e.g. task deadlines) are not reset,
because it's not clear what to set them to
- sched: remove ati_opencl plan class until we understand what it is
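A sketch of the reset (struct and field names are illustrative, not the client's):

    #include <vector>

    struct PROJECT_SKETCH {
        double min_rpc_time;    // "don't contact this project until..."
    };

    // illustrative: if the system clock jumped backwards, clear saved
    // "wait until X" times so we don't wait years for them to pass
    void handle_clock_set_backwards(double now, std::vector<PROJECT_SKETCH>& projects) {
        for (size_t i = 0; i < projects.size(); i++) {
            if (projects[i].min_rpc_time > now) {
                projects[i].min_rpc_time = 0;   // contact whenever needed
            }
        }
        // similar resets for file-transfer backoffs, etc.;
        // task deadlines are deliberately left alone (see note above)
    }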
svn path=/trunk/boinc/; revision=22842
p_model as well as p_features;
pre-6.x clients report them in p_model (see the sketch below).
- client: fix bug where "reread config file" didn't update
the max log file sizes
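A sketch of the check implied above (p_features and p_model are the host fields named in the commit; the rest is illustrative):

    #include <cstring>

    // illustrative: when testing for a CPU feature flag, look in p_model
    // as well as p_features, since pre-6.x clients report flags in p_model
    bool host_has_cpu_feature(
        const char* p_features, const char* p_model, const char* flag
    ) {
        return strstr(p_features, flag) != NULL || strstr(p_model, flag) != NULL;
    }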
svn path=/trunk/boinc/; revision=22838
My change of 1 Oct ([22440]) required that such jobs
be processed with 64-bit apps,
on the assumption that 32-bit apps have a 2 GB user address space limit.
However, it turns out this limit applies only to Windows
(kernel and user mode share the 4 GB address space; each gets half).
On Linux, the split is 3GB user / 1 GB kernel.
On Mac OS X, user mode and kernel mode have separate address spaces,
each of them 4 GB.
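A sketch of the resulting policy (names and structure are illustrative; jobs exceeding the larger Linux/Mac limits are not considered here):

    // illustrative: can a 32-bit app version handle a job with this memory
    // bound? Only Windows imposes the 2 GB user address-space limit.
    bool job_fits_32bit_app(double rsc_memory_bound, bool host_is_windows) {
        const double TWO_GB = 2.0 * 1024.0 * 1024.0 * 1024.0;
        if (host_is_windows && rsc_memory_bound > TWO_GB) return false;
        return true;
    }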
svn path=/trunk/boinc/; revision=22599
- scheduler: add a clause to wu_is_infeasible_custom() for SETI@home:
don't process VLAR jobs using CUDA apps.
Note: this is implemented in a slightly non-optimal way.
If the request asks for both GPU and CPU jobs,
the scheduler will first decide to use the GPU version.
It will scan jobs, skipping over VLAR jobs.
When the GPU request is satisfied, it will switch to the CPU version
and continue scanning, accepting VLAR jobs.
But the jobs that were skipped initially won't be rescanned.
Also, it would be somewhat better to preferentially send
VLAR jobs to hosts asking for CPU work.
(This could be done in the scoring function).
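A sketch of the clause (the "vlar" name tag and the helper is_cuda_app_version() are assumptions, not the real SETI@home code):

    #include <cstring>

    struct WU_SKETCH { const char* name; };    // stand-in for WORKUNIT
    bool is_cuda_app_version();                // hypothetical helper

    // illustrative: in wu_is_infeasible_custom(), reject the pairing of a
    // VLAR workunit with a CUDA app version
    bool vlar_on_cuda_is_infeasible(const WU_SKETCH& wu) {
        if (strstr(wu.name, "vlar") && is_cuda_app_version()) {
            return true;    // skip: VLAR jobs run poorly on CUDA
        }
        return false;
    }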
svn path=/trunk/boinc/; revision=21895
That produced a messed-up query that assigned garbage values to:
host_app_version.turnaround_var
host_app_version.turnaround_q
host_app_version.max_jobs_per_day
host_app_version.consecutive_valid
To repair these:
- set turnaround_var and turnaround_q to zero
- if max_jobs_per_day is outside of
(0..config.daily_result_quota),
set it to config.daily_result_quota
- if consecutive_valid is outside (0..1000), set it to zero
I added a script, html/ops/repair_21812.php, that does this;
if you ran server code between [21181] and [21812], run this script.
- scheduler/transitioner: add <debug_quota> log flag
- changed the build system to always use -Wall
(if we'd done this before, this bug wouldn't have happened)
- fixed a bunch of other compile warnings
svn path=/trunk/boinc/; revision=21812
Old: various redundant and/or misleading messages were sent.
New:
- if host w/ no GPU contacts a GPU-only project,
send high-pri message saying they need a GPU
- if host w/ GPU has driver too old for all versions,
send high-pri message saying to update driver
- if host w/ GPU has driver too old for some versions,
send low-pri message saying to update driver
- if host has GPU but too little RAM for any app,
send low-pri message saying so
- scheduler: revamp GPU plan class functions
svn path=/trunk/boinc/; revision=21760
are now translatable,
using the convention that any substring enclosed in _(" ... ")
should be passed through wxGetTranslation() or the equivalent.
- client: when writing messages to stdout, strip out the _(...) markers (see the sketch below)
- manager: translate strings from client
- scheduler: message tweaks
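A sketch of the stripping step (the function name is made up; the _("...") convention is the one described above):

    #include <string>

    // illustrative: remove _("...") translation markers from a message,
    // keeping the enclosed text, before writing it to stdout
    std::string strip_translation_markers(const std::string& s) {
        std::string out;
        for (size_t i = 0; i < s.size(); ) {
            if (s.compare(i, 3, "_(\"") == 0) {
                size_t end = s.find("\")", i + 3);
                if (end == std::string::npos) { out += s.substr(i); break; }
                out += s.substr(i + 3, end - (i + 3));   // keep the inner text
                i = end + 2;
            } else {
                out += s[i++];
            }
        }
        return out;
    }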
svn path=/trunk/boinc/; revision=21706
pointers to dynamically allocated COPROC-derived objects,
just have the objects themselves.
Dynamic allocation should be avoided at all costs.
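A sketch of the resulting layout (types here are generic stand-ins for the COPROC-derived classes):

    // illustrative: hold the coprocessor descriptors by value rather than
    // keeping pointers to heap-allocated objects; nothing to new/delete
    struct NVIDIA_GPU_DESC { int count; double peak_flops; };
    struct ATI_GPU_DESC    { int count; double peak_flops; };

    struct COPROCS_SKETCH {
        NVIDIA_GPU_DESC nvidia;   // was: pointer allocated with new
        ATI_GPU_DESC ati;
    };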
svn path=/trunk/boinc/; revision=21564
and default it to off
- client: if we print available GPU RAM (which we now don't)
have a separate timer per GPU type
- scheduler: add new plan classes cuda_opencl (sic) and ati_opencl
svn path=/trunk/boinc/; revision=21498