(but not all) wasn't finished.
New logic: if the project has an NCI app then:
- make a list of NCI apps for which the client doesn't have
a job in progress.
- try to send one job for each of these apps
- do this even if no work is being requested.
- don't send jobs for NCI apps by other mechanisms
NOTE: the client logic isn't quite right for mixed NCI projects.
If there's no job for a given NCI app,
the client should do a scheduler RPC.
This isn't critical so we won't do this now.
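A minimal sketch of the per-request pass described above; the container and
helper names (nci_apps, client_has_in_progress_job, try_send_job) are
hypothetical stand-ins, not the actual scheduler code:

    // For each NCI app with no job in progress on this client,
    // try to send one job -- even if no work was requested.
    for (APP& app : nci_apps) {
        if (client_has_in_progress_job(app)) continue;
        try_send_job(app);    // at most one job per app
    }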
svn path=/trunk/boinc/; revision=26068
on each request.
- client: when showing how much work a scheduler request returned,
scale by availability (as is done to show the amount of the request)
- client: in account manager request, <not_started_dur> and
<in_progress_dur> are in wall time, not run time
(i.e. scale them by availability)
Note: there's some confusion in the code between runtime and wall time,
where in general wall time = runtime / availability.
New convention: let's use "runtime" for the former,
and "duration" for the latter.
svn path=/trunk/boinc/; revision=25597
in plan_class_spec by using coproc_perf() and capped_host_fpops()
(moved coproc_perf() to sched_customize.h to make it available
in plan_class_spec.cpp, and cleaned up includes)
svn path=/trunk/boinc/; revision=25467
- scheduler: parse d_project_share
- scheduler: if vbox and vbox_mt are both available,
use vbox for a 1-CPU machine
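A sketch of the tie-break (p_ncpus is the HOST field; prefer() is a
hypothetical helper):

    // With both versions available, a 1-CPU host gains nothing
    // from the multithreaded VM, so use single-threaded vbox.
    if (host.p_ncpus <= 1) {
        prefer("vbox");
    } else {
        prefer("vbox_mt");
    }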
svn path=/trunk/boinc/; revision=25176
of the full 2 CPUs. Vboxwrapper uses ceil() to allocate enough
whole CPUs for VirtualBox. Ideally this will cause the BOINC
client-side scheduler to use the remaining fraction of the CPU
for GPU data transfer, which will then free up one whole CPU for
another job, all without over-committing anything.
sched/
sched_customize.cpp
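The arithmetic above, with illustrative numbers (1.6 CPUs is a made-up
figure, not from this commit):

    #include <cmath>

    double avg_ncpus = 1.6;                    // what the job is charged
    int vm_ncpus = (int)std::ceil(avg_ncpus);  // VirtualBox gets 2 whole cores
    // The uncharged 0.4 CPU is what the client-side scheduler can
    // give a GPU app for data transfer without over-committing.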
svn path=/trunk/boinc/; revision=25120
Some credit cheats (e.g. with credit_by_runtime) can be done
by reporting a huge value.
Fix this by capping the value at 1.1 times the 95th percentile
of host.p_fpops, taken over active hosts.
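A sketch of the cap computation, assuming the p_fpops values of active
hosts are already in memory (the real code may well do this in SQL):

    #include <algorithm>
    #include <vector>

    // Largest credible p_fpops: 1.1x the 95th percentile.
    // Assumes a non-empty sample.
    double fpops_cap(std::vector<double> fpops) {
        std::sort(fpops.begin(), fpops.end());
        double p95 = fpops[(size_t)(0.95 * (fpops.size() - 1))];
        return 1.1 * p95;
    }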
svn path=/trunk/boinc/; revision=25017
depending on how many the host has,
and whether CPU VM extensions are present
(this reflects the requirements of CernVM).
svn path=/trunk/boinc/; revision=25009
If found, set HOST_INFO::p_vm_extensions_disabled,
and pass this to the scheduler.
- scheduler (VBox app plan function): if a host has p_vm_extensions_disabled
set, don't send it multicore VBox jobs (see the sketch below).
Note: if you have a host with VM extensions, and they're disabled
in the BIOS, and you enable them, you can remove the
<p_vm_extensions_disabled> line from client_state.xml
and you'll be eligible to get multicore VM jobs again.
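A sketch of the plan-function clause (the field name is from this commit;
the multicore test is a stand-in for whatever the real code checks):

    if (is_multicore_vbox_class(plan_class)     // hypothetical helper
        && sreq.host.p_vm_extensions_disabled
    ) {
        return false;   // don't send multicore VBox jobs to this host
    }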
svn path=/trunk/boinc/; revision=24944
Tells multicore apps how many cores to use.
The --nthreads command line arg to the app is now deprecated,
though we'll keep it around for the time being.
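For app developers, the replacement for --nthreads is presumably the core
count in the init data; a sketch, assuming it's carried in
APP_INIT_DATA::ncpus:

    #include "boinc_api.h"

    APP_INIT_DATA aid;
    boinc_get_init_data(aid);
    int nthreads = (int)aid.ncpus;   // how many cores to use
    // --nthreads still works for now, but is deprecated.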
svn path=/trunk/boinc/; revision=24708
are assumed to be for NVIDIA GPU apps;
plan class names containing 'ati' are assumed to be for AMD GPU apps.
Clauses for 'nvidia' were missing in a couple of places.
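The convention, sketched as substring tests (the actual clauses live in the
scheduler):

    #include <cstring>

    if (strstr(plan_class, "nvidia")) {
        // treat as an NVIDIA GPU app
    } else if (strstr(plan_class, "ati")) {
        // treat as an AMD GPU app
    }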
svn path=/trunk/boinc/; revision=24512
(in sched_customize.cpp)
the flops_scale argument is intended to express the
GPU efficiency (actual/peak).
Pass appropriate values.
svn path=/trunk/boinc/; revision=24405
The problem: the choice of app version was based on
the "projected FLOPS" returned by estimate_flops(av).
If usage stats exist for the host / app version,
this returns a number X such that
WU.rsc_fpops_est/X approximates the runtime of a job
using the given app version.
(If WU.rsc_fpops_est is way off, this will be correspondingly way off
from the actual FLOPS the app version will get.)
However, if there are no usage stats,
it returns an estimate based on host hardware speed,
which might be 100X less.
Hence, in some cases a new app version would never get used.
Solution: choose app versions based on the values
returned by the app plan functions.
Use estimate_flops() AFTER choosing the version.
- scheduler: improve the accuracy of FLOPS estimation for GPU apps.
The "flops_scale" argument to coproc_perf
(which expresses the difference between peak GPU FLOPS
and actual FLOPS) should be used to scale GPU FLOPS
prior to calling coproc_perf(),
rather than scaling the estimate returned by coproc_perf()
(see the sketch below).
- show_shmem: show have_X_apps flags
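A sketch of the corrected order of operations (simplified; the real
coproc_perf() takes more parameters):

    // Wrong: scale the combined estimate afterwards:
    //   projected = coproc_perf(cpu_flops, gpu_peak_flops, cpu_frac) * flops_scale;
    // Right: scale the GPU term first, then combine:
    double gpu_flops = gpu_peak_flops * flops_scale;   // actual, not peak
    double projected = coproc_perf(cpu_flops, gpu_flops, cpu_frac);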
svn path=/trunk/boinc/; revision=24385
to match those in the clGetDeviceInfo() calls.
Principles:
- if there's already a name for something, use it.
- follow case conventions
svn path=/trunk/boinc/; revision=24344
plan classes after all.
Otherwise (since app_plan() is not passed an app version)
there's no way to enforce that 64-bit hosts are sent
only the 64-bit version (which is necessary because
of the split-registry scheme).
svn path=/trunk/boinc/; revision=23935
which prevented the client from cleaning up
subprocesses of misbehaving multiprocess apps.
- remote job submission system:
assign physical names to input files (based on their MD5)
rather than having the user provide physical names
- VM apps: eliminate vbox64 plan class. Only vbox.
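A hypothetical illustration of the MD5-based naming; the prefix and exact
scheme are guesses, not necessarily the project's actual convention:

    #include <string>

    // Derive a stable physical name from the file's content hash,
    // so identical files map to the same name.
    std::string physical_name(const std::string& md5) {
        return "jf_" + md5;   // e.g. jf_d41d8cd98f00b204e9800998ecf8427e
    }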
svn path=/trunk/boinc/; revision=23923
- don't create result records for uploads and downloads.
Just create a msg_to_client record.
- the scheduler handles file-transfer results specially;
it makes a vector of them, then calls a project-supplied function
handle_file_xfer_results() (see the sketch below)
- change the interface and implementation of put_file and get_file
- client: write project sched priority in GUI RPC replies,
but not to the state file
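The project hook presumably looks something like this (signature inferred
from the description above, not copied from the source):

    #include <vector>

    void handle_file_xfer_results(std::vector<RESULT>& results) {
        for (RESULT& r : results) {
            // project-specific handling of a completed upload/download;
            // no result record is created for these any more
        }
    }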
svn path=/trunk/boinc/; revision=23857
Win: enumerate all descendants, and kill them all with TerminateProcess().
Unix:
send the main process a SIGTERM.
Check once a second for the existence of descendants;
if none remain, we're done.
If any still exist after 10 seconds, kill all descendants
(see the sketch below).
- wrapper: fix bug in Win env var handling
- scheduler: check for VBox version 3.2+ in app_plan()
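A sketch of the Unix sequence above (any_descendants() and
kill_all_descendants() are stand-ins for the real process-tree helpers):

    #include <csignal>
    #include <unistd.h>

    void kill_app(pid_t pid) {
        kill(pid, SIGTERM);                     // ask main process to exit
        for (int i = 0; i < 10; i++) {
            sleep(1);                           // check once a second
            if (!any_descendants(pid)) return;  // all gone: done
        }
        kill_all_descendants(pid);              // force-kill after 10 sec
    }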
svn path=/trunk/boinc/; revision=23085
(either at startup or during execution)
reset a number of "wait until X" variables;
otherwise we might wait years to contact a project, restart a file xfer, etc.
Notes:
- there is no problem setting clocks forward; things just happen prematurely
- some variables (e.g. task deadlines) are not reset,
because it's not clear what to set them to
- sched: remove ati_opencl plan class until we understand what it is
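A sketch of the clock-backwards guard from the first item (variable names
are illustrative, not the client's actual fields):

    double now = dtime();
    if (now < last_time) {
        // Clock went backwards: clear "wait until X" times, else we
        // might wait years to contact a project or restart a transfer.
        next_scheduler_rpc_time = 0;
        next_file_xfer_time = 0;
        // Task deadlines are deliberately left alone.
    }
    last_time = now;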
svn path=/trunk/boinc/; revision=22842
p_model as well as p_features;
pre-6.x clients report them in p_model (see the sketch below).
- client: fix bug where "reread config file" didn't update
the max log file sizes
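A sketch of the dual-field check from the first item above (the helper name
is made up):

    #include <cstring>

    // Pre-6.x clients report CPU features in p_model, so look in both.
    bool host_has_feature(HOST& h, const char* feature) {
        return strstr(h.p_features, feature) || strstr(h.p_model, feature);
    }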
svn path=/trunk/boinc/; revision=22838
My change of 1 Oct ([22440]) required that such jobs
be processed with 64-bit apps,
on the assumption that 32-bit apps have a 2 GB user address space limit.
However, it turns out this limit applies only to Windows
(kernel and user mode share the 4 GB address space; each gets half).
On Linux, the split is 3 GB user / 1 GB kernel.
On Mac OS X, user mode and kernel mode have separate address spaces,
each of them 4 GB.
svn path=/trunk/boinc/; revision=22599
- scheduler: add a clause to wu_is_infeasible_custom() for SETI@home:
don't process VLAR jobs using CUDA apps.
Note: this is implemented in a slightly non-optimal way.
If the request asks for both GPU and CPU jobs,
the scheduler will first decide to use the GPU version.
It will scan jobs, skipping over VLAR jobs.
When the GPU request is satisfied, it will switch to the CPU version
and continue scanning, accepting VLAR jobs.
But the jobs that were skipped initially won't be rescanned.
Also, it would be nice to preferentially send
VLAR jobs to hosts asking for CPU work.
(This could be done in the scoring function).
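A sketch of the new clause (structure follows the description; the real
test in wu_is_infeasible_custom() may differ):

    // In wu_is_infeasible_custom(): don't send VLAR jobs to CUDA apps.
    if (is_vlar(wu) && app_version_uses_cuda(av)) {   // hypothetical helpers
        return true;   // infeasible for this app version
    }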
svn path=/trunk/boinc/; revision=21895