See http://boinc.berkeley.edu/trac/wiki/MultiSize
The components of this include:
- DB changes:
add size_class to workunit and result,
and n_size_classes to app; >1 means multi-size
- size_regulator daemon program: carefully changes result states
from INACTIVE to UNSENT
- size_census program: writes quantile info to flat files
- transitioner: when creating results for multi-size apps,
set server state to INACTIVE
- sched shmem (feeder): read quantile info from flat files,
store in shared memory
- scheduler (score-based scheduling): for multi-size apps,
add a size-class component to the score function
(see the sketch after this list).
- show_shmem: show result size class
- make_work (and other callers of count_unsent_results()):
count both INACTIVE and UNSENT
- create_work: add --size_class cmdline option
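A minimal sketch of what the size-class score term could look like;
the field names and the exact weighting are assumptions, not the actual
scheduler code:

    #include <cstdlib>

    // Hypothetical score component for multi-size apps:
    // favor jobs whose size class matches the host's size class.
    double size_class_score(int wu_size_class, int host_size_class) {
        int diff = std::abs(wu_size_class - host_size_class);
        return (diff == 0) ? 1.0 : -0.5 * diff;   // bonus on match, penalty otherwise
    }

    // ... inside the per-job score computation, roughly:
    // if (app->n_size_classes > 1) {
    //     score += size_class_score(wu.size_class, host_size_class);
    // }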
Also:
- if we get MySQL errors during the DB upgrade, don't rewrite db_version
- change the types of mem-size fields from int to double.
These fields are size_t in NVIDIA's version of this;
however, cuDeviceGetAttribute() returns them as int,
so I don't see where this makes any difference.
- client: fix bug in handling of <no_rsc_apps> element.
- scheduler: message tweaks.
Note: [foo] means that the message is enabled by <debug_foo>.
svn path=/trunk/boinc/; revision=25849
If you link your functions (init_result(), compare_results(),
cleanup_result()) with validate_test.cpp,
you'll get a program that you can run as
validate_test file1 file2
and it will compare the two files
(this works only for validators that expect 1 file per result).
I added a makefile, sched/makefile_validator_test,
that you can use for this.
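For reference, the three hooks have roughly these signatures (as declared
in the standard validator framework, sched/validate_util2.h; details may
differ slightly by version):

    // Skeleton of the functions you link against validate_test.cpp.
    int init_result(RESULT& result, void*& data) {
        // read and parse this result's output file; store parsed data in 'data'
        return 0;
    }
    int compare_results(
        RESULT& r1, void* data1, RESULT const& r2, void* data2, bool& match
    ) {
        // set match = true if the two parsed outputs are equivalent
        match = true;
        return 0;
    }
    int cleanup_result(RESULT const& r, void* data) {
        // free whatever init_result() allocated
        return 0;
    }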
- server: shuffle code so that the above doesn't need to
link MySQL libraries
- client: if we fetch a master file and it contains no scheduler URLs,
show a message of class INTERNAL_ERROR
- client/scheduler: make CUDA_DEVICE_PROP.totalGlobalMem a double,
and remove dtotalGlobalMem.
Although NVIDIA reports RAM size as a size_t,
there's no reason to store it as an integer after that.
svn path=/trunk/boinc/; revision=25542
file_deleter.cpp into a separate program,
since it blocks normal file deletion while it's running.
From Bernd.
- storage stuff
svn path=/trunk/boinc/; revision=25321
to be sent to non-targeted hosts.
The feeder was erroneously putting targeted jobs
in the shared mem cache.
Changes:
- The feeder only enumerates jobs for which
workunit.transitioner_flags is zero (see the sketch after this list).
NOTE: this field is nonzero iff the job is assigned.
- create_work: when creating an assigned job,
set workunit.transitioner_flags appropriately
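In effect, the feeder's enumeration query gains a filter like the one
below; this is a sketch only (the real query in sched/feeder.cpp joins
result and workunit and differs in detail, and enum_limit is an assumed
variable):

    // Hypothetical shape of the feeder's enumeration query,
    // with the added filter on workunit.transitioner_flags.
    char query[1024];
    snprintf(query, sizeof(query),
        "select result.* from result, workunit "
        "where result.server_state=%d "
        "and result.workunitid=workunit.id "
        "and workunit.transitioner_flags=0 "    // skip assigned (targeted) jobs
        "limit %d",
        RESULT_SERVER_STATE_UNSENT, enum_limit
    );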
svn path=/trunk/boinc/; revision=25314
we multiply projected FLOPS by a normal random var
with mean 1 and stddev 0.1.
Make the stddev configurable; in particular it can be zero.
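A sketch of the randomization with a configurable standard deviation;
the helper rand_normal() here is a stand-in, not necessarily what the
scheduler uses:

    #include <cstdlib>
    #include <cmath>

    // Standard normal variate via Box-Muller (illustrative stand-in).
    double rand_normal() {
        double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
        return sqrt(-2 * log(u1)) * cos(2 * M_PI * u2);
    }

    // Multiply projected FLOPS by N(1, stddev); stddev comes from config
    // and may be zero, in which case the estimate is left unchanged.
    double randomize_flops(double projected_flops, double stddev) {
        if (stddev == 0) return projected_flops;
        return projected_flops * (1 + stddev * rand_normal());
    }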
svn path=/trunk/boinc/; revision=25311
- scheduler: parse d_project_share
- scheduler: if vbox and vbox_mt are both available,
use vbox for a 1-CPU machine
svn path=/trunk/boinc/; revision=25176
This now supports two main use cases:
1) there's a job that you want to run once on all hosts,
present and future
(or all hosts belonging to a user, or to a team).
The job is never transitioned, validated, or assimilated.
2) There's a normal job for which you want to use only
hosts belonging to a specific user (e.g. cluster or cloud hosts).
This restriction can be made either when the job is created,
or on the fly,
e.g. as part of a scheme for accelerating batch completion.
For the latter purpose we now provide a function
restrict_wu_to_user(DB_WORKUNIT&, int userid);
The job goes through the standard
transitioner/validator/assimilator path.
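For example, a project daemon might restrict an existing job on the fly
roughly like this (a sketch; wu_id and userid are illustrative variables,
and error handling is abbreviated):

    DB_WORKUNIT wu;
    int retval = wu.lookup_id(wu_id);               // the job to restrict
    if (!retval) {
        // restrict the job to hosts belonging to this user
        retval = restrict_wu_to_user(wu, userid);
    }
    if (retval) {
        fprintf(stderr, "restrict_wu_to_user() failed: %d\n", retval);
    }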
These cases are enabled by config flags
<enable_assignment_multi/>
<enable_assignment/>
respectively.
Assignments of type 2) are no longer stored in shared mem,
so there is no limit on their number.
There is no longer a rule that assigned job names must contain "asgn".
NOTE: this requires a database update.
svn path=/trunk/boinc/; revision=25169
(which won't parse a double as an int)
revealed a type mismatch in FILE_TRANSFER::next_request_time
between client and server.
svn path=/trunk/boinc/; revision=25125
Some credit cheats (e.g. with credit_by_runtime) can be done
by reporting a huge value.
Fix this by capping the value at 1.1 times the 95th percentile
of host.p_fpops, taken over active hosts.
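In other words, the claimed value is clamped roughly as follows (a sketch;
percentile_p_fpops_95 is an assumed name for the precomputed 95th-percentile
value over active hosts):

    // Cap a host's claimed p_fpops to limit credit cheating.
    double capped_p_fpops(double p_fpops, double percentile_p_fpops_95) {
        double cap = 1.1 * percentile_p_fpops_95;
        return (p_fpops > cap) ? cap : p_fpops;
    }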
svn path=/trunk/boinc/; revision=25017
depending on how many the host has,
and whether CPU VM extensions are present
(this reflects the requirements of CernVM).
svn path=/trunk/boinc/; revision=25009
If found, set HOST_INFO::p_vm_extensions_disabled,
and pass this to the scheduler.
- scheduler (VBox app plan function): if a host has p_vm_extensions_disabled
set, don't send it multicore VBox jobs (see the sketch below).
Note: if you have a host with VM extensions, and they're disabled
in the BIOS, and you enable them, you can remove the
<p_vm_extensions_disabled> line from client_state.xml
and you'll be eligible to get multicore VM jobs again.
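A sketch of the check inside the VirtualBox plan-class function; the
surrounding structure and the exact location of the flag are assumptions
based on the description above:

    // In the plan-class function, for multicore ("vbox_mt"-style) VBox apps:
    // if the host's CPU VM extensions are disabled in the BIOS,
    // don't offer it multicore VBox jobs.
    if (is_multicore_vbox_plan_class) {               // assumed flag
        if (sreq.host.p_vm_extensions_disabled) {     // reported by the client
            return false;                             // plan class unusable on this host
        }
    }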
svn path=/trunk/boinc/; revision=24944
is a "runtime outlier", i.e. its runtime does
not correspond to the job's rsc_fpops_est.
Runtime outliers are not counted in the statistics for
elapsed time, turnaround time, and peak FLOPs count.
This is intended for applications like SETI@home,
some of whose jobs finish more or less instantly
(this happens if the data contains a lot of interference).
If a host happens to get a bunch of these short jobs,
its statistics will get skewed: in essence, the server
will think that the host is extremely fast,
and will send it too many jobs.
svn path=/trunk/boinc/; revision=24225
This assigns credit proportional to runtime*p_fpops.
To prevent cheating, p_fpops is capped at the 95th percentile value
among active hosts,
and runtime is capped at a specified limit
(see the sketch after this list).
This option supports apps, like LHC's CERNvm app,
that run for a certain amount of time and then exit.
The CreditNew system doesn't work for such apps.
- trickle_credit:
To prevent cheating,
cap p_fpops at the 95th percentile value among active hosts,
and require a limit on runtime.
- require that trickle handlers supply an initialization function
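Put together, the runtime-based credit computation described above is
roughly as follows (a sketch; p_fpops_95 and runtime_cap are assumed
inputs, and COBBLESTONE_SCALE stands for the usual credit-per-FLOP
normalization constant):

    // Credit proportional to runtime * p_fpops, with p_fpops capped at the
    // 95th-percentile value over active hosts and runtime capped at a
    // project-specified limit.
    double runtime_credit(
        double runtime, double p_fpops, double p_fpops_95, double runtime_cap
    ) {
        if (p_fpops > p_fpops_95) p_fpops = p_fpops_95;
        if (runtime > runtime_cap) runtime = runtime_cap;
        return runtime * p_fpops * COBBLESTONE_SCALE;
    }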
svn path=/trunk/boinc/; revision=24182
- scheduler: when using elapsed time stats to predict runtime,
cap the estimated FLOPS at twice the peak FLOPS;
otherwise, if a host has received a lot of very short jobs
recently, it will get a too-high FLOPS estimate and
will exceed the rsc_fpops_bound limit.
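i.e., roughly (a sketch with assumed variable names for the estimated and
peak FLOPS):

    // When predicting runtime from elapsed-time statistics,
    // don't let the estimated FLOPS exceed twice the device's peak FLOPS.
    if (est_flops > 2 * peak_flops) {
        est_flops = 2 * peak_flops;
    }
    double projected_runtime = wu.rsc_fpops_est / est_flops;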
svn path=/trunk/boinc/; revision=24128
Add parsed_tag and is_tag to the class,
so that parsing functions don't need to declare them
and pass them around.
- Complete the task of using XML_PARSER as the argument
to all parsing functions.
(Internally, many of these functions still use the old XML parser;
that's the next step.)
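After this change, a typical parsing function looks roughly like this
(a sketch of the intended style; FOO and its members are hypothetical,
and exact method names may differ by version):

    // The parser now keeps the current tag internally (parsed_tag/is_tag),
    // so the function no longer declares its own tag buffer.
    int FOO::parse(XML_PARSER& xp) {
        while (!xp.get_tag()) {
            if (xp.match_tag("/foo")) return 0;     // end of element
            if (xp.parse_double("x", x)) continue;
            if (xp.parse_int("count", count)) continue;
            // unrecognized tags are skipped
        }
        return ERR_XML_PARSE;                       // unexpected EOF
    }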
svn path=/trunk/boinc/; revision=23978
- add fields to batch table, extend APIs accordingly
- require that example web interface run on BOINC server
(this makes many things easier;
an actual remote interface would require a bit more work)
svn path=/trunk/boinc/; revision=23881
(structure, not table) for AQUA
- client, Windows: when waking up from hibernation,
get the time before printing the log msg
svn path=/trunk/boinc/; revision=23784
Lets you specify, on a per-app basis,
that all instances should be done using the same app version.
This is for validation in the presence of GPUs.
- scheduler: code cleanup
- Instead of adding a bunch of non-DB fields to RESULT,
use a derived class SCHED_DB_RESULT (see the sketch below).
- Instead of storing a pointer to BEST_APP_VERSION in RESULT,
store the structure itself.
This simplifies the memory allocation situation.
- client: condition "Got server request to delete file" messages
on <file_xfer_debug>
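The derived-class arrangement looks roughly like this (a sketch; the real
definition in the scheduler sources has more members):

    // Scheduler-only result state lives in a derived class instead of
    // being tacked onto the DB-backed RESULT struct.
    struct SCHED_DB_RESULT : public RESULT {
        BEST_APP_VERSION bav;     // stored by value, not by pointer,
                                  // which simplifies memory management
        // ... other scheduler-only fields ...
    };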
svn path=/trunk/boinc/; revision=23636
for some WUs
- back end: fix the way "report grace period" is implemented
old: result.report_deadline (i.e. what's in the DB) and
the deadline sent to the client are the same.
Some confusing and incorrect logic in the transitioner
tries to provide the desired semantics.
new: result.report_deadline is the deadline sent to the client,
plus the grace period.
No logic in the transitioner is needed.
svn path=/trunk/boinc/; revision=23040
- whether host is "reliable" for an app version
- whether host is eligible for single replication for an app version
- whether to use host scaling
In each case, the answer is yes if the number of
consecutive valid results is above a threshold.
This replaces existing "error rate" and "scale probation" mechanisms.
TODO: the # of consecutive valid results should also determine
a limit on jobs in progress for an app version.
Namely, if N is the threshold for host scaling, the limit should be
ndevices*(max(1, consecutive_valid - N))
The client currently doesn't supply enough
app version info to do this.
It could be approximated; that would give some protection
against cherry-picking.
- credit: more conservative formulas for combining claimed credit
among replicas.
If there are normal replicas, we use a "low average"
that weights each sample by the sum of the other samples
(see the sketch below).
Otherwise we use the min (not the average) of the approximate samples.
NOTE: a DB update is required
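The "low average" above is a weighted mean in which each sample's weight
is the sum of the other samples, so large outliers count for less; a
sketch (not the exact function in the credit code):

    #include <vector>
    using std::vector;

    // "Low average": weight each sample by the sum of the other samples.
    double low_average(const vector<double>& x) {
        if (x.size() == 1) return x[0];
        double sum = 0;
        for (double v: x) sum += v;
        double num = 0, denom = 0;
        for (double v: x) {
            double w = sum - v;      // weight = sum of the other samples
            num += w * v;
            denom += w;
        }
        return denom ? num/denom : sum/x.size();
    }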
svn path=/trunk/boinc/; revision=21230
- daily quota mechanism
- reliable mechanism (accelerated retries)
- "trusted" mechanism (adaptive replication)
- scheduler: enforce host scale probation only for apps with
host_scale_check set.
- validator: do scale probation on invalid results
(need this in addition to error and timeout cases)
- feeder: update app version scales every 10 min, not 10 sec
- back-end apps: support --foo as well as -foo for options
Notes:
- If you have, say, cuda, cuda23 and cuda_fermi plan classes,
a host will have separate quotas for each one.
That means it could error out on 100 jobs for cuda_fermi,
and when its quota goes to zero,
error out on 100 jobs for cuda23, etc.
This is intentional; there may be cases where one version
works but not the others.
- host.error_rate and host.max_results_day are deprecated
TODO:
- the values in the app table for limits on jobs in progress etc.
should override those in config.xml.
Implementation notes:
scheduler:
process_request():
read all host_app_versions for the host at the start;
compute "reliable" and "trusted" for each one;
write modified records at the end
get_app_version():
add "reliable_only" arg; if set, use only reliable versions
skip over-quota versions
Multi-pass scheduling: if have at least one reliable version,
do a pass for jobs that need reliable,
and use only reliable versions.
Then clear best_app_versions cache.
Score-based scheduling: for need-reliable jobs,
it will pick the fastest version,
then give a score bonus if that version happens to be reliable.
When we get back a successful result from the client:
increase daily quota
When we get back an error result from the client:
impose scale probation;
decrease daily quota if not aborted
(see the sketch after these notes)
Validator:
when handling a WU, create a vector of HOST_APP_VERSION
parallel to vector of RESULT.
Pass it to assign_credit_set().
Make copies of originals so we can update only modified ones
update HOST_APP_VERSION error rates
Transitioner:
decrease quota on timeout
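A minimal sketch of the quota and reliability bookkeeping outlined above;
the HOST_APP_VERSION field names follow these notes, but the thresholds,
step sizes, and bounds are assumptions:

    // Per (host, app version) bookkeeping.
    void on_result_success(HOST_APP_VERSION& hav) {
        hav.consecutive_valid++;
        if (hav.max_jobs_per_day < MAX_QUOTA) {
            hav.max_jobs_per_day++;          // grow daily quota on success
        }
    }

    void on_result_error(HOST_APP_VERSION& hav, bool aborted) {
        hav.consecutive_valid = 0;           // scale probation: restart the count
        if (!aborted && hav.max_jobs_per_day > MIN_QUOTA) {
            hav.max_jobs_per_day--;          // shrink daily quota on real errors
        }
    }

    // "reliable" and "trusted" are both threshold tests on the number of
    // consecutive valid results for this app version.
    bool is_reliable(const HOST_APP_VERSION& hav) {
        return hav.consecutive_valid >= RELIABLE_THRESHOLD;
    }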
svn path=/trunk/boinc/; revision=21181
TODO: remove related code
- validator: update wu.canonical_credit correctly.
However, this field should be deprecated.
- validator: check for error return from assign_credit_set().
svn path=/trunk/boinc/; revision=21096
are written to the DB.
The incremental approach was bogus.
New approach:
host_app_version: write directly; R/W interval is tiny
app_version: maintain an explicit list of update samples
for both PFC and credit.
When the validator flushes its app_version cache,
do careful updates.
Note: when using double fields in careful updates,
you can't test for equality. Use abs(new-old) < 1e-N
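For example, a careful update of a double-valued field might be issued
like this (a sketch; the table and field names are illustrative, not the
actual query):

    // Careful update: apply the new value only if the field still
    // (approximately) holds the value we read earlier.
    char query[512];
    snprintf(query, sizeof(query),
        "update app_version set pfc_avg=%.15e "
        "where id=%d and abs(pfc_avg-%.15e) < 1e-6",
        new_pfc_avg, app_version_id, old_pfc_avg
    );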
svn path=/trunk/boinc/; revision=21057
see http://boinc.berkeley.edu/trac/wiki/CreditNew
Projects will need to update DB and recompile all back-end programs.
Summary:
- new way of computing credit
- "reliable host" mechanism is per app version
- "host punishment" mechanism is per app version
- adjustment of wu.rsc_fpops_est provides the
equivalent of per app version DCF
- max jobs in progress is now per app
- max jobs per RPC is now per app
TODO:
- reliable mechanism:
- populate and use host_app_version.error_rate
- populate host_app_version.turnaround
- host punishment:
- populate host_app_version.max_jobs_per_day
- populate host_app_version.n_jobs_today
- use app.max_jobs_per_day_init
- job limits:
- use app.max_jobs_in_progress, max_gpu_jobs_in_progress
- use app.max_jobs_per_rpc
- adjust wu.rsc_fpops_est
- remove old credit stuff
fpops_cumulative, credit_multiplier
credit computation in scheduler
- AVERAGE class: use the Knuth algorithm (Wikipedia)
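The Knuth/Welford update keeps a running mean numerically stable without
storing all the samples; a sketch of the intended AVERAGE update (the real
class has additional weighting logic):

    // Incremental update of a running mean:
    // avg_n = avg_{n-1} + (x - avg_{n-1}) / n
    struct AVERAGE_SKETCH {       // illustrative stand-in for the AVERAGE class
        double n = 0;
        double avg = 0;
        void update(double x) {
            n++;
            avg += (x - avg) / n;
        }
    };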
svn path=/trunk/boinc/; revision=21021
New policy: anon platform and old platform jobs
get average credit, possibly scaled by elapsed time.
We no longer attempt to guess what app version produced them.
svn path=/trunk/boinc/; revision=20816
Triggering the work generator is now done via the DB
instead of flat files.
Since only E@h uses locality scheduling,
I kept the DB changes in a separate file (db/schema_locality.sql).
There's a new field in the workunit table,
and that's a required update (in db_update.php)
- manager: compile fix
svn path=/trunk/boinc/; revision=20807