- don't use devices for which work is not being requested
- obey wu_is_infeasible_custom()
(e.g. don't send SETI@home VLAR jobs to GPUs; sketched below)
- scheduler: add <debug_array_detail> log flag for slot-level messages
- admin web: show and allow control of app.beta
- If all results for a workunit were PFC_MODE_INVALID, a NaN pfc could be
used, causing database update errors. Solved by using wu_estimated_pfc()
as the pfc in that case (sketched below).
- The sanity check was comparing raw_pfc directly to rsc_fpops_bound, which
caused problems for GPUs with high performance estimates. Fixed by
including the app_version scale factor in the check (sketched below).
I thought I had already committed this...
- Removed a few lines of commented-out experimental code accidentally
committed earlier.
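A minimal sketch of the kind of check wu_is_infeasible_custom() can make,
assuming the hook signature from sched_customize.cpp; the ".vlar" name
suffix and the uses_gpu() helper are assumptions for illustration, not
SETI@home's actual code:

    #include <cstring>

    // Refuse to send VLAR workunits to GPU app versions (illustrative only;
    // the wu name convention and uses_gpu() are assumed).
    bool wu_is_infeasible_custom(WORKUNIT& wu, APP& app, BEST_APP_VERSION& bav) {
        if (strstr(wu.name, ".vlar") && bav.host_usage.uses_gpu()) {
            return true;
        }
        return false;
    }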
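Similarly, a sketch of the two pfc fixes above (variable names and the
wu_estimated_pfc() arguments are assumptions about the surrounding credit
code, not the exact patch):

    // If every result was PFC_MODE_INVALID, fall back to the workunit's
    // estimated pfc rather than letting a NaN reach the database.
    if (n_valid_pfc == 0) {
        pfc = wu_estimated_pfc(wu, app);
    }
    // Sanity check: include the app_version scale factor so fast GPU
    // versions aren't rejected spuriously.
    if (raw_pfc > wu.rsc_fpops_bound * app_version_scale) {
        // fails the check; don't use it to update host_app_version stats
    }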
- Committed to git repository on 8/24
svn path=/trunk/boinc/; revision=26144
In LLS array pass, skip file-on-host check if host
doesn't have any sticky files.
TODO: it should actually be "any sticky files for this app".
But we currently don't have any way to know that.
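A rough sketch of the skip; the assumption here is that the host's sticky
files arrive in the scheduler request's file_infos list (the field name is
my guess):

    // No sticky files reported means the file-on-host check can't succeed,
    // so skip it entirely for this pass.
    if (g_request->file_infos.empty()) {
        // skip the per-job file-on-host check
    } else {
        // ... existing file-on-host check ...
    }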
svn path=/trunk/boinc/; revision=26108
We were failing to mark the cache entries as free.
- API: initialize GPU device # to -1;
if the client doesn't give us a device number, something is wrong
and it's better not to start computing.
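From the app side the idea looks roughly like this, assuming the device
number is delivered in APP_INIT_DATA::gpu_device_num (the error handling
shown is just one reasonable choice, not the API's prescribed behavior):

    #include <cstdio>
    #include <cstdlib>
    #include "boinc_api.h"

    int choose_gpu_device() {
        APP_INIT_DATA aid;
        boinc_get_init_data(aid);
        if (aid.gpu_device_num < 0) {
            // -1 means the client never assigned a device;
            // better to stop than to guess.
            fprintf(stderr, "no GPU device number from client\n");
            exit(1);
        }
        return aid.gpu_device_num;
    }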
svn path=/trunk/boinc/; revision=26079
We were using a static BEST_APP_VERSION in
check_homogeneous_app_version(),
and it wasn't being initialized on each call
(e.g. its HOST_USAGE was not being cleared).
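The fix amounts to resetting the static object on every call; a minimal
sketch (the real change may differ in detail):

    static BEST_APP_VERSION bav;
    // value-assign a freshly constructed object so nothing from the
    // previous call (including its HOST_USAGE) survives
    bav = BEST_APP_VERSION();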
svn path=/trunk/boinc/; revision=26076
(but not all) wasn't finished.
New logic: if the project has an NCI app then:
- make a list of NCI apps for which the client doesn't have
a job in progress.
- try to send one job for each of these apps
- do this even if no work is being requested.
- don't send jobs for NCI apps by other mechanisms
NOTE: the client logic isn't quite right for mixed NCI projects.
If there's no job for a given NCI app,
the client should do a scheduler RPC.
This isn't critical so we won't do this now.
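The per-app pass above boils down to something like this sketch (the
helper names are placeholders, not the actual scheduler functions):

    #include <vector>

    struct NCI_APP { int id; };

    // Placeholders for the real scheduler queries/actions.
    static bool client_has_job_in_progress(const NCI_APP&) { return false; }
    static void send_one_job(const NCI_APP&) {}

    // Keep one job in progress per NCI app, independent of how much
    // work the client asked for.
    static void send_nci_jobs(const std::vector<NCI_APP>& nci_apps) {
        for (size_t i = 0; i < nci_apps.size(); i++) {
            if (client_has_job_in_progress(nci_apps[i])) continue;
            send_one_job(nci_apps[i]);
        }
    }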
svn path=/trunk/boinc/; revision=26068
cmdline tool for remote job submission (not done)
- remote job submission: support the 4 file modes described
in the documentation (not done)
svn path=/trunk/boinc/; revision=26067
and non-CPU-intensive applications.
An app can be specified as non-CPU-intensive in project.xml,
and this attribute can be set or cleared using the admin web interface.
Note: support for this was added to the client in 2011,
but we didn't add server-side support at that time.
This change is in 6.12 and later clients.
svn path=/trunk/boinc/; revision=26060
- add a config item vda_host_timeout.
A host that hasn't done a scheduler RPC for this long
is considered dead.
- a host that's not running a version 7+ client is considered dead
- host.cpu_efficiency (an otherwise unused field) is used
as a flag for dead hosts
- the scheduler clears the flag if the client is v7+
- vdad sets the flag for hosts where last RPC is old
- before choosing a host for chunk download,
vdad checks its client version.
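Putting those pieces together, the liveness test is roughly the following
(a sketch; host.rpc_time is assumed to hold the time of the last scheduler
RPC):

    bool host_is_alive(HOST& host, double now, double vda_host_timeout) {
        // flag set by vdad, cleared by the scheduler for v7+ clients
        if (host.cpu_efficiency != 0) return false;
        // no scheduler RPC within vda_host_timeout
        if (now - host.rpc_time > vda_host_timeout) return false;
        return true;
    }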
svn path=/trunk/boinc/; revision=26059
- Allow projects to report "desired disk usage" (DDU).
If the client learns that a project wants disk space,
it can shrink the allocation to other projects.
- Base share computation on DDU rather than disk usage.
- Introduce the notion of "disk resource share".
This is defined (somewhat arbitrarily) as resource share
plus 1/10 of the largest resource share.
This is intended to ensure that even zero-share projects
get enough disk space to store app versions and data files;
otherwise they wouldn't be able to compute.
- server: use host.d_boinc_max (which wasn't being used)
to store the d_project_share reported by the client.
- volunteer storage: change the way hosts are allocated to chunks.
Allow hosts to store several chunks of the same file, if needed.
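The "disk resource share" is simple enough to state as code (a sketch of
the definition above, not the client's actual implementation):

    // resource share plus 1/10 of the largest resource share, so even
    // zero-share projects get disk for app versions and data files
    double disk_resource_share(double project_share, double largest_share) {
        return project_share + 0.1 * largest_share;
    }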
svn path=/trunk/boinc/; revision=26052
Do first read from socket before opening the disk file
(an attempt to fix filesystem lockups on WCG).
Increase buffer size from 16KB to 256KB.
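Roughly, the reordering looks like this (a sketch using POSIX calls, not
the actual file_upload_handler code):

    #include <cstdio>
    #include <unistd.h>

    #define BLOCK_SIZE (256*1024)   // was 16 KB

    // Read the first block from the socket before opening the disk file
    // (an attempt to avoid filesystem lockups if the read stalls).
    static void handle_upload(int sock, const char* path) {
        static char buf[BLOCK_SIZE];
        ssize_t n = read(sock, buf, sizeof(buf));
        if (n <= 0) return;              // nothing arrived; file never opened
        FILE* f = fopen(path, "ab");
        if (!f) return;
        fwrite(buf, 1, (size_t)n, f);
        // ... continue with subsequent blocks ...
        fclose(f);
    }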
svn path=/trunk/boinc/; revision=26046
while writing to them.
It's not clear to me that this locking is beneficial,
and it may be causing filesystem problems at WCG.
- volunteer storage stuff
svn path=/trunk/boinc/; revision=26021
- When a normal and an approx result are compared, the normal result now
gets double weight in a pegged_avg() with any approx results. "Normal
mode" GPU results are frequently producing bad credit values for
as-yet-undetermined reasons. Since GPUs are so fast, there can be a lot
of bad values in a short time. Including the prior average and another
result seems to prevent this in many cases (sketched below).
- Replaced many of the if { msg_log.printf } with msg_log.cond_printf()
- Accidentally changed some of the formatting when trying a new editor that
apparently autoformats. Sorry for the extra diff lines.
- There's a problem with pfc calculation for hosts whose credit calculation
fails the sanity check. This has been a problem for a long time. Because the
result fails the sanity check, the host_app_version pfc is never updated.
Because hav.pfc is never updated, the credit calculation continues to be
wrong. To quote Quinn, it's like one of those vicious things. I hope to
fix that soon.
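To illustrate the weighting (this is not the actual pegged_avg() code,
just the scheme described above): the normal result counts twice, each
approx result once, and the prior average comes in as one more sample.

    #include <vector>

    double combine_pfc(double normal_pfc,
                       const std::vector<double>& approx_pfcs,
                       double prior_avg) {
        double sum = 2.0*normal_pfc + prior_avg;  // normal gets double weight
        double n = 3.0;                           // 2 for normal + 1 for prior
        for (size_t i = 0; i < approx_pfcs.size(); i++) {
            sum += approx_pfcs[i];
            n += 1.0;
        }
        return sum/n;
    }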
svn path=/trunk/boinc/; revision=25999
"cpu" in XML, and other code was looking for "CPU".
To fix this and prevent similar problems,
processor type names are now encapsulated in proc_type_name_xml().
Code should use this rather than having hard-wired names.
Redefine: GPU_TYPE_* as macros that call proc_type_name_xml().
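The redefinition looks roughly like this (the function and the PROC_TYPE_*
constants are in the BOINC source; the exact macro bodies here are from
memory and should be read as a sketch):

    // processor type names now come from one place
    extern const char* proc_type_name_xml(int proc_type);

    #define GPU_TYPE_NVIDIA proc_type_name_xml(PROC_TYPE_NVIDIA_GPU)
    #define GPU_TYPE_ATI    proc_type_name_xml(PROC_TYPE_AMD_GPU)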
svn path=/trunk/boinc/; revision=25996
proximity to the average estimate. This reduces the number of results that
are granted extremely low credit when a new app_version is released and (I
hope) improves work/duration estimates by speeding up the convergence of
app version stats. I may try using this in lieu of low_average for normal
results, but haven't yet.
svn path=/trunk/boinc/; revision=25953