individual jobs rather than globally.
To use this, projects must add <report_immediately/>
to the <result> elements in job templates
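For illustration, a minimal C++ sketch of the client-side idea
(everything here is hypothetical except the <report_immediately/>
tag itself):

    // The job template's <result> element would carry the tag:
    //   <result>
    //       ...
    //       <report_immediately/>
    //   </result>
    struct RESULT {
        bool ready_to_report = false;
        bool report_immediately = false;  // parsed from the template
    };

    // Hypothetical hook run when a job finishes.
    void on_job_completed(RESULT& r, bool& rpc_needed) {
        r.ready_to_report = true;
        // ask for a scheduler RPC now instead of waiting to batch
        // this report with others
        if (r.report_immediately) rpc_needed = true;
    }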
svn path=/trunk/boinc/; revision=23515
- All sticky files are reported on each scheduler RPC
- If a scheduler reply says to delete a file, clear its sticky flag
In particular:
- remove the "send file list" tag in scheduler RPC replies
- remove FILE_INFO::marked_for_delete
- remove FILE_INFO::report_on_rpc
- remove the request_file_list program
svn path=/trunk/boinc/; revision=23431
- new GPU types can be added easily
- users can specify GPUs in cc_config.xml,
referred to by app_info.xml,
and they will be scheduled by BOINC
and passed --device N options
Note: the parsing of cc_config.xml is not done yet.
- RPC protocols (account manager and scheduler)
can now specify GPU types in separate elements
rather than embedding them in tag names
e.g. <no_rsc>NVIDIA</no_rsc> rather than <no_cuda/>
- client: in account manager replies, parse elements of the form
<no_rsc>NAME</no_rsc>
indicating the GPUs of type NAME should not be used.
This allows account managers to control GPU types
not hardwired into the client.
Note: <no_cuda/> and <no_ati/> will continue to be supported.
- scheduler RPC reply: add
<no_rsc_apps>NAME</no_rsc_apps>
(NAME = GPU name)
to indicate that the project has no jobs for the indicated GPU type.
<no_cuda_apps> etc. are still supported
- client/lib: remove set_debts() GUI RPC
- client/scheduler RPC
remove <cuda_backoff> etc. (superseded by no_app)
Exception: <ip_result> elements in sched request
still have <ncudas> and <natis>.
Fix this later.
Implementation notes:
- client/lib: change "CUDA" to "NVIDIA" in type/variable names, and in XML
Continue to recognize "CUDA" for compatibility
- host_info.coprocs no longer used within the client;
use a global var (COPROCS coprocs) instead.
COPROCS now has an array of COPROCs;
GPU types are identified by the array index.
Index zero means CPU (see the sketch after these notes).
- a bunch of other resource-specific structs (like RSC_WORK_FETCH)
are now stored in arrays, with same indices as COPROCS
(i.e. index 0 is CPU)
- COPROCS still has COPROC_NVIDIA and COPROC_ATI structs to hold vendor-specific info
- APP_VERSION now has a struct GPU_USAGE to describe its GPU usage
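A rough C++ sketch of the new layout (field and function names are
illustrative, not the actual declarations):

    #include <string>
    #include <vector>

    struct COPROC {
        std::string type;   // "CPU", "NVIDIA", "ATI", or a new name
        int count = 0;      // number of instances
    };

    struct COPROCS {
        std::vector<COPROC> coprocs;  // index 0 is the CPU
        // look up a GPU type by name; -1 if absent
        int lookup_type(const std::string& name) const {
            for (size_t i = 1; i < coprocs.size(); i++) {
                if (coprocs[i].type == name) return (int)i;
            }
            return -1;
        }
    };

    // Resource-specific structs (like RSC_WORK_FETCH) live in
    // parallel arrays with the same indices, so index 0 is the CPU
    // there too.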
svn path=/trunk/boinc/; revision=23253
- manager: change "add account manager" to "use account manager".
"Add" is confusing, because you can't add multiple account managers
like you add projects.
- client: recognize a few new ATI GPU models
from Robert Kreß
svn path=/trunk/boinc/; revision=22843
and an upload started in the last 5 min, don't fetch work from it.
The goal is to merge the 2 scheduler RPCs
(fetch work, report completed tasks) into a single RPC.
Note: this may result in idleness in some cases.
- scheduler: if client doesn't handle plan class (pre-5.10),
check plan-class app versions anyway,
but only use such a version if it's a single-CPU app.
This allows single-CPU app versions with specific requirements
(like SSE) to be issued to old clients.
From Bernd Machenschalk
svn path=/trunk/boinc/; revision=22841
Additions to request message:
<not_started_dur>X</not_started_dur>
<in_progress_dur>X</in_progress_dur>
The estimated remaining duration of unstarted
and in-progress tasks
Additions to reply message, within <project>, optional:
<suspend>0|1</suspend>
suspend or resume project (overrides local state)
<abort_not_started>0|1</abort_not_started>
if set, abort unstarted jobs
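A minimal C++ sketch of how the client might compute the two new
request fields (the TASK layout and names are assumptions):

    #include <cstdio>

    struct TASK {
        bool started;
        double est_remaining;  // estimated remaining duration, secs
    };

    void write_duration_fields(const TASK* tasks, int n, FILE* f) {
        double not_started = 0, in_progress = 0;
        for (int i = 0; i < n; i++) {
            (tasks[i].started ? in_progress : not_started)
                += tasks[i].est_remaining;
        }
        fprintf(f, "<not_started_dur>%f</not_started_dur>\n",
            not_started);
        fprintf(f, "<in_progress_dur>%f</in_progress_dur>\n",
            in_progress);
    }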
svn path=/trunk/boinc/; revision=22698
If # of ready-to-report tasks > max_tasks_reported,
then the excess ready-to-report tasks weren't getting
reported to the scheduler at all (i.e. not in <other_results> either)
so the scheduler would resend them
(not a fatal problem, but a waste of bandwidth).
From Josef Segur.
svn path=/trunk/boinc/; revision=22500
allow for the possibility that suspended BOINC apps
aren't really suspended
(e.g. multithread apps that don't use boinc_init_parallel())
- client: message tweak
svn path=/trunk/boinc/; revision=22388
where the client tells the scheduler which app versions
its queued jobs use
(this is needed, e.g., to enforce per-app or per-resource job limits).
In this mechanism, the client sends an array of <app_version>s,
and each <other_result> includes an index into this array.
- The wrong index was being sent (client).
- If an <app_version> had a non-existent app name
(e.g. because that app had been deprecated)
it wasn't getting put in the array, invalidating array indices
Furthermore, an erroneous message was being sent to the user
Fix: if there's a parse error for an <app_version>,
put it in the array anyway, but with cav.app = NULL,
meaning that it's a place-holder (sketched below).
Send a message to the user only if using the anonymous platform.
- manager: increase notice buffers to 64K
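A C++ sketch of the place-holder fix (struct names abbreviated;
only the cav.app = NULL convention is from this change):

    #include <vector>

    struct APP;  // details irrelevant here

    struct CLIENT_APP_VERSION {
        APP* app = nullptr;  // NULL means place-holder
    };

    // Always append, even on a parse error, so the indices carried
    // by <other_result> entries stay valid.
    void add_app_version(std::vector<CLIENT_APP_VERSION>& cavs,
                         bool parsed_ok, APP* app) {
        CLIENT_APP_VERSION cav;
        cav.app = parsed_ok ? app : nullptr;
        cavs.push_back(cav);
    }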
svn path=/trunk/boinc/; revision=22052
This feature lets you run the BOINC client as a job on grid systems
that handle only 1-CPU jobs;
it disables various mechanisms that prevent multiple clients per host
(which is normally a bad thing).
Old:
- Run the client with the --allow_multiple_clients flag.
This tells it not to use a mutex that prevents
multiple clients per host.
- Run the project with the <multiple_clients_per_host> config flag.
This suppresses two mechanisms:
- (avoid duplicate host records)
on a scheduler request with no host ID,
looks for a host with same domain name, OS type,
and mem size, and assumes the request is from that host
- (job retry)
If we get a request that doesn't have a host ID
but does have a host CPID,
mark its in-progress results as over
NOTE: I CAN'T REMEMBER WHY WE SUPPRESS THIS;
MARK S, DO YOU REMEMBER?
Problem:
if the grid clients attach to a project that
doesn't use <multiple_clients_per_host>, bad things happen.
E.g., if there are several requests at about the same time,
most of them will fail with
"another RPC already in progress" errors.
If a project does include this flag,
it loses protection from duplicate host records.
New:
- If the client is run with the --allow_multiple_clients flag,
it passes an <allow_multiple_clients> element
in scheduler requests.
- The scheduler skips the duplicate-host check on
requests that include this flag (sketched below).
- There is no more <multiple_clients_per_host> scheduler option.
Note: if a project using the old mechanism upgrades to this change,
it will need to use new clients for its grid deployment.
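A C++ sketch of the new scheduler-side behavior (names are
hypothetical; return value 0 stands for "create a new host record"):

    struct REQUEST {
        int hostid;                   // 0 if none
        bool allow_multiple_clients;  // from <allow_multiple_clients>
    };

    int find_host(const REQUEST& req) {
        if (req.hostid) return req.hostid;
        if (req.allow_multiple_clients) {
            // skip the duplicate-host lookup entirely
            return 0;
        }
        // otherwise look for a host with the same domain name,
        // OS type, and memory size (lookup omitted here)
        return 0;
    }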
svn path=/trunk/boinc/; revision=21839
for messages intended as notices.
This will avoid showing lots of obscure stuff as notices
for projects with old server code.
svn path=/trunk/boinc/; revision=21836
If set, then:
if there are any active jobs at startup, don't fetch more work
otherwise make exactly 1 scheduler RPC requesting work,
and request only enough jobs to fill all devices.
- client: --exit_when_idle: make it available in config file
and change semantics to:
If set: exit if
1) there are no tasks, and
2) either there was an active task on startup,
or we made a scheduler RPC requesting work
Note: if there are no active tasks on startup,
and the client makes a work request that doesn't return work,
it will exit.
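The new exit condition, as a C++ sketch (flag names hypothetical):

    bool should_exit(bool exit_when_idle, int ntasks,
                     bool had_tasks_on_startup, bool requested_work) {
        if (!exit_when_idle) return false;
        return ntasks == 0
            && (had_tasks_on_startup || requested_work);
    }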
svn path=/trunk/boinc/; revision=21680
pointers to dynamically allocated COPROC-derived objects,
just have the objects themselves.
Dynamic allocation should be avoided at all costs.
svn path=/trunk/boinc/; revision=21564
Add more info to "project in-progress job list".
Old: entries included only job name and app plan class;
this was used to resend lost jobs,
and to count the # of CPU and GPU jobs.
But it's not usable e.g. for per-app in-progress limits.
New: send the client's app versions (including usage info)
and for each in-progress job, which app version it uses.
(This reduces request-message size compared with sending
usage info and app name per job; see the sketch below.)
- client and scheduler RPC:
Add more info to "all in-progress job list", and make it optional.
This list is used by schedulers that do deadline checks
using EDF workload simulation.
Old: the list is always sent, and it contains no info
about job resource usage
New: the list is sent only if the scheduler asked for it
in a previous reply,
and each entry now contains resource usage (CPU, GPUs)
Note: the scheduler's EDF simulator is outdated;
it doesn't know about GPU jobs.
But we may as well get the info in place.
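A sketch of the request layout this implies (tag and field names
are illustrative, not the actual protocol):

    #include <string>

    // App versions are sent once, in an array; each in-progress job
    // then carries only an index into that array, which is smaller
    // than repeating usage info and app name per job.
    struct IP_RESULT {
        std::string name;       // job name
        int app_version_index;  // index into the app-version array
    };
    // Entries in the optional "all in-progress" list would also
    // carry resource usage (CPU and GPU counts).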
svn path=/trunk/boinc/; revision=21513
There are now separate flags for
"file_xfers_suspended": don't do file transfers
"network_suspended": don't do any network comm
(scheduler RPCs, RSS fetch, master fetch, etc.)
The policy (sketched in code below):
if preferences/settings say no network
(quota exceeded, no-network mode, user active, time, excl. app)
then:
file_xfers_suspended = true
if (no recent network-related RPC) network_suspended = true
- user web: code cleanup for project prefs
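The policy above, as a small C++ sketch (parameter names are mine):

    void set_suspend_flags(bool prefs_say_no_network,
                           bool recent_network_rpc,
                           bool& file_xfers_suspended,
                           bool& network_suspended) {
        file_xfers_suspended = false;
        network_suspended = false;
        if (prefs_say_no_network) {
            file_xfers_suspended = true;
            if (!recent_network_rpc) network_suspended = true;
        }
    }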
svn path=/trunk/boinc/; revision=21299
and is the same, don't copy (it might not be writeable)
- client: change "result" to "task" in user-visible messages
svn path=/trunk/boinc/; revision=20785
Old: it's based entirely on CPU time.
So a GPU project, whose app uses only a fraction
of a CPU, accrues positive debt.
This is OK if the project has only GPU apps,
since STD is not (currently) used for GPU scheduling.
But some projects have both CPU and GPU apps.
New: STD is based on total processing.
It has terms for each resource type.
The notion of "runnable resource share" is specific to a type.
Note: the notion of "resource share fraction" appears in
a couple of other places:
- it's passed to apps in app_init_data.xml
- it's passed in scheduler requests.
It should be broken down by resource type in these cases too.
Note to self: do this later.
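A very rough C++ sketch of per-resource STD terms (the real debt
accounting is more involved; treat this as the shape, not the math):

    #include <vector>

    struct PROJECT_STD {
        // one term per resource type; index 0 = CPU, then GPU types
        std::vector<double> terms;
    };

    // Accrue debt for one resource over dt seconds: credit the
    // processing the project deserved (its runnable resource share
    // for this type), debit what it actually got.
    void accrue(PROJECT_STD& p, int rsc, double dt,
                double runnable_share_fraction, double secs_used) {
        p.terms[rsc] += runnable_share_fraction * dt - secs_used;
    }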
svn path=/trunk/boinc/; revision=19762
didn't work due to a typo.
- client: if <ncpus> is present in cc_config.xml,
we're supposed to act as if there were that many physical CPUs.
In particular, we need to set host_info.p_ncpus to that value,
since that's what is reported in scheduler requests.
svn path=/trunk/boinc/; revision=19522
1) if an APP_VERSION is missing a coprocessor,
don't delete it and its files.
(If the coprocessor returns, we won't need to re-download)
2) if a RESULT uses an app version that is missing a coprocessor,
abort it (rather than deleting it).
The client will report the result on the next scheduler RPC,
and the server will make a new instance.
svn path=/trunk/boinc/; revision=19235
and <ati_backoff> elements to scheduler reply.
These specify backoffs for the resource types,
overriding the existing backoff mechanism.
Projects can supply these if they don't have apps of a particular type
and don't want to get periodic requests for them.
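A C++ sketch of honoring such an element on the client (assuming
the element value is in seconds; that unit is my guess, not
confirmed here):

    #include <ctime>

    struct RSC_PROJECT_STATE {
        double backoff_until = 0;  // wall-clock time
    };

    // e.g. the reply contained <cuda_backoff>86400</cuda_backoff>
    void handle_backoff(RSC_PROJECT_STATE& s, double backoff_secs) {
        // overrides the usual exponential backoff
        s.backoff_until = (double)time(nullptr) + backoff_secs;
    }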
svn path=/trunk/boinc/; revision=19059
If you have 2 CPUs and a 1-day job in EDF mode,
the busy time should be zero, not 0.5 days.
Add a class BUSY_TIME_ESTIMATOR that makes a somewhat better
(though still fairly crude) estimate.
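A C++ sketch in the spirit of BUSY_TIME_ESTIMATOR (the real class
is surely more elaborate): assign each job greedily to the
least-busy instance, and report how long a new job would wait.

    #include <algorithm>
    #include <vector>

    struct BusyTimeEstimator {
        std::vector<double> busy;  // per-instance busy time
        explicit BusyTimeEstimator(int n) : busy(n, 0) {}
        void add_job(double duration) {
            *std::min_element(busy.begin(), busy.end()) += duration;
        }
        // a new job can start when the least-busy instance frees up
        double busy_time() const {
            return *std::min_element(busy.begin(), busy.end());
        }
    };
    // With 2 instances and one 1-day job: busy_time() == 0,
    // not half a day.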
svn path=/trunk/boinc/; revision=19003
with a GPU request if project is anonymous platform
AND it has an app for that GPU type
- client: report overall work request as well as per-resource-type requests
svn path=/trunk/boinc/; revision=18994
to the max of the requests for different resource types.
Otherwise projects with old schedulers won't send us work.
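As a one-liner sketch (field names assumed):

    #include <algorithm>

    // old schedulers look only at the overall request, so make it
    // the largest of the per-resource requests
    double overall_req_seconds(double cpu_req, double cuda_req,
                               double ati_req) {
        return std::max(cpu_req, std::max(cuda_req, ati_req));
    }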
svn path=/trunk/boinc/; revision=18945
e.g. the Milkyway@home ATI app, of which we can typically run
2 or 3 instances at once on a GPU.
Changes include:
- In APP_VERSION, don't use a COPROCS to represent the GPU
requirements; just use doubles ncudas and natis.
- sufficient_coprocs() etc. are no longer members of COPROCS
- in HOST_USAGE, ncudas and natis are doubles
- in scheduler request, req_instances is now a double
This checkin doesn't include the job scheduling logic,
i.e. assigning jobs to GPUs. That will follow.
svn path=/trunk/boinc/; revision=18868
We need to estimate 2 different delays for each resource type:
1) "saturated time": the time the resource will be fully utilized
(new name for the old "estimated delay").
This is used to compute work requests.
2) "busy time": the time a new job would have to wait
to start using this resource.
This is passed to the scheduler and used for a crude deadline check.
Note: this is ill-defined; a single number doesn't suffice.
But as a very rough estimate, I'll use the sum of
(J.duration * J.ninstances)/ninstances
over all jobs that miss their deadline under RR sim.
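The rough estimate above, as a C++ sketch (the JOB fields are my
shorthand for the RR-simulation results):

    struct JOB {
        double duration;
        int ninstances;        // instances of this resource it uses
        bool misses_deadline;  // outcome of RR simulation
    };

    double busy_time(const JOB* jobs, int njobs, int ninstances) {
        double sum = 0;
        for (int i = 0; i < njobs; i++) {
            if (!jobs[i].misses_deadline) continue;
            sum += jobs[i].duration * jobs[i].ninstances
                / ninstances;
        }
        return sum;
    }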
svn path=/trunk/boinc/; revision=18629
- client: show times correctly in rr_sim debug msgs
- client: in "requesting new tasks" msg,
say what resources we're requesting (if there's more than CPU)
- client: estimated delay was possibly being calculated incorrectly
because of roundoff error
svn path=/trunk/boinc/; revision=18269
- client: Haiku support (from Urias McCullough)
- client: include plan class in other_result list in sched request
(for resource-specific jobs-in-progress limit)
svn path=/trunk/boinc/; revision=18250
Instead, write the info into a file in the slot directory,
and check for these files on startup.
This should reduce the overhead of state-file writing
on machines with lots of cores.
There will still be a flurry of writes each time a job finishes,
but reducing that overhead would be a larger job.
- client: make sure we write the state file after a failed RPC
svn path=/trunk/boinc/; revision=17814
CPU time is visible in task Properties.
- Manager: in task Properties, show final CPU and elapsed times
if job is finished
- client: honor backoff for account-manager-requested scheduler RPCs
- client: keep track of final elapsed time for results
- GUI RPC: report final elapsed time
svn path=/trunk/boinc/; revision=17588
app versions in scheduler reply
- client: when reporting anonymous platform apps in sched request,
don't include <file_info>s (not relevant to server)
svn path=/trunk/boinc/; revision=17507
when to do a scheduler RPC:
if user request or acct mgr request, ignore backoff and suspend via GUI;
in all other cases honor both of these.
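The gating rule, as a C++ sketch (parameter names are mine):

    bool rpc_allowed(bool user_req, bool acct_mgr_req,
                     bool backed_off, bool suspended_via_gui) {
        // user and account-manager requests override both checks
        if (user_req || acct_mgr_req) return true;
        return !backed_off && !suspended_via_gui;
    }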
svn path=/trunk/boinc/; revision=17503
to ask for work inappropriately,
and to tell the user that it wasn't asking for work.
Here's what was going on:
There are two different structures with work request fields
(req_secs, req_instances, estimated_delay):
COPROC_CUDA *coproc_cuda
and
RSC_WORK_FETCH cuda_work_fetch.
WORK_FETCH::choose_project() copied from cuda_work_fetch to coproc_cuda,
but only if a project was selected.
WORK_FETCH::clear_request() clears cuda_work_fetch but not coproc_cuda.
Scenario:
- a scheduler op is made to project A requesting X>0 secs of CUDA
- later, a scheduler op is made to project B for reason
other than work fetch (e.g., user request)
- choose_project() doesn't choose anything,
so the value of coproc_cuda->req_secs remains X
- clear_request() is called but that doesn't change *coproc_cuda
Solution: work-fetch code no longer knows about internals of
COPROC_CUDA and is not responsible for setting its request fields.
The copying of request fields from RSC_WORK_FETCH to COPROC
is done at a higher level,
in CLIENT_STATE::make_scheduler_request()
Additional bug fix: estimated_delay wasn't being cleared in some cases.
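A C++ sketch of the restructuring (structs trimmed to the request
fields; names follow the ones above):

    struct RSC_WORK_FETCH {
        double req_secs = 0, req_instances = 0, estimated_delay = 0;
        void clear_request() { req_secs = req_instances = 0; }
    };
    struct COPROC_REQ {
        double req_secs = 0, req_instances = 0, estimated_delay = 0;
    };

    // The copy now happens unconditionally while building the
    // request, not only when choose_project() picked a project,
    // so a stale req_secs can no longer leak into an RPC.
    void copy_request_fields(const RSC_WORK_FETCH& wf,
                             COPROC_REQ& c) {
        c.req_secs = wf.req_secs;
        c.req_instances = wf.req_instances;
        c.estimated_delay = wf.estimated_delay;
    }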
svn path=/trunk/boinc/; revision=17411
on first-time startup.
- client: don't do an RPC until we've done CPU benchmarks.
We need the benchmark values to fill in app_version.flops
svn path=/trunk/boinc/; revision=17404
which of those files to include
- Modified MAC address check to work on some non-Linux unixes.
(mac_address.cpp)
- Added suggested change to "already attached to project" checking.
(ProjectInfoPage.cpp)
- changed includes of standard c header files to their c++ equivalents
(e.g. replaced <stdio.h> with <cstdio>) for namespace protection.
- replaced "using namespace std;" with more explicit "using std::function" in
several files.
- Fixed bug in checking whether the os is OS/2 and added conditional OS_OS2
to the build environment. (boinc_platform.m4,configure.ac)
- Changed build environment to not use -nostandardlibs unless we are using
G++ and static linkage is specified. (configure.ac)
- Added makefiles and package-building files for the Solaris CSW package manager.
- Fixed bug with attempting to find login name using logname. (configure.ac)
- Added ifdef HAVE_* protection around some include files commonly found in
sys.
- Added support for unified binary for x86_64/i686-pc-solaris.
(cs_platforms.cpp)
- generate_host_cpid() now uses MAC address on non-linux unix.
(hostinfo_network.cpp)
- Macro BOINC_SET_COMPILE_FLAGS now doesn't check gcc only flags on non-gcc
compilers. (boinc_set_compile_flags.m4)
- Library compiles no longer depend upon the library extension or require
the library to be prefixed with lib.
- More fixes for fcgi builds.
- Added declaration of "struct ether_addr" and ether_ntoa(). Have not yet
implemented ether_ntoa() for machines that don't have it, or where it is
buggy. (unix_util.h)
- Added FCGI::perror() which calls FCGI_perror(). (boinc_fcgi.{h,cpp})
- Fixed library Makefiles so that all required headers get installed.
svn path=/trunk/boinc/; revision=17388
This fixes a bug that can cause debts to NEVER get updated.
- client: added "abort_jobs_on_exit" feature
(available via the --abort_jobs_on_exit command-line option
or <abort_jobs_on_exit> in cc_config.xml).
If set, when the client is exited by user request
(this includes signals on Unix)
it marks all pending jobs as aborted,
and does a scheduler RPC to all projects with jobs.
When these are completed the client exits.
This is useful when BOINC is being used on grids
where it is wiped clean after each run.
svn path=/trunk/boinc/; revision=17300
- client: if a project-requested RPC doesn't return work,
don't do resource backoff.
- client: if a user-requested scheduler RPC errors out, clear the request
svn path=/trunk/boinc/; revision=17191
using a coprocessor we don't know about, ignore it
(and all results using that app_version will be flushed).
This deals with the situation where we have some GPU jobs,
but the GPU card is removed (previously this resulted in a crash).
This requires some code shuffling so that we check for coprocessors
before reading state file.
svn path=/trunk/boinc/; revision=17161
ignore intervals longer than 10 secs;
that could only happen if the client or host was suspended/hibernated.
- client: in adjust_debts(), ignore intervals longer than
2*work fetch period, not 2*CPU sched period.
adjust_debts() is called from work fetch.
svn path=/trunk/boinc/; revision=17154
There are two mechanisms to prevent the scheduler from
sending jobs that won't finish by their deadline.
Simple mechanism:
The client sends the interval x for which CPUs are projected
to be saturated.
Given a job with estimated duration y,
the scheduler doesn't send it if x + y exceeds the delay bound.
If it does send it, x is incremented by y.
Complex mechanism:
Client sends workload description.
Scheduler does EDF simulation, sees if deadlines are missed.
The only project using this AFAIK is BOINC alpha test.
Neither of these mechanisms takes coprocessors into account,
and as a result jobs could be sent that are doomed to
miss their deadline.
This checkin adds coprocessor awareness to the Simple mechanism
(sketched at the end of this entry).
Changes:
Client:
compute estimated delay (i.e. time until non-saturation)
for coprocessors as well as CPU.
Send them in scheduler request as part of coproc descriptor.
Scheduler:
Keep track of estimated delays separately for different resources
- client: fixed bug that computed CPU estimated delay incorrectly
- client: the work request (req_secs) for a resource is the min
of the project's share and the shortfall.
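Both rules, as C++ sketches (names are mine):

    #include <algorithm>

    // Simple mechanism, now per resource: given saturated time x
    // and estimated duration y, refuse the job if it would miss
    // its delay bound; if sent, the resource is busy y longer.
    bool try_send(double& x, double y, double delay_bound) {
        if (x + y > delay_bound) return false;
        x += y;
        return true;
    }

    // Work request: the min of the project's share and the
    // shortfall for that resource.
    double req_secs(double project_share, double shortfall) {
        return std::min(project_share, shortfall);
    }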
svn path=/trunk/boinc/; revision=17086