Negative values are stored in app_version_id fields to represent
anonymous platform versions,
so we need to use %ld rather than %lu for these fields.
Also, a couple more fields were changed from int to DB_ID_TYPE.
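For example (a minimal, self-contained sketch; the field name is from
the commit, the rest is illustrative):

    #include <cstdio>

    int main() {
        long app_version_id = -2;   // negative: anonymous platform version
        // %ld prints the sign; %lu would show a huge unsigned value
        printf("app_version_id=%ld\n", app_version_id);
    }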
The SETI@home result table is about to run out of 32-bit IDs,
so we need to move to 64-bit result IDs.
This will happen to the workunit table at some point too.
I changed the server C++ code to use the "long" type for all DB IDs
(and to use appropriate conversion codes like %lu).
"long" is 64 bit on 64-bit machines.
For uniformity I did this for all tables,
even ones (like app) that will never get big.
I chose NOT to change the DB schema for now.
The new code will work with 32-bit ID fields in the DB.
As projects approach the 32-bit limit on a table they can change
its ID field, and fields that reference this table, to BIGINT.
This is likely to happen only on the result and workunit tables.
I put functions in html/ops/db_update.php
to change the IDs of these tables.
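The core of the change, in sketch form (DB_ID_TYPE is the name used
above; the struct fields shown are illustrative):

    typedef long DB_ID_TYPE;    // 64 bits on 64-bit machines

    struct RESULT {
        DB_ID_TYPE id;
        DB_ID_TYPE workunitid;
    };

    // queries format IDs with %lu, or %ld for fields
    // that can be negative, like app_version_id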
The query that gets counts of unsent results for the various size classes
did a sequential scan, which isn't practical for large projects.
All we care about is the count up to a certain (low) limit,
so I replaced the query with a limited enumeration and counted the results.
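Roughly like this (a sketch; DB_RESULT::enumerate() is the server's
usual enumeration interface, but the helper and clause shown here are
illustrative):

    int count_unsent(int size_class, int cap) {
        DB_RESULT result;
        char clause[256];
        int n = 0;
        sprintf(clause,
            "where server_state=%d and size_class=%d limit %d",
            RESULT_SERVER_STATE_UNSENT, size_class, cap
        );
        // stops after 'cap' rows instead of scanning the whole table
        while (!result.enumerate(clause)) n++;
        return n;
    }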
Note: Android clients report the Linux kernel version in host.os_version.
A month or two ago we added the Android version to what clients report,
since it can't always be inferred from the Linux version.
Using the Android version only works for devices running this newer client.
For projects (like Lattice) that assign a WU's HR class when it's created,
we don't want the mechanism that clears the HR class
if there are error results and no in-progress or completed results.
This option suppresses that mechanism.
There was a bug where, when you suspend GPU activity,
GPU jobs show as suspended but are not actually suspended.
This was because of recent changes to distinguish GPU and non-GPU coprocs.
Change things so that coprocs are by default GPUs.
If you want to declare a non-GPU coproc in your cc_config.xml,
you must put <non_gpu/> in its <coproc> element.
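For example (a sketch; <type> and <count> are existing <coproc>
sub-elements, and the coproc name here is made up):

    <coproc>
        <type>my_fpga</type>
        <count>1</count>
        <non_gpu/>
    </coproc>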
Tasks and daemons written in PHP are generally run with a command of the form

    run_in_ops script.php

The start script would, by default, base the log/out/lock/pid filenames
on "run_in_ops".
This forced you to specify the filenames in config.xml, e.g.

    <output>script.out</output>

Fix this by skipping "run_in_ops" and using the next token
to form the default filenames.
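For example, for a daemon configured as

    <cmd>run_in_ops my_task.php</cmd>

the default output file is now my_task.out rather than run_in_ops.out
(my_task.php is a made-up script name).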
The OS version# constraints (os_min_version, os_max_version)
were only supported for Win and Mac.
Make them work for Linux and Android too.
Note: for Android, we get the Linux kernel version.
This isn't the same as the Android version, but it may be enough.
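A hypothetical usage (assuming these appear as plan-class spec
elements and that versions are compared as dotted numeric strings;
both are assumptions, not confirmed above):

    <plan_class>
        <name>arm-android</name>
        <os_min_version>3.0</os_min_version>
    </plan_class>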
When a project is reset, the client deletes all the files from its dir
that BOINC knows about.
However, there may be other files, created by its apps.
On Android, where space is likely to be scarce,
delete these when the project is reset.
Maybe we should do this on all platforms,
but we'd need to avoid deleting user-created files
like app_config.xml, and anonymous-platform files
like app_info.xml and the app files.
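The Android behavior, in sketch form (clean_out_dir() exists in the
client's filesystem code; calling it exactly like this on reset is
illustrative):

    #ifdef ANDROID
        // delete everything in the project dir,
        // including files BOINC doesn't know about
        clean_out_dir(project->project_dir());
    #endif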
When using the XML plan class spec, the "mt" plan class would
correctly set avg_ncpus but wouldn't pass a command-line arg to the app.
So you'd get jobs where BOINC thinks they're using N CPUs
but they're actually using only one.
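The fix, roughly (avg_ncpus and cmdline are fields of the scheduler's
HOST_USAGE struct; the flag name is illustrative):

    hu.avg_ncpus = nthreads;
    snprintf(hu.cmdline, sizeof(hu.cmdline), "--nthreads %d", nthreads);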
Treat miner ASICs as a distinct processor type;
send miner_asic jobs only if the client requests them.
Note: I was planning to do this in a more general way,
in which the scheduler wouldn't have a hard-wired list of processor types.
However, that would be a large code change,
so for now I just added miner_asic to the list of processor types
(nvidia, ati, intel_gpu),
and made various changes to get things to work.
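In sketch form (the identifiers here are illustrative, not the actual
ones in the scheduler/shared code):

    // processor types the scheduler knows about
    const char* proc_type_names[] = {
        "cpu", "nvidia", "ati", "intel_gpu", "miner_asic"
    };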
Also: in the job dispatch logic, try to send coproc jobs
before CPU jobs.
That way if e.g. there's a limit on jobs in progress,
we'll preferentially send coproc jobs.
The old scheduler worked as follows:

    scan jobs; for each job:
        get_app_version
        do various checks
        add_job_to_reply
The check for per-app job limits was in get_app_version,
and the incrementing of the per-app job count was in add_job_to_reply.
The new (score-based) scheduler works as follows:

    scan jobs; for each job:
        get_app_version
        add to list
    sort list by score
    scan list; for each job:
        do various checks
        add_job_to_reply
So the limit check (in get_app_version) was ineffective
because it ran before the counts had been incremented.
Fix: do the limit check (also) in the "scan list" loop.
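In sketch form (names taken from the description above; the details
are illustrative):

    std::sort(jobs.begin(), jobs.end(), higher_score_first);
    for (JOB& job: jobs) {
        if (per_app_job_limit_exceeded(job)) continue;  // the re-added check
        // ... other checks ...
        add_job_to_reply(job);  // this increments the per-app job count
    }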
Bigger picture: we need to restructure app version selection;
job limit enforcement doesn't belong there any more.