support for HTTP and SOCKS proxies
make get_local_ip_addr() work in all cases
est_time_to_completion doesn't work for non-running tasks
get timezone working on all platforms
get benchmark working as separate process
    must do this before sending the first scheduler request

validate in crontab

Windows client
    "bring up graphics" button/menu item in core client
    make mini-logo
    test uninstall
    fix "unable to calculate"
    xfers: show file size,
    completed results: show final CPU time
-----------------------

test feeder reread-DB mechanism

use PHP session mechanism instead of our own cookies

use https for login (don't send account ID or password in the clear)

protect project admin web pages (htaccess)

Deadline mechanism for results
    - use in result dispatching
    - use in file uploading (decide what to upload next; see the
      sketch after this list)
    - use in deciding when to make scheduler RPC (done already?)
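    A minimal sketch of deadline-driven selection, e.g. for picking
    the next upload; FILE_XFER and its fields here are illustrative,
    not actual structures:

        // Sketch: pick the pending transfer whose result deadline is
        // earliest. FILE_XFER and its fields are hypothetical.
        #include <vector>
        #include <ctime>

        struct FILE_XFER {
            time_t deadline;   // deadline of the result this file belongs to
            bool pending;      // not yet started/completed
        };

        // returns the most urgent pending transfer, or NULL if none
        FILE_XFER* next_upload(std::vector<FILE_XFER>& xfers) {
            FILE_XFER* best = 0;
            for (size_t i = 0; i < xfers.size(); i++) {
                FILE_XFER& fx = xfers[i];
                if (!fx.pending) continue;
                if (!best || fx.deadline < best->deadline) best = &fx;
            }
            return best;
        }
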
create "result adder" program that detects WUs that don't have
|
|
a canonical result yet, and should,
|
|
and creates more results for them.
|
|
This should detect situations where we're getting lots
|
|
of error results, and skip over those WUs
|
|
|
|
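One possible shape for the pass, heavily sketched; WORKUNIT, every
helper name, and the give-up threshold are assumptions, not existing
schema or API:

    // Sketch of one result-adder pass; all names below are
    // hypothetical stand-ins.
    #include <vector>

    struct WORKUNIT { int id; };
    std::vector<WORKUNIT> candidate_wus();   // WUs w/o canonical result
    int count_results(WORKUNIT&);
    int count_error_results(WORKUNIT&);
    void create_results(WORKUNIT&, int n);   // add n more results
    void mark_wu_failed(WORKUNIT&);          // stop replicating this WU

    void result_adder_pass() {
        std::vector<WORKUNIT> wus = candidate_wus();
        for (size_t i = 0; i < wus.size(); i++) {
            int ntotal = count_results(wus[i]);
            int nerror = count_error_results(wus[i]);
            // mostly errors: the WU itself is probably bad; skip it
            if (ntotal >= 4 && 2 * nerror > ntotal) {
                mark_wu_failed(wus[i]);
            } else {
                create_results(wus[i], 1);
            }
        }
    }
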
CPU benchmarking
    This should be done by a pseudo-application rather than by the
    core client. This would eliminate the GUI-starvation problem,
    and would make it possible to have architecture-specific
    benchmarking programs (e.g. for a graphics coprocessor) or
    project-specific programs. (A process-based sketch follows.)
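    A sketch using POSIX fork/exec; the benchmark binary name and the
    output format ("benchmark_out" holding one double) are
    assumptions:

        // Sketch: run benchmarks in a child process so a slow or
        // hung benchmark can't starve the core client.
        #include <cstdio>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        double run_benchmark_process() {   // returns score, or -1
            pid_t pid = fork();
            if (pid == 0) {
                execl("./boinc_benchmark", "boinc_benchmark", (char*)0);
                _exit(127);                // exec failed
            }
            if (pid < 0) return -1;
            int status;
            waitpid(pid, &status, 0);      // or poll with WNOHANG
            if (!WIFEXITED(status) || WEXITSTATUS(status)) return -1;
            FILE* f = fopen("benchmark_out", "r");
            if (!f) return -1;
            double score = -1;
            fscanf(f, "%lf", &score);
            fclose(f);
            return score;
        }
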
Testing framework
    better mechanisms to model server/client/communication failure
    better mechanisms to simulate large load
    do client/server on separate hosts?

Add "garbage-collect state" fields to WU, result
|
|
whether deletion of files is OK, is done
|
|
|
|
investigate binary diff mechanism for updating persistent files
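A minimal block-aligned sketch of the idea; a real mechanism would
want rolling checksums (as in rsync) plus a strong verification
checksum, since a hash collision here would corrupt the patched file:

    #include <cstdio>
    #include <map>

    const size_t BLOCK = 4096;

    // FNV-1a, a stand-in for a stronger hash
    unsigned long long hash_block(const unsigned char* p, size_t n) {
        unsigned long long h = 1469598103934665603ULL;
        for (size_t i = 0; i < n; i++) {
            h ^= p[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    // emit a patch: "C i" = copy block i of the old file,
    // "L n" + n bytes = literal data from the new file
    void make_diff(FILE* oldf, FILE* newf, FILE* out) {
        std::map<unsigned long long, long> old_blocks;
        unsigned char buf[BLOCK];
        size_t n;
        long i = 0;
        while ((n = fread(buf, 1, BLOCK, oldf)) == BLOCK) {
            old_blocks[hash_block(buf, BLOCK)] = i++;
        }
        while ((n = fread(buf, 1, BLOCK, newf)) > 0) {
            std::map<unsigned long long, long>::iterator it =
                n == BLOCK ? old_blocks.find(hash_block(buf, n))
                           : old_blocks.end();
            if (it != old_blocks.end()) {
                fprintf(out, "C %ld\n", it->second);
            } else {
                fprintf(out, "L %lu\n", (unsigned long)n);
                fwrite(buf, 1, n, out);
            }
        }
    }
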
verify support for > 4 GB files everywhere
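The usual failure mode is 32-bit size/offset types; a sketch of the
safe POSIX pattern (fseeko/ftello with a 64-bit off_t, sizes kept in
double or long long):

    // Sketch: size of a possibly >4 GB file. Build with
    // -D_FILE_OFFSET_BITS=64 so off_t is 64 bits even on 32-bit
    // platforms; keep sizes in double or long long, never int/long.
    #include <cstdio>

    double file_size(const char* path) {
        FILE* f = fopen(path, "rb");
        if (!f) return -1;
        fseeko(f, 0, SEEK_END);             // off_t-based seek/tell
        double nbytes = (double)ftello(f);  // exact up to 2^53
        fclose(f);
        return nbytes;
    }
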
use FTP instead of HTTP for file xfer??
    measure speed diff

Delete files if needed to honor disk usage constraint
    inform user if this happens

implement max bytes/sec network preferences
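One plausible mechanism is a token bucket consulted before every
socket read/write; a minimal sketch (constants and names are
illustrative):

    // Sketch: token-bucket limiter consulted before each socket op.
    #include <cstddef>
    #include <ctime>

    struct RATE_LIMITER {
        double max_bytes_sec;   // the user's preference
        double tokens;          // bytes we may transfer right now
        time_t last;

        RATE_LIMITER(double r)
            : max_bytes_sec(r), tokens(r), last(time(0)) {}

        // how many of `want` bytes may be transferred now?
        // (caller waits and retries if the answer is 0)
        size_t allow(size_t want) {
            time_t now = time(0);
            tokens += (now - last) * max_bytes_sec;
            if (tokens > max_bytes_sec) tokens = max_bytes_sec;
            last = now;     // burst cap: one second's worth
            size_t n = tokens < (double)want ? (size_t)tokens : want;
            tokens -= n;
            return n;
        }
    };
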
Local scheduling
    more intelligent decisions about when/what to work on
    - monitor VM situation; run small-footprint programs
      even if user is active
    - monitor network usage; do net xfers if network is idle
      even if user is active

The following would require the client to accept connections:
    - clients can act as proxy scheduling servers
    - an exiting client can pass work to another client
    - a client can transfer files to other clients

User/host "reputation"
    keep track of % results bad and % results claiming > 2x granted
    credit, both per-host and per-user.
    Make these visible to the project, and to that user (only).
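    A sketch of the bookkeeping; the struct and thresholds are
    illustrative, and the same counters would be kept on both the
    host and user records:

        // Sketch: reputation counters (kept per-host and per-user)
        struct REPUTATION {
            long nresults;      // results reported
            long nbad;          // failed validation
            long noverclaim;    // claimed credit > 2x granted

            REPUTATION() : nresults(0), nbad(0), noverclaim(0) {}

            void tally(bool valid, double claimed, double granted) {
                nresults++;
                if (!valid) nbad++;
                if (granted > 0 && claimed > 2 * granted) noverclaim++;
            }
            double pct_bad() {
                return nresults ? 100.0 * nbad / nresults : 0;
            }
            double pct_overclaim() {
                return nresults ? 100.0 * noverclaim / nresults : 0;
            }
        };
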
Storage validation
    periodic rehash of persistent files;
    compare results between hosts
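    A sketch of the rehash step; FNV-1a stands in for a real
    cryptographic hash (MD5/SHA-1), and the cross-host comparison
    would happen server-side:

        // Sketch: stream a persistent file through a hash so the
        // server can compare digests reported by hosts holding
        // replicas; a mismatch flags a bad copy.
        #include <cstdio>

        unsigned long long hash_file(const char* path) {
            unsigned long long h = 1469598103934665603ULL;
            FILE* f = fopen(path, "rb");
            if (!f) return 0;
            unsigned char buf[65536];
            size_t n;
            while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
                for (size_t i = 0; i < n; i++) {
                    h ^= buf[i];
                    h *= 1099511628211ULL;
                }
            }
            fclose(f);
            return h;
        }
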
Include account ID in URL for file xfers
    This would let you verify network xfers by scanning web logs
    (could use that to give credit for xfers).
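    For instance, the client could append the account ID as a query
    parameter so it appears in the web server's access log; the URL
    format here is hypothetical:

        // Sketch: tag each transfer URL with the account ID
        // (the ?acct= format is hypothetical)
        #include <cstdio>

        void make_xfer_url(
            char* buf, size_t len,
            const char* base_url, const char* fname, const char* acct_id
        ) {
            // e.g. http://a.example.com/upload/result_123_0?acct=4f2d...
            snprintf(buf, len, "%s/%s?acct=%s", base_url, fname, acct_id);
        }
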
Global preferences
    implement disk usage prefs
    time-of-day prefs?
    test propagation mechanism:
        set up multi-project, multi-host test;
        change global prefs at one web site,
        make sure they propagate to all hosts
    limit on frequency of disk writes?
    max net traffic per day?
    implement in client

Per-project preferences
    test project-specific prefs
    make example web edit pages
    make app that uses them
    set up a test with multiple projects
    test "add project" feature, GUI and cmdline
    test resource share mechanism

Proxies
    work through HTTP, SOCKS proxies
    look at other open-source code (Mozilla?)
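    For HTTP proxies, tunneled traffic comes down to a CONNECT
    handshake (SOCKS has its own binary handshake); a sketch against
    an already-connected proxy socket:

        // Sketch: HTTP proxy tunnel via CONNECT. `sock` is already
        // connected to the proxy.
        #include <cstdio>
        #include <cstring>
        #include <sys/socket.h>

        bool proxy_connect(int sock, const char* host, int port) {
            char req[512];
            snprintf(req, sizeof(req),
                "CONNECT %s:%d HTTP/1.0\r\n\r\n", host, port);
            if (send(sock, req, strlen(req), 0) < 0) return false;
            char reply[512];
            // a real client keeps reading until the blank line
            int n = recv(sock, reply, sizeof(reply) - 1, 0);
            if (n <= 0) return false;
            reply[n] = 0;
            // success looks like "HTTP/1.0 200 Connection established"
            return strstr(reply, " 200 ") != 0;
        }
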
Documentation
    simplify/finish docs on server installation
    get rid of text in INSTALL and INSTALL_CLIENT;
    they should just refer to the .html files

Testing in general
    figure out how to set up multi-project, multi-host tests
    from a single script
    automate some simple test cases

Messages from core client
    decide what messages should be shown to user, and how
    (log file? GUI? dialog?)

CPU benchmarking
    review CPU benchmarks - do they do what we want?
    what to do when tests show a hardware problem?
    How should we weight factors for credit?
    run CPU tests unobtrusively, periodically
    check that on/conn/active fracs are maintained correctly
    check that bandwidth is measured correctly
    measure disk/mem size on all platforms
    get timezone to work

WU/result sequence mechanism
    design/implement/document

Multiple application files
    document, test

CPU accounting in the presence of checkpoint/restart
    test

Test nslots > 1

Redundancy checking and validation
    test the validation mechanism
    make sure credit is granted correctly
    make sure average, total credit maintained correctly for user, host

Scheduler RPC
    formalize notion of "permanent failure" (e.g. can't download file)
    report perm failures to scheduler, record in DB
    make sure RPC backoff is done for any perm failure
    (in general, should never make back-to-back RPCs to a project;
    see the backoff sketch below)
    make sure that client eventually reloads master URL
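    A sketch of one reasonable backoff policy (constants are
    illustrative): double the delay per consecutive failure, cap it,
    and add jitter so hosts don't retry in lockstep:

        // Sketch: exponential backoff for scheduler RPCs.
        #include <algorithm>
        #include <cstdlib>

        // returns seconds to wait before the next RPC attempt
        double rpc_backoff(int nfailures) {
            double delay = 60;                   // 1 min after 1st failure
            for (int i = 1; i < nfailures; i++) delay *= 2;
            delay = std::min(delay, 4 * 3600.0); // cap at 4 hours
            // +/-25% jitter: avoid a thundering herd after an outage
            double r = rand() / (double)RAND_MAX;
            return delay * (0.75 + 0.5 * r);
        }
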
Data transfer
    make sure restart of downloads works (see the resume sketch below)
    make sure restart of uploads works
    test download/upload with multiple data servers
    make sure it tries servers in succession,
    does exponential backoff if all fail
    review and document prioritization of transfers
    review protocol; make sure error returns are possible and handled
    correctly
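    Download restart maps onto the standard HTTP Range header (upload
    restart instead needs the server to report how many bytes it
    already has); a sketch of building the resume request:

        // Sketch: resume a partial download from byte offset `got`
        // using the HTTP/1.1 Range header; the server replies with
        // "206 Partial Content" and the remaining bytes.
        #include <cstdio>

        void make_resume_request(
            char* buf, size_t len,
            const char* path, const char* host, long long got
        ) {
            snprintf(buf, len,
                "GET %s HTTP/1.1\r\n"
                "Host: %s\r\n"
                "Range: bytes=%lld-\r\n"
                "\r\n",
                path, host, got
            );
        }
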
Application graphics
    finish design, implementation, doc, testing
    size, frame rate, whether to generate

Work generation
    generation of upload signature is very slow

Windows GUI
    finish design/implement
    display credit info, team

Windows screensaver functionality
    idle-only behavior without screensaver - test

Versioning
    think through issues involved in:
    - compatibility of core client and scheduling server
    - compatibility of core client and data server
    - compatibility of core client and app version
    - compatibility of core client and client state file?
    Need version numbers for protocols/interfaces?
    What messages to show user? Project?

Scheduler
    Should dispatch results based on deadline?
    test that scheduler estimates WU completion time correctly
    (see the sketch below)
    test that scheduler sends right amount of work
    test that client estimates remaining work correctly,
    requests correct # of seconds
    test that hi/low water mark system works
    test that scheduler sends only feasible WUs
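    The estimate presumably reduces to estimated FLOPs over measured
    host speed; a sketch with field names modeled on (but not taken
    from) the WU/host records:

        // Sketch: estimate WU runtime on a host. rsc_fpops_est and
        // p_fpops are illustrative names for the WU's estimated FLOP
        // count and the host's benchmarked FLOPS.
        double estimated_runtime(double rsc_fpops_est, double p_fpops) {
            if (p_fpops <= 0) return 0;
            return rsc_fpops_est / p_fpops;   // seconds of CPU time
        }

        // feasibility check: a WU is feasible only if
        // now + estimated_runtime(...) <= result deadline
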
Persistent files
    test
    design/implement test reporting, retrieval mechanisms
    (do this using WU/results with null application?)

User HTML
    leader boards

NET_XFER_SET
    review logic; prevent one stream from starving others
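    One simple fix is round-robin with a per-pass byte quantum so
    every active stream makes progress; NET_XFER and do_xfer() here
    are stand-ins for the real transfer objects:

        // Sketch: round-robin with a per-pass quantum, so no single
        // stream monopolizes the transfer loop.
        #include <vector>

        const size_t QUANTUM = 16384;   // max bytes per stream per pass

        struct NET_XFER { bool active; /* socket, buffers, ... */ };
        size_t do_xfer(NET_XFER&, size_t max_bytes); // moves <= max_bytes

        void service_all(std::vector<NET_XFER>& xfers) {
            for (size_t i = 0; i < xfers.size(); i++) {
                if (!xfers[i].active) continue;
                do_xfer(xfers[i], QUANTUM); // next stream gets its turn
            }
        }
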
test HTTP redirect mechanism for all types of ops

prevent file_xfer->req1 from overflowing. This problem seems to
happen when the file_upload_handler returns a large message to the
client; this causes project->parsefile to get wrong input, and so on.
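Whatever the protocol-level fix, the copy into req1 should be
length-bounded; a sketch (the buffer size is illustrative):

    // Sketch: length-bounded copy of the server reply into req1;
    // oversized replies get truncated and flagged instead of
    // overrunning the buffer and corrupting later parsing.
    #include <cstring>

    const size_t REQ1_LEN = 4096;   // illustrative buffer size

    bool set_req1(char req1[REQ1_LEN], const char* reply) {
        size_t n = strlen(reply);
        if (n >= REQ1_LEN) {
            memcpy(req1, reply, REQ1_LEN - 1);
            req1[REQ1_LEN - 1] = 0;
            return false;           // treat as a protocol error
        }
        strcpy(req1, reply);
        return true;
    }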