From dfce3b3f7286ed365501bec17a950975fef12891 Mon Sep 17 00:00:00 2001
From: Brad Fitzpatrick
Date: Tue, 4 Feb 2014 18:48:16 -0800
Subject: [PATCH] more TODO

Change-Id: I4bdeb5d4c922b23e95c5e3789bcf65df6e0838a5
---
 TODO | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/TODO b/TODO
index 580028470..d6b4e431c 100644
--- a/TODO
+++ b/TODO
@@ -4,6 +4,47 @@ There are two TODO lists. This file (good for airplanes) and the online bug trac
 Offline list:
 
+-- also rename serverinit/serverconfig.go to serverinit.go
+
+-- websocket upload protocol. different write & read on same socket,
+   as opposed to HTTP, to have multiple chunks in flight.
+
+-- unit tests for websocket stuff. (in integration tests)
+
+-- extension to blobserver upload protocol to minimize fsyncs: maybe a
+   client can say "no rush" on a bunch of data blobs first (which
+   still don't get acked back over websocket until they've been
+   fsynced), and then when the client uploads the schema/vivify blob,
+   that websocket message won't have the "no rush" flag, calling the
+   optional blobserver.Storage method to fsync (in the case of
+   diskpacked/localdisk) and getting all the "uploaded" messages back
+   for the data chunks that were written-but-not-synced.
+
+-- benchmark uploading a 100MB file to localdisk & diskpacked
+   from camput.
+
+-- measure FUSE operations, latency, round-trips, performance.
+   see next item:
+
+-- ... we probably need a "describe all chunks in file" HTTP handler.
+   then FUSE (when it sees sequential access) can say "what's the
+   list of all chunks in this file?" and then fetch them all at once.
+   see next item:
+
+-- ... HTTP handler to get multiple blobs at once. multi-download
+   in multipart/mime body. we have this for stat and upload, but
+   not download.
+
+-- ... if we do blob fetching over websocket too, then we can support
+   cancellation of blob requests. Then we can combine the previous
+   two items: the FUSE client can ask the server, over websockets,
+   for a list of all chunks, and to also start streaming them all.
+   assume a high-latency (but acceptable bandwidth) link. the chunks
+   are already in flight, but some might be redundant. once the
+   client figures out which might be redundant, it can issue "stop
+   send" messages over that websocket connection to prevent dups.
+   this should work on both "files" and "bytes" types.
+
 -- cacher: configurable policy on max cache size. clean oldest
    things (consider mtime+atime) to get back under max cache size.
    maybe prefer keeping small things (metadata blobs) too,