Code implementing this interface will use NewBlob(), which has been
modified to return a pointer to blob.Blob, so change the StreamBlobs
function in this interface to also send *blob.Blob.
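For context, a minimal sketch of what the updated interface could look like;
the ctx and contToken parameters are assumptions about the surrounding
signature, not the exact upstream definition:

package storage

import (
	"context"

	"camlistore.org/pkg/blob"
)

// BlobStreamer sketches the streaming side of this interface after the
// change: dest now carries *blob.Blob (what NewBlob returns) rather than
// blob.Blob values.
type BlobStreamer interface {
	StreamBlobs(ctx context.Context, dest chan<- *blob.Blob, contToken string) error
}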
Change-Id: Ia3b94c3f41f95cb31e96762d4c39b3172cc978f2
We'll use this later on to see the effect of using blob streaming
rather than enumeration. In particular, we will be looking for a
significant speed-up with diskpacked when metadata processing uses
streaming rather than enumeration.
Change-Id: I8295d05d74a84518844237bc48d4f11db8ea14b0
To be implemented later, but adding now so googledrive's blob storage implementation
can depend on them.
Change-Id: Ief374e8592bd696c79aa2b80ded11e301063750b
Two important related changes:
1) sorted/mysql now takes into account the host given in the config
2) the required tables are now automatically created by NewKeyValue
http://camlistore.org/issue/263
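The second point can be pictured with a small sketch: build the DSN from the
configured host instead of assuming localhost, then create the table on
startup if it is missing. The helper name, table name, and schema below are
illustrative, not the actual sorted/mysql code:

package mysql

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql" // MySQL driver
)

// newKeyValueSketch opens a connection using the configured host and makes
// sure the key/value table exists before returning.
func newKeyValueSketch(user, password, host, database string) (*sql.DB, error) {
	dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s", user, password, host, database)
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	const schema = `CREATE TABLE IF NOT EXISTS rows (
		k VARBINARY(255) NOT NULL PRIMARY KEY,
		v VARBINARY(255) NOT NULL
	)`
	if _, err := db.Exec(schema); err != nil {
		db.Close()
		return nil, err
	}
	return db, nil
}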
Change-Id: I0043f36edb0630d6484148508d3a1e08c8e88a94
Related changes:
Split docker-related test-helper functions from pkg/sorted/mongo.
These helper functions are now used in the pkg/blobserver/mongo
tests as well.
Also fixed a typo in a comment in pkg/blob/fetcher.go and added a missing
variable to the debug output in pkg/blobserver/storagetest/storagetest.go.
Addresses http://camlistore.org/issue/127
Change-Id: I8b6f57f9ced066d6f83788fdcc87be6619c65c3c
Conflicts:
pkg/blob/fetcher.go
Delete from index and zero out blob's data.
Use FALLOC_FL_PUNCH_HOLE if available (Linux >= 2.6.38).
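A sketch of the hole-punching part, assuming golang.org/x/sys/unix for the
fallocate wrapper; the real diskpacked code may differ in detail:

//go:build linux

package diskpacked

import (
	"os"

	"golang.org/x/sys/unix"
)

// punchHole zeroes out size bytes at offset in the pack file and, where the
// filesystem supports it, returns the underlying blocks to the OS
// (FALLOC_FL_PUNCH_HOLE, Linux >= 2.6.38).
func punchHole(f *os.File, offset, size int64) error {
	return unix.Fallocate(int(f.Fd()),
		unix.FALLOC_FL_PUNCH_HOLE|unix.FALLOC_FL_KEEP_SIZE,
		offset, size)
}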
Change-Id: I0c194fd6734a35b520212398d93dfdc5cabadeb5
If there's an enumeration error for src or dst, sending a zero value
will shut down ListMissingDestinationBlobs.
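A sketch of that convention on the producer side; the enumerate argument is
an illustrative stand-in for the real enumeration call:

package main

import (
	"log"

	"camlistore.org/pkg/blob"
)

// streamSized feeds enumeration results to ch. On error it sends the zero
// blob.SizedRef, which is the signal that shuts down
// ListMissingDestinationBlobs on the receiving side.
func streamSized(ch chan<- blob.SizedRef, enumerate func(chan<- blob.SizedRef) error) {
	defer close(ch)
	if err := enumerate(ch); err != nil {
		log.Printf("enumerate error: %v", err)
		ch <- blob.SizedRef{} // zero value: tells the consumer to stop
	}
}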
Change-Id: I95598e9dc2607610436faa06f40485d1abd2342f
StreamingFetcher is now just Fetcher, and its FetchStreaming is now
just Fetch.
SeekFetcher is gone. Blobs are max 16 MB anyway, so we can slurp to
memory when needed. The main thing that cared about SeekFetcher
was the GET handler, ServeBlobref, because http.ServeContent needed
one for range requests. That's rewritten in an earlier commit, using
the FakeSeeker from another earlier commit.
A lot of code got simpler as a result.
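For reference, a sketch of the resulting shape; the exact Fetch signature
(uint32 size, etc.) is an assumption here, as is the slurp helper:

package example

import (
	"io"
	"io/ioutil"

	"camlistore.org/pkg/blob"
)

// Fetcher is the unified interface after the rename (sketched).
type Fetcher interface {
	Fetch(ref blob.Ref) (rc io.ReadCloser, size uint32, err error)
}

// slurp shows the "blobs are small, just read them into memory" pattern
// that replaces seeking.
func slurp(f Fetcher, ref blob.Ref) ([]byte, error) {
	rc, _, err := f.Fetch(ref)
	if err != nil {
		return nil, err
	}
	defer rc.Close()
	return ioutil.ReadAll(rc)
}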
Change-Id: Ib819413e48a8f9b8d97f596d0fbf771dab211f11
By default it will use cznic/kv, but if you want to use something else
(like MySQL, leveldb, Postgres, or Mongo), you can.
Change-Id: I8ce3571a701717ffde3b80856c72a9e3223ab439
Add blobserver.EnumerateAllFrom, make the S3 client API match the underlying
S3 protocol by taking a marker instead of an after parameter, and push the
after->marker conversion logic up into the s3 enumerate code that uses the
S3 client.
Change-Id: I034a7c1c8af441881ebba74bcb523bd690cd16d3
The handler was adding 1 to the requested limit to see whether it was at the
end, but that was causing HTTP requests to Amazon like: limit 1000, limit 1,
limit 1000, limit 1, etc.
This change makes enumeration twice as fast.
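One way to avoid the extra probe request, sketched with an illustrative
listPage helper rather than the actual handler code: request exactly the page
size and rely on the listing's truncation flag to detect the end:

package main

// enumeratePages pages through a listing without asking for limit+1 items:
// it requests exactly pageSize keys per call and stops when the provider
// reports the listing is no longer truncated.
func enumeratePages(
	listPage func(marker string, max int) (keys []string, truncated bool, err error),
	pageSize int,
	handle func(key string) error,
) error {
	marker := ""
	for {
		keys, truncated, err := listPage(marker, pageSize)
		if err != nil {
			return err
		}
		for _, k := range keys {
			if err := handle(k); err != nil {
				return err
			}
		}
		if !truncated || len(keys) == 0 {
			return nil
		}
		marker = keys[len(keys)-1]
	}
}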
Change-Id: Ibb7e3f6ae7229a21c87817c7438324d36e7b491a
Use uint32 for blob sizes not just in blob.SizedRef, but in blobserver.Fetch
and blobserver.FetchStreaming, too.
Blobs have a max size of 10-32 MB anyway, and the index.Corpus is now using
uint32 to save memory.
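A small usage sketch (the all-zero ref below is just a placeholder):

package main

import (
	"fmt"

	"camlistore.org/pkg/blob"
)

func main() {
	// blob.SizedRef pairs a blob.Ref with its size; with this change the
	// size is a uint32 rather than a 64-bit int, which is plenty for blobs
	// capped in the tens of megabytes and halves the field's footprint.
	ref := blob.MustParse("sha1-0000000000000000000000000000000000000000")
	sr := blob.SizedRef{Ref: ref, Size: 1 << 20} // a 1 MB blob
	fmt.Println(sr.Ref, sr.Size)
}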
Change-Id: I1172445c2f9463fdaee55bfe0f1218d44be4aa53
From the package docs:
Package archiver zips lots of little blobs into bigger zip files
and stores them somewhere. While generic, it was designed to
incrementally create Amazon Glacier archives from many little
blobs, rather than creating millions of Glacier archives.
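A standard-library sketch of the core packing idea; the blob naming and the
in-memory buffer are illustrative, and the real package also decides when a
zip is big enough and where to store it:

package main

import (
	"archive/zip"
	"bytes"
	"fmt"
)

// packBlobs packs many little blobs (keyed here by an illustrative name)
// into a single zip, which could then be uploaded as one archive.
func packBlobs(blobs map[string][]byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := zip.NewWriter(&buf)
	for name, data := range blobs {
		w, err := zw.Create(name)
		if err != nil {
			return nil, err
		}
		if _, err := w.Write(data); err != nil {
			return nil, err
		}
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	zipped, err := packBlobs(map[string][]byte{
		"sha1-aaaa": []byte("little blob one"),
		"sha1-bbbb": []byte("little blob two"),
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("packed %d bytes\n", len(zipped))
}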
Change-Id: If304b2d4bf144bfab073c61c148bb34fa0be2f2d
Bytes read/written per pack file, as well as per configured diskpacked
instance, are now available as expvars.
Also added reader stat helpers to pkg/types and updated the original
user in server/image.go.
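The reader side can be pictured like this; the expvar name and the helper
type are illustrative, not the actual pkg/types API:

package main

import (
	"expvar"
	"io"
	"strings"
)

var bytesRead = expvar.NewInt("diskpacked-bytes-read") // illustrative name

// countingReader wraps an io.Reader and adds every byte read to an expvar
// counter, which then shows up on /debug/vars.
type countingReader struct {
	r io.Reader
	n *expvar.Int
}

func (cr countingReader) Read(p []byte) (int, error) {
	n, err := cr.r.Read(p)
	cr.n.Add(int64(n))
	return n, err
}

func main() {
	cr := countingReader{r: strings.NewReader("some pack file bytes"), n: bytesRead}
	io.Copy(io.Discard, cr)
}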
Change-Id: Ifc9d76c57aab329d4b947e9a4ef9eac008bc608d