Previously, `create table ...` SQL migrations were being made without the
database existing. This resulted in a panic and an error like:
pq: database "pk_3a94488d_blobpacked" does not exist
There seems to be an upstream issue with our postgres library in which
`CREATE DATABASE ...` queries are not prepared, so we have to build the
SQL manually. For now I've added a regex to make sure we don't allow
anything too crazy in.
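A minimal sketch of that guard, assuming the lib/pq driver and an illustrative
identifier regex (the real CL's pattern and wiring may differ):

    package main

    import (
        "database/sql"
        "fmt"
        "regexp"

        _ "github.com/lib/pq" // the Postgres driver ("pq" in the error above)
    )

    // validDBName is a conservative whitelist for database identifiers, needed
    // because CREATE DATABASE cannot take the name as a prepared-statement
    // parameter.
    var validDBName = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)

    func createDatabase(db *sql.DB, dbname string) error {
        if !validDBName.MatchString(dbname) {
            return fmt.Errorf("refusing to create database with suspicious name %q", dbname)
        }
        // The name is interpolated directly since it can't be a placeholder;
        // the regex above keeps anything too crazy out.
        _, err := db.Exec("CREATE DATABASE " + dbname)
        return err
    }

    func main() {
        db, err := sql.Open("postgres", "user=postgres sslmode=disable") // illustrative DSN
        if err != nil {
            panic(err)
        }
        if err := createDatabase(db, "pk_3a94488d_blobpacked"); err != nil {
            panic(err)
        }
    }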
Fixes #1022
Change-Id: I0da16759e9219347bb11713b92337021546f9d57
Follow-up of ec66bcc871
I had run the pkg/search tests to see whether any high-level search was
broken, but forgot about pkg/index itself, whoops.
Change-Id: Iec3aa28aca82ac6773983ea9df9ade26a48fc4a7
Otherwise claims that are actually from the same signer end up being
treated as coming from different signers, because some of the claims were
signed by the SHA-1 version and others by the SHA-224 one.
TODO in follow-up CLs: similar fixes in rest of the corpus, such as with
claimPtrsAttrValue. See if non-corpus index functions/methods suffer
from the same problem.
Change-Id: Icbc70e97edc569f46e575d79aaf4359b33996053
I had intended for this to be a small change.
I was going to just add context.Context to the BlobReceiver interface,
but then I saw blob.Fetcher could also use one, so I decided to do two
in one CL.
And then it got a bit infectious and ended up touching everything.
I ended up doing SubFetch in the process out of necessity.
At a certain point I finally started using context.TODO() in a few
spots, but not too many. But removing context.TODO() will come in the
future. There are more blob storage interfaces lacking context, too,
like RemoveBlobs.
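Roughly what the reworked interfaces look like; these signatures are a sketch
based on the description above (import path and exact names may differ from
the tree at this revision):

    package blobserver // illustrative placement

    import (
        "context"
        "io"

        "perkeep.org/pkg/blob"
    )

    // BlobReceiver, with the context added by this CL.
    type BlobReceiver interface {
        ReceiveBlob(ctx context.Context, br blob.Ref, source io.Reader) (blob.SizedRef, error)
    }

    // Fetcher, likewise.
    type Fetcher interface {
        Fetch(ctx context.Context, br blob.Ref) (file io.ReadCloser, size uint32, err error)
    }

    // SubFetcher, for reading only part of a blob.
    type SubFetcher interface {
        SubFetch(ctx context.Context, br blob.Ref, offset, length int64) (io.ReadCloser, error)
    }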
Updates #733
Change-Id: Idf273180b3f8e397ac5929c6d7f520ccc5cdce08
As the priority is to fix GCE instances, the port for the http-01
challenge (80) is not configurable for now, not even on GCE, but it will
be in a follow-up change.
update golang.org/x/crypto/* (for acme) to rev
13931e22f9e72ea58bb73048bc752b48c6d4d4ac
update golang.org/x/sys/* to rev
fff93fa7cd278d84afc205751523809c464168ab (because unix is a dep of
crypto)
remove warning about Let's Encrypt security issue from pkg/deploy/gce
I had to manually exclude vendor/golang.org/x/crypto/acme/jws_test.go
for now because it contains a private key, and git whines about it, and
I could not override it.
Fixes #1033
Change-Id: Ie4f2049e97892dee9ab513300a5f12e64976aec8
In rev 837fe8ac46, a new
version of websocket was imported and changes were made to use
origin checking. This integration test was missed during those
changes.
This change fixes this test and adds the origin data to the
request, uses the correct 'ws' scheme and uses the new Dialer
API to make requests.
Change-Id: I93e8228794665012f15370532cdeda3cb702ea00
This blobserver is just "cat"ing the given "read" storages.
This is read-only, so you should use some other storage to augment this for
writing and removing - for example the "cond" storage is perfect for this.
My use-case is to use blobpacked with large=diskpacked, small=filesystem,
but consolidate the small blob storage into a diskpacked + filesystem
after the filesystem becomes huge.
Another use-case is joining separately built camlistore servers into one.
(For me, they have to be separated later, so I've built them separately,
but I have to use them joined for a month).
Change-Id: I4e7e42cd59286f0f34da2f6ff01e44439771d53c
Remove the blob.SHA{1,224}From{Bytes,String} constructors too. No
longer used. This adds blob.RefFromBytes which was missing. We had
blob.RefFromString. Now everything uses blob.RefFrom* instead of
specifying a hash function.
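Hedged example of what callers look like after the change (output depends on
the configured hash function):

    package main

    import (
        "fmt"

        "perkeep.org/pkg/blob"
    )

    func main() {
        // RefFromBytes and RefFromString pick the currently configured hash
        // function instead of the caller hard-coding SHA-1 or SHA-224.
        br := blob.RefFromBytes([]byte("hello, world"))
        same := blob.RefFromString("hello, world")
        fmt.Println(br, br == same) // prints the ref and "true"
    }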
Some tests set a flag to force use of SHA-1 because there was too much
golden data to update. We can remove those one-by-one over time as we
fix up tests.
Updates #537
Change-Id: Ibe6428089a6221594c2b751f53f98b03b5a28dc2
This addresses a long-standing TODO in the BlobStatter interface to
clean it up. Just like all new Go programmers, I misused channels in
APIs. I should've cleaned this up years ago.
While here, I also added a context.
The rest should get contexts later.
This also cleans up a few things here & there.
The pkg/client statting no longer does batching, which added a lot of
complexity. There was a comment saying something like "once we have
SPDY, we can delete this". Well, we have HTTP/2 now, so seems
deletable.
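The shape of the change, sketched (not the verbatim interface):

    package blobserver // sketch

    import (
        "context"

        "perkeep.org/pkg/blob"
    )

    // Old, channel-based shape, roughly:
    //
    //     StatBlobs(dest chan<- blob.SizedRef, blobs []blob.Ref) error
    //
    // New shape: a context plus a synchronous callback per stat result.
    type BlobStatter interface {
        StatBlobs(ctx context.Context, blobs []blob.Ref, fn func(blob.SizedRef) error) error
    }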
All tests pass.
Change-Id: I034ce07d9b70e5cc9e5482213368993e638d4bc8
Follow-up of 7eda9fd502
We want existing Perkeep instances on GCE to be able to keep on running
with their DBNames-style existing databases.
To that end, we introduce the "perkeep-config-version" metadata key,
which will be set by the launcher from now on.
When perkeep configuration starts, it can look up that key. If it is set,
it means we're in a newly created instance, and we don't need to care
about DBNames compatibility. If not, we modify the low level
configuration on the fly, so that it keeps on using the old DBNames
values that were set for a GCE instance.
Change-Id: I611811fdb9c68777c2ba799e9047d00ec0bae040
DBNames is supposed to provide configuration for the various database
names. However,
1) I contend that nobody needs or wants to configure them as long as we
provide sane defaults.
2) it seems the only obvious user we have for this is to set up some of
the names on GCE.
3) having another external source for names complicates the code
further, especially when we already have the distinction between
database names for DBMS and file names for file-based databases.
4) writing a correct documentation for it is awkward.
Therefore, in this CL, I propose that we remove DBNames. Instead,
genconfig.go now sets some consistent default names for the various
queues and indexes set up on a DBMS (MySQL, Postgres, Mongo). To that
end, we introduce the new, but optional, DBUnique configuration
parameter, that is used as a part of all the database names, in order to
be able to run several Perkeep instances on the same DBMS, without name
conflicts.
In addition, the queue for the bs->index synchandler is now set up on
the same DBMS that is already in use for the index itself, instead of
using a file-based database.
And I think we could proceed likewise for the other queues.
Fixes #951
Change-Id: Ib6a638f088a563d881e3957e4042e932382b44f4
The mongo integration was using a very old package. It's using
a new namespace now. Upgrade and adjust all call points.
Removes labix.org/v2/mgo
Introduces gopkg.in/mgo.v2 from branch v2 with revision
3f83fa5005286a7fe593b055f0d7771a7dce4655
Change-Id: I2784fca941998460f58e0ac8d3d51286401590b5
Calls to net.Dial* are prohibited with GopherJS. This can happen if the
client's transport is set by the user.
This change forces transportForConfig to return nil when the client
package is compiled with gopherjs, in order to make sure that a call to
newClient returns a client with a nil transport.
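A sketch of how that can be done with a build tag; the type and method names
here are illustrative stand-ins for the real pkg/client ones:

    // +build js

    package client

    import "net/http"

    type transportConfig struct{} // stand-in for the real transport config type

    type Client struct{}

    // transportForConfig returns nil when built with GopherJS, so newClient
    // ends up with a nil transport and never reaches a prohibited net.Dial*.
    func (c *Client) transportForConfig(tc *transportConfig) http.RoundTripper {
        return nil
    }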
Change-Id: I577457bd7d924d31710168086dc2b394df3d1ae0
With Go master (roughly Go 1.10):
name        old time/op    new time/op    delta
Rollsum-8   39.0ms ± 2%    22.2ms ± 2%    -42.93%  (p=0.008 n=5+5)

name        old speed      new speed      delta
Rollsum-8   135MB/s ± 2%   236MB/s ± 2%   +75.23%  (p=0.008 n=5+5)
Change-Id: Ic933f8ee5f1ffaada8e37c8ac19edb7d6c0fc57d
Notably: pkg/misc all moves.
And pkg/googlestorage is deleted, since it's not used. Only the
x/net/http2/h2demo code used to use it, but that ended in
https://go-review.googlesource.com/33230 (our vendored code is old).
So just nuke that dir for now. When it's refreshed, it'll either be
gone (dep prune) or new enough to not need googlestorage.
Also move pkg/pools, pkg/leak, and pkg/geocode to internal.
More remains.
Change-Id: I2640c4d18424062fdb8461ba451f1ce26719ae9d
Part of the project renaming, issue #981.
After this, users will need to mv their $GOPATH/src/camlistore.org to
$GOPATH/src/perkeep.org. Sorry.
This doesn't yet rename the tools like camlistored, camput, camget,
camtool, etc.
Also, this only moves the lru package to internal. More will move to
internal later.
Also, this doesn't yet remove the "/pkg/" directory. That'll likely
happen later.
This updates some docs, but not all.
devcam test now passes again, even with Go 1.10 (which requires vet
checks are clean too). So a bunch of vet tests are fixed in this CL
too, and a bunch of other broken tests are now fixed (introduced from
the past week of merging the CL backlog).
Change-Id: If580db1691b5b99f8ed6195070789b1f44877dd4
After the rollsum fix in 4723d0f452
landed, the way files were truncated in blobs changed, and hence some
expected hashsums as well.
This CL adjusts such expectations.
Change-Id: I44fc1f5ce1922d7bc99f9a8096ef4b8d212571dc
The search handler can store and retrieve search aliases.
Keyword namedSearch handles these new atoms of the form
named:foo.
Creating an alias has been implemented using a client in
the camtool subcommands named-search-get and named-search-set.
Change-Id: I7960f83bad464eb1a971c07f33631744a5eea814
A file is shown as a folder, and inside you can see all its versions (named
according to the date). Basically it is like `at` but the path (not the date)
goes first.
So if you go to:
$ ls <mountpoint>/versions/my_folder
dr-x------ 1 grecco 32000 0 Mar 13 01:39 my_file
and then:
$ ls <mountpoint>/versions/my_folder/my_file
-r-------- 1 grecco 32000 2 Mar 13 01:39 2014-03-12T14:53:34.471588505Z
-r-------- 1 grecco 32000 2 Mar 13 01:39 2014-03-12T14:53:36.6569929Z
-r-------- 1 grecco 32000 2 Mar 13 01:39 2014-03-12T14:53:38.842875168Z
-r-------- 1 grecco 32000 25 Mar 13 01:39 2014-03-12T21:16:09.905612807Z
These files are standard files which can be opened to see the file content at a
specific point in time.
Change-Id: I38a4d7bf35ba32407036535e629039e23dc32735
I created https://github.com/camlistore/old-cam-snapshot with a snapshot
of our git repo before we start rearranging things.
This way we won't break anybody depending on camlistore.org/* Go
packages after the move. They just won't get any updates. Actually
things will probably break if they run "go get -u". Need to
investigate what happens there.
Change-Id: I45e109c18323e65bd76706faa08d955dcbc5f6c6
Keeping it simple: every time a new meta blob is added, Push it into a
heap; if the heap gets too long, Pop out all the blobs, pack them,
upload the new one, and delete the old ones.
The Push and Pop operations are done under Lock, packing, uploading and
deleting in a goroutine.
Meta blobs can't get bigger than twice the full size. Packing happens
on average <max heap length> times before filling a blob, because blobs
are added as single lines. This means uploading approximately
<max heap length> * <full size> / 2
bytes of blobs that will be removed for each full-size blob.
At startup, push all non-full meta blobs, so that we redo any packing
that might have failed previously.
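A rough sketch of that push/pop-under-lock pattern using container/heap
(names, the ordering key, and the limit are illustrative):

    package encrypt // illustrative placement

    import (
        "container/heap"
        "sync"
    )

    // metaBlob stands in for one small, single-line meta blob.
    type metaBlob struct {
        ref  string
        size int
    }

    type metaHeap []metaBlob

    func (h metaHeap) Len() int            { return len(h) }
    func (h metaHeap) Less(i, j int) bool  { return h[i].size < h[j].size }
    func (h metaHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
    func (h *metaHeap) Push(x interface{}) { *h = append(*h, x.(metaBlob)) }
    func (h *metaHeap) Pop() interface{} {
        old := *h
        n := len(old)
        x := old[n-1]
        *h = old[:n-1]
        return x
    }

    const maxHeapLen = 32 // illustrative <max heap length>

    type packer struct {
        mu   sync.Mutex
        heap metaHeap
    }

    // noteMetaBlob is called for every new meta blob. Push and Pop happen
    // under the lock; packing, uploading and deleting run in a goroutine.
    func (p *packer) noteMetaBlob(b metaBlob) {
        p.mu.Lock()
        heap.Push(&p.heap, b)
        var toPack []metaBlob
        if p.heap.Len() > maxHeapLen {
            for p.heap.Len() > 0 {
                toPack = append(toPack, heap.Pop(&p.heap).(metaBlob))
            }
        }
        p.mu.Unlock()
        if toPack != nil {
            go p.packUploadDelete(toPack)
        }
    }

    // packUploadDelete packs the given meta blobs into one, uploads the new
    // blob, and deletes the old ones. Elided here.
    func (p *packer) packUploadDelete(blobs []metaBlob) {}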
Change-Id: I1f2fbfc802c1b82dcc87fc0b333c30949229c928
NaCl offers authenticated encryption, which means that the blobstore
can't tamper with the data. Since SHA-1 hashes were checked, one could not
change a blob outright, but one could still add new blobs by tampering
with the meta blobs. It's true that only signed blobs should cause actions
just by being present, but we are already far too deep in the chain of
assumptions not to spend a bit of CPU adding a MAC. The new
scheme is much easier to prove secure.
Also simplified the meta by removing the IV (which is in the encrypted
blob anyway) and the encrypted size (which is plaintext size + overhead).
Finally, added tests (including a storagetest) and tried to make this
sort of production-ready.
Still to do are meta compaction and a way to regenerate the meta from
the blobs, in case of meta corruption (which now we can do securely
thanks to NaCl authentication).
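A minimal sketch of the seal/open step with golang.org/x/crypto/nacl/secretbox;
key handling and blob layout are simplified relative to the real storage code:

    package encrypt // sketch

    import (
        "crypto/rand"
        "errors"
        "io"

        "golang.org/x/crypto/nacl/secretbox"
    )

    // seal encrypts-and-authenticates plain with key, prepending the random
    // nonce, so the blobstore cannot tamper with the data undetected.
    func seal(key *[32]byte, plain []byte) ([]byte, error) {
        var nonce [24]byte
        if _, err := io.ReadFull(rand.Reader, nonce[:]); err != nil {
            return nil, err
        }
        return secretbox.Seal(nonce[:], plain, &nonce, key), nil
    }

    // open verifies and decrypts a blob produced by seal.
    func open(key *[32]byte, sealed []byte) ([]byte, error) {
        if len(sealed) < 24 {
            return nil, errors.New("sealed blob too short")
        }
        var nonce [24]byte
        copy(nonce[:], sealed[:24])
        plain, ok := secretbox.Open(nil, sealed[24:], &nonce, key)
        if !ok {
            return nil, errors.New("authentication failed: blob was tampered with")
        }
        return plain, nil
    }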
golang.org/x/crypto/nacl/secretbox:
golang.org/x/crypto/poly1305:
golang.org/x/crypto/salsa20/salsa:
golang.org/x/crypto/scrypt:
golang.org/x/crypto/pbkdf2:
1e61df8d9ea476e2e1504cd9a32b40280c7c6c7e
Change-Id: I095c6204ac093f6292c7943dbb77655d2c51aba6
Additional mappings (taken from the file utility source code) and
test cases. Included file license.
Change-Id: I1dadff94c7c7d5280e12d82da61b7159f29eabe3
improving proxycache
- added fuller sample config to the package documentation
- switched the stats caching from sorted.kv to the stats blobserver
- added a cleaning mechanism to evict the least recently used blobs
- implemented StatBlobs to actually inspect the local cache. It still
always consults the origin, but only for the blobs necessary after
giving the cache a 50ms headstart.
- logging a few errors that were previously ignored
- added tests modeled after the tests for the localdisk blobstore
- added a method to verify the cache, and call it on initialization
- added a strictStats option to always get stats from the origin
- filling in cacheBytes on initialization
improving stats blobserver
- implemented a few more of the blobserver interfaces, Enumerator and
Remover
- Fixed a bug(?) in ReceiveBlob that seemed to prevent it from actually
storing stats
- added a test
minor improvements include:
- blobserver/memory: allowing the memory blobserver to hold actually
infinite items, if desired
- blobserver: closing dest in the NoImpl blobserver, as required by the
BlobEnumerator interface
- storagetest: not closing dest leads to deadlock
- lru: max entries of 0 now means infinite (maybe do anything <0?)
- test: a helper function to create a random blob using a global random
source that is, by default, deterministic, to make test results more
consistent.
In the future, an improved BlobHub or similar interface could allow a
tighter feedback loop in providing cache consistency. i.e. the cache
could register with backend stores to be notified of content updates,
minimizing the time between backend changes and cache correction.
The proxycache will verify itself at startup, reporting an error if
any of its blobs do not exist in the backend storage or if the backend
storage has a different size for the content than the cache.
Fixes #443
Change-Id: I9ee1efd8c1d0eed49bb82930c2489a64122d3e00
Adds allocation-free way to check if a blob ref is equal to its
stringified form.
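Hedged usage sketch; the method is assumed here to be called EqualString
(check the CL for the actual name):

    package main

    import (
        "fmt"

        "perkeep.org/pkg/blob"
    )

    func main() {
        br := blob.RefFromString("hello")
        s := br.String()
        // Compare against the stringified form without parsing s into a new
        // Ref and without allocating.
        fmt.Println(br.EqualString(s))         // true
        fmt.Println(br.EqualString("sha1-00")) // false
    }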
For #972 (doesn't fix it, as that bug is about a pending CL)
Change-Id: I49c6dee162698d38bb12314623b1507ee7bb246e
The runsit package is obsolete. Pull the listen code directly into webserver and
remove support for the runsit specific named ports. Update TODO.
Change-Id: I0d8ea798375d0eb4abea86ed9e6454376233e992
Otherwise, the android app fails to connect with a server that uses
Let's Encrypt (because it relies on SNI, which requires the ServerName
to be set).
Change-Id: I9f25486bea68e83c68584a83817c98bfc84f62b9
This code previously had methods returning channels. Such APIs are
always error-prone and difficult to use. Switch to a synchronous func
callback pattern instead, with contexts propagated.
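The pattern, in a small hedged sketch (types and names are made up, not the
code this CL touches):

    package main

    import (
        "context"
        "fmt"
    )

    type item struct{ name string }

    type store struct{ items []item }

    // Before: func (s *store) Items() <-chan item — the caller has to drain
    // the channel exactly right and has no good way to see errors.
    //
    // After: a synchronous callback plus a context, stopping early on error
    // or cancellation.
    func (s *store) Items(ctx context.Context, fn func(item) error) error {
        for _, it := range s.items {
            if err := ctx.Err(); err != nil {
                return err
            }
            if err := fn(it); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        s := &store{items: []item{{"a"}, {"b"}}}
        _ = s.Items(context.Background(), func(it item) error {
            fmt.Println(it.name)
            return nil
        })
    }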
Change-Id: Iaa1b91227c0daf4c8562fcba8d27dbcd7ab755c5
Remove dynamic rate limit adjustment for now. It was racy.
No need to be super fast, anyway, as long as it catches up eventually.
But we can make it smarter later. I wanted to get it correct first.
Change-Id: Id5b5fc946546d8d9c0720f1c0ec2f341a17cdd01
This switches most usages of the pre-1.7 context library to use the
standard library. Remaining usages are in:
app/publisher/main.go
pkg/fs/...
Change-Id: Ia74acc39499dcb39892342a2c9a2776537cf49f1
to Rev 48b2ede4844e13f1a2b7ce4d2529c9af7e359fc5
The qr package has moved from code.google.com/p/rsc/qr to its new
canonical home at rsc.io/qr
Change-Id: Ibb04ee7e83c9707ff253a91abb4f60f9b150d61c
to Rev 9dfe39835686865bff950a07b394c12a98ddc811 for golang.org/x/html
to Rev 88f656faf3f37f690df1a32515b479415e1a6769 for golang.org/x/text
These packages moved from code.google.com to their new home
in golang.org/x/html and golang.org/x/text
Change-Id: I4ee45ae1e18eb05ef7b0a8ec69e2f1f11d140340
To rev 9dfe39835686865bff950a07b394c12a98ddc811
The xsrftoken package now lives in golang.org/x/net instead of code.google.com.
Change-Id: I4d98b1e50099dc7a1e1188f5c4311cd28c79f44a
This CL addresses issues #685 and #862.
The general problem is that some critical errors, which lead clients such
as camput to exit with failure, are not displayed when not running in
verbose mode.
That happens because of code such as:
    if *cmdmain.FlagVerbose {
        log.SetOutput(cmdmain.Stderr)
    } else {
        log.SetOutput(ioutil.Discard)
    }
which means that in non-verbose mode we discard absolutely all log
messages, even those that would be printed during a Fatal* call.
To address that problem, we introduce a logger, as well as the Printf
and Logf functions using it, in pkg/cmdmain. These two functions only
output when *cmdmain.FlagVerbose is true.
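Roughly what those two helpers amount to (a sketch; the real pkg/cmdmain code
differs in details):

    package cmdmain // sketch

    import (
        "flag"
        "fmt"
        "io"
        "log"
        "os"
    )

    var (
        FlagVerbose           = flag.Bool("verbose", false, "extra debug logging")
        Stdout      io.Writer = os.Stdout
    )

    // Printf prints to Stdout only when -verbose is set.
    func Printf(format string, args ...interface{}) {
        if *FlagVerbose {
            fmt.Fprintf(Stdout, format, args...)
        }
    }

    // Logf logs only when -verbose is set. Critical errors should keep using
    // log.Fatal*, whose output is no longer discarded.
    func Logf(format string, args ...interface{}) {
        if *FlagVerbose {
            log.Printf(format, args...)
        }
    }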
Commands such as camput or camtool should now always:
1) log.SetOutput(cmdmain.Stderr) in init().
2) use log.Printf for messages that should always be printed.
3) use cmdmain.Printf/Logf for messages that should only be printed when
*cmdmain.FlagVerbose is true.
4) use log.Fatal for critical errors.
5) optionally, set the Verbose and Logger of the client(s) they are
using.
Also, camput and camtool are now relying on the global -verbose flag
from cmdmain, instead of having to define one for each subcommand.
Fixes #685
Fixes #862
Change-Id: I088032fd28184a201076097bf878894b22a8a120
On GCE, on startup, we did not set the camliNetIP in the high-level
config if the camlistore-hostname instance var was already set.
The camliNetIP presence in the config is the signal for the server
that it should be configured as part of camlistore.net (and in
particular that it should update the record for its name on the
camlistore.net DNS server). Part of this configuration is to set the
camlistore-hostname var for the instance.
Therefore, a server which had been configured once as described above,
would not, on a subsequent restart, behave as if part of camlistore.net
and would skip the related configuration code path. Unless the
camlistore-hostname var was manually wiped before restart.
As this manual step is not an obvious one, this CL changes the
initialization, so that if a camlistore-hostname var is set, but the
value is a subdomain of "camlistore.net", the var is treated as empty
and initialization proceeds as if with a new server.
Fixes #963
Change-Id: Iab70185d7b90ef7e70bb831d363ff9d525922e35
To rev 37a4c36ce6286bb78bceb20579fecdfe7a759e02
Fix vendor/embed/closure/updatelibrary.go to now fetch from github.
Fix pkg/misc/closure/gendeps.go to work with new addDependency calls.
Fixes #903
Fixes #961
Change-Id: Ie555cf9bf5a8624845095fb3351482a690a2571c
The message was getting printed while some errors could still
occur later during reindexing.
Issue #954
Change-Id: I952fe7b92045ad85681cb63b7fb13803d5c43004
This change adds a check right after index initialization that
enumerates blobs for a few seconds and verifies that all of them are
indexed.
A warning is logged if any of the blobs are not found in the index.
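In outline, the check looks something like this (interfaces are simplified
stand-ins for the real blobserver and index types):

    package index // sketch

    import (
        "context"
        "log"
        "time"
    )

    type enumerator interface {
        // EnumerateAll calls fn for each stored blob ref until fn returns
        // false or ctx is done.
        EnumerateAll(ctx context.Context, fn func(ref string) bool) error
    }

    type haver interface {
        HasBlob(ref string) bool
    }

    // checkIndexedBlobs enumerates blobs for a few seconds right after index
    // initialization and warns if any of them are missing from the index.
    func checkIndexedBlobs(storage enumerator, idx haver) {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        missing := 0
        err := storage.EnumerateAll(ctx, func(ref string) bool {
            if !idx.HasBlob(ref) {
                missing++
            }
            return true
        })
        if err != nil && err != context.DeadlineExceeded {
            log.Printf("index check: enumerate error: %v", err)
        }
        if missing > 0 {
            log.Printf("WARNING: %d enumerated blobs not found in the index; consider reindexing", missing)
        }
    }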
Issue #947
Change-Id: Idc0df2121c1fb58e7560173b7753eaaddc4e653b
Some errors, such as missing fields, wrong values, or conflicting
values, were already caught logically when the high-level configuration
is transformed into the low-level configuration. Or later when
initializing handlers.
However, there was no mechanism to catch typos such as "httsCert"
instead of "httpsCert", or "leveldb" instead of "levelDB" in the
high-level configuration. Such errors would result in misconfiguration
(e.g. use of Let's Encrypt instead of the desired HTTPS certificate)
which can then even go unnoticed since the server still starts.
Therefore, this change generates a fake serverconfig.Config with all its
fields set, so that its JSON encoding results in a list of all the
possible configuration fields. This allows comparing the given
configuration fields against that list, and catching invalid names.
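The idea, sketched with a cut-down stand-in for serverconfig.Config (field
list and names are illustrative):

    package serverinit // sketch

    import (
        "encoding/json"
        "fmt"
    )

    // config stands in for serverconfig.Config; the real check populates
    // every field so that none is dropped by the JSON encoder.
    type config struct {
        HTTPS     bool   `json:"https"`
        HTTPSCert string `json:"httpsCert"`
        LevelDB   string `json:"levelDB"`
        // ... all the other high-level fields ...
    }

    // knownKeys JSON-encodes a fully populated config and returns the set of
    // legal top-level key names.
    func knownKeys() (map[string]bool, error) {
        fake := config{HTTPS: true, HTTPSCert: "x", LevelDB: "x"}
        b, err := json.Marshal(fake)
        if err != nil {
            return nil, err
        }
        var m map[string]interface{}
        if err := json.Unmarshal(b, &m); err != nil {
            return nil, err
        }
        keys := make(map[string]bool, len(m))
        for k := range m {
            keys[k] = true
        }
        return keys, nil
    }

    // checkKeys catches typos such as "httsCert" in the user's configuration.
    func checkKeys(userConfig map[string]interface{}) error {
        known, err := knownKeys()
        if err != nil {
            return err
        }
        for k := range userConfig {
            if !known[k] {
                return fmt.Errorf("unknown high-level configuration key %q", k)
            }
        }
        return nil
    }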
Change-Id: I4d6e61462353e52938db93ba332df2d52225e750
The DownloadHandler only accepted file schemas as input for building a
zip archive so far.
It can now zip directories and their contents as well.
Non-regular files (socket, fifo, symlink) are now handled too.
As previously, no compression is applied when zipping.
When the DownloadHandler's Fetcher is a caching fetcher, all the files
that are supposed to be included in the archive are read, so we can
report reading errors even before starting to create the archive. We now
also take advantage of this optimization to build a
blobRef->filepath mapping when checking the files. Then, when the zip
archive is actually being built, the file path can be looked up in the
map, instead of having to assemble it again with recursive concatenation
of directory names.
Change-Id: I853c495798a9a43e12f3386603a70560ff46a237
-keep the browser URL bar in sync with the current search/zoom-level
-introduce the "map:" predicate, to be used as the current viewport in
the map aspect. This was previously achieved with the locrect predicate. We
try to keep this new predicate unknown to the server, replacing it on
the fly with an equivalent locrect predicate.
-stay on the map aspect when the search expression has a map predicate.
Same thing when loading a URL with a map predicate parameter: try to
load directly on map aspect, instead of going to search aspect first.
-make sure there's only at most one map query in flight at all times
Change-Id: Ibf928ee5c926670c221f212f14ca9799e1e7bb79
Avoid select overhead in hot paths. Just use funcs.
Also, for sort-by-map searches, don't do a describe and pass over all
the results doing location lookups a second time. Remember the
location from the initial matching. Cache it on the search value.
Reduces some sort-by-map searches from 10 seconds to 3 seconds for
me. (still too slow, but good start)
Change-Id: I632954738df9accd802f28364ed11e48ddba0d14
We only started preventing NaNs from locations from being indexed at
ee13a3060b, so files indexed before
that could have introduced indexed NaNs, which we were not checking
against, until now.
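The guard is essentially a math.IsNaN check on both coordinates, along these
lines (sketch only; the real checks sit in the indexer and in the query
matching added here):

    package main

    import (
        "fmt"
        "math"
    )

    // validLocation reports whether a latitude/longitude pair is usable,
    // i.e. neither coordinate is NaN.
    func validLocation(lat, long float64) bool {
        return !math.IsNaN(lat) && !math.IsNaN(long)
    }

    func main() {
        fmt.Println(validLocation(48.63, -123.37)) // true
        fmt.Println(validLocation(math.NaN(), 0))  // false
    }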
Change-Id: I31fc8b9482cbd546591d553d7d8804700c7cf175
The "ref:prefix" search predicate is simply the equivalent of the
    Constraint{
        BlobRefPrefix: prefix,
    }
search constraint.
The "ref:prefix" search expression was already supported by the search
box of the web UI, but as opposed to (all, I think) other search
expressions, it was not supported server-side. Which means, it had to be
converted to a search Constraint as the above, before being sent in the
query.
This change therefore fixes this inconsistency. In addition, but
relatedly, since the map aspect relies on expressing the zoom-level as a
locrect expression, it is much simpler if the search query it uses only
has to be constructed from search expressions, and without search
constraints. So if we want to be able to support marking a single node
search with the map aspect, while dealing only with search expressions,
this change is necessary.
Fixes #939
Change-Id: Ia58f410198ecd1f7e0981321da370d687df3a120
This change makes zooming and panning on the map aspect send a new
"MapSort" search query, so that the (1000, by default) most relevant
results for the currently displayed area always appear as markers
after a zoom/pan.
This required completely "lifting up" (in React lingo) the currentSearch
state out of the Header class, and into the Index class, which should
have already been done in a56372830a. This
is necessary because we need the expression in the search box to be mutable
both by the user('s input) and by the map aspect, which represents the
zoom level as a trailing "locrect" predicate in the search expression.
Fixes #938
Change-Id: I0004c9ff09f03b4f1d95a35e54605689eebf0c1a
The search for the Around blobRef was using the equality operator,
instead of an ordering operator, which was incorrect.
Also, if the Around blob.Ref had not been found during the search, we
were only returning early for special cases, which is incorrect. As soon
as we know that an Around blob is wanted, and we know it has not been
found, the search should abort.
Relatedly, a formal check to enforce that Around is incompatible with
MapSorted was added.
Finally, some tests to check the Around algorithm, in the case where we
don't use a sorted source, were added.
Change-Id: I94fc984cf4130badc879cdadaba718ef6361c9b7
Since the map aspect was added to the web UI, it was discovered that it
currently does not scale well with the number of matching nodes.
The actual reason is that the search session is requesting an ever
increasing window of results, to get all results, instead of taking into
account the Continue token or Around mechanism.
However, this bug gives the opportunity to optimize the results for this
kind of interface. Instead of requesting the results in creation order,
until we get them all, we can request the set of nodes that looks the
best when displayed on the map. In other words, if there are more
results than the requested limit, the selected set of nodes should be
one that spreads over the relevant area as evenly as possible.
This kind of selection is implemented in this CL, and will be used by
queries specifying the MapSort sort type.
Related: Issue #934
Change-Id: I6eb4901b40332863f17dab1ec4bfc11f3e99092a
The Around Query field was only taken into account if the source of blobs
to search through was sorted (e.g. a slice of sorted permanodes from the
corpus).
However, since (when requested) we already do sort all the matching
blobs when they don't come from a sorted source, and we slice the
results to a range matching the requested limit, there is nothing
preventing us from centering that final slice on the requested "around".
In addition to the intrinsic feature, this offers some sort of
continuation mechanism, which makes up for the fact that using a
continuation token is also only possible in some limited cases.
This will be useful in particular for file searches, with the
DirConstraint, where we want to be able to eventually get all the
results, but in several limited batches (e.g. from the search session of
the web UI).
Change-Id: Ic9c10cac9670b43b35994e3eff13221728611c70
This change adds the "locrect" search predicate, which works like the
"loc" predicate, except it allows to specify the coordinates of a
rectangular area as a location, instead of a named location.
The coordinates are the latitude and longitude of the North-East corner
of the rectangle, followed by the latitude and longitude of the
South-West corner. For example: locrect:48.63,-123.37,46.59,-121.28
Related: issue #934
Change-Id: I0cf39c1d0b49d557b2081f07b2c8b4508ccfc052
Notably:
-do not load any markers on an empty search query, because that would
mean loading absolutely all of the items with a location, which seems
like a bad idea.
-use different markers for different nodes. For now, foursquare
checkins, file images, and files have their own marker.
-vendor in https://github.com/lvoogdt/Leaflet.awesome-markers to achieve
the above, which relies on Font Awesome, which we already have in.
icons available for the markers: http://fontawesome.io/icons/
-when no location can be inferred from the search query, set the view to
encompass all markers that were drawn.
-when a location search is known, draw a rectangle representing the
results zone.
-use thumber for image in marker popup
-use title, if possible, instead of blobRef for link text in marker
popup
-switch to directly using OpenStreetMap tiles, instead of MapBox ones.
https://storage.googleapis.com/camlistore-screenshots/Screenshot_20170622-232359.png
Change-Id: Ibc84fa988aea8b8d3a2588ee8790adf6d9b5ad7a
The picasa importer and the gphotos importer fetch items in different
ways (one using the Picasa API, and the other the Google Drive API). But
most of the time a photo object downloaded from Picasa should result in
the same file as if downloaded from the Google Photos folder in Google
Drive (which is as it should be, really). Therefore the corresponding
file schemas in Camlistore should be identical as well. When that's the
case, there is no reason for the two importers to each create a
permanode if they're going to have the same camliContent. Especially
since it would look like duplicates (in the web UI in particular).
This change fixes the gphotos importer so that when it imports a photo,
and creates a file schema for it, it looks for an existing picasa
permanode having the same file schema as a camliContent. If one is found,
it is reused, and the permanode ends up with a set of attributes that is a
merge of the picasa-based attributes and the drive-based ones.
One caveat: the files in the Google Photos folder in Google Drive are
never updated with whatever modifications are done "at the source" (i.e.
on the items in Google Photos), they always stay as they were originally
uploaded.
(https://productforums.google.com/forum/#!msg/drive/HbNOd1o40CQ/VfIJCncyAAAJ).
As a result, such photos are different when imported from Picasa and
when imported from Drive, so they result in different files in
Camlistore.
In a subsequent CL, we'll modify the picasa importer in a similar manner
as what's done in this CL.
Issues #874 #896
Change-Id: I6a3c89de1af404556f01ca61d92861933fd35158