Since we do not systematically regenerate app/publisher/js/zsearch.go
anymore (because it's now part of the web UI build process),
go test ./app/publisher/js
would fail. But we want go test ./... to succeed, so we add build tags
in ./app/publisher/js so that it gets ignored by default.
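As a reminder of the mechanism (a sketch; the tag name below is only illustrative, not necessarily the one used in app/publisher/js): a build constraint before the package clause keeps the file out of default builds and tests.

    // A sketch of a build-constrained Go file; the "js" tag here is illustrative.
    // The blank line between the constraint and the package clause is required.

    // +build js

    package js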
Change-Id: Ia77689ed937411a628e903189433b70be659e941
- correct logging calls that logged functions instead of their return values
- use ID vs Id naming
- use correct function names in comments
Change-Id: I61562cef7ebac7337ec6c85312cdf7915cb1a84b
I had intended for this to be a small change.
I was going to just add context.Context to the BlobReceiver interface,
but then I saw blob.Fetcher could also use one, so I decided to do two
in one CL.
And then it got a bit infectious and ended up touching everything.
I ended up doing SubFetch in the process by necessity.
At a certain point I finally started using context.TODO() in a few
spots, but not too many. Removing those context.TODO() calls will come
in the future. There are more blob storage interfaces lacking context, too,
like RemoveBlobs.
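The shape of the change, roughly (a hedged sketch; parameter names and exact signatures may differ from the real pkg/blob and pkg/blobserver code):

    // A hedged sketch of the new context-taking signatures.
    package sketch

    import (
        "context"
        "io"

        "perkeep.org/pkg/blob"
    )

    // Fetcher now takes a context as its first argument.
    type Fetcher interface {
        Fetch(ctx context.Context, ref blob.Ref) (io.ReadCloser, uint32, error)
    }

    // BlobReceiver does too.
    type BlobReceiver interface {
        ReceiveBlob(ctx context.Context, br blob.Ref, source io.Reader) (blob.SizedRef, error)
    }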
Updates #733
Change-Id: Idf273180b3f8e397ac5929c6d7f520ccc5cdce08
Part of the project renaming, issue #981.
After this, users will need to mv their $GOPATH/src/camlistore.org to
$GOPATH/src/perkeep.org. Sorry.
This doesn't yet rename the tools like camlistored, camput, camget,
camtool, etc.
Also, this only moves the lru package to internal. More will move to
internal later.
Also, this doesn't yet remove the "/pkg/" directory. That'll likely
happen later.
This updates some docs, but not all.
devcam test now passes again, even with Go 1.10 (which also requires
that vet checks be clean). So a bunch of vet errors are fixed in this CL
too, and a bunch of other broken tests are now fixed (they had been
introduced during the past week of merging the CL backlog).
Change-Id: If580db1691b5b99f8ed6195070789b1f44877dd4
Use go/parser to parse, go/ast to modify, and go/format to write the
types needed for publisher/js, instead of running godoc.
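A minimal sketch of that approach (not the actual generator; the input filename is only an example): parse a Go file, keep just the type declarations, and print them back with go/format.

    package main

    import (
        "go/ast"
        "go/format"
        "go/parser"
        "go/token"
        "log"
        "os"
    )

    func main() {
        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, "describe.go", nil, 0)
        if err != nil {
            log.Fatal(err)
        }
        // Keep only the declarations that declare types.
        var typeDecls []ast.Decl
        for _, d := range f.Decls {
            if gd, ok := d.(*ast.GenDecl); ok && gd.Tok == token.TYPE {
                typeDecls = append(typeDecls, gd)
            }
        }
        f.Decls = typeDecls
        if err := format.Node(os.Stdout, fset, f); err != nil {
            log.Fatal(err)
        }
    }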
Change-Id: I567fac9e6cdd0cb29fbd097b0d478fd9f35864fb
This switches most usages of the pre-1.7 context library to the
standard library context package. Remaining usages are in:
app/publisher/main.go
pkg/fs/...
Change-Id: Ia74acc39499dcb39892342a2c9a2776537cf49f1
According to https://golang.org/pkg/html/template/#ErrorCode
(ErrPredefinedEscaper), the template engine already performs
sanitization equivalent to urlquery on any pipeline, which makes
this extra urlquery call unnecessary, and possibly even harmful.
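For illustration, a hedged standalone example of that behavior (not the publisher's actual template):

    // In a URL attribute context, html/template escapes the interpolated
    // value on its own, so an explicit "| urlquery" in the pipeline is
    // redundant.
    package main

    import (
        "html/template"
        "log"
        "os"
    )

    func main() {
        t := template.Must(template.New("link").Parse(`<a href="/dl/{{.}}">download</a>`))
        // The value comes out percent-encoded for the URL context and
        // HTML-escaped for the attribute, with no explicit escaper needed.
        if err := t.Execute(os.Stdout, "a file & more.zip"); err != nil {
            log.Fatal(err)
        }
    }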
Change-Id: I7fcce395bf015b64022d1ac66b42069cdefb69eb
Apparently the html escaper was not being applied in the right order/place,
and the auto-escaper does a better job at escaping than we do?
For reference: https://go-review.googlesource.com/c/go/+/37880
I have noticed some odd results when trying other escaping combinations,
but that's a Go problem, if any, that I will investigate independently.
Fixes #960
Change-Id: I43e6eca26404d0bcbc7a48764393186f1d112dbc
If we keep app/publisher/js/zsearch.go, then when running make.go, it first gets mirrored into
CAMLI_ROOT/tmp/build-gopath-nosqlite/src/camlistore.org/app/publisher/js/zsearch.go,
with the same modtime as app/publisher/js/zsearch.go.
Then in genSearchTypes,
CAMLI_ROOT/tmp/build-gopath-nosqlite/src/camlistore.org/app/publisher/js/zsearch.go gets
regenerated anyway, because its modtime is older than that of
CAMLI_ROOT/tmp/build-gopath-nosqlite/src/camlistore.org/pkg/search/describe.go.
Then in a subsequent run of make.go, the mirroring overwrites the newly regenerated file,
because its modtime is different from (indeed, newer than) that of
app/publisher/js/zsearch.go.
And the cycle repeats ad vitam aeternam.
I think if we added app/publisher/js/zsearch.go to mirrorIgnored, it
might fix the problem, but I don't see the point in even keeping
zsearch.go in git at all, so I propose with this CL to simply remove it.
If removed, it is simply generated at the first make.go run, and never
again afterwards (except if pkg/search/describe.go changes).
Fixes #957
Change-Id: Ia6fcc50ce33513a2003809783fc323ab36a60b52
When the DownloadHandler handles a zip archive request, it writes the
zip archive being built directly to the network response, in order to
avoid writing any of the files to disk or to memory. As a consequence,
once the archive has started being built, if an error occurs while
reading one of the files, the handler can't properly report it to the
client, as the response has already started being sent.
In this change, if the DownloadHandler's Fetcher is a caching fetcher,
we try reading all the requested files before starting the zip archive.
As a result, we can error early and properly if any of the files can't
be read. And since the blobs get cached as they are read the first time,
reading them a second time when actually building the archive should not
be too costly.
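A rough sketch of the idea (hypothetical names, not the actual DownloadHandler code): with a caching fetcher, drain every requested file once up front, so a read error can still become a proper error reply before the zip response starts streaming.

    package download

    import (
        "fmt"
        "io"
        "io/ioutil"
    )

    // fetcher is a stand-in for the blob fetching interface.
    type fetcher interface {
        Fetch(ref string) (io.ReadCloser, error)
    }

    // precheck drains each requested file once; a caching fetcher keeps the
    // bytes around, so the second read while building the zip is cheap.
    func precheck(f fetcher, refs []string) error {
        for _, ref := range refs {
            rc, err := f.Fetch(ref)
            if err != nil {
                return fmt.Errorf("cannot read %s: %v", ref, err)
            }
            _, err = io.Copy(ioutil.Discard, rc)
            rc.Close()
            if err != nil {
                return fmt.Errorf("error reading %s: %v", ref, err)
            }
        }
        return nil
    }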
Change-Id: I6477d82149b08b1db1471ca9ad77fef254929db0
Change the CSS to allow variable-width images, keeping only the height
constant. Use a fixed limit, maxThumbWidthRatio, to bound thumbnail
image width for panoramic images.
Fixes #902
Change-Id: I60c16c63b018680885c67f00b47e2e96c8dba47e
Also remove the scheme variable from the template that
tells which protocol the backend uses to talk to Camlistore.
Fixes #920
Change-Id: Ia25e99d0f1b77077158f761f11a5c6bacfa2dc3b
We want to add a feature for clients (the web UI), where they can select
a bunch of files and ask the server for a zip archive of all these files.
This CL modifies the DownloadHandler so it does exactly that upon
reception of a POST request with a query parameter of the form
files=sha1-foo,sha1-bar,sha1-baz
This CL also adds a new button to the contextual sidebar of the web UI,
that takes care of sending the download request to the server.
Known limitation: only permanodes with a file as their camliContent are
accepted as a valid selection (i.e. no sets, static-dirs, etc.) for
now.
Implementation detail:
We're creating an ephemeral DOM form on the fly to send the request.
The reason is: if we sent it as a Go http request, we'd have to read
the response manually and then we'd have no way of writing it to disk.
If we did it with an xhr, we could write the response to disk by
creating a File or Blob and then using URL.createObjectURL(), but we'd
have to keep the response in memory while doing so, which is
unacceptable for sufficiently large archives.
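A hedged sketch of what the server side of that request can look like (illustrative names, not the actual handler code):

    package zipdl

    import (
        "archive/zip"
        "io"
        "net/http"
        "strings"
    )

    // serveZip streams a zip of the requested files. fetchFile is a
    // hypothetical callback that writes the bytes of the file with the
    // given blob ref to w.
    func serveZip(w http.ResponseWriter, r *http.Request, fetchFile func(w io.Writer, ref string) error) {
        if r.Method != "POST" {
            http.Error(w, "POST required", http.StatusBadRequest)
            return
        }
        refs := strings.Split(r.FormValue("files"), ",")
        w.Header().Set("Content-Type", "application/zip")
        zw := zip.NewWriter(w)
        defer zw.Close()
        for _, ref := range refs {
            // The real handler names entries after the files, not their refs.
            fw, err := zw.Create(ref)
            if err != nil {
                return // the response has already started; no clean error report
            }
            if err := fetchFile(fw, ref); err != nil {
                return
            }
        }
    }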
Fixes #899
Change-Id: I104f7c5bd10ab3369e28d33752380dd12b5b3e6b
In other words, when on the camliRoot node, there is an attribute such
as:
camliPath:foo = sha1-foo
where sha1-foo is a permanode that is not a set (i.e. it does not have
camliMembers or camliPaths), and typically is a permanode with some
camliContent.
Change-Id: Ib827130bb2456c4c0d7bfb40e40a425515ee1bde
Instead of reporting "404 not found", make it clear
the publisher is working, but needs an explicit path.
Change-Id: Ic686b82335ba36e0649dd563831b1221a8579e0d
As the requests to the publisher are proxied through Camlistore's app
handler, there's no point in the publisher having its own autocert
Manager to request a certificate. Therefore, the publisher reuses
(readonly) camlistored's autocert CacheDir to get its certificate.
It follows that, for now, Let's Encrypt only works for the publisher if
it is running on the same host as camlistored (or more precisely, if they
share the same filesystem).
Fixes #458
Change-Id: Icf3be2913f85f9ec6f94b831ad58e1949b4d6961
Or to be more precise, golang.org/x/crypto/acme/autocert
The default behaviour regarding HTTPS certificates changes as follows:
1) If the high-level config does not specify a certificate, the
low-level config used to be generated with a default certificate path.
This is no longer the case.
2) If the low-level config does not specify a certificate, we used to
generate self-signed ones at the default path. This is no longer always
the case: we only do this if our hostname does not look like an FQDN;
otherwise we try Let's Encrypt.
3) As a result, if the high-level config does not specify a certificate,
and the hostname looks like an FQDN, it is no longer the case that we'll
generate a self-signed certificate; Let's Encrypt will be tried instead.
To sum up, the new rules are:
If cert/key files are specified and found, use them.
If cert/key files are specified but not found, and they are the default
values, generate them (a self-signed CA used as the cert), and use them.
If cert/key files are not specified, use Let's Encrypt if we have an
FQDN; otherwise generate a self-signed certificate.
Regarding cert caching:
On non-GCE, store the autocert cache dir in
osutil.CamliConfigDir()/letsencrypt.cache
On GCE, store in /tmp/camli-letsencrypt.cache
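As an illustration of the autocert side of this (a hedged sketch, not camlistored's actual wiring; the hostname is hypothetical):

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "path/filepath"

        "golang.org/x/crypto/acme/autocert"

        "camlistore.org/pkg/osutil"
    )

    func main() {
        m := autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("example.com"), // hypothetical FQDN
            Cache:      autocert.DirCache(filepath.Join(osutil.CamliConfigDir(), "letsencrypt.cache")),
        }
        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: &tls.Config{GetCertificate: m.GetCertificate},
        }
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }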
Fixes #701
Fixes #859
Change-Id: Id78a9c6f113fa93e38d690033c10a749d1844ea6
When receiving a file, we were only trying to guess its MIME type
through its contents (pkg/magic). We're now making a better effort at it
by guessing from the filename extension if needed.
Also:
pkg/magic: get rid of all the extra video extensions that are already
covered by mime.TypeByExtension, because they're redundant and
confusing.
app/publisher, pkg/types/camtypes: also use mime.TypeByExtension as an
extra effort, especially since a reindex would be necessary to benefit
from the pkg/index change.
There are other places in Camlistore that could use such an effort.
Maybe we should have a camtypes.*FileInfo.MIME() method that tries all
the ways to guess the MIME type of the file?
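The combined guessing logic, as a hedged sketch (the real code sniffs with pkg/magic; net/http's detector stands in for it here):

    package mimeguess

    import (
        "mime"
        "net/http"
        "path/filepath"
    )

    // guessMIME tries the file's first bytes, then falls back to the
    // filename extension.
    func guessMIME(filename string, firstBytes []byte) string {
        if t := http.DetectContentType(firstBytes); t != "application/octet-stream" {
            return t
        }
        return mime.TypeByExtension(filepath.Ext(filename))
    }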
Change-Id: Ib9a2bc42af77c5394dac578ae415524b5111ad4e
To decide whether a search submitted to the app search proxy is allowed,
we compare its results to the domain blobs (the results of the master
query), which we cache when the master query is set.
However, since the results of the master query are liable to change when
new blobs arrive (e.g. a new camliMember is added to the set that is
published), that cache may need to be invalidated. Otherwise, we might
reply with a 403 to a search query that is actually allowed.
Therefore, this CL adds a refresh of the cache in two cases:
- When the app handler gets a search query that seems to be forbidden.
Before replying with a 403, we refresh the cache with the master query,
and recheck whether the search query is allowed.
- When the publisher gets a request for a "members" page, or the "file"
page, it preemptively asks the app handler to refresh. Now that a lot of
the client workflow has been moved to javascript/the browser, these
kinds of requests should not happen too often, so it seems a reasonable
place to ask for a refresh. But this might change, so we should of
course be careful not to flood the app handler with refresh requests in
the future.
In any case, the app handler throttles the refresh requests, so that it
does not perform refreshes more than once per minute.
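That suppression can be pictured like this (a hedged sketch with hypothetical names, not the actual app handler code):

    package appproxy

    import (
        "sync"
        "time"
    )

    type domainCache struct {
        mu          sync.Mutex
        lastRefresh time.Time
        blobs       map[string]bool // blob refs allowed by the master query
    }

    // maybeRefresh re-runs the master query at most once per minute,
    // however often a refresh is requested.
    func (c *domainCache) maybeRefresh(runMasterQuery func() (map[string]bool, error)) error {
        c.mu.Lock()
        defer c.mu.Unlock()
        if time.Since(c.lastRefresh) < time.Minute {
            return nil // refreshed recently; suppress this request
        }
        blobs, err := runMasterQuery()
        if err != nil {
            return err
        }
        c.blobs = blobs
        c.lastRefresh = time.Now()
        return nil
    }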
As a smarter approach, we could later imagine a way for the app handler
to be aware of when new blobs get to the blobserver (akin to the blob
hub that the sync handler uses?), so that it only ever refreshes when
needed.
Fixes #851
Change-Id: Idc14cce5018053deac01ec454e5c936ed93e5a05
So far only images were served with their MIME types set properly, so
they would display directly in the browser, instead of being served as a
file download.
Now the same is done for a subset of text types: text/plain,
text/html, text/xml, and text/json. Aside from the browsing convenience,
the obvious advantage is being able to serve HTML directly, which should
allow us to build other things on top of the publisher.
Also a bit of related refactoring: moving the extension matching to
pkg/magic.
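The whitelisting amounts to something like this (a hedged sketch, not the publisher's actual code):

    package publish

    // inlineTextType reports whether a MIME type is in the small whitelist
    // of text types served inline rather than as a download.
    func inlineTextType(mimeType string) bool {
        switch mimeType {
        case "text/plain", "text/html", "text/xml", "text/json":
            return true
        }
        return false
    }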
Change-Id: Id98065c7c685036a272d1d2e293bfcbca33015ee
Since the app handler should not trim the handler's prefix from
r.URL.Path, it is now the responsibility of the app to cope with that
prefix.
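One way for an app to cope with the prefix (a hedged sketch with a hypothetical prefix and handler, not the publisher's actual code):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // The prefix under which Camlistore's app handler proxies to this app.
        const prefix = "/pics/"
        appMux := http.NewServeMux()
        appMux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "path seen by the app, prefix stripped: %s\n", r.URL.Path)
        })
        http.Handle(prefix, http.StripPrefix(prefix, appMux))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }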
Fixes #833
Change-Id: Ie1fa9801b26767c3e3b6612498380261e22cdf07
Some of the publisher features have moved from the server-side app to
the client-side app (the browser) thanks to gopherjs. Some of these
features involve doing search queries against Camlistore, which
requires authentication. The server-side app receives the necessary
credentials on creation, from Camlistore. However, we can't just
communicate them to the client-side (as we do with the web UI) since the
publisher app itself does not require any auth and is supposed to be
exposed to the world.
Therefore, we need to allow some search queries to be done without
authentication.
To this end, the app handler on Camlistore now assumes a new role: it is
also a search proxy for the app. The app sends an unauthenticated search
query to the app handler (instead of directly to the search handler),
and it is the role of the app handler to verify that this query is
allowed for the app and, if so, to forward the search to Camlistore's
search handler.
We introduce a new mechanism to filter the search queries in the form of
a master query. Upon startup, the publisher registers, using the new
CAMLI_APP_MASTERQUERY_URL env var, a *search.SearchQuery with the app
handler. The app handler runs that query and caches all the blob refs
included in the response to that query. From then on, all incoming
search queries are run by the app handler, which checks that none of the
response blobs fall outside the set defined by the aforementioned cached
blob refs. If that check fails, the search response is not forwarded to
the app/client.
The process can be improved in a subsequent CL (or patchset), with
finer-grained domains, i.e. a master search query per published camliPath,
instead of one for the whole app handler.
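The core of that check looks roughly like this (a hedged sketch with hypothetical names, not the actual app handler code):

    package appproxy

    // allowed reports whether a search response may be forwarded: every blob
    // it mentions must be within the domain defined by the master query.
    func allowed(domain map[string]bool, responseBlobs []string) bool {
        for _, ref := range responseBlobs {
            if !domain[ref] {
                return false // out of the published domain: reject the query
            }
        }
        return true
    }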
Change-Id: I00d91ff73e0cbe78744bfae9878077dc3a8521f4
This change allows the publisher to use resources from a SourceRoot
directory, without having to rebuild the publisher binary, instead of
only using embedded resources.
Change-Id: Ife29e3015b8595a33f175a62d98fcf5ffa689134
Using go:generate to call a shell script with some go doc + sed hackery.
We could probably do it better with go/types later if needed.
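The mechanism, for reference (a sketch; the script name is hypothetical):

    // A go:generate directive in the package invokes the shell script, so
    // running "go generate" over the package regenerates the types.
    package js

    //go:generate ./gen-search-types.sh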
Change-Id: Ie1cf04d418b8b498f83f7029eb736dbc779feeb5
Done with gopherjs and jquery.
Some build tagging added in pkg/schema and pkg/netutil because
gopherjs does not support cgo (so no os/user).
Issue #798
Change-Id: Ib1e1e94185f75cdf696aa2dd31c57fa9e3af84a1
Run gopherjs to generate trivial JavaScript code that is used by the
publisher app.
Context:
https://github.com/camlistore/camlistore/issues/798#issuecomment-226902924
github.com/gopherjs vendored in at rev
f3c437955da554f2643747a598b0cc772a749f3f
PLEASE NOTE that this copy of gopherjs has been modified to avoid
depending on fsnotify. Hence the -w flag and the gopherjs serve command
are most likely broken.
Diff for that modification:
https://gist.github.com/mpl/ac9033bb28207401b7cedc3d74e6c096
Dependencies for building gopherjs:
kardianos/osext 29ae4ffbc9a6fe9fb2bc5029050ce6996ea1d3bc
neelance/sourcemap 8c68805598ab8d5637b1a72b5f7d381ea0f39c31
spf13/cobra c678ff029ee250b65714e518f4f5c5cb934955de
spf13/pflag 7f60f83a2c81bc3c3c0d5297f61ddfa68da9d3b7
golang.org/x/crypto/ssh/terminal
c197bcf24cde29d3f73c7b4ac6fd41f4384e8af6
golang.org/x/tools/go/types/typeutil
ac02106e04bdb66a2db0413d931012bea165d7e0
github.com/gopherjs/jquery vendored in at
fbbfc4bbe29a29cb05788b66be44e0ac7f43cac7
jquery vendored in at 2.2.3
Change-Id: I7ff2d8e43e8a963f5ac1d13a2c936f263f7c53fc