* Support subpaths when serving stash through a reverse proxy
* Add README documentation
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Add indexes for path and checksum to images
The scenes table has unique indexes/constraints on the path and checksum
columns. The images table doesn't, which doesn't really make sense, as
scanning uses these columns extensively, which warrants an index, and both
should be unique as well.
Adding these indexes thus heavily improves the performance of the scanning
tasks. On a database containing 4700 images, a (re)scan of those 4700
files, which thus shouldn't do anything, took 1.2 seconds; with the indexes
added this only takes 0.4 seconds. Running the same test on a generated
database containing 4M images plus the actual 4700 images, a rescan took 26
minutes, while with the index in place it again takes only 0.4 seconds.
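A minimal sketch of the indexes this adds (the index names are illustrative and the actual migration in stash may differ):

```go
package database

import "database/sql"

// addImageIndexes sketches the indexes described above: unique indexes on
// the images table matching the ones the scenes table already has, so that
// path/checksum lookups during scanning can use an index.
func addImageIndexes(db *sql.DB) error {
	for _, stmt := range []string{
		`CREATE UNIQUE INDEX index_images_on_path ON images (path)`,
		`CREATE UNIQUE INDEX index_images_on_checksum ON images (checksum)`,
	} {
		if _, err := db.Exec(stmt); err != nil {
			return err
		}
	}
	return nil
}
```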
* Add images.checksum unique constraint in code with fallback
Work around the issue where in some cases duplicate images (/checksums on
images) might exist, as discussed in #1740, by creating the index on
startup and logging the duplicates in case of an error. That way users
affected by this scenario can correct the database (by searching on the
logged checksum(s) and removing the duplicates), and after a restart the
unique index / constraint will still be created. When creating the unique
index fails, a "normal" / non-unique index is created as a surrogate so the
user still gets the performance benefit (for example during scanning)
without being forced to remove the duplicates and restart beforehand. This
surrogate is automatically cleaned up after the unique index is
successfully created.
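A rough sketch of that startup logic, with illustrative index and function names (the actual stash code differs):

```go
package database

import "database/sql"

// ensureImagesChecksumIndex tries to create the unique checksum index; if
// that fails because duplicate checksums exist, it logs them and falls
// back to a non-unique surrogate index so scanning still benefits.
func ensureImagesChecksumIndex(db *sql.DB, logf func(format string, args ...interface{})) error {
	_, err := db.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS images_checksum_unique ON images (checksum)`)
	if err == nil {
		// unique index created; drop a previously created surrogate, if any
		_, _ = db.Exec(`DROP INDEX IF EXISTS images_checksum`)
		return nil
	}
	logf("unable to create unique checksum index: %v", err)

	// log the duplicate checksums so the user can clean them up manually
	rows, err := db.Query(`SELECT checksum FROM images GROUP BY checksum HAVING COUNT(*) > 1`)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var checksum string
		if err := rows.Scan(&checksum); err != nil {
			return err
		}
		logf("duplicate image checksum: %s", checksum)
	}
	if err := rows.Err(); err != nil {
		return err
	}

	// non-unique surrogate index so scanning still gets the lookup benefit
	_, err = db.Exec(`CREATE INDEX IF NOT EXISTS images_checksum ON images (checksum)`)
	return err
}
```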
NORMAL sync is safe when using WAL journaling.
It cuts the number of sync calls to the disk, resulting in faster write
operations. On power loss, the database might lose some in-flight commits,
but that is all.
The expectation is that if the database lives on the same disk as the
stash, this could help performance quite a bit under heavier operation.
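For illustration, this is roughly what the pragma setup looks like when opening the database (a sketch, assuming the mattn/go-sqlite3 driver):

```go
package database

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3" // SQLite driver
)

// openDatabase sketches the settings described above: WAL journaling
// combined with synchronous=NORMAL, which reduces the number of fsyncs per
// write at the cost of possibly losing the most recent commits on power
// failure, without corrupting the database.
func openDatabase(path string) (*sql.DB, error) {
	db, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	for _, pragma := range []string{
		`PRAGMA journal_mode = WAL`,
		`PRAGMA synchronous = NORMAL`,
	} {
		if _, err := db.Exec(pragma); err != nil {
			db.Close()
			return nil, err
		}
	}
	return db, nil
}
```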
* Avoid redundant logging in migrations
Return the error and let the caller handle the logging of the error if
needed.
While here, defer m.Close() to the function boundary.
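A sketch of the resulting pattern, assuming the golang-migrate library (the actual function in stash differs):

```go
package database

import (
	"fmt"

	"github.com/golang-migrate/migrate/v4"
)

// runMigrations closes the migrator via defer at the function boundary and
// returns errors to the caller instead of logging them here.
func runMigrations(m *migrate.Migrate) error {
	defer m.Close()

	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		return fmt.Errorf("running migrations: %w", err)
	}
	return nil
}
```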
* Treat errors as values
Use %v rather than %s and pass the errors directly.
* Generate a wrapped error on stat-failure
* Log 3 unchecked errors
Rather than ignore errors, log them at
the WARNING log level.
The server has been functioning without these, so assume they are not at
the ERROR level.
* Propagate errors upward
Failure in path generation was ignored. Propagate the errors up the call
stack, so they can be handled at the level of orchestration.
* Warn on errors
Log errors rather than quenching them.
Errors are logged at the Warn-level for now.
* Check error when creating test databases
Use the builtin log package and stop the program fatally on error.
* Add warnings to unchecked task errors
Focus on the task system in a single commit, logging unchecked
errors as warnings.
* Warn-on-error in API routes
Look through the API routes, and make sure errors are being logged if
they occur. Prefer the Warn-log-level because none of these has proven
to be fatal in the system up until now.
* Propagate error when adding Util API
* Propagate error on adding util API
* Return unhandled error
* JS log API: propagate and log errors
* JS Plugins: log GQL addition failures.
* Warn on failure to write to stdin
* Warn on failure to stop task
* Wrap viper.BindEnv
The current viper code only errors if no name is provided, so it should
never fail. Rewrite the code flow to factor through a panic-function.
This removes error warnings from this part of the code.
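A minimal sketch of such a panic-wrapper (the function name is illustrative):

```go
package config

import (
	"fmt"

	"github.com/spf13/viper"
)

// mustBindEnv wraps viper.BindEnv: BindEnv only errors when no key is
// supplied, so instead of checking the error at every call site, panic if
// it ever fails.
func mustBindEnv(key string, envVar string) {
	if err := viper.BindEnv(key, envVar); err != nil {
		panic(fmt.Sprintf("unable to bind env var %s: %v", envVar, err))
	}
}
```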
* Log errors in concurrency test
If we can't initialize the configuration, treat the test as a failure.
* Warn on errors in configuration code
* Plug an unchecked error in gallery zip walking
* Warn on screenshot serving failure
* Warn on encoder screenshot failure
* Warn on errors in path-handling code
* Undo the errcheck on configurations for now.
* Use one-line initializers where applicable
rather than using
err := f()
if err != nil { ...
prefer the shorter
if err := f(); err != nil { ...
when f() isn't too long a name, or wraps a function with a body.
* Execute Gallery.Create.Post plugin hook during scan
Fix issue where the Gallery.Create.Post hook is not executed when a new
gallery is created during a scan, in the case where the gallery is created
based on the folder.
* Fix Gallery.Create.Post hook being invoked in transaction
Invoke the Gallery.Create.Post hook during zip scan after the transaction
has been committed. This is necessary to allow the plugin to access the
gallery (using the GraphQL API). Otherwise the API uses a different
database transaction which can't find the gallery, as it isn't committed
yet.
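A sketch of that ordering, with illustrative parameter names rather than the actual stash signatures:

```go
package scan

import "context"

// createGalleryAndFirePostHook creates the gallery inside the transaction,
// but only invokes the Gallery.Create.Post plugin hook after the
// transaction has been committed, so the plugin can see the gallery
// through the GraphQL API.
func createGalleryAndFirePostHook(
	ctx context.Context,
	withTxn func(ctx context.Context, fn func() error) error,
	createGallery func() (galleryID int, err error),
	executePostHook func(ctx context.Context, galleryID int),
) error {
	var galleryID int

	if err := withTxn(ctx, func() error {
		var err error
		galleryID, err = createGallery()
		return err
	}); err != nil {
		return err
	}

	// the transaction is committed at this point, so the hook (and any
	// GraphQL queries the plugin makes) can see the new gallery
	executePostHook(ctx, galleryID)
	return nil
}
```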
* Fix logs from scraper and plugins not being shown in UI
Using `logger.` in the logger package to write logs is "incorrect", because
the package contains a variable named `logger` which holds the logrus
instance. So instead of the log line being handled by the custom log
implementation / wrapper, which makes sure the lines are shown in the UI as
well, it's written to logrus directly, meaning the wrapper is skipped.
This "issue" only occurs here because anywhere else `logger.X` uses the
custom logger package / wrapper, which works fine.
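A minimal sketch of why the wrapper gets skipped (the real logger package is more involved):

```go
package logger

import "github.com/sirupsen/logrus"

// logger is the underlying logrus instance; the package also exposes
// wrapper functions (Error, Info, ...) that additionally append lines to
// the in-memory log that the UI reads.
var logger = logrus.New()

// Error is the wrapper that should be used: it writes to logrus and also
// records the line for the UI.
func Error(args ...interface{}) {
	logger.Error(args...)
	// ...also append to the UI log buffer (omitted in this sketch)...
}

// Inside this package, writing `logger.Error("...")` bypasses the wrapper
// above and goes straight to logrus, so the message never reaches the UI.
```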
* Add plugin / scraper name to logging output
Indicate which plugin / scraper wrote a log message by including its
name in the `[Scrape]` prefix.
* Add missing addLogItem call
* Generate screenshot images for markers
In some scenarios it might not be possible to use the preview video or
image of markers, e.g. when only static images are supported, as in
Kodi. So generate a static screenshot as well.
* Make generating animated and static image optional for markers
* Use screenshot for markers when preview type is set to static image
Add some form of backwards compatibility in the UI for
`IHierarchicalLabeledIdCriterion`. With this backwards compatibility
it's possible to recall saved filters which use the tags filter.
Otherwise things would break, because the saved filter has a different
structure than what `IHierarchicalLabeledIdCriterion` expects.
* Add migration to create studio aliases table
* Refactor studioQueryBuilder.Query to use filterBuilder
* Expand GraphQL API with aliases support for studio
* Add aliases support for studios to the UI
* List aliases in details panel
* Allow editing aliases in edit panel
* Add 'aliases' filter when searching
* Find studios by alias in filter / select
* Add auto-tagging based on studio aliases
* Support studio aliases for filename parsing
* Support importing and exporting of studio aliases
* Search for studio alias as well during scraping
* Add migration script for tag relations table
* Expand hierarchical filter features
Expand the features of the hierarchical multi input filter with support
for using a relations table, which only has parent_id and child_id
columns, and support adding an additional intermediate table to join on,
for example for scenes and tags which are linked by the scenes_tags
table as well.
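As a sketch, this is the shape of query the filter can now build for scenes filtered by tags, assuming the table and column names described above (the real query builder assembles this dynamically):

```go
package sqlite

// hierarchicalTagsQuery walks the relations table (parent_id, child_id)
// recursively and joins the result through the intermediate scenes_tags
// table to reach scenes.
const hierarchicalTagsQuery = `
WITH RECURSIVE tag_tree (id) AS (
  SELECT id FROM tags WHERE id IN (?)
  UNION
  SELECT tr.child_id FROM tags_relations tr
  INNER JOIN tag_tree tt ON tt.id = tr.parent_id
)
SELECT s.* FROM scenes s
INNER JOIN scenes_tags st ON st.scene_id = s.id
INNER JOIN tag_tree tt ON tt.id = st.tag_id
`
```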
* Add hierarchical filtering for tags
* Add hierarchical tags support to scene markers
Refactor filtering of scene markers to filterBuilder and in the process
add support for hierarchical tags as well.
* List parent and child tags on tag details page
* Support setting parent and child tags
Add support for setting parent and child tags during tag creation and
tag updates.
* Validate no loops are created in tags hierarchy
* Update tag merging to support tag hierarchy
* Add unit tests for tags.EnsureUniqueHierarchy
* Fix applying recursive to with clause
The SQL `RECURSIVE` of a `WITH` clause only needs to be applied once,
immediately after the `WITH`. So this fixes the query building to do just
that, automatically applying the `RECURSIVE` keyword when any of the added
with clauses is added as recursive.
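A small illustrative example of the corrected form (table names are illustrative), where the single `RECURSIVE` immediately after `WITH` covers all added clauses:

```go
package sqlite

// recursiveWithExample: even when several WITH clauses are combined and
// only one of them is recursive, the RECURSIVE keyword appears exactly
// once, immediately after WITH.
const recursiveWithExample = `
WITH RECURSIVE
  tag_tree (id) AS (
    SELECT id FROM tags WHERE id = ?
    UNION
    SELECT tr.child_id FROM tags_relations tr
    INNER JOIN tag_tree tt ON tt.id = tr.parent_id
  ),
  tag_counts AS (
    SELECT tag_id, COUNT(*) AS n FROM scenes_tags GROUP BY tag_id
  )
SELECT tt.id, tc.n FROM tag_tree tt
LEFT JOIN tag_counts tc ON tc.tag_id = tt.id
`
```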
* Rename hierarchical root id column
* Rewrite hierarchical filtering for performance
Completely rewrite the hierarchical filtering to optimize for
performance. Doing the recursive query in combination with a complex query
seems to stop SQLite from optimizing some things, which means that the
recursive part might be 2.5 seconds slower than adding a static
`VALUES()` list. This is mostly noticeable for the tag hierarchy, where
setting an exclusion with any depth (or depth: all) has this 2.5 second
performance impact. "Include" also suffered from this issue, but rewriting
the query to join in the *_tags table in one pass and applying a
`WHERE x IS NOT NULL` filter did seem to optimize that case. That
optimization can't be applied to the `IS NULL` filter of "exclude",
however. Running a simple query beforehand to get all (recursive) items and
then applying them to the main query doesn't have this performance penalty.
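A sketch of that two-step approach, assuming the table names used earlier (the real query builder assembles this dynamically):

```go
package sqlite

import "database/sql"

// collectDescendantTagIDs runs the recursive part on its own first; the
// resulting IDs are then fed into the main filter query as a static list
// instead of embedding the recursive CTE into the already complex query.
func collectDescendantTagIDs(db *sql.DB, rootID int) ([]int, error) {
	rows, err := db.Query(`
WITH RECURSIVE tag_tree (id) AS (
  SELECT id FROM tags WHERE id = ?
  UNION
  SELECT tr.child_id FROM tags_relations tr
  INNER JOIN tag_tree tt ON tt.id = tr.parent_id
)
SELECT id FROM tag_tree`, rootID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var ids []int
	for rows.Next() {
		var id int
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```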
* Remove UI references to child studios and tags
* Add parents to tag export
* Support importing of parent relationship for tags
* Assign stable ids to parent / child badges
* Silence Apollo warning on parents/children fields on tags
Silence warning triggered by Apollo GraphQL by explicitly instructing it
to use the incoming parents/children values. By default it already does
this, but it triggers a warning as it might be unintended that it uses
the incoming values (instead of for example merging both arrays).
Setting merge to false still applies the same behaviour (use only
incoming values) but silences the warning as it's explicitly configured
to work like this.
* Rework detecting unique tag hierarchy
Completely rework the unique tag hierarchy detection to catch invalid
hierarchies where a tag is "added in the middle". So when there are tags
A <- B and A <- C, you could previously edit tag B and add tag C as a sub
tag without it being noticed that parent A would be applied twice (to tag
C), while afterwards saving tag C would fail as tag A was applied as parent
twice. The updated code correctly detects this scenario as well.
Furthermore the error messaging has been reworked a bit, and the message
now mentions both the direct parent / sub tag as well as the tag which
would result in the error. So in the above example it would now show the
message that tag C can't be applied because tag A already is a parent.
* Update relations on cached tags when needed
Update the relations on cached tags when a tag is created / updated /
deleted so these always reflect the correct state. Otherwise (re)opening
a tag might still show the old relations until the page is fully
reloaded or the list is navigated. That is obviously strange when, for
example, you have tag A, create or update tag B to have a relation to
tag A, and from tag B's page click through to tag A, which doesn't show
that it is linked to tag B.
The strings.Replace function counts the occurrences to replace; if there
are none, the original string is returned. Hence, there is no need to check
whether a replacement will happen before doing the work.
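For illustration:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	s := "no match here"

	// Redundant: strings.Replace already returns s unchanged when there
	// is nothing to replace, so the Contains check adds nothing.
	if strings.Contains(s, "old") {
		s = strings.Replace(s, "old", "new", -1)
	}

	// Equivalent and simpler:
	s = strings.Replace(s, "old", "new", -1)

	fmt.Println(s)
}
```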
* Remove stuff which isn't being used
Some fields, functions and structs aren't in use by the project. Remove
them for janitorial reasons.
* Remove more unused code
All of these functions are currently not in use. Clean up the code by
removal, since the version control has the code if need be.
* Remove unused functions
There's a large set of unused functions and variables in the code base.
Remove these, so it is clearer what code to support going forward.
Dead code has been eliminated.
Where applicable, comment const-sections in tests, so reserved
identifiers are still known.
* Fix use-def of tsURL
The first def of tsURL doesn't matter because there's no use before
we hit the 2nd def.
* Remove dead code assignment
Setting logFile = "" is effectively dead code, because there's no use
of it later.
* Comment out found
The variable 'found' is dead in the function (because no post-process
action is following it). Comment it for now.
* Comment dead code in tests
These might provide hints as to what isn't covered at the moment.
* Dead code removal
In the case of constants where iota is involved, move the iota so it
matches the current key values.
This avoids problems with persistently stored key IDs.
* Bump Go to 1.17, refactor build/x86_64 Dockerfile to make better use of multi-stage
* Bump to 1.17 from 1.16
* Bump packr version, provide needed legacy env var
* Add apple silicon support, fix macos build chain
* Update unused travis ci
* Fix error string capitalization
Error strings often follow another string. Hence, they should not be
capitalized, unless referencing a name.
* Uncapitalize more error strings
While here, use %v on the error directly, which makes it easier to wrap
the error later with %w if need be.
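An illustrative example (not taken from the stash code) of combining lowercase error strings with `%w` wrapping:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func loadConfig(path string) error {
	if _, err := os.Stat(path); err != nil {
		// lowercase message, error passed as a value; %w keeps the original
		// error available for errors.Is / errors.As at the call site
		return fmt.Errorf("reading config %s: %w", path, err)
	}
	return nil
}

func main() {
	if err := loadConfig("/nonexistent/config.yml"); err != nil {
		fmt.Printf("error: %v\n", err)
		fmt.Println("not found:", errors.Is(err, os.ErrNotExist))
	}
}
```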
* Uncapitalize more error strings
While here, rename Url to URL as a nitpick.
* When stopping, close the database
This patch is likely to cause erroneous behavior in the application.
This is because the application doesn't gracefully shut down, but is
forcefully terminated.
However, the purpose is to uncover what needs to be done, to make
it a more graceful shutdown.
The call to p.ExecuteTask happens in a separate go-routine. It writes,
under m.mutex, into the job's details. However, the test goroutine
itself reads j.Details which is a read race.
Protect the reads in the test by the lock.
This makes `package job` pass `go test -race`
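A condensed sketch of the pattern (illustrative, not the actual job test):

```go
package job

import (
	"sync"
	"testing"
)

func TestJobDetails(t *testing.T) {
	var mu sync.Mutex // stands in for the manager's mutex
	var details []string

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// the worker writes the job details under the mutex
		mu.Lock()
		details = append(details, "processing")
		mu.Unlock()
	}()

	// reading details without the lock here is what `go test -race`
	// reports as a data race; take the same mutex for the read
	mu.Lock()
	_ = len(details) // may be 0 or 1 depending on timing, but race-free
	mu.Unlock()

	wg.Wait()

	mu.Lock()
	defer mu.Unlock()
	if len(details) != 1 {
		t.Fatalf("expected 1 detail, got %d", len(details))
	}
}
```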
* Unify scraped types
* Make name fields optional
* Unify single scrape queries
* Change UI to use new interfaces
* Add multi scrape interfaces
* Use images instead of image
Add a new Python library, mechanicalsoup, to the docker container.
There is a pending pull request in the Community scrapers for Ifeelmyself that uses this library, so it might be worth including it in the container.
This should add ~100k to the size of the container.
* Add script offset / delay to Handy support.
Further work on #1376.
Offsets are added to the current video position, so a positive value leads to earlier motion. (The most common setting.)
This is needed because most script times have a consistent delay when compared to the video. (Delay from the API calls to the server should be handled by the server offset calculation.)
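A minimal sketch of the offset arithmetic (names are illustrative):

```go
package handy

import "time"

// scriptTime adds the configured offset to the current video position
// before looking up the script action, so a positive offset makes the
// motion happen earlier relative to the video.
func scriptTime(videoPosition, funscriptOffset time.Duration) time.Duration {
	return videoPosition + funscriptOffset
}
```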
* Rename scriptOffset to funscriptOffset
* Correct localisation keys
Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
* Rebuild image edit using formik
* Prompt on page leave when changes are not saved
* Only enables save when changes are made
* Wrap in <Form onSubmit> (not sure if this does anything)