* Log 3 unchecked errors
Rather than ignoring these errors, log them at
the WARNING log level.
The server has been functioning without these checks, so assume the
failures do not rise to the ERROR level.
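A minimal sketch of the pattern, with a hypothetical `saveState` standing in for a call whose error was previously discarded (names are illustrative, not from the codebase):

```go
package main

import (
	"errors"
	"fmt"
)

// saveState stands in for an operation whose error used to be
// discarded with `_ =`.
func saveState() error { return errors.New("disk full") }

func main() {
	// Log the failure at warning level instead of ignoring it; the
	// server has run fine despite these errors, so ERROR would
	// overstate their severity.
	if err := saveState(); err != nil {
		fmt.Printf("WARNING: saveState: %v\n", err)
	}
}
```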
* Log errors in concurrency test
If we can't initialize the configuration, treat the test as a failure.
* Undo the errcheck on configurations for now.
* Handle unchecked errors in pkg/manager
* Resolve unchecked errors
* Handle DLNA/DMS unchecked errors
* Handle error checking in concurrency test
Generalize config initialization, so we can initialize a configuration
without writing it to disk.
Use this in the test case, since otherwise the test fails when it
tries to write to disk.
* Handle the remaining unchecked errors
* Heed gosimple in update test
* Use one-line if-initializer statements
While here, fix a wrong variable capture error.
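Both fixes can be sketched together; the helper names here are illustrative. The shadowing line was required to capture a per-iteration copy of the loop variable before Go 1.22:

```go
package main

import "fmt"

func doWork() error { return nil }

// captureSum builds closures in a loop, taking a fresh copy of the
// loop variable each iteration so every closure sees its own value.
func captureSum() int {
	var fns []func() int
	for _, v := range []int{1, 2, 3} {
		v := v // fresh copy per iteration (the capture fix)
		fns = append(fns, func() int { return v })
	}
	sum := 0
	for _, f := range fns {
		sum += f()
	}
	return sum
}

func main() {
	// One-line if-initializer: err is scoped to the if statement.
	if err := doWork(); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println(captureSum()) // 6 with the fix; 9 without it pre-Go 1.22
}
```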
* testing.T doesn't support %w
Use %v instead, which is supported.
* Remove unused query builder functions
The Int/String criterion handler functions are now generalized.
Thus, there's no need to keep these functions around anymore.
* Mark filterBuilder.addRecursiveWith nolint
The function may be useful in the future, and no alternative refactor
looks clean.
Keep the function around, but tell the linter to ignore it.
* Remove utils.Btoi
There are no users of this utility function.
* Return error on scan failure
If we fail to scan the row when looking for the
unique checksum index, then report the error upwards.
* Fix comments on exported functions
* Fix typos
* Fix startup error
* Fix logs from scraper and plugins not being shown in UI
Using `logger.` to write logs inside the logger package itself is
incorrect: the package contains a variable named `logger` which holds
the logrus instance. A log line written that way goes straight to
logrus, skipping the custom log implementation / wrapper that makes
sure the lines are also shown in the UI.
The issue is easy to miss because everywhere else `logger.X` resolves
to the custom logger package / wrapper, which works fine.
* Add plugin / scraper name to logging output
Indicate which plugin / scraper wrote a log message by including its
name in the `[Scrape]` prefix.
* Add missing addLogItem call
* Unify scraped types
* Make name fields optional
* Unify single scrape queries
* Change UI to use new interfaces
* Add multi scrape interfaces
* Use images instead of image
* Find correct python executable
For script scrapers using python, both python and python3 are valid
executables, depending on the OS and running environment. To save
users from any issues, this change finds the correct executable for
them.
Co-authored-by: bnkai <bnkai@users.noreply.github.com>
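One way to sketch the lookup, assuming a preference for python3 (the exact preference order and function name here are assumptions, not the actual implementation):

```go
package main

import (
	"fmt"
	"os/exec"
)

// findPython returns the first python executable found on PATH,
// trying python3 before python.
func findPython() (string, error) {
	for _, name := range []string{"python3", "python"} {
		if path, err := exec.LookPath(name); err == nil {
			return path, nil
		}
	}
	return "", fmt.Errorf("no python executable found in PATH")
}

func main() {
	path, err := findPython()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using", path)
}
```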
* api/urlbuilders/movie: Auto format.
* graphql+pkg+ui: Implement scraping movies by URL.
This patch implements the missing required boilerplate for scraping
movies by URL, using performers and scenes as a reference.
Although this patch contains a big chunk of groundwork for enabling
scraping movies by fragment, that feature would require additional
changes to be completely implemented and was not tested.
* graphql+pkg+ui: Scrape movie studio.
Extends and corrects the movie model so it can store studio IDs and
dereference them from the studio string received from the scraper.
This was done with Scenes as a reference. For simplicity the duplication
of having `ScrapedMovieStudio` and `ScrapedSceneStudio` was kept, which
should probably be refactored to be the same type in the model in the
future.
* ui/movies: Add movie scrape dialog.
Adds possibility to update existing movie entries with the URL scraper.
For this the MovieScrapeDialog.tsx was implemented with Performers and
Scenes as a reference. In addition, DurationUtils needs to be called
once to convert seconds from the model to the string displayed in the
component. This seemed the least intrusive approach to me, as it kept
a ScrapeResult<string> type compatible with ScrapedInputGroupRow.
* Refactor xpath scraper code
* Make post-process a list
* Add map post-process action
* Add fixed xpath values
* Refactor scrapers into cache
* Refactor into mapped config
* Trim test html