Commit Graph

23 Commits

Author SHA1 Message Date
WithoutPants 964b559309 Restructure data layer (#2532)
* Add new txn manager interface
* Add txn management to sqlite
* Rename get to getByID
* Add contexts to repository methods
* Update query builders
* Add context to reader writer interfaces
* Use repository in resolver
* Tighten interfaces
* Tighten interfaces in dlna
* Tighten interfaces in match package
* Tighten interfaces in scraper package
* Tighten interfaces in scan code
* Tighten interfaces on autotag package
* Remove ReaderWriter usage
* Merge database package into sqlite
2022-09-06 07:03:40 +00:00
SmallCoccinelle c1f89611e2
Refactor scraper top half (#1893)
* Simplify scraper listing

Introduce an enum, scraper.Kind, which explains what we are looking
for. Make it possible to match this from a scraper struct.

Use the enum to rewrite all the listing code to use the same code path.

* Use a map, nitpick ScrapePerformerList

Let the cache store a map from scraper ID to the scraper. This
improves lookups when there are many scrapers, making them practically
O(1) rather than O(n).

Since range expressions work unchanged, we don't have to change much,
and things will still work.
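
A minimal sketch of the map-backed store, with illustrative names:

    package scraper

    type scraper struct{ id string }

    // Cache keys scrapers by ID, so a lookup is a single map access,
    // practically O(1), instead of a linear scan over a slice.
    type Cache struct {
        scrapers map[string]scraper
    }

    func (c Cache) get(id string) (scraper, bool) {
        s, ok := c.scrapers[id]
        return s, ok
    }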

make Kind a Stringer

Rename ScraperPerformerList -> ScraperPerformerQuery since that name
is used in the other scrapers, and we value consistency.

Tune ScraperPerformerQuery:

* Return static errors
* Use the new functionality

* When loading scrapers, do so directly

Rather than first walking the directory structure to obtain file paths,
fold the load directly into the filepath walk. This makes the code far
more direct.
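
A sketch of the folded walk; loadScraper is a hypothetical stand-in
for the real YAML loader:

    package scraper

    import (
        "io/fs"
        "path/filepath"
    )

    type scraper struct{ id string }

    // loadScraper stands in for the real YAML loader (stubbed here).
    func loadScraper(path string) (scraper, error) {
        return scraper{id: path}, nil
    }

    // loadScrapers loads each configuration while walking, rather than
    // collecting paths first and loading in a second pass.
    func loadScrapers(dir string) (map[string]scraper, error) {
        scrapers := make(map[string]scraper)
        err := filepath.WalkDir(dir, func(fp string, d fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            if d.IsDir() || filepath.Ext(fp) != ".yml" {
                return nil
            }
            s, err := loadScraper(fp)
            if err != nil {
                return err
            }
            scrapers[s.id] = s
            return nil
        })
        return scrapers, err
    }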

* Use static ErrNotFound

If a scraper isn't found, return one static error. This paves the way
for eventually doing our own error-presenter in gqlgen.
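
In Go terms this is a package-level sentinel; a sketch with
illustrative names:

    package scraper

    import "errors"

    // ErrNotFound is the one error returned on a missed lookup, which
    // an error presenter can later match with errors.Is.
    var ErrNotFound = errors.New("scraper not found")

    type scraper struct{ id string }

    func find(scrapers map[string]scraper, id string) (scraper, error) {
        s, ok := scrapers[id]
        if !ok {
            return scraper{}, ErrNotFound
        }
        return s, nil
    }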

* Store the cache in the Resolver state

Putting the scraperCache directly in the resolver avoids the need to
call manager.GetInstance() all over the place to get access to the
scraper cache. The cache is stored by pointer, so it should be safe,
since the cache will just update its internal state rather than being
overwritten.

We can now utilize the resolver state to grab the cache where needed.

While here, pass context.Context from the resolver down into a function,
which removes a context.TODO()

* Introduce ScrapedContent

Create a union in the GraphQL schema for all scraped content. This
simplifies the internal implementation because we get variance on
the output content type.

Introduce a new type ScrapedContentType which signifies the scraped
content you want as a caller.

Use these to generalize the List interface and the URL scraping
interface.
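
With gqlgen, a schema union comes out roughly as a Go interface with a
marker method; a sketch of the shape, not the generated code verbatim:

    package models

    // GraphQL: union ScrapedContent = ScrapedPerformer | ScrapedScene | ...
    type ScrapedContent interface {
        IsScrapedContent()
    }

    type ScrapedPerformer struct{ Name *string }
    type ScrapedScene struct{ Title *string }

    func (ScrapedPerformer) IsScrapedContent() {}
    func (ScrapedScene) IsScrapedContent()     {}

    // ScrapedContentType signifies what a caller wants (values illustrative).
    type ScrapedContentType string

    const (
        ScrapedContentTypePerformer ScrapedContentType = "PERFORMER"
        ScrapedContentTypeScene     ScrapedContentType = "SCENE"
    )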

* Simplify the scraper API

Introduce a new interface for scraping. This interface is then
used in the upper half of the scraper code, to make the code use one
code flow rather than multiple code flows. Variance is currently at
the old scraper structure.

Add extending interfaces for the different ways of invoking scrapes.
Use interface conversions to convert a scraper from the cache to a
scraper supporting the extra methods.
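
A sketch of the layered interfaces and the conversion, with
illustrative names:

    package scraper

    import (
        "context"
        "errors"
    )

    var ErrNotSupported = errors.New("scraping method not supported")

    // ScrapedContent stands in for models.ScrapedContent here.
    type ScrapedContent interface{ IsScrapedContent() }

    // scraper is the common interface the cache stores.
    type scraper interface {
        id() string
    }

    // urlScraper extends scraper with one way of invoking a scrape.
    type urlScraper interface {
        scraper
        loadByURL(ctx context.Context, url string) (ScrapedContent, error)
    }

    // scrapeURL converts a cached scraper to the richer interface.
    func scrapeURL(ctx context.Context, s scraper, url string) (ScrapedContent, error) {
        us, ok := s.(urlScraper)
        if !ok {
            return nil, ErrNotSupported
        }
        return us.loadByURL(ctx, url)
    }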

The return path returns models.ScrapedContent.

Write a general postProcess function in the scraper, handling all
ScrapedContent via type switching. This consolidates all postprocessing
code flows.
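
A sketch of the consolidated type switch:

    package scraper

    import "context"

    type ScrapedContent interface{ IsScrapedContent() }

    type ScrapedPerformer struct{ Name *string }
    type ScrapedScene struct{ Title *string }

    func (*ScrapedPerformer) IsScrapedContent() {}
    func (*ScrapedScene) IsScrapedContent()     {}

    // postProcess handles every content variant in one place.
    func postProcess(ctx context.Context, content ScrapedContent) (ScrapedContent, error) {
        switch v := content.(type) {
        case *ScrapedPerformer:
            // performer-specific postprocessing would run here
            return v, nil
        case *ScrapedScene:
            // scene-specific postprocessing would run here
            return v, nil
        default:
            return content, nil
        }
    }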

Introduce marshallers in the resolver code for converting ScrapedContent
into the underlying concrete types. Use this to plug the existing
fields in the Query resolver, so everything still works.

* ScrapedContent: add more marshalling functions

Handle all marshalling of ScrapedContent through marshalling functions.
Removes some hand-rolled early variants of it, and replaces it with
a canonical code flow.
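
A sketch of one such marshalling function, with local stand-ins for
the models types:

    package api

    import "errors"

    type ScrapedContent interface{ IsScrapedContent() }

    type ScrapedPerformer struct{ Name *string }

    func (*ScrapedPerformer) IsScrapedContent() {}

    var ErrConversion = errors.New("conversion error")

    // marshalScrapedPerformer converts generic content back into the
    // concrete type the existing resolver fields expect.
    func marshalScrapedPerformer(content ScrapedContent) (*ScrapedPerformer, error) {
        if content == nil {
            return nil, nil // be graceful on nil values
        }
        p, ok := content.(*ScrapedPerformer)
        if !ok {
            return nil, ErrConversion
        }
        return p, nil
    }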

* Support loadByName via scraper_s

In order to temporarily plug a hole in the current implementation, we
use the older implementation as a hook to get the newer implementation
to run.

Later on, this can serve as a guide for how to implement the lower level
bits inside the scrapers themselves. For now, it just enables support.

* Plug the remaining scraper functions for now

Since we would like to have a scraper which works in between refactors,
plug the lower level parts of the scraper for now. It avoids us having
to tackle this part just yet.

* Move postprocessing to its own file

There's enough postprocessing to clutter the main scrapers.go file.

Move all of this into a new file, postprocessing, to make the API
simpler. The now-simpler API lives in scrapers.go.

* Scraper: Invoke API consistency

scraper.Cache.ScrapeByName -> ScrapeName

* Fix scraping scenes by URL

Simple typo. While here, also make a single marshaller nil-aware.

* Introduce scraper groups, consolidate loadByURL

Rename `scraper_s` to `group`. A group is a set of scrapers with the
same identity, corresponding to a single YAML file for a scraper
configuration. It defines a group which supports different types of
scraping contexts.

Move config into the group, and lift txnManager and globalConfig to
the group.

Because we now return models.ScrapedContent we can use interfaces to
get variance from the different underlying scrapers. Use a type
switch for the URL matcher candidates. And then again for the scrapers.

This consolidates all URL scraping paths into one.

While here, remove the urlMatcher interface which isn't needed. Also
clean up the remaining interfaces for url scraping and delete code
which has no purpose anymore.
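
A sketch of the group structure, with stand-in types:

    package scraper

    // Stand-ins for the real types.
    type config struct{ Name string }
    type GlobalConfig struct{}
    type txnManager interface{}

    // group is the set of scrapers defined by a single YAML scraper
    // configuration; config, txnManager, and globalConfig live here.
    type group struct {
        config       config
        txnManager   txnManager
        globalConfig GlobalConfig
    }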

* Consolidate fragment scraping in one code path

While here, abide by the linter's checks.

* Refactor loadByFragment

Give it the same treatment as loadByURL:

Step 1: find a scraperActionImpl which works for the data.
Step 2: use that to scrape

Most of this is simple analysis on the data at hand. It can be pushed
down further in a later commit, but for now we leave it here.

* Remove configScraper, autotag is a scraper

Remove the remains of the configScraper struct. It now lives on in the
group struct. Kill the remaining interfaces from the old implementation
while here.

Remove group.specification since it can now be handled by a simple
func call to spec().

Work through the autotag scraper. It now implements the scraper
interface, so it can be used as a scraper. This also simplifies the
autotag scraper quite a bit since it doesn't have to implement a number
of unsupported func calls.

* Simplify the fragment scraper flow

* Pass the context

Eliminate a round of context.TODO() in the scraper code by passing
the calling context down into the subsystem. This will gracefully
allow for termination of remote calls if the client goes away for some
reason in GraphQL requests.

* Improve listScrapers in the schema

Support lists of types we accept.

* Be graceful on nil values in conversion

Supporting nil-values makes the API more robust in the
case of partial results in a multi-scrape situation.

* Improve listScrapers: output at-most-once

Use the ID of a scraper to reduce the output set. If a scraper has
been included, don't include it again.
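
A sketch of the at-most-once reduction:

    package scraper

    type spec struct{ ID string }

    // dedupe includes each scraper at most once, keyed by ID.
    func dedupe(in []spec) []spec {
        seen := make(map[string]bool)
        out := make([]spec, 0, len(in))
        for _, s := range in {
            if seen[s.ID] {
                continue
            }
            seen[s.ID] = true
            out = append(out, s)
        }
        return out
    }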

* Consolidate all API level errors into resolver.go
* Reorder files and functions:

scrapers.go -> cache.go:
    It now contains almost nothing but the cache code.
    Move errors into scraper.go from here, because
    that is a better place for them to live right now.
group.go:
    All of the group structure. This can now move out of
    scraper.go, making it leaner. Move group creation
    from config_scraper to here.
config.go:
    Move the `(c config) spec()` call to here.
config_scraper.go:
    Now an empty file.

* Name-update the scraper interfaces

Use 'via' rather than 'loadBy'.

The scrape happens via a given scrape method, so I think this is a nice
name for it.

* Rename scrapers for consistency.

While here, improve the error formatting, so different errors come
back differently.

* Nuke the freeones field from the GraphQL schema

* Fix autotag interfacing, refactor

The autotag scraper uses a pointer receiver, but the rest of the code
we use for scraping doesn't expect a pointer-receiver. Hence, to fix
the autotag scraper, we change it to be a value receiver, like the
rest of the code.

Fix: viaScene, and viaGallery.

While here, remove a couple of pointer-receiver methods which can be
trivially rewritten into plain functions.

* Protect against pointer interfaces

The underlying code can be a bit inconsistent in what it returns.
Introduce pointer-types in the postprocessing layer and handle them
accordingly for now. Once the lower levels are better understood, we
can lift this.

* Move ErrConversion into the models package.

The conversion error pertains to the logic of converting models.
Because of this, it should move there, so it is centralized.

* Be consistent in scraper resolver error handling

If we have a static error

    Err = errors.New(..)

Then use it wrapped at the start:

    fmt.Errorf("%w: ...context...", Err)

This reads better.

While here, avoid using the underlying Atoi errors: they are verbose,
and 99% of the time the user knows what is wrong from the input
string, so just give that back.

Also, remove the scraper id from the error contexts: it is implicit,
and the error wouldn't change if we used a different scraper, which
the error message would imply.

* Mark the list*Scrapers() API as deprecated

The same functionality is now present in listScrapers.

* Improve error formatting

Think about how each error is going to be used and tweak them to be
nicer.

* Return a sorted list of scrapers

This helps testing: it's closer to what we had, caches like stable data,
and it is easier for humans. It also makes the output stable, because
map iteration is randomized.

* Fix listScrapers calls to return in ID-order

Since we need the ordering to be by ID in all situations, it is easier
to just generalize the cache listScrapers call to support multiple
scraper types.

This avoids a de-dupe map up the chain, since every scraper is only
considered once. Sorting now happens in the cache listScrapers call.

Use this generalized function in all resolvers, which are now simple
passthroughs.
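
A sketch of the generalized call, with illustrative types:

    package scraper

    import "sort"

    type Kind int

    type scraper struct {
        id    string
        kinds map[Kind]bool
    }

    type Cache struct {
        scrapers map[string]scraper
    }

    // ListScrapers returns every scraper supporting any requested kind,
    // sorted by ID so the output is stable despite randomized map
    // iteration.
    func (c Cache) ListScrapers(kinds []Kind) []scraper {
        var out []scraper
        for _, s := range c.scrapers {
            for _, k := range kinds {
                if s.kinds[k] {
                    out = append(out, s)
                    break
                }
            }
        }
        sort.Slice(out, func(i, j int) bool { return out[i].id < out[j].id })
        return out
    }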

* Remove UpdateConfig from the scraper cache.

This isn't needed, so get rid of it.

* Pull a context into identify

Scene scraping in the identify task now uses a context from up the
call chain.

* Do not store the scraper cache in the resolver.

Scraper caches are updated through
manager.singleton.RefreshScraperCache, so we can't keep a pointer to
it in the resolver. Instead, solve this by adding a fetcher method to
the resolver type. This keeps it local to the resolver, while handling
the problem of updating caches in the configuration.
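
A sketch of the fetcher approach, with hypothetical stand-ins for the
manager singleton:

    package api

    // Hypothetical stand-ins for the manager singleton and the cache.
    type Cache struct{}

    type instance struct{ scraperCache *Cache }

    var singleton = &instance{scraperCache: &Cache{}}

    func getInstance() *instance { return singleton }

    type Resolver struct{}

    // scraperCache fetches the current cache on every use, so a cache
    // rebuilt by RefreshScraperCache is always the one observed.
    func (r *Resolver) scraperCache() *Cache {
        return getInstance().scraperCache
    }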
2021-11-19 10:55:34 +11:00
SmallCoccinelle e513b6ffa5
Cache and reuse the scraper HTTP client (#1855)
* Add Cookies directly to the request

Rather than maintaining a cookie jar on a one-shot HTTP client, maintain
the jar ourselves: make a new jar, then use it to select the right
cookies.

The cookies are set on the request rather than on the client. This will
retain the current behavior as we are always throwing the client away
after each use.

This patch enables the lifting of the http client as well over time.
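
A sketch of attaching cookies per request:

    package scraper

    import (
        "net/http"
        "net/http/cookiejar"
        "net/url"
    )

    // setCookies builds a throwaway jar just to select the right
    // cookies for u, then sets them on the request itself rather than
    // on a long-lived client.
    func setCookies(req *http.Request, u *url.URL, cookies []*http.Cookie) error {
        jar, err := cookiejar.New(nil)
        if err != nil {
            return err
        }
        jar.SetCookies(u, cookies)
        for _, c := range jar.Cookies(u) {
            req.AddCookie(c)
        }
        return nil
    }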

* Introduce a cached scraper HTTP client

The scraper cache is augmented with an *http.Client. These are safe for
concurrent use, so the pointer can safely be passed around. Push this
into scraper configurations where applicable, next to the txnManagers.

When we issue a loadUrl request, do so on the cached *http.Client,
which will reuse existing idle connections in the client if any are
present.

* Set MaxIdleConnsPerHost. Closes #1850

We allow for up to 8 idle connections to a single host. This should
make concurrent operation toward the same host reuse connections, even
for sizeable concurrency.

The number isn't bumped excessively high. We should probably limit
concurrency toward a single site anyway, since we'll be able to overrun
a site with queries quite easily if we have many concurrent goroutines
issuing requests at the same time.
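
A sketch of the shared client; the timeout value is illustrative:

    package scraper

    import (
        "net/http"
        "time"
    )

    const maxIdleConnsPerHost = 8

    // newCacheClient builds the one client the scraper cache holds.
    // *http.Client is safe for concurrent use, so the pointer can be
    // passed into scraper configurations alongside the txnManagers.
    func newCacheClient() *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                MaxIdleConnsPerHost: maxIdleConnsPerHost,
            },
            Timeout: 60 * time.Second, // illustrative
        }
    }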

* Reinstate driverOptions / useCDP check

Use De Morgan's laws to invert the logic and exit early. This fixes
breaking tests.

* Documentation fixup.

* Use the scraper http.Client when fetching images

Fold image fetchers onto the cached scraper http.Client as well. This
gives the scraper a single http.Client cache for all its
operations.

Thread the client upwards to the relevant attachment points: either the
cache, or a stash_box instance, which is extended to include a pointer
to the client.

Style roughly follows that of txnManagers.

* Use the same http.Client as the GraphQL client uses

Rather than using http.DefaultClient, use the same client as the
GraphQL client uses in the stash_box subsystem. This localizes the
client used in the subsystem into the constructing New.. call.

* Hoist HTTP client construction

Create a function for initializing the HTTP client we use. While here,
hoist magic numbers into constants. Introduce a proper static redirect
error and use it in the client code as well.
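
A sketch of the hoisted constructor with a static redirect error; the
redirect limit is an illustrative constant:

    package scraper

    import (
        "errors"
        "net/http"
    )

    const maxRedirects = 20 // hoisted from a magic number; illustrative

    var errMaxRedirects = errors.New("stopped after too many redirects")

    func newHTTPClient() *http.Client {
        return &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                if len(via) >= maxRedirects {
                    return errMaxRedirects
                }
                return nil
            },
        }
    }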

* Reinstate printCookies

This is a debugging function, and it might still come in handy in the
future at some point.

* Nitpick comment.

* Minor tidy

Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
2021-10-20 16:12:24 +11:00
kermieisinthehouse 5ec70ac3e0
Fix List filter styles, fix freeones spam (#1853)
* Fix List filter styles, fix freeones spam
2021-10-15 14:02:49 +11:00
WithoutPants e9d48683f8
Autotag scraper (#1817)
* Refactor scraper structures
* Move matching code into new package
* Add autotag scraper
* Always check first letter of auto-tag names
* Account for nulls

Co-authored-by: Kermie <kermie@isinthe.house>
2021-10-11 23:06:06 +11:00
julien0221 d673c4ce03
added details, deathdate, hair color, weight to performers and added details to studios (#1274)
* added details to performers and studios
* added deathdate, hair_color and weight to performers
* Simplify performer/studio create mutations
* Add changelog and recategorised

Co-authored-by: WithoutPants <53250216+WithoutPants@users.noreply.github.com>
2021-04-16 16:06:35 +10:00
bnkai 4299f113e0
Fix Freeones search (#1230) 2021-03-25 10:01:56 +11:00
Belley 86bfb64a0d
Fix freeones scraper (#1091) 2021-02-01 08:15:50 +11:00
Belley 94392c7c4d
Fixing image for Freeones Scrapers (#930) 2020-11-07 09:36:26 +11:00
WithoutPants 70f73ecf4a
Update freeones scraper (#881) 2020-10-24 13:12:21 +11:00
WithoutPants 2b9215702e
Refactor xpath scraper code. Add fixed and map (#616)
* Refactor xpath scraper code
* Make post-process a list
* Add map post-process action
* Add fixed xpath values
* Refactor scrapers into cache
* Refactor into mapped config
* Trim test html
2020-07-21 14:06:25 +10:00
bnkai a7ac02fb50
freeones fixes (#615) 2020-06-17 11:02:06 +10:00
bnkai b89956de25
freeones scraper fixes/tweaking (#584) 2020-06-02 09:45:37 +10:00
WithoutPants 215c4e3bde
Change builtin freeones scraper to community yml (#542) 2020-05-15 20:10:20 +10:00
bnkai 0b50e83dbf
freeones scraper tweaks (#509) 2020-05-04 14:11:49 +10:00
WithoutPants 82201e23e0
Make ethnicity freetext and fix freeones ethnicity panic (#431)
* Make ethnicity free text

* Fix panic in freeones scraper for other ethnicity
2020-04-02 08:25:39 +11:00
WithoutPants 92837fe1f7 Add scene metadata scraping functionality (#236)
* Add scene scraping functionality

* Adapt to changed scraper config
2019-12-15 20:35:34 -05:00
WithoutPants 50784025f2 Change scraper config to yaml (#256) 2019-12-12 14:27:44 -05:00
WithoutPants 17247060b6 Generic performer scrapers (#203)
* Generalise scraper API

* Add script performer scraper

* Fixes from testing

* Add context to scrapers and generalise

* Add scraping performer from URL

* Add error handling

* Move log to debug

* Add supported scrape types
2019-11-18 21:49:05 -05:00
bill 9f6888a3d6 fix freeones scraper bugs 2019-10-16 02:05:49 +03:00
Friendly C 7c94262020 Freeones Scrape: Fix scraping by alias 2019-10-10 23:56:06 +02:00
bnkai bcc70af7e5 Fix minor freeones scraper bug (#41)
Fix minor freeones scraper bug
2019-04-11 11:54:38 -07:00
Stash Dev b488c1ed7d Reorg 2019-02-14 15:42:52 -08:00