stash/pkg/scraper/scraper.go

// Package scraper provides interfaces to interact with the scraper subsystem.
// The [Cache] type is the main entry point to the scraper subsystem.
package scraper

import (
	"context"
	"errors"
	"fmt"
	"io"
Refactor scraper top half (#1893) * Simplify scraper listing Introduce an enum, scraper.Kind, which explains what we are looking for. Make it possible to match this from a scraper struct. Use the enum to rewrite all the listing code to use the same code path. * Use a map, nitpick ScrapePerformerList Let the cache store a map from ID of a scraper to the scraper. This improves lookups when there are many scrapers, making it practically O(1) rather than O(n). If many scrapers are stored, this is faster. Since range expressions work unchanged, we don't have to change much, and things will still work. make Kind a Stringer Rename ScraperPerformerList -> ScraperPerformerQuery since that name is used in the other scrapers, and we value consistency. Tune ScraperPerformerQuery: * Return static errors * Use the new functionality * When loading scrapers, do so directly Rather than first walking the directory structure to obtain file paths, fold the load directly in the the filepath walk. This makes the code for more direct. * Use static ErrNotFound If a scraper isn't found, return one static error. This paves the way for eventually doing our own error-presenter in gqlgen. * Store the cache in the Resolver state Putting the scraperCache directly in the resolver avoids the need to call manager.GetInstance() all over the place to get access to the scraper cache. The cache is stored by pointer, so it should be safe, since the cache will just update its internal state rather than being overwritten. We can now utilize the resolver state to grab the cache where needed. While here, pass context.Context from the resolver down into a function, which removes a context.TODO() * Introduce ScrapedContent Create a union in the GraphQL schema for all scraped content. This simplifies the internal implementation because we get variance on the output content type. Introduce a new type ScrapedContentType which signifies the scraped content you want as a caller. Use these to generalize the List interface and the URL scraping interface. * Simplify the scraper API Introduce a new interface for scraping. This interface is then used in the upper half of the scraper code, to make the code use one code flow rather than multiple code flows. Variance is currently at the old scraper structure. Add extending interfaces for the different ways of invoking scrapes. Use interface conversions to convert a scraper from the cache to a scraper supporting the extra methods. The return path returns models.ScrapedContent. Write a general postProcess function in the scraper, handling all ScrapedContent via type switching. This consolidates all postprocessing code flows. Introduce marhsallers in the resolver code for converting ScrapedContent into the underlying concrete types. Use this to plug the existing fields in the Query resolver, so everything still works. * ScrapedContent: add more marshalling functions Handle all marshalling of ScrapedContent through marhsalling functions. Removes some hand-rolled early variants of it, and replaces it with a canonical code flow. * Support loadByName via scraper_s In order to temporarily plug a hole in the current implementation, we use the older implementation as a hook to get the newer implementation to run. Later on, this can serve as a guide for how to implement the lower level bits inside the scrapers themselves. For now, it just enables support. 
* Plug the remaining scraper functions for now Since we would like to have a scraper which works in between refactors, plug the lower level parts of the scraper for now. It avoids us having to tackle this part just yet. * Move postprocessing to its own file There's enough postprocessing to clutter the main scrapers.go file. Move all of this into a new file, postprocessing to make the API simpler. It now lives in scrapers.go. * Scraper: Invoke API consistency scraper.Cache.ScrapeByName -> ScrapeName * Fix scraping scenes by URL Simple typo. While here, also make a single marshaller nil-aware. * Introduce scraper groups, consolidate loadByURL Rename `scraper_s` into `group`. A group is a group of scrapers with the same identity. This corresponds to a single YAML file for a scraper configuration. It defines a group which supports different types of scraping contexts. Move config into the group, and lift txnManager and globalConfig to the group. Because we now return models.ScrapedContent we can use interfaces to get variance from the different underlying scrapers. Use a type switch for the URL matcher candidates. And then again for the scrapers. This consolidates all URL scraping paths into one. While here, remove the urlMatcher interface which isn't needed. Also clean up the remaining interfaces for url scraping and delete code which has no purpose anymore. * Consolidate fragment scraping in one code path While here, abide the linters checks. * Refactor loadByFragment Give it the same treatment as loadByURL: Step 1: find a scraperActionImpl which works for the data. Step 2: use that to scrape Most of this is simple analysis on the data at hand. It can be pushed down further in a later commit, but for now we leave it here. * Remove configScraper, autotag is a scraper Remove the remains of the configScraper struct. It now lives on in the group struct. Kill the remaining interfaces from the old implementation while here. Remove group.specification since it can now be handled by a simple func call to spec(). Work through the autotag scraper. It now implements the scraper interface, so it can be used as a scraper. This also simplifies the autotag scraper quite a bit since it doens't have to implement a number of unsupported func calls. * Simplify the fragment scraper flow * Pass the context Eliminate a round of context.TODO() in the scraper code by passing the calling context down into the subsystem. This will gracefully allow for termination of remote calls if the client goes away for some reason in GraphQL requests. * Improve listScrapers in the schema Support lists of types we accept. * Be graceful on nil values in conversion Supporting nil-values make the API more robust in the case of partial results in a multi-scrape situation. * Improve listScrapers: output at-most-once Use the ID of a scraper to reduce the output set. If a scraper has been included, don't include it again. * Consolidate all API level errors into resolver.go * Reorder files and functions: scrapers.go -> cache.go: It almost contains nothing but the cache code. Move errors into scraper.go from here because It is a better place to have them living right now group.go: All of the group structure. This can now go from scraper.go, making it more lean. Move group create from config_scraper to here. config.go: Move the `(c config) spec()` call to here. config_scraper.go: Empty file by now * Name-update the scraper interfaces Use 'via' rather than 'loadBy'. 
The scrape happens via a given scrape method, so I think this is a nice name for it. * Rename scrapers for consistency. While here, improve the error formatting, so different errors come back differently. * Nuke the freeones field from the GraphQL schema * Fix autotag interfacing, refactor The autotag scraper uses a pointer receiver, but the rest of the code we use for scraping doesn't expect a pointer-receiver. Hence, to fix the autotag scraper, we change it to be a value receiver, like the rest of the code. Fix: viaScene, and viaGallery. While here, remove a couple of pointer-receiver methods which can be trivially rewritten into plain functions. * Protect against pointer interfaces The underlying code can be a bit inconsistent in what it returns. Introduce pointer-types in the postprocessing layer and handle them accordingly for now. Once a better understanding of the lower levels are understood, we can lift this. * Move ErrConversion into the models package. The conversion error pertains to the logic of converting models. Because of this, it should move there, so it is centralized. * Be consistent in scraper resolver error handling If we have a static error Err = errors.New(..) Then use it wrapped at the start: fmt.Errorf("%w: ...context...", Err) This reads better. While here, avoid using the underlying Atoi errors: they are verbose, and like 99% of the time, the user know what is wrong from the input string, so just give that back. Also, remove the scraper id from the error contexts: it is implicit, and the error wouldn't change if we used a different scraper, which the error message would imply. * Mark the list*Scrapers() API as deprecated The same functionality is now present in listScrapers. * Improve error formatting Think about how each error is going to be used and tweak them to be nicer. * Return a sorted list of scrapers This helps testing, it's closer to what we had, caches like stable data, and it is easier for humans. It also makes the output stable, because map iteration is randomized. * Fix listScrapers calls to return in ID-order Since we need the ordering to be by ID in all situations, it is easier to just generalize the cache listScrapers call to support multiple scraper types. This avoids a de-dupe map up the chain, since every scraper is only considered once. Sorting now happens in the cache listScrapers call. Use this generalized function in all resolvers, which are now simple passthroughs. * Remove UpdateConfig from the scraper cache. This isn't needed, so get rid of it. * Pull a context into identify Scraping scenes in the identify tasks now use a context from up the call chain. * Do not store the scraper cache in the resolver. Scraper caches are updated through manager.singleton•RefreshScraperCache, so we can't keep a pointer to it in the resolver. Instead, solve this by adding a fetcher method to the resolver type. This keeps it local to the resolver, while handling the problem of updating caches in the configuration.
2021-11-18 23:55:34 +00:00
"net/http"
"strconv"
"github.com/stashapp/stash/pkg/models"
)

type Source struct {
	// Index of the configured stash-box instance to use. Should be unset if scraper_id is set.
	StashBoxIndex *int `json:"stash_box_index"`
	// Stash-box endpoint
	StashBoxEndpoint *string `json:"stash_box_endpoint"`
	// Scraper ID to scrape with. Should be unset if stash_box_index is set.
	ScraperID *string `json:"scraper_id"`
}
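
// Illustrative sketch, not part of the original file: the field comments
// above say stash_box_index and scraper_id are mutually exclusive. A
// validation helper for that invariant could look roughly like this; the
// method name validate is hypothetical.
func (s Source) validate() error {
	// The comments above only promise mutual exclusion, so that is all
	// this sketch checks.
	if s.StashBoxIndex != nil && s.ScraperID != nil {
		return errors.New("stash_box_index and scraper_id are mutually exclusive")
	}
	return nil
}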

// ScrapedContent is the union type over the different kinds of content the
// scrapers can produce.
type ScrapedContent interface {
	IsScrapedContent()
}

// ScrapeContentType is the type of content a scraper generates.
type ScrapeContentType string

const (
	ScrapeContentTypeGallery   ScrapeContentType = "GALLERY"
	ScrapeContentTypeMovie     ScrapeContentType = "MOVIE"
	ScrapeContentTypeGroup     ScrapeContentType = "GROUP"
	ScrapeContentTypePerformer ScrapeContentType = "PERFORMER"
	ScrapeContentTypeScene     ScrapeContentType = "SCENE"
)

var AllScrapeContentType = []ScrapeContentType{
	ScrapeContentTypeGallery,
	ScrapeContentTypeMovie,
	ScrapeContentTypeGroup,
	ScrapeContentTypePerformer,
	ScrapeContentTypeScene,
}

func (e ScrapeContentType) IsValid() bool {
	switch e {
	case ScrapeContentTypeGallery, ScrapeContentTypeMovie, ScrapeContentTypeGroup, ScrapeContentTypePerformer, ScrapeContentTypeScene:
		return true
	}
	return false
}

func (e ScrapeContentType) String() string {
	return string(e)
}

func (e *ScrapeContentType) UnmarshalGQL(v interface{}) error {
	str, ok := v.(string)
	if !ok {
		return errors.New("enums must be strings")
	}
	*e = ScrapeContentType(str)
	if !e.IsValid() {
		return fmt.Errorf("%s is not a valid ScrapeContentType", str)
	}
	return nil
}

func (e ScrapeContentType) MarshalGQL(w io.Writer) {
	fmt.Fprint(w, strconv.Quote(e.String()))
}
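
// Illustrative sketch, not part of the original file: the enum round-trips
// through its gqlgen marshalling hooks. MarshalGQL writes the quoted string
// form; UnmarshalGQL parses and validates incoming values. The function
// name exampleScrapeContentTypeRoundTrip is hypothetical.
func exampleScrapeContentTypeRoundTrip() error {
	// MarshalGQL writes `"SCENE"`; io.Discard is enough to show the call shape.
	ScrapeContentTypeScene.MarshalGQL(io.Discard)

	var ct ScrapeContentType
	if err := ct.UnmarshalGQL("SCENE"); err != nil {
		return err // not reached: "SCENE" is a valid value
	}
	if err := ct.UnmarshalGQL("BOGUS"); err == nil {
		return errors.New("expected the invalid enum value to be rejected")
	}
	return nil
}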

type Scraper struct {
	ID   string `json:"id"`
	Name string `json:"name"`
	// Details for performer scraper
	Performer *ScraperSpec `json:"performer"`
	// Details for scene scraper
	Scene *ScraperSpec `json:"scene"`
	// Details for gallery scraper
	Gallery *ScraperSpec `json:"gallery"`
	// Details for group scraper
	Group *ScraperSpec `json:"group"`
	// Details for movie scraper
	Movie *ScraperSpec `json:"movie"`
}

type ScraperSpec struct {
	// URLs matching these patterns can be scraped with this scraper.
	Urls             []string     `json:"urls"`
	SupportedScrapes []ScrapeType `json:"supported_scrapes"`
}

// ScrapeType is the way in which a scrape is invoked.
type ScrapeType string

const (
	// From text query
	ScrapeTypeName ScrapeType = "NAME"
	// From existing object
	ScrapeTypeFragment ScrapeType = "FRAGMENT"
	// From URL
	ScrapeTypeURL ScrapeType = "URL"
)

var AllScrapeType = []ScrapeType{
	ScrapeTypeName,
	ScrapeTypeFragment,
	ScrapeTypeURL,
}

func (e ScrapeType) IsValid() bool {
	switch e {
	case ScrapeTypeName, ScrapeTypeFragment, ScrapeTypeURL:
		return true
	}
	return false
}

func (e ScrapeType) String() string {
	return string(e)
}

func (e *ScrapeType) UnmarshalGQL(v interface{}) error {
	str, ok := v.(string)
	if !ok {
		return errors.New("enums must be strings")
	}
	*e = ScrapeType(str)
	if !e.IsValid() {
		return fmt.Errorf("%s is not a valid ScrapeType", str)
	}
	return nil
}

func (e ScrapeType) MarshalGQL(w io.Writer) {
	fmt.Fprint(w, strconv.Quote(e.String()))
}

var (
	// ErrMaxRedirects is returned if the maximum number of HTTP redirects is reached.
	ErrMaxRedirects = errors.New("maximum number of HTTP redirects reached")
	// ErrNotFound is returned when a scraper isn't found.
	ErrNotFound = errors.New("scraper not found")
	// ErrNotSupported is returned when a given invocation isn't supported, even
	// though a guard function should have been able to guard against it.
	ErrNotSupported = errors.New("scraper operation not supported")
)
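
// Illustrative sketch, not part of the original file: per this package's
// conventions, callers wrap these static errors at the start with
// fmt.Errorf("%w: ...context...", Err) and test for them with errors.Is.
// The function name exampleWrapNotFound is hypothetical.
func exampleWrapNotFound(id string) error {
	err := fmt.Errorf("%w: id %q", ErrNotFound, id)
	if !errors.Is(err, ErrNotFound) {
		panic("unreachable: a %w-wrapped error always matches its target")
	}
	return err
}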

// Input coalesces inputs of different types into a single structure.
// The system expects exactly one of these fields to be set, and the
// remaining fields to be nil.
type Input struct {
	Performer *ScrapedPerformerInput
	Scene     *ScrapedSceneInput
	Gallery   *ScrapedGalleryInput
}

// populateURL populates the URL field of the input from the first entry in
// the URLs field. It does nothing if the URL field is already set or if
// URLs is empty.
func (i *Input) populateURL() {
	if i.Scene != nil && i.Scene.URL == nil && len(i.Scene.URLs) > 0 {
		i.Scene.URL = &i.Scene.URLs[0]
	}
	if i.Gallery != nil && i.Gallery.URL == nil && len(i.Gallery.URLs) > 0 {
		i.Gallery.URL = &i.Gallery.URLs[0]
	}
	if i.Performer != nil && i.Performer.URL == nil && len(i.Performer.URLs) > 0 {
		i.Performer.URL = &i.Performer.URLs[0]
	}
}
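
// Illustrative sketch, not part of the original file: populateURL keeps the
// legacy singular URL field usable when a caller only supplies the URLs
// list. The first entry wins, and an already-set URL is left untouched.
// The ScrapedSceneInput literal assumes only the fields used by populateURL.
func examplePopulateURL() {
	in := Input{Scene: &ScrapedSceneInput{
		URLs: []string{"https://example.org/scene/1", "https://example.org/scene/1?alt"},
	}}
	in.populateURL()
	// in.Scene.URL now points at "https://example.org/scene/1".
	_ = in.Scene.URL
}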

// QueryType is a simple type definition that can help customize
// actions per query.
type QueryType int

const (
	// For now, only SearchQuery is needed.
	SearchQuery QueryType = iota + 1
)

// scraper is the generic interface to the scraper subsystems.
type scraper interface {
	// spec returns the scraper specification, suitable for GraphQL.
	spec() Scraper
}
2021-11-18 23:55:34 +00:00
// supports tests if the scraper supports a given content type
supports(ScrapeContentType) bool
Refactor scraper top half (#1893) * Simplify scraper listing Introduce an enum, scraper.Kind, which explains what we are looking for. Make it possible to match this from a scraper struct. Use the enum to rewrite all the listing code to use the same code path. * Use a map, nitpick ScrapePerformerList Let the cache store a map from ID of a scraper to the scraper. This improves lookups when there are many scrapers, making it practically O(1) rather than O(n). If many scrapers are stored, this is faster. Since range expressions work unchanged, we don't have to change much, and things will still work. make Kind a Stringer Rename ScraperPerformerList -> ScraperPerformerQuery since that name is used in the other scrapers, and we value consistency. Tune ScraperPerformerQuery: * Return static errors * Use the new functionality * When loading scrapers, do so directly Rather than first walking the directory structure to obtain file paths, fold the load directly in the the filepath walk. This makes the code for more direct. * Use static ErrNotFound If a scraper isn't found, return one static error. This paves the way for eventually doing our own error-presenter in gqlgen. * Store the cache in the Resolver state Putting the scraperCache directly in the resolver avoids the need to call manager.GetInstance() all over the place to get access to the scraper cache. The cache is stored by pointer, so it should be safe, since the cache will just update its internal state rather than being overwritten. We can now utilize the resolver state to grab the cache where needed. While here, pass context.Context from the resolver down into a function, which removes a context.TODO() * Introduce ScrapedContent Create a union in the GraphQL schema for all scraped content. This simplifies the internal implementation because we get variance on the output content type. Introduce a new type ScrapedContentType which signifies the scraped content you want as a caller. Use these to generalize the List interface and the URL scraping interface. * Simplify the scraper API Introduce a new interface for scraping. This interface is then used in the upper half of the scraper code, to make the code use one code flow rather than multiple code flows. Variance is currently at the old scraper structure. Add extending interfaces for the different ways of invoking scrapes. Use interface conversions to convert a scraper from the cache to a scraper supporting the extra methods. The return path returns models.ScrapedContent. Write a general postProcess function in the scraper, handling all ScrapedContent via type switching. This consolidates all postprocessing code flows. Introduce marhsallers in the resolver code for converting ScrapedContent into the underlying concrete types. Use this to plug the existing fields in the Query resolver, so everything still works. * ScrapedContent: add more marshalling functions Handle all marshalling of ScrapedContent through marhsalling functions. Removes some hand-rolled early variants of it, and replaces it with a canonical code flow. * Support loadByName via scraper_s In order to temporarily plug a hole in the current implementation, we use the older implementation as a hook to get the newer implementation to run. Later on, this can serve as a guide for how to implement the lower level bits inside the scrapers themselves. For now, it just enables support. 
* Plug the remaining scraper functions for now Since we would like to have a scraper which works in between refactors, plug the lower level parts of the scraper for now. It avoids us having to tackle this part just yet. * Move postprocessing to its own file There's enough postprocessing to clutter the main scrapers.go file. Move all of this into a new file, postprocessing to make the API simpler. It now lives in scrapers.go. * Scraper: Invoke API consistency scraper.Cache.ScrapeByName -> ScrapeName * Fix scraping scenes by URL Simple typo. While here, also make a single marshaller nil-aware. * Introduce scraper groups, consolidate loadByURL Rename `scraper_s` into `group`. A group is a group of scrapers with the same identity. This corresponds to a single YAML file for a scraper configuration. It defines a group which supports different types of scraping contexts. Move config into the group, and lift txnManager and globalConfig to the group. Because we now return models.ScrapedContent we can use interfaces to get variance from the different underlying scrapers. Use a type switch for the URL matcher candidates. And then again for the scrapers. This consolidates all URL scraping paths into one. While here, remove the urlMatcher interface which isn't needed. Also clean up the remaining interfaces for url scraping and delete code which has no purpose anymore. * Consolidate fragment scraping in one code path While here, abide the linters checks. * Refactor loadByFragment Give it the same treatment as loadByURL: Step 1: find a scraperActionImpl which works for the data. Step 2: use that to scrape Most of this is simple analysis on the data at hand. It can be pushed down further in a later commit, but for now we leave it here. * Remove configScraper, autotag is a scraper Remove the remains of the configScraper struct. It now lives on in the group struct. Kill the remaining interfaces from the old implementation while here. Remove group.specification since it can now be handled by a simple func call to spec(). Work through the autotag scraper. It now implements the scraper interface, so it can be used as a scraper. This also simplifies the autotag scraper quite a bit since it doens't have to implement a number of unsupported func calls. * Simplify the fragment scraper flow * Pass the context Eliminate a round of context.TODO() in the scraper code by passing the calling context down into the subsystem. This will gracefully allow for termination of remote calls if the client goes away for some reason in GraphQL requests. * Improve listScrapers in the schema Support lists of types we accept. * Be graceful on nil values in conversion Supporting nil-values make the API more robust in the case of partial results in a multi-scrape situation. * Improve listScrapers: output at-most-once Use the ID of a scraper to reduce the output set. If a scraper has been included, don't include it again. * Consolidate all API level errors into resolver.go * Reorder files and functions: scrapers.go -> cache.go: It almost contains nothing but the cache code. Move errors into scraper.go from here because It is a better place to have them living right now group.go: All of the group structure. This can now go from scraper.go, making it more lean. Move group create from config_scraper to here. config.go: Move the `(c config) spec()` call to here. config_scraper.go: Empty file by now * Name-update the scraper interfaces Use 'via' rather than 'loadBy'. 
The scrape happens via a given scrape method, so I think this is a nice name for it. * Rename scrapers for consistency. While here, improve the error formatting, so different errors come back differently. * Nuke the freeones field from the GraphQL schema * Fix autotag interfacing, refactor The autotag scraper uses a pointer receiver, but the rest of the code we use for scraping doesn't expect a pointer-receiver. Hence, to fix the autotag scraper, we change it to be a value receiver, like the rest of the code. Fix: viaScene, and viaGallery. While here, remove a couple of pointer-receiver methods which can be trivially rewritten into plain functions. * Protect against pointer interfaces The underlying code can be a bit inconsistent in what it returns. Introduce pointer-types in the postprocessing layer and handle them accordingly for now. Once a better understanding of the lower levels are understood, we can lift this. * Move ErrConversion into the models package. The conversion error pertains to the logic of converting models. Because of this, it should move there, so it is centralized. * Be consistent in scraper resolver error handling If we have a static error Err = errors.New(..) Then use it wrapped at the start: fmt.Errorf("%w: ...context...", Err) This reads better. While here, avoid using the underlying Atoi errors: they are verbose, and like 99% of the time, the user know what is wrong from the input string, so just give that back. Also, remove the scraper id from the error contexts: it is implicit, and the error wouldn't change if we used a different scraper, which the error message would imply. * Mark the list*Scrapers() API as deprecated The same functionality is now present in listScrapers. * Improve error formatting Think about how each error is going to be used and tweak them to be nicer. * Return a sorted list of scrapers This helps testing, it's closer to what we had, caches like stable data, and it is easier for humans. It also makes the output stable, because map iteration is randomized. * Fix listScrapers calls to return in ID-order Since we need the ordering to be by ID in all situations, it is easier to just generalize the cache listScrapers call to support multiple scraper types. This avoids a de-dupe map up the chain, since every scraper is only considered once. Sorting now happens in the cache listScrapers call. Use this generalized function in all resolvers, which are now simple passthroughs. * Remove UpdateConfig from the scraper cache. This isn't needed, so get rid of it. * Pull a context into identify Scraping scenes in the identify tasks now use a context from up the call chain. * Do not store the scraper cache in the resolver. Scraper caches are updated through manager.singleton•RefreshScraperCache, so we can't keep a pointer to it in the resolver. Instead, solve this by adding a fetcher method to the resolver type. This keeps it local to the resolver, while handling the problem of updating caches in the configuration.
2021-11-18 23:55:34 +00:00
// supportsURL tests if the scraper supports scrapes of a given url, producing a given content type
supportsURL(url string, ty ScrapeContentType) bool
}
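
// Hedged usage sketch (illustrative, not part of this file): a caller such
// as the cache's listScrapers can use supports to filter scrapers by the
// content type a client asked for. The function name and variables below
// are assumptions for illustration.
//
//	func specsSupporting(scrapers []scraper, ty ScrapeContentType) []Scraper {
//		var specs []Scraper
//		for _, s := range scrapers {
//			if s.supports(ty) {
//				specs = append(specs, s.spec())
//			}
//		}
//		return specs
//	}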

// urlScraper is the interface of scrapers supporting url loads
type urlScraper interface {
	scraper

	viaURL(ctx context.Context, client *http.Client, url string, ty ScrapeContentType) (ScrapedContent, error)
}
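
// Hedged usage sketch (illustrative, not part of this file): URL scraping
// converts a generic scraper to urlScraper and guards on supportsURL first,
// so all URL scrapes share one code path. The function name and error text
// are assumptions.
//
//	func scrapeViaURL(ctx context.Context, client *http.Client, s scraper, url string, ty ScrapeContentType) (ScrapedContent, error) {
//		us, ok := s.(urlScraper)
//		if !ok || !s.supportsURL(url, ty) {
//			return nil, fmt.Errorf("scraper does not support url scraping for %v", ty)
//		}
//		return us.viaURL(ctx, client, url, ty)
//	}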

// nameScraper is the interface of scrapers supporting name loads
type nameScraper interface {
	scraper

	viaName(ctx context.Context, client *http.Client, name string, ty ScrapeContentType) ([]ScrapedContent, error)
}
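
// Hedged usage sketch (illustrative, not part of this file): a name scrape
// is a query, so viaName returns a slice of candidates rather than a single
// result. ScrapeContentTypePerformer is assumed to be one of the content
// type constants.
//
//	ns, ok := s.(nameScraper)
//	if !ok {
//		return nil, fmt.Errorf("scraper does not support name queries")
//	}
//	// zero or more candidates can come back from a single query
//	candidates, err := ns.viaName(ctx, client, "performer name", ScrapeContentTypePerformer)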

// fragmentScraper is the interface of scrapers supporting fragment loads
type fragmentScraper interface {
	scraper

	viaFragment(ctx context.Context, client *http.Client, input Input) (ScrapedContent, error)
}
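
// Hedged usage sketch (illustrative, not part of this file): a fragment
// scrape carries pre-filled data in an Input value instead of a URL or a
// name; the interface conversion guard mirrors the other invocation styles.
//
//	fs, ok := s.(fragmentScraper)
//	if !ok {
//		return nil, fmt.Errorf("scraper does not support fragment input")
//	}
//	return fs.viaFragment(ctx, client, input)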
Refactor scraper top half (#1893) * Simplify scraper listing Introduce an enum, scraper.Kind, which explains what we are looking for. Make it possible to match this from a scraper struct. Use the enum to rewrite all the listing code to use the same code path. * Use a map, nitpick ScrapePerformerList Let the cache store a map from ID of a scraper to the scraper. This improves lookups when there are many scrapers, making it practically O(1) rather than O(n). If many scrapers are stored, this is faster. Since range expressions work unchanged, we don't have to change much, and things will still work. make Kind a Stringer Rename ScraperPerformerList -> ScraperPerformerQuery since that name is used in the other scrapers, and we value consistency. Tune ScraperPerformerQuery: * Return static errors * Use the new functionality * When loading scrapers, do so directly Rather than first walking the directory structure to obtain file paths, fold the load directly in the the filepath walk. This makes the code for more direct. * Use static ErrNotFound If a scraper isn't found, return one static error. This paves the way for eventually doing our own error-presenter in gqlgen. * Store the cache in the Resolver state Putting the scraperCache directly in the resolver avoids the need to call manager.GetInstance() all over the place to get access to the scraper cache. The cache is stored by pointer, so it should be safe, since the cache will just update its internal state rather than being overwritten. We can now utilize the resolver state to grab the cache where needed. While here, pass context.Context from the resolver down into a function, which removes a context.TODO() * Introduce ScrapedContent Create a union in the GraphQL schema for all scraped content. This simplifies the internal implementation because we get variance on the output content type. Introduce a new type ScrapedContentType which signifies the scraped content you want as a caller. Use these to generalize the List interface and the URL scraping interface. * Simplify the scraper API Introduce a new interface for scraping. This interface is then used in the upper half of the scraper code, to make the code use one code flow rather than multiple code flows. Variance is currently at the old scraper structure. Add extending interfaces for the different ways of invoking scrapes. Use interface conversions to convert a scraper from the cache to a scraper supporting the extra methods. The return path returns models.ScrapedContent. Write a general postProcess function in the scraper, handling all ScrapedContent via type switching. This consolidates all postprocessing code flows. Introduce marhsallers in the resolver code for converting ScrapedContent into the underlying concrete types. Use this to plug the existing fields in the Query resolver, so everything still works. * ScrapedContent: add more marshalling functions Handle all marshalling of ScrapedContent through marhsalling functions. Removes some hand-rolled early variants of it, and replaces it with a canonical code flow. * Support loadByName via scraper_s In order to temporarily plug a hole in the current implementation, we use the older implementation as a hook to get the newer implementation to run. Later on, this can serve as a guide for how to implement the lower level bits inside the scrapers themselves. For now, it just enables support. 
// sceneScraper is a scraper that supports scene scrapes, taking
// existing scene data as its input.
type sceneScraper interface {
	scraper
	viaScene(ctx context.Context, client *http.Client, scene *models.Scene) (*ScrapedScene, error)
}
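
// scrapeSceneVia is an illustrative sketch, not part of this package's
// API: it shows how a scraper retrieved from the cache is expected to be
// dispatched, by interface conversion, to the sceneScraper capability
// above. The helper name and the errSceneUnsupported sentinel are
// assumptions made for the example (and it assumes the errors package is
// imported here); the real dispatch lives elsewhere in the subsystem.
var errSceneUnsupported = errors.New("scraper does not support scene scrapes")

func scrapeSceneVia(ctx context.Context, client *http.Client, s scraper, scene *models.Scene) (*ScrapedScene, error) {
	// Interface conversion: only scrapers that implement sceneScraper
	// can service a scene-based scrape.
	ss, ok := s.(sceneScraper)
	if !ok {
		return nil, errSceneUnsupported
	}
	return ss.viaScene(ctx, client, scene)
}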
// galleryScraper is a scraper that supports gallery scrapes, taking
// existing gallery data as its input.
type galleryScraper interface {
	scraper
	viaGallery(ctx context.Context, client *http.Client, gallery *models.Gallery) (*ScrapedGallery, error)
}
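
// The gallery path mirrors the scene sketch above. Again illustrative
// only, with a hypothetical helper name and error sentinel.
var errGalleryUnsupported = errors.New("scraper does not support gallery scrapes")

func scrapeGalleryVia(ctx context.Context, client *http.Client, s scraper, gallery *models.Gallery) (*ScrapedGallery, error) {
	gs, ok := s.(galleryScraper)
	if !ok {
		return nil, errGalleryUnsupported
	}
	return gs.viaGallery(ctx, client, gallery)
}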