Version 507
commit b1b841cb11 (parent 441ce76184)

client.bat

@@ -1,8 +1,13 @@
@ECHO off

pushd "%~dp0"

IF NOT EXIST "venv\" (

    SET /P gumpf=You need to set up a venv! Check the running from source help for more info!

    popd

    EXIT /B 1

)
@@ -12,6 +17,9 @@ CALL venv\Scripts\activate.bat

IF ERRORLEVEL 1 (

    SET /P gumpf=The venv failed to activate, stopping now!

    popd

    EXIT /B 1

)
@@ -24,3 +32,7 @@ start "" "pythonw" client.pyw

REM Here is an alternate line that will keep the console open and see live log updates. Useful for boot/live debugging.
REM python client.py

CALL venv\Scripts\deactivate.bat

popd
@@ -1,7 +1,10 @@

#!/bin/bash

pushd "$(dirname "$0")"

if [ ! -d "venv" ]; then
    echo "You need to set up a venv! Check the running from source help for more info!"
    popd
    exit 1
fi
@@ -9,6 +12,7 @@ source venv/bin/activate

if [ $? -ne 0 ]; then
    echo "The venv failed to activate, stopping now!"
    popd
    exit 1
fi
@@ -19,3 +23,5 @@ fi

python client.py

deactivate

popd
@@ -1,7 +1,10 @@

#!/bin/bash

pushd "$(dirname "$0")"

if [ ! -d "venv" ]; then
    echo "You need to set up a venv! Check the running from source help for more info!"
    popd
    exit 1
fi
@@ -9,6 +12,7 @@ source venv/bin/activate

if [ $? -ne 0 ]; then
    echo "The venv failed to activate, stopping now!"
    popd
    exit 1
fi
@@ -19,3 +23,5 @@ fi

python client.py

deactivate

popd
@@ -7,7 +7,48 @@ title: Changelog

!!! note
    This is the new changelog, only the most recent builds. For all versions, see the [old changelog](old_changelog.html).

## [Version 50](https://github.com/hydrusnetwork/hydrus/releases/tag/v506)
## [Version 507](https://github.com/hydrusnetwork/hydrus/releases/tag/v506)
### misc

* fixed an issue where you could set 'all known tags' in the media-tag exporter box in the sidecars system
* if a media-tag exporter in the sidecars system is set to an invalid (missing) tag service, the dialog now protests when you try to OK it. also, when you boot into this dialog, it will now moan about the invalid service. also, new media-tag exporters will always start with a valid local tag service.
* Qt import error states are handled better. when the client boots, the various 'could not find Qt' errors at different qtpy and QtCore import stages are now handled separately. the Qt selected by qtpy, if any, is reported, as is the state of QT_API and whether hydrus thought it was importable. it seems like there have been a couple of users caught by something like system-wide QT_API env variables here, which this should reveal better in boot-crash logs from now on
* all the new setup scripts in the base directory now push their location as the new CWD when they start, and they pop back to your original when they exit. you should be able to call them from anywhere now!
* I've written a 'setup_desktop.sh' install script for Linux users to 'install' a hydrus.desktop file for the current install location to your applications directory. thanks to the user who made the original hydrus.desktop file for the help here
* I fixed the focus when you open an 'edit predicate' panel that only has buttons, like 'has audio'/'no audio'. the top button should have focus again, so you can hit enter quickly
* added an updated link to hydownloader on the client api page
### dupes apply better to groups of thumbs

* tl;dr: when the user sets a 'copy both ways' duplicate file status on more than two thumbnails, the duplicate metadata merge options are applied better now
* advanced explanation: previously, all merge updates were calculated before applying the updates, so when applied to a group of interconnected relationships, the nodes that were not directly connected to each other were not syncing data. now, all merge updates are calculated and applied to each pair in turn, and then the whole batch is repeated once more, ensuring two-way transitivity. for instance, if you are set to copy tags in both directions and set 'A is the best' of three files 'ABC', and B has tag 'x' and C has 'y', then previously A would get 'x' and 'y', but B would not get 'y' and C would not get 'x'. now, A gets 'x' before the AC merge is calculated, so A and C get x, and then the whole operation is repeated, so when AB is re-calculated, B now gets 'y' from the updated A. same thing if you set to archive if either file is archived--now that archived status will propagate across the whole group in one action
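A minimal sketch of the two-pass pairwise merge described above (toy tag sets and hypothetical helper names, not the actual hydrus code):

```python
def merge_pair(a, b, tags):
    # copy tags both ways: after this, a and b both hold the union
    union = tags[a] | tags[b]
    tags[a] = set(union)
    tags[b] = set(union)

def sync_group(best, others, tags):
    # pass 1 merges each pair in turn; pass 2 repeats the whole batch,
    # so data picked up mid-way through pass 1 propagates to every member
    for _ in range(2):
        for other in others:
            merge_pair(best, other, tags)

tags = {'A': set(), 'B': {'x'}, 'C': {'y'}}
sync_group('A', ['B', 'C'], tags)
# A, B, and C all end up with {'x', 'y'}
```

With a single pass, B would never see C's 'y'; the repeat pass is what makes the whole group converge.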
### client api

* the new 'tags' structure in `/get_files/file_metadata` now has the 'all known tags' service's tags
* the 'file_services' structure in `/get_files/file_metadata` now states service name, type, and pretty type, like 'tags'
* `/get_services` now says the service `type` and `type_pretty`, like 'tags'. `/get_services` may be reformatted to a service_key key'd Object at some point, since it uses an old custom human-readable service type as Object key atm and I'd rather we move to the same labels and references for everything, but we'll see
* updated the client api help with more example result data for the above changes (and other stuff like 'all my files')
* updated the client api unit tests to deal with the above changes
* client api version is now 36
### server/janitor improvements

* I recommend server admins update their servers this week! everything old still works, but jannies who update have new abilities that won't work until you update
* the petition processing page now has an 'account id' text field. paste an account id in there, and you'll get the petition counts just for that account! the petitions requested will also only be for that account!
* if you get a 404 on a 'get petition' call (either due to another janitor clearing the last, or from a server count cache miscount), it no longer throws an error. instead, a popup appears for five seconds saying 'hey, there wasn't one after all, please hit refresh counts'
### boring server improvements

* refactored the account-fetching routine a little. some behind the scenes account identifier code, which determines an account from a mapping or file record, is now cleaner and more cleanly separated from the 'fetch account from account key' calls. account key is the master account identifier henceforth, and any content lookups will look up the account key and then do normal account lookup after. I will clean this further in the near future
* a new server call looks up the account key from a content object explicitly; this will get more use in future
* all the 'get number of x' server calls now support 'get number of x made by y' for account-specific counting. these numbers aren't cached, but should be fairly quick for janitorial purposes
* same deal for petitions, the server can now fetch petitions by a particular user, if any
* added/updated unit tests for these changes
* general server code cleanup

## [Version 506](https://github.com/hydrusnetwork/hydrus/releases/tag/v506)

### misc
@@ -433,39 +474,3 @@ _almost all the changes this week are only important to server admins and janitors_

* to match the new change in the server, in the client, tag and rating services now store their 'num_files' service info count as the new 'num_file_hashes'. existing numbers will be converted over during update
* fixed a probably ten year old bug where 'num pending/petitioned files' had the same enum as 'num pending/petitioned mappings'. never noticed, since no service has done both those things
* if the upload pending process fails due to an unusual permission error or similar, the pending menu should now recover and update itself (previously it stayed greyed out)
## [Version 497](https://github.com/hydrusnetwork/hydrus/releases/tag/v497)

### misc

* I bulked out the 'star' rating shape a bit more, since the new pentagram, while it looked better than my old 'by-eye' star, was a bit thin. if you prefer the pentagram, this is now selectable as a new shape type under manage services
* the Windows installer is now Qt6 exclusively. there are no special update instructions, it should all just work™
* the 'manage tag siblings/parents' dialogs now have explicit delete buttons, which should make mass-deletes a little easier to do. some of the background code is cleaned up too, and the 'add' button is moved up to the main button row
* you can now hide all sibling and/or parent text-suffix 'decorators' in the manage tags and autocomplete dropdown taglists, with four new checkboxes under _options->tags_. the right-click menus of these lists let you temporarily show/hide too, just like 'hide/show parent rows'
* when you change the namespace sort in the options, the existing collect-by dropdowns now update instantly (previously, existing pages needed a client restart to see any changes)
* I updated how the media viewer 'note' hover window lays out and does its 'how tall should I be?' estimate. it fits better, being exactly just tall enough in more cases, but it still seems to have trouble with multiple notes that include wrapping text
* added a link to the new flatpak release (easy Linux running-from-source setup) that a user made to the install help
* fixed an issue with the new 'default' file import options when you right-click a watcher/gallery download--the 'show files' menu now correctly adapts to you having a default file import options
* if you are set to elide page tab names, then all pages will tooltip their names on mouseover
* new clients now start with (ctrl+page up/down) as 'move page selection left/right'
### client api

* the Client API routine that fetches file statuses for a given URL no longer double-checks 'already in db' results against your actual file system. this check is more appropriate to an actual working import process, so it now defaults off in the Client API
* if you want to do this check because you are searching for missing files, you can turn it back on with the new 'doublecheck_file_system' parameter
* the client api help has been updated to reference this
* the client api's Server header is now "client api/32 (497)", NOT "client api/17". it was stating the hydrus network version erroneously. it now states client api version and software version. if you are able to parse this header, it makes the '/api_version' request superfluous
* the client api version is now 32
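If you do parse the new header, a minimal sketch might look like this (`parse_hydrus_server_header` is a hypothetical helper, not part of the Client API):

```python
import re

def parse_hydrus_server_header(value):
    # expects the post-497 format, e.g. 'client api/32 (497)';
    # returns (client_api_version, software_version) or None
    m = re.match(r'client api/(\d+) \((\d+)\)\Z', value)
    if m is None:
        return None
    return (int(m.group(1)), int(m.group(2)))

print(parse_hydrus_server_header('client api/32 (497)'))  # -> (32, 497)
```

Falling back to the `/api_version` endpoint when the parse fails keeps you compatible with older "client api/17"-style servers.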
### multiline parsing

* the parser now supports limited multiline parsing. the main changes are hardcoded: the formulae beneath note content parsers and those that do subsidiary page parser splitting no longer remove newlines when they parse. all the parsing UI and the test panels and so on are now aware of this and set flags in all the right places, and parsed notes are now washed through the new trimming/cleaning method, and everything _seems_ to basically work. the main remaining problem is that the complicated string processing UI has mixed single/multi-line testing support. some looks great, most gets coerced to single-line just for the previewed test results
* as an example, the default hentai foundry downloader now grabs the artist description as a multi-line note
* the parsing sub-system that extracts cohesive strings from complex html blocks now inserts newlines at 'p' and 'br' tags
* trying to parse clean multiline notes still caused several formatting issues this week, so I have updated the automatic note-washing routine to standardise hydrus notes in several new ways that I hope will not be too disruptive to manually written notes:
* the note washing routine now coerces all newline characters to 'backslash-n', regardless of platform
* the note washing routine now trims each line, so no leading or trailing whitespace anywhere. I am open to changing this in future, maybe for handwritten notes where you really want an indent somewhere, but parsing from complex nested html tags is making a heap of weird extra whitespace, for which this is a clean solution
* the note washing routine now trims newline gaps that are greater than two-newlines. you can split paragraphs by one empty line, but no more
* there may be other issues figuring out cleanly formatted strings from nested html tags--so give it a go and let me know what you think. maybe p and br blocks should always make two newlines, so we have separated paragraphs, maybe I need to parse more blocks, like h1 and friends. any specific example html blocks would also be helpful
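Roughly, the three washing rules above amount to something like the following (an approximation for illustration, not the actual hydrus routine):

```python
import re

def wash_note(text):
    # rule 1: coerce all newline styles to '\n', regardless of platform
    text = text.replace('\r\n', '\n').replace('\r', '\n')
    # rule 2: trim each line, so no leading or trailing whitespace anywhere
    text = '\n'.join(line.strip() for line in text.split('\n'))
    # rule 3: collapse runs of three or more newlines down to two,
    # i.e. at most one empty line between paragraphs
    text = re.sub(r'\n{3,}', '\n\n', text)
    return text.strip()

print(repr(wash_note('  a  \r\n\r\n\r\n\r\n  b ')))  # -> 'a\n\nb'
```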
### cleanup

* refactored ClientGUIParsing to its own 'parsing' module and split everything into four less tangled files
* cleaned up a bunch of taglist text presentation code, mostly simplicity and clarity in prep for future updates
* updated the checker options button to use a Qt signal instead of a callable
@@ -25,6 +25,7 @@ Once the API is running, go to its entry in _services->review services_. Each ex

* [Hydrus Companion](https://gitgud.io/prkc/hydrus-companion): a Chrome/Firefox extension for hydrus that allows easy download queueing as you browse and advanced login support
* [Hydrus Web](https://github.com/floogulinc/hydrus-web): a web client for hydrus (allows phone browsing of hydrus)
* [Hyshare](https://github.com/floogulinc/hyshare): a way to share small galleries with friends--a replacement for the old 'local booru' system
* [hydownloader](https://gitgud.io/thatfuckingbird/hydownloader): Hydrus-like download system based on gallery-dl.
* [LoliSnatcher](https://github.com/NO-ob/LoliSnatcher_Droid): a booru client for Android that can talk to hydrus
* [Anime Boxes](https://www.animebox.es/): a booru browser, now supports adding your client as a Hydrus Server
* [FlipFlip](https://ififfy.github.io/flipflip/#/): an advanced slideshow interface, now supports hydrus as a source
@@ -225,65 +225,94 @@ Response:

"local_tags" : [
  {
    "name" : "my tags",
    "service_key" : "6c6f63616c2074616773"
    "service_key" : "6c6f63616c2074616773",
    "type" : 5,
    "type_pretty" : "local tag service"
  },
  {
    "name" : "filenames",
    "service_key" : "231a2e992b67101318c410abb6e7d98b6e32050623f138ca93bd4ad2993de31b"
    "service_key" : "231a2e992b67101318c410abb6e7d98b6e32050623f138ca93bd4ad2993de31b",
    "type" : 5,
    "type_pretty" : "local tag service"
  }
],
"tag_repositories" : [
  {
    "name" : "PTR",
    "service_key" : "ccb0cf2f9e92c2eb5bd40986f72a339ef9497014a5fb8ce4cea6d6c9837877d9"
    "service_key" : "ccb0cf2f9e92c2eb5bd40986f72a339ef9497014a5fb8ce4cea6d6c9837877d9",
    "type" : 0,
    "type_pretty" : "hydrus tag repository"
  }
],
"file_repositories" : [
  {
    "name" : "kamehameha central",
    "service_key" : "89295dc26dae3ea7d395a1746a8fe2cb836b9472b97db48024bd05587f32ab0b",
    "type" : 1,
    "type_pretty" : "hydrus file repository"
  }
],
"local_files" : [
  {
    "name" : "my files",
    "service_key" : "6c6f63616c2066696c6573"
  }
],
"local_updates" : [
  {
    "name" : "repository updates",
    "service_key" : "7265706f7369746f72792075706461746573"
  }
],
"file_repositories" : [],
"all_local_files" : [
  {
    "name" : "all local files",
    "service_key" : "616c6c206c6f63616c2066696c6573"
    "service_key" : "6c6f63616c2066696c6573",
    "type" : 2,
    "type_pretty" : "local file domain"
  }
],
"all_local_media" : [
  {
    "name" : "all my files",
    "service_key" : "616c6c206c6f63616c206d65646961"
  }
],
"all_known_files" : [
  {
    "name" : "all known files",
    "service_key" : "616c6c206b6e6f776e2066696c6573"
  }
],
"all_known_tags" : [
  {
    "name" : "all known tags",
    "service_key" : "616c6c206b6e6f776e2074616773"
    "service_key" : "616c6c206c6f63616c206d65646961",
    "type" : 21,
    "type_pretty" : "virtual combined local media service"
  }
],
"trash" : [
  {
    "name" : "trash",
    "service_key" : "7472617368"
    "service_key" : "7472617368",
    "type" : 14,
    "type_pretty" : "local trash file domain"
  }
],
"local_updates" : [
  {
    "name" : "repository updates",
    "service_key" : "7265706f7369746f72792075706461746573",
    "type" : 20,
    "type_pretty" : "local update file domain"
  }
],
"all_local_files" : [
  {
    "name" : "all local files",
    "service_key" : "616c6c206c6f63616c2066696c6573",
    "type" : 15,
    "type_pretty" : "virtual combined local file service"
  }
],
"all_known_files" : [
  {
    "name" : "all known files",
    "service_key" : "616c6c206b6e6f776e2066696c6573",
    "type" : 11,
    "type_pretty" : "virtual combined file service"
  }
],
"all_known_tags" : [
  {
    "name" : "all known tags",
    "service_key" : "616c6c206b6e6f776e2074616773",
    "type" : 10,
    "type_pretty" : "virtual combined tag service"
  }
]
}
```
These services may be referred to in various metadata responses or required in request parameters (e.g. where to add tag mappings). Note that a user can rename their services. Much of this Client API uses this renameable 'service name' as service identifier, but I may start using service key, which is a non-mutable ID specific to each client. The hardcoded services have shorter service key strings (it is usually just 'all known files' etc. ASCII-converted to hex), but user-made stuff will have 64-character hex.
These services may be referred to in various metadata responses or required in request parameters (e.g. where to add tag mappings). Note that a user can rename their services. The older parts of the Client API use the renameable 'service name' as service identifier, but I wish to move away from this. Please use the hex 'service_key', which is a non-mutable ID specific to each client. The hardcoded services have shorter service key strings (it is usually just 'all known files' etc. ASCII-converted to hex), but user-made stuff will have 64-character hex.

Now that I state `type` and `type_pretty` here, I may rearrange this call, probably to make the `service_key` the Object key, rather than the arbitrary 'all_known_tags' strings.
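Since the hardcoded service keys are just ASCII names converted to hex, you can decode them as a sanity check. This `decode_service_key` helper is illustrative only, not part of the API:

```python
def decode_service_key(service_key_hex):
    # hardcoded service keys are short ASCII names hex-encoded;
    # user-made services have 32 random bytes (64 hex chars), which
    # will usually fail (or nonsense-succeed) here, so treat a clean
    # decode as a hint, not a guarantee
    raw = bytes.fromhex(service_key_hex)
    try:
        return raw.decode('ascii')
    except UnicodeDecodeError:
        return None

print(decode_service_key('616c6c206b6e6f776e2074616773'))  # -> all known tags
print(decode_service_key('6c6f63616c2074616773'))          # -> local tags
```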
## Adding Files

@@ -1537,6 +1566,13 @@ Response:

"type_pretty" : "hydrus tag repository",
"storage_tags" : {},
"display_tags" : {}
},
"616c6c206b6e6f776e2074616773" : {
  "name" : "all known tags",
  "type" : 10,
  "type_pretty" : "virtual combined tag service",
  "storage_tags" : {},
  "display_tags" : {}
}
}
},
@@ -1559,14 +1595,29 @@ Response:

"file_services" : {
  "current" : {
    "616c6c206c6f63616c2066696c6573" : {
      "name" : "all local files",
      "type" : 15,
      "type_pretty" : "virtual combined local file service",
      "time_imported" : 1641044491
    },
    "616c6c206c6f63616c206d65646961" : {
      "name" : "all my files",
      "type" : 21,
      "type_pretty" : "virtual combined local media service",
      "time_imported" : 1641044491
    },
    "cb072cffbd0340b67aec39e1953c074e7430c2ac831f8e78fb5dfbda6ec8dcbd" : {
      "name" : "cool space babes",
      "type" : 2,
      "type_pretty" : "local file domain",
      "time_imported" : 1641204220
    }
  },
"deleted" : {
  "6c6f63616c2066696c6573" : {
    "name" : "my files",
    "type" : 2,
    "type_pretty" : "local file domain",
    "time_deleted" : 1641204274,
    "time_imported" : 1641044491
  }
@@ -1598,6 +1649,10 @@ Response:

"37e3849bda234f53b0e9792a036d14d4f3a9a136d1cb939705dbcd5287941db4" : {
  "0" : ["blonde_hair", "blue_eyes", "looking_at_viewer"],
  "1" : ["bodysuit"]
},
"616c6c206b6e6f776e2074616773" : {
  "0" : ["samus favourites", "blonde_hair", "blue_eyes", "looking_at_viewer"],
  "1" : ["bodysuit"]
}
},
"service_keys_to_statuses_to_display_tags" : {
@@ -1608,6 +1663,10 @@ Response:

"37e3849bda234f53b0e9792a036d14d4f3a9a136d1cb939705dbcd5287941db4" : {
  "0" : ["blonde hair", "blue_eyes", "looking at viewer"],
  "1" : ["bodysuit", "clothing"]
},
"616c6c206b6e6f776e2074616773" : {
  "0" : ["samus favourites", "favourites", "blonde hair", "blue_eyes", "looking at viewer"],
  "1" : ["bodysuit", "clothing"]
}
},
"tags" : {
@@ -1636,6 +1695,19 @@ Response:

"0" : ["blonde hair", "blue_eyes", "looking at viewer"],
"1" : ["bodysuit", "clothing"]
}
},
"616c6c206b6e6f776e2074616773" : {
  "name" : "all known tags",
  "type" : 10,
  "type_pretty" : "virtual combined tag service",
  "storage_tags" : {
    "0" : ["samus favourites", "blonde_hair", "blue_eyes", "looking_at_viewer"],
    "1" : ["bodysuit"]
  },
  "display_tags" : {
    "0" : ["samus favourites", "favourites", "blonde hair", "blue_eyes", "looking at viewer"],
    "1" : ["bodysuit", "clothing"]
  }
}
}
}
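As a quick illustration of consuming the new structure, here is a hypothetical sketch that collects the current-status display tags across every service in a file's 'tags' Object (status key '0' is 'current' in these statuses-to-tags mappings):

```python
def collect_current_display_tags(tags_object):
    # tags_object is the per-file 'tags' Object from /get_files/file_metadata:
    # service_key -> { 'name', 'type', 'type_pretty', 'storage_tags', 'display_tags' }
    out = set()
    for service_key, service_info in tags_object.items():
        out.update(service_info.get('display_tags', {}).get('0', []))
    return out

tags_object = {
    '616c6c206b6e6f776e2074616773': {
        'name': 'all known tags',
        'display_tags': {'0': ['samus favourites', 'blonde hair'], '1': ['bodysuit']},
    },
}
print(sorted(collect_current_display_tags(tags_object)))  # -> ['blonde hair', 'samus favourites']
```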
@@ -33,6 +33,42 @@

<div class="content">
  <h3 id="changelog"><a href="#changelog">changelog</a></h3>
  <ul>
    <li><h3 id="version_507"><a href="#version_507">version 507</a></h3></li>
    <ul>
      <li>misc:</li>
      <li>fixed an issue where you could set 'all known tags' in the media-tag exporter box in the sidecars system</li>
      <li>if a media-tag exporter in the sidecars system is set to an invalid (missing) tag service, the dialog now protests when you try to OK it. also, when you boot into this dialog, it will now moan about the invalid service. also, new media-tag exporters will always start with a valid local tag service.</li>
      <li>Qt import error states are handled better. when the client boots, the various 'could not find Qt' errors at different qtpy and QtCore import stages are now handled separately. the Qt selected by qtpy, if any, is reported, as is the state of QT_API and whether hydrus thought it was importable. it seems like there have been a couple of users caught by something like system-wide QT_API env variables here, which this should reveal better in boot-crash logs from now on</li>
      <li>all the new setup scripts in the base directory now push their location as the new CWD when they start, and they pop back to your original when they exit. you should be able to call them from anywhere now!</li>
      <li>I've written a 'setup_desktop.sh' install script for Linux users to 'install' a hydrus.desktop file for the current install location to your applications directory. thanks to the user who made the original hydrus.desktop file for the help here</li>
      <li>I fixed the focus when you open an 'edit predicate' panel that only has buttons, like 'has audio'/'no audio'. the top button should have focus again, so you can hit enter quickly</li>
      <li>added an updated link to hydownloader on the client api page</li>
      <li>.</li>
      <li>dupes apply better to groups of thumbs:</li>
      <li>tl;dr: when the user sets a 'copy both ways' duplicate file status on more than two thumbnails, the duplicate metadata merge options are applied better now</li>
      <li>advanced explanation: previously, all merge updates were calculated before applying the updates, so when applied to a group of interconnected relationships, the nodes that were not directly connected to each other were not syncing data. now, all merge updates are calculated and applied to each pair in turn, and then the whole batch is repeated once more, ensuring two-way transitivity. for instance, if you are set to copy tags in both directions and set 'A is the best' of three files 'ABC', and B has tag 'x' and C has 'y', then previously A would get 'x' and 'y', but B would not get 'y' and C would not get 'x'. now, A gets 'x' before the AC merge is calculated, so A and C get x, and then the whole operation is repeated, so when AB is re-calculated, B now gets 'y' from the updated A. same thing if you set to archive if either file is archived--now that archived status will propagate across the whole group in one action</li>
      <li>.</li>
      <li>client api:</li>
      <li>the new 'tags' structure in `/get_files/file_metadata` now has the 'all known tags' service's tags</li>
      <li>the 'file_services' structure in `/get_files/file_metadata` now states service name, type, and pretty type, like 'tags'</li>
      <li>`/get_services` now says the service `type` and `type_pretty`, like 'tags'. `/get_services` may be reformatted to a service_key key'd Object at some point, since it uses an old custom human-readable service type as Object key atm and I'd rather we move to the same labels and references for everything, but we'll see</li>
      <li>updated the client api help with more example result data for the above changes (and other stuff like 'all my files')</li>
      <li>updated the client api unit tests to deal with the above changes</li>
      <li>client api version is now 36</li>
      <li>.</li>
      <li>server/janitor improvements:</li>
      <li>I recommend server admins update their servers this week! everything old still works, but jannies who update have new abilities that won't work until you update</li>
      <li>the petition processing page now has an 'account id' text field. paste an account id in there, and you'll get the petition counts just for that account! the petitions requested will also only be for that account!</li>
      <li>if you get a 404 on a 'get petition' call (either due to another janitor clearing the last, or from a server count cache miscount), it no longer throws an error. instead, a popup appears for five seconds saying 'hey, there wasn't one after all, please hit refresh counts'</li>
      <li>.</li>
      <li>boring server improvements:</li>
      <li>refactored the account-fetching routine a little. some behind the scenes account identifier code, which determines an account from a mapping or file record, is now cleaner and more cleanly separated from the 'fetch account from account key' calls. account key is the master account identifier henceforth, and any content lookups will look up the account key and then do normal account lookup after. I will clean this further in the near future</li>
      <li>a new server call looks up the account key from a content object explicitly; this will get more use in future</li>
      <li>all the 'get number of x' server calls now support 'get number of x made by y' for account-specific counting. these numbers aren't cached, but should be fairly quick for janitorial purposes</li>
      <li>same deal for petitions, the server can now fetch petitions by a particular user, if any</li>
      <li>added/updated unit tests for these changes</li>
      <li>general server code cleanup</li>
    </ul>
    <li><h3 id="version_506"><a href="#version_506">version 506</a></h3></li>
    <ul>
      <li>misc:</li>
@@ -126,13 +126,15 @@ There are three external libraries. You just have to get them and put them in th

If you get an error about the venv failing to activate during `setup_venv.sh`, you may need to install venv especially for your system. The specific error message should help you out, but you'll be looking at something along the lines of `apt install python3.10-venv`.

If you like, you can run the `setup_desktop.sh` file to install a hydrus.desktop file to your applications folder. (Or check the template in `install_dir/static/hydrus.desktop` and do it yourself!)

=== "macOS"

    Double-click `setup_venv.command`.

    If you do not have permission to run the .command file, then either open a terminal on the folder and enter:
    If you do not have permission to run the .command file, then open a terminal on the folder and enter:

    `chmod +x setup_venv.command`
@@ -1,13 +1,20 @@

@ECHO off

pushd "%~dp0"

where /q git

IF ERRORLEVEL 1 (

    SET /P gumpf=You do not seem to git installed!
    SET /P gumpf=You do not seem to have git installed!

    popd

    EXIT /B 1

)

git pull

popd

SET /P done=Done!
@@ -1,7 +1,11 @@

#!/bin/bash

pushd "$(dirname "$0")"

git pull

echo "Done!"

read

popd
@@ -1,5 +1,9 @@

#!/bin/bash

pushd "$(dirname "$0")"

git pull

echo "Done!"

popd
@@ -359,7 +359,7 @@ class DuplicateActionOptions( HydrusSerialisable.SerialisableBase ):

return ( self._tag_service_actions, self._rating_service_actions, self._sync_archive_action, self._sync_urls_action )

def ProcessPairIntoContentUpdates( self, first_media, second_media, delete_first = False, delete_second = False, file_deletion_reason = None ):
def ProcessPairIntoContentUpdates( self, first_media, second_media, delete_first = False, delete_second = False, file_deletion_reason = None, do_not_do_deletes = False ):

    if file_deletion_reason is None:

@@ -591,6 +591,11 @@ class DuplicateActionOptions( HydrusSerialisable.SerialisableBase ):

for media in deletee_media:

    if do_not_do_deletes:

        continue

    if media.HasDeleteLocked():

        ClientMedia.ReportDeleteLockFailures( [ media ] )
@@ -23,7 +23,6 @@ from hydrus.core.networking import HydrusNetworking

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientData
from hydrus.client import ClientFiles
from hydrus.client import ClientLocation
from hydrus.client import ClientThreading
from hydrus.client.gui import QtPorting as QP
from hydrus.client.importing import ClientImporting
@@ -3356,7 +3355,7 @@ class ServicesManager( object ):

self._keys_to_services[ CC.TEST_SERVICE_KEY ] = GenerateService( CC.TEST_SERVICE_KEY, HC.TEST_SERVICE, 'test service' )

key = lambda s: s.GetName()
key = lambda s: s.GetName().lower()

self._services_sorted = sorted( services, key = key )
@@ -3381,6 +3380,13 @@ class ServicesManager( object ):

def GetDefaultLocalTagService( self ) -> Service:

    # I can replace this with 'default_local_location_context' kind of thing at some point, but for now we'll merge in here

    return self.GetServices( ( HC.LOCAL_TAG, ) )[0]

def GetLocalMediaFileServices( self ):

    with self._lock:
@ -3457,7 +3463,7 @@ class ServicesManager( object ):
|
|||
|
||||
|
||||
|
||||
def GetServices( self, desired_types: typing.Collection[ int ] = HC.ALL_SERVICES, randomised: bool = False ):
|
||||
def GetServices( self, desired_types: typing.Collection[ int ] = HC.ALL_SERVICES, randomised: bool = False ) -> typing.List[ Service ]:
|
||||
|
||||
with self._lock:
|
||||
|
||||
|
|
|
@@ -3,6 +3,74 @@ import os

# If not explicitly set, prefer PySide instead of PyQt, which is the qtpy default
# It is critical that this runs on startup *before* anything is imported from qtpy.

def get_qt_api_str_status():

    try:

        if 'QT_API' in os.environ:

            qt_api = os.environ[ 'QT_API' ]

            import_status = 'imported ok'

            if qt_api == 'pyqt5':

                try:

                    import PyQt5

                except ImportError as e:

                    import_status = 'did not import ok: {}'.format( e )

            elif qt_api == 'pyside2':

                try:

                    import PySide2

                except ImportError as e:

                    import_status = 'did not import ok: {}'.format( e )

            elif qt_api == 'pyqt6':

                try:

                    import PyQt6

                except ImportError as e:

                    import_status = 'did not import ok: {}'.format( e )

            elif qt_api == 'pyside6':

                try:

                    import PySide6

                except ImportError as e:

                    import_status = 'did not import ok: {}'.format( e )

            return 'QT_API: {}, {}'.format( qt_api, import_status )

        else:

            return 'No QT_API set.'

    except Exception as e:

        return 'Unable to get QT_API info: {}'.format( e )

if 'QT_API' not in os.environ:

    try:

@@ -28,23 +96,65 @@ if 'QT_API' not in os.environ:

#

def DoWinDarkMode():

    os.environ[ 'QT_QPA_PLATFORM' ] = 'windows:darkmode=1'

try:

    import qtpy

except ModuleNotFoundError:
except ModuleNotFoundError as e:

    raise Exception( 'The qtpy module was not found! Are you sure you installed and activated your venv correctly? Check the \'running from source\' section of the help if you are confused!' )
    qt_str = get_qt_api_str_status()

    message = 'Either the qtpy module was not found, or qtpy could not find a Qt to use! Error was: {}'.format(
        e
    )

    message += os.linesep * 2

    message += 'Are you sure you installed and activated your venv correctly? Check the \'running from source\' section of the help if you are confused! Here is info on QT_API: {}'.format(
        qt_str
    )

    raise Exception( message )

from qtpy import QtCore as QC
from qtpy import QtWidgets as QW
from qtpy import QtGui as QG
try:

    from qtpy import QtCore as QC
    from qtpy import QtWidgets as QW
    from qtpy import QtGui as QG

except ModuleNotFoundError as e:

    message = 'One of the Qt modules could not be loaded! Error was: {}'.format(
        e
    )

    message += os.linesep * 2

    try:

        message += 'Of the different Qts, qtpy selected: PySide2 ({}), PySide6 ({}), PyQt5 ({}), PyQt6 ({}).'.format(
            'selected' if qtpy.PYSIDE2 else 'not selected',
            'selected' if qtpy.PYSIDE6 else 'not selected',
            'selected' if qtpy.PYQT5 else 'not selected',
            'selected' if qtpy.PYQT6 else 'not selected'
        )

    except:

        message += 'qtpy had problems saying which module it had selected!'

    qt_str = get_qt_api_str_status()

    message += ' Here is info on QT_API: {}'.format(
        qt_str
    )

    message += os.linesep * 2

    message += 'If you are running from a built release, please let hydev know!'

    raise Exception( message )

# 2022-07
# an older version of qtpy, 1.9 or so, didn't actually have attribute qtpy.PYQT6, so we'll test and assign carefully

@@ -111,6 +221,11 @@ else:

    raise RuntimeError( 'You need one of PySide2, PySide6, PyQt5, or PyQt6' )

def DoWinDarkMode():

    os.environ[ 'QT_QPA_PLATFORM' ] = 'windows:darkmode=1'

def MonkeyPatchMissingMethods():

    if WE_ARE_QT5:

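The `get_qt_api_str_status` helper above reports which binding `QT_API` requests and whether it imports. A standalone sketch of the same idea that probes importability without actually importing the Qt binding (function and dict names here are illustrative, not hydrus API):

```python
import importlib.util
import os

# map qtpy's QT_API values to the importable module names
BINDINGS = { 'pyqt5' : 'PyQt5', 'pyside2' : 'PySide2', 'pyqt6' : 'PyQt6', 'pyside6' : 'PySide6' }

def qt_api_status() -> str:

    qt_api = os.environ.get( 'QT_API' )

    if qt_api is None:

        return 'No QT_API set.'

    module_name = BINDINGS.get( qt_api )

    if module_name is None:

        return 'QT_API: {}, unrecognised value'.format( qt_api )

    # find_spec checks availability without importing the (heavy) Qt binding
    available = importlib.util.find_spec( module_name ) is not None

    return 'QT_API: {}, {}'.format( qt_api, 'importable' if available else 'not importable' )
```

Because the check never imports the binding, it is safe to call even before qtpy decides which Qt to load.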
@@ -3,8 +3,10 @@ import typing

from qtpy import QtCore as QC
from qtpy import QtWidgets as QW

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusExceptions
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusText

from hydrus.client import ClientConstants as CC

@@ -19,6 +21,7 @@ from hydrus.client.gui.metadata import ClientGUIMetadataMigrationExporters

from hydrus.client.gui.metadata import ClientGUIMetadataMigrationImporters
from hydrus.client.gui.widgets import ClientGUICommon
from hydrus.client.metadata import ClientMetadataMigration
from hydrus.client.metadata import ClientMetadataMigrationExporters

class EditSingleFileMetadataRouterPanel( ClientGUIScrolledPanels.EditPanel ):

@@ -140,6 +143,14 @@ class SingleFileMetadataRoutersControl( ClientGUIListBoxes.AddEditDeleteListBox

        exporter = self._allowed_exporter_classes[0]()

        if isinstance( exporter, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTags ):

            if not HG.client_controller.services_manager.ServiceExists( exporter.GetServiceKey() ):

                exporter.SetServiceKey( HG.client_controller.services_manager.GetDefaultLocalTagService().GetServiceKey() )

        router = ClientMetadataMigration.SingleFileMetadataRouter( exporter = exporter )

        return self._EditRouter( router )

@@ -55,7 +55,7 @@ class EditSingleFileMetadataExporterPanel( ClientGUIScrolledPanels.EditPanel ):

        self._allowed_exporter_classes = allowed_exporter_classes

        self._current_exporter_class = type( exporter )
        self._service_key = CC.COMBINED_TAG_SERVICE_KEY
        self._service_key = CC.DEFAULT_LOCAL_TAG_SERVICE_KEY

        #

@@ -213,6 +213,15 @@ class EditSingleFileMetadataExporterPanel( ClientGUIScrolledPanels.EditPanel ):

        if self._current_exporter_class == ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTags:

            try:

                HG.client_controller.services_manager.GetName( self._service_key )

            except HydrusExceptions.DataMissing:

                raise HydrusExceptions.VetoException( 'Sorry, your exporter needs a valid tag service! The selected one is missing!' )

            exporter = ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaTags( service_key = self._service_key )

        elif self._current_exporter_class == ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaURLs:

@@ -247,7 +256,7 @@ class EditSingleFileMetadataExporterPanel( ClientGUIScrolledPanels.EditPanel ):

    def _SelectService( self ):

        service_key = ClientGUIDialogsQuick.SelectServiceKey( service_types = HC.ALL_TAG_SERVICES, unallowed = [ self._service_key ] )
        service_key = ClientGUIDialogsQuick.SelectServiceKey( service_types = HC.REAL_TAG_SERVICES, unallowed = [ self._service_key ] )

        if service_key is None:

@@ -293,6 +302,13 @@ class EditSingleFileMetadataExporterPanel( ClientGUIScrolledPanels.EditPanel ):

            self._service_selection_panel.setVisible( True )

            if not HG.client_controller.services_manager.ServiceExists( self._service_key ):

                message = 'Hey, the tag service for your exporter does not seem to exist! Maybe it was deleted. Please select a new one that does.'

                QW.QMessageBox.warning( self, 'Warning', message )

        elif isinstance( exporter, ClientMetadataMigrationExporters.SingleFileMetadataExporterMediaURLs ):

            pass

@@ -617,6 +617,8 @@ class ReviewAccountsPanel( QW.QWidget ):

    def _RefreshAccounts( self ):

        # TODO: so, rework this guy, and modifyaccounts parent, to not hold account_identifiers, but account_keys. have an async lookup convert contents to account keys before launching this guy

        account_identifiers = self._account_identifiers
        service = self._service

@@ -646,18 +648,18 @@ class ReviewAccountsPanel( QW.QWidget ):

        account = result[ 'account' ]

        account_key = account.GetAccountKey()
        subject_account_key = account.GetAccountKey()

        if account_key in account_keys_to_accounts:
        if subject_account_key in account_keys_to_accounts:

            continue

        account_keys_to_accounts[ account_key ] = account
        account_keys_to_accounts[ subject_account_key ] = account

        try:

            response = self._service.Request( HC.GET, 'account_info', { 'subject_identifier' : HydrusNetwork.AccountIdentifier( account_key = account_key ) } )
            response = self._service.Request( HC.GET, 'account_info', { 'subject_account_key' : subject_account_key } )

        except Exception as e:

@@ -668,7 +670,7 @@ class ReviewAccountsPanel( QW.QWidget ):

        account_string = str( response[ 'account_info' ] )

        account_keys_to_account_info[ account_key ] = account_string
        account_keys_to_account_info[ subject_account_key ] = account_string

@@ -806,6 +808,7 @@ class ReviewAccountsPanel( QW.QWidget ):

        self._RefreshAccounts()

class ModifyAccountsPanel( ClientGUIScrolledPanels.ReviewPanel ):

    def __init__( self, parent: QW.QWidget, service_key: bytes, subject_identifiers: typing.Collection[ HydrusNetwork.AccountIdentifier ] ):

@@ -1011,7 +1014,7 @@ class ModifyAccountsPanel( ClientGUIScrolledPanels.ReviewPanel ):

        for subject_account_key in subject_account_keys:

            service.Request( HC.POST, 'modify_account_account_type', { 'subject_identifier' : HydrusNetwork.AccountIdentifier( account_key = subject_account_key ), 'account_type_key' : account_type_key } )
            service.Request( HC.POST, 'modify_account_account_type', { 'subject_account_key' : subject_account_key, 'account_type_key' : account_type_key } )

        return 1

@@ -1089,7 +1092,7 @@ class ModifyAccountsPanel( ClientGUIScrolledPanels.ReviewPanel ):

        for subject_account_key in subject_account_keys:

            service.Request( HC.POST, 'modify_account_ban', { 'subject_identifier' : HydrusNetwork.AccountIdentifier( account_key = subject_account_key ), 'reason' : reason, 'expires' : expires } )
            service.Request( HC.POST, 'modify_account_ban', { 'subject_account_key' : subject_account_key, 'reason' : reason, 'expires' : expires } )

        return 1

@@ -1124,7 +1127,7 @@ class ModifyAccountsPanel( ClientGUIScrolledPanels.ReviewPanel ):

        for ( subject_account_key, new_expires ) in subject_account_keys_and_new_expires:

            service.Request( HC.POST, 'modify_account_expires', { 'subject_identifier' : HydrusNetwork.AccountIdentifier( account_key = subject_account_key ), 'expires' : new_expires } )
            service.Request( HC.POST, 'modify_account_expires', { 'subject_account_key' : subject_account_key, 'expires' : new_expires } )

        return 1

@@ -1181,7 +1184,7 @@ class ModifyAccountsPanel( ClientGUIScrolledPanels.ReviewPanel ):

        for subject_account_key in subject_account_keys:

            service.Request( HC.POST, 'modify_account_set_message', { 'subject_identifier' : HydrusNetwork.AccountIdentifier( account_key = subject_account_key ), 'message': message } )
            service.Request( HC.POST, 'modify_account_set_message', { 'subject_account_key' : subject_account_key, 'message': message } )

        return 1

@@ -1229,7 +1232,7 @@ class ModifyAccountsPanel( ClientGUIScrolledPanels.ReviewPanel ):

        for subject_account_key in subject_account_keys:

            service.Request( HC.POST, 'modify_account_unban', { 'subject_identifier' : HydrusNetwork.AccountIdentifier( account_key = subject_account_key ) } )
            service.Request( HC.POST, 'modify_account_unban', { 'subject_account_key' : subject_account_key } )

        return 1

@@ -4266,10 +4266,15 @@ class ManagementPanelPetitions( ManagementPanel ):

        self._last_petition_type_fetched = None

        self._last_fetched_subject_account_key = None

        #

        self._petitions_info_panel = ClientGUICommon.StaticBox( self, 'petitions info' )

        self._petition_account_key = QW.QLineEdit( self._petitions_info_panel )
        self._petition_account_key.setPlaceholderText( 'account id filter' )

        self._refresh_num_petitions_button = ClientGUICommon.BetterButton( self._petitions_info_panel, 'refresh counts', self._FetchNumPetitions )

        self._petition_types_to_controls = {}

@@ -4364,6 +4369,7 @@ class ManagementPanelPetitions( ManagementPanel ):

        #

        self._petitions_info_panel.Add( self._petition_account_key, CC.FLAGS_EXPAND_PERPENDICULAR )
        self._petitions_info_panel.Add( self._refresh_num_petitions_button, CC.FLAGS_EXPAND_PERPENDICULAR )

        for hbox in content_type_hboxes:

@@ -4418,6 +4424,9 @@ class ManagementPanelPetitions( ManagementPanel ):

        self._contents_add.rightClicked.connect( self.EventAddRowRightClick )
        self._contents_delete.rightClicked.connect( self.EventDeleteRowRightClick )

        self._petition_account_key.textChanged.connect( self._UpdateAccountKey )

        self._UpdateAccountKey()
        self._DrawCurrentPetition()

@@ -4565,23 +4574,35 @@ class ManagementPanelPetitions( ManagementPanel ):

    def _DrawNumPetitions( self ):

        for ( content_type, status, count ) in self._num_petition_info:
        if self._num_petition_info is None:

            petition_type = ( content_type, status )
            for ( petition_type, ( st, button ) ) in self._petition_types_to_controls.items():

                st.setText( '0 petitions' )

                button.setEnabled( False )

            if petition_type in self._petition_types_to_controls:
        else:

            for ( content_type, status, count ) in self._num_petition_info:

                ( st, button ) = self._petition_types_to_controls[ petition_type ]
                petition_type = ( content_type, status )

                st.setText( HydrusData.ToHumanInt( count )+' petitions' )

                if count > 0:
                if petition_type in self._petition_types_to_controls:

                    button.setEnabled( True )
                    ( st, button ) = self._petition_types_to_controls[ petition_type ]

                else:
                    st.setText( HydrusData.ToHumanInt( count )+' petitions' )

                    button.setEnabled( False )
                    if count > 0:

                        button.setEnabled( True )

                    else:

                        button.setEnabled( False )

@@ -4623,7 +4644,7 @@ class ManagementPanelPetitions( ManagementPanel ):

    def _FetchNumPetitions( self ):

        def do_it( service ):
        def do_it( service, subject_account_key = None ):

            def qt_draw( n_p_i ):

@@ -4654,7 +4675,25 @@ class ManagementPanelPetitions( ManagementPanel ):

            try:

                response = service.Request( HC.GET, 'num_petitions' )
                if subject_account_key is None:

                    response = service.Request( HC.GET, 'num_petitions' )

                else:

                    try:

                        response = service.Request( HC.GET, 'num_petitions', { 'subject_account_key' : subject_account_key } )

                    except HydrusExceptions.NotFoundException:

                        HydrusData.ShowText( 'That account id was not found!' )

                        QP.CallAfter( qt_draw, None )

                        return

                num_petition_info = response[ 'num_petitions' ]

@@ -4668,7 +4707,11 @@ class ManagementPanelPetitions( ManagementPanel ):

        self._refresh_num_petitions_button.setText( 'Fetching\u2026' )

        self._controller.CallToThread( do_it, self._service )
        subject_account_key = self._GetSubjectAccountKey()

        self._last_fetched_subject_account_key = subject_account_key

        self._controller.CallToThread( do_it, self._service, subject_account_key )

    def _FetchPetition( self, content_type, status ):

@@ -4699,17 +4742,34 @@ class ManagementPanelPetitions( ManagementPanel ):

            button.setEnabled( True )
            button.setText( 'fetch '+HC.content_status_string_lookup[status]+' '+HC.content_type_string_lookup[content_type]+' petition' )
            button.setText( 'fetch {} {} petition'.format( HC.content_status_string_lookup[ status ], HC.content_type_string_lookup[ content_type ] ) )

        def do_it( service ):
        def do_it( service, subject_account_key = None ):

            try:

                response = service.Request( HC.GET, 'petition', { 'content_type' : content_type, 'status' : status } )
                if subject_account_key is None:

                    response = service.Request( HC.GET, 'petition', { 'content_type' : content_type, 'status' : status } )

                else:

                    response = service.Request( HC.GET, 'petition', { 'content_type' : content_type, 'status' : status, 'subject_account_key' : subject_account_key } )

                QP.CallAfter( qt_setpet, response['petition'] )

            except HydrusExceptions.NotFoundException:

                job_key = ClientThreading.JobKey()

                job_key.SetVariable( 'popup_text_1', 'Hey, the server did not have a petition after all. Please hit refresh counts.' )

                job_key.Delete( 5 )

                HG.client_controller.pub( 'message', job_key )

            finally:

                QP.CallAfter( qt_done )

@@ -4728,7 +4788,9 @@ class ManagementPanelPetitions( ManagementPanel ):

        button.setEnabled( False )
        button.setText( 'Fetching\u2026' )

        self._controller.CallToThread( do_it, self._service )
        subject_account_key = self._GetSubjectAccountKey()

        self._controller.CallToThread( do_it, self._service, subject_account_key )

    def _FlipSelected( self ):

@@ -4772,6 +4834,34 @@ class ManagementPanelPetitions( ManagementPanel ):

        return contents_and_checks

    def _GetSubjectAccountKey( self ):

        account_key_hex = self._petition_account_key.text()

        if len( account_key_hex ) == 0:

            return None

        else:

            try:

                account_key_bytes = bytes.fromhex( account_key_hex )

                if len( account_key_bytes ) != 32:

                    raise Exception()

                return account_key_bytes

            except Exception as e:

                return None

    def _SetContentsAndChecks( self, action, contents_and_checks, sort_type ):

        def key( c_and_s ):

@@ -4871,6 +4961,50 @@ class ManagementPanelPetitions( ManagementPanel ):

    def _UpdateAccountKey( self ):

        account_key_hex = self._petition_account_key.text()

        if len( account_key_hex ) == 0:

            valid = True

        else:

            try:

                account_key_bytes = bytes.fromhex( account_key_hex )

                if len( account_key_bytes ) != 32:

                    raise Exception()

                valid = True

            except Exception as e:

                valid = False

        if valid:

            self._petition_account_key.setObjectName( 'HydrusValid' )

            if self._GetSubjectAccountKey() != self._last_fetched_subject_account_key:

                self._FetchNumPetitions()

        else:

            self._petition_account_key.setObjectName( 'HydrusInvalid' )

        self._petition_account_key.style().polish( self._petition_account_key )

    def ContentsAddDoubleClick( self, item ):

        selected_indices = self._contents_add.GetSelectedIndices()

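`_GetSubjectAccountKey` and `_UpdateAccountKey` above share one validation rule: an empty field means no account filter, and anything else must decode to exactly 32 bytes of hex. A standalone sketch of that rule (the function name here is illustrative):

```python
def parse_account_key( account_key_hex: str ):

    # empty input means 'no account filter'
    if len( account_key_hex ) == 0:

        return None

    try:

        account_key_bytes = bytes.fromhex( account_key_hex )

    except ValueError:

        return None

    # a hydrus account key is 32 bytes, i.e. 64 hex characters
    if len( account_key_bytes ) != 32:

        return None

    return account_key_bytes
```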
@@ -1,3 +1,4 @@
import collections
import itertools
import os
import random

@@ -1573,21 +1574,76 @@ class MediaPanel( ClientMedia.ListeningMediaList, QW.QScrollArea, CAC.Applicatio

        pair_info = []

        for ( first_media, second_media ) in media_pairs:
        # there's an issue here in that one decision will affect the next. if we say 'copy tags both sides' and say A > B & C, then B's tags, merged with A, should soon merge with C
        # therefore, we need to update the media objects as we go here, which means we need duplicates to force content updates on
        # this is a little hacky, so maybe a big rewrite here would be nice

        # There's a second issue, wew, in that in order to propagate C back to B, we need to do the whole thing twice! wow!
        # some service_key_to_content_updates preservation gubbins is needed as a result

        hashes_to_duplicated_media = {}
        hash_pairs_to_list_of_service_keys_to_content_updates = collections.defaultdict( list )

        for is_first_run in ( True, False ):

            first_hash = first_media.GetHash()
            second_hash = second_media.GetHash()

            if duplicate_action_options is None:
            for ( first_media, second_media ) in media_pairs:

                list_of_service_keys_to_content_updates = []
                first_hash = first_media.GetHash()
                second_hash = second_media.GetHash()

            else:
                if first_hash not in hashes_to_duplicated_media:

                    hashes_to_duplicated_media[ first_hash ] = first_media.Duplicate()

                list_of_service_keys_to_content_updates = [ duplicate_action_options.ProcessPairIntoContentUpdates( first_media, second_media, file_deletion_reason = file_deletion_reason ) ]
                first_duplicated_media = hashes_to_duplicated_media[ first_hash ]

                if second_hash not in hashes_to_duplicated_media:

                    hashes_to_duplicated_media[ second_hash ] = second_media.Duplicate()

                second_duplicated_media = hashes_to_duplicated_media[ second_hash ]

                list_of_service_keys_to_content_updates = hash_pairs_to_list_of_service_keys_to_content_updates[ ( first_hash, second_hash ) ]

                if duplicate_action_options is not None:

                    do_not_do_deletes = is_first_run

                    # so the important part of this mess is here. we send the duplicated media, which is keeping up with content updates, to the method here
                    # original 'first_media' is not changed, and won't be until the database Write clears and publishes everything
                    list_of_service_keys_to_content_updates.append( duplicate_action_options.ProcessPairIntoContentUpdates( first_duplicated_media, second_duplicated_media, file_deletion_reason = file_deletion_reason, do_not_do_deletes = do_not_do_deletes ) )

                for service_keys_to_content_updates in list_of_service_keys_to_content_updates:

                    for ( service_key, content_updates ) in service_keys_to_content_updates.items():

                        for content_update in content_updates:

                            hashes = content_update.GetHashes()

                            if first_hash in hashes:

                                first_duplicated_media.GetMediaResult().ProcessContentUpdate( service_key, content_update )

                            if second_hash in hashes:

                                second_duplicated_media.GetMediaResult().ProcessContentUpdate( service_key, content_update )

                if is_first_run:

                    continue

                pair_info.append( ( duplicate_type, first_hash, second_hash, list_of_service_keys_to_content_updates ) )

        pair_info.append( ( duplicate_type, first_hash, second_hash, list_of_service_keys_to_content_updates ) )

        if len( pair_info ) > 0:

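The comments above explain why the pair loop now runs twice: with A > B and B > C, content copied into C late in the first pass can only propagate back to earlier media on a second pass. A toy sketch of that two-pass propagation using plain tag sets in place of media objects and content updates (all names illustrative):

```python
def merge_pairs( hashes_to_tags, pairs ):

    # run the whole pair list twice, mirroring the is_first_run loop above,
    # so updates made late in pass one reach the earlier pairs in pass two
    for is_first_run in ( True, False ):

        for ( first_hash, second_hash ) in pairs:

            merged = hashes_to_tags[ first_hash ] | hashes_to_tags[ second_hash ]

            hashes_to_tags[ first_hash ] = merged
            hashes_to_tags[ second_hash ] = merged

    return hashes_to_tags
```

With a single pass, A > B followed by B > C would leave A without C's tags; the second pass closes that gap.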
@@ -1466,7 +1466,7 @@ class AutoCompleteDropdownTags( AutoCompleteDropdown ):

        if location_context.IsAllKnownFiles() and self._tag_service_key == CC.COMBINED_TAG_SERVICE_KEY:

            top_local_tag_service_key = list( HG.client_controller.services_manager.GetServiceKeys( ( HC.LOCAL_TAG, ) ) )[0]
            top_local_tag_service_key = HG.client_controller.services_manager.GetDefaultLocalTagService().GetServiceKey()

            self._SetTagService( top_local_tag_service_key )

@@ -60,6 +60,8 @@ class StaticSystemPredicateButton( QW.QWidget ):

        QP.AddToLayout( hbox, self._predicates_button, flag )
        QP.AddToLayout( hbox, self._remove_button, CC.FLAGS_CENTER )

        self.setFocusProxy( self._predicates_button )

        self.setLayout( hbox )

@@ -1272,7 +1272,21 @@ class HydrusResourceClientAPIRestrictedGetServices( HydrusResourceClientAPIRestr

        services = HG.client_controller.services_manager.GetServices( service_types )

        body_dict[ name ] = [ { 'name' : service.GetName(), 'service_key' : service.GetServiceKey().hex() } for service in services ]
        services_list = []

        for service in services:

            service_dict = {
                'name' : service.GetName(),
                'type' : service.GetServiceType(),
                'type_pretty' : HC.service_string_lookup[ service.GetServiceType() ],
                'service_key' : service.GetServiceKey().hex()
            }

            services_list.append( service_dict )

        body_dict[ name ] = services_list

        body = Dumps( body_dict, request.preferred_mime )

@@ -2468,7 +2482,7 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien

        services_manager = HG.client_controller.services_manager

        real_tag_service_keys = services_manager.GetServiceKeys( HC.REAL_TAG_SERVICES )
        tag_service_keys = services_manager.GetServiceKeys( HC.ALL_TAG_SERVICES )
        service_keys_to_types = { service.GetServiceKey() : service.GetServiceType() for service in services_manager.GetServices() }
        service_keys_to_names = services_manager.GetServiceKeysToNames()

@@ -2529,6 +2543,9 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien

            timestamp = locations_manager.GetCurrentTimestamp( file_service_key )

            metadata_row[ 'file_services' ][ 'current' ][ file_service_key.hex() ] = {
                'name' : service_keys_to_names[ file_service_key ],
                'type' : service_keys_to_types[ file_service_key ],
                'type_pretty' : HC.service_string_lookup[ service_keys_to_types[ file_service_key ] ],
                'time_imported' : timestamp
            }

@@ -2540,6 +2557,9 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien

            ( timestamp, original_timestamp ) = locations_manager.GetDeletedTimestamps( file_service_key )

            metadata_row[ 'file_services' ][ 'deleted' ][ file_service_key.hex() ] = {
                'name' : service_keys_to_names[ file_service_key ],
                'type' : service_keys_to_types[ file_service_key ],
                'type_pretty' : HC.service_string_lookup[ service_keys_to_types[ file_service_key ] ],
                'time_deleted' : timestamp,
                'time_imported' : original_timestamp
            }

@@ -2609,7 +2629,7 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien

            tags_dict = {}

            for tag_service_key in real_tag_service_keys:
            for tag_service_key in tag_service_keys:

                storage_statuses_to_tags = tags_manager.GetStatusesToTags( tag_service_key, ClientTags.TAG_DISPLAY_STORAGE )

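With the change above, every entry the Client API returns for a service gains 'type' and 'type_pretty' alongside the existing fields. A sketch of the per-service dict shape a client would now receive (the values here are made up for illustration; the real 'type' integers come from HydrusConstants):

```python
# hypothetical example of one /get_services entry after this change
service_dict = {
    'name' : 'my tags',
    'type' : 5,
    'type_pretty' : 'local tag service',
    'service_key' : '6c6f63616c2074616773'
}

# the service type is now present in both numeric and human-readable form
assert set( service_dict ) == { 'name', 'type', 'type_pretty', 'service_key' }
```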
@@ -80,8 +80,8 @@ options = {}

# Misc

NETWORK_VERSION = 20
SOFTWARE_VERSION = 506
CLIENT_API_VERSION = 35
SOFTWARE_VERSION = 507
CLIENT_API_VERSION = 36

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@@ -211,11 +211,7 @@ def ParseHydrusNetworkGETArgs( requests_args ):

    args = ParseTwistedRequestGETArgs( requests_args, INT_PARAMS, BYTE_PARAMS, STRING_PARAMS, JSON_PARAMS, JSON_BYTE_LIST_PARAMS )

    if 'subject_account_key' in args:

        args[ 'subject_identifier' ] = HydrusNetwork.AccountIdentifier( account_key = args[ 'subject_account_key' ] )

    elif 'subject_hash' in args:
    if 'subject_hash' in args: # or parent/sib stuff in args

        hash = args[ 'subject_hash' ]

@@ -230,6 +226,8 @@ def ParseHydrusNetworkGETArgs( requests_args ):

        content = HydrusNetwork.Content( HC.CONTENT_TYPE_FILES, [ hash ] )

        # TODO: add siblings and parents here

        args[ 'subject_identifier' ] = HydrusNetwork.AccountIdentifier( content = content )

@@ -422,7 +420,7 @@ class ParsedRequestArguments( dict ):

        raise HydrusExceptions.BadRequestException( 'It looks like the parameter "{}" was missing!'.format( key ) )

    def GetValue( self, key, expected_type, expected_list_type = None, expected_dict_types = None, default_value = None ):
    def GetValue( self, key, expected_type, expected_list_type = None, expected_dict_types = None, default_value = None, none_on_missing = False ):

        # not None because in JSON sometimes people put 'null' to mean 'did not enter this optional parameter'
        if key in self and self[ key ] is not None:

@@ -485,7 +483,7 @@ class ParsedRequestArguments( dict ):

        else:

            if default_value is None:
            if default_value is None and not none_on_missing:

                raise HydrusExceptions.BadRequestException( 'The required parameter "{}" was missing!'.format( key ) )

@@ -2,7 +2,7 @@ import os

import requests
import time
import traceback

requests.Request

import twisted.internet.ssl
from twisted.internet import threads, reactor, defer

@@ -94,9 +94,9 @@ class DB( HydrusDB.HydrusDB ):

self._read_commands_to_methods = {
'access_key' : self._GetAccessKey,
'account' : self._GetAccountFromAccountKey,
- 'account_from_content' : self._GetAccountFromContent,
'account_info' : self._GetAccountInfo,
'account_key_from_access_key' : self._GetAccountKeyFromAccessKey,
+ 'account_key_from_content' : self._GetAccountKeyFromContent,
'account_types' : self._GetAccountTypes,
'auto_create_account_types' : self._GetAutoCreateAccountTypes,
'auto_create_registration_key' : self._GetAutoCreateRegistrationKey,

@@ -672,7 +672,7 @@ class DB( HydrusDB.HydrusDB ):

return HydrusNetwork.Account.GenerateAccountFromTuple( ( account_key, account_type, created, expires, dictionary ) )

- def _GetAccountFromContent( self, service_key, content ):
+ def _GetAccountKeyFromContent( self, service_key, content ):

service_id = self._GetServiceId( service_key )
service_type = self._GetServiceType( service_id )

@@ -765,9 +765,9 @@ class DB( HydrusDB.HydrusDB ):

( account_id, ) = result

- account = self._GetAccount( service_id, account_id )
+ account_key = self._GetAccountKeyFromAccountId( account_id )

- return account
+ return account_key

def _GetAccountFromAccountKey( self, service_key, account_key ):

@@ -834,7 +834,7 @@ class DB( HydrusDB.HydrusDB ):

return account_key

- def _GetAccountId( self, account_key ):
+ def _GetAccountId( self, account_key: bytes ) -> int:

result = self._Execute( 'SELECT account_id FROM accounts WHERE account_key = ?;', ( sqlite3.Binary( account_key ), ) ).fetchone()
@@ -2688,11 +2688,18 @@ class DB( HydrusDB.HydrusDB ):

- def _RepositoryGetFilePetition( self, service_id ):
+ def _RepositoryGetFilePetition( self, service_id, account_id = None ):

( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )

- result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_files_table_name + ' LIMIT 100;' ).fetchall()
+ if account_id is None:
+     result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM {} LIMIT 100;'.format( petitioned_files_table_name ) ).fetchall()
+ else:
+     result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM {} WHERE account_id = ? LIMIT 100;'.format( petitioned_files_table_name ), ( account_id, ) ).fetchall()

if len( result ) == 0:

@@ -2742,11 +2749,18 @@ class DB( HydrusDB.HydrusDB ):

return result

- def _RepositoryGetMappingPetition( self, service_id ):
+ def _RepositoryGetMappingPetition( self, service_id, account_id = None ):

( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )

- result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_mappings_table_name + ' LIMIT 100;' ).fetchall()
+ if account_id is None:
+     result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM {} LIMIT 100;'.format( petitioned_mappings_table_name ) ).fetchall()
+ else:
+     result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM {} WHERE account_id = ? LIMIT 100;'.format( petitioned_mappings_table_name ), ( account_id, ) ).fetchall()

if len( result ) == 0:
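The pattern in both hunks is the same: the old string-concatenated query always scanned every account's petitions, while the new version can optionally narrow to one subject account. A self-contained sqlite sketch of that optional-filter shape (hypothetical table name and data, not the real repository schema):

```python
import sqlite3

db = sqlite3.connect( ':memory:' )
db.execute( 'CREATE TABLE petitioned_files ( account_id INTEGER, reason_id INTEGER );' )
db.executemany( 'INSERT INTO petitioned_files VALUES ( ?, ? );', [ ( 1, 10 ), ( 1, 10 ), ( 1, 11 ), ( 2, 10 ) ] )

def get_petition_pairs( account_id = None ):
    
    table_name = 'petitioned_files'
    
    if account_id is None:
        
        # old behaviour: distinct ( account, reason ) pairs from any account
        return db.execute( 'SELECT DISTINCT account_id, reason_id FROM {} LIMIT 100;'.format( table_name ) ).fetchall()
        
    else:
        
        # new behaviour: just the subject account's petitions
        return db.execute( 'SELECT DISTINCT account_id, reason_id FROM {} WHERE account_id = ? LIMIT 100;'.format( table_name ), ( account_id, ) ).fetchall()
        
```

Note the design split: the table name is interpolated with `.format` because it is a trusted, internally generated identifier, while the user-supplied `account_id` stays in a `?` placeholder.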
@@ -2908,7 +2922,7 @@ class DB( HydrusDB.HydrusDB ):

return master_tag_id

- def _RepositoryGetNumPetitions( self, service_key, account ):
+ def _RepositoryGetNumPetitions( self, service_key, account, subject_account_key = None ):

service_id = self._GetServiceId( service_key )
@@ -2940,100 +2954,139 @@ class DB( HydrusDB.HydrusDB ):

final_petition_count_info = []

for ( content_type, content_status, info_type ) in petition_count_info:
if subject_account_key is None:

result = self._Execute( 'SELECT info FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, info_type ) ).fetchone()

if result is None:

self._RepositoryRegenerateServiceInfoSpecific( service_id, ( info_type, ) )
for ( content_type, content_status, info_type ) in petition_count_info:

result = self._Execute( 'SELECT info FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, info_type ) ).fetchone()

( count, ) = result
if result is None:

self._RepositoryRegenerateServiceInfoSpecific( service_id, ( info_type, ) )

result = self._Execute( 'SELECT info FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, info_type ) ).fetchone()

( count, ) = result

final_petition_count_info.append( ( content_type, content_status, count ) )

final_petition_count_info.append( ( content_type, content_status, count ) )
else:

try:

subject_account_id = self._GetAccountId( subject_account_key )

except HydrusExceptions.InsufficientCredentialsException:

raise HydrusExceptions.NotFoundException( 'That subject account id was not found on this service!' )

for ( content_type, content_status, info_type ) in petition_count_info:

count = self._RepositoryGetServiceInfoSpecificForAccount( service_id, info_type, subject_account_id )

final_petition_count_info.append( ( content_type, content_status, count ) )

return final_petition_count_info
def _RepositoryGetPetition( self, service_key, account, content_type, status ):
def _RepositoryGetPetition( self, service_key, account, content_type, status, subject_account_key = None ):

# TODO: update this guy to take reason too, for (account key, reason) tuple lookups

service_id = self._GetServiceId( service_key )

try:

if subject_account_key is None:

subject_account_id = None

else:

subject_account_id = self._GetAccountId( subject_account_key )

if content_type == HC.CONTENT_TYPE_FILES:

petition = self._RepositoryGetFilePetition( service_id )
petition = self._RepositoryGetFilePetition( service_id, account_id = subject_account_id )

elif content_type == HC.CONTENT_TYPE_MAPPINGS:

petition = self._RepositoryGetMappingPetition( service_id )
petition = self._RepositoryGetMappingPetition( service_id, account_id = subject_account_id )

elif content_type == HC.CONTENT_TYPE_TAG_PARENTS:

if status == HC.CONTENT_STATUS_PENDING:

petition = self._RepositoryGetTagParentPend( service_id )
petition = self._RepositoryGetTagParentPend( service_id, account_id = subject_account_id )

else:

petition = self._RepositoryGetTagParentPetition( service_id )
petition = self._RepositoryGetTagParentPetition( service_id, account_id = subject_account_id )

elif content_type == HC.CONTENT_TYPE_TAG_SIBLINGS:

if status == HC.CONTENT_STATUS_PENDING:

petition = self._RepositoryGetTagSiblingPend( service_id )
petition = self._RepositoryGetTagSiblingPend( service_id, account_id = subject_account_id )

else:

petition = self._RepositoryGetTagSiblingPetition( service_id )
petition = self._RepositoryGetTagSiblingPetition( service_id, account_id = subject_account_id )

else:

raise HydrusExceptions.BadRequestException( 'Unknown content type!' )

except HydrusExceptions.NotFoundException:

info_type = None

if content_type == HC.CONTENT_TYPE_FILES:
if subject_account_key is None:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_FILE_DELETE_PETITIONS
info_type = None

elif content_type == HC.CONTENT_TYPE_MAPPINGS:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_MAPPING_DELETE_PETITIONS

elif content_type == HC.CONTENT_TYPE_TAG_PARENTS:

if status == HC.CONTENT_STATUS_PENDING:
if content_type == HC.CONTENT_TYPE_FILES:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_PARENT_ADD_PETITIONS
info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_FILE_DELETE_PETITIONS

else:
elif content_type == HC.CONTENT_TYPE_MAPPINGS:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_PARENT_DELETE_PETITIONS
info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_MAPPING_DELETE_PETITIONS

elif content_type == HC.CONTENT_TYPE_TAG_PARENTS:

if status == HC.CONTENT_STATUS_PENDING:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_PARENT_ADD_PETITIONS

else:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_PARENT_DELETE_PETITIONS

elif content_type == HC.CONTENT_TYPE_TAG_SIBLINGS:

if status == HC.CONTENT_STATUS_PENDING:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_SIBLING_ADD_PETITIONS

else:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_SIBLING_DELETE_PETITIONS

elif content_type == HC.CONTENT_TYPE_TAG_SIBLINGS:

if status == HC.CONTENT_STATUS_PENDING:
if info_type is not None:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_SIBLING_ADD_PETITIONS
self._Execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, info_type ) ).fetchone()

else:

info_type = HC.SERVICE_INFO_NUM_ACTIONABLE_SIBLING_DELETE_PETITIONS

if info_type is not None:

self._Execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, info_type ) ).fetchone()

raise
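When the petition fetch comes back empty, the cached petition count in `service_info` is evidently stale, so the new except-branch drops that row before re-raising; the next count request then recomputes it. A small sketch of that invalidate-then-regenerate cache pattern (generic sqlite with a hypothetical `regenerate` callback, not hydrus's actual helpers):

```python
import sqlite3

db = sqlite3.connect( ':memory:' )
db.execute( 'CREATE TABLE service_info ( service_id INTEGER, info_type INTEGER, info INTEGER );' )
db.execute( 'INSERT INTO service_info VALUES ( 1, 7, 99 );' )  # a stale cached count

def get_cached_count( service_id, info_type, regenerate ):
    
    result = db.execute( 'SELECT info FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, info_type ) ).fetchone()
    
    if result is None:
        
        # cache miss: recompute and store, as the regenerate-specific call does above
        info = regenerate()
        db.execute( 'INSERT INTO service_info VALUES ( ?, ?, ? );', ( service_id, info_type, info ) )
        return info
        
    
    return result[ 0 ]

def invalidate( service_id, info_type ):
    
    # the new except-branch: drop the stale row so the next read recounts
    db.execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, info_type ) )

invalidate( 1, 7 )
fresh = get_cached_count( 1, 7, regenerate = lambda: 0 )
```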
@@ -3106,6 +3159,120 @@ class DB( HydrusDB.HydrusDB ):

return service_hash_ids

+ def _RepositoryGetServiceInfoSpecificForAccount( self, service_id: int, info_type: int, account_id: int ):
+
+     service_name = self._GetServiceName( service_id )
+
+     ( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id )
+     ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )
+     ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )
+     ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )
+     ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )
+
+     if info_type == HC.SERVICE_INFO_NUM_FILES:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( current_files_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_DELETED_FILES:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( deleted_files_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_PENDING_FILES:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( pending_files_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_PETITIONED_FILES:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( petitioned_files_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_ACTIONABLE_FILE_ADD_PETITIONS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT reason_id FROM {} WHERE account_id = ? );'.format( pending_files_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_ACTIONABLE_FILE_DELETE_PETITIONS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT reason_id FROM {} WHERE account_id = ? );'.format( petitioned_files_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_MAPPINGS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( current_mappings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_DELETED_MAPPINGS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( deleted_mappings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_PENDING_MAPPINGS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( pending_mappings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_PETITIONED_MAPPINGS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( petitioned_mappings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_ACTIONABLE_MAPPING_ADD_PETITIONS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT master_tag_id, reason_id FROM {} WHERE account_id = ? );'.format( pending_mappings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_ACTIONABLE_MAPPING_DELETE_PETITIONS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT service_tag_id, reason_id FROM {} WHERE account_id = ? );'.format( petitioned_mappings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_TAG_SIBLINGS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( current_tag_siblings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_DELETED_TAG_SIBLINGS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( deleted_tag_siblings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_PENDING_TAG_SIBLINGS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( pending_tag_siblings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_PETITIONED_TAG_SIBLINGS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( petitioned_tag_siblings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_ACTIONABLE_SIBLING_ADD_PETITIONS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT reason_id FROM {} WHERE account_id = ? EXCEPT SELECT DISTINCT reason_id FROM {} WHERE account_id = ? );'.format( pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ), ( account_id, account_id ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_ACTIONABLE_SIBLING_DELETE_PETITIONS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT reason_id FROM {} WHERE account_id = ? );'.format( petitioned_tag_siblings_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_TAG_PARENTS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( current_tag_parents_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_DELETED_TAG_PARENTS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( deleted_tag_parents_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_PENDING_TAG_PARENTS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( pending_tag_parents_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_PETITIONED_TAG_PARENTS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {} WHERE account_id = ?;'.format( petitioned_tag_parents_table_name ), ( account_id, ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_ACTIONABLE_PARENT_ADD_PETITIONS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT reason_id FROM {} WHERE account_id = ? EXCEPT SELECT DISTINCT reason_id FROM {} WHERE account_id = ? );'.format( pending_tag_parents_table_name, petitioned_tag_parents_table_name ), ( account_id, account_id ) ).fetchone()
+     elif info_type == HC.SERVICE_INFO_NUM_ACTIONABLE_PARENT_DELETE_PETITIONS:
+         ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT reason_id FROM {} WHERE account_id = ? );'.format( petitioned_tag_parents_table_name ), ( account_id, ) ).fetchone()
+     else:
+         raise Exception( 'Was asked to generate account-specific service info for an unsupported type: {}'.format( info_type ) )
+
+     return info

def _RepositoryGetServiceTagId( self, service_id, master_tag_id, timestamp ):

( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id )
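Most branches in the new function are plain per-account `COUNT(*)` filters; the 'actionable add petition' branches are subtler: a pend only counts as actionable if its reason is not also used by one of the same account's delete petitions, hence the `EXCEPT`. A toy sqlite run of that query shape (invented table names and data, same structure):

```python
import sqlite3

db = sqlite3.connect( ':memory:' )
db.execute( 'CREATE TABLE pending_tag_siblings ( account_id INTEGER, reason_id INTEGER );' )
db.execute( 'CREATE TABLE petitioned_tag_siblings ( account_id INTEGER, reason_id INTEGER );' )

# account 5 pends under reasons 1 and 2, and petitions deletes under reason 2,
# so only reason 1 survives the EXCEPT as a pure 'add' petition
db.executemany( 'INSERT INTO pending_tag_siblings VALUES ( ?, ? );', [ ( 5, 1 ), ( 5, 2 ) ] )
db.execute( 'INSERT INTO petitioned_tag_siblings VALUES ( ?, ? );', ( 5, 2 ) )

( num_actionable_adds, ) = db.execute(
    'SELECT COUNT( * ) FROM ( SELECT DISTINCT reason_id FROM pending_tag_siblings WHERE account_id = ? EXCEPT SELECT DISTINCT reason_id FROM petitioned_tag_siblings WHERE account_id = ? );',
    ( 5, 5 )
).fetchone()
```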
@@ -3130,11 +3297,18 @@ class DB( HydrusDB.HydrusDB ):

- def _RepositoryGetTagParentPend( self, service_id ):
+ def _RepositoryGetTagParentPend( self, service_id, account_id = None ):

( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

- result = self._Execute( 'SELECT DISTINCT account_id as a1, reason_id as r1 FROM {} WHERE 1 NOT IN ( SELECT 1 FROM {} WHERE account_id = a1 AND reason_id = r1 ) LIMIT 100;'.format( pending_tag_parents_table_name, petitioned_tag_parents_table_name ) ).fetchall()
+ if account_id is None:
+     result = self._Execute( 'SELECT DISTINCT account_id as a1, reason_id as r1 FROM {} WHERE 1 NOT IN ( SELECT 1 FROM {} WHERE account_id = a1 AND reason_id = r1 ) LIMIT 100;'.format( pending_tag_parents_table_name, petitioned_tag_parents_table_name ) ).fetchall()
+ else:
+     result = self._Execute( 'SELECT DISTINCT account_id as a1, reason_id as r1 FROM {} WHERE account_id = ? AND 1 NOT IN ( SELECT 1 FROM {} WHERE account_id = a1 AND reason_id = r1 ) LIMIT 100;'.format( pending_tag_parents_table_name, petitioned_tag_parents_table_name ), ( account_id, ) ).fetchall()

if len( result ) == 0:

@@ -3171,11 +3345,18 @@ class DB( HydrusDB.HydrusDB ):

return HydrusNetwork.Petition( petitioner_account, reason, actions_and_contents )

- def _RepositoryGetTagParentPetition( self, service_id ):
+ def _RepositoryGetTagParentPetition( self, service_id, account_id = None ):

( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

- result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_parents_table_name + ' LIMIT 100;' ).fetchall()
+ if account_id is None:
+     result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM {} LIMIT 100;'.format( petitioned_tag_parents_table_name ) ).fetchall()
+ else:
+     result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM {} WHERE account_id = ? LIMIT 100;'.format( petitioned_tag_parents_table_name ), ( account_id, ) ).fetchall()

if len( result ) == 0:
@@ -3242,11 +3423,18 @@ class DB( HydrusDB.HydrusDB ):

return HydrusNetwork.Petition( petitioner_account, reason, actions_and_contents )

- def _RepositoryGetTagSiblingPend( self, service_id ):
+ def _RepositoryGetTagSiblingPend( self, service_id, account_id = None ):

( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

- result = self._Execute( 'SELECT DISTINCT account_id as a1, reason_id as r1 FROM {} WHERE 1 NOT IN ( SELECT 1 FROM {} WHERE account_id = a1 AND reason_id = r1 ) LIMIT 100;'.format( pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) ).fetchall()
+ if account_id is None:
+     result = self._Execute( 'SELECT DISTINCT account_id as a1, reason_id as r1 FROM {} WHERE 1 NOT IN ( SELECT 1 FROM {} WHERE account_id = a1 AND reason_id = r1 ) LIMIT 100;'.format( pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) ).fetchall()
+ else:
+     result = self._Execute( 'SELECT DISTINCT account_id as a1, reason_id as r1 FROM {} WHERE account_id = ? AND 1 NOT IN ( SELECT 1 FROM {} WHERE account_id = a1 AND reason_id = r1 ) LIMIT 100;'.format( pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ), ( account_id, ) ).fetchall()

if len( result ) == 0:

@@ -3283,11 +3471,18 @@ class DB( HydrusDB.HydrusDB ):

return HydrusNetwork.Petition( petitioner_account, reason, actions_and_contents )

- def _RepositoryGetTagSiblingPetition( self, service_id ):
+ def _RepositoryGetTagSiblingPetition( self, service_id, account_id = None ):

( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

- result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_siblings_table_name + ' LIMIT 100;' ).fetchall()
+ if account_id is None:
+     result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM {} LIMIT 100;'.format( petitioned_tag_siblings_table_name ) ).fetchall()
+ else:
+     result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM {} WHERE account_id = ? LIMIT 100;'.format( petitioned_tag_siblings_table_name ), ( account_id, ) ).fetchall()

if len( result ) == 0:
@@ -3306,7 +3501,7 @@ class DB( HydrusDB.HydrusDB ):

#

- pairs = self._Execute( 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + petitioned_tag_siblings_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()
+ pairs = self._Execute( 'SELECT bad_service_tag_id, good_service_tag_id FROM {} WHERE account_id = ? AND reason_id = ?;'.format( petitioned_tag_siblings_table_name ), ( petitioner_account_id, reason_id ) ).fetchall()

contents = []
@@ -30,6 +30,8 @@ class HydrusServiceRestricted( HydrusServer.HydrusService ):

root.putChild( b'account_info', ServerServerResources.HydrusResourceRestrictedAccountInfo( self._service, HydrusServer.REMOTE_DOMAIN ) )

+ root.putChild( b'account_key_from_content', ServerServerResources.HydrusResourceRestrictedAccountKeyFromContent( self._service, HydrusServer.REMOTE_DOMAIN ) )

root.putChild( b'account_types', ServerServerResources.HydrusResourceRestrictedAccountTypes( self._service, HydrusServer.REMOTE_DOMAIN ) )

root.putChild( b'options_nullification_period', ServerServerResources.HydrusResourceRestrictedOptionsModifyNullificationPeriod( self._service, HydrusServer.REMOTE_DOMAIN ) )
@@ -465,21 +465,7 @@ class HydrusResourceRestrictedAccountInfo( HydrusResourceRestrictedAccountModify

def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):

- if 'subject_identifier' not in request.parsed_request_args:
-
-     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )
-
- subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
-
- if subject_identifier.HasAccountKey():
-
-     subject_account_key = subject_identifier.GetAccountKey()
-
- else:
-
-     raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )

+ subject_account_key = request.parsed_request_args[ 'subject_account_key' ]

subject_account = HG.server_controller.Read( 'account', self._service_key, subject_account_key )
@@ -492,25 +478,26 @@ class HydrusResourceRestrictedAccountInfo( HydrusResourceRestrictedAccountModify

return response_context

+ class HydrusResourceRestrictedAccountKeyFromContent( HydrusResourceRestrictedAccountModify ):
+
+     def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):
+
+         subject_content = request.parsed_request_args[ 'subject_content' ]
+
+         subject_account_key = HG.server_controller.Read( 'account_key_from_content', self._service_key, subject_content )
+
+         body = HydrusNetworkVariableHandling.DumpHydrusArgsToNetworkBytes( { 'subject_account_key' : subject_account_key } )
+
+         response_context = HydrusServerResources.ResponseContext( 200, body = body )
+
+         return response_context

class HydrusResourceRestrictedAccountModifyAccountType( HydrusResourceRestrictedAccountModify ):

def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):

- if 'subject_identifier' not in request.parsed_request_args:
-
-     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )
-
- subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
-
- if subject_identifier.HasAccountKey():
-
-     subject_account_key = subject_identifier.GetAccountKey()
-
- else:
-
-     raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )

+ subject_account_key = request.parsed_request_args[ 'subject_account_key' ]

if 'account_type_key' not in request.parsed_request_args:
@@ -530,21 +517,7 @@ class HydrusResourceRestrictedAccountModifyBan( HydrusResourceRestrictedAccountM

def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):

- if 'subject_identifier' not in request.parsed_request_args:
-
-     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )
-
- subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
-
- if subject_identifier.HasAccountKey():
-
-     subject_account_key = subject_identifier.GetAccountKey()
-
- else:
-
-     raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )

+ subject_account_key = request.parsed_request_args[ 'subject_account_key' ]

if 'reason' not in request.parsed_request_args:
@@ -586,20 +559,26 @@ class HydrusResourceRestrictedAccountModifyExpires( HydrusResourceRestrictedAcco

def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):

- if 'subject_identifier' not in request.parsed_request_args:
-
-     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )
-
- subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
-
- if subject_identifier.HasAccountKey():
-
-     subject_account_key = subject_identifier.GetAccountKey()
-
- else:
-
-     raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )

+ if 'subject_account_key' in request.parsed_request_args:
+
+     subject_account_key = request.parsed_request_args[ 'subject_account_key' ]
+
+ elif 'subject_identifier' in request.parsed_request_args:
+
+     subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
+
+     if subject_identifier.HasAccountKey():
+
+         subject_account_key = subject_identifier.GetAccountKey()
+
+     else:
+
+         raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )
+
+ else:
+
+     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )

if 'expires' not in request.parsed_request_args:
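All of these modify endpoints now accept a direct `subject_account_key` and only fall back to the older `subject_identifier` object. A sketch of that precedence logic, with a stand-in identifier class (the real one lives in hydrus's network objects; names here are illustrative):

```python
# Stand-in for hydrus's account identifier object; hypothetical, for illustration.
class SubjectIdentifier:
    
    def __init__( self, account_key = None ):
        
        self._account_key = account_key
        
    
    def HasAccountKey( self ):
        
        return self._account_key is not None
        
    
    def GetAccountKey( self ):
        
        return self._account_key
        

def get_subject_account_key( parsed_request_args ):
    
    if 'subject_account_key' in parsed_request_args:
        
        # preferred modern parameter: the key is given directly
        return parsed_request_args[ 'subject_account_key' ]
        
    elif 'subject_identifier' in parsed_request_args:
        
        # legacy object-based parameter
        subject_identifier = parsed_request_args[ 'subject_identifier' ]
        
        if subject_identifier.HasAccountKey():
            
            return subject_identifier.GetAccountKey()
            
        else:
            
            raise ValueError( "The subject's account identifier did not include an account id!" )
            
        
    else:
        
        raise ValueError( 'I was expecting an account id, but did not get one!' )
        
```

Passing neither parameter raises, as in the diff's outer `else` branch.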
@@ -630,20 +609,26 @@ class HydrusResourceRestrictedAccountModifySetMessage( HydrusResourceRestrictedA

def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):

- if 'subject_identifier' not in request.parsed_request_args:
-
-     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )
-
- subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
-
- if subject_identifier.HasAccountKey():
-
-     subject_account_key = subject_identifier.GetAccountKey()
-
- else:
-
-     raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )

+ if 'subject_account_key' in request.parsed_request_args:
+
+     subject_account_key = request.parsed_request_args[ 'subject_account_key' ]
+
+ elif 'subject_identifier' in request.parsed_request_args:
+
+     subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
+
+     if subject_identifier.HasAccountKey():
+
+         subject_account_key = subject_identifier.GetAccountKey()
+
+     else:
+
+         raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )
+
+ else:
+
+     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )

if 'message' not in request.parsed_request_args:
@@ -669,20 +654,26 @@ class HydrusResourceRestrictedAccountModifyUnban( HydrusResourceRestrictedAccoun

def _threadDoPOSTJob( self, request: HydrusServerRequest.HydrusRequest ):

- if 'subject_identifier' not in request.parsed_request_args:
-
-     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )
-
- subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
-
- if subject_identifier.HasAccountKey():
-
-     subject_account_key = subject_identifier.GetAccountKey()
-
- else:
-
-     raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )

+ if 'subject_account_key' in request.parsed_request_args:
+
+     subject_account_key = request.parsed_request_args[ 'subject_account_key' ]
+
+ elif 'subject_identifier' in request.parsed_request_args:
+
+     subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
+
+     if subject_identifier.HasAccountKey():
+
+         subject_account_key = subject_identifier.GetAccountKey()
+
+     else:
+
+         raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id!' )
+
+ else:
+
+     raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )

HG.server_controller.WriteSynchronous( 'modify_account_unban', self._service_key, request.hydrus_account, subject_account_key )
@@ -696,35 +687,45 @@ class HydrusResourceRestrictedAccountOtherAccount( HydrusResourceRestrictedAccou

def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):

if 'subject_identifier' not in request.parsed_request_args:
subject_account_key = None

if 'subject_identifier' in request.parsed_request_args:

raise HydrusExceptions.BadRequestException( 'I was expecting an account identifier for the subject, but did not get one!' )
subject_identifier = request.parsed_request_args[ 'subject_identifier' ]

if subject_identifier.HasAccountKey():

subject_account_key = subject_identifier.GetAccountKey()

elif subject_identifier.HasContent():

subject_content = subject_identifier.GetContent()

subject_account_key = HG.server_controller.Read( 'account_key_from_content', self._service_key, subject_content )

else:

raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id or content!' )

subject_identifier = request.parsed_request_args[ 'subject_identifier' ]
if 'subject_account_key' in request.parsed_request_args:

subject_account_key = request.parsed_request_args[ 'subject_account_key' ]

if subject_identifier.HasAccountKey():
if subject_account_key is None:

subject_account_key = subject_identifier.GetAccountKey()
raise HydrusExceptions.BadRequestException( 'I was expecting an account id, but did not get one!' )

try:

subject_account = HG.server_controller.Read( 'account', self._service_key, subject_account_key )

except HydrusExceptions.InsufficientCredentialsException as e:

raise HydrusExceptions.NotFoundException( e )

try:

elif subject_identifier.HasContent():
subject_account = HG.server_controller.Read( 'account', self._service_key, subject_account_key )

subject_content = subject_identifier.GetContent()
except HydrusExceptions.InsufficientCredentialsException as e:

subject_account = HG.server_controller.Read( 'account_from_content', self._service_key, subject_content )

else:

raise HydrusExceptions.BadRequestException( 'The subject\'s account identifier did not include an account id or content!' )
raise HydrusExceptions.NotFoundException( e )

body = HydrusNetworkVariableHandling.DumpHydrusArgsToNetworkBytes( { 'account' : subject_account } )
@@ -906,8 +907,9 @@ class HydrusResourceRestrictedNumPetitions( HydrusResourceRestricted ):

def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):

# cache this
petition_count_info = HG.server_controller.Read( 'num_petitions', self._service_key, request.hydrus_account )
subject_account_key = request.parsed_request_args.GetValue( 'subject_account_key', bytes, none_on_missing = True )

petition_count_info = HG.server_controller.Read( 'num_petitions', self._service_key, request.hydrus_account, subject_account_key = subject_account_key )

body = HydrusNetworkVariableHandling.DumpHydrusArgsToNetworkBytes( { 'num_petitions' : petition_count_info } )

@@ -949,12 +951,12 @@ class HydrusResourceRestrictedPetition( HydrusResourceRestricted ):

def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):

# rewangle this to take an id from the summary list. probably ( account_key, reason_id )
# and combine petitioned and pending into the same petition
subject_account_key = request.parsed_request_args.GetValue( 'subject_account_key', bytes, none_on_missing = True )
# add reason to here some time, for when we eventually select petitions from a summary list of ( account, reason, size ) stuff
content_type = request.parsed_request_args[ 'content_type' ]
status = request.parsed_request_args[ 'status' ]

petition = HG.server_controller.Read( 'petition', self._service_key, request.hydrus_account, content_type, status )
petition = HG.server_controller.Read( 'petition', self._service_key, request.hydrus_account, content_type, status, subject_account_key = subject_account_key )

body = HydrusNetworkVariableHandling.DumpHydrusArgsToNetworkBytes( { 'petition' : petition } )
@@ -44,6 +44,8 @@ class TestClientAPI( unittest.TestCase ):

@classmethod
def setUpClass( cls ):

cls.maxDiff = None

cls._client_api = ClientServices.GenerateService( CC.CLIENT_API_SERVICE_KEY, HC.CLIENT_API_SERVICE, 'client api' )
cls._client_api_cors = ClientServices.GenerateService( CC.CLIENT_API_SERVICE_KEY, HC.CLIENT_API_SERVICE, 'client api' )

@@ -293,7 +295,7 @@ class TestClientAPI( unittest.TestCase ):

search_tag_filter = HydrusTags.TagFilter()

search_tag_filter.SetRule( '', HC.FILTER_BLACKLIST )
search_tag_filter.SetRule( ':', HC.FILTER_BLACKLIST )
search_tag_filter.SetRule( ' :', HC.FILTER_BLACKLIST )
search_tag_filter.SetRule( 'green', HC.FILTER_WHITELIST )

api_permissions.SetSearchTagFilter( search_tag_filter )

@@ -346,13 +348,13 @@ class TestClientAPI( unittest.TestCase ):

for request_type in ( 'header', 'get' ):

if request_type == 'header':
if request_type == 'header' :

headers = { key_name : key_hex }

connection.request( 'GET', '/verify_access_key', headers = headers )

elif request_type == 'get':
elif request_type == 'get' :

connection.request( 'GET', '/verify_access_key?{}={}'.format( key_name, key_hex ) )
@@ -634,60 +636,89 @@ class TestClientAPI( unittest.TestCase ):

should_break = { set_up_permissions[ 'add_urls' ], set_up_permissions[ 'manage_cookies' ] }

expected_answer = {
'local_tags': [
'local_tags' : [
{
'name': 'my tags',
'service_key': '6c6f63616c2074616773'
'name' : 'my tags',
'service_key' : '6c6f63616c2074616773',
'type': 5,
'type_pretty': 'local tag service'
}
],
'tag_repositories': [
'tag_repositories' : [
{
'name': 'example tag repo',
'service_key': HG.test_controller.example_tag_repo_service_key.hex()
'name' : 'example tag repo',
'service_key' : HG.test_controller.example_tag_repo_service_key.hex(),
'type': 0,
'type_pretty': 'hydrus tag repository'
}
],
'local_files': [
'local_files' : [
{
'name': 'my files',
'service_key': '6c6f63616c2066696c6573'
'name' : 'my files',
'service_key' : '6c6f63616c2066696c6573',
'type': 2,
'type_pretty': 'local file domain'
}
],
'local_updates': [
'local_updates' : [
{
'name': 'repository updates',
'service_key': '7265706f7369746f72792075706461746573'
'name' : 'repository updates',
'service_key' : '7265706f7369746f72792075706461746573',
'type': 20,
'type_pretty': 'local update file domain'
}
],
'file_repositories': [
'file_repositories' : [
{
'name': 'example file repo 1',
'service_key': HG.test_controller.example_file_repo_service_key_1.hex(),
'type': 1,
'type_pretty': 'hydrus file repository'},
{
'name': 'example file repo 2',
'service_key': HG.test_controller.example_file_repo_service_key_2.hex(),
'type': 1,
'type_pretty': 'hydrus file repository'
}
],
'all_local_files': [
'all_local_files' : [
{
'name': 'all local files',
'service_key': '616c6c206c6f63616c2066696c6573'
'name' : 'all local files',
'service_key' : '616c6c206c6f63616c2066696c6573',
'type' : 15,
'type_pretty' : 'virtual combined local file service'
}
],
'all_local_media': [
'all_local_media' : [
{
'name': 'all my files',
'service_key': '616c6c206c6f63616c206d65646961'
'name' : 'all my files',
'service_key' : '616c6c206c6f63616c206d65646961',
'type': 21,
'type_pretty': 'virtual combined local media service'
}
],
'all_known_files': [
'all_known_files' : [
{
'name': 'all known files',
'service_key': '616c6c206b6e6f776e2066696c6573'
'name' : 'all known files',
'service_key' : '616c6c206b6e6f776e2066696c6573',
'type' : 11,
'type_pretty' : 'virtual combined file service'
}
],
'all_known_tags': [
'all_known_tags' : [
{
'name': 'all known tags',
'service_key': '616c6c206b6e6f776e2074616773'
'name' : 'all known tags',
'service_key' : '616c6c206b6e6f776e2074616773',
'type' : 10,
'type_pretty' : 'virtual combined tag service'
}
],
'trash': [
'trash' : [
{
'name': 'trash',
'service_key': '7472617368'
'name' : 'trash',
'service_key' : '7472617368',
'type': 14,
'type_pretty': 'local trash file domain'
}
]
}
@@ -2972,8 +3003,8 @@ class TestClientAPI( unittest.TestCase ):

sorted_urls = sorted( urls )

random_file_service_hex_current = HydrusData.GenerateKey()
random_file_service_hex_deleted = HydrusData.GenerateKey()
random_file_service_hex_current = HG.test_controller.example_file_repo_service_key_1
random_file_service_hex_deleted = HG.test_controller.example_file_repo_service_key_2

current_import_timestamp = 500
ipfs_import_timestamp = 123456

@@ -3085,13 +3116,19 @@ class TestClientAPI( unittest.TestCase ):

'file_services' : {
'current' : {
random_file_service_hex_current.hex() : {
'time_imported' : current_import_timestamp
'time_imported' : current_import_timestamp,
'name' : HG.test_controller.services_manager.GetName( random_file_service_hex_current ),
'type' : HG.test_controller.services_manager.GetServiceType( random_file_service_hex_current ),
'type_pretty' : HC.service_string_lookup[ HG.test_controller.services_manager.GetServiceType( random_file_service_hex_current ) ]
}
},
'deleted' : {
random_file_service_hex_deleted.hex() : {
'time_deleted' : deleted_deleted_timestamp,
'time_imported' : deleted_import_timestamp
'time_imported' : deleted_import_timestamp,
'name' : HG.test_controller.services_manager.GetName( random_file_service_hex_deleted ),
'type' : HG.test_controller.services_manager.GetServiceType( random_file_service_hex_deleted ),
'type_pretty' : HC.service_string_lookup[ HG.test_controller.services_manager.GetServiceType( random_file_service_hex_deleted ) ]
}
}
},

@@ -3116,7 +3153,12 @@ class TestClientAPI( unittest.TestCase ):

for ( i_s_k, multihash ) in locations_manager.GetServiceFilenames().items():

metadata_row[ 'file_services' ][ 'current' ][ i_s_k.hex() ] = { 'time_imported' : ipfs_import_timestamp }
metadata_row[ 'file_services' ][ 'current' ][ i_s_k.hex() ] = {
'time_imported' : ipfs_import_timestamp,
'name' : HG.test_controller.services_manager.GetName( i_s_k ),
'type' : HG.test_controller.services_manager.GetServiceType( i_s_k ),
'type_pretty' : HC.service_string_lookup[ HG.test_controller.services_manager.GetServiceType( i_s_k ) ]
}

metadata_row[ 'ipfs_multihashes' ][ i_s_k.hex() ] = multihash

@@ -3126,11 +3168,11 @@ class TestClientAPI( unittest.TestCase ):

tags_dict = {}

real_tag_service_keys = services_manager.GetServiceKeys( HC.REAL_TAG_SERVICES )
tag_service_keys = services_manager.GetServiceKeys( HC.ALL_TAG_SERVICES )
service_keys_to_types = { service.GetServiceKey() : service.GetServiceType() for service in services_manager.GetServices() }
service_keys_to_names = services_manager.GetServiceKeysToNames()

for tag_service_key in real_tag_service_keys:
for tag_service_key in tag_service_keys:

storage_statuses_to_tags = tags_manager.GetStatusesToTags( tag_service_key, ClientTags.TAG_DISPLAY_STORAGE )

@@ -3208,8 +3250,8 @@ class TestClientAPI( unittest.TestCase ):

detailed_known_urls_metadata_row = dict( metadata_row )

detailed_known_urls_metadata_row[ 'detailed_known_urls' ] = [
{'normalised_url': 'https://gelbooru.com/index.php?id=4841557&page=post&s=view', 'url_type': 0, 'url_type_string': 'post url', 'match_name': 'gelbooru file page', 'can_parse': True},
{'normalised_url': 'https://img2.gelbooru.com//images/80/c8/80c8646b4a49395fb36c805f316c49a9.jpg', 'url_type': 5, 'url_type_string': 'unknown url', 'match_name': 'unknown url', 'can_parse': False, 'cannot_parse_reason' : 'unknown url class'}
{'normalised_url' : 'https://gelbooru.com/index.php?id=4841557&page=post&s=view', 'url_type' : 0, 'url_type_string' : 'post url', 'match_name' : 'gelbooru file page', 'can_parse' : True},
{'normalised_url' : 'https://img2.gelbooru.com//images/80/c8/80c8646b4a49395fb36c805f316c49a9.jpg', 'url_type' : 5, 'url_type_string' : 'unknown url', 'match_name' : 'unknown url', 'can_parse' : False, 'cannot_parse_reason' : 'unknown url class'}
]

detailed_known_urls_metadata.append( detailed_known_urls_metadata_row )
@@ -611,7 +611,9 @@ class TestClientDBDuplicates( unittest.TestCase ):

self.assertEqual( len( file_duplicate_types_to_counts ), 2 )

self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], self._get_group_potential_count( file_duplicate_types_to_counts ) )
result = self._get_group_potential_count( file_duplicate_types_to_counts )

self.assertIn( file_duplicate_types_to_counts[ HC.DUPLICATE_POTENTIAL ], ( result, result -1 ) )
self.assertEqual( file_duplicate_types_to_counts[ HC.DUPLICATE_MEMBER ], len( self._our_main_dupe_group_hashes ) - 1 )

result = self._read( 'file_duplicate_hashes', ClientLocation.LocationContext.STATICCreateSimple( CC.LOCAL_FILE_SERVICE_KEY ), self._king_hash, HC.DUPLICATE_KING )

@@ -228,6 +228,8 @@ class Controller( object ):

self._param_read_responses = {}

self.example_file_repo_service_key_1 = HydrusData.GenerateKey()
self.example_file_repo_service_key_2 = HydrusData.GenerateKey()
self.example_tag_repo_service_key = HydrusData.GenerateKey()
self.example_ipfs_service_key = HydrusData.GenerateKey()

@@ -241,6 +243,8 @@ class Controller( object ):

services.append( ClientServices.GenerateService( CC.LOCAL_UPDATE_SERVICE_KEY, HC.LOCAL_FILE_UPDATE_DOMAIN, 'repository updates' ) )
services.append( ClientServices.GenerateService( CC.TRASH_SERVICE_KEY, HC.LOCAL_FILE_TRASH_DOMAIN, 'trash' ) )
services.append( ClientServices.GenerateService( CC.DEFAULT_LOCAL_TAG_SERVICE_KEY, HC.LOCAL_TAG, 'my tags' ) )
services.append( ClientServices.GenerateService( self.example_file_repo_service_key_1, HC.FILE_REPOSITORY, 'example file repo 1' ) )
services.append( ClientServices.GenerateService( self.example_file_repo_service_key_2, HC.FILE_REPOSITORY, 'example file repo 2' ) )
services.append( ClientServices.GenerateService( self.example_tag_repo_service_key, HC.TAG_REPOSITORY, 'example tag repo' ) )
services.append( ClientServices.GenerateService( CC.COMBINED_TAG_SERVICE_KEY, HC.COMBINED_TAG, 'all known tags' ) )
services.append( ClientServices.GenerateService( CC.COMBINED_FILE_SERVICE_KEY, HC.COMBINED_FILE, 'all known files' ) )
@@ -214,17 +214,17 @@ class TestServer( unittest.TestCase ):

self.assertEqual( response[ 'ip' ], ip )
self.assertEqual( response[ 'timestamp' ], timestamp )

# account from hash
# account key from file

subject_content = HydrusNetwork.Content( content_type = HC.CONTENT_TYPE_FILES, content_data = hash )
test_hash = HydrusData.GenerateKey()

subject_account_identifier = HydrusNetwork.AccountIdentifier( content = subject_content )
HG.test_controller.SetRead( 'account_key_from_content', self._account.GetAccountKey() )

HG.test_controller.SetRead( 'account', self._account )
content = HydrusNetwork.Content( content_type = HC.CONTENT_TYPE_FILES, content_data = ( test_hash, ) )

response = service.Request( HC.GET, 'other_account', { 'subject_identifier' : subject_account_identifier } )
response = service.Request( HC.GET, 'account_key_from_content', { 'subject_content' : content } )

self.assertEqual( repr( response[ 'account' ] ), repr( self._account ) )
self.assertEqual( repr( response[ 'subject_account_key' ] ), repr( self._account.GetAccountKey() ) )

# thumbnail

@@ -517,33 +517,7 @@ class TestServer( unittest.TestCase ):

HG.test_controller.SetRead( 'account', self._account )

subject_account_identifier = HydrusNetwork.AccountIdentifier( account_key = self._account.GetAccountKey() )

response = service.Request( HC.GET, 'other_account', { 'subject_identifier' : subject_account_identifier } )

self.assertEqual( repr( response[ 'account' ] ), repr( self._account ) )

# account from file

HG.test_controller.SetRead( 'account_from_content', self._account )

content = HydrusNetwork.Content( content_type = HC.CONTENT_TYPE_FILES, content_data = ( HydrusData.GenerateKey(), ) )

subject_account_identifier = HydrusNetwork.AccountIdentifier( content = content )

response = service.Request( HC.GET, 'other_account', { 'subject_identifier' : subject_account_identifier } )

self.assertEqual( repr( response[ 'account' ] ), repr( self._account ) )

# account from mapping

HG.test_controller.SetRead( 'account_from_content', self._account )

content = HydrusNetwork.Content( content_type = HC.CONTENT_TYPE_MAPPING, content_data = ( 'hello', HydrusData.GenerateKey() ) )

subject_account_identifier = HydrusNetwork.AccountIdentifier( content = content )

response = service.Request( HC.GET, 'other_account', { 'subject_identifier' : subject_account_identifier } )
response = service.Request( HC.GET, 'other_account', { 'subject_account_key' : self._account.GetAccountKey() } )

self.assertEqual( repr( response[ 'account' ] ), repr( self._account ) )

@@ -553,34 +527,10 @@ class TestServer( unittest.TestCase ):

HG.test_controller.SetRead( 'account_info', account_info )

subject_account_identifier = HydrusNetwork.AccountIdentifier( account_key = HydrusData.GenerateKey() )

response = service.Request( HC.GET, 'account_info', { 'subject_identifier' : subject_account_identifier } )
response = service.Request( HC.GET, 'account_info', { 'subject_account_key' : HydrusData.GenerateKey() } )

self.assertEqual( response[ 'account_info' ], account_info )

#

content = HydrusNetwork.Content( content_type = HC.CONTENT_TYPE_FILES, content_data = ( HydrusData.GenerateKey(), ) )

subject_account_identifier = HydrusNetwork.AccountIdentifier( content = content )

with self.assertRaises( HydrusExceptions.BadRequestException ):

# can only do it with an account id
response = service.Request( HC.GET, 'account_info', { 'subject_identifier' : subject_account_identifier } )

content = HydrusNetwork.Content( content_type = HC.CONTENT_TYPE_MAPPING, content_data = ( 'hello', HydrusData.GenerateKey() ) )

subject_account_identifier = HydrusNetwork.AccountIdentifier( content = content )

with self.assertRaises( HydrusExceptions.BadRequestException ):

# can only do it with an account id
response = service.Request( HC.GET, 'account_info', { 'subject_identifier' : subject_account_identifier } )

# account_types

account_types = [ HydrusNetwork.AccountType.GenerateAdminAccountType( service.GetServiceType() ) ]

@@ -661,20 +611,18 @@ class TestServer( unittest.TestCase ):

def _test_tag_repo( self, service ):

# account from tag
# account from mapping

test_tag = 'character:samus aran'
test_hash = HydrusData.GenerateKey()

subject_content = HydrusNetwork.Content( content_type = HC.CONTENT_TYPE_MAPPING, content_data = ( test_tag, test_hash ) )
HG.test_controller.SetRead( 'account_key_from_content', self._account.GetAccountKey() )

subject_account_identifier = HydrusNetwork.AccountIdentifier( content = subject_content )
content = HydrusNetwork.Content( content_type = HC.CONTENT_TYPE_MAPPING, content_data = ( test_tag, test_hash ) )

HG.test_controller.SetRead( 'account', self._account )
response = service.Request( HC.GET, 'account_key_from_content', { 'subject_content' : content } )

response = service.Request( HC.GET, 'other_account', { 'subject_identifier' : subject_account_identifier } )

self.assertEqual( repr( response[ 'account' ] ), repr( self._account ) )
self.assertEqual( repr( response[ 'subject_account_key' ] ), repr( self._account.GetAccountKey() ) )

def test_repository_file( self ):

@@ -144,9 +144,9 @@ class TestServerDB( unittest.TestCase ):

#

result = self._read( 'account_from_content', self._tag_service_key, mapping_content )
result = self._read( 'account_key_from_content', self._tag_service_key, mapping_content )

self.assertEqual( result.GetAccountKey(), self._tag_service_regular_account.GetAccountKey() )
self.assertEqual( result, self._tag_service_regular_account.GetAccountKey() )

def _test_account_modification( self ):
@@ -1,10 +1,17 @@

@ECHO off

pushd "%~dp0"

IF NOT EXIST "venv\" (

SET /P gumpf=Sorry, you do not seem to have a venv!

popd

EXIT /B 1

)

start venv\Scripts\activate.bat
ECHO Type 'deactivate' to return.

CALL venv\Scripts\activate.bat

@@ -0,0 +1,44 @@

#!/bin/bash

pushd "$(dirname "$0")"

INSTALL_DIR="$(readlink -f .)"
DESKTOP_SOURCE_PATH=$INSTALL_DIR/static/hydrus.desktop
DESKTOP_DEST_PATH=$HOME/.local/share/applications/hydrus.desktop

echo "Install folder appears to be $INSTALL_DIR"

if [ ! -f "$DESKTOP_SOURCE_PATH" ]; then
echo "Sorry, I do not see the template file at $DESKTOP_SOURCE_PATH! Was it deleted, or this script moved?"
popd
exit 1
fi

if [ -f "$DESKTOP_DEST_PATH" ]; then

echo "You already have a hydrus.desktop file at $DESKTOP_DEST_PATH. Would you like to overwrite it? y/n "

else

echo "Create a hydrus.desktop file at $DESKTOP_DEST_PATH? y/n "

fi

read affirm

if [ $affirm = "y" ]; then
:
elif [ $affirm = "n" ]; then
popd
exit 0
else
echo "Sorry, did not understand that input!"
popd
exit 1
fi

sed -e "s#Exec=.*#Exec=${INSTALL_DIR}/client.sh#" -e "s#Icon=.*#Icon=${INSTALL_DIR}/static/hydrus.png#" $DESKTOP_SOURCE_PATH > $DESKTOP_DEST_PATH

echo "Done!"

popd
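The core step of the desktop-file installer above is the sed rewrite of the template's Exec= and Icon= lines. A minimal sketch of that substitution, assuming a hypothetical stand-in template under /tmp rather than the real static/hydrus.desktop:

```shell
#!/bin/bash
# Sketch of the sed substitution used by the installer, on a hypothetical template.
INSTALL_DIR=/opt/hydrus

# Stand-in for static/hydrus.desktop.
printf 'Exec=hydrus-client\nIcon=placeholder.png\n' > /tmp/hydrus.desktop.template

# Using '#' as the sed delimiter avoids escaping the '/' characters in the paths.
sed -e "s#Exec=.*#Exec=${INSTALL_DIR}/client.sh#" \
    -e "s#Icon=.*#Icon=${INSTALL_DIR}/static/hydrus.png#" \
    /tmp/hydrus.desktop.template > /tmp/hydrus.desktop

cat /tmp/hydrus.desktop
```

Run against the stand-in template, this produces an Exec line pointing at client.sh and an Icon line pointing at the static png, both under the chosen install directory.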
@@ -1,8 +1,13 @@

@ECHO off

pushd "%~dp0"

IF NOT EXIST "venv\" (

SET /P gumpf=You need to set up a venv! Check the running from source help for more info!

popd

EXIT /B 1

)

@@ -30,3 +35,5 @@ mkdocs build -d help

CALL venv\Scripts\deactivate.bat

SET /P done=Done!

popd

@@ -1,7 +1,10 @@

#!/bin/bash

pushd "$(dirname "$0")"

if [ ! -d "venv" ]; then
echo "You need to set up a venv! Check the running from source help for more info!"
popd
exit 1
fi

@@ -23,3 +26,5 @@ deactivate

echo "Done!"

read

popd

@@ -1,7 +1,10 @@

#!/bin/bash

pushd "$(dirname "$0")"

if [ ! -d "venv" ]; then
echo "You need to set up a venv! Check the running from source help for more info!"
popd
exit 1
fi

@@ -21,3 +24,5 @@ mkdocs build -d help

deactivate

echo "Done!"

popd
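Every script touched in this commit follows the same guard pattern: pushd into the script's own directory at the top, then popd on every exit path, success or failure, so the caller's working directory is always restored. A minimal sketch of the pattern, using a hypothetical temporary directory in place of the hydrus install folder:

```shell
#!/bin/bash
# Sketch of the pushd/popd guard pattern: every exit path restores the caller's directory.
start_dir="$(pwd)"
work_dir="$(mktemp -d)"

pushd "$work_dir" > /dev/null

# The same check the real scripts make; a fresh temp dir never has a venv.
if [ ! -d "venv" ]; then
    echo "no venv here"
    popd > /dev/null
fi

# Back where we started, regardless of which branch ran.
[ "$(pwd)" = "$start_dir" ] && echo "directory restored"
```

The real scripts pair this with `exit 1` after the popd, so a double-clicked script fails cleanly without leaving the shell stranded inside the install folder.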
@@ -1,9 +1,14 @@

@ECHO off

pushd "%~dp0"

where /q python
IF ERRORLEVEL 1 (

SET /P gumpf=You do not seem to have python installed. Please check the 'running from source' help.

popd

EXIT /B 1

)

@@ -77,6 +82,9 @@ CALL venv\Scripts\activate.bat

IF ERRORLEVEL 1 (

SET /P gumpf=The venv failed to activate, stopping now!

popd

EXIT /B 1

)

@@ -107,9 +115,15 @@ IF "%install_type%" == "s" (

CALL venv\Scripts\deactivate.bat

SET /P done=Done!

popd

EXIT /B 0

:parse_fail

SET /P done=Sorry, did not understand that input!

popd

EXIT /B 1
@@ -1,5 +1,7 @@

#!/bin/bash

pushd "$(dirname "$0")"

py_command=python3

type -P $py_command

@@ -41,6 +43,7 @@ elif [ $install_type = "a" ]; then

:
else
echo "Sorry, did not understand that input!"
popd
exit 1
fi

@@ -54,6 +57,7 @@ elif [ $install_type = "a" ]; then

:
else
echo "Sorry, did not understand that input!"
popd
exit 1
fi

@@ -67,10 +71,12 @@ elif [ $install_type = "a" ]; then

:
else
echo "Sorry, did not understand that input!"
popd
exit 1
fi
else
echo "Sorry, did not understand that input!"
popd
exit 1
fi

@@ -81,6 +87,7 @@ source venv/bin/activate

if [ $? -ne 0 ]; then
echo "The venv failed to activate, stopping now!"
popd
exit 1
fi

@@ -117,3 +124,5 @@ deactivate

echo "Done!"

read

popd
@@ -1,5 +1,7 @@

#!/bin/bash

pushd "$(dirname "$0")"

py_command=python3

type -P $py_command

@@ -54,6 +56,7 @@ elif [ $install_type = "a" ]; then

:
else
echo "Sorry, did not understand that input!"
popd
exit 1
fi

@@ -67,10 +70,12 @@ elif [ $install_type = "a" ]; then

:
else
echo "Sorry, did not understand that input!"
popd
exit 1
fi
else
echo "Sorry, did not understand that input!"
popd
exit 1
fi

@@ -81,6 +86,7 @@ source venv/bin/activate

if [ $? -ne 0 ]; then
echo "The venv failed to activate, stopping now!"
popd
exit 1
fi

@@ -115,3 +121,5 @@ deactivate

deactivate

echo "Done!"

popd
@@ -1,7 +1,7 @@

[Desktop Entry]
Version=1.0
Name=Hydrus Client
Comment=Danbooru-like image tagging and searching system for the desktop
Comment=Booru-like media management application
Exec=hydrus-client
Icon=/usr/lib/hydrus/static/hydrus_non-transparent.png
Terminal=false