parent b0a3e217bb
commit c681a1cff5

@@ -8,6 +8,45 @@
 <div class="content">
 <h3 id="changelog"><a href="#changelog">changelog</a></h3>
 <ul>
+<li><h3 id="version_451"><a href="#version_451">version 451</a></h3></li>
+<ul>
+<li>stupid anti-virus thing:</li>
+<li>we have had several more anti-virus false positives just recently. we discovered that at least one testbed used by these companies was testing the 'open html help' checkbox in the installer, which then launched Edge on the testbed, and then launched the Windows Update process for Edge and Skype, which was somehow interacting with UAC and thus considered suspicious activity owned by the hydrus installer process, lmao. thereafter, it seems the installer exe's DNS requests were somehow being cross-connected with the client.exe scan, as that was identified as connected with the installer. taking that checkbox out as a test produced a much cleaner scan. there is a limit to how much of this nonsense I will accommodate, but this week we are trying a release without that 'open help' link in the installer, let's see how it goes</li>
+<li>semi-related, I brushed up the install path message in the installer and clarified in the first-start welcome popup message that help->help will open the help</li>
+<li>.</li>
+<li>misc:</li>
+<li>I fixed a critical bug in tag sibling storage that occurred when a 'bad' tag's mapping was removed (e.g. 'delete A->B') and a new one added ('add A->C') in the same transaction--and in a heap of other fun situations besides--that mostly resulted in the newly added sibling being forgotten. the bug was worse when this happened on a local tag service via the manage siblings dialog. this problem is likely the cause of some of our weird sibling issues on clients that processed certain repository updates extremely quickly. I will keep investigating here for more issues and will trigger another sibling reset for everyone in the future</li>
+<li>the 'show some random pairs' button on the duplicates page is nicer--the 'did not find any pairs' notification is now a popup rather than an annoying error dialog, and when nothing is found, it also clears the page of thumbs. it also tries to guess if you are at the end of the current search, and if so, it will not do an auto re-fetch and will clear the page without producing the popup message</li>
+<li>fixed a bug that meant file order was not being saved correctly in sessions! sorry for the trouble!</li>
+<li>import of videos is now a little faster, as the ffmpeg call that checks resolution and duration is now retained and reused to check for the presence of an audio channel</li>
+<li>when files are imported, the status messages are now much more granular. large and CPU-heavy files should move noticeably from hash generation to filetype calculation to metadata to the actual file copy</li>
+<li>fixed a database query bug in the new processing progress tracking code that was affecting some (perhaps older) versions of sqlite</li>
+<li>when you trash/untrash/etc... a file in the media viewer, the top hover text now updates to show the file location change</li>
+<li>fixed a typo bug in the new content type tracking that broke ipfs pinning yet again, sorry for the trouble! (issue #955)</li>
+<li>I fleshed out my database pending and num_pending tests significantly. now all uploadable content types are tested, so ipfs should not break at the _db_ level again</li>
+<li>the page tab menu now clumps the 'close x pages' actions into a dynamic submenu when there are several options, and it excludes duplicates (e.g. 'close others' and 'close to the left' when you right-click the rightmost page)</li>
+<li>the page tab menu also puts the 'move' actions under a submenu</li>
+<li>the page tab menu now has a 'select' submenu for navigating home/left/right/end like the shortcuts</li>
+<li>fixed some repository content type checking problems: showing petition pages when the user has moderation privileges on a repository, the permission check when fetching the number of petitions, and the permission check when uploading files</li>
+<li>fixed a typo in the 'running in wine' html that made the whole document big and bold</li>
+<li>across the program, a 'year' for most date calculations like 'system:time imported: more than a year ago' is now 365 days (up from 12 x 30-day months). these will likely be calculated against the real calendar correctly in future, but for now we'll stick with something simple that is just a bit more accurate</li>
+<li>fixed a bug in mpv loop-seek that occurred when the system lags for a moment just as the user closes the media viewer and the video loops back to the start</li>
+<li>.</li>
+<li>client api:</li>
+<li>expanded my testing system to handle more 'read' database parameter testing, and added some unit tests for the new client api file search code</li>
+<li>fixed the 'file_sort_asc' parameter in the new client api file search call. it was a stupid untested typo, thank you for the reports (issue #959)</li>
+<li>fixed 'file_service_name' and 'tag_service_name' when they are GET parameters in the client api</li>
+<li>I fleshed out the file search sort help to say what ascending/descending means for each file sort type</li>
+<li>.</li>
+<li>boring database cleanup:</li>
+<li>to cut down on redundant spam, the new query planner profile mode only plans each unique query text once per run of the mode</li>
+<li>also fixed an issue in the query planner with multiple-row queries that have an empty list argument</li>
+<li>refactored the tag sibling and parent database storage and lookup code out to separate db modules</li>
+<li>untangled and optimised a couple of sibling/parent lookup chain regeneration calls</li>
+<li>moved more sibling and parent responsibility to the new modules, clearing some inline hardcoding out of the main class</li>
+<li>cleaned up a bunch of sibling, parent, and display code generally, and improved communication between these modules, particularly in regards to update interactions and display sync</li>
+<li>the similar files data tables have been migrated to more appropriate locations. previously, they were all in client.caches.db; now the phash definition and file mapping tables are in master, and the similar files search record is in main</li>
+</ul>
 <li><h3 id="version_450"><a href="#version_450">version 450</a></h3></li>
 <ul>
 <li>misc:</li>

@@ -1218,27 +1218,27 @@
 </ul>
 <p>More system predicate types and input formats will be available in future. Please test out the system predicates you want to send. Reverse engineering system predicate data from text is obviously tricky. If a system predicate does not parse, you'll get 400.</p>
 <p>The file and tag services are for search domain selection, just like clicking the buttons in the client. They are optional--default is 'my files' and 'all known tags', and you can use either key or name as in <a href="#get_services">GET /get_services</a>, whichever is easiest for your situation.</p>
-<p>file_sort_type is an integer according to this enum (default is import time):</p>
-<p>file_sort_asc is 'true' for ascending, and 'false' for descending. The default is descending.</p>
+<p>file_sort_type is by default <i>import time</i>. It is an integer according to the following enum, and I have written the semantic (asc/desc) meaning for each type after:</p>
 <ul>
-<li>0 - file size</li>
-<li>1 - duration</li>
-<li>2 - import time</li>
-<li>3 - filetype</li>
-<li>4 - random</li>
-<li>5 - width</li>
-<li>6 - height</li>
-<li>7 - ratio</li>
-<li>8 - number of pixels</li>
-<li>9 - number of tags (on the current tag domain)</li>
-<li>10 - number of media viewers</li>
-<li>11 - total media viewtime</li>
-<li>12 - approximate bitrate</li>
-<li>13 - has audio</li>
-<li>14 - modified time</li>
-<li>15 - framerate</li>
-<li>16 - number of frames</li>
+<li>0 - file size (smallest first/largest first)</li>
+<li>1 - duration (shortest first/longest first)</li>
+<li>2 - import time (oldest first/newest first)</li>
+<li>3 - filetype (N/A)</li>
+<li>4 - random (N/A)</li>
+<li>5 - width (slimmest first/widest first)</li>
+<li>6 - height (shortest first/tallest first)</li>
+<li>7 - ratio (tallest first/widest first)</li>
+<li>8 - number of pixels (ascending/descending)</li>
+<li>9 - number of tags (on the current tag domain) (ascending/descending)</li>
+<li>10 - number of media views (ascending/descending)</li>
+<li>11 - total media viewtime (ascending/descending)</li>
+<li>12 - approximate bitrate (smallest first/largest first)</li>
+<li>13 - has audio (audio first/silent first)</li>
+<li>14 - modified time (oldest first/newest first)</li>
+<li>15 - framerate (slowest first/fastest first)</li>
+<li>16 - number of frames (smallest first/largest first)</li>
 </ul>
+<p>file_sort_asc is 'true' for ascending, and 'false' for descending. The default is descending. What ascending or descending means in a context can be complicated (e.g. for ratio), so you might want to play around with it or double-check the UI in the client to figure it out.</p>
 <p>Response description: The full list of numerical file ids that match the search.</p>
 <li>
 <p>Example response:</p>
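
For orientation, here is a minimal sketch of the new sort parameters in use against GET /get_files/search_files, the client api file search endpoint this page documents. The port is the client's default; the access key and tag are illustrative assumptions, not values from the diff:

    # minimal sketch: search for a tag, sorted by import time, newest first
    import json
    import urllib.parse
    import urllib.request
    
    API = 'http://127.0.0.1:45869'
    ACCESS_KEY = '0123456789abcdef'  # hypothetical access key
    
    params = urllib.parse.urlencode( {
        'tags' : json.dumps( [ 'blue eyes' ] ),  # tags go up as a JSON list
        'file_sort_type' : 2,                    # 2 = import time, per the enum above
        'file_sort_asc' : 'false'                # descending = newest first for import time
    } )
    
    req = urllib.request.Request(
        '{}/get_files/search_files?{}'.format( API, params ),
        headers = { 'Hydrus-Client-API-Access-Key' : ACCESS_KEY }
    )
    
    with urllib.request.urlopen( req ) as response:
        
        file_ids = json.loads( response.read() )[ 'file_ids' ]
    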

@@ -6,7 +6,7 @@
 </head>
 <body>
 <div class="content">
-<h3 id="intro"><a href="#intro">getting it to work on wine</ap</h3>
+<h3 id="intro"><a href="#intro">getting it to work on wine</a></h3>
 <p>Several Linux and macOS users have found success running hydrus with Wine. Here is a post from a Linux dude:</p>
 <i>
 <p>Some things I picked up on after extended use:</p>

@@ -1220,7 +1220,7 @@ class Controller( HydrusController.HydrusController ):
 
         message = 'Hi, this looks like the first time you have started the hydrus client.'
         message += os.linesep * 2
-        message += 'Don\'t forget to check out the help if you haven\'t already--it has an extensive \'getting started\' section, including how to update and the importance of backing up your database.'
+        message += 'Don\'t forget to check out the help if you haven\'t already, by clicking help->help--it has an extensive \'getting started\' section, including how to update and the importance of backing up your database.'
         message += os.linesep * 2
         message += 'To dismiss popup messages like this, right-click them.'
 

@@ -96,7 +96,7 @@ def GetDefaultCheckerOptions( name ):
     
     elif name == 'slow tag subscription':
         
-        return ClientImportOptions.CheckerOptions( intended_files_per_check = 1, never_faster_than = 7 * 86400, never_slower_than = 180 * 86400, death_file_velocity = ( 1, 360 * 86400 ) )
+        return ClientImportOptions.CheckerOptions( intended_files_per_check = 1, never_faster_than = 7 * 86400, never_slower_than = 180 * 86400, death_file_velocity = ( 1, 365 * 86400 ) )
         
     
 def GetDefaultHentaiFoundryInfo():

@@ -366,7 +366,7 @@ class FileSystemPredicates( object ):
 
         ( years, months, days, hours ) = age_value
         
-        age = ( ( ( ( ( ( ( years * 12 ) + months ) * 30 ) + days ) * 24 ) + hours ) * 3600 )
+        age = ( years * 365 * 86400 ) + ( ( ( ( ( months * 30 ) + days ) * 24 ) + hours ) * 3600 )
         
         now = HydrusData.GetNow()

@@ -2176,7 +2176,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ):
 
         DAY = 86400
         MONTH = DAY * 30
-        YEAR = MONTH * 12
+        YEAR = DAY * 365
         
         time_delta = 0
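
The two hunks above swap the same constant. A quick worked check of what actually changes, as a sketch (the names mirror the code, the arithmetic is just verification):

    # old 'year': 12 thirty-day months; new 'year': a flat 365 days
    DAY = 86400
    
    old_year = DAY * 30 * 12   # 31,104,000 seconds = 360 days
    new_year = DAY * 365       # 31,536,000 seconds
    
    assert new_year - old_year == 5 * DAY  # the new year is five days longer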

@@ -58,7 +58,7 @@ def GenerateDefaultServiceDictionary( service_type ):
     dictionary[ 'update_downloading_paused' ] = False
     dictionary[ 'update_processing_paused' ] = False
     
-    content_types = tuple( HC.REPOSITORY_CONTENT_TYPES[ service_type ] )
+    content_types = tuple( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service_type ] )
     
     dictionary[ 'update_processing_content_types_paused' ] = [ [ content_type, False ] for content_type in content_types ]
     

@@ -1514,7 +1514,7 @@ class ServiceRepository( ServiceRestricted ):
 
         if 'update_processing_content_types_paused' not in dictionary:
             
-            content_types = tuple( HC.REPOSITORY_CONTENT_TYPES[ self._service_type ] )
+            content_types = tuple( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service_type ] )
             
             dictionary[ 'update_processing_content_types_paused' ] = [ [ content_type, False ] for content_type in content_types ]
             
File diff suppressed because it is too large

@@ -159,7 +159,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
 
         else:
             
-            content_types = tuple( HC.REPOSITORY_CONTENT_TYPES[ service_type ] )
+            content_types = tuple( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service_type ] )
             
         
         inserts.extend( ( ( hash_id, content_type, processed ) for content_type in content_types ) )

@@ -324,7 +324,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
             ( min_unregistered_update_index, ) = result
             
         
-        predicate_phrase = 'processed = False AND content_type IN {}'.format( HydrusData.SplayListForDB( content_types_to_process ) )
+        predicate_phrase = 'processed = ? AND content_type IN {}'.format( HydrusData.SplayListForDB( content_types_to_process ) )
         
         if min_unregistered_update_index is not None:

@@ -336,7 +336,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
 
         query = 'SELECT update_index, hash_id, content_type FROM {} CROSS JOIN {} USING ( hash_id ) WHERE {};'.format( repository_updates_processed_table_name, repository_updates_table_name, predicate_phrase )
         
-        rows = self._Execute( query ).fetchall()
+        rows = self._Execute( query, ( False, ) ).fetchall()
         
         update_indices_to_unprocessed_hash_ids = HydrusData.BuildKeyToSetDict( ( ( update_index, hash_id ) for ( update_index, hash_id, content_type ) in rows ) )
         hash_ids_to_content_types_to_process = HydrusData.BuildKeyToSetDict( ( ( hash_id, content_type ) for ( update_index, hash_id, content_type ) in rows ) )
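
Context for the two fixes above: SQLite only gained the TRUE/FALSE keywords in version 3.23.0, so on older versions the literal False in 'processed = False' parses as an unknown column name. Binding the value as a ? parameter sidesteps the keyword entirely, since Python's sqlite3 adapts booleans to the integers 0 and 1. A minimal, self-contained reproduction of the pattern (the table and values here are illustrative):

    import sqlite3
    
    db = sqlite3.connect( ':memory:' )
    
    db.execute( 'CREATE TABLE updates ( hash_id INTEGER, processed INTEGER );' )
    db.executemany( 'INSERT INTO updates VALUES ( ?, ? );', [ ( 1, 0 ), ( 2, 1 ) ] )
    
    # portable across sqlite versions: False is bound as the integer 0
    rows = db.execute( 'SELECT hash_id FROM updates WHERE processed = ?;', ( False, ) ).fetchall()
    
    print( rows )  # [(1,)]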

@@ -197,7 +197,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
 
         index_generation_tuples = []
         
-        index_generation_tuples.append( ( 'external_caches.shape_perceptual_hash_map', [ 'hash_id' ], False ) )
+        index_generation_tuples.append( ( 'external_master.shape_perceptual_hash_map', [ 'hash_id' ], False ) )
         index_generation_tuples.append( ( 'external_caches.shape_vptree', [ 'parent_id' ], False ) )
         
         return index_generation_tuples

@@ -406,15 +406,15 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
 
     def CreateInitialTables( self ):
         
-        self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' )
         
-        self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' )
         
         self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_vptree ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER, radius INTEGER, inner_id INTEGER, inner_population INTEGER, outer_id INTEGER, outer_population INTEGER );' )
         
         self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_maintenance_branch_regen ( phash_id INTEGER PRIMARY KEY );' )
         
-        self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )
         
     
     def DisassociatePHashes( self, hash_id, phash_ids ):
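
A note on the schema-qualified names above: 'external_master.' and 'external_caches.' are SQLite schema names for separately ATTACHed database files, which is how hydrus splits its store across client.master.db, client.caches.db, and friends. 'Moving a table to master' therefore just means creating it under a different attached schema. A toy illustration, with in-memory databases standing in for the real files:

    import sqlite3
    
    db = sqlite3.connect( ':memory:' )
    
    # hydrus attaches its extra db files under schema names like these
    db.execute( "ATTACH ':memory:' AS external_master;" )
    db.execute( "ATTACH ':memory:' AS external_caches;" )
    
    # tables are then addressed as schema.table, as in the hunk above
    db.execute( 'CREATE TABLE external_master.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB UNIQUE );' )
    db.execute( 'CREATE TABLE external_caches.shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )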

@@ -0,0 +1,813 @@
import collections
import itertools
import sqlite3
import typing

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDBModule

from hydrus.client.db import ClientDBDefinitionsCache
from hydrus.client.db import ClientDBServices
from hydrus.client.db import ClientDBTagSiblings
from hydrus.client.metadata import ClientTags
from hydrus.client.metadata import ClientTagsHandling

def GenerateTagParentsLookupCacheTableName( display_type: int, service_id: int ):
    
    ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id )
    
    if display_type == ClientTags.TAG_DISPLAY_IDEAL:
        
        return cache_ideal_tag_parents_lookup_table_name
        
    elif display_type == ClientTags.TAG_DISPLAY_ACTUAL:
        
        return cache_actual_tag_parents_lookup_table_name
        
    

def GenerateTagParentsLookupCacheTableNames( service_id ):
    
    cache_ideal_tag_parents_lookup_table_name = 'external_caches.ideal_tag_parents_lookup_cache_{}'.format( service_id )
    cache_actual_tag_parents_lookup_table_name = 'external_caches.actual_tag_parents_lookup_cache_{}'.format( service_id )
    
    return ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name )
    

class ClientDBTagParents( HydrusDBModule.HydrusDBModule ):
    
    def __init__(
        self,
        cursor: sqlite3.Cursor,
        modules_services: ClientDBServices.ClientDBMasterServices,
        modules_tags_local_cache: ClientDBDefinitionsCache.ClientDBCacheLocalTags,
        modules_tag_siblings: ClientDBTagSiblings.ClientDBTagSiblings
    ):
        
        self.modules_services = modules_services
        self.modules_tags_local_cache = modules_tags_local_cache
        self.modules_tag_siblings = modules_tag_siblings
        
        self._service_ids_to_display_application_status = {}
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
        HydrusDBModule.HydrusDBModule.__init__( self, 'client tag parents', cursor )
        
    
    def _GetInitialIndexGenerationTuples( self ):
        
        index_generation_tuples = [
            ( 'tag_parents', [ 'service_id', 'parent_tag_id' ], False ),
            ( 'tag_parent_petitions', [ 'service_id', 'parent_tag_id' ], False ),
        ]
        
        return index_generation_tuples
        
    
    def AddTagParents( self, service_id, pairs ):
        
        self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) )
        self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id ) in pairs ) )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( child_tag_id, parent_tag_id ) in pairs ) )
        
    
    def ClearActual( self, service_id ):
        
        cache_actual_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, service_id )
        
        self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) )
        
        if service_id in self._service_ids_to_display_application_status:
            
            del self._service_ids_to_display_application_status[ service_id ]
            
        
    
    def CreateInitialTables( self ):
        
        self._Execute( 'CREATE TABLE tag_parents ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' )
        self._Execute( 'CREATE TABLE tag_parent_petitions ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' )
        
        self._Execute( 'CREATE TABLE tag_parent_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' )
        
    
    def DeleteTagParents( self, service_id, pairs ):
        
        self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) )
        self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id ) in pairs ) )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_DELETED ) for ( child_tag_id, parent_tag_id ) in pairs ) )
        
    
    def Drop( self, tag_service_id ):
        
        self._Execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( tag_service_id, ) )
        self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( tag_service_id, ) )
        
        ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id )
        
        self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_parents_lookup_table_name ) )
        self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_parents_lookup_table_name ) )
        
        self._Execute( 'DELETE FROM tag_parent_application WHERE master_service_id = ? OR application_service_id = ?;', ( tag_service_id, tag_service_id ) )
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
    
    def FilterChained( self, display_type, tag_service_id, ideal_tag_ids ):
        
        if len( ideal_tag_ids ) == 0:
            
            return set()
            
        elif len( ideal_tag_ids ) == 1:
            
            ( ideal_tag_id, ) = ideal_tag_ids
            
            if self.IsChained( display_type, tag_service_id, ideal_tag_id ):
                
                return { ideal_tag_id }
                
            else:
                
                return set()
                
            
        
        # get the tag_ids that are part of a parent chain
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'tag_id' ) as temp_table_name:
            
            # keep these separate--older sqlite can't do cross join to an OR ON
            
            # temp tags to lookup
            chain_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) )
            chain_tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) )
            
        
        return chain_tag_ids
        
    
    def Generate( self, tag_service_id ):
        
        ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id )
        
        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_actual_tag_parents_lookup_table_name ) )
        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_ideal_tag_parents_lookup_table_name ) )
        
        self._CreateIndex( cache_actual_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] )
        self._CreateIndex( cache_ideal_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] )
        
        self._Execute( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( tag_service_id, 0, tag_service_id ) )
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
        self.Regen( ( tag_service_id, ) )
        
    
    def GenerateApplicationDicts( self ):
        
        unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._Execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_parent_application;' ) )
        
        self._service_ids_to_applicable_service_ids = collections.defaultdict( list )
        
        self._service_ids_to_applicable_service_ids.update( { master_service_id : [ application_service_id for ( index, application_service_id ) in sorted( index_and_applicable_service_ids ) ] for ( master_service_id, index_and_applicable_service_ids ) in unsorted_dict.items() } )
        
        self._service_ids_to_interested_service_ids = collections.defaultdict( set )
        
        for ( master_service_id, application_service_ids ) in self._service_ids_to_applicable_service_ids.items():
            
            for application_service_id in application_service_ids:
                
                self._service_ids_to_interested_service_ids[ application_service_id ].add( master_service_id )
                
            
        
    
    def GetAllTagIds( self, display_type, tag_service_id ):
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        tag_ids = set()
        
        tag_ids.update( self._STI( self._Execute( 'SELECT DISTINCT child_tag_id FROM {};'.format( cache_tag_parents_lookup_table_name ) ) ) )
        tag_ids.update( self._STI( self._Execute( 'SELECT DISTINCT ancestor_tag_id FROM {};'.format( cache_tag_parents_lookup_table_name ) ) ) )
        
        return tag_ids
        
    
    def GetAncestors( self, display_type: int, tag_service_id: int, ideal_tag_id: int ):
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        ancestor_ids = self._STS( self._Execute( 'SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) )
        
        return ancestor_ids
        
    
    def GetApplicableServiceIds( self, tag_service_id ):
        
        if self._service_ids_to_applicable_service_ids is None:
            
            self.GenerateApplicationDicts()
            
        
        return self._service_ids_to_applicable_service_ids[ tag_service_id ]
        
    
    def GetApplication( self ):
        
        if self._service_ids_to_applicable_service_ids is None:
            
            self.GenerateApplicationDicts()
            
        
        service_ids_to_service_keys = {}
        
        service_keys_to_parent_applicable_service_keys = {}
        
        for ( master_service_id, applicable_service_ids ) in self._service_ids_to_applicable_service_ids.items():
            
            all_service_ids = [ master_service_id ] + list( applicable_service_ids )
            
            for service_id in all_service_ids:
                
                if service_id not in service_ids_to_service_keys:
                    
                    service_ids_to_service_keys[ service_id ] = self.modules_services.GetService( service_id ).GetServiceKey()
                    
                
            
            service_keys_to_parent_applicable_service_keys[ service_ids_to_service_keys[ master_service_id ] ] = [ service_ids_to_service_keys[ service_id ] for service_id in applicable_service_ids ]
            
        
        return service_keys_to_parent_applicable_service_keys
        
    
    def GetApplicationStatus( self, service_id ):
        
        if service_id not in self._service_ids_to_display_application_status:
            
            ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id )
            
            actual_parent_rows = set( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) )
            ideal_parent_rows = set( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) )
            
            parent_rows_to_remove = actual_parent_rows.difference( ideal_parent_rows )
            parent_rows_to_add = ideal_parent_rows.difference( actual_parent_rows )
            
            num_actual_rows = len( actual_parent_rows )
            num_ideal_rows = len( ideal_parent_rows )
            
            self._service_ids_to_display_application_status[ service_id ] = ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows )
            
        
        ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ service_id ]
        
        return ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows )
        
    
    def GetChainsMembers( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ):
        
        if len( ideal_tag_ids ) == 0:
            
            return set()
            
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        chain_tag_ids = set( ideal_tag_ids )
        we_have_looked_up = set()
        next_search_tag_ids = set( ideal_tag_ids )
        
        while len( next_search_tag_ids ) > 0:
            
            if len( next_search_tag_ids ) == 1:
                
                ( ideal_tag_id, ) = next_search_tag_ids
                
                round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) )
                
            else:
                
                with self._MakeTemporaryIntegerTable( next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name:
                    
                    round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) )
                    
                
            
            chain_tag_ids.update( round_of_tag_ids )
            
            we_have_looked_up.update( next_search_tag_ids )
            
            next_search_tag_ids = round_of_tag_ids.difference( we_have_looked_up )
            
        
        return chain_tag_ids
        
    
    def GetChainsMembersTables( self, display_type: int, tag_service_id: int, ideal_tag_ids_table_name: str, results_table_name: str ):
        
        # if it isn't crazy, I should write this whole lad to be one or two recursive queries
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        first_ideal_tag_ids = self._STS( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( ideal_tag_ids_table_name ) ) )
        
        chain_tag_ids = set( first_ideal_tag_ids )
        we_have_looked_up = set()
        next_search_tag_ids = set( first_ideal_tag_ids )
        
        while len( next_search_tag_ids ) > 0:
            
            if len( next_search_tag_ids ) == 1:
                
                ( ideal_tag_id, ) = next_search_tag_ids
                
                round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) )
                
            else:
                
                with self._MakeTemporaryIntegerTable( next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name:
                    
                    round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) )
                    
                
            
            new_tag_ids = round_of_tag_ids.difference( chain_tag_ids )
            
            if len( new_tag_ids ) > 0:
                
                # record the newly discovered chain members in the caller's results table
                self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( tag_id ) VALUES ( ? );'.format( results_table_name ), ( ( tag_id, ) for tag_id in new_tag_ids ) )
                
                chain_tag_ids.update( new_tag_ids )
                
            
            we_have_looked_up.update( next_search_tag_ids )
            
            next_search_tag_ids = round_of_tag_ids.difference( we_have_looked_up )
            
        
    
    def GetDescendants( self, display_type: int, tag_service_id: int, ideal_tag_id: int ):
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        descendant_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) )
        
        return descendant_ids
        
    
    def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
        
        expected_table_names = [
            'tag_parents',
            'tag_parent_petitions',
            'tag_parent_application'
        ]
        
        return expected_table_names
        
    
    def GetInterestedServiceIds( self, tag_service_id ):
        
        if self._service_ids_to_interested_service_ids is None:
            
            self.GenerateApplicationDicts()
            
        
        return self._service_ids_to_interested_service_ids[ tag_service_id ]
        
    
    def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
        
        if content_type == HC.CONTENT_TYPE_TAG:
            
            return [
                ( 'tag_parents', 'child_tag_id' ),
                ( 'tag_parents', 'parent_tag_id' ),
                ( 'tag_parent_petitions', 'child_tag_id' ),
                ( 'tag_parent_petitions', 'parent_tag_id' )
            ]
            
        
        return []
        
    
    def GetTagParents( self, service_key ):
        
        service_id = self.modules_services.GetServiceId( service_key )
        
        statuses_to_pair_ids = self.GetTagParentsIds( service_id )
        
        all_tag_ids = set()
        
        for pair_ids in statuses_to_pair_ids.values():
            
            for ( child_tag_id, parent_tag_id ) in pair_ids:
                
                all_tag_ids.add( child_tag_id )
                all_tag_ids.add( parent_tag_id )
                
            
        
        tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids )
        
        statuses_to_pairs = collections.defaultdict( set )
        
        statuses_to_pairs.update( { status : { ( tag_ids_to_tags[ child_tag_id ], tag_ids_to_tags[ parent_tag_id ] ) for ( child_tag_id, parent_tag_id ) in pair_ids } for ( status, pair_ids ) in statuses_to_pair_ids.items() } )
        
        return statuses_to_pairs
        
    
    def GetTagParentsIds( self, service_id ):
        
        statuses_and_pair_ids = self._Execute( 'SELECT status, child_tag_id, parent_tag_id FROM tag_parents WHERE service_id = ? UNION SELECT status, child_tag_id, parent_tag_id FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall()
        
        unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( child_tag_id, parent_tag_id ) ) for ( status, child_tag_id, parent_tag_id ) in statuses_and_pair_ids )
        
        statuses_to_pair_ids = collections.defaultdict( list )
        
        statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } )
        
        return statuses_to_pair_ids
        
    
    def GetTagParentsIdsChains( self, service_id, tag_ids ):
        
        # I experimented with one or two recursive queries, and for siblings, but it mostly ended up hellmode index efficiency. I think ( service_id, integer ) did it in
        
        # note that this has to do sibling lookup as well to fetch pairs that are only connected to our chain by sibling relationships, and we are assuming here that the sibling lookup cache is valid
        
        searched_tag_ids = set()
        next_tag_ids = set( tag_ids )
        result_rows = set()
        
        while len( next_tag_ids ) > 0:
            
            tag_ids_seen_this_round = set()
            
            ideal_tag_ids = self.modules_tag_siblings.GetIdeals( ClientTags.TAG_DISPLAY_IDEAL, service_id, next_tag_ids )
            
            tag_ids_seen_this_round.update( self.modules_tag_siblings.GetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, service_id, ideal_tag_ids ) )
            
            with self._MakeTemporaryIntegerTable( next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name:
                
                searched_tag_ids.update( next_tag_ids )
                
                # keep these separate--older sqlite can't do cross join to an OR ON
                
                # temp tag_ids to parents
                queries = [
                    'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parents ON ( child_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
                    'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parents ON ( parent_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
                    'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parent_petitions ON ( child_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
                    'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parent_petitions ON ( parent_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name )
                ]
                
                query = ' UNION '.join( queries )
                
                for row in self._Execute( query, ( service_id, service_id, service_id, service_id ) ):
                    
                    result_rows.add( row )
                    
                    ( status, child_tag_id, parent_tag_id ) = row
                    
                    tag_ids_seen_this_round.update( ( child_tag_id, parent_tag_id ) )
                    
                
            
            next_tag_ids = tag_ids_seen_this_round.difference( searched_tag_ids )
            
        
        unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( child_tag_id, parent_tag_id ) ) for ( status, child_tag_id, parent_tag_id ) in result_rows )
        
        statuses_to_pair_ids = collections.defaultdict( list )
        
        statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } )
        
        return statuses_to_pair_ids
        
    
    def GetTagsToAncestors( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ):
        
        if len( ideal_tag_ids ) == 0:
            
            return {}
            
        elif len( ideal_tag_ids ) == 1:
            
            ( ideal_tag_id, ) = ideal_tag_ids
            
            ancestors = self.GetAncestors( display_type, tag_service_id, ideal_tag_id )
            
            return { ideal_tag_id : ancestors }
            
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'child_tag_id' ) as temp_table_name:
            
            tag_ids_to_ancestors = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {} CROSS JOIN {} USING ( child_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) )
            
        
        for tag_id in ideal_tag_ids:
            
            if tag_id not in tag_ids_to_ancestors:
                
                tag_ids_to_ancestors[ tag_id ] = set()
                
            
        
        return tag_ids_to_ancestors
        
    
    def GetTagsToDescendants( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ):
        
        if len( ideal_tag_ids ) == 0:
            
            return {}
            
        elif len( ideal_tag_ids ) == 1:
            
            ( ideal_tag_id, ) = ideal_tag_ids
            
            descendants = self.GetDescendants( display_type, tag_service_id, ideal_tag_id )
            
            return { ideal_tag_id : descendants }
            
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ancestor_tag_id' ) as temp_table_name:
            
            tag_ids_to_descendants = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT ancestor_tag_id, child_tag_id FROM {} CROSS JOIN {} USING ( ancestor_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) )
            
        
        for ideal_tag_id in ideal_tag_ids:
            
            if ideal_tag_id not in tag_ids_to_descendants:
                
                tag_ids_to_descendants[ ideal_tag_id ] = set()
                
            
        
        return tag_ids_to_descendants
        
    
    def IdealiseStatusesToPairIds( self, tag_service_id, unideal_statuses_to_pair_ids ):
        
        all_tag_ids = set( itertools.chain.from_iterable( ( itertools.chain.from_iterable( pair_ids ) for pair_ids in unideal_statuses_to_pair_ids.values() ) ) )
        
        tag_ids_to_ideal_tag_ids = self.modules_tag_siblings.GetTagsToIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, all_tag_ids )
        
        ideal_statuses_to_pair_ids = collections.defaultdict( list )
        
        for ( status, pair_ids ) in unideal_statuses_to_pair_ids.items():
            
            ideal_pair_ids = sorted( ( ( tag_ids_to_ideal_tag_ids[ child_tag_id ], tag_ids_to_ideal_tag_ids[ parent_tag_id ] ) for ( child_tag_id, parent_tag_id ) in pair_ids ) )
            
            ideal_statuses_to_pair_ids[ status ] = ideal_pair_ids
            
        
        return ideal_statuses_to_pair_ids
        
    
    def IsChained( self, display_type, tag_service_id, ideal_tag_id ):
        
        cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
        
        return self._Execute( 'SELECT 1 FROM {} WHERE child_tag_id = ? OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ).fetchone() is not None
        
    
    def NotifyParentAddRowSynced( self, tag_service_id, row ):
        
        if tag_service_id in self._service_ids_to_display_application_status:
            
            ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ tag_service_id ]
            
            parent_rows_to_add.discard( row )
            
            num_actual_rows += 1
            
            self._service_ids_to_display_application_status[ tag_service_id ] = ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows )
            
        
    
    def NotifyParentDeleteRowSynced( self, tag_service_id, row ):
        
        if tag_service_id in self._service_ids_to_display_application_status:
            
            ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ tag_service_id ]
            
            parent_rows_to_remove.discard( row )
            
            num_actual_rows -= 1
            
            self._service_ids_to_display_application_status[ tag_service_id ] = ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows )
            
        
    
    def PendTagParents( self, service_id, triples ):
        
        self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id, reason_id ) in triples ) )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_petitions ( service_id, child_tag_id, parent_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, reason_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id, reason_id ) in triples ) )
        
    
    def PetitionTagParents( self, service_id, triples ):
        
        self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id, reason_id ) in triples ) )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_petitions ( service_id, child_tag_id, parent_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, reason_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id, reason_id ) in triples ) )
        
    
    def RescindPendingTagParents( self, service_id, pairs ):
        
        self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id ) in pairs ) )
        
    
    def RescindPetitionedTagParents( self, service_id, pairs ):
        
        self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id ) in pairs ) )
        
    
    def Regen( self, tag_service_ids ):
        
        for tag_service_id in tag_service_ids:
            
            cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id )
            
            self._Execute( 'DELETE FROM {};'.format( cache_tag_parents_lookup_table_name ) )
            
            applicable_service_ids = self.GetApplicableServiceIds( tag_service_id )
            
            tps = ClientTagsHandling.TagParentsStructure()
            
            for applicable_service_id in applicable_service_ids:
                
                unideal_statuses_to_pair_ids = self.GetTagParentsIds( service_id = applicable_service_id )
                
                # we have to collapse the parent ids according to siblings
                
                ideal_statuses_to_pair_ids = self.IdealiseStatusesToPairIds( tag_service_id, unideal_statuses_to_pair_ids )
                
                #
                
                petitioned_fast_lookup = set( ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] )
                
                for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]:
                    
                    if ( child_tag_id, parent_tag_id ) in petitioned_fast_lookup:
                        
                        continue
                        
                    
                    tps.AddPair( child_tag_id, parent_tag_id )
                    
                
                for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]:
                    
                    tps.AddPair( child_tag_id, parent_tag_id )
                    
                
            
            self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() )
            
            if tag_service_id in self._service_ids_to_display_application_status:
                
                del self._service_ids_to_display_application_status[ tag_service_id ]
                
            
        
    
    def RegenChains( self, tag_service_ids, tag_ids ):
        
        if len( tag_ids ) == 0:
            
            return
            
        
        for tag_service_id in tag_service_ids:
            
            cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id )
            
            # it is possible that the parents cache currently contains non-ideal tag_ids
            # so, to be safe, we'll also get all sibling chain members
            
            tag_ids_to_clear_and_regen = set( tag_ids )
            
            ideal_tag_ids = self.modules_tag_siblings.GetIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids )
            
            tag_ids_to_clear_and_regen.update( self.modules_tag_siblings.GetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, ideal_tag_ids ) )
            
            # and now all possible current parent chains based on this
            
            tag_ids_to_clear_and_regen.update( self.GetChainsMembers( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids_to_clear_and_regen ) )
            
            # this should now contain all possible tag_ids that could be in tag parents right now related to what we were given
            
            self._ExecuteMany( 'DELETE FROM {} WHERE child_tag_id = ? OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) )
            
            # we wipe them
            
            applicable_tag_service_ids = self.GetApplicableServiceIds( tag_service_id )
            
            tps = ClientTagsHandling.TagParentsStructure()
            
            for applicable_tag_service_id in applicable_tag_service_ids:
                
                service_key = self.modules_services.GetService( applicable_tag_service_id ).GetServiceKey()
                
                unideal_statuses_to_pair_ids = self.GetTagParentsIdsChains( applicable_tag_service_id, tag_ids_to_clear_and_regen )
                
                ideal_statuses_to_pair_ids = self.IdealiseStatusesToPairIds( tag_service_id, unideal_statuses_to_pair_ids )
                
                #
                
                petitioned_fast_lookup = set( ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] )
                
                for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]:
                    
                    if ( child_tag_id, parent_tag_id ) in petitioned_fast_lookup:
                        
                        continue
                        
                    
                    tps.AddPair( child_tag_id, parent_tag_id )
                    
                
                for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]:
                    
                    tps.AddPair( child_tag_id, parent_tag_id )
                    
                
            
            self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() )
            
            if tag_service_id in self._service_ids_to_display_application_status:
                
                del self._service_ids_to_display_application_status[ tag_service_id ]
                
            
        
    
    def SetApplication( self, service_keys_to_applicable_service_keys ):
        
        if self._service_ids_to_applicable_service_ids is None:
            
            self.GenerateApplicationDicts()
            
        
        new_service_ids_to_applicable_service_ids = collections.defaultdict( list )
        
        for ( master_service_key, applicable_service_keys ) in service_keys_to_applicable_service_keys.items():
            
            master_service_id = self.modules_services.GetServiceId( master_service_key )
            applicable_service_ids = [ self.modules_services.GetServiceId( service_key ) for service_key in applicable_service_keys ]
            
            new_service_ids_to_applicable_service_ids[ master_service_id ] = applicable_service_ids
            
        
        old_and_new_master_service_ids = set( self._service_ids_to_applicable_service_ids.keys() )
        old_and_new_master_service_ids.update( new_service_ids_to_applicable_service_ids.keys() )
        
        inserts = []
        
        service_ids_to_sync = set()
        
        for master_service_id in old_and_new_master_service_ids:
            
            if master_service_id in new_service_ids_to_applicable_service_ids:
                
                applicable_service_ids = new_service_ids_to_applicable_service_ids[ master_service_id ]
                
                inserts.extend( ( ( master_service_id, i, applicable_service_id ) for ( i, applicable_service_id ) in enumerate( applicable_service_ids ) ) )
                
                if applicable_service_ids != self._service_ids_to_applicable_service_ids[ master_service_id ]:
                    
                    service_ids_to_sync.add( master_service_id )
                    
                
            else:
                
                service_ids_to_sync.add( master_service_id )
                
            
        
        self._Execute( 'DELETE FROM tag_parent_application;' )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', inserts )
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
        return service_ids_to_sync
        
    
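
GetChainsMembers and GetChainsMembersTables above are plain frontier-expansion traversals over the (child_tag_id, ancestor_tag_id) lookup rows, treated as undirected edges: each round looks up everything adjacent to the current frontier and stops when a round brings nothing new. A standalone sketch of the same technique over an in-memory edge list, for reference (all names here are illustrative, not hydrus API):

    # treat (child, ancestor) rows as undirected edges and expand until stable
    def get_chain_members( edges, seed_ids ):
        
        neighbours = {}
        
        for ( child, ancestor ) in edges:
            
            neighbours.setdefault( child, set() ).add( ancestor )
            neighbours.setdefault( ancestor, set() ).add( child )
            
        
        chain = set( seed_ids )
        looked_up = set()
        frontier = set( seed_ids )
        
        while len( frontier ) > 0:
            
            round_ids = set()
            
            for tag_id in frontier:
                
                round_ids.update( neighbours.get( tag_id, () ) )
                
            
            chain.update( round_ids )
            looked_up.update( frontier )
            
            # only ids we have not yet expanded go into the next round
            frontier = round_ids.difference( looked_up )
            
        
        return chain
    
    # e.g. edges [ ( 1, 2 ), ( 2, 3 ), ( 4, 5 ) ] with seed { 1 } gives { 1, 2, 3 }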
@@ -0,0 +1,909 @@
import collections
import itertools
import sqlite3
import typing

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDBModule

from hydrus.client import ClientConstants as CC
from hydrus.client.db import ClientDBDefinitionsCache
from hydrus.client.db import ClientDBMaster
from hydrus.client.db import ClientDBServices
from hydrus.client.metadata import ClientTags
from hydrus.client.metadata import ClientTagsHandling

def GenerateTagSiblingsLookupCacheTableName( display_type: int, service_id: int ):
    
    ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id )
    
    if display_type == ClientTags.TAG_DISPLAY_IDEAL:
        
        return cache_ideal_tag_siblings_lookup_table_name
        
    elif display_type == ClientTags.TAG_DISPLAY_ACTUAL:
        
        return cache_actual_tag_siblings_lookup_table_name
        
    

def GenerateTagSiblingsLookupCacheTableNames( service_id ):
    
    cache_ideal_tag_siblings_lookup_table_name = 'external_caches.ideal_tag_siblings_lookup_cache_{}'.format( service_id )
    cache_actual_tag_siblings_lookup_table_name = 'external_caches.actual_tag_siblings_lookup_cache_{}'.format( service_id )
    
    return ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name )
    

class ClientDBTagSiblings( HydrusDBModule.HydrusDBModule ):
    
    def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_tags: ClientDBMaster.ClientDBMasterTags, modules_tags_local_cache: ClientDBDefinitionsCache.ClientDBCacheLocalTags ):
        
        self.modules_services = modules_services
        self.modules_tags_local_cache = modules_tags_local_cache
        self.modules_tags = modules_tags
        
        self._service_ids_to_display_application_status = {}
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
        HydrusDBModule.HydrusDBModule.__init__( self, 'client tag siblings', cursor )
        
    
    def _GenerateApplicationDicts( self ):
        
        unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._Execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_sibling_application;' ) )
        
        self._service_ids_to_applicable_service_ids = collections.defaultdict( list )
        
        self._service_ids_to_applicable_service_ids.update( { master_service_id : [ application_service_id for ( index, application_service_id ) in sorted( index_and_applicable_service_ids ) ] for ( master_service_id, index_and_applicable_service_ids ) in unsorted_dict.items() } )
        
        self._service_ids_to_interested_service_ids = collections.defaultdict( set )
        
        for ( master_service_id, application_service_ids ) in self._service_ids_to_applicable_service_ids.items():
            
            for application_service_id in application_service_ids:
                
                self._service_ids_to_interested_service_ids[ application_service_id ].add( master_service_id )
                
            
        
    
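    # a quick note on the helpers at the top: the lookup cache tables are per-service,
    # so for, say, service_id 7 the two names come out as
    #
    #   external_caches.ideal_tag_siblings_lookup_cache_7
    #   external_caches.actual_tag_siblings_lookup_cache_7
    #
    # 'ideal' is the lookup as computed from the current pairs, 'actual' is what the
    # display system has synced so far--GetApplicationStatus below diffs the two
    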
    def _GetInitialIndexGenerationTuples( self ):
        
        index_generation_tuples = [
            ( 'tag_siblings', [ 'service_id', 'good_tag_id' ], False ),
            ( 'tag_sibling_petitions', [ 'service_id', 'good_tag_id' ], False ),
        ]
        
        return index_generation_tuples
        
    
    def AddTagSiblings( self, service_id, pairs ):
        
        self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) )
        self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id ) in pairs ) )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( bad_tag_id, good_tag_id ) in pairs ) )
        
    
    def ClearActual( self, service_id ):
        
        cache_actual_tag_sibling_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, service_id )
        
        self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_sibling_lookup_table_name ) )
        
        if service_id in self._service_ids_to_display_application_status:
            
            del self._service_ids_to_display_application_status[ service_id ]
            
        
    
    def CreateInitialTables( self ):
        
        self._Execute( 'CREATE TABLE tag_siblings ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' )
        self._Execute( 'CREATE TABLE tag_sibling_petitions ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' )
        
        self._Execute( 'CREATE TABLE tag_sibling_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' )
        
    
    def DeleteTagSiblings( self, service_id, pairs ):
        
        self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) )
        self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id ) in pairs ) )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_DELETED ) for ( bad_tag_id, good_tag_id ) in pairs ) )
        
    
    def Drop( self, tag_service_id ):
        
        self._Execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( tag_service_id, ) )
        self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ?;', ( tag_service_id, ) )
        
        ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id )
        
        self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_siblings_lookup_table_name ) )
        self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_siblings_lookup_table_name ) )
        
        self._Execute( 'DELETE FROM tag_sibling_application WHERE master_service_id = ? OR application_service_id = ?;', ( tag_service_id, tag_service_id ) )
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
    
    def FilterChained( self, display_type, tag_service_id, tag_ids ):
        
        if len( tag_ids ) == 0:
            
            return set()
            
        elif len( tag_ids ) == 1:
            
            ( tag_id, ) = tag_ids
            
            if self.IsChained( display_type, tag_service_id, tag_id ):
                
                return { tag_id }
                
            else:
                
                return set()
                
            
        
        # get the tag_ids that are part of a sibling chain
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name:
            
            # keep these separate--older sqlite can't do cross join to an OR ON
            
            # temp tags to lookup
            chain_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
            chain_tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) )
            
        
        return chain_tag_ids
        
    
    def FilterChainedIdealsIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ):
        
        # get the tag_ids that are part of a sibling chain
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        # keep these separate--older sqlite can't do cross join to an OR ON
        
        # temp tags to lookup
        self._Execute( 'INSERT OR IGNORE INTO {} SELECT ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
        self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
        
    
    def FilterChainedIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ):
        
        # get the tag_ids that are part of a sibling chain
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        # keep these separate--older sqlite can't do cross join to an OR ON
        
        # temp tags to lookup
        self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
        self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
        
    
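    # for reference, the single query these Filter methods are avoiding would look
    # something like this (hypothetical--per the comments above, older sqlite will not
    # use the indices through an OR inside a cross join's ON):
    #
    #   SELECT tag_id FROM {temp_table} CROSS JOIN {lookup_table} ON ( bad_tag_id = tag_id OR ideal_tag_id = tag_id );
    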
    def Generate( self, tag_service_id ):
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
        ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id )
        
        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) )
        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) )
        
        self._CreateIndex( cache_actual_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
        self._CreateIndex( cache_ideal_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
        
        self._Execute( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( tag_service_id, 0, tag_service_id ) )
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
        self.Regen( ( tag_service_id, ) )
        
    
    def GetAllTagIds( self, display_type, tag_service_id ):
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        tag_ids = set()
        
        tag_ids.update( self._STI( self._Execute( 'SELECT DISTINCT bad_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ) ) )
        tag_ids.update( self._STI( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ) ) )
        
        return tag_ids
        
    
    def GetApplicableServiceIds( self, tag_service_id ):
        
        if self._service_ids_to_applicable_service_ids is None:
            
            self._GenerateApplicationDicts()
            
        
        return self._service_ids_to_applicable_service_ids[ tag_service_id ]
        
    
    def GetApplication( self ):
        
        if self._service_ids_to_applicable_service_ids is None:
            
            self._GenerateApplicationDicts()
            
        
        service_ids_to_service_keys = {}
        
        service_keys_to_applicable_service_keys = {}
        
        for ( master_service_id, applicable_service_ids ) in self._service_ids_to_applicable_service_ids.items():
            
            all_service_ids = [ master_service_id ] + list( applicable_service_ids )
            
            for service_id in all_service_ids:
                
                if service_id not in service_ids_to_service_keys:
                    
                    service_ids_to_service_keys[ service_id ] = self.modules_services.GetService( service_id ).GetServiceKey()
                    
                
            
            service_keys_to_applicable_service_keys[ service_ids_to_service_keys[ master_service_id ] ] = [ service_ids_to_service_keys[ service_id ] for service_id in applicable_service_ids ]
            
        
        return service_keys_to_applicable_service_keys
        
    
    def GetApplicationStatus( self, service_id ):
        
        if service_id not in self._service_ids_to_display_application_status:
            
            ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id )
            
            actual_sibling_rows = set( self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) )
            ideal_sibling_rows = set( self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) )
            
            sibling_rows_to_remove = actual_sibling_rows.difference( ideal_sibling_rows )
            sibling_rows_to_add = ideal_sibling_rows.difference( actual_sibling_rows )
            
            num_actual_rows = len( actual_sibling_rows )
            num_ideal_rows = len( ideal_sibling_rows )
            
            self._service_ids_to_display_application_status[ service_id ] = ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows )
            
        
        ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ service_id ]
        
        return ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows )
        
    
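    # the cached application status tuple reads ( rows the sync still needs to add,
    # rows it still needs to remove, current actual row count, ideal row count ).
    # e.g. if actual holds ( 1, 2 ) and ideal holds ( 1, 3 ), the tuple is
    # ( { ( 1, 3 ) }, { ( 1, 2 ) }, 1, 1 )
    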
    def GetChainMembersFromIdeal( self, display_type, tag_service_id, ideal_tag_id ) -> typing.Set[ int ]:
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        sibling_tag_ids = self._STS( self._Execute( 'SELECT bad_tag_id FROM {} WHERE ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ideal_tag_id, ) ) )
        
        sibling_tag_ids.add( ideal_tag_id )
        
        return sibling_tag_ids
        
    
    def GetChainsMembersFromIdeals( self, display_type, tag_service_id, ideal_tag_ids ) -> typing.Set[ int ]:
        
        if len( ideal_tag_ids ) == 0:
            
            return set()
            
        elif len( ideal_tag_ids ) == 1:
            
            ( ideal_tag_id, ) = ideal_tag_ids
            
            return self.GetChainMembersFromIdeal( display_type, tag_service_id, ideal_tag_id )
            
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name:
            
            # temp tags to lookup
            sibling_tag_ids = self._STS( self._Execute( 'SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
            
        
        sibling_tag_ids.update( ideal_tag_ids )
        
        return sibling_tag_ids
        
    
    def GetChainsMembersFromIdealsTables( self, display_type, tag_service_id, ideal_tag_ids_table_name, results_table_name ):
        
        # this populates results_table_name rather than returning anything
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT ideal_tag_id FROM {};'.format( results_table_name, ideal_tag_ids_table_name ) )
        
        # tags to lookup
        self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( results_table_name, ideal_tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
        
    
    def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
        
        expected_table_names = [
            'tag_siblings',
            'tag_sibling_petitions',
            'tag_sibling_application'
        ]
        
        return expected_table_names
        
    
    def GetIdeal( self, display_type, tag_service_id, tag_id ) -> int:
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        result = self._Execute( 'SELECT ideal_tag_id FROM {} WHERE bad_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, ) ).fetchone()
        
        if result is None:
            
            return tag_id
            
        else:
            
            ( ideal_tag_id, ) = result
            
            return ideal_tag_id
            
        
    
    def GetIdeals( self, display_type, tag_service_id, tag_ids ) -> typing.Set[ int ]:
        
        if not isinstance( tag_ids, set ):
            
            tag_ids = set( tag_ids )
            
        
        if len( tag_ids ) == 0:
            
            return set()
            
        elif len( tag_ids ) == 1:
            
            ( tag_id, ) = tag_ids
            
            return { self.GetIdeal( display_type, tag_service_id, tag_id ) }
            
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
            
            magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END'
            
            cursor = self._Execute(
                'SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format(
                    magic_case,
                    temp_tag_ids_table_name,
                    cache_tag_siblings_lookup_table_name
                )
            )
            
            return self._STS( cursor )
            
        
        '''
        no_ideal_found_tag_ids = set( tag_ids )
        ideal_tag_ids = set()
        
        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name:
            
            # temp tags to lookup
            for ( tag_id, ideal_tag_id ) in self._Execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ):
                
                no_ideal_found_tag_ids.discard( tag_id )
                ideal_tag_ids.add( ideal_tag_id )
                
            
            ideal_tag_ids.update( no_ideal_found_tag_ids )
            
        
        return ideal_tag_ids
        '''
        
    
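    # the 'magic case' used above and below is just a null-coalesce: the LEFT OUTER
    # JOIN leaves ideal_tag_id NULL for unchained tags, so the CASE falls back to the
    # tag itself. the same expression could be written COALESCE( ideal_tag_id, tag_id )
    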
    def GetIdealsIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ):
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END'
        
        # this populates results_table_name; an INSERT cursor has no rows to return
        self._Execute(
            'INSERT OR IGNORE INTO {} ( ideal_tag_id ) SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format(
                results_table_name,
                magic_case,
                tag_ids_table_name,
                cache_tag_siblings_lookup_table_name
            )
        )
        
    
    def GetIdealsToChains( self, display_type, tag_service_id, ideal_tag_ids ):
        
        # this only takes ideal_tag_ids
        
        if len( ideal_tag_ids ) == 0:
            
            return {}
            
        elif len( ideal_tag_ids ) == 1:
            
            ( ideal_tag_id, ) = ideal_tag_ids
            
            chain_tag_ids = self.GetChainMembersFromIdeal( display_type, tag_service_id, ideal_tag_id )
            
            return { ideal_tag_id : chain_tag_ids }
            
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name:
            
            # temp tags to lookup
            ideal_tag_ids_to_chain_members = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT ideal_tag_id, bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
            
        
        # this returns ideal in the chain, and chains of size 1
        
        for ideal_tag_id in ideal_tag_ids:
            
            ideal_tag_ids_to_chain_members[ ideal_tag_id ].add( ideal_tag_id )
            
        
        return ideal_tag_ids_to_chain_members
        
    
    def GetInterestedServiceIds( self, tag_service_id ):
        
        if self._service_ids_to_interested_service_ids is None:
            
            self._GenerateApplicationDicts()
            
        
        return self._service_ids_to_interested_service_ids[ tag_service_id ]
        
    
    def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
        
        if content_type == HC.CONTENT_TYPE_TAG:
            
            return [
                ( 'tag_siblings', 'bad_tag_id' ),
                ( 'tag_siblings', 'good_tag_id' ),
                ( 'tag_sibling_petitions', 'bad_tag_id' ),
                ( 'tag_sibling_petitions', 'good_tag_id' )
            ]
            
        
        return []
        
    
    def GetTagSiblingsForTags( self, service_key, tags ):
        
        if service_key == CC.COMBINED_TAG_SERVICE_KEY:
            
            tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES )
            
        else:
            
            tag_service_ids = ( self.modules_services.GetServiceId( service_key ), )
            
        
        existing_tags = { tag for tag in tags if self.modules_tags.TagExists( tag ) }
        
        existing_tag_ids = set( self.modules_tags_local_cache.GetTagIdsToTags( tags = existing_tags ).keys() )
        
        tag_ids_to_chain_tag_ids = collections.defaultdict( set )
        
        for tag_service_id in tag_service_ids:
            
            tag_ids_to_ideal_tag_ids = self.GetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, existing_tag_ids )
            
            ideal_tag_ids = set( tag_ids_to_ideal_tag_ids.values() )
            
            ideal_tag_ids_to_chain_tag_ids = self.GetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids )
            
            for tag_id in existing_tag_ids:
                
                chain_tag_ids = ideal_tag_ids_to_chain_tag_ids[ tag_ids_to_ideal_tag_ids[ tag_id ] ]
                
                tag_ids_to_chain_tag_ids[ tag_id ].update( chain_tag_ids )
                
            
        
        all_tag_ids = set( tag_ids_to_chain_tag_ids.keys() )
        all_tag_ids.update( itertools.chain.from_iterable( tag_ids_to_chain_tag_ids.values() ) )
        
        tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids )
        
        tags_to_siblings = { tag_ids_to_tags[ tag_id ] : { tag_ids_to_tags[ chain_tag_id ] for chain_tag_id in chain_tag_ids } for ( tag_id, chain_tag_ids ) in tag_ids_to_chain_tag_ids.items() }
        
        for tag in tags:
            
            if tag not in existing_tags:
                
                tags_to_siblings[ tag ] = { tag }
                
            
        
        return tags_to_siblings
        
    
    def GetTagSiblingsIdeals( self, service_key ):
        
        tag_service_id = self.modules_services.GetServiceId( service_key )
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id )
        
        pair_ids = self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ).fetchall()
        
        all_tag_ids = set( itertools.chain.from_iterable( pair_ids ) )
        
        tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids )
        
        tags_to_ideals = { tag_ids_to_tags[ bad_tag_id ] : tag_ids_to_tags[ good_tag_id ] for ( bad_tag_id, good_tag_id ) in pair_ids }
        
        return tags_to_ideals
        
    
    def GetTagsToIdeals( self, display_type, tag_service_id, tag_ids ):
        
        if not isinstance( tag_ids, set ):
            
            tag_ids = set( tag_ids )
            
        
        if len( tag_ids ) == 0:
            
            return {}
            
        elif len( tag_ids ) == 1:
            
            ( tag_id, ) = tag_ids
            
            return { tag_id : self.GetIdeal( display_type, tag_service_id, tag_id ) }
            
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        no_ideal_found_tag_ids = set( tag_ids )
        tag_ids_to_ideal_tag_ids = {}
        
        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name:
            
            # temp tags to lookup
            for ( tag_id, ideal_tag_id ) in self._Execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ):
                
                no_ideal_found_tag_ids.discard( tag_id )
                tag_ids_to_ideal_tag_ids[ tag_id ] = ideal_tag_id
                
            
            tag_ids_to_ideal_tag_ids.update( { tag_id : tag_id for tag_id in no_ideal_found_tag_ids } )
            
        
        return tag_ids_to_ideal_tag_ids
        
    
    def GetTagSiblings( self, service_key ):
        
        service_id = self.modules_services.GetServiceId( service_key )
        
        statuses_to_pair_ids = self.GetTagSiblingsIds( service_id )
        
        all_tag_ids = set()
        
        for pair_ids in statuses_to_pair_ids.values():
            
            for ( bad_tag_id, good_tag_id ) in pair_ids:
                
                all_tag_ids.add( bad_tag_id )
                all_tag_ids.add( good_tag_id )
                
            
        
        tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids )
        
        statuses_to_pairs = collections.defaultdict( set )
        
        statuses_to_pairs.update( { status : { ( tag_ids_to_tags[ bad_tag_id ], tag_ids_to_tags[ good_tag_id ] ) for ( bad_tag_id, good_tag_id ) in pair_ids } for ( status, pair_ids ) in statuses_to_pair_ids.items() } )
        
        return statuses_to_pairs
        
    
    def GetTagSiblingsIds( self, service_id ):
        
        statuses_and_pair_ids = self._Execute( 'SELECT status, bad_tag_id, good_tag_id FROM tag_siblings WHERE service_id = ? UNION SELECT status, bad_tag_id, good_tag_id FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall()
        
        unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( bad_tag_id, good_tag_id ) ) for ( status, bad_tag_id, good_tag_id ) in statuses_and_pair_ids )
        
        statuses_to_pair_ids = collections.defaultdict( list )
        
        statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } )
        
        return statuses_to_pair_ids
        
    
    def GetTagSiblingsIdsChains( self, service_id, tag_ids ):
        
        done_tag_ids = set()
        next_tag_ids = set( tag_ids )
        result_rows = set()
        
        while len( next_tag_ids ) > 0:
            
            with self._MakeTemporaryIntegerTable( next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name:
                
                done_tag_ids.update( next_tag_ids )
                
                next_tag_ids = set()
                
                # keep these separate--older sqlite can't do cross join to an OR ON
                
                # temp tag_ids to siblings
                queries = [
                    'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_siblings ON ( bad_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
                    'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_siblings ON ( good_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
                    'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_sibling_petitions ON ( bad_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
                    'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_sibling_petitions ON ( good_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name )
                ]
                
                query = ' UNION '.join( queries )
                
                for row in self._Execute( query, ( service_id, service_id, service_id, service_id ) ):
                    
                    result_rows.add( row )
                    
                    ( status, bad_tag_id, good_tag_id ) = row
                    
                    for tag_id in ( bad_tag_id, good_tag_id ):
                        
                        if tag_id not in done_tag_ids:
                            
                            next_tag_ids.add( tag_id )
                            
                        
                    
                
            
        
        unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( bad_tag_id, good_tag_id ) ) for ( status, bad_tag_id, good_tag_id ) in result_rows )
        
        statuses_to_pair_ids = collections.defaultdict( list )
        
        statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } )
        
        return statuses_to_pair_ids
        
    
    def IsChained( self, display_type, tag_service_id, tag_id ):
        
        cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
        
        return self._Execute( 'SELECT 1 FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, tag_id ) ).fetchone() is not None
        
    
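    # GetTagSiblingsIdsChains above is a breadth-first walk over the raw pair tables:
    # every pass looks up all pairs touching the current frontier of tag_ids, queues
    # any tag it has not yet seen, and stops once a pass discovers nothing new, so the
    # result is every pair transitively connected to the requested tags
    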
    def NotifySiblingAddRowSynced( self, tag_service_id, row ):
        
        if tag_service_id in self._service_ids_to_display_application_status:
            
            ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ tag_service_id ]
            
            sibling_rows_to_add.discard( row )
            
            num_actual_rows += 1
            
            self._service_ids_to_display_application_status[ tag_service_id ] = ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows )
            
        
    
    def NotifySiblingDeleteRowSynced( self, tag_service_id, row ):
        
        if tag_service_id in self._service_ids_to_display_application_status:
            
            ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ tag_service_id ]
            
            sibling_rows_to_remove.discard( row )
            
            num_actual_rows -= 1
            
            self._service_ids_to_display_application_status[ tag_service_id ] = ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows )
            
        
    
    def PendTagSiblings( self, service_id, triples ):
        
        self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id, reason_id ) in triples ) )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_sibling_petitions ( service_id, bad_tag_id, good_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, reason_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id, reason_id ) in triples ) )
        
    
    def PetitionTagSiblings( self, service_id, triples ):
        
        self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id, reason_id ) in triples ) )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_sibling_petitions ( service_id, bad_tag_id, good_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, reason_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id, reason_id ) in triples ) )
        
    
    def RescindPendingTagSiblings( self, service_id, pairs ):
        
        self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id ) in pairs ) )
        
    
    def RescindPetitionedTagSiblings( self, service_id, pairs ):
        
        self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id ) in pairs ) )
        
    
    def Regen( self, tag_service_ids ):
        
        for tag_service_id in tag_service_ids:
            
            cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id )
            
            self._Execute( 'DELETE FROM {};'.format( cache_tag_siblings_lookup_table_name ) )
            
            applicable_service_ids = self.GetApplicableServiceIds( tag_service_id )
            
            tss = ClientTagsHandling.TagSiblingsStructure()
            
            for applicable_service_id in applicable_service_ids:
                
                statuses_to_pair_ids = self.GetTagSiblingsIds( service_id = applicable_service_id )
                
                petitioned_fast_lookup = set( statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] )
                
                for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]:
                    
                    if ( bad_tag_id, good_tag_id ) in petitioned_fast_lookup:
                        
                        continue
                        
                    
                    tss.AddPair( bad_tag_id, good_tag_id )
                    
                
                for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]:
                    
                    tss.AddPair( bad_tag_id, good_tag_id )
                    
                
            
            self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() )
            
            if tag_service_id in self._service_ids_to_display_application_status:
                
                del self._service_ids_to_display_application_status[ tag_service_id ]
                
            
        
    
    def RegenChains( self, tag_service_ids, tag_ids ):
        
        # as this guy can change ideals, the related parent chains need to be regenned afterwards too
        
        if len( tag_ids ) == 0:
            
            return
            
        
        for tag_service_id in tag_service_ids:
            
            cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id )
            
            tag_ids_to_clear_and_regen = set( tag_ids )
            
            ideal_tag_ids = self.GetIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids )
            
            tag_ids_to_clear_and_regen.update( self.GetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, ideal_tag_ids ) )
            
            self._ExecuteMany( 'DELETE FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) )
            
            applicable_tag_service_ids = self.GetApplicableServiceIds( tag_service_id )
            
            tss = ClientTagsHandling.TagSiblingsStructure()
            
            for applicable_tag_service_id in applicable_tag_service_ids:
                
                service_key = self.modules_services.GetService( applicable_tag_service_id ).GetServiceKey()
                
                statuses_to_pair_ids = self.GetTagSiblingsIdsChains( applicable_tag_service_id, tag_ids_to_clear_and_regen )
                
                petitioned_fast_lookup = set( statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] )
                
                for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]:
                    
                    if ( bad_tag_id, good_tag_id ) in petitioned_fast_lookup:
                        
                        continue
                        
                    
                    tss.AddPair( bad_tag_id, good_tag_id )
                    
                
                for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]:
                    
                    tss.AddPair( bad_tag_id, good_tag_id )
                    
                
            
            self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() )
            
            if tag_service_id in self._service_ids_to_display_application_status:
                
                del self._service_ids_to_display_application_status[ tag_service_id ]
                
            
        
    
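    # note the shared order of operations in Regen and RegenChains: petitioned pairs
    # mask their current versions, pending pairs are then layered on top, and only the
    # final collapsed bad_tag_id->ideal_tag_id mapping is written to the lookup table
    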
    def SetApplication( self, service_keys_to_applicable_service_keys ):
        
        if self._service_ids_to_applicable_service_ids is None:
            
            self._GenerateApplicationDicts()
            
        
        new_service_ids_to_applicable_service_ids = collections.defaultdict( list )
        
        for ( master_service_key, applicable_service_keys ) in service_keys_to_applicable_service_keys.items():
            
            master_service_id = self.modules_services.GetServiceId( master_service_key )
            applicable_service_ids = [ self.modules_services.GetServiceId( service_key ) for service_key in applicable_service_keys ]
            
            new_service_ids_to_applicable_service_ids[ master_service_id ] = applicable_service_ids
            
        
        old_and_new_master_service_ids = set( self._service_ids_to_applicable_service_ids.keys() )
        old_and_new_master_service_ids.update( new_service_ids_to_applicable_service_ids.keys() )
        
        inserts = []
        
        service_ids_to_sync = set()
        
        for master_service_id in old_and_new_master_service_ids:
            
            if master_service_id in new_service_ids_to_applicable_service_ids:
                
                applicable_service_ids = new_service_ids_to_applicable_service_ids[ master_service_id ]
                
                inserts.extend( ( ( master_service_id, i, applicable_service_id ) for ( i, applicable_service_id ) in enumerate( applicable_service_ids ) ) )
                
                if applicable_service_ids != self._service_ids_to_applicable_service_ids[ master_service_id ]:
                    
                    service_ids_to_sync.add( master_service_id )
                    
                
            else:
                
                service_ids_to_sync.add( master_service_id )
                
            
        
        self._Execute( 'DELETE FROM tag_sibling_application;' )
        
        self._ExecuteMany( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', inserts )
        
        self._service_ids_to_applicable_service_ids = None
        self._service_ids_to_interested_service_ids = None
        
        return service_ids_to_sync
        
    
@@ -130,7 +130,7 @@ def THREADUploadPending( service_key ):
    
    nums_pending_for_this_service = nums_pending[ service_key ]
    
-   content_types_for_this_service = set( HC.REPOSITORY_CONTENT_TYPES[ service_type ] )
+   content_types_for_this_service = set( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service_type ] )
    
    if service_type in HC.REPOSITORIES:
@@ -3754,7 +3754,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
    
    if result == QW.QDialog.Accepted:
        
-       self._controller.Write( 'regenerate_tag_siblings_cache' )
+       self._controller.Write( 'regenerate_tag_siblings_and_parents_cache' )
        
    
@@ -4913,6 +4913,8 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
    
    HG.query_planner_mode = False
    
+   HG.queries_planned = set()
+   
    HydrusData.ShowText( 'Query Planning done: {} queries analyzed'.format( HydrusData.ToHumanInt( HG.query_planner_query_count ) ) )
    
@@ -6397,7 +6399,7 @@ Try to keep this below 10 million!'''
    
    services = self._controller.services_manager.GetServices()
    
-   petition_permissions = [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.REPOSITORY_CONTENT_TYPES ]
+   petition_permissions = [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES ]
    
    repositories = [ service for service in services if service.GetServiceType() in HC.REPOSITORIES ]
@@ -218,7 +218,9 @@ class mpvWidget( QW.QWidget ):
        
        return
        
    
-   if self._media is not None and self._player.time_pos <= 1.0:
+   current_timestamp_s = self._player.time_pos
+   
+   if self._media is not None and current_timestamp_s is not None and current_timestamp_s <= 1.0:
        
        self._current_seek_to_start_count += 1
        
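The mpv change above guards against time_pos being None, which python-mpv reports while no playback position is available yet; in Python 3, None <= 1.0 raises a TypeError rather than comparing. A minimal sketch of the guarded pattern (names here are illustrative, not the exact hydrus attributes):

    current_timestamp_s = player.time_pos # may be None between loads
    
    if media is not None and current_timestamp_s is not None and current_timestamp_s <= 1.0:
        
        seek_to_start_count += 1
        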
@@ -3557,8 +3557,15 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
    
    if self._i_am_local_tag_service:
        
-       for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]: content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_ADD, pair ) )
-       for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PETITIONED ]: content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_DELETE, pair ) )
+       for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PETITIONED ]:
+           
+           content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_DELETE, pair ) )
+           
+       
+       for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]:
+           
+           content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_ADD, pair ) )
+           
+       
    else:
@@ -3574,10 +3581,10 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
        new_petitions = current_petitioned.difference( original_petitioned )
        rescinded_petitions = original_petitioned.difference( current_petitioned )
        
-       content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PEND, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_pends ) )
-       content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_RESCIND_PETITION, pair ) for pair in rescinded_petitions ) )
        content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_RESCIND_PEND, pair ) for pair in rescinded_pends ) )
        content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PETITION, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_petitions ) )
+       content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_RESCIND_PETITION, pair ) for pair in rescinded_petitions ) )
+       content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PEND, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_pends ) )
    
    return ( self._service_key, content_updates )
@@ -4566,16 +4573,16 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
    
    if self._i_am_local_tag_service:
        
-       for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]:
-           
-           content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_ADD, pair ) )
-           
-       
        for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PETITIONED ]:
            
            content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_DELETE, pair ) )
            
        
+       for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]:
+           
+           content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_ADD, pair ) )
+           
+       
    else:
        
        current_pending = self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]
@@ -4590,10 +4597,10 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
        new_petitions = current_petitioned.difference( original_petitioned )
        rescinded_petitions = original_petitioned.difference( current_petitioned )
        
-       content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PEND, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_pends ) )
-       content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_RESCIND_PETITION, pair ) for pair in rescinded_petitions ) )
        content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_RESCIND_PEND, pair ) for pair in rescinded_pends ) )
        content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PETITION, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_petitions ) )
+       content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_RESCIND_PETITION, pair ) for pair in rescinded_petitions ) )
+       content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PEND, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_pends ) )
    
    return ( self._service_key, content_updates )
@@ -1123,6 +1123,7 @@ class CanvasHoverFrameTop( CanvasHoverFrame ):
    
    if do_redraw:
        
+       self._ResetText()
        self._ResetButtons()
        
    
@@ -952,6 +952,8 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
    self._potential_file_search_currently_happening = False
    self._maintenance_numbers_need_redrawing = True
    
+   self._potential_duplicates_count = 0
+   
    self._have_done_first_maintenance_numbers_show = False
    
    new_options = self._controller.new_options
@@ -1277,7 +1279,14 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
    
    self._dupe_count_numbers_dirty = True
    
-   self._ShowRandomPotentialDupes()
+   if self._potential_duplicates_count > 1:
+       
+       self._ShowRandomPotentialDupes()
+       
+   else:
+       
+       self._ShowPotentialDupes( [] )
+       
+   
@@ -1288,22 +1297,20 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
        self._UpdateMaintenanceStatus()
        
    
-   def _ShowRandomPotentialDupes( self ):
+   def _ShowPotentialDupes( self, hashes ):
        
        ( file_search_context, both_files_match ) = self._GetFileSearchContextAndBothFilesMatch()
        
        file_service_key = file_search_context.GetFileServiceKey()
        
-       hashes = self._controller.Read( 'random_potential_duplicate_hashes', file_search_context, both_files_match )
-       
-       if len( hashes ) == 0:
+       if len( hashes ) > 0:
            
-           QW.QMessageBox.critical( self, 'Error', 'No files were found. Try refreshing the count, and if this keeps happening, please let hydrus_dev know.' )
+           media_results = self._controller.Read( 'media_results', hashes, sorted = True )
            
-           return
+       else:
            
+           media_results = []
+           
        
-       media_results = self._controller.Read( 'media_results', hashes, sorted = True )
-       
        panel = ClientGUIResults.MediaPanelThumbnails( self._page, self._page_key, file_service_key, media_results )
@@ -1312,6 +1319,20 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
        self._page.SwapMediaPanel( panel )
        
    
+   def _ShowRandomPotentialDupes( self ):
+       
+       ( file_search_context, both_files_match ) = self._GetFileSearchContextAndBothFilesMatch()
+       
+       hashes = self._controller.Read( 'random_potential_duplicate_hashes', file_search_context, both_files_match )
+       
+       if len( hashes ) == 0:
+           
+           HydrusData.ShowText( 'No files were found. Try refreshing the count, and if this keeps happening, please let hydrus_dev know.' )
+           
+       
+       self._ShowPotentialDupes( hashes )
+       
+   
    def _UpdateMaintenanceStatus( self ):
        
        self._refresh_maintenance_button.setEnabled( not ( self._duplicates_manager_is_fetching_maintenance_numbers or self._potential_file_search_currently_happening ) )
@@ -1408,9 +1429,11 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
    
    def _UpdatePotentialDuplicatesCount( self, potential_duplicates_count ):
        
+       self._potential_duplicates_count = potential_duplicates_count
+       
        self._num_potential_duplicates.setText( '{} potential pairs.'.format( HydrusData.ToHumanInt( potential_duplicates_count ) ) )
        
-       if potential_duplicates_count > 0:
+       if self._potential_duplicates_count > 0:
            
            self._show_some_dupes.setEnabled( True )
            self._launch_filter.setEnabled( True )
@@ -110,9 +110,7 @@ class DialogPageChooser( ClientGUIDialogs.Dialog ):
    
    self._services = HG.client_controller.services_manager.GetServices()
    
-   repository_petition_permissions = [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.REPOSITORY_CONTENT_TYPES ]
-   
-   self._petition_service_keys = [ service.GetServiceKey() for service in self._services if service.GetServiceType() in HC.REPOSITORIES and True in ( service.HasPermission( content_type, action ) for ( content_type, action ) in repository_petition_permissions ) ]
+   self._petition_service_keys = [ service.GetServiceKey() for service in self._services if service.GetServiceType() in HC.REPOSITORIES and True in ( service.HasPermission( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service.GetServiceType() ] ) ]
    
    self._InitButtons( 'home' )
@@ -1595,8 +1593,10 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
    
    click_over_tab = tab_index != -1
    
+   can_go_home = tab_index > 1
    can_go_left = tab_index > 0
    can_go_right = tab_index < end_index
+   can_go_end = tab_index < end_index - 1
    
    click_over_page_of_pages = False
@@ -1619,20 +1619,101 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
    
    ClientGUIMenus.AppendMenuItem( menu, 'close page', 'Close this page.', self._ClosePage, tab_index )
    
-   if num_pages > 1:
+   if more_than_one_tab:
        
-       ClientGUIMenus.AppendMenuItem( menu, 'close other pages', 'Close all pages but this one.', self._CloseOtherPages, tab_index )
+       if not can_go_left or not can_go_right:
+           
+           if num_pages == 2:
+               
+               label = 'close other page'
+               description = 'Close the other page.'
+               
+           else:
+               
+               label = 'close other pages'
+               description = 'Close all pages but this one.'
+               
+           
+           ClientGUIMenus.AppendMenuItem( menu, label, description, self._CloseOtherPages, tab_index )
+           
+       else:
+           
+           close_menu = QW.QMenu( menu )
+           
+           ClientGUIMenus.AppendMenuItem( close_menu, 'other pages', 'Close all pages but this one.', self._CloseOtherPages, tab_index )
+           
+           if can_go_left:
+               
+               ClientGUIMenus.AppendMenuItem( close_menu, 'pages to the left', 'Close all pages to the left of this one.', self._CloseLeftPages, tab_index )
+               
+           
+           if can_go_right:
+               
+               ClientGUIMenus.AppendMenuItem( close_menu, 'pages to the right', 'Close all pages to the right of this one.', self._CloseRightPages, tab_index )
+               
+           
+           ClientGUIMenus.AppendMenu( menu, close_menu, 'close' )
+           
+       
+       #
+       
+       move_menu = QW.QMenu( menu )
+       
+       if can_go_home:
+           
+           ClientGUIMenus.AppendMenuItem( move_menu, 'to left end', 'Move this page all the way to the left.', self._ShiftPage, tab_index, new_index=0 )
+           
+       
        if can_go_left:
            
-           ClientGUIMenus.AppendMenuItem( menu, 'close pages to the left', 'Close all pages to the left of this one.', self._CloseLeftPages, tab_index )
+           ClientGUIMenus.AppendMenuItem( move_menu, 'left', 'Move this page one to the left.', self._ShiftPage, tab_index, delta=-1 )
            
        
        if can_go_right:
            
-           ClientGUIMenus.AppendMenuItem( menu, 'close pages to the right', 'Close all pages to the right of this one.', self._CloseRightPages, tab_index )
+           ClientGUIMenus.AppendMenuItem( move_menu, 'right', 'Move this page one to the right.', self._ShiftPage, tab_index, 1 )
            
        
+       if can_go_end:
+           
+           ClientGUIMenus.AppendMenuItem( move_menu, 'to right end', 'Move this page all the way to the right.', self._ShiftPage, tab_index, new_index=end_index )
+           
+       
+       ClientGUIMenus.AppendMenu( menu, move_menu, 'move page' )
+       
+       #
+       
+       selection_index = self.currentIndex()
+       
+       can_select_home = selection_index > 1
+       can_select_left = selection_index > 0
+       can_select_right = selection_index < end_index
+       can_select_end = selection_index < end_index - 1
+       
+       select_menu = QW.QMenu( menu )
+       
+       if can_select_home:
+           
+           ClientGUIMenus.AppendMenuItem( select_menu, 'first page', 'Select the page at the start of these.', self.MoveSelectionEnd, -1 )
+           
+       
+       if can_select_left:
+           
+           ClientGUIMenus.AppendMenuItem( select_menu, 'page to the left', 'Select the page to the left of this one.', self.MoveSelection, -1 )
+           
+       
+       if can_select_right:
+           
+           ClientGUIMenus.AppendMenuItem( select_menu, 'page to the right', 'Select the page to the right of this one.', self.MoveSelection, 1 )
+           
+       
+       if can_select_end:
+           
+           ClientGUIMenus.AppendMenuItem( select_menu, 'last page', 'Select the page at the end of these.', self.MoveSelectionEnd, 1 )
+           
+       
+       ClientGUIMenus.AppendMenu( menu, select_menu, 'select' )
+       
+   
    ClientGUIMenus.AppendSeparator( menu )
@@ -1651,33 +1732,6 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
    
-   if more_than_one_tab:
-       
-       ClientGUIMenus.AppendSeparator( menu )
-       
-       can_home = tab_index > 1
-       can_move_left = tab_index > 0
-       can_move_right = tab_index < end_index
-       can_end = tab_index < end_index - 1
-       
-       if can_home:
-           
-           ClientGUIMenus.AppendMenuItem( menu, 'move to left end', 'Move this page all the way to the left.', self._ShiftPage, tab_index, new_index=0 )
-           
-       
-       if can_move_left:
-           
-           ClientGUIMenus.AppendMenuItem( menu, 'move left', 'Move this page one to the left.', self._ShiftPage, tab_index, delta=-1 )
-           
-       
-       if can_move_right:
-           
-           ClientGUIMenus.AppendMenuItem( menu, 'move right', 'Move this page one to the right.', self._ShiftPage, tab_index, 1 )
-           
-       
-       if can_end:
-           
-           ClientGUIMenus.AppendMenuItem( menu, 'move to right end', 'Move this page all the way to the right.', self._ShiftPage, tab_index, new_index=end_index )
-           
-       
    ClientGUIMenus.AppendSeparator( menu )
    
    submenu = QW.QMenu( menu )
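The reworked page tab menu above clumps its actions: an end tab, where only one close direction applies, keeps a single flat 'close other page(s)' item, so duplicates like 'close pages to the left' on the leftmost tab no longer appear, while a middle tab gets 'close', 'move page' and 'select' submenus. A minimal sketch of the clumping test, assuming the two booleans computed per tab:

    def close_actions_get_a_submenu( can_go_left: bool, can_go_right: bool ) -> bool:
        
        # end tabs keep one flat item; middle tabs clump other/left/right into a submenu
        return can_go_left and can_go_right
        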
@@ -2566,7 +2566,7 @@ class ReviewServiceRepositorySubPanel( QW.QWidget ):
    
    #
    
-   content_types = tuple( HC.REPOSITORY_CONTENT_TYPES[ self._service.GetServiceType() ] )
+   content_types = tuple( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service.GetServiceType() ] )
    
    self._content_types_to_gauges_and_buttons = {}
@@ -618,20 +618,15 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):
        raise HydrusExceptions.VetoException( 'Source file does not exist!' )
        
    
-   if limited_mimes is not None:
-       
-       mime = HydrusFileHandling.GetMime( path )
-       
-       if mime not in limited_mimes:
-           
-           raise HydrusExceptions.VetoException( 'Not in allowed mimes!' )
-           
-       
-   
    ( os_file_handle, temp_path ) = HydrusPaths.GetTempPath()
    
    try:
        
        if status_hook is not None:
            
            status_hook( 'copying file to temp location' )
            
        
        copied = HydrusPaths.MirrorFile( path, temp_path )
        
        if not copied:
@@ -639,6 +634,23 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):
            raise Exception( 'File failed to copy to temp path--see log for error.' )
            
        
+       if limited_mimes is not None:
+           
+           # I think this thing should and will be rolled into file import options later
+           
+           if status_hook is not None:
+               
+               status_hook( 'testing file type' )
+               
+           
+           mime = HydrusFileHandling.GetMime( temp_path )
+           
+           if mime not in limited_mimes:
+               
+               raise HydrusExceptions.VetoException( 'Not in allowed mimes!' )
+               
+           
+       
        self.Import( temp_path, file_import_options, status_hook = status_hook )
        
    finally:
@@ -139,21 +139,11 @@ class FileImportJob( object ):
            HydrusData.ShowText( 'File import job starting work.' )
            
        
-       if status_hook is not None:
-           
-           status_hook( 'calculating pre-import status' )
-           
-       
-       self.GeneratePreImportHashAndStatus()
+       self.GeneratePreImportHashAndStatus( status_hook = status_hook )
        
        if self._pre_import_file_status.ShouldImport( self._file_import_options ):
            
-           if status_hook is not None:
-               
-               status_hook( 'generating metadata' )
-               
-           
-           self.GenerateInfo()
+           self.GenerateInfo( status_hook = status_hook )
            
            try:
@@ -178,14 +168,14 @@ class FileImportJob( object ):
            
            if status_hook is not None:
                
-               status_hook( 'copying file' )
+               status_hook( 'copying file into file storage' )
                
            
            HG.client_controller.client_files_manager.AddFile( hash, mime, self._temp_path, thumbnail_bytes = self._thumbnail_bytes )
            
            if status_hook is not None:
                
-               status_hook( 'updating database' )
+               status_hook( 'importing to database' )
                
            
            self._post_import_file_status = HG.client_controller.WriteSynchronous( 'import_file', self )
@@ -210,10 +200,15 @@ class FileImportJob( object ):
        return self._post_import_file_status
        
    
-   def GeneratePreImportHashAndStatus( self ):
+   def GeneratePreImportHashAndStatus( self, status_hook = None ):
        
        HydrusImageHandling.ConvertToPNGIfBMP( self._temp_path )
        
+       if status_hook is not None:
+           
+           status_hook( 'calculating hash' )
+           
+       
        hash = HydrusFileHandling.GetHashFromPath( self._temp_path )
        
        if HG.file_import_report_mode:
@@ -221,6 +216,11 @@ class FileImportJob( object ):
            HydrusData.ShowText( 'File import job hash: {}'.format( hash.hex() ) )
            
        
+       if status_hook is not None:
+           
+           status_hook( 'checking for file status' )
+           
+       
        self._pre_import_file_status = HG.client_controller.Read( 'hash_status', 'sha256', hash, prefix = 'file recognised' )
        
        # just in case
@@ -234,10 +234,15 @@ class FileImportJob( object ):
        
    
-   def GenerateInfo( self ):
+   def GenerateInfo( self, status_hook = None ):
        
        if self._pre_import_file_status.mime is None:
            
+           if status_hook is not None:
+               
+               status_hook( 'generating filetype' )
+               
+           
            mime = HydrusFileHandling.GetMime( self._temp_path )
            
            self._pre_import_file_status.mime = mime
@@ -272,6 +277,11 @@ class FileImportJob( object ):
            
        
+       if status_hook is not None:
+           
+           status_hook( 'generating file metadata' )
+           
+       
        self._file_info = HydrusFileHandling.GetFileInfo( self._temp_path, mime )
        
        ( size, mime, width, height, duration, num_frames, has_audio, num_words ) = self._file_info
@@ -283,6 +293,11 @@ class FileImportJob( object ):

if mime in HC.MIMES_WITH_THUMBNAILS:
    
    if status_hook is not None:
        
        status_hook( 'generating thumbnail' )
        
    
    if HG.file_import_report_mode:
        
        HydrusData.ShowText( 'File import job generating thumbnail' )

@@ -306,6 +321,11 @@ class FileImportJob( object ):

if mime in HC.MIMES_WE_CAN_PHASH:
    
    if status_hook is not None:
        
        status_hook( 'generating similar files metadata' )
        
    
    if HG.file_import_report_mode:
        
        HydrusData.ShowText( 'File import job generating phashes' )

@@ -324,6 +344,11 @@ class FileImportJob( object ):

HydrusData.ShowText( 'File import job generating other hashes' )

if status_hook is not None:
    
    status_hook( 'generating additional hashes' )
    

self._extra_hashes = HydrusFileHandling.GetExtraHashesFromPath( self._temp_path )

self._file_modified_timestamp = HydrusFileHandling.GetFileModifiedTimestamp( self._temp_path )

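The status_hook threaded through these hunks is just an optional callable that receives a single status string; each stage of the import reports itself before doing its work. A minimal sketch of a compatible hook, assuming nothing beyond the one-string signature (make_status_hook and its label are inventions for the example):

import time

def make_status_hook( label ):
    
    def status_hook( text ):
        
        # a real hook would update a popup or job object; here we just print with a timestamp
        print( '[{}] {}: {}'.format( time.strftime( '%H:%M:%S' ), label, text ) )
        
    
    return status_hook

hook = make_status_hook( 'import' )

hook( 'calculating hash' )
hook( 'copying file into file storage' )
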
@@ -1710,6 +1710,8 @@ class MediaList( object ):

self._sorted_media.sort( sort_key = sort_key, reverse = reverse )

self._RecalcHashes()

FILE_FILTER_ALL = 0
FILE_FILTER_NOT_SELECTED = 1

@@ -39,7 +39,7 @@ LOCAL_BOORU_JSON_BYTE_LIST_PARAMS = set()

CLIENT_API_INT_PARAMS = { 'file_id', 'file_sort_type' }
CLIENT_API_BYTE_PARAMS = { 'hash', 'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key', 'tag_service_key', 'file_service_key' }
CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain' }
CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain', 'file_service_name', 'tag_service_name' }
CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'detailed_url_information', 'simple', 'file_sort_asc' }
CLIENT_API_JSON_BYTE_LIST_PARAMS = { 'hashes' }

@@ -1694,7 +1694,7 @@ class HydrusResourceClientAPIRestrictedGetFilesSearchFiles( HydrusResourceClient

sort_order = CC.SORT_ASC if file_sort_asc else CC.SORT_DESC

# newest first
sort_by = ClientMedia.MediaSort( sort_type = ( 'system', file_sort_type ), sort_order = CC.SORT_DESC )
sort_by = ClientMedia.MediaSort( sort_type = ( 'system', file_sort_type ), sort_order = sort_order )

hash_ids = HG.client_controller.Read( 'file_query_ids', file_search_context, sort_by = sort_by, apply_implicit_limit = False )

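With file_sort_type, file_sort_asc, and the two service-name params registered above, a search call can now control its own ordering and domains. A hypothetical request sketch (the endpoint path follows the resource class name; the port, access key, and sort enum value are assumptions for illustration):

import json

import requests

params = {
    'tags' : json.dumps( [ 'blue eyes' ] ),
    'file_sort_type' : 2,                      # an int from the client's file sort enum (illustrative value)
    'file_sort_asc' : json.dumps( True ),
    'file_service_name' : 'trash',
    'tag_service_name' : 'all known tags'
}

headers = { 'Hydrus-Client-API-Access-Key' : 'replace with a real 64-character hex access key' }

response = requests.get( 'http://127.0.0.1:45869/get_files/search_files', params = params, headers = headers )

print( response.json() )
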
@@ -2,7 +2,6 @@ import re

import subprocess

from hydrus.core import HydrusData
from hydrus.core import HydrusVideoHandling

def ParseFFMPEGAudio( lines ):

@@ -41,71 +40,3 @@ def ParseFFMPEGAudio( lines ):

return ( audio_found, audio_format )

def VideoHasAudio( path ):
    
    info_lines = HydrusVideoHandling.GetFFMPEGInfoLines( path )
    
    ( audio_found, audio_format ) = ParseFFMPEGAudio( info_lines )
    
    if not audio_found:
        
        return False
        
    
    # just because video metadata has an audio stream doesn't mean it has audio. some vids have silent audio streams lmao
    # so, let's read it as PCM and see if there is any noise
    # this obviously only works for single audio stream vids, we'll adapt this if someone discovers a multi-stream mkv with a silent channel that doesn't work here
    
    cmd = [ HydrusVideoHandling.FFMPEG_PATH ]
    
    # this is perhaps not sensible for eventual playback and I should rather go for wav file-like and feed into python 'wave' in order to maintain stereo/mono and so on and have easy chunk-reading
    
    cmd.extend( [ '-i', path,
                  '-loglevel', 'quiet',
                  '-f', 's16le',
                  '-' ] )
    
    sbp_kwargs = HydrusData.GetSubprocessKWArgs()
    
    HydrusData.CheckProgramIsNotShuttingDown()
    
    try:
        
        process = subprocess.Popen( cmd, bufsize = 65536, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs )
        
    except FileNotFoundError as e:
        
        HydrusData.ShowText( 'Cannot render audio--FFMPEG not found!' )
        
        raise
        
    
    # silent PCM data is just 00 bytes
    # every now and then, you'll get a couple ffs for some reason, but this is not legit audio data
    
    try:
        
        chunk_of_pcm_data = process.stdout.read( 65536 )
        
        while len( chunk_of_pcm_data ) > 0:
            
            # iterating over bytes gives you ints, recall
            if True in ( b != 0 and b != 255 for b in chunk_of_pcm_data ):
                
                return True
                
            
            chunk_of_pcm_data = process.stdout.read( 65536 )
            
        
        return False
        
    finally:
        
        process.terminate()
        
        process.stdout.close()
        process.stderr.close()

@@ -81,7 +81,7 @@ options = {}

# Misc

NETWORK_VERSION = 20
SOFTWARE_VERSION = 450
SOFTWARE_VERSION = 451
CLIENT_API_VERSION = 19

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@@ -90,7 +90,7 @@ HYDRUS_KEY_LENGTH = 32

READ_BLOCK_SIZE = 256 * 1024

lifetimes = [ ( 'one month', 31 * 86400 ), ( 'three months', 3 * 31 * 86400 ), ( 'six months', 6 * 31 * 86400 ), ( 'one year', 12 * 31 * 86400 ), ( 'two years', 24 * 31 * 86400 ), ( 'five years', 60 * 31 * 86400 ), ( 'does not expire', None ) ]
lifetimes = [ ( 'one month', 30 * 86400 ), ( 'three months', 3 * 30 * 86400 ), ( 'six months', 6 * 30 * 86400 ), ( 'one year', 365 * 86400 ), ( 'two years', 2 * 365 * 86400 ), ( 'five years', 5 * 365 * 86400 ), ( 'does not expire', None ) ]

# some typing stuff

@@ -176,8 +176,6 @@ content_type_string_lookup[ CONTENT_TYPE_NOTES ] = 'notes'

content_type_string_lookup[ CONTENT_TYPE_FILE_VIEWING_STATS ] = 'file viewing stats'
content_type_string_lookup[ CONTENT_TYPE_DEFINITIONS ] = 'definitions'

REPOSITORY_CONTENT_TYPES = [ CONTENT_TYPE_FILES, CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS ]

CONTENT_UPDATE_ADD = 0
CONTENT_UPDATE_DELETE = 1
CONTENT_UPDATE_PEND = 2

@@ -441,9 +439,16 @@ ALL_FILE_SERVICES = FILE_SERVICES + ( COMBINED_FILE, )

SERVICES_WITH_THUMBNAILS = [ FILE_REPOSITORY, LOCAL_FILE_DOMAIN ]

REPOSITORY_CONTENT_TYPES = {
SERVICE_TYPES_TO_CONTENT_TYPES = {
    FILE_REPOSITORY : ( CONTENT_TYPE_FILES, ),
    TAG_REPOSITORY : ( CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS )
    LOCAL_FILE_DOMAIN : ( CONTENT_TYPE_FILES, ),
    LOCAL_FILE_TRASH_DOMAIN : ( CONTENT_TYPE_FILES, ),
    COMBINED_LOCAL_FILE : ( CONTENT_TYPE_FILES, ),
    IPFS : ( CONTENT_TYPE_FILES, ),
    TAG_REPOSITORY : ( CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS ),
    LOCAL_TAG : ( CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS ),
    LOCAL_RATING_LIKE : ( CONTENT_TYPE_RATINGS, ),
    LOCAL_RATING_NUMERICAL : ( CONTENT_TYPE_RATINGS, )
}

DELETE_FILES_PETITION = 0

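The widened mapping means permission and petition code can ask exactly which content types a given service hosts instead of assuming the full repository set. A standalone sketch of the lookup pattern, with illustrative stand-in values for the real enum constants:

# stand-in values; hydrus's real constants live in HydrusConstants
CONTENT_TYPE_FILES = 1
CONTENT_TYPE_MAPPINGS = 2
CONTENT_TYPE_TAG_PARENTS = 3
CONTENT_TYPE_TAG_SIBLINGS = 4

FILE_REPOSITORY = 'file repository'
TAG_REPOSITORY = 'tag repository'

SERVICE_TYPES_TO_CONTENT_TYPES = {
    FILE_REPOSITORY : ( CONTENT_TYPE_FILES, ),
    TAG_REPOSITORY : ( CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS )
}

def default_moderator_permissions( service_type ):
    
    # grant moderate only over what this service actually hosts
    return { content_type : 'moderate' for content_type in SERVICE_TYPES_TO_CONTENT_TYPES.get( service_type, () ) }

print( default_moderator_permissions( FILE_REPOSITORY ) )
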
@@ -646,6 +646,13 @@ class HydrusController( object ):

def PrintQueryPlan( self, query, plan_lines ):
    
    if query in HG.queries_planned:
        
        return
        
    
    HG.queries_planned.add( query )
    
    pretty_timestamp = time.strftime( '%Y-%m-%d %H-%M-%S', time.localtime( HG.query_planner_start_time ) )
    
    query_planner_log_filename = '{} query planner - {}.log'.format( self._name, pretty_timestamp )

@@ -148,7 +148,7 @@ class DBBase( object ):

def _Execute( self, query, *args ) -> sqlite3.Cursor:
    
    if HG.query_planner_mode:
    if HG.query_planner_mode and query not in HG.queries_planned:
        
        plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), *args ).fetchall()

@@ -162,15 +162,18 @@ class DBBase( object ):

def _ExecuteMany( self, query, args_iterator ):
    
    if HG.query_planner_mode:
    if HG.query_planner_mode and query not in HG.queries_planned:
        
        args_iterator = list( args_iterator )
        
        plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), args_iterator[0] ).fetchall()
        
        HG.query_planner_query_count += 1
        
        HG.client_controller.PrintQueryPlan( query, plan_lines )
        if len( args_iterator ) > 0:
            
            plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), args_iterator[0] ).fetchall()
            
            HG.query_planner_query_count += 1
            
            HG.client_controller.PrintQueryPlan( query, plan_lines )
            
        
    
    self._c.executemany( query, args_iterator )

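The net effect of these two hunks plus the queries_planned set in HydrusGlobals is that each distinct query text is EXPLAINed and logged once per session rather than on every execution. A self-contained sketch of the same dedup idea against plain sqlite3:

import sqlite3

queries_planned = set()

def execute_with_plan( cursor, query, *args ):
    
    # only pay the EXPLAIN cost the first time we see this query text
    if query not in queries_planned:
        
        queries_planned.add( query )
        
        plan_lines = cursor.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), *args ).fetchall()
        
        for line in plan_lines:
            
            print( '{}: {}'.format( query, line ) )
            
        
    
    return cursor.execute( query, *args )

db = sqlite3.connect( ':memory:' )
c = db.cursor()

c.execute( 'CREATE TABLE files ( hash_id INTEGER PRIMARY KEY, size INTEGER );' )

execute_with_plan( c, 'SELECT hash_id FROM files WHERE size > ?;', ( 1024, ) )
execute_with_plan( c, 'SELECT hash_id FROM files WHERE size > ?;', ( 2048, ) )   # not re-planned
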
@@ -285,7 +285,7 @@ def TimeDeltaToPrettyTimeDelta( seconds, show_seconds = True ):

HOUR = 60 * MINUTE
DAY = 24 * HOUR
MONTH = 30 * DAY
YEAR = 12 * MONTH
YEAR = 365 * DAY

lines = []

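This pairs with the lifetimes change above: the pretty-printer previously treated a year as twelve 30-day months. A quick check of what the new constant buys (plain arithmetic, no hydrus imports needed):

MINUTE = 60
HOUR = 60 * MINUTE
DAY = 24 * HOUR
MONTH = 30 * DAY

old_year = 12 * MONTH   # 31,104,000 seconds, i.e. 360 days
new_year = 365 * DAY    # 31,536,000 seconds

print( ( new_year - old_year ) // DAY )   # 5 days recovered per pretty-printed year
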
@@ -193,6 +193,15 @@ def GetFileInfo( path, mime = None, ok_to_look_for_hydrus_updates = False ):

num_frames = None
num_words = None

if mime in HC.MIMES_THAT_DEFINITELY_HAVE_AUDIO:
    
    has_audio = True
    
else:
    
    has_audio = False
    

if mime in ( HC.IMAGE_JPEG, HC.IMAGE_PNG, HC.IMAGE_GIF, HC.IMAGE_WEBP, HC.IMAGE_TIFF, HC.IMAGE_ICON ):
    
    ( ( width, height ), duration, num_frames ) = HydrusImageHandling.GetImageProperties( path, mime )

@@ -203,7 +212,7 @@ def GetFileInfo( path, mime = None, ok_to_look_for_hydrus_updates = False ):

elif mime in ( HC.IMAGE_APNG, HC.VIDEO_AVI, HC.VIDEO_FLV, HC.VIDEO_WMV, HC.VIDEO_MOV, HC.VIDEO_MP4, HC.VIDEO_MKV, HC.VIDEO_REALMEDIA, HC.VIDEO_WEBM, HC.VIDEO_MPEG ):
    
    ( ( width, height ), duration, num_frames ) = HydrusVideoHandling.GetFFMPEGVideoProperties( path )
    ( ( width, height ), duration, num_frames, has_audio ) = HydrusVideoHandling.GetFFMPEGVideoProperties( path )
    
elif mime == HC.APPLICATION_PDF:

@@ -222,19 +231,6 @@ def GetFileInfo( path, mime = None, ok_to_look_for_hydrus_updates = False ):

duration = int( file_duration_in_s * 1000 )

if mime in HC.MIMES_THAT_DEFINITELY_HAVE_AUDIO:
    
    has_audio = True
    
elif mime in HC.MIMES_THAT_MAY_HAVE_AUDIO:
    
    has_audio = HydrusAudioHandling.VideoHasAudio( path )
    
else:
    
    has_audio = False
    

if width is not None and width < 0:
    
    width *= -1

@@ -40,6 +40,7 @@ query_planner_mode = False

query_planner_start_time = 0
query_planner_query_count = 0
queries_planned = set()
profile_start_time = 0
profile_slow_count = 0
profile_fast_count = 0

@@ -292,7 +292,9 @@ def GetFFMPEGVideoProperties( path, force_count_frames_manually = False ):

duration_in_ms = int( duration * 1000 )

return ( resolution, duration_in_ms, num_frames )
has_audio = VideoHasAudio( path, lines_for_first_second )

return ( resolution, duration_in_ms, num_frames, has_audio )

def GetMime( path ):

@@ -750,6 +752,72 @@ def ParseFFMPEGVideoResolution( lines ):

raise HydrusExceptions.DamagedOrUnusualFileException( 'Error parsing resolution!' )

def VideoHasAudio( path, info_lines ):
    
    ( audio_found, audio_format ) = HydrusAudioHandling.ParseFFMPEGAudio( info_lines )
    
    if not audio_found:
        
        return False
        
    
    # just because video metadata has an audio stream doesn't mean it has audio. some vids have silent audio streams lmao
    # so, let's read it as PCM and see if there is any noise
    # this obviously only works for single audio stream vids, we'll adapt this if someone discovers a multi-stream mkv with a silent channel that doesn't work here
    
    cmd = [ FFMPEG_PATH ]
    
    # this is perhaps not sensible for eventual playback and I should rather go for wav file-like and feed into python 'wave' in order to maintain stereo/mono and so on and have easy chunk-reading
    
    cmd.extend( [ '-i', path,
                  '-loglevel', 'quiet',
                  '-f', 's16le',
                  '-' ] )
    
    sbp_kwargs = HydrusData.GetSubprocessKWArgs()
    
    HydrusData.CheckProgramIsNotShuttingDown()
    
    try:
        
        process = subprocess.Popen( cmd, bufsize = 65536, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs )
        
    except FileNotFoundError as e:
        
        HydrusData.ShowText( 'Cannot render audio--FFMPEG not found!' )
        
        raise
        
    
    # silent PCM data is just 00 bytes
    # every now and then, you'll get a couple ffs for some reason, but this is not legit audio data
    
    try:
        
        chunk_of_pcm_data = process.stdout.read( 65536 )
        
        while len( chunk_of_pcm_data ) > 0:
            
            # iterating over bytes gives you ints, recall
            if True in ( b != 0 and b != 255 for b in chunk_of_pcm_data ):
                
                return True
                
            
            chunk_of_pcm_data = process.stdout.read( 65536 )
            
        
        return False
        
    finally:
        
        process.terminate()
        
        process.stdout.close()
        process.stderr.close()
        
    

# This was built from moviepy's FFMPEG_VideoReader
class VideoRendererFFMPEG( object ):

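The silence probe is the interesting part of the move: decoded s16le PCM from a silent stream is overwhelmingly 0x00 bytes, with the occasional stray 0xff, so any other byte value counts as real audio. The core test restated as a runnable snippet (any() is equivalent to the 'True in (...)' spelling above):

def chunk_has_noise( chunk_of_pcm_data ):
    
    # iterating over a bytes object yields ints
    return any( b != 0 and b != 255 for b in chunk_of_pcm_data )

assert not chunk_has_noise( b'\x00' * 1024 )      # pure silence
assert not chunk_has_noise( b'\x00\xff' * 512 )   # the stray ffs are tolerated
assert chunk_has_noise( b'\x00\x17\xe2\x00' )     # real samples
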
@@ -17,7 +17,7 @@ MIN_UPDATE_PERIOD = 600

MAX_UPDATE_PERIOD = 100000 * 100 # three months or so jej

MIN_NULLIFICATION_PERIOD = 86400
MAX_NULLIFICATION_PERIOD = 86400 * 360
MAX_NULLIFICATION_PERIOD = 86400 * 365

def GenerateDefaultServiceDictionary( service_type ):

@@ -1060,7 +1060,7 @@ class AccountType( HydrusSerialisable.SerialisableBase ):

if service_type in HC.REPOSITORIES:
    
    for content_type in HC.REPOSITORY_CONTENT_TYPES:
    for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service_type ]:
        
        permissions[ content_type ] = HC.PERMISSION_ACTION_MODERATE

@@ -884,7 +884,7 @@ class HydrusResourceRestrictedNumPetitions( HydrusResourceRestricted ):

# further permissions checked in the db

request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.REPOSITORY_CONTENT_TYPES ] )
request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service.GetServiceType() ] ] )

def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):

@@ -1129,7 +1129,7 @@ class HydrusResourceRestrictedUpdate( HydrusResourceRestricted ):

# further permissions checked in the db

request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_PETITION ) for content_type in HC.REPOSITORY_CONTENT_TYPES ] )
request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_PETITION ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service.GetServiceType() ] ] )

@@ -1170,7 +1170,7 @@ class HydrusResourceRestrictedImmediateUpdate( HydrusResourceRestricted ):

def _checkAccountPermissions( self, request: HydrusServerRequest.HydrusRequest ):
    
    request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.REPOSITORY_CONTENT_TYPES ] )
    request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service.GetServiceType() ] ] )
    

def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ):

@@ -24,6 +24,7 @@ from hydrus.client import ClientManagers

from hydrus.client import ClientSearch
from hydrus.client import ClientServices
from hydrus.client.importing import ClientImportFiles
from hydrus.client.media import ClientMedia
from hydrus.client.media import ClientMediaManagers
from hydrus.client.media import ClientMediaResult
from hydrus.client.metadata import ClientTags

@@ -1809,6 +1810,8 @@ class TestClientAPI( unittest.TestCase ):

# search files

HG.test_controller.ClearReads( 'file_query_ids' )

sample_hash_ids = set( random.sample( hash_ids, 3 ) )

HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )

@@ -1833,10 +1836,31 @@ class TestClientAPI( unittest.TestCase ):

self.assertEqual( d, expected_answer )

[ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )

( file_search_context, ) = args

self.assertEqual( file_search_context.GetFileServiceKey(), CC.LOCAL_FILE_SERVICE_KEY )
self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )

self.assertIn( 'sort_by', kwargs )

sort_by = kwargs[ 'sort_by' ]

self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_IMPORT_TIME ) )
self.assertEqual( sort_by.sort_order, CC.SORT_DESC )

self.assertIn( 'apply_implicit_limit', kwargs )

self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )

# sort

# this just tests if it parses, we don't have a full test for read params yet

HG.test_controller.ClearReads( 'file_query_ids' )

sample_hash_ids = set( random.sample( hash_ids, 3 ) )

HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )

@@ -1855,9 +1879,28 @@ class TestClientAPI( unittest.TestCase ):

self.assertEqual( response.status, 200 )

[ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )

( file_search_context, ) = args

self.assertEqual( file_search_context.GetFileServiceKey(), CC.LOCAL_FILE_SERVICE_KEY )
self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )

self.assertIn( 'sort_by', kwargs )

sort_by = kwargs[ 'sort_by' ]

self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_FRAMERATE ) )
self.assertEqual( sort_by.sort_order, CC.SORT_DESC )

self.assertIn( 'apply_implicit_limit', kwargs )

self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )

# sort

# this just tests if it parses, we don't have a full test for read params yet

HG.test_controller.ClearReads( 'file_query_ids' )

sample_hash_ids = set( random.sample( hash_ids, 3 ) )

@@ -1877,9 +1920,28 @@ class TestClientAPI( unittest.TestCase ):

self.assertEqual( response.status, 200 )

[ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )

( file_search_context, ) = args

self.assertEqual( file_search_context.GetFileServiceKey(), CC.LOCAL_FILE_SERVICE_KEY )
self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )

self.assertIn( 'sort_by', kwargs )

sort_by = kwargs[ 'sort_by' ]

self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_FRAMERATE ) )
self.assertEqual( sort_by.sort_order, CC.SORT_ASC )

self.assertIn( 'apply_implicit_limit', kwargs )

self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )

# file domain

# this just tests if it parses, we don't have a full test for read params yet

HG.test_controller.ClearReads( 'file_query_ids' )

sample_hash_ids = set( random.sample( hash_ids, 3 ) )

@@ -1904,9 +1966,28 @@ class TestClientAPI( unittest.TestCase ):

self.assertEqual( response.status, 200 )

[ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )

( file_search_context, ) = args

self.assertEqual( file_search_context.GetFileServiceKey(), CC.TRASH_SERVICE_KEY )
self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )

self.assertIn( 'sort_by', kwargs )

sort_by = kwargs[ 'sort_by' ]

self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_FRAMERATE ) )
self.assertEqual( sort_by.sort_order, CC.SORT_ASC )

self.assertIn( 'apply_implicit_limit', kwargs )

self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )

# file and tag domain

# this just tests if it parses, we don't have a full test for read params yet

HG.test_controller.ClearReads( 'file_query_ids' )

sample_hash_ids = set( random.sample( hash_ids, 3 ) )

@@ -1932,6 +2013,25 @@ class TestClientAPI( unittest.TestCase ):

self.assertEqual( response.status, 200 )

[ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )

( file_search_context, ) = args

self.assertEqual( file_search_context.GetFileServiceKey(), CC.TRASH_SERVICE_KEY )
self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )

self.assertIn( 'sort_by', kwargs )

sort_by = kwargs[ 'sort_by' ]

self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_FRAMERATE ) )
self.assertEqual( sort_by.sort_order, CC.SORT_ASC )

self.assertIn( 'apply_implicit_limit', kwargs )

self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )

# file and tag domain

# this just tests if it parses, we don't have a full test for read params yet

@@ -1515,15 +1515,254 @@ class TestClientDB( unittest.TestCase ):

def test_nums_pending( self ):
    
    TestClientDB._clear_db()
    
    result = self._read( 'nums_pending' )
    
    self.assertEqual( result, {} )
    
    # we can do more testing when I add repo service to this testing framework
    
    #
    
    services = list( self._read( 'services' ) )
    
    tag_sk = HydrusData.GenerateKey()
    file_sk = HydrusData.GenerateKey()
    ipfs_sk = HydrusData.GenerateKey()
    
    services.append( ClientServices.GenerateService( tag_sk, HC.TAG_REPOSITORY, 'test tag repo' ) )
    services.append( ClientServices.GenerateService( file_sk, HC.FILE_REPOSITORY, 'test file repo' ) )
    services.append( ClientServices.GenerateService( ipfs_sk, HC.IPFS, 'test ipfs' ) )
    
    self._write( 'update_services', services )
    
    #
    
    result = self._read( 'nums_pending' )
    
    expected_result = {
        tag_sk: {
            HC.SERVICE_INFO_NUM_PENDING_MAPPINGS : 0,
            HC.SERVICE_INFO_NUM_PETITIONED_MAPPINGS : 0,
            HC.SERVICE_INFO_NUM_PENDING_TAG_SIBLINGS : 0,
            HC.SERVICE_INFO_NUM_PETITIONED_TAG_SIBLINGS : 0,
            HC.SERVICE_INFO_NUM_PENDING_TAG_PARENTS : 0,
            HC.SERVICE_INFO_NUM_PETITIONED_TAG_PARENTS : 0
        },
        file_sk: {
            HC.SERVICE_INFO_NUM_PENDING_FILES: 0,
            HC.SERVICE_INFO_NUM_PETITIONED_FILES: 0
        },
        ipfs_sk: {
            HC.SERVICE_INFO_NUM_PENDING_FILES: 0,
            HC.SERVICE_INFO_NUM_PETITIONED_FILES: 0
        }
    }
    
    self.assertEqual( result, expected_result )
    
    #
    
    hashes = [ os.urandom( 32 ) for i in range( 64 ) ]
    
    tags = [ 'this', 'is', 'a:test' ]
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, HC.CONTENT_UPDATE_PEND, ( tag, hashes ) ) for tag in tags ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    hashes = [ os.urandom( 32 ) for i in range( 64 ) ]
    
    tags = [ 'bad tag', 'bad' ]
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, HC.CONTENT_UPDATE_ADD, ( tag, hashes ) ) for tag in tags ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, HC.CONTENT_UPDATE_PETITION, ( tag, hashes ), reason = 'yo' ) for tag in tags ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    pairs = [
        ( 'sib tag 1a', 'sib tag 1b' ),
        ( 'sib tag 2a', 'sib tag 2b' ),
        ( 'sib tag 3a', 'sib tag 3b' ),
        ( 'sib tag 4a', 'sib tag 4b' )
    ]
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PEND, pair, reason = 'good sibling m8' ) for pair in pairs ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    pairs = [
        ( 'samus aran', 'princess peach' ),
        ( 'lara croft', 'princess peach' )
    ]
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_ADD, pair ) for pair in pairs ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PETITION, pair, reason = 'mistake' ) for pair in pairs ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    pairs = [
        ( 'par tag 1a', 'par tag 1b' ),
        ( 'par tag 2a', 'par tag 2b' ),
        ( 'par tag 3a', 'par tag 3b' )
    ]
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PEND, pair, reason = 'good parent m8' ) for pair in pairs ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    pairs = [
        ( 'ayanami rei', 'zelda' )
    ]
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_ADD, pair ) for pair in pairs ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PETITION, pair, reason = 'mistake' ) for pair in pairs ]
    
    service_keys_to_content_updates = { tag_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    hashes = [ os.urandom( 32 ) for i in range( 15 ) ]
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_PEND, hashes ) ]
    
    service_keys_to_content_updates = { file_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    hashes = [ os.urandom( 32 ) for i in range( 20 ) ]
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_PEND, hashes ) ]
    
    service_keys_to_content_updates = { ipfs_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    test_files = {
        '5d884d84813beeebd59a35e474fa3e4742d0f2b6679faa7609b245ddbbd05444' : 'muh_jpg.jpg',
        'cdc67d3b377e6e1397ffa55edc5b50f6bdf4482c7a6102c6f27fa351429d6f49' : 'muh_png.png',
        '9e7b8b5abc7cb11da32db05671ce926a2a2b701415d1b2cb77a28deea51010c3' : 'muh_apng.png'
    }
    
    file_import_options = HG.client_controller.new_options.GetDefaultFileImportOptions( 'loud' )
    
    for ( hash, filename ) in test_files.items():
        
        HG.test_controller.SetRead( 'hash_status', ClientImportFiles.FileImportStatus.STATICGetUnknownStatus() )
        
        path = os.path.join( HC.STATIC_DIR, 'testing', filename )
        
        file_import_job = ClientImportFiles.FileImportJob( path, file_import_options )
        
        file_import_job.GeneratePreImportHashAndStatus()
        
        file_import_job.GenerateInfo()
        
        file_import_status = self._write( 'import_file', file_import_job )
        
    
    hashes = list( [ bytes.fromhex( hh ) for hh in test_files.keys() ] )
    
    media_results = self._read( 'media_results', hashes )
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ADD, ( mr.GetFileInfoManager(), 100 ) ) for mr in media_results ]
    
    service_keys_to_content_updates = { file_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ADD, ( mr.GetFileInfoManager(), os.urandom( 16 ).hex() ) ) for mr in media_results ]
    
    service_keys_to_content_updates = { ipfs_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_PETITION, hashes, reason = 'nope' ) ]
    
    service_keys_to_content_updates = { file_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_PETITION, hashes ) ]
    
    service_keys_to_content_updates = { ipfs_sk : content_updates }
    
    self._write( 'content_updates', service_keys_to_content_updates )
    
    #
    
    result = self._read( 'nums_pending' )
    
    expected_result = {
        tag_sk: {
            HC.SERVICE_INFO_NUM_PENDING_MAPPINGS : 64 * 3,
            HC.SERVICE_INFO_NUM_PETITIONED_MAPPINGS : 64 * 2,
            HC.SERVICE_INFO_NUM_PENDING_TAG_SIBLINGS : 4,
            HC.SERVICE_INFO_NUM_PETITIONED_TAG_SIBLINGS : 2,
            HC.SERVICE_INFO_NUM_PENDING_TAG_PARENTS : 3,
            HC.SERVICE_INFO_NUM_PETITIONED_TAG_PARENTS : 1
        },
        file_sk: {
            HC.SERVICE_INFO_NUM_PENDING_FILES: 15,
            HC.SERVICE_INFO_NUM_PETITIONED_FILES: 3
        },
        ipfs_sk: {
            HC.SERVICE_INFO_NUM_PENDING_FILES: 20,
            HC.SERVICE_INFO_NUM_PETITIONED_FILES: 3
        }
    }
    
    self.assertEqual( result, expected_result )
    

def test_pending( self ):
    
    TestClientDB._clear_db()
    
    service_key = HydrusData.GenerateKey()
    
    services = self._read( 'services' )

@@ -1560,7 +1799,7 @@ class TestClientDB( unittest.TestCase ):

#

self._write( 'update_services', old_services )
TestClientDB._clear_db()

def test_pixiv_account( self ):

@@ -1581,6 +1820,8 @@ class TestClientDB( unittest.TestCase ):

def test_services( self ):
    
    TestClientDB._clear_db()
    
    result = self._read( 'services', ( HC.LOCAL_FILE_DOMAIN, HC.LOCAL_FILE_TRASH_DOMAIN, HC.COMBINED_LOCAL_FILE, HC.LOCAL_TAG ) )
    
    result_service_keys = { service.GetServiceKey() for service in result }

@@ -2256,7 +2256,7 @@ class TestClientDBTags( unittest.TestCase ):

if do_regen_sibs:
    
    self._write( 'regenerate_tag_siblings_cache' )
    self._write( 'regenerate_tag_siblings_and_parents_cache' )
    
    self._sync_display()

@@ -2485,7 +2485,7 @@ class TestClientDBTags( unittest.TestCase ):

if do_regen_sibs:
    
    self._write( 'regenerate_tag_siblings_cache' )
    self._write( 'regenerate_tag_siblings_and_parents_cache' )
    
    self._sync_display()

@@ -2603,7 +2603,7 @@ class TestClientDBTags( unittest.TestCase ):

if do_regen_sibs:
    
    self._write( 'regenerate_tag_siblings_cache' )
    self._write( 'regenerate_tag_siblings_and_parents_cache' )
    
    self._sync_display()

@@ -217,15 +217,15 @@ class Controller( object ):

HydrusData.ShowText = show_text

self._reads = {}
self._name_read_responses = {}

self._reads[ 'local_booru_share_keys' ] = []
self._reads[ 'messaging_sessions' ] = []
self._reads[ 'options' ] = ClientDefaults.GetClientDefaultOptions()
self._reads[ 'file_system_predicates' ] = []
self._reads[ 'media_results' ] = []
self._name_read_responses[ 'local_booru_share_keys' ] = []
self._name_read_responses[ 'messaging_sessions' ] = []
self._name_read_responses[ 'options' ] = ClientDefaults.GetClientDefaultOptions()
self._name_read_responses[ 'file_system_predicates' ] = []
self._name_read_responses[ 'media_results' ] = []

self._param_reads = {}
self._param_read_responses = {}

self.example_tag_repo_service_key = HydrusData.GenerateKey()

@@ -243,7 +243,7 @@ class Controller( object ):

services.append( ClientServices.GenerateService( LOCAL_RATING_LIKE_SERVICE_KEY, HC.LOCAL_RATING_LIKE, 'example local rating like service' ) )
services.append( ClientServices.GenerateService( LOCAL_RATING_NUMERICAL_SERVICE_KEY, HC.LOCAL_RATING_NUMERICAL, 'example local rating numerical service' ) )

self._reads[ 'services' ] = services
self._name_read_responses[ 'services' ] = services

client_files_locations = {}

@@ -255,14 +255,15 @@ class Controller( object ):

self._reads[ 'client_files_locations' ] = client_files_locations
self._name_read_responses[ 'client_files_locations' ] = client_files_locations

self._reads[ 'sessions' ] = []
self._reads[ 'tag_parents' ] = {}
self._reads[ 'tag_siblings_all_ideals' ] = {}
self._reads[ 'inbox_hashes' ] = set()
self._name_read_responses[ 'sessions' ] = []
self._name_read_responses[ 'tag_parents' ] = {}
self._name_read_responses[ 'tag_siblings_all_ideals' ] = {}
self._name_read_responses[ 'inbox_hashes' ] = set()

self._writes = collections.defaultdict( list )
self._read_call_args = collections.defaultdict( list )
self._write_call_args = collections.defaultdict( list )

self._managers = {}

@@ -487,6 +488,14 @@ class Controller( object ):

return job

def ClearReads( self, name ):
    
    if name in self._read_call_args:
        
        del self._read_call_args[ name ]
        
    

def ClearTestDB( self ):
    
    self._test_db = None

@@ -494,9 +503,9 @@ class Controller( object ):

def ClearWrites( self, name ):
    
    if name in self._writes:
    if name in self._write_call_args:
        
        del self._writes[ name ]
        del self._write_call_args[ name ]
        
    

@@ -584,11 +593,20 @@ class Controller( object ):

return {}

def GetRead( self, name ):
    
    read = self._read_call_args[ name ]
    
    del self._read_call_args[ name ]
    
    return read
    

def GetWrite( self, name ):
    
    write = self._writes[ name ]
    write = self._write_call_args[ name ]
    
    del self._writes[ name ]
    del self._write_call_args[ name ]
    
    return write

@@ -656,6 +674,8 @@ class Controller( object ):

def Read( self, name, *args, **kwargs ):
    
    self._read_call_args[ name ].append( ( args, kwargs ) )
    
    if self._test_db is not None:
        
        return self._test_db.Read( name, *args, **kwargs )

@@ -663,9 +683,9 @@ class Controller( object ):

try:
    
    if ( name, args ) in self._param_reads:
    if ( name, args ) in self._param_read_responses:
        
        return self._param_reads[ ( name, args ) ]
        return self._param_read_responses[ ( name, args ) ]
        
    
except:

@@ -673,7 +693,7 @@ class Controller( object ):

pass

return self._reads[ name ]
return self._name_read_responses[ name ]

def RegisterUIUpdateWindow( self, window ):

@@ -871,12 +891,12 @@ class Controller( object ):

def SetParamRead( self, name, args, value ):
    
    self._param_reads[ ( name, args ) ] = value
    self._param_read_responses[ ( name, args ) ] = value
    

def SetRead( self, name, value ):
    
    self._reads[ name ] = value
    self._name_read_responses[ name ] = value
    

def SetStatusBarDirty( self ):

@@ -928,12 +948,12 @@ class Controller( object ):

return self._test_db.Write( name, *args, **kwargs )

self._writes[ name ].append( ( args, kwargs ) )
self._write_call_args[ name ].append( ( args, kwargs ) )

def WriteSynchronous( self, name, *args, **kwargs ):
    
    self._writes[ name ].append( ( args, kwargs ) )
    self._write_call_args[ name ].append( ( args, kwargs ) )
    
    if name == 'import_file':

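The rename splits the test controller's two jobs cleanly: _name_read_responses holds canned answers, while _read_call_args records what each Read was actually called with, which is what the new Client API sort/domain assertions consume via GetRead. A self-contained miniature of the pattern (MiniTestController is an invention for the sketch):

import collections

class MiniTestController:
    
    def __init__( self ):
        
        self._name_read_responses = {}
        self._read_call_args = collections.defaultdict( list )
        
    
    def SetRead( self, name, value ):
        
        self._name_read_responses[ name ] = value
        
    
    def Read( self, name, *args, **kwargs ):
        
        # record the call before answering, so tests can inspect it later
        self._read_call_args[ name ].append( ( args, kwargs ) )
        
        return self._name_read_responses[ name ]
        
    
    def GetRead( self, name ):
        
        read = self._read_call_args[ name ]
        
        del self._read_call_args[ name ]
        
        return read
        
    

controller = MiniTestController()

controller.SetRead( 'file_query_ids', { 1, 2, 3 } )

controller.Read( 'file_query_ids', 'a search context', apply_implicit_limit = False )

[ ( args, kwargs ) ] = controller.GetRead( 'file_query_ids' )

assert kwargs[ 'apply_implicit_limit' ] is False
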
@@ -1,7 +1,8 @@
[Icons]
Name: {group}\hydrus client; Filename: {app}\client.exe; WorkingDir: {app}; Tasks: programgroupicons
Name: {group}\hydrus server; Filename: {app}\server.exe; WorkingDir: {app}; Tasks: programgroupicons
Name: {group}\help; Filename: {app}\help\index.html; WorkingDir: {app}; Tasks: programgroupicons
;Taking this out to stop anti-virus testbeds pursuing it and launching Edge and detecting Edge update calls as suspicious DNS lmao
;Name: {group}\help; Filename: {app}\help\index.html; WorkingDir: {app}; Tasks: programgroupicons
Name: {group}\uninstall hydrus network; Filename: {uninstallexe}; WorkingDir: {app}; Tasks: programgroupicons; IconFilename: {app}\static\cross.ico
Name: {userdesktop}\hydrus client; Filename: {app}\client.exe; WorkingDir: {app}; Tasks: desktopicons
Name: {userdesktop}\hydrus server; Filename: {app}\server.exe; WorkingDir: {app}; Tasks: desktopicons

@@ -25,17 +26,18 @@ UninstallDisplayIcon={app}\static\hydrus.ico
Name: desktopicons; Description: Create desktop icons; Flags: unchecked; Components: install
Name: programgroupicons; Description: Create program group icons; Components: install
[Messages]
SelectDirBrowseLabel=To continue, click Next. If you would like to select a different folder, click Browse. Databases will be created beneath the install dir, so make sure the hard drive has enough spare space for your purposes and your user has permission to write there! If you install to program files, you'll need to run the programs as administrator!
SelectDirBrowseLabel=To continue, click Next. If you would like to select a different folder, click Browse. By default, databases will be created beneath the install dir, so make sure the hard drive has enough spare space for your purposes and your user has permission to write there! If you install to a protected location like 'Program Files', the database will be created in your User Directory.
[Components]
Name: install; Description: Install; Types: install; Flags: fixed
[Types]
Name: install; Description: Install
Name: extract; Description: Extract only
[Run]
Filename: {app}\help\index.html; Description: Open help/getting started guide (highly recommended for new users); Flags: postinstall unchecked shellexec
;Taking this out to stop anti-virus testbeds pursuing it and launching Edge and detecting Edge update calls as suspicious DNS lmao
;Filename: {app}\help\index.html; Description: Open help/getting started guide (highly recommended for new users); Flags: postinstall unchecked shellexec
Filename: {app}\client.exe; Description: Open the client; Flags: postinstall nowait unchecked
[Files]
Source: dist\Hydrus Network\* ; DestDir: {app}; Flags: ignoreversion recursesubdirs createallsubdirs
Source: dist\Hydrus Network\*; DestDir: {app}; Flags: ignoreversion recursesubdirs createallsubdirs
[InstallDelete]
Name: {app}\Crypto; Type: filesandordirs; Components: install
Name: {app}\tcl; Type: filesandordirs; Components: install

@@ -66,4 +68,4 @@ Name: {app}\_tkinter.pyd; Type: files; Components: install
Name: {app}\_yaml.cp36-win_amd64.pyd; Type: files; Components: install
Name: {app}\_yaml.cp37-win_amd64.pyd; Type: files; Components: install
Name: {app}\_cffi_backend.cp36-win_amd64.pyd; Type: files; Components: install
Name: {app}\_distutils_findvs.pyd; Type: files; Components: install
Name: {app}\_distutils_findvs.pyd; Type: files; Components: install