changelog
version 451
- stupid anti-virus thing:
- we have had several more anti-virus false positives just recently. we discovered that at least one testbed used by these companies was testing the 'open html help' checkbox in the installer, which then launched Edge on the testbed, which then launched the Windows Update process for Edge and Skype, which was somehow interacting with UAC and thus considered suspicious activity owned by the hydrus installer process, lmao. thereafter, it seems the installer exe's DNS requests were being cross-connected with the client.exe scan, as that file was identified as connected with the installer. taking that checkbox out as a test produced a much cleaner scan. there is a limit to how much of this nonsense I will accommodate, but this week we are trying a release without that 'open help' link in the installer--let's see how it goes
- semi-related, I brushed up the install path message in the installer and clarified in the first-start welcome popup message that help->help will open the help
- .
- misc:
- I fixed a critical bug in tag sibling storage that occurred when a 'bad' tag's mapping was removed (e.g. 'delete A->B') and added ('add A->C') in the same transaction, and in a heap of fun other situations besides, and that mostly resulted in the newly added sibling being forgotten. the bug was worse when this happened on a local tag service via the manage siblings dialog. this problem is likely the cause of some of our weird sibling issues on clients that processed certain repository updates extremely quickly. I will keep investigating here for more issues and trigger another sibling reset for everyone in the future
- the 'show some random pairs' button on the duplicates page is nicer--the 'did not find any pairs' notification is a popup rather than an annoying error dialog, and when there is nothing found, it also clears the page of thumbs. it also tries to guess if you are at the end of the current search, and if so, it will not do an auto re-fetch and will clear the page without producing the popup message
- fixed a bug that meant file order was not being saved correctly in sessions! sorry for the trouble!
- import of videos is now a little faster, as the output of the ffmpeg call that checks resolution and duration is now reused to check for the presence of an audio channel (a little sketch of the idea is at the end of this list)
- when files are imported, the status messages are now much more granular. large and CPU-heavy files should move noticeably from hash generation to filetype calculation to metadata to actual file copying
- fixed a database query bug in the new processing progress tracking code that was affecting some (perhaps older) versions of sqlite
- when you trash/untrash/etc... a file in the media viewer, the top hover text now updates to show the file location change
- fixed a typo bug in the new content type tracking that broke ipfs pinning yet again, sorry for the trouble! (issue #955)
- I fleshed out my database pending and num_pending tests significantly. now all uploadable content types are tested, so ipfs should not break at the _db_ level again
- the page tab menu now clumps the 'close x pages' into a dynamic submenu when there are several options and excludes duplicates (e.g. 'close others' and 'close to the left' when you right-click the rightmost page)
- the page tab menu also puts the 'move' actions under a submenu
- the page tab menu now has a 'select' submenu for navigating home/left/right/end like the shortcuts
- fixed some repository content type checking problems: the decision to show petition pages when the user has moderation privileges on a repository, the permission check when fetching the number of petitions, and the permission check when uploading files
- fixed a typo in the 'running in wine' html that made the whole document big and bold
- across the program, a 'year' for most date calculations like 'system:time imported: more than a year ago' is now 365 days (up from 12 x 30-day months, which at 360 days left 'a year' five days short). these will likely be calculated against the calendar correctly in future, but for now we'll stick with simple and just a bit more accurate
- fixed a bug in mpv loop-seek when the system lags for a moment just when the user closes the media viewer and the video loops back to start
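
(as a sketch of the video import speedup above--the helper name here is assumed for illustration, not hydrus's actual parsing code--the idea is to launch one ffmpeg probe and answer the resolution/duration and audio questions from the same output:)

    import subprocess
    
    def probe_av_info( path ):
        
        # ffmpeg with an input but no output file prints stream info to stderr and exits
        completed = subprocess.run( [ 'ffmpeg', '-i', path ], capture_output = True, text = True )
        
        lines = completed.stderr.splitlines()
        
        duration_lines = [ line for line in lines if 'Duration:' in line ]
        video_lines = [ line for line in lines if 'Video:' in line ]
        
        # reusing the same cached output for the audio check saves a second process launch per file
        has_audio = any( 'Audio:' in line for line in lines )
        
        return ( duration_lines, video_lines, has_audio )
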
- .
- client api:
- expanded my testing system to handle more 'read' database parameter testing, and added some unit tests for the new client api file search code
- fixed the 'file_sort_asc' in the new client api file search call. it was a stupid untested typo, thank you for the reports (issue #959)
- fixed 'file_service_name' and 'tag_service_name' when they are GET parameters in the client api
- I fleshed out the file search sort help to say what ascending/descending means for each file sort type
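
(for reference, a search call using these sort parameters as GET query arguments might look like the following--the tag, service names, and sort choices here are illustrative:)

    /get_files/search_files?tags=%5B%22blue%20eyes%22%5D&file_sort_type=2&file_sort_asc=true&file_service_name=my%20files&tag_service_name=all%20known%20tags

    (file_sort_type 2 is import time, so file_sort_asc=true here means oldest first)
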
- .
- boring database cleanup:
- to cut down on redundant spam, the new query planner profile mode only plans each unique query text once per run of the mode (a little sketch of the idea is at the end of this list)
- also fixed an issue in the query planner with multiple-row queries with an empty list argument
- refactored the tag sibling and parent database storage and lookup code out to separate db modules
- untangled and optimised a couple of sibling/parent lookup chain regeneration calls
- moved more sibling and parent responsibility to the new modules, clearing some inline hardcoding out of the main class
- cleaned up a bunch of sibling, parent, and display code generally, and improved communication between these modules, particularly in regards to update interactions and display sync
- the similar files data tables are migrated to more appropriate locations. previously, they were all in client.caches.db; now the phash definition and file mapping tables are in master, and the similar files search record is in main
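
(a minimal sketch of the query planner dedup mentioned above--the names here are assumed for illustration, not hydrus's actual profiling code:)

    import sqlite3
    
    query_texts_planned = set()
    
    def maybe_plan_query( cursor: sqlite3.Cursor, query: str, parameters = () ):
        
        # only EXPLAIN QUERY PLAN each unique query text once per run of the mode
        if query in query_texts_planned:
            
            return
            
        
        query_texts_planned.add( query )
        
        for row in cursor.execute( 'EXPLAIN QUERY PLAN ' + query, parameters ):
            
            print( row )
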
version 450
- misc:

diff --git a/help/client_api.html b/help/client_api.html
index 79db6dc4..394798c4 100644
--- a/help/client_api.html
+++ b/help/client_api.html
@@ -1218,27 +1218,27 @@
-0 - file size
-1 - duration
-2 - import time
-3 - filetype
-4 - random
-5 - width
-6 - height
-7 - ratio
-8 - number of pixels
-9 - number of tags (on the current tag domain)
-10 - number of media viewers
-11 - total media viewtime
-12 - approximate bitrate
-13 - has audio
-14 - modified time
-15 - framerate
-16 - number of frames
+0 - file size (smallest first/largest first)
+1 - duration (shortest first/longest first)
+2 - import time (oldest first/newest first)
+3 - filetype (N/A)
+4 - random (N/A)
+5 - width (slimmest first/widest first)
+6 - height (shortest first/tallest first)
+7 - ratio (tallest first/widest first)
+8 - number of pixels (ascending/descending)
+9 - number of tags (on the current tag domain) (ascending/descending)
+10 - number of media views (ascending/descending)
+11 - total media viewtime (ascending/descending)
+12 - approximate bitrate (smallest first/largest first)
+13 - has audio (audio first/silent first)
+14 - modified time (oldest first/newest first)
+15 - framerate (slowest first/fastest first)
+16 - number of frames (smallest first/largest first)
 Example response:
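(a successful search returns the matching file ids, sorted as requested--the ids here are illustrative:)

    {
        "file_ids" : [ 125462, 4852415, 123 ]
    }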
diff --git a/help/wine.html b/help/wine.html
index bde1bb63..27578b43 100644
--- a/help/wine.html
+++ b/help/wine.html
@@ -6,7 +6,7 @@
-getting it to work on wine
+getting it to work on wine
 Several Linux and macOS users have found success running hydrus with Wine. Here is a post from a Linux dude:
 Some things I picked up on after extended use:
diff --git a/hydrus/client/ClientController.py b/hydrus/client/ClientController.py index 9928ad3a..b3b7b526 100644 --- a/hydrus/client/ClientController.py +++ b/hydrus/client/ClientController.py @@ -1220,7 +1220,7 @@ class Controller( HydrusController.HydrusController ): message = 'Hi, this looks like the first time you have started the hydrus client.' message += os.linesep * 2 - message += 'Don\'t forget to check out the help if you haven\'t already--it has an extensive \'getting started\' section, including how to update and the importance of backing up your database.' + message += 'Don\'t forget to check out the help if you haven\'t already, by clicking help->help--it has an extensive \'getting started\' section, including how to update and the importance of backing up your database.' message += os.linesep * 2 message += 'To dismiss popup messages like this, right-click them.' diff --git a/hydrus/client/ClientDefaults.py b/hydrus/client/ClientDefaults.py index 802cd6af..acbb03eb 100644 --- a/hydrus/client/ClientDefaults.py +++ b/hydrus/client/ClientDefaults.py @@ -96,7 +96,7 @@ def GetDefaultCheckerOptions( name ): elif name == 'slow tag subscription': - return ClientImportOptions.CheckerOptions( intended_files_per_check = 1, never_faster_than = 7 * 86400, never_slower_than = 180 * 86400, death_file_velocity = ( 1, 360 * 86400 ) ) + return ClientImportOptions.CheckerOptions( intended_files_per_check = 1, never_faster_than = 7 * 86400, never_slower_than = 180 * 86400, death_file_velocity = ( 1, 365 * 86400 ) ) def GetDefaultHentaiFoundryInfo(): diff --git a/hydrus/client/ClientSearch.py b/hydrus/client/ClientSearch.py index 08aa5dc2..ba55e3eb 100644 --- a/hydrus/client/ClientSearch.py +++ b/hydrus/client/ClientSearch.py @@ -366,7 +366,7 @@ class FileSystemPredicates( object ): ( years, months, days, hours ) = age_value - age = ( ( ( ( ( ( ( years * 12 ) + months ) * 30 ) + days ) * 24 ) + hours ) * 3600 ) + age = ( years * 365 * 86400 ) + ( ( ( ( ( months * 30 ) + days ) * 24 ) + hours ) * 3600 ) now = HydrusData.GetNow() @@ -2176,7 +2176,7 @@ class Predicate( HydrusSerialisable.SerialisableBase ): DAY = 86400 MONTH = DAY * 30 - YEAR = MONTH * 12 + YEAR = DAY * 365 time_delta = 0 diff --git a/hydrus/client/ClientServices.py b/hydrus/client/ClientServices.py index ab18cf1d..ce02fc4d 100644 --- a/hydrus/client/ClientServices.py +++ b/hydrus/client/ClientServices.py @@ -58,7 +58,7 @@ def GenerateDefaultServiceDictionary( service_type ): dictionary[ 'update_downloading_paused' ] = False dictionary[ 'update_processing_paused' ] = False - content_types = tuple( HC.REPOSITORY_CONTENT_TYPES[ service_type ] ) + content_types = tuple( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service_type ] ) dictionary[ 'update_processing_content_types_paused' ] = [ [ content_type, False ] for content_type in content_types ] @@ -1514,7 +1514,7 @@ class ServiceRepository( ServiceRestricted ): if 'update_processing_content_types_paused' not in dictionary: - content_types = tuple( HC.REPOSITORY_CONTENT_TYPES[ self._service_type ] ) + content_types = tuple( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service_type ] ) dictionary[ 'update_processing_content_types_paused' ] = [ [ content_type, False ] for content_type in content_types ] diff --git a/hydrus/client/db/ClientDB.py b/hydrus/client/db/ClientDB.py index d70d60a5..a25a8e6c 100644 --- a/hydrus/client/db/ClientDB.py +++ b/hydrus/client/db/ClientDB.py @@ -42,6 +42,8 @@ from hydrus.client.db import ClientDBRepositories from hydrus.client.db import ClientDBSerialisable from 
hydrus.client.db import ClientDBServices from hydrus.client.db import ClientDBSimilarFiles +from hydrus.client.db import ClientDBTagParents +from hydrus.client.db import ClientDBTagSiblings from hydrus.client.importing import ClientImportFiles from hydrus.client.media import ClientMedia from hydrus.client.media import ClientMediaManagers @@ -320,46 +322,6 @@ def GenerateSpecificTagsTableName( file_service_id, tag_service_id ): return tags_table_name -def GenerateTagParentsLookupCacheTableName( display_type: int, service_id: int ): - - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id ) - - if display_type == ClientTags.TAG_DISPLAY_IDEAL: - - return cache_ideal_tag_parents_lookup_table_name - - elif display_type == ClientTags.TAG_DISPLAY_ACTUAL: - - return cache_actual_tag_parents_lookup_table_name - - -def GenerateTagParentsLookupCacheTableNames( service_id ): - - cache_ideal_tag_parents_lookup_table_name = 'external_caches.ideal_tag_parents_lookup_cache_{}'.format( service_id ) - cache_actual_tag_parents_lookup_table_name = 'external_caches.actual_tag_parents_lookup_cache_{}'.format( service_id ) - - return ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) - -def GenerateTagSiblingsLookupCacheTableName( display_type: int, service_id: int ): - - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id ) - - if display_type == ClientTags.TAG_DISPLAY_IDEAL: - - return cache_ideal_tag_siblings_lookup_table_name - - elif display_type == ClientTags.TAG_DISPLAY_ACTUAL: - - return cache_actual_tag_siblings_lookup_table_name - - -def GenerateTagSiblingsLookupCacheTableNames( service_id ): - - cache_ideal_tag_siblings_lookup_table_name = 'external_caches.ideal_tag_siblings_lookup_cache_{}'.format( service_id ) - cache_actual_tag_siblings_lookup_table_name = 'external_caches.actual_tag_siblings_lookup_cache_{}'.format( service_id ) - - return ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) - def WildcardHasFTS4SearchableCharacters( wildcard: str ): # fts4 says it can do alphanumeric or unicode with a value >= 128 @@ -502,13 +464,6 @@ class DB( HydrusDB.HydrusDB ): self._regen_tags_managers_hash_ids = set() self._regen_tags_managers_tag_ids = set() - self._service_ids_to_sibling_applicable_service_ids = None - self._service_ids_to_sibling_interested_service_ids = None - self._service_ids_to_parent_applicable_service_ids = None - self._service_ids_to_parent_interested_service_ids = None - - self._service_ids_to_display_application_status = {} - HydrusDB.HydrusDB.__init__( self, controller, db_dir, db_name ) @@ -627,16 +582,8 @@ class DB( HydrusDB.HydrusDB ): self._CacheTagsGenerate( file_service_id, service_id ) - self._Execute( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( service_id, 0, service_id ) ) - self._Execute( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? 
);', ( service_id, 0, service_id ) ) - - self._service_ids_to_sibling_applicable_service_ids = None - self._service_ids_to_sibling_interested_service_ids = None - self._service_ids_to_parent_applicable_service_ids = None - self._service_ids_to_parent_interested_service_ids = None - - self._AddServiceCreateTagParents( service_id ) - self._AddServiceCreateTagSiblings( service_id ) + self.modules_tag_parents.Generate( service_id ) + self.modules_tag_siblings.Generate( service_id ) self._AddServiceCreateMappings( service_id, service_type ) @@ -687,46 +634,6 @@ class DB( HydrusDB.HydrusDB ): - def _AddServiceCreateTagParents( self, service_id ): - - self._CacheTagParentsGenerate( service_id ) - - - def _AddServiceCreateTagSiblings( self, service_id ): - - self._CacheTagSiblingsGenerate( service_id ) - - - def _AddTagParents( self, service_id, pairs, defer_cache_update = False ): - - self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - - self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - - tag_ids = set( itertools.chain.from_iterable( pairs ) ) - - if not defer_cache_update: - - self._CacheTagParentsParentsChanged( service_id, tag_ids ) - - - - def _AddTagSiblings( self, service_id, pairs, defer_cache_update = False ): - - self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ?;', ( ( service_id, bad_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - - self._ExecuteMany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? 
);', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - - tag_ids = set( itertools.chain.from_iterable( pairs ) ) - - if not defer_cache_update: - - self._CacheTagSiblingsSiblingsChanged( service_id, tag_ids ) - - - def _ArchiveFiles( self, hash_ids ): hash_ids_archived = self.modules_files_metadata_basic.ArchiveFiles( hash_ids ) @@ -2505,11 +2412,11 @@ class DB( HydrusDB.HydrusDB ): def _CacheTagDisplayFilterChained( self, display_type, tag_service_id, tag_ids ): # we are not passing ideal_tag_ids here, but that's ok, we are testing sibling chains in one second - parents_chained_tag_ids = self._CacheTagParentsFilterChained( display_type, tag_service_id, tag_ids ) + parents_chained_tag_ids = self.modules_tag_parents.FilterChained( display_type, tag_service_id, tag_ids ) unknown_tag_ids = set( tag_ids ).difference( parents_chained_tag_ids ) - sibling_chained_tag_ids = self._CacheTagSiblingsFilterChained( display_type, tag_service_id, unknown_tag_ids ) + sibling_chained_tag_ids = self.modules_tag_siblings.FilterChained( display_type, tag_service_id, unknown_tag_ids ) chained_tag_ids = set( parents_chained_tag_ids ).union( sibling_chained_tag_ids ) @@ -2518,82 +2425,19 @@ class DB( HydrusDB.HydrusDB ): def _CacheTagDisplayGetApplication( self ): - if self._service_ids_to_sibling_applicable_service_ids is None: - - self._CacheTagSiblingsGenerateApplicationDicts() - - - service_ids_to_service_keys = {} - - service_keys_to_sibling_applicable_service_keys = {} - - for ( master_service_id, applicable_service_ids ) in self._service_ids_to_sibling_applicable_service_ids.items(): - - all_service_ids = [ master_service_id ] + list( applicable_service_ids ) - - for service_id in all_service_ids: - - if service_id not in service_ids_to_service_keys: - - service_ids_to_service_keys[ service_id ] = self.modules_services.GetService( service_id ).GetServiceKey() - - - - service_keys_to_sibling_applicable_service_keys[ service_ids_to_service_keys[ master_service_id ] ] = [ service_ids_to_service_keys[ service_id ] for service_id in applicable_service_ids ] - - - if self._service_ids_to_parent_applicable_service_ids is None: - - self._CacheTagParentsGenerateApplicationDicts() - - - service_keys_to_parent_applicable_service_keys = {} - - for ( master_service_id, applicable_service_ids ) in self._service_ids_to_parent_applicable_service_ids.items(): - - all_service_ids = [ master_service_id ] + list( applicable_service_ids ) - - for service_id in all_service_ids: - - if service_id not in service_ids_to_service_keys: - - service_ids_to_service_keys[ service_id ] = self.modules_services.GetService( service_id ).GetServiceKey() - - - - service_keys_to_parent_applicable_service_keys[ service_ids_to_service_keys[ master_service_id ] ] = [ service_ids_to_service_keys[ service_id ] for service_id in applicable_service_ids ] - + service_keys_to_sibling_applicable_service_keys = self.modules_tag_siblings.GetApplication() + service_keys_to_parent_applicable_service_keys = self.modules_tag_parents.GetApplication() return ( service_keys_to_sibling_applicable_service_keys, service_keys_to_parent_applicable_service_keys ) def _CacheTagDisplayGetApplicationStatus( self, service_id ): - if service_id not in self._service_ids_to_display_application_status: - - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id ) - - actual_sibling_rows = set( self._Execute( 'SELECT 
bad_tag_id, ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) - ideal_sibling_rows = set( self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) ) - - sibling_rows_to_remove = actual_sibling_rows.difference( ideal_sibling_rows ) - sibling_rows_to_add = ideal_sibling_rows.difference( actual_sibling_rows ) - - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id ) - - actual_parent_rows = set( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) - ideal_parent_rows = set( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) ) - - parent_rows_to_remove = actual_parent_rows.difference( ideal_parent_rows ) - parent_rows_to_add = ideal_parent_rows.difference( actual_parent_rows ) - - num_actual_rows = len( actual_sibling_rows ) + len( actual_parent_rows ) - num_ideal_rows = len( ideal_sibling_rows ) + len( ideal_parent_rows ) - - self._service_ids_to_display_application_status[ service_id ] = ( sibling_rows_to_add, sibling_rows_to_remove, parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) - + ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_sibling_rows, num_ideal_sibling_rows ) = self.modules_tag_siblings.GetApplicationStatus( service_id ) + ( parent_rows_to_add, parent_rows_to_remove, num_actual_parent_rows, num_ideal_parent_rows ) = self.modules_tag_parents.GetApplicationStatus( service_id ) - ( sibling_rows_to_add, sibling_rows_to_remove, parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ service_id ] + num_actual_rows = num_actual_sibling_rows + num_actual_parent_rows + num_ideal_rows = num_ideal_sibling_rows + num_ideal_parent_rows return ( sibling_rows_to_add, sibling_rows_to_remove, parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) @@ -2614,8 +2458,8 @@ class DB( HydrusDB.HydrusDB ): status[ 'waiting_on_tag_repos' ] = [] for ( applicable_service_ids, content_type ) in [ - ( self._CacheTagParentsGetApplicableServiceIds( service_id ), HC.CONTENT_TYPE_TAG_PARENTS ), - ( self._CacheTagSiblingsGetApplicableServiceIds( service_id ), HC.CONTENT_TYPE_TAG_SIBLINGS ) + ( self.modules_tag_parents.GetApplicableServiceIds( service_id ), HC.CONTENT_TYPE_TAG_PARENTS ), + ( self.modules_tag_siblings.GetApplicableServiceIds( service_id ), HC.CONTENT_TYPE_TAG_SIBLINGS ) ]: for applicable_service_id in applicable_service_ids: @@ -2637,7 +2481,7 @@ class DB( HydrusDB.HydrusDB ): def _CacheTagDisplayGetApplicableServiceIds( self, tag_service_id ): - return set( self._CacheTagSiblingsGetApplicableServiceIds( tag_service_id ) ).union( self._CacheTagParentsGetApplicableServiceIds( tag_service_id ) ) + return set( self.modules_tag_siblings.GetApplicableServiceIds( tag_service_id ) ).union( self.modules_tag_parents.GetApplicableServiceIds( tag_service_id ) ) def _CacheTagDisplayGetChainsMembers( self, display_type, tag_service_id, tag_ids ): @@ -2649,15 +2493,15 @@ class DB( HydrusDB.HydrusDB ): with self._MakeTemporaryIntegerTable( [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name: - self._CacheTagSiblingsGetIdealsIntoTable( display_type, tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name ) + self.modules_tag_siblings.GetIdealsIntoTable( 
display_type, tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name ) with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_parent_chain_members_table_name: - self._CacheTagParentsGetChainsMembersTables( display_type, tag_service_id, temp_ideal_tag_ids_table_name, temp_parent_chain_members_table_name ) + self.modules_tag_parents.GetChainsMembersTables( display_type, tag_service_id, temp_ideal_tag_ids_table_name, temp_parent_chain_members_table_name ) with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_chain_members_table_name: - self._CacheTagSiblingsGetChainsMembersFromIdealsTables( display_type, tag_service_id, temp_parent_chain_members_table_name, temp_chain_members_table_name ) + self.modules_tag_siblings.GetChainsMembersFromIdealsTables( display_type, tag_service_id, temp_parent_chain_members_table_name, temp_chain_members_table_name ) return self._STS( self._Execute( 'SELECT tag_id FROM {};'.format( temp_chain_members_table_name ) ) ) @@ -2668,7 +2512,7 @@ class DB( HydrusDB.HydrusDB ): def _CacheTagDisplayGetImpliedBy( self, display_type, tag_service_id, tag_id ): - ideal_tag_id = self._CacheTagSiblingsGetIdeal( display_type, tag_service_id, tag_id ) + ideal_tag_id = self.modules_tag_siblings.GetIdeal( display_type, tag_service_id, tag_id ) if ideal_tag_id == tag_id: @@ -2677,9 +2521,9 @@ class DB( HydrusDB.HydrusDB ): # and any of its or those descendants' siblings # these are all ideal siblings - self_and_descendant_ids = { tag_id }.union( self._CacheTagParentsGetDescendants( display_type, tag_service_id, ideal_tag_id ) ) + self_and_descendant_ids = { tag_id }.union( self.modules_tag_parents.GetDescendants( display_type, tag_service_id, ideal_tag_id ) ) - implication_ids = self._CacheTagSiblingsGetChainsMembersFromIdeals( display_type, tag_service_id, self_and_descendant_ids ) + implication_ids = self.modules_tag_siblings.GetChainsMembersFromIdeals( display_type, tag_service_id, self_and_descendant_ids ) else: @@ -2695,18 +2539,18 @@ class DB( HydrusDB.HydrusDB ): # a tag implies its ideal sibling and any ancestors - ideal_tag_id = self._CacheTagSiblingsGetIdeal( display_type, tag_service_id, tag_id ) + ideal_tag_id = self.modules_tag_siblings.GetIdeal( display_type, tag_service_id, tag_id ) implies = { ideal_tag_id } - implies.update( self._CacheTagParentsGetAncestors( display_type, tag_service_id, ideal_tag_id ) ) + implies.update( self.modules_tag_parents.GetAncestors( display_type, tag_service_id, ideal_tag_id ) ) return implies def _CacheTagDisplayGetInterestedServiceIds( self, tag_service_id ): - return set( self._CacheTagSiblingsGetInterestedServiceIds( tag_service_id ) ).union( self._CacheTagParentsGetInterestedServiceIds( tag_service_id ) ) + return set( self.modules_tag_siblings.GetInterestedServiceIds( tag_service_id ) ).union( self.modules_tag_parents.GetInterestedServiceIds( tag_service_id ) ) def _CacheTagDisplayGetSiblingsAndParentsForTags( self, tags ): @@ -2735,14 +2579,14 @@ class DB( HydrusDB.HydrusDB ): existing_tag_ids = { self.modules_tags.GetTagId( tag ) for tag in existing_tags } - tag_ids_to_ideal_tag_ids = self._CacheTagSiblingsGetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, existing_tag_ids ) + tag_ids_to_ideal_tag_ids = self.modules_tag_siblings.GetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, existing_tag_ids ) ideal_tag_ids = set( tag_ids_to_ideal_tag_ids.values() ) - ideal_tag_ids_to_sibling_chain_ids = self._CacheTagSiblingsGetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, 
tag_service_id, ideal_tag_ids ) + ideal_tag_ids_to_sibling_chain_ids = self.modules_tag_siblings.GetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) - ideal_tag_ids_to_descendant_tag_ids = self._CacheTagParentsGetTagsToDescendants( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) - ideal_tag_ids_to_ancestor_tag_ids = self._CacheTagParentsGetTagsToAncestors( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) + ideal_tag_ids_to_descendant_tag_ids = self.modules_tag_parents.GetTagsToDescendants( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) + ideal_tag_ids_to_ancestor_tag_ids = self.modules_tag_parents.GetTagsToAncestors( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) all_tag_ids = set() @@ -2783,7 +2627,7 @@ class DB( HydrusDB.HydrusDB ): else: - tag_ids_to_ideals = self._CacheTagSiblingsGetTagsToIdeals( display_type, tag_service_id, tag_ids ) + tag_ids_to_ideals = self.modules_tag_siblings.GetTagsToIdeals( display_type, tag_service_id, tag_ids ) tag_ids_that_exist_in_display = set() @@ -2804,13 +2648,13 @@ class DB( HydrusDB.HydrusDB ): # tags are implied by descendants, and their siblings - tag_ids_to_descendants = self._CacheTagParentsGetTagsToDescendants( display_type, tag_service_id, tag_ids_that_exist_in_display ) + tag_ids_to_descendants = self.modules_tag_parents.GetTagsToDescendants( display_type, tag_service_id, tag_ids_that_exist_in_display ) all_tags_and_descendants = set( tag_ids_that_exist_in_display ) all_tags_and_descendants.update( itertools.chain.from_iterable( tag_ids_to_descendants.values() ) ) # these are all ideal_tag_ids - all_tags_and_descendants_to_chains = self._CacheTagSiblingsGetIdealsToChains( display_type, tag_service_id, all_tags_and_descendants ) + all_tags_and_descendants_to_chains = self.modules_tag_siblings.GetIdealsToChains( display_type, tag_service_id, all_tags_and_descendants ) for ( tag_id, descendants ) in tag_ids_to_descendants.items(): @@ -2829,11 +2673,11 @@ class DB( HydrusDB.HydrusDB ): tag_ids_to_implies = collections.defaultdict( set ) - tag_ids_to_ideals = self._CacheTagSiblingsGetTagsToIdeals( display_type, tag_service_id, tag_ids ) + tag_ids_to_ideals = self.modules_tag_siblings.GetTagsToIdeals( display_type, tag_service_id, tag_ids ) ideal_tag_ids = set( tag_ids_to_ideals.values() ) - ideal_tag_ids_to_ancestors = self._CacheTagParentsGetTagsToAncestors( display_type, tag_service_id, ideal_tag_ids ) + ideal_tag_ids_to_ancestors = self.modules_tag_parents.GetTagsToAncestors( display_type, tag_service_id, ideal_tag_ids ) for ( tag_id, ideal_tag_id ) in tag_ids_to_ideals.items(): @@ -2861,13 +2705,13 @@ class DB( HydrusDB.HydrusDB ): existing_tag_ids = { self.modules_tags.GetTagId( tag ) for tag in existing_tags } - existing_tag_ids_to_ideal_tag_ids = self._CacheTagSiblingsGetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, existing_tag_ids ) + existing_tag_ids_to_ideal_tag_ids = self.modules_tag_siblings.GetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, existing_tag_ids ) ideal_tag_ids = set( existing_tag_ids_to_ideal_tag_ids.values() ) interesting_tag_ids_to_ideal_tag_ids = { tag_id : ideal_tag_id for ( tag_id, ideal_tag_id ) in existing_tag_ids_to_ideal_tag_ids.items() if tag_id != ideal_tag_id } - ideal_tag_ids_to_ancestor_tag_ids = self._CacheTagParentsGetTagsToAncestors( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) + ideal_tag_ids_to_ancestor_tag_ids = self.modules_tag_parents.GetTagsToAncestors( 
ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) existing_tag_ids_to_ancestor_tag_ids = { existing_tag_id : ideal_tag_ids_to_ancestor_tag_ids[ existing_tag_ids_to_ideal_tag_ids[ existing_tag_id ] ] for existing_tag_id in existing_tag_ids } @@ -2915,110 +2759,58 @@ class DB( HydrusDB.HydrusDB ): def _CacheTagDisplayIsChained( self, display_type, tag_service_id, tag_id ): - return self._CacheTagParentsIsChained( display_type, tag_service_id, tag_id ) or self._CacheTagSiblingsIsChained( display_type, tag_service_id, tag_id ) + return self.modules_tag_parents.IsChained( display_type, tag_service_id, tag_id ) or self.modules_tag_siblings.IsChained( display_type, tag_service_id, tag_id ) + + + def _CacheTagDisplayNotifyParentsChanged( self, tag_service_id_that_changed, tag_ids_that_changed ): + + if len( tag_ids_that_changed ) == 0: + + return + + + # the parents for tag_ids have changed for tag_service_id + # therefore any service that is interested in tag_service_ids's parents needs to regen the respective chains for these tags + + interested_tag_service_ids = self.modules_tag_parents.GetInterestedServiceIds( tag_service_id_that_changed ) + + self.modules_tag_parents.RegenChains( interested_tag_service_ids, tag_ids_that_changed ) + + + def _CacheTagDisplayNotifySiblingsChanged( self, tag_service_id_that_changed, tag_ids_that_changed ): + + if len( tag_ids_that_changed ) == 0: + + return + + + # the siblings for tag_ids have changed for tag_service_id + # therefore any service that is interested in tag_service_ids's siblings needs to regen the respective chains for these tags + + interested_tag_service_ids = self.modules_tag_siblings.GetInterestedServiceIds( tag_service_id_that_changed ) + + self.modules_tag_siblings.RegenChains( interested_tag_service_ids, tag_ids_that_changed ) + + # since siblings just changed, parents can as well! 
any interested service that has any of these tag_ids in its parent structure may need new ids, so let's regen + + self.modules_tag_parents.RegenChains( interested_tag_service_ids, tag_ids_that_changed ) + + #for interested_tag_service_id in interested_tag_service_ids: + + #self._CacheTagDisplayNotifyParentsChanged( interested_tag_service_id, tag_ids_that_changed ) + def _CacheTagDisplaySetApplication( self, service_keys_to_sibling_applicable_service_keys, service_keys_to_parent_applicable_service_keys ): - if self._service_ids_to_sibling_applicable_service_ids is None: - - self._CacheTagSiblingsGenerateApplicationDicts() - + sibling_service_ids_to_sync = self.modules_tag_siblings.SetApplication( service_keys_to_sibling_applicable_service_keys ) + parent_service_ids_to_sync = self.modules_tag_parents.SetApplication( service_keys_to_parent_applicable_service_keys ) - new_service_ids_to_sibling_applicable_service_ids = collections.defaultdict( list ) - - service_ids_to_sync = set() - - for ( master_service_key, applicable_service_keys ) in service_keys_to_sibling_applicable_service_keys.items(): - - master_service_id = self.modules_services.GetServiceId( master_service_key ) - applicable_service_ids = [ self.modules_services.GetServiceId( service_key ) for service_key in applicable_service_keys ] - - new_service_ids_to_sibling_applicable_service_ids[ master_service_id ] = applicable_service_ids - - - old_and_new_master_service_ids = set( self._service_ids_to_sibling_applicable_service_ids.keys() ) - old_and_new_master_service_ids.update( new_service_ids_to_sibling_applicable_service_ids.keys() ) - - inserts = [] - - for master_service_id in old_and_new_master_service_ids: - - if master_service_id in new_service_ids_to_sibling_applicable_service_ids: - - applicable_service_ids = new_service_ids_to_sibling_applicable_service_ids[ master_service_id ] - - inserts.extend( ( ( master_service_id, i, applicable_service_id ) for ( i, applicable_service_id ) in enumerate( applicable_service_ids ) ) ) - - if applicable_service_ids != self._service_ids_to_sibling_applicable_service_ids[ master_service_id ]: - - service_ids_to_sync.add( master_service_id ) - - - else: - - service_ids_to_sync.add( master_service_id ) - - - - self._Execute( 'DELETE FROM tag_sibling_application;' ) - - self._ExecuteMany( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? 
);', inserts ) - - self._service_ids_to_sibling_applicable_service_ids = None - self._service_ids_to_sibling_interested_service_ids = None + service_ids_to_sync = sibling_service_ids_to_sync.union( parent_service_ids_to_sync ) # - if self._service_ids_to_parent_applicable_service_ids is None: - - self._CacheTagParentsGenerateApplicationDicts() - - - new_service_ids_to_parent_applicable_service_ids = collections.defaultdict( list ) - - for ( master_service_key, applicable_service_keys ) in service_keys_to_parent_applicable_service_keys.items(): - - master_service_id = self.modules_services.GetServiceId( master_service_key ) - applicable_service_ids = [ self.modules_services.GetServiceId( service_key ) for service_key in applicable_service_keys ] - - new_service_ids_to_parent_applicable_service_ids[ master_service_id ] = applicable_service_ids - - - old_and_new_master_service_ids = set( self._service_ids_to_parent_applicable_service_ids.keys() ) - old_and_new_master_service_ids.update( new_service_ids_to_parent_applicable_service_ids.keys() ) - - inserts = [] - - for master_service_id in old_and_new_master_service_ids: - - if master_service_id in new_service_ids_to_parent_applicable_service_ids: - - applicable_service_ids = new_service_ids_to_parent_applicable_service_ids[ master_service_id ] - - inserts.extend( ( ( master_service_id, i, applicable_service_id ) for ( i, applicable_service_id ) in enumerate( applicable_service_ids ) ) ) - - if applicable_service_ids != self._service_ids_to_parent_applicable_service_ids[ master_service_id ]: - - service_ids_to_sync.add( master_service_id ) - - - else: - - service_ids_to_sync.add( master_service_id ) - - - - self._Execute( 'DELETE FROM tag_parent_application;' ) - - self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', inserts ) - - self._service_ids_to_parent_applicable_service_ids = None - self._service_ids_to_parent_interested_service_ids = None - - # - - self._RegenerateTagSiblingsCache( only_these_service_ids = service_ids_to_sync ) + self._RegenerateTagSiblingsAndParentsCache( only_these_service_ids = service_ids_to_sync ) self.pub_after_job( 'notify_new_tag_display_application' ) @@ -3117,8 +2909,8 @@ class DB( HydrusDB.HydrusDB ): # However, if we try that strategy when adding, we actually increase max job time, as those delayed big jobs only have the option of staying the same or getting bigger! We get zoom speed and then clunk mode. # Therefore, when adding, to limit max work time for the whole migration, we want to actually choose the largest jobs first! That work has to be done, and it doesn't get easier! - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = ClientDBTagSiblings.GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) + ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = ClientDBTagParents.GenerateTagParentsLookupCacheTableNames( tag_service_id ) def GetWeightedSiblingRow( sibling_rows, index ): @@ -3164,7 +2956,7 @@ class DB( HydrusDB.HydrusDB ): # first up, the removees. 
what is in actual but not ideal some_removee_sibling_rows = HydrusData.SampleSetByGettingFirst( sibling_rows_to_remove, 20 ) - some_removee_parent_rows = HydrusData.SampleSetByGettingFirst( parent_rows_to_remove, 20 ) + some_removee_parent_rows = HydrusData.SampleSetByGettingFirst( parent_rows_to_remove, 20 ) if len( some_removee_sibling_rows ) + len( some_removee_parent_rows ) > 0: @@ -3219,7 +3011,7 @@ class DB( HydrusDB.HydrusDB ): after_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) - sibling_rows_to_remove.discard( smallest_sibling_row ) + self.modules_tag_siblings.NotifySiblingDeleteRowSynced( tag_service_id, smallest_sibling_row ) if smallest_parent_row is not None: @@ -3241,11 +3033,9 @@ class DB( HydrusDB.HydrusDB ): after_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) - parent_rows_to_remove.discard( smallest_parent_row ) + self.modules_tag_parents.NotifyParentDeleteRowSynced( tag_service_id, smallest_parent_row ) - num_actual_rows -= 1 - else: # there is nothing to remove, so we'll now go for what is in ideal but not actual @@ -3286,7 +3076,6 @@ class DB( HydrusDB.HydrusDB ): if largest_sibling_row is not None: - # the only things changed here are those implied by or that imply one of these values ( a, b ) = largest_sibling_row @@ -3304,7 +3093,7 @@ class DB( HydrusDB.HydrusDB ): after_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) - sibling_rows_to_add.discard( largest_sibling_row ) + self.modules_tag_siblings.NotifySiblingAddRowSynced( tag_service_id, largest_sibling_row ) if largest_parent_row is not None: @@ -3326,15 +3115,11 @@ class DB( HydrusDB.HydrusDB ): after_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) - parent_rows_to_add.discard( largest_parent_row ) + self.modules_tag_parents.NotifyParentAddRowSynced( tag_service_id, largest_parent_row ) - num_actual_rows += 1 - else: - del self._service_ids_to_display_application_status[ tag_service_id ] - break @@ -3400,7 +3185,7 @@ class DB( HydrusDB.HydrusDB ): self._CacheCombinedFilesDisplayMappingsAddImplications( tag_service_id, implication_tag_ids, tag_id ) - self._service_ids_to_display_application_status[ tag_service_id ] = ( sibling_rows_to_add, sibling_rows_to_remove, parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) + ( sibling_rows_to_add, sibling_rows_to_remove, parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._CacheTagDisplayGetApplicationStatus( tag_service_id ) if len( all_tag_ids_altered ) > 0: @@ -3417,431 +3202,6 @@ class DB( HydrusDB.HydrusDB ): return still_needs_work - def _CacheTagParentsDrop( self, tag_service_id ): - - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) - - self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_parents_lookup_table_name ) ) - self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) - - - def _CacheTagParentsFilterChained( self, display_type, tag_service_id, ideal_tag_ids ): - - if len( ideal_tag_ids ) == 0: - - return set() - - elif len( ideal_tag_ids ) == 1: - - ( 
ideal_tag_id, ) = ideal_tag_ids - - if self._CacheTagParentsIsChained( display_type, tag_service_id, ideal_tag_id ): - - return { ideal_tag_id } - - else: - - return set() - - - - # get the tag_ids that are part of a parent chain - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - - with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'tag_id' ) as temp_table_name: - - # keep these separate--older sqlite can't do cross join to an OR ON - - # temp tags to lookup - chain_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) - chain_tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) ) - - - return chain_tag_ids - - - def _CacheTagParentsGenerate( self, tag_service_id ): - - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) - - self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_actual_tag_parents_lookup_table_name ) ) - self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_ideal_tag_parents_lookup_table_name ) ) - - self._CreateIndex( cache_actual_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] ) - self._CreateIndex( cache_ideal_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] ) - - self.modules_db_maintenance.AnalyzeTable( cache_actual_tag_parents_lookup_table_name ) - self.modules_db_maintenance.AnalyzeTable( cache_ideal_tag_parents_lookup_table_name ) - - self._CacheTagParentsRegen( ( tag_service_id, ) ) - - - def _CacheTagParentsGenerateApplicationDicts( self ): - - unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._Execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_parent_application;' ) ) - - self._service_ids_to_parent_applicable_service_ids = collections.defaultdict( list ) - - self._service_ids_to_parent_applicable_service_ids.update( { master_service_id : [ application_service_id for ( index, application_service_id ) in sorted( index_and_applicable_service_ids ) ] for ( master_service_id, index_and_applicable_service_ids ) in unsorted_dict.items() } ) - - self._service_ids_to_parent_interested_service_ids = collections.defaultdict( set ) - - for ( master_service_id, application_service_ids ) in self._service_ids_to_parent_applicable_service_ids.items(): - - for application_service_id in application_service_ids: - - self._service_ids_to_parent_interested_service_ids[ application_service_id ].add( master_service_id ) - - - - - def _CacheTagParentsGetAncestors( self, display_type: int, tag_service_id: int, ideal_tag_id: int ): - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - - ancestor_ids = self._STS( self._Execute( 'SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) ) - - return ancestor_ids - - - def _CacheTagParentsGetApplicableServiceIds( self, tag_service_id ): - - if 
self._service_ids_to_parent_applicable_service_ids is None: - - self._CacheTagParentsGenerateApplicationDicts() - - - return self._service_ids_to_parent_applicable_service_ids[ tag_service_id ] - - - def _CacheTagParentsGetChainsMembers( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ): - - if len( ideal_tag_ids ) == 0: - - return set() - - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - - chain_tag_ids = set( ideal_tag_ids ) - we_have_looked_up = set() - next_search_tag_ids = set( ideal_tag_ids ) - - while len( next_search_tag_ids ) > 0: - - if len( next_search_tag_ids ) == 1: - - ( ideal_tag_id, ) = next_search_tag_ids - - round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) ) - - else: - - with self._MakeTemporaryIntegerTable( next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name: - - round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) ) - - - - chain_tag_ids.update( round_of_tag_ids ) - - we_have_looked_up.update( next_search_tag_ids ) - - next_search_tag_ids = round_of_tag_ids.difference( we_have_looked_up ) - - - return chain_tag_ids - - - def _CacheTagParentsGetChainsMembersTables( self, display_type: int, tag_service_id: int, ideal_tag_ids_table_name: str, results_table_name: str ): - - # if it isn't crazy, I should write this whole lad to be one or two recursive queries - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - - first_ideal_tag_ids = self._STS( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( ideal_tag_ids_table_name ) ) ) - - chain_tag_ids = set( first_ideal_tag_ids ) - we_have_looked_up = set() - next_search_tag_ids = set( first_ideal_tag_ids ) - - while len( next_search_tag_ids ) > 0: - - if len( next_search_tag_ids ) == 1: - - ( ideal_tag_id, ) = next_search_tag_ids - - round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) ) - - else: - - with self._MakeTemporaryIntegerTable( next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name: - - round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) ) - - - - new_tag_ids = round_of_tag_ids.difference( chain_tag_ids ) - - if len( new_tag_ids ) > 0: - - self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( tag_id ) VALUES ( ? 
);', ( ( tag_id, ) for tag_id in round_of_tag_ids.difference( new_tag_ids ) ) ) - - chain_tag_ids.update( new_tag_ids ) - - - we_have_looked_up.update( next_search_tag_ids ) - - next_search_tag_ids = round_of_tag_ids.difference( we_have_looked_up ) - - - - def _CacheTagParentsGetDescendants( self, display_type: int, tag_service_id: int, ideal_tag_id: int ): - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - - descendant_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) ) - - return descendant_ids - - - def _CacheTagParentsGetInterestedServiceIds( self, tag_service_id ): - - if self._service_ids_to_parent_interested_service_ids is None: - - self._CacheTagParentsGenerateApplicationDicts() - - - return self._service_ids_to_parent_interested_service_ids[ tag_service_id ] - - - def _CacheTagParentsGetTagsToAncestors( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ): - - if len( ideal_tag_ids ) == 0: - - return {} - - elif len( ideal_tag_ids ) == 1: - - ( ideal_tag_id, ) = ideal_tag_ids - - ancestors = self._CacheTagParentsGetAncestors( display_type, tag_service_id, ideal_tag_id ) - - return { ideal_tag_id : ancestors } - - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - - with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'child_tag_id' ) as temp_table_name: - - tag_ids_to_ancestors = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {} CROSS JOIN {} USING ( child_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) - - - for tag_id in ideal_tag_ids: - - if tag_id not in tag_ids_to_ancestors: - - tag_ids_to_ancestors[ tag_id ] = set() - - - - return tag_ids_to_ancestors - - - def _CacheTagParentsGetTagsToDescendants( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ): - - if len( ideal_tag_ids ) == 0: - - return {} - - elif len( ideal_tag_ids ) == 1: - - ( ideal_tag_id, ) = ideal_tag_ids - - descendants = self._CacheTagParentsGetDescendants( display_type, tag_service_id, ideal_tag_id ) - - return { ideal_tag_id : descendants } - - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - - with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ancestor_tag_id' ) as temp_table_name: - - tag_ids_to_descendants = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT ancestor_tag_id, child_tag_id FROM {} CROSS JOIN {} USING ( ancestor_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) - - - for ideal_tag_id in ideal_tag_ids: - - if ideal_tag_id not in tag_ids_to_descendants: - - tag_ids_to_descendants[ ideal_tag_id ] = set() - - - - return tag_ids_to_descendants - - - def _CacheTagParentsIdealiseStatusesToPairIds( self, tag_service_id, unideal_statuses_to_pair_ids ): - - all_tag_ids = set( itertools.chain.from_iterable( ( itertools.chain.from_iterable( pair_ids ) for pair_ids in unideal_statuses_to_pair_ids.values() ) ) ) - - tag_ids_to_ideal_tag_ids = self._CacheTagSiblingsGetTagsToIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, all_tag_ids ) - - ideal_statuses_to_pair_ids = collections.defaultdict( list ) - - for ( status, pair_ids ) in unideal_statuses_to_pair_ids.items(): - - ideal_pair_ids = sorted( ( ( tag_ids_to_ideal_tag_ids[ 
child_tag_id ], tag_ids_to_ideal_tag_ids[ parent_tag_id ] ) for ( child_tag_id, parent_tag_id ) in pair_ids ) ) - - ideal_statuses_to_pair_ids[ status ] = ideal_pair_ids - - - return ideal_statuses_to_pair_ids - - - def _CacheTagParentsIsChained( self, display_type, tag_service_id, ideal_tag_id ): - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - - return self._Execute( 'SELECT 1 FROM {} WHERE child_tag_id = ? OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ).fetchone() is not None - - - def _CacheTagParentsRegen( self, tag_service_ids ): - - for tag_service_id in tag_service_ids: - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) - - self._Execute( 'DELETE FROM {};'.format( cache_tag_parents_lookup_table_name ) ) - - applicable_service_ids = self._CacheTagParentsGetApplicableServiceIds( tag_service_id ) - - tps = ClientTagsHandling.TagParentsStructure() - - for applicable_service_id in applicable_service_ids: - - unideal_statuses_to_pair_ids = self._GetTagParentsIds( service_id = applicable_service_id ) - - # we have to collapse the parent ids according to siblings - - ideal_statuses_to_pair_ids = self._CacheTagParentsIdealiseStatusesToPairIds( tag_service_id, unideal_statuses_to_pair_ids ) - - # - - petitioned_fast_lookup = set( ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] ) - - for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]: - - if ( child_tag_id, parent_tag_id ) in petitioned_fast_lookup: - - continue - - - tps.AddPair( child_tag_id, parent_tag_id ) - - - for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]: - - tps.AddPair( child_tag_id, parent_tag_id ) - - - - self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() ) - - if tag_service_id in self._service_ids_to_display_application_status: - - del self._service_ids_to_display_application_status[ tag_service_id ] - - - - - def _CacheTagParentsRegenChains( self, tag_service_ids, tag_ids ): - - if len( tag_ids ) == 0: - - return - - - for tag_service_id in tag_service_ids: - - cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) - - # it is possible that the parents cache currently contains non-ideal tag_ids - # so, to be safe, we'll also get all sibling chain members - - tag_ids_to_clear_and_regen = set( tag_ids ) - - ideal_tag_ids = self._CacheTagSiblingsGetIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids ) - - tag_ids_to_clear_and_regen.update( self._CacheTagSiblingsGetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, ideal_tag_ids ) ) - - # and now all possible current parent chains based on this - - tag_ids_to_clear_and_regen.update( self._CacheTagParentsGetChainsMembers( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids_to_clear_and_regen ) ) - - # this should now contain all possible tag_ids that could be in tag parents right now related to what we were given - - self._ExecuteMany( 'DELETE FROM {} WHERE child_tag_id = ? 
OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) ) - - # we wipe them - - applicable_tag_service_ids = self._CacheTagParentsGetApplicableServiceIds( tag_service_id ) - - tps = ClientTagsHandling.TagParentsStructure() - - for applicable_tag_service_id in applicable_tag_service_ids: - - service_key = self.modules_services.GetService( applicable_tag_service_id ).GetServiceKey() - - unideal_statuses_to_pair_ids = self._GetTagParentsIdsChains( applicable_tag_service_id, tag_ids_to_clear_and_regen ) - - ideal_statuses_to_pair_ids = self._CacheTagParentsIdealiseStatusesToPairIds( tag_service_id, unideal_statuses_to_pair_ids ) - - # - - petitioned_fast_lookup = set( ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] ) - - for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]: - - if ( child_tag_id, parent_tag_id ) in petitioned_fast_lookup: - - continue - - - tps.AddPair( child_tag_id, parent_tag_id ) - - - for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]: - - tps.AddPair( child_tag_id, parent_tag_id ) - - - - self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() ) - - if tag_service_id in self._service_ids_to_display_application_status: - - del self._service_ids_to_display_application_status[ tag_service_id ] - - - - - def _CacheTagParentsParentsChanged( self, tag_service_id_that_changed, tag_ids_that_changed ): - - if len( tag_ids_that_changed ) == 0: - - return - - - # the parents for tag_ids have changed for tag_service_id - # therefore any service that is interested in tag_service_ids's siblings needs to regen the respective chains for these tags - - interested_tag_service_ids = self._CacheTagParentsGetInterestedServiceIds( tag_service_id_that_changed ) - - self._CacheTagParentsRegenChains( interested_tag_service_ids, tag_ids_that_changed ) - - def _CacheTagsAddTags( self, file_service_id, tag_service_id, tag_ids ): if len( tag_ids ) == 0: @@ -4088,8 +3448,8 @@ class DB( HydrusDB.HydrusDB ): def _CacheTagsPopulate( self, file_service_id, tag_service_id, status_hook = None ): ac_cache_table_name = self._CacheMappingsGetACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, file_service_id, tag_service_id ) - siblings_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) - parents_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) + siblings_table_name = ClientDBTagSiblings.GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) + parents_table_name = ClientDBTagParents.GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) queries = [ 'SELECT tag_id FROM {}'.format( ac_cache_table_name ), @@ -4101,8 +3461,6 @@ class DB( HydrusDB.HydrusDB ): query = ' UNION '.join( queries ) + ';' - queries = [ query ] - BLOCK_SIZE = 10000 for ( group_of_tag_ids, num_done, num_to_do ) in HydrusDB.ReadLargeIdQueryInSeparateChunks( self._c, query, BLOCK_SIZE ): @@ -4269,534 +3627,6 @@ class DB( HydrusDB.HydrusDB ): - def _CacheTagSiblingsDrop( self, tag_service_id ): - - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) - - self._Execute( 'DROP TABLE IF 
EXISTS {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) - self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) - - - def _CacheTagSiblingsFilterChained( self, display_type, tag_service_id, tag_ids ): - - if len( tag_ids ) == 0: - - return set() - - elif len( tag_ids ) == 1: - - ( tag_id, ) = tag_ids - - if self._CacheTagSiblingsIsChained( display_type, tag_service_id, tag_id ): - - return { tag_id } - - else: - - return set() - - - - # get the tag_ids that are part of a sibling chain - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name: - - # keep these separate--older sqlite can't do cross join to an OR ON - - # temp tags to lookup - chain_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) - chain_tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) ) - - - return chain_tag_ids - - - def _CacheTagSiblingsFilterChainedIdealsIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ): - - # get the tag_ids that are part of a sibling chain - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - # keep these separate--older sqlite can't do cross join to an OR ON - - # temp tags to lookup - self._Execute( 'INSERT OR IGNORE INTO {} SELECT ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) - self._STI( self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) ) - - - def _CacheTagSiblingsFilterChainedIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ): - - # get the tag_ids that are part of a sibling chain - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - # keep these separate--older sqlite can't do cross join to an OR ON - - # temp tags to lookup - self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) - self._STI( self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) ) - - - def _CacheTagSiblingsGenerate( self, tag_service_id ): - - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) - - self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) ) - self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) ) - - self._CreateIndex( cache_actual_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] ) - self._CreateIndex( 
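# [editor's note: a self-contained sketch of why the 'keep these separate' comments
# exist, not hydrus code. Per the comment above, older SQLite can't take a CROSS JOIN
# whose ON clause is 'a = x OR b = x', so the code runs two single-condition queries and
# unions the results in python. All table names here are hypothetical.]

import sqlite3

db = sqlite3.connect( ':memory:' )

db.execute( 'CREATE TABLE siblings ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );' )
db.execute( 'CREATE INDEX siblings_ideal ON siblings ( ideal_tag_id );' )
db.executemany( 'INSERT INTO siblings VALUES ( ?, ? );', [ ( 1, 2 ), ( 3, 2 ) ] )

db.execute( 'CREATE TABLE temp_tags ( tag_id INTEGER PRIMARY KEY );' )
db.executemany( 'INSERT INTO temp_tags VALUES ( ? );', [ ( 2, ), ( 3, ), ( 10, ) ] )

# one indexable condition per query, rather than one query with an OR ON
chained = { tag_id for ( tag_id, ) in db.execute( 'SELECT tag_id FROM temp_tags CROSS JOIN siblings ON ( bad_tag_id = tag_id );' ) }
chained.update( tag_id for ( tag_id, ) in db.execute( 'SELECT tag_id FROM temp_tags CROSS JOIN siblings ON ( ideal_tag_id = tag_id );' ) )

print( chained ) # {2, 3} -- 10 is not in any sibling chain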
cache_ideal_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] ) - - self.modules_db_maintenance.AnalyzeTable( cache_actual_tag_siblings_lookup_table_name ) - self.modules_db_maintenance.AnalyzeTable( cache_ideal_tag_siblings_lookup_table_name ) - - self._CacheTagSiblingsRegen( ( tag_service_id, ) ) - - - def _CacheTagSiblingsGenerateApplicationDicts( self ): - - unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._Execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_sibling_application;' ) ) - - self._service_ids_to_sibling_applicable_service_ids = collections.defaultdict( list ) - - self._service_ids_to_sibling_applicable_service_ids.update( { master_service_id : [ application_service_id for ( index, application_service_id ) in sorted( index_and_applicable_service_ids ) ] for ( master_service_id, index_and_applicable_service_ids ) in unsorted_dict.items() } ) - - self._service_ids_to_sibling_interested_service_ids = collections.defaultdict( set ) - - for ( master_service_id, application_service_ids ) in self._service_ids_to_sibling_applicable_service_ids.items(): - - for application_service_id in application_service_ids: - - self._service_ids_to_sibling_interested_service_ids[ application_service_id ].add( master_service_id ) - - - - - def _CacheTagSiblingsGetChainMembersFromIdeal( self, display_type, tag_service_id, ideal_tag_id ) -> typing.Set[ int ]: - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - sibling_tag_ids = self._STS( self._Execute( 'SELECT bad_tag_id FROM {} WHERE ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ideal_tag_id, ) ) ) - - sibling_tag_ids.add( ideal_tag_id ) - - return sibling_tag_ids - - - def _CacheTagSiblingsGetChainsMembersFromIdeals( self, display_type, tag_service_id, ideal_tag_ids ) -> typing.Set[ int ]: - - if len( ideal_tag_ids ) == 0: - - return set() - - elif len( ideal_tag_ids ) == 1: - - ( ideal_tag_id, ) = ideal_tag_ids - - return self._CacheTagSiblingsGetChainMembersFromIdeal( display_type, tag_service_id, ideal_tag_id ) - - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name: - - # temp tags to lookup - sibling_tag_ids = self._STS( self._Execute( 'SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) - - - sibling_tag_ids.update( ideal_tag_ids ) - - return sibling_tag_ids - - - def _CacheTagSiblingsGetChainsMembersFromIdealsTables( self, display_type, tag_service_id, ideal_tag_ids_table_name, results_table_name ) -> typing.Set[ int ]: - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT ideal_tag_id FROM {};'.format( results_table_name, ideal_tag_ids_table_name ) ) - - # tags to lookup - self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( results_table_name, ideal_tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) - - - def _CacheTagSiblingsGetApplicableServiceIds( self, tag_service_id ): - - if self._service_ids_to_sibling_applicable_service_ids is None: - - 
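# [editor's note: a sketch of what GenerateApplicationDicts computes, not hydrus code.
# The same ( master_service_id, service_index, application_service_id ) rows feed two
# views: 'applicable' = which services' pairs apply to a master service, in index order,
# and 'interested' = the reverse map, used to decide which caches to regen on a change.]

import collections

def build_application_dicts( rows ):
    
    applicable = collections.defaultdict( list )
    
    # sorting by ( master_service_id, service_index ) puts each list in application order
    for ( master_service_id, service_index, application_service_id ) in sorted( rows ):
        
        applicable[ master_service_id ].append( application_service_id )
        
    
    interested = collections.defaultdict( set )
    
    for ( master_service_id, application_service_ids ) in applicable.items():
        
        for application_service_id in application_service_ids:
            
            interested[ application_service_id ].add( master_service_id )
            
        
    
    return ( applicable, interested )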
self._CacheTagSiblingsGenerateApplicationDicts() - - - return self._service_ids_to_sibling_applicable_service_ids[ tag_service_id ] - - - def _CacheTagSiblingsGetIdeal( self, display_type, tag_service_id, tag_id ) -> int: - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - result = self._Execute( 'SELECT ideal_tag_id FROM {} WHERE bad_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, ) ).fetchone() - - if result is None: - - return tag_id - - else: - - ( ideal_tag_id, ) = result - - return ideal_tag_id - - - - def _CacheTagSiblingsGetIdeals( self, display_type, tag_service_id, tag_ids ) -> typing.Set[ int ]: - - if not isinstance( tag_ids, set ): - - tag_ids = set( tag_ids ) - - - if len( tag_ids ) == 0: - - return set() - - elif len( tag_ids ) == 1: - - ( tag_id, ) = tag_ids - - return { self._CacheTagSiblingsGetIdeal( display_type, tag_service_id, tag_id ) } - - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name: - - magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END' - - cursor = self._Execute( - 'SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format( - magic_case, - temp_tag_ids_table_name, - cache_tag_siblings_lookup_table_name - ) - ) - - return self._STS( cursor ) - - - ''' - no_ideal_found_tag_ids = set( tag_ids ) - ideal_tag_ids = set() - - with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name: - - # temp tags to lookup - for ( tag_id, ideal_tag_id ) in self._Execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ): - - no_ideal_found_tag_ids.discard( tag_id ) - ideal_tag_ids.add( ideal_tag_id ) - - - ideal_tag_ids.update( no_ideal_found_tag_ids ) - - - return ideal_tag_ids - ''' - - - def _CacheTagSiblingsGetIdealsIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ): - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END' - - cursor = self._Execute( - 'INSERT OR IGNORE INTO {} ( ideal_tag_id ) SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format( - results_table_name, - magic_case, - tag_ids_table_name, - cache_tag_siblings_lookup_table_name - ) - ) - - return self._STS( cursor ) - - - def _CacheTagSiblingsGetIdealsToChains( self, display_type, tag_service_id, ideal_tag_ids ): - - # this only takes ideal_tag_ids - - if len( ideal_tag_ids ) == 0: - - return {} - - elif len( ideal_tag_ids ) == 1: - - ( ideal_tag_id, ) = ideal_tag_ids - - chain_tag_ids = self._CacheTagSiblingsGetChainMembersFromIdeal( display_type, tag_service_id, ideal_tag_id ) - - return { ideal_tag_id : chain_tag_ids } - - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name: - - # temp tags to lookup - ideal_tag_ids_to_chain_members = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT ideal_tag_id, bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) - - - # this returns ideal in the chain, 
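# [editor's note: a runnable illustration of the 'magic_case' trick used in GetIdeals,
# with hypothetical table names. The CASE is just a null-coalesce over the LEFT OUTER
# JOIN: a tag with no row in the lookup table is its own ideal. SQLite's IFNULL
# expresses the same thing more tersely.]

import sqlite3

db = sqlite3.connect( ':memory:' )

db.execute( 'CREATE TABLE siblings ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );' )
db.execute( 'INSERT INTO siblings VALUES ( 1, 2 );' )

db.execute( 'CREATE TABLE temp_tags ( tag_id INTEGER PRIMARY KEY );' )
db.executemany( 'INSERT INTO temp_tags VALUES ( ? );', [ ( 1, ), ( 5, ) ] )

query = 'SELECT IFNULL( ideal_tag_id, tag_id ) FROM temp_tags LEFT OUTER JOIN siblings ON ( tag_id = bad_tag_id );'

print( { ideal for ( ideal, ) in db.execute( query ) } ) # {2, 5} -- 5 has no sibling row, so it maps to itself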
and chains of size 1 - - for ideal_tag_id in ideal_tag_ids: - - ideal_tag_ids_to_chain_members[ ideal_tag_id ].add( ideal_tag_id ) - - - return ideal_tag_ids_to_chain_members - - - def _CacheTagSiblingsGetInterestedServiceIds( self, tag_service_id ): - - if self._service_ids_to_sibling_interested_service_ids is None: - - self._CacheTagSiblingsGenerateApplicationDicts() - - - return self._service_ids_to_sibling_interested_service_ids[ tag_service_id ] - - - def _CacheTagSiblingsGetTagSiblingsForTags( self, service_key, tags ): - - if service_key == CC.COMBINED_TAG_SERVICE_KEY: - - tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES ) - - else: - - tag_service_ids = ( self.modules_services.GetServiceId( service_key ), ) - - - existing_tags = { tag for tag in tags if self.modules_tags.TagExists( tag ) } - - existing_tag_ids = { self.modules_tags.GetTagId( tag ) for tag in existing_tags } - - tag_ids_to_chain_tag_ids = collections.defaultdict( set ) - - for tag_service_id in tag_service_ids: - - tag_ids_to_ideal_tag_ids = self._CacheTagSiblingsGetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, existing_tag_ids ) - - ideal_tag_ids = set( tag_ids_to_ideal_tag_ids.values() ) - - ideal_tag_ids_to_chain_tag_ids = self._CacheTagSiblingsGetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) - - for tag_id in existing_tag_ids: - - chain_tag_ids = ideal_tag_ids_to_chain_tag_ids[ tag_ids_to_ideal_tag_ids[ tag_id ] ] - - tag_ids_to_chain_tag_ids[ tag_id ].update( chain_tag_ids ) - - - - all_tag_ids = set( tag_ids_to_chain_tag_ids.keys() ) - all_tag_ids.update( itertools.chain.from_iterable( tag_ids_to_chain_tag_ids.values() ) ) - - tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids ) - - tags_to_siblings = { tag_ids_to_tags[ tag_id ] : { tag_ids_to_tags[ chain_tag_id ] for chain_tag_id in chain_tag_ids } for ( tag_id, chain_tag_ids ) in tag_ids_to_chain_tag_ids.items() } - - for tag in tags: - - if tag not in existing_tags: - - tags_to_siblings[ tag ] = { tag } - - - - return tags_to_siblings - - - def _CacheTagSiblingsGetTagSiblingsIdeals( self, service_key ): - - tag_service_id = self.modules_services.GetServiceId( service_key ) - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) - - pair_ids = self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ).fetchall() - - all_tag_ids = set( itertools.chain.from_iterable( pair_ids ) ) - - tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids ) - - tags_to_ideals = { tag_ids_to_tags[ bad_tag_id ] : tag_ids_to_tags[ good_tag_id ] for ( bad_tag_id, good_tag_id ) in pair_ids } - - return tags_to_ideals - - - def _CacheTagSiblingsGetTagsToIdeals( self, display_type, tag_service_id, tag_ids ): - - if not isinstance( tag_ids, set ): - - tag_ids = set( tag_ids ) - - - if len( tag_ids ) == 0: - - return {} - - elif len( tag_ids ) == 1: - - ( tag_id, ) = tag_ids - - return { tag_id : self._CacheTagSiblingsGetIdeal( display_type, tag_service_id, tag_id ) } - - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - no_ideal_found_tag_ids = set( tag_ids ) - tag_ids_to_ideal_tag_ids = {} - - with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name: - - # temp tags to lookup - for ( tag_id, ideal_tag_id ) in self._Execute( 'SELECT 
tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ): - - no_ideal_found_tag_ids.discard( tag_id ) - tag_ids_to_ideal_tag_ids[ tag_id ] = ideal_tag_id - - - tag_ids_to_ideal_tag_ids.update( { tag_id : tag_id for tag_id in no_ideal_found_tag_ids } ) - - - return tag_ids_to_ideal_tag_ids - - - def _CacheTagSiblingsIsChained( self, display_type, tag_service_id, tag_id ): - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) - - return self._Execute( 'SELECT 1 FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, tag_id ) ).fetchone() is not None - - - def _CacheTagSiblingsRegen( self, tag_service_ids ): - - for tag_service_id in tag_service_ids: - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) - - self._Execute( 'DELETE FROM {};'.format( cache_tag_siblings_lookup_table_name ) ) - - applicable_service_ids = self._CacheTagSiblingsGetApplicableServiceIds( tag_service_id ) - - tss = ClientTagsHandling.TagSiblingsStructure() - - for applicable_service_id in applicable_service_ids: - - statuses_to_pair_ids = self._GetTagSiblingsIds( service_id = applicable_service_id ) - - petitioned_fast_lookup = set( statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] ) - - for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]: - - if ( bad_tag_id, good_tag_id ) in petitioned_fast_lookup: - - continue - - - tss.AddPair( bad_tag_id, good_tag_id ) - - - for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]: - - tss.AddPair( bad_tag_id, good_tag_id ) - - - - self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() ) - - if tag_service_id in self._service_ids_to_display_application_status: - - del self._service_ids_to_display_application_status[ tag_service_id ] - - - - - def _CacheTagSiblingsRegenChains( self, tag_service_ids, tag_ids ): - - if len( tag_ids ) == 0: - - return - - - for tag_service_id in tag_service_ids: - - cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) - - tag_ids_to_clear_and_regen = set( tag_ids ) - - ideal_tag_ids = self._CacheTagSiblingsGetIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids ) - - tag_ids_to_clear_and_regen.update( self._CacheTagSiblingsGetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, ideal_tag_ids ) ) - - self._ExecuteMany( 'DELETE FROM {} WHERE bad_tag_id = ? 
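# [editor's note: a pure-python model of the first step of RegenChains, not hydrus code.
# Before clearing rows, the changed tag set is widened to every member of every chain it
# touches: changed tags -> their ideals -> everything that maps to those ideals. The
# dict-based helper below is a hypothetical stand-in for the lookup-table queries.]

def expand_to_full_chains( changed_tag_ids, bad_to_ideal ):
    
    ideal_to_members = {}
    
    for ( bad_tag_id, ideal_tag_id ) in bad_to_ideal.items():
        
        ideal_to_members.setdefault( ideal_tag_id, set() ).add( bad_tag_id )
        
    
    ideal_tag_ids = { bad_to_ideal.get( tag_id, tag_id ) for tag_id in changed_tag_ids }
    
    expanded = set( changed_tag_ids )
    expanded.update( ideal_tag_ids )
    
    for ideal_tag_id in ideal_tag_ids:
        
        expanded.update( ideal_to_members.get( ideal_tag_id, () ) )
        
    
    return expanded

# expand_to_full_chains( { 1 }, { 1 : 2, 3 : 2 } ) -> { 1, 2, 3 }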
OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) ) - - applicable_tag_service_ids = self._CacheTagSiblingsGetApplicableServiceIds( tag_service_id ) - - tss = ClientTagsHandling.TagSiblingsStructure() - - for applicable_tag_service_id in applicable_tag_service_ids: - - service_key = self.modules_services.GetService( applicable_tag_service_id ).GetServiceKey() - - statuses_to_pair_ids = self._GetTagSiblingsIdsChains( applicable_tag_service_id, tag_ids_to_clear_and_regen ) - - petitioned_fast_lookup = set( statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] ) - - for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]: - - if ( bad_tag_id, good_tag_id ) in petitioned_fast_lookup: - - continue - - - tss.AddPair( bad_tag_id, good_tag_id ) - - - for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]: - - tss.AddPair( bad_tag_id, good_tag_id ) - - - - self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() ) - - if tag_service_id in self._service_ids_to_display_application_status: - - del self._service_ids_to_display_application_status[ tag_service_id ] - - - # as siblings may have changed for these tags, parents may have as well - self._CacheTagParentsRegenChains( ( tag_service_id, ), tag_ids_to_clear_and_regen ) - - - - def _CacheTagSiblingsSiblingsChanged( self, tag_service_id_that_changed, tag_ids_that_changed ): - - if len( tag_ids_that_changed ) == 0: - - return - - - # the siblings for tag_ids have changed for tag_service_id - # therefore any service that is interested in tag_service_ids's siblings needs to regen the respective chains for these tags - - interested_tag_service_ids = self._CacheTagSiblingsGetInterestedServiceIds( tag_service_id_that_changed ) - - self._CacheTagSiblingsRegenChains( interested_tag_service_ids, tag_ids_that_changed ) - - def _CheckDBIntegrity( self ): prefix_string = 'checking db integrity: ' @@ -4998,11 +3828,11 @@ class DB( HydrusDB.HydrusDB ): # main - self.modules_db_maintenance.CreateInitialTables() - self.modules_db_maintenance.CreateInitialIndices() - - self.modules_services.CreateInitialTables() - self.modules_services.CreateInitialIndices() + for module in self._modules: + + module.CreateInitialTables() + module.CreateInitialIndices() + self._Execute( 'CREATE TABLE IF NOT EXISTS alternate_file_groups ( alternates_group_id INTEGER PRIMARY KEY );' ) @@ -5026,22 +3856,10 @@ class DB( HydrusDB.HydrusDB ): self._Execute( 'CREATE TABLE IF NOT EXISTS confirmed_alternate_pairs ( smaller_media_id INTEGER, larger_media_id INTEGER, PRIMARY KEY ( smaller_media_id, larger_media_id ) );' ) self._CreateIndex( 'confirmed_alternate_pairs', [ 'larger_media_id', 'smaller_media_id' ], unique = True ) - self.modules_files_storage.CreateInitialTables() - self.modules_files_storage.CreateInitialIndices() - - self.modules_files_metadata_basic.CreateInitialTables() - self.modules_files_metadata_basic.CreateInitialIndices() - - self.modules_files_maintenance.CreateInitialTables() - self.modules_files_maintenance.CreateInitialIndices() - self._Execute( 'CREATE TABLE file_notes ( hash_id INTEGER, name_id INTEGER, note_id INTEGER, PRIMARY KEY ( hash_id, name_id ) );' ) self._CreateIndex( 'file_notes', [ 'note_id' ] ) self._CreateIndex( 'file_notes', [ 'name_id' ] ) - self.modules_serialisable.CreateInitialTables() - 
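# [editor's note: a reduced model of the '+ for module in self._modules' hunk below,
# which replaces a long list of hand-written per-module table-creation calls with one
# loop. All names here are hypothetical; the only contract is that modules are appended
# to self._modules in dependency order.]

class DBModule:
    
    def CreateInitialTables( self ): pass
    
    def CreateInitialIndices( self ): pass
    

class DBSketch:
    
    def __init__( self ):
        
        # appended in dependency order during module registration
        self._modules = [ DBModule(), DBModule() ]
        
    
    def _CreateDB( self ):
        
        for module in self._modules:
            
            module.CreateInitialTables()
            module.CreateInitialIndices()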
self.modules_serialisable.CreateInitialIndices() - self._Execute( 'CREATE TABLE local_ratings ( service_id INTEGER, hash_id INTEGER, rating REAL, PRIMARY KEY ( service_id, hash_id ) );' ) self._CreateIndex( 'local_ratings', [ 'hash_id' ] ) self._CreateIndex( 'local_ratings', [ 'rating' ] ) @@ -5069,22 +3887,6 @@ class DB( HydrusDB.HydrusDB ): self._Execute( 'CREATE TABLE statuses ( status_id INTEGER PRIMARY KEY, status TEXT UNIQUE );' ) - self._Execute( 'CREATE TABLE tag_parents ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' ) - self._CreateIndex( 'tag_parents', [ 'service_id', 'parent_tag_id' ] ) - - self._Execute( 'CREATE TABLE tag_parent_petitions ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' ) - self._CreateIndex( 'tag_parent_petitions', [ 'service_id', 'parent_tag_id' ] ) - - self._Execute( 'CREATE TABLE tag_parent_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' ) - - self._Execute( 'CREATE TABLE tag_siblings ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' ) - self._CreateIndex( 'tag_siblings', [ 'service_id', 'good_tag_id' ] ) - - self._Execute( 'CREATE TABLE tag_sibling_petitions ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' ) - self._CreateIndex( 'tag_sibling_petitions', [ 'service_id', 'good_tag_id' ] ) - - self._Execute( 'CREATE TABLE tag_sibling_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' ) - self._Execute( 'CREATE TABLE url_map ( hash_id INTEGER, url_id INTEGER, PRIMARY KEY ( hash_id, url_id ) );' ) self._CreateIndex( 'url_map', [ 'url_id' ] ) @@ -5096,35 +3898,6 @@ class DB( HydrusDB.HydrusDB ): self._Execute( 'CREATE TABLE version ( version INTEGER );' ) - # caches - - self.modules_similar_files.CreateInitialTables() - self.modules_similar_files.CreateInitialIndices() - - self._CreateDBCaches() - - # master - - self.modules_hashes.CreateInitialTables() - self.modules_hashes.CreateInitialIndices() - - self.modules_tags.CreateInitialTables() - self.modules_tags.CreateInitialIndices() - - self.modules_urls.CreateInitialTables() - self.modules_urls.CreateInitialIndices() - - self.modules_texts.CreateInitialTables() - self.modules_texts.CreateInitialIndices() - - # caches - - self.modules_hashes_local_cache.CreateInitialTables() - self.modules_hashes_local_cache.CreateInitialIndices() - - self.modules_tags_local_cache.CreateInitialTables() - self.modules_tags_local_cache.CreateInitialIndices() - # inserts location = HydrusPaths.ConvertAbsPathToPortablePath( client_files_default ) @@ -5270,11 +4043,6 @@ class DB( HydrusDB.HydrusDB ): self._ExecuteMany( 'INSERT INTO json_dumps_named VALUES ( ?, ?, ?, ?, ? 
);', ClientDefaults.GetDefaultScriptRows() ) - def _CreateDBCaches( self ): - - pass - - def _CullFileViewingStatistics( self ): media_min = self._controller.new_options.GetNoneableInteger( 'file_viewing_statistics_media_min_time' ) @@ -5536,26 +4304,17 @@ class DB( HydrusDB.HydrusDB ): if service_type in HC.REAL_TAG_SERVICES: - self._DeleteServiceDropTagParents( service_id ) - - self._DeleteServiceDropTagSiblings( service_id ) - interested_service_ids = set( self._CacheTagDisplayGetInterestedServiceIds( service_id ) ) interested_service_ids.discard( service_id ) # lmao, not any more! - # we have to do this after the above since we need to know _what_ to regen with these new application rules in a sec - self._Execute( 'DELETE FROM tag_sibling_application WHERE master_service_id = ? OR application_service_id = ?;', ( service_id, service_id ) ) - self._Execute( 'DELETE FROM tag_parent_application WHERE master_service_id = ? OR application_service_id = ?;', ( service_id, service_id ) ) + self.modules_tag_parents.Drop( service_id ) - self._service_ids_to_sibling_applicable_service_ids = None - self._service_ids_to_sibling_interested_service_ids = None - self._service_ids_to_parent_applicable_service_ids = None - self._service_ids_to_parent_interested_service_ids = None + self.modules_tag_siblings.Drop( service_id ) if len( interested_service_ids ) > 0: - self._RegenerateTagSiblingsCache( only_these_service_ids = interested_service_ids ) + self._RegenerateTagSiblingsAndParentsCache( only_these_service_ids = interested_service_ids ) self._CacheTagsDrop( self.modules_services.combined_file_service_id, service_id ) @@ -5642,22 +4401,6 @@ class DB( HydrusDB.HydrusDB ): - def _DeleteServiceDropTagParents( self, service_id ): - - self._Execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( service_id, ) ) - self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, ) ) - - self._CacheTagParentsDrop( service_id ) - - - def _DeleteServiceDropTagSiblings( self, service_id ): - - self._Execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( service_id, ) ) - self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, ) ) - - self._CacheTagSiblingsDrop( service_id ) - - def _DeleteServiceInfo( self, service_key = None, types_to_delete = None ): predicates = [] @@ -5688,36 +4431,6 @@ class DB( HydrusDB.HydrusDB ): self.pub_after_job( 'notify_new_pending' ) - def _DeleteTagParents( self, service_id, pairs, defer_cache_update = False ): - - self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - - self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? 
);', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_DELETED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - - tag_ids = set( itertools.chain.from_iterable( pairs ) ) - - if not defer_cache_update: - - self._CacheTagParentsParentsChanged( service_id, tag_ids ) - - - - def _DeleteTagSiblings( self, service_id, pairs, defer_cache_update = False ): - - self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ?;', ( ( service_id, bad_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - - self._ExecuteMany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_DELETED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - - tag_ids = set( itertools.chain.from_iterable( pairs ) ) - - if not defer_cache_update: - - self._CacheTagSiblingsSiblingsChanged( service_id, tag_ids ) - - - def _DisplayCatastrophicError( self, text ): message = 'The db encountered a serious error! This is going to be written to the log as well, but here it is for a screenshot:' @@ -7664,23 +6377,23 @@ class DB( HydrusDB.HydrusDB ): if display_tag_service_id != self.modules_services.combined_tag_service_id: - tag_ids_to_ideal_tag_ids = self._CacheTagSiblingsGetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, tag_ids ) + tag_ids_to_ideal_tag_ids = self.modules_tag_siblings.GetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, tag_ids ) - tag_ids_that_are_sibling_chained = self._CacheTagSiblingsFilterChained( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, tag_ids ) + tag_ids_that_are_sibling_chained = self.modules_tag_siblings.FilterChained( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, tag_ids ) tag_ids_to_ideal_tag_ids_for_siblings = { tag_id : ideal_tag_id for ( tag_id, ideal_tag_id ) in tag_ids_to_ideal_tag_ids.items() if tag_id in tag_ids_that_are_sibling_chained } - ideal_tag_ids_to_sibling_chain_tag_ids = self._CacheTagSiblingsGetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, set( tag_ids_to_ideal_tag_ids_for_siblings.values() ) ) + ideal_tag_ids_to_sibling_chain_tag_ids = self.modules_tag_siblings.GetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, set( tag_ids_to_ideal_tag_ids_for_siblings.values() ) ) # ideal_tag_ids = set( tag_ids_to_ideal_tag_ids.values() ) - ideal_tag_ids_that_are_parent_chained = self._CacheTagParentsFilterChained( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, ideal_tag_ids ) + ideal_tag_ids_that_are_parent_chained = self.modules_tag_parents.FilterChained( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, ideal_tag_ids ) tag_ids_to_ideal_tag_ids_for_parents = { tag_id : ideal_tag_id for ( tag_id, ideal_tag_id ) in tag_ids_to_ideal_tag_ids.items() if ideal_tag_id in ideal_tag_ids_that_are_parent_chained } - ideal_tag_ids_to_ancestor_tag_ids = self._CacheTagParentsGetTagsToAncestors( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, set( tag_ids_to_ideal_tag_ids_for_parents.values() ) ) + ideal_tag_ids_to_ancestor_tag_ids = self.modules_tag_parents.GetTagsToAncestors( ClientTags.TAG_DISPLAY_ACTUAL, display_tag_service_id, set( tag_ids_to_ideal_tag_ids_for_parents.values() ) ) else: @@ -7762,13 +6475,13 @@ class 
DB( HydrusDB.HydrusDB ): for search_tag_service_id in search_tag_service_ids: - tag_ids_that_are_sibling_chained = self._CacheTagSiblingsFilterChained( ClientTags.TAG_DISPLAY_ACTUAL, search_tag_service_id, tag_ids ) + tag_ids_that_are_sibling_chained = self.modules_tag_siblings.FilterChained( ClientTags.TAG_DISPLAY_ACTUAL, search_tag_service_id, tag_ids ) - tag_ids_to_ideal_tag_ids_for_siblings = self._CacheTagSiblingsGetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, search_tag_service_id, tag_ids_that_are_sibling_chained ) + tag_ids_to_ideal_tag_ids_for_siblings = self.modules_tag_siblings.GetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, search_tag_service_id, tag_ids_that_are_sibling_chained ) ideal_tag_ids = set( tag_ids_to_ideal_tag_ids_for_siblings.values() ) - ideal_tag_ids_to_sibling_chain_tag_ids = self._CacheTagSiblingsGetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, search_tag_service_id, ideal_tag_ids ) + ideal_tag_ids_to_sibling_chain_tag_ids = self.modules_tag_siblings.GetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, search_tag_service_id, ideal_tag_ids ) for ( tag_id, ideal_tag_id ) in tag_ids_to_ideal_tag_ids_for_siblings.items(): @@ -8185,11 +6898,11 @@ class DB( HydrusDB.HydrusDB ): with self._MakeTemporaryIntegerTable( [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name: - self._CacheTagSiblingsFilterChainedIdealsIntoTable( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name ) + self.modules_tag_siblings.FilterChainedIdealsIntoTable( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name ) with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_chained_tag_ids_table_name: - self._CacheTagSiblingsGetChainsMembersFromIdealsTables( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, temp_ideal_tag_ids_table_name, temp_chained_tag_ids_table_name ) + self.modules_tag_siblings.GetChainsMembersFromIdealsTables( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, temp_ideal_tag_ids_table_name, temp_chained_tag_ids_table_name ) tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {};'.format( temp_chained_tag_ids_table_name ) ) ) ) @@ -8209,12 +6922,12 @@ class DB( HydrusDB.HydrusDB ): return set() - ideal_tag_ids = self._CacheTagSiblingsGetIdeals( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, batch_of_tag_ids ) + ideal_tag_ids = self.modules_tag_siblings.GetIdeals( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, batch_of_tag_ids ) ideal_tag_ids.difference_update( seen_ideal_tag_ids[ sibling_tag_service_id ] ) seen_ideal_tag_ids[ sibling_tag_service_id ].update( ideal_tag_ids ) - tag_ids.update( self._CacheTagSiblingsGetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, ideal_tag_ids ) ) + tag_ids.update( self.modules_tag_siblings.GetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, ideal_tag_ids ) ) ''' @@ -8720,7 +7433,7 @@ class DB( HydrusDB.HydrusDB ): seen_service_ids_to_seen_tag_ids = HydrusData.BuildKeyToSetDict( ( ( tag_service_id, tag_id ) for ( hash_id, ( tag_service_id, status, tag_id ) ) in display_tag_data ) ) - seen_service_ids_to_tag_ids_to_ideal_tag_ids = { tag_service_id : self._CacheTagSiblingsGetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, tag_ids ) for ( tag_service_id, tag_ids ) in seen_service_ids_to_seen_tag_ids.items() } + seen_service_ids_to_tag_ids_to_ideal_tag_ids = { tag_service_id : self.modules_tag_siblings.GetTagsToIdeals( 
ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, tag_ids ) for ( tag_service_id, tag_ids ) in seen_service_ids_to_seen_tag_ids.items() } display_tag_data = [ ( hash_id, ( tag_service_id, status, seen_service_ids_to_tag_ids_to_ideal_tag_ids[ tag_service_id ][ tag_id ] ) ) for ( hash_id, ( tag_service_id, status, tag_id ) ) in display_tag_data ] @@ -12177,192 +10890,6 @@ class DB( HydrusDB.HydrusDB ): return final_result_tag_ids - def _GetTagParents( self, service_key ): - - service_id = self.modules_services.GetServiceId( service_key ) - - statuses_to_pair_ids = self._GetTagParentsIds( service_id ) - - all_tag_ids = set() - - for pair_ids in statuses_to_pair_ids.values(): - - for ( child_tag_id, parent_tag_id ) in pair_ids: - - all_tag_ids.add( child_tag_id ) - all_tag_ids.add( parent_tag_id ) - - - - tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids ) - - statuses_to_pairs = collections.defaultdict( set ) - - statuses_to_pairs.update( { status : { ( tag_ids_to_tags[ child_tag_id ], tag_ids_to_tags[ parent_tag_id ] ) for ( child_tag_id, parent_tag_id ) in pair_ids } for ( status, pair_ids ) in statuses_to_pair_ids.items() } ) - - return statuses_to_pairs - - - def _GetTagParentsIds( self, service_id ): - - statuses_and_pair_ids = self._Execute( 'SELECT status, child_tag_id, parent_tag_id FROM tag_parents WHERE service_id = ? UNION SELECT status, child_tag_id, parent_tag_id FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall() - - unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( child_tag_id, parent_tag_id ) ) for ( status, child_tag_id, parent_tag_id ) in statuses_and_pair_ids ) - - statuses_to_pair_ids = collections.defaultdict( list ) - - statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } ) - - return statuses_to_pair_ids - - - def _GetTagParentsIdsChains( self, service_id, tag_ids ): - - # I experimented with one or two recursive queries, and for siblings, but it mostly ended up hellmode index efficiency. 
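# [editor's note: a sketch of the frontier walk this chains method performs, not hydrus
# code. It is a breadth-first transitive closure: pull every pair row touching the
# current frontier, then recurse on any tag ids not yet searched, until nothing new
# appears. Here the table lookups are replaced by an in-memory list of hypothetical
# ( status, child_tag_id, parent_tag_id ) rows.]

def get_chain_rows( seed_tag_ids, all_pair_rows ):
    
    searched_tag_ids = set()
    next_tag_ids = set( seed_tag_ids )
    result_rows = set()
    
    while len( next_tag_ids ) > 0:
        
        searched_tag_ids.update( next_tag_ids )
        
        tag_ids_seen_this_round = set()
        
        for row in all_pair_rows:
            
            ( status, child_tag_id, parent_tag_id ) = row
            
            if child_tag_id in next_tag_ids or parent_tag_id in next_tag_ids:
                
                result_rows.add( row )
                
                tag_ids_seen_this_round.update( ( child_tag_id, parent_tag_id ) )
                
            
        
        next_tag_ids = tag_ids_seen_this_round.difference( searched_tag_ids )
        
    
    return result_rows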
I think ( service_id, integer ) did it in - - # note that this has to do sibling lookup as well to fetch pairs that are only connected to our chain by sibling relationships, and we are assuming here that the sibling lookup cache is valid - - searched_tag_ids = set() - next_tag_ids = set( tag_ids ) - result_rows = set() - - while len( next_tag_ids ) > 0: - - tag_ids_seen_this_round = set() - - ideal_tag_ids = self._CacheTagSiblingsGetIdeals( ClientTags.TAG_DISPLAY_IDEAL, service_id, next_tag_ids ) - - tag_ids_seen_this_round.update( self._CacheTagSiblingsGetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, service_id, ideal_tag_ids ) ) - - with self._MakeTemporaryIntegerTable( next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name: - - searched_tag_ids.update( next_tag_ids ) - - # keep these separate--older sqlite can't do cross join to an OR ON - - # temp tag_ids to parents - queries = [ - 'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parents ON ( child_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), - 'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parents ON ( parent_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), - 'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parent_petitions ON ( child_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), - 'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parent_petitions ON ( parent_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ) - ] - - query = ' UNION '.join( queries ) - - for row in self._Execute( query, ( service_id, service_id, service_id, service_id ) ): - - result_rows.add( row ) - - ( status, child_tag_id, parent_tag_id ) = row - - tag_ids_seen_this_round.update( ( child_tag_id, parent_tag_id ) ) - - - - next_tag_ids = tag_ids_seen_this_round.difference( searched_tag_ids ) - - - unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( child_tag_id, parent_tag_id ) ) for ( status, child_tag_id, parent_tag_id ) in result_rows ) - - statuses_to_pair_ids = collections.defaultdict( list ) - - statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } ) - - return statuses_to_pair_ids - - - def _GetTagSiblings( self, service_key ): - - service_id = self.modules_services.GetServiceId( service_key ) - - statuses_to_pair_ids = self._GetTagSiblingsIds( service_id ) - - all_tag_ids = set() - - for pair_ids in statuses_to_pair_ids.values(): - - for ( bad_tag_id, good_tag_id ) in pair_ids: - - all_tag_ids.add( bad_tag_id ) - all_tag_ids.add( good_tag_id ) - - - - tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids ) - - statuses_to_pairs = collections.defaultdict( set ) - - statuses_to_pairs.update( { status : { ( tag_ids_to_tags[ bad_tag_id ], tag_ids_to_tags[ good_tag_id ] ) for ( bad_tag_id, good_tag_id ) in pair_ids } for ( status, pair_ids ) in statuses_to_pair_ids.items() } ) - - return statuses_to_pairs - - - def _GetTagSiblingsIds( self, service_id ): - - statuses_and_pair_ids = self._Execute( 'SELECT status, bad_tag_id, good_tag_id FROM tag_siblings WHERE service_id = ? 
UNION SELECT status, bad_tag_id, good_tag_id FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall() - - unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( bad_tag_id, good_tag_id ) ) for ( status, bad_tag_id, good_tag_id ) in statuses_and_pair_ids ) - - statuses_to_pair_ids = collections.defaultdict( list ) - - statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } ) - - return statuses_to_pair_ids - - - def _GetTagSiblingsIdsChains( self, service_id, tag_ids ): - - done_tag_ids = set() - next_tag_ids = set( tag_ids ) - result_rows = set() - - while len( next_tag_ids ) > 0: - - with self._MakeTemporaryIntegerTable( next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name: - - done_tag_ids.update( next_tag_ids ) - - next_tag_ids = set() - - # keep these separate--older sqlite can't do cross join to an OR ON - - # temp tag_ids to siblings - queries = [ - 'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_siblings ON ( bad_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), - 'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_siblings ON ( good_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), - 'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_sibling_petitions ON ( bad_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), - 'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_sibling_petitions ON ( good_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ) - ] - - query = ' UNION '.join( queries ) - - for row in self._Execute( query, ( service_id, service_id, service_id, service_id ) ): - - result_rows.add( row ) - - ( status, bad_tag_id, good_tag_id ) = row - - for tag_id in ( bad_tag_id, good_tag_id ): - - if tag_id not in done_tag_ids: - - next_tag_ids.add( tag_id ) - - - - - - - unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( bad_tag_id, good_tag_id ) ) for ( status, bad_tag_id, good_tag_id ) in result_rows ) - - statuses_to_pair_ids = collections.defaultdict( list ) - - statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } ) - - return statuses_to_pair_ids - - def _GetTrashHashes( self, limit = None, minimum_age = None ): if limit is None: @@ -13100,6 +11627,14 @@ class DB( HydrusDB.HydrusDB ): self._modules.append( self.modules_mappings_storage ) + self.modules_tag_siblings = ClientDBTagSiblings.ClientDBTagSiblings( self._c, self.modules_services, self.modules_tags, self.modules_tags_local_cache ) + + self._modules.append( self.modules_tag_siblings ) + + self.modules_tag_parents = ClientDBTagParents.ClientDBTagParents( self._c, self.modules_services, self.modules_tags_local_cache, self.modules_tag_siblings ) + + self._modules.append( self.modules_tag_parents ) + # self.modules_similar_files = ClientDBSimilarFiles.ClientDBSimilarFiles( self._c, self.modules_services, self.modules_files_storage ) @@ -13820,26 +12355,17 @@ class DB( HydrusDB.HydrusDB ): if action == HC.CONTENT_UPDATE_ADD: - self._AddTagParents( service_id, pairs, defer_cache_update = True ) + self.modules_tag_parents.AddTagParents( service_id, pairs ) elif action == HC.CONTENT_UPDATE_DELETE: - self._DeleteTagParents( service_id, pairs, defer_cache_update = True ) + self.modules_tag_parents.DeleteTagParents( service_id, pairs ) 
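# [editor's note: a reduced model of the module wiring in the _InitModules hunk below,
# all class and method bodies hypothetical. ClientDBTagParents takes the siblings module
# as a constructor argument because parent chains are defined over ideal tags, so every
# parent lookup has to see through sibling collapse first.]

class TagSiblingsModuleSketch:
    
    def GetIdeals( self, tag_ids ):
        
        return set( tag_ids ) # stand-in for the bad_tag_id -> ideal_tag_id lookup
        
    

class TagParentsModuleSketch:
    
    def __init__( self, modules_tag_siblings ):
        
        self.modules_tag_siblings = modules_tag_siblings
        
    
    def GetChainTagIds( self, tag_ids ):
        
        # resolve siblings before walking the parent chain
        ideal_tag_ids = self.modules_tag_siblings.GetIdeals( tag_ids )
        
        return ideal_tag_ids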
changed_parent_tag_ids.update( ( child_tag_id, parent_tag_id ) ) elif action in ( HC.CONTENT_UPDATE_PEND, HC.CONTENT_UPDATE_PETITION ): - if action == HC.CONTENT_UPDATE_PEND: - - new_status = HC.CONTENT_STATUS_PENDING - - elif action == HC.CONTENT_UPDATE_PETITION: - - new_status = HC.CONTENT_STATUS_PETITIONED - - ( child_tag, parent_tag ) = row try: @@ -13857,9 +12383,16 @@ class DB( HydrusDB.HydrusDB ): reason_id = self.modules_texts.GetTextId( reason ) - self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( service_id, child_tag_id, parent_tag_id ) ) + triples = ( ( child_tag_id, parent_tag_id, reason_id ), ) - self._Execute( 'INSERT OR IGNORE INTO tag_parent_petitions ( service_id, child_tag_id, parent_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, child_tag_id, parent_tag_id, reason_id, new_status ) ) + if action == HC.CONTENT_UPDATE_PEND: + + self.modules_tag_parents.PendTagParents( service_id, triples ) + + elif action == HC.CONTENT_UPDATE_PETITION: + + self.modules_tag_parents.PetitionTagParents( service_id, triples ) + changed_parent_tag_ids.update( ( child_tag_id, parent_tag_id ) ) @@ -13867,15 +12400,6 @@ class DB( HydrusDB.HydrusDB ): elif action in ( HC.CONTENT_UPDATE_RESCIND_PEND, HC.CONTENT_UPDATE_RESCIND_PETITION ): - if action == HC.CONTENT_UPDATE_RESCIND_PEND: - - deletee_status = HC.CONTENT_STATUS_PENDING - - elif action == HC.CONTENT_UPDATE_RESCIND_PETITION: - - deletee_status = HC.CONTENT_STATUS_PETITIONED - - ( child_tag, parent_tag ) = row try: @@ -13889,7 +12413,16 @@ class DB( HydrusDB.HydrusDB ): continue - self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( service_id, child_tag_id, parent_tag_id, deletee_status ) ) + pairs = ( ( child_tag_id, parent_tag_id ), ) + + if action == HC.CONTENT_UPDATE_RESCIND_PEND: + + self.modules_tag_parents.RescindPendingTagParents( service_id, pairs ) + + elif action == HC.CONTENT_UPDATE_RESCIND_PETITION: + + self.modules_tag_parents.RescindPetitionedTagParents( service_id, pairs ) + changed_parent_tag_ids.update( ( child_tag_id, parent_tag_id ) ) @@ -13919,26 +12452,17 @@ class DB( HydrusDB.HydrusDB ): if action == HC.CONTENT_UPDATE_ADD: - self._AddTagSiblings( service_id, pairs, defer_cache_update = True ) + self.modules_tag_siblings.AddTagSiblings( service_id, pairs ) elif action == HC.CONTENT_UPDATE_DELETE: - self._DeleteTagSiblings( service_id, pairs, defer_cache_update = True ) + self.modules_tag_siblings.DeleteTagSiblings( service_id, pairs ) changed_sibling_tag_ids.update( ( bad_tag_id, good_tag_id ) ) elif action in ( HC.CONTENT_UPDATE_PEND, HC.CONTENT_UPDATE_PETITION ): - if action == HC.CONTENT_UPDATE_PEND: - - new_status = HC.CONTENT_STATUS_PENDING - - elif action == HC.CONTENT_UPDATE_PETITION: - - new_status = HC.CONTENT_STATUS_PETITIONED - - ( bad_tag, good_tag ) = row try: @@ -13956,9 +12480,16 @@ class DB( HydrusDB.HydrusDB ): reason_id = self.modules_texts.GetTextId( reason ) - self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( service_id, bad_tag_id, good_tag_id ) ) + triples = ( ( bad_tag_id, good_tag_id, reason_id ), ) - self._Execute( 'INSERT OR IGNORE INTO tag_sibling_petitions ( service_id, bad_tag_id, good_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? 
);', ( service_id, bad_tag_id, good_tag_id, reason_id, new_status ) ) + if action == HC.CONTENT_UPDATE_PEND: + + self.modules_tag_siblings.PendTagSiblings( service_id, triples ) + + elif action == HC.CONTENT_UPDATE_PETITION: + + self.modules_tag_siblings.PetitionTagSiblings( service_id, triples ) + changed_sibling_tag_ids.update( ( bad_tag_id, good_tag_id ) ) @@ -13966,15 +12497,6 @@ class DB( HydrusDB.HydrusDB ): elif action in ( HC.CONTENT_UPDATE_RESCIND_PEND, HC.CONTENT_UPDATE_RESCIND_PETITION ): - if action == HC.CONTENT_UPDATE_RESCIND_PEND: - - deletee_status = HC.CONTENT_STATUS_PENDING - - elif action == HC.CONTENT_UPDATE_RESCIND_PETITION: - - deletee_status = HC.CONTENT_STATUS_PETITIONED - - ( bad_tag, good_tag ) = row try: @@ -13988,7 +12510,16 @@ class DB( HydrusDB.HydrusDB ): continue - self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( service_id, bad_tag_id, deletee_status ) ) + pairs = ( ( bad_tag_id, good_tag_id ), ) + + if action == HC.CONTENT_UPDATE_RESCIND_PEND: + + self.modules_tag_siblings.RescindPendingTagSiblings( service_id, pairs ) + + elif action == HC.CONTENT_UPDATE_RESCIND_PETITION: + + self.modules_tag_siblings.RescindPetitionedTagSiblings( service_id, pairs ) + changed_sibling_tag_ids.update( ( bad_tag_id, good_tag_id ) ) @@ -14097,12 +12628,12 @@ class DB( HydrusDB.HydrusDB ): if len( changed_sibling_tag_ids ) > 0: - self._CacheTagSiblingsSiblingsChanged( service_id, changed_sibling_tag_ids ) + self._CacheTagDisplayNotifySiblingsChanged( service_id, changed_sibling_tag_ids ) if len( changed_parent_tag_ids ) > 0: - self._CacheTagParentsParentsChanged( service_id, changed_parent_tag_ids ) + self._CacheTagDisplayNotifyParentsChanged( service_id, changed_parent_tag_ids ) @@ -14290,16 +12821,22 @@ class DB( HydrusDB.HydrusDB ): for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ): parent_ids = [] + tag_ids = set() for ( service_child_tag_id, service_parent_tag_id ) in chunk: child_tag_id = self.modules_repositories.NormaliseServiceTagId( service_id, service_child_tag_id ) parent_tag_id = self.modules_repositories.NormaliseServiceTagId( service_id, service_parent_tag_id ) + tag_ids.add( child_tag_id ) + tag_ids.add( parent_tag_id ) + parent_ids.append( ( child_tag_id, parent_tag_id ) ) - self._AddTagParents( service_id, parent_ids ) + self.modules_tag_parents.AddTagParents( service_id, parent_ids ) + + self._CacheTagDisplayNotifyParentsChanged( service_id, tag_ids ) parents_or_siblings_changed = True @@ -14323,16 +12860,22 @@ class DB( HydrusDB.HydrusDB ): for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ): parent_ids = [] + tag_ids = set() for ( service_child_tag_id, service_parent_tag_id ) in chunk: child_tag_id = self.modules_repositories.NormaliseServiceTagId( service_id, service_child_tag_id ) parent_tag_id = self.modules_repositories.NormaliseServiceTagId( service_id, service_parent_tag_id ) + tag_ids.add( child_tag_id ) + tag_ids.add( parent_tag_id ) + parent_ids.append( ( child_tag_id, parent_tag_id ) ) - self._DeleteTagParents( service_id, parent_ids ) + self.modules_tag_parents.DeleteTagParents( service_id, parent_ids ) + + self._CacheTagDisplayNotifyParentsChanged( service_id, tag_ids ) parents_or_siblings_changed = True @@ -14361,16 +12904,22 @@ class DB( HydrusDB.HydrusDB ): for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, 
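# [editor's note: a sketch of the per-chunk shape used in the repository-processing
# hunks below, not hydrus code. With the cache-update hooks moved out of the module
# calls, each chunk now collects every tag id it touched and fires one display-cache
# notification at the end, instead of one per pair. All callables are hypothetical
# stand-ins for the real module and cache methods.]

def process_parent_chunk( chunk, normalise, add_pairs, notify_parents_changed ):
    
    parent_ids = []
    tag_ids = set()
    
    for ( service_child_tag_id, service_parent_tag_id ) in chunk:
        
        child_tag_id = normalise( service_child_tag_id )
        parent_tag_id = normalise( service_parent_tag_id )
        
        tag_ids.update( ( child_tag_id, parent_tag_id ) )
        
        parent_ids.append( ( child_tag_id, parent_tag_id ) )
        
    
    add_pairs( parent_ids )
    
    # one cache notification per chunk, not one per pair
    notify_parents_changed( tag_ids )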
PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ): sibling_ids = [] + tag_ids = set() for ( service_bad_tag_id, service_good_tag_id ) in chunk: bad_tag_id = self.modules_repositories.NormaliseServiceTagId( service_id, service_bad_tag_id ) good_tag_id = self.modules_repositories.NormaliseServiceTagId( service_id, service_good_tag_id ) + tag_ids.add( bad_tag_id ) + tag_ids.add( good_tag_id ) + sibling_ids.append( ( bad_tag_id, good_tag_id ) ) - self._AddTagSiblings( service_id, sibling_ids ) + self.modules_tag_siblings.AddTagSiblings( service_id, sibling_ids ) + + self._CacheTagDisplayNotifySiblingsChanged( service_id, tag_ids ) parents_or_siblings_changed = True @@ -14396,16 +12945,22 @@ class DB( HydrusDB.HydrusDB ): for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ): sibling_ids = [] + tag_ids = set() for ( service_bad_tag_id, service_good_tag_id ) in chunk: bad_tag_id = self.modules_repositories.NormaliseServiceTagId( service_id, service_bad_tag_id ) good_tag_id = self.modules_repositories.NormaliseServiceTagId( service_id, service_good_tag_id ) + tag_ids.add( bad_tag_id ) + tag_ids.add( good_tag_id ) + sibling_ids.append( ( bad_tag_id, good_tag_id ) ) - self._DeleteTagSiblings( service_id, sibling_ids ) + self.modules_tag_siblings.DeleteTagSiblings( service_id, sibling_ids ) + + self._CacheTagDisplayNotifySiblingsChanged( service_id, tag_ids ) parents_or_siblings_changed = True @@ -14510,12 +13065,12 @@ class DB( HydrusDB.HydrusDB ): elif action == 'related_tags': result = self._GetRelatedTags( *args, **kwargs ) elif action == 'tag_display_application': result = self._CacheTagDisplayGetApplication( *args, **kwargs ) elif action == 'tag_display_maintenance_status': result = self._CacheTagDisplayGetApplicationStatusNumbers( *args, **kwargs ) - elif action == 'tag_parents': result = self._GetTagParents( *args, **kwargs ) - elif action == 'tag_siblings': result = self._GetTagSiblings( *args, **kwargs ) - elif action == 'tag_siblings_all_ideals': result = self._CacheTagSiblingsGetTagSiblingsIdeals( *args, **kwargs ) + elif action == 'tag_parents': result = self.modules_tag_parents.GetTagParents( *args, **kwargs ) + elif action == 'tag_siblings': result = self.modules_tag_siblings.GetTagSiblings( *args, **kwargs ) + elif action == 'tag_siblings_all_ideals': result = self.modules_tag_siblings.GetTagSiblingsIdeals( *args, **kwargs ) elif action == 'tag_display_decorators': result = self._CacheTagDisplayGetUIDecorators( *args, **kwargs ) elif action == 'tag_siblings_and_parents_lookup': result = self._CacheTagDisplayGetSiblingsAndParentsForTags( *args, **kwargs ) - elif action == 'tag_siblings_lookup': result = self._CacheTagSiblingsGetTagSiblingsForTags( *args, **kwargs ) + elif action == 'tag_siblings_lookup': result = self.modules_tag_siblings.GetTagSiblingsForTags( *args, **kwargs ) elif action == 'potential_duplicates_count': result = self._DuplicatesGetPotentialDuplicatesCount( *args, **kwargs ) elif action == 'url_statuses': result = self._GetURLStatuses( *args, **kwargs ) elif action == 'vacuum_data': result = self.modules_db_maintenance.GetVacuumData( *args, **kwargs ) @@ -14850,23 +13405,13 @@ class DB( HydrusDB.HydrusDB ): # first off, we want to clear all the current siblings and parents so they will be reprocessed later # we'll also have to catch up the tag definition cache to account for this - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = 
GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) - tag_ids_in_dispute = set() - tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT DISTINCT bad_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT DISTINCT child_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT DISTINCT ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self.modules_tag_siblings.GetAllTagIds( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) ) + tag_ids_in_dispute.update( self.modules_tag_parents.GetAllTagIds( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) ) - self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) - self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) - - if tag_service_id in self._service_ids_to_display_application_status: - - del self._service_ids_to_display_application_status[ tag_service_id ] - + self.modules_tag_siblings.ClearActual( tag_service_id ) + self.modules_tag_parents.ClearActual( tag_service_id ) if len( tag_ids_in_dispute ) > 0: @@ -14930,8 +13475,6 @@ class DB( HydrusDB.HydrusDB ): job_key.SetVariable( 'popup_text_2', '' ) self._controller.frame_splash_status.SetSubtext( '' ) - self._service_ids_to_display_application_status = {} - finally: job_key.SetVariable( 'popup_text_1', 'done!' 
) @@ -15050,16 +13593,8 @@ class DB( HydrusDB.HydrusDB ): for tag_service_id in tag_service_ids: - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) - - self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) - self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) - - if tag_service_id in self._service_ids_to_display_application_status: - - del self._service_ids_to_display_application_status[ tag_service_id ] - + self.modules_tag_siblings.ClearActual( tag_service_id ) + self.modules_tag_parents.ClearActual( tag_service_id ) time.sleep( 0.01 ) @@ -15146,7 +13681,7 @@ class DB( HydrusDB.HydrusDB ): # as siblings may have changed, parents may have as well - self._CacheTagParentsRegen( tag_service_ids ) + self.modules_tag_parents.Regen( tag_service_ids ) self.pub_after_job( 'notify_new_tag_display_application' ) @@ -15232,7 +13767,7 @@ class DB( HydrusDB.HydrusDB ): - def _RegenerateTagSiblingsCache( self, only_these_service_ids = None ): + def _RegenerateTagSiblingsAndParentsCache( self, only_these_service_ids = None ): if only_these_service_ids is None: @@ -15243,10 +13778,10 @@ class DB( HydrusDB.HydrusDB ): tag_service_ids = only_these_service_ids - self._CacheTagSiblingsRegen( tag_service_ids ) + self.modules_tag_siblings.Regen( tag_service_ids ) # as siblings may have changed, parents may have as well - self._CacheTagParentsRegen( tag_service_ids ) + self.modules_tag_parents.Regen( tag_service_ids ) self.pub_after_job( 'notify_new_tag_display_application' ) @@ -15397,11 +13932,8 @@ class DB( HydrusDB.HydrusDB ): main_cache_tables = set() - main_cache_tables.add( 'shape_perceptual_hashes' ) - main_cache_tables.add( 'shape_perceptual_hash_map' ) main_cache_tables.add( 'shape_vptree' ) main_cache_tables.add( 'shape_maintenance_branch_regen' ) - main_cache_tables.add( 'shape_search_cache' ) missing_main_tables = sorted( main_cache_tables.difference( existing_cache_tables ) ) @@ -15417,8 +13949,6 @@ class DB( HydrusDB.HydrusDB ): BlockingSafeShowMessage( message ) - self._CreateDBCaches() - if version >= 414: @@ -15430,7 +13960,7 @@ class DB( HydrusDB.HydrusDB ): for tag_service_id in tag_display_cache_service_ids: - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = ClientDBTagSiblings.GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) actual_missing = cache_actual_tag_siblings_lookup_table_name.split( '.' 
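# [editor's note: a self-contained version of the missing-table check the update code
# below performs, with hypothetical table names. Existence is read from sqlite_master
# once, and any absent cache table triggers a full regenerate for that service.]

import sqlite3

db = sqlite3.connect( ':memory:' )

db.execute( 'CREATE TABLE tag_siblings_lookup_cache_1 ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );' )

existing_cache_tables = { name for ( name, ) in db.execute( 'SELECT name FROM sqlite_master WHERE type = ?;', ( 'table', ) ) }

for tag_service_id in ( 1, 2 ):
    
    table_name = 'tag_siblings_lookup_cache_{}'.format( tag_service_id )
    
    if table_name not in existing_cache_tables:
        
        print( 'would regenerate sibling cache for service', tag_service_id ) # prints for service 2 only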
)[1] not in existing_cache_tables @@ -15448,7 +13978,7 @@ class DB( HydrusDB.HydrusDB ): if actual_missing or ideal_missing: - self._CacheTagSiblingsGenerate( tag_service_id ) + self.modules_tag_siblings.Generate( tag_service_id ) self._CreateIndex( cache_actual_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] ) @@ -15474,7 +14004,7 @@ class DB( HydrusDB.HydrusDB ): for tag_service_id in tag_display_cache_service_ids: - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) + ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = ClientDBTagParents.GenerateTagParentsLookupCacheTableNames( tag_service_id ) actual_missing = cache_actual_tag_parents_lookup_table_name.split( '.' )[1] not in existing_cache_tables @@ -15492,7 +14022,7 @@ class DB( HydrusDB.HydrusDB ): if actual_missing or ideal_missing: - self._CacheTagParentsGenerate( tag_service_id ) + self.modules_tag_parents.Generate( tag_service_id ) self._CreateIndex( cache_actual_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] ) @@ -16075,7 +14605,7 @@ class DB( HydrusDB.HydrusDB ): self._Execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( service_id, ) ) self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ) - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id ) + ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = ClientDBTagParents.GenerateTagParentsLookupCacheTableNames( service_id ) # do not delete from actual! self._Execute( 'DELETE FROM {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) @@ -16086,7 +14616,7 @@ class DB( HydrusDB.HydrusDB ): self._Execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( service_id, ) ) self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? 
AND status = ?;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ) - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id ) + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = ClientDBTagSiblings.GenerateTagSiblingsLookupCacheTableNames( service_id ) self._Execute( 'DELETE FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) @@ -16099,14 +14629,9 @@ class DB( HydrusDB.HydrusDB ): interested_service_ids = set( self._CacheTagDisplayGetInterestedServiceIds( service_id ) ) - self._service_ids_to_sibling_applicable_service_ids = None - self._service_ids_to_sibling_interested_service_ids = None - self._service_ids_to_parent_applicable_service_ids = None - self._service_ids_to_parent_interested_service_ids = None - if len( interested_service_ids ) > 0: - self._RegenerateTagSiblingsCache( only_these_service_ids = interested_service_ids ) + self._RegenerateTagSiblingsAndParentsCache( only_these_service_ids = interested_service_ids ) @@ -16611,7 +15136,7 @@ class DB( HydrusDB.HydrusDB ): old_tag_siblings_lookup_name = 'external_caches.{}'.format( old_tag_siblings_lookup_base_name ) - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = ClientDBTagSiblings.GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) ) self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) ) @@ -16654,8 +15179,8 @@ class DB( HydrusDB.HydrusDB ): for tag_service_id in tag_service_ids: - self._CacheTagParentsDrop( tag_service_id ) - self._CacheTagParentsGenerate( tag_service_id ) + self.modules_tag_parents.Drop( tag_service_id ) + self.modules_tag_parents.Generate( tag_service_id ) @@ -16819,7 +15344,7 @@ class DB( HydrusDB.HydrusDB ): try: - self._RegenerateTagSiblingsCache() + self._RegenerateTagSiblingsAndParentsCache() except Exception as e: @@ -17241,8 +15766,8 @@ class DB( HydrusDB.HydrusDB ): self._controller.frame_splash_status.SetSubtext( message ) - ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) - ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = ClientDBTagSiblings.GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) + ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = ClientDBTagParents.GenerateTagParentsLookupCacheTableNames( tag_service_id ) tag_ids_in_dispute = set() @@ -17904,6 +16429,34 @@ class DB( HydrusDB.HydrusDB ): + if version == 450: + + result = self._c.execute( 'SELECT 1 FROM external_caches.sqlite_master WHERE name = ?;', ( 'shape_perceptual_hashes', ) ).fetchone() + + if result is not None: + + self._controller.frame_splash_status.SetSubtext( 'moving some similar file data around' ) + + self._Execute( 'CREATE TABLE IF NOT EXISTS 
external_master.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )
+
+ self._Execute( 'INSERT OR IGNORE INTO external_master.shape_perceptual_hashes SELECT phash_id, phash FROM external_caches.shape_perceptual_hashes;' )
+ self._Execute( 'INSERT OR IGNORE INTO external_master.shape_perceptual_hash_map SELECT phash_id, hash_id FROM external_caches.shape_perceptual_hash_map;' )
+ self._Execute( 'INSERT OR IGNORE INTO main.shape_search_cache SELECT hash_id, searched_distance FROM external_caches.shape_search_cache;' )
+
+ self._Execute( 'DROP TABLE external_caches.shape_perceptual_hashes;' )
+ self._Execute( 'DROP TABLE external_caches.shape_perceptual_hash_map;' )
+ self._Execute( 'DROP TABLE external_caches.shape_search_cache;' )
+
+ self._CreateIndex( 'external_master.shape_perceptual_hash_map', [ 'hash_id' ] )
+
+ self.modules_db_maintenance.AnalyzeTable( 'external_master.shape_perceptual_hashes' )
+ self.modules_db_maintenance.AnalyzeTable( 'external_master.shape_perceptual_hash_map' )
+ self.modules_db_maintenance.AnalyzeTable( 'shape_search_cache' )
+
+
 self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )

 self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
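The v450 update block above moves the similar-files tables with a copy-then-drop: the destination tables are created and filled before the old ones are dropped, and INSERT OR IGNORE keeps a re-run harmless if the update is interrupted after a partial copy. A minimal standalone sketch of the same pattern, assuming toy table names and in-memory stand-ins for the attached db files:

    import sqlite3

    db = sqlite3.connect( ':memory:' )

    # stand-ins for the attached client db files; real hydrus attaches its own .db files here
    db.execute( "ATTACH ':memory:' AS external_caches;" )
    db.execute( "ATTACH ':memory:' AS external_master;" )

    db.execute( 'CREATE TABLE external_caches.t ( id INTEGER PRIMARY KEY, v INTEGER );' )
    db.execute( 'INSERT INTO external_caches.t VALUES ( 1, 10 ), ( 2, 20 );' )

    # copy first, drop second--if we die between the two steps, the source rows still exist,
    # and INSERT OR IGNORE makes the retry idempotent
    db.execute( 'CREATE TABLE IF NOT EXISTS external_master.t ( id INTEGER PRIMARY KEY, v INTEGER );' )
    db.execute( 'INSERT OR IGNORE INTO external_master.t SELECT id, v FROM external_caches.t;' )
    db.execute( 'DROP TABLE external_caches.t;' )

    print( db.execute( 'SELECT * FROM external_master.t;' ).fetchall() ) # [(1, 10), (2, 20)]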
@@ -18426,7 +16979,7 @@ class DB( HydrusDB.HydrusDB ):
 elif action == 'regenerate_tag_mappings_cache': self._RegenerateTagMappingsCache( *args, **kwargs )
 elif action == 'regenerate_tag_parents_cache': self._RegenerateTagParentsCache( *args, **kwargs )
 elif action == 'regenerate_tag_pending_mappings_cache': self._RegenerateTagPendingMappingsCache( *args, **kwargs )
- elif action == 'regenerate_tag_siblings_cache': self._RegenerateTagSiblingsCache( *args, **kwargs )
+ elif action == 'regenerate_tag_siblings_and_parents_cache': self._RegenerateTagSiblingsAndParentsCache( *args, **kwargs )
 elif action == 'register_shutdown_work': self.modules_db_maintenance.RegisterShutdownWork( *args, **kwargs )
 elif action == 'repopulate_mappings_from_cache': self._RepopulateMappingsFromCache( *args, **kwargs )
 elif action == 'repopulate_tag_cache_missing_subtags': self._RepopulateTagCacheMissingSubtags( *args, **kwargs )
diff --git a/hydrus/client/db/ClientDBRepositories.py b/hydrus/client/db/ClientDBRepositories.py
index 078195d4..52ed253c 100644
--- a/hydrus/client/db/ClientDBRepositories.py
+++ b/hydrus/client/db/ClientDBRepositories.py
@@ -159,7 +159,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
 else:
- content_types = tuple( HC.REPOSITORY_CONTENT_TYPES[ service_type ] )
+ content_types = tuple( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service_type ] )
 inserts.extend( ( ( hash_id, content_type, processed ) for content_type in content_types ) )
@@ -324,7 +324,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
 ( min_unregistered_update_index, ) = result
- predicate_phrase = 'processed = False AND content_type IN {}'.format( HydrusData.SplayListForDB( content_types_to_process ) )
+ predicate_phrase = 'processed = ? AND content_type IN {}'.format( HydrusData.SplayListForDB( content_types_to_process ) )
 if min_unregistered_update_index is not None:
@@ -336,7 +336,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
 query = 'SELECT update_index, hash_id, content_type FROM {} CROSS JOIN {} USING ( hash_id ) WHERE {};'.format( repository_updates_processed_table_name, repository_updates_table_name, predicate_phrase )
- rows = self._Execute( query ).fetchall()
+ rows = self._Execute( query, ( False, ) ).fetchall()
 update_indices_to_unprocessed_hash_ids = HydrusData.BuildKeyToSetDict( ( ( update_index, hash_id ) for ( update_index, hash_id, content_type ) in rows ) )
 hash_ids_to_content_types_to_process = HydrusData.BuildKeyToSetDict( ( ( hash_id, content_type ) for ( update_index, hash_id, content_type ) in rows ) )
diff --git a/hydrus/client/db/ClientDBSimilarFiles.py b/hydrus/client/db/ClientDBSimilarFiles.py
index dda971ce..b068265d 100644
--- a/hydrus/client/db/ClientDBSimilarFiles.py
+++ b/hydrus/client/db/ClientDBSimilarFiles.py
@@ -197,7 +197,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
 index_generation_tuples = []
- index_generation_tuples.append( ( 'external_caches.shape_perceptual_hash_map', [ 'hash_id' ], False ) )
+ index_generation_tuples.append( ( 'external_master.shape_perceptual_hash_map', [ 'hash_id' ], False ) )
 index_generation_tuples.append( ( 'external_caches.shape_vptree', [ 'parent_id' ], False ) )
 return index_generation_tuples
@@ -406,15 +406,15 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
 def CreateInitialTables( self ):
- self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' )
- self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' )
 self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_vptree ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER, radius INTEGER, inner_id INTEGER, inner_population INTEGER, outer_id INTEGER, outer_population INTEGER );' )
 self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_maintenance_branch_regen ( phash_id INTEGER PRIMARY KEY );' )
- self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' )
 def DisassociatePHashes( self, hash_id, phash_ids ):
diff --git a/hydrus/client/db/ClientDBTagParents.py b/hydrus/client/db/ClientDBTagParents.py
new file mode 100644
index 00000000..f0b0ff22
--- /dev/null
+++ b/hydrus/client/db/ClientDBTagParents.py
@@ -0,0 +1,813 @@
+import collections
+import itertools
+import sqlite3
+import typing
+
+from hydrus.core import HydrusConstants as HC
+from hydrus.core import HydrusData
+from hydrus.core import HydrusDBModule
+
+from hydrus.client.db import ClientDBDefinitionsCache
+from hydrus.client.db import ClientDBServices
+from hydrus.client.db import ClientDBTagSiblings
+from hydrus.client.metadata import ClientTags
+from hydrus.client.metadata
import ClientTagsHandling + +def GenerateTagParentsLookupCacheTableName( display_type: int, service_id: int ): + + ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id ) + + if display_type == ClientTags.TAG_DISPLAY_IDEAL: + + return cache_ideal_tag_parents_lookup_table_name + + elif display_type == ClientTags.TAG_DISPLAY_ACTUAL: + + return cache_actual_tag_parents_lookup_table_name + + +def GenerateTagParentsLookupCacheTableNames( service_id ): + + cache_ideal_tag_parents_lookup_table_name = 'external_caches.ideal_tag_parents_lookup_cache_{}'.format( service_id ) + cache_actual_tag_parents_lookup_table_name = 'external_caches.actual_tag_parents_lookup_cache_{}'.format( service_id ) + + return ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) + +class ClientDBTagParents( HydrusDBModule.HydrusDBModule ): + + def __init__( + self, + cursor: sqlite3.Cursor, + modules_services: ClientDBServices.ClientDBMasterServices, + modules_tags_local_cache: ClientDBDefinitionsCache.ClientDBCacheLocalTags, + modules_tag_siblings: ClientDBTagSiblings.ClientDBTagSiblings + ): + + self.modules_services = modules_services + self.modules_tags_local_cache = modules_tags_local_cache + self.modules_tag_siblings = modules_tag_siblings + + self._service_ids_to_display_application_status = {} + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + HydrusDBModule.HydrusDBModule.__init__( self, 'client tag parents', cursor ) + + + def _GetInitialIndexGenerationTuples( self ): + + index_generation_tuples = [ + ( 'tag_parents', [ 'service_id', 'parent_tag_id' ], False ), + ( 'tag_parent_petitions', [ 'service_id', 'parent_tag_id' ], False ), + ] + + return index_generation_tuples + + + def AddTagParents( self, service_id, pairs ): + + self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? 
);', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + + + def ClearActual( self, service_id ): + + cache_actual_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, service_id ) + + self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) + + if service_id in self._service_ids_to_display_application_status: + + del self._service_ids_to_display_application_status[ service_id ] + + + + def CreateInitialTables( self ): + + self._Execute( 'CREATE TABLE tag_parents ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' ) + self._Execute( 'CREATE TABLE tag_parent_petitions ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' ) + + self._Execute( 'CREATE TABLE tag_parent_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' ) + + + def DeleteTagParents( self, service_id, pairs ): + + self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_DELETED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + + + def Drop( self, tag_service_id ): + + self._Execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( tag_service_id, ) ) + self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( tag_service_id, ) ) + + ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) + + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_parents_lookup_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) + + self._Execute( 'DELETE FROM tag_parent_application WHERE master_service_id = ? 
OR application_service_id = ?;', ( tag_service_id, tag_service_id ) ) + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + + def FilterChained( self, display_type, tag_service_id, ideal_tag_ids ): + + if len( ideal_tag_ids ) == 0: + + return set() + + elif len( ideal_tag_ids ) == 1: + + ( ideal_tag_id, ) = ideal_tag_ids + + if self.IsChained( display_type, tag_service_id, ideal_tag_id ): + + return { ideal_tag_id } + + else: + + return set() + + + + # get the tag_ids that are part of a parent chain + + cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) + + with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'tag_id' ) as temp_table_name: + + # keep these separate--older sqlite can't do cross join to an OR ON + + # temp tags to lookup + chain_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) + chain_tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) ) + + + return chain_tag_ids + + + def Generate( self, tag_service_id ): + + ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) + + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_actual_tag_parents_lookup_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_ideal_tag_parents_lookup_table_name ) ) + + self._CreateIndex( cache_actual_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] ) + self._CreateIndex( cache_ideal_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] ) + + self._Execute( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? 
);', ( tag_service_id, 0, tag_service_id ) ) + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + self.Regen( ( tag_service_id, ) ) + + + def GenerateApplicationDicts( self ): + + unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._Execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_parent_application;' ) ) + + self._service_ids_to_applicable_service_ids = collections.defaultdict( list ) + + self._service_ids_to_applicable_service_ids.update( { master_service_id : [ application_service_id for ( index, application_service_id ) in sorted( index_and_applicable_service_ids ) ] for ( master_service_id, index_and_applicable_service_ids ) in unsorted_dict.items() } ) + + self._service_ids_to_interested_service_ids = collections.defaultdict( set ) + + for ( master_service_id, application_service_ids ) in self._service_ids_to_applicable_service_ids.items(): + + for application_service_id in application_service_ids: + + self._service_ids_to_interested_service_ids[ application_service_id ].add( master_service_id ) + + + + + def GetAllTagIds( self, display_type, tag_service_id ): + + cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) + + tag_ids = set() + + tag_ids.update( self._STI( self._Execute( 'SELECT DISTINCT child_tag_id FROM {};'.format( cache_tag_parents_lookup_table_name ) ) ) ) + tag_ids.update( self._STI( self._Execute( 'SELECT DISTINCT ancestor_tag_id FROM {};'.format( cache_tag_parents_lookup_table_name ) ) ) ) + + return tag_ids + + + def GetAncestors( self, display_type: int, tag_service_id: int, ideal_tag_id: int ): + + cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) + + ancestor_ids = self._STS( self._Execute( 'SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) ) + + return ancestor_ids + + + def GetApplicableServiceIds( self, tag_service_id ): + + if self._service_ids_to_applicable_service_ids is None: + + self.GenerateApplicationDicts() + + + return self._service_ids_to_applicable_service_ids[ tag_service_id ] + + + def GetApplication( self ): + + if self._service_ids_to_applicable_service_ids is None: + + self.GenerateApplicationDicts() + + + service_ids_to_service_keys = {} + + service_keys_to_parent_applicable_service_keys = {} + + for ( master_service_id, applicable_service_ids ) in self._service_ids_to_applicable_service_ids.items(): + + all_service_ids = [ master_service_id ] + list( applicable_service_ids ) + + for service_id in all_service_ids: + + if service_id not in service_ids_to_service_keys: + + service_ids_to_service_keys[ service_id ] = self.modules_services.GetService( service_id ).GetServiceKey() + + + + service_keys_to_parent_applicable_service_keys[ service_ids_to_service_keys[ master_service_id ] ] = [ service_ids_to_service_keys[ service_id ] for service_id in applicable_service_ids ] + + + return service_keys_to_parent_applicable_service_keys + + + def GetApplicationStatus( self, service_id ): + + if service_id not in self._service_ids_to_display_application_status: + + ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id ) + + actual_parent_rows = set( self._Execute( 'SELECT 
child_tag_id, ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) + ideal_parent_rows = set( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) ) + + parent_rows_to_remove = actual_parent_rows.difference( ideal_parent_rows ) + parent_rows_to_add = ideal_parent_rows.difference( actual_parent_rows ) + + num_actual_rows = len( actual_parent_rows ) + num_ideal_rows = len( ideal_parent_rows ) + + self._service_ids_to_display_application_status[ service_id ] = ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) + + + ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ service_id ] + + return ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) + + + def GetChainsMembers( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ): + + if len( ideal_tag_ids ) == 0: + + return set() + + + cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) + + chain_tag_ids = set( ideal_tag_ids ) + we_have_looked_up = set() + next_search_tag_ids = set( ideal_tag_ids ) + + while len( next_search_tag_ids ) > 0: + + if len( next_search_tag_ids ) == 1: + + ( ideal_tag_id, ) = next_search_tag_ids + + round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) ) + + else: + + with self._MakeTemporaryIntegerTable( next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name: + + round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) ) + + + + chain_tag_ids.update( round_of_tag_ids ) + + we_have_looked_up.update( next_search_tag_ids ) + + next_search_tag_ids = round_of_tag_ids.difference( we_have_looked_up ) + + + return chain_tag_ids + + + def GetChainsMembersTables( self, display_type: int, tag_service_id: int, ideal_tag_ids_table_name: str, results_table_name: str ): + + # if it isn't crazy, I should write this whole lad to be one or two recursive queries + + cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) + + first_ideal_tag_ids = self._STS( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( ideal_tag_ids_table_name ) ) ) + + chain_tag_ids = set( first_ideal_tag_ids ) + we_have_looked_up = set() + next_search_tag_ids = set( first_ideal_tag_ids ) + + while len( next_search_tag_ids ) > 0: + + if len( next_search_tag_ids ) == 1: + + ( ideal_tag_id, ) = next_search_tag_ids + + round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? 
UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) )
+
+ else:
+
+ with self._MakeTemporaryIntegerTable( next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name:
+
+ round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) )
+
+
+
+ new_tag_ids = round_of_tag_ids.difference( chain_tag_ids )
+
+ if len( new_tag_ids ) > 0:
+
+ # write the newly discovered ids to the results table; the table name has to be formatted into the SQL
+ self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( tag_id ) VALUES ( ? );'.format( results_table_name ), ( ( tag_id, ) for tag_id in new_tag_ids ) )
+
+ chain_tag_ids.update( new_tag_ids )
+
+
+ we_have_looked_up.update( next_search_tag_ids )
+
+ next_search_tag_ids = round_of_tag_ids.difference( we_have_looked_up )
+
+
+
+ def GetDescendants( self, display_type: int, tag_service_id: int, ideal_tag_id: int ):
+
+ cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
+
+ descendant_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) )
+
+ return descendant_ids
+
+
+ def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
+
+ expected_table_names = [
+ 'tag_parents',
+ 'tag_parent_petitions',
+ 'tag_parent_application'
+ ]
+
+ return expected_table_names
+
+
+ def GetInterestedServiceIds( self, tag_service_id ):
+
+ if self._service_ids_to_interested_service_ids is None:
+
+ self.GenerateApplicationDicts()
+
+
+ return self._service_ids_to_interested_service_ids[ tag_service_id ]
+
+
+ def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
+
+ if content_type == HC.CONTENT_TYPE_TAG:
+
+ return [
+ ( 'tag_parents', 'child_tag_id' ),
+ ( 'tag_parents', 'parent_tag_id' ),
+ ( 'tag_parent_petitions', 'child_tag_id' ),
+ ( 'tag_parent_petitions', 'parent_tag_id' )
+ ]
+
+
+ return []
+
+
+ def GetTagParents( self, service_key ):
+
+ service_id = self.modules_services.GetServiceId( service_key )
+
+ statuses_to_pair_ids = self.GetTagParentsIds( service_id )
+
+ all_tag_ids = set()
+
+ for pair_ids in statuses_to_pair_ids.values():
+
+ for ( child_tag_id, parent_tag_id ) in pair_ids:
+
+ all_tag_ids.add( child_tag_id )
+ all_tag_ids.add( parent_tag_id )
+
+
+
+ tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids )
+
+ statuses_to_pairs = collections.defaultdict( set )
+
+ statuses_to_pairs.update( { status : { ( tag_ids_to_tags[ child_tag_id ], tag_ids_to_tags[ parent_tag_id ] ) for ( child_tag_id, parent_tag_id ) in pair_ids } for ( status, pair_ids ) in statuses_to_pair_ids.items() } )
+
+ return statuses_to_pairs
+
+
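GetChainsMembers and GetChainsMembersTables above both expand a seed set to the full parent chain by repeatedly looking up everything directly above or below the current frontier until no new ids turn up. A pure-Python sketch of that fixpoint loop, with a hypothetical in-memory set of ( child, ancestor ) pairs standing in for the lookup table:

    # toy stand-in for the parents lookup table: ( child, ancestor ) rows
    pairs = { ( 'kitten', 'cat' ), ( 'cat', 'animal' ), ( 'puppy', 'dog' ) }

    def chain_members( seed_tags ):

        chain = set( seed_tags )
        looked_up = set()
        next_search = set( seed_tags )

        while len( next_search ) > 0:

            # one round: everything one step above or below the frontier
            round_of_tags = { child for ( child, ancestor ) in pairs if ancestor in next_search }
            round_of_tags.update( ancestor for ( child, ancestor ) in pairs if child in next_search )

            chain.update( round_of_tags )
            looked_up.update( next_search )

            # only ids we have not yet expanded go into the next frontier
            next_search = round_of_tags.difference( looked_up )

        return chain

    print( sorted( chain_members( { 'cat' } ) ) ) # ['animal', 'cat', 'kitten']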
+ def GetTagParentsIds( self, service_id ):
+
+ statuses_and_pair_ids = self._Execute( 'SELECT status, child_tag_id, parent_tag_id FROM tag_parents WHERE service_id = ? UNION SELECT status, child_tag_id, parent_tag_id FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall()
+
+ unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( child_tag_id, parent_tag_id ) ) for ( status, child_tag_id, parent_tag_id ) in statuses_and_pair_ids )
+
+ statuses_to_pair_ids = collections.defaultdict( list )
+
+ statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } )
+
+ return statuses_to_pair_ids
+
+
+ def GetTagParentsIdsChains( self, service_id, tag_ids ):
+
+ # I experimented with one or two recursive queries, and for siblings, but it mostly ended up hellmode index efficiency. I think ( service_id, integer ) did it in
+
+ # note that this has to do sibling lookup as well to fetch pairs that are only connected to our chain by sibling relationships, and we are assuming here that the sibling lookup cache is valid
+
+ searched_tag_ids = set()
+ next_tag_ids = set( tag_ids )
+ result_rows = set()
+
+ while len( next_tag_ids ) > 0:
+
+ tag_ids_seen_this_round = set()
+
+ ideal_tag_ids = self.modules_tag_siblings.GetIdeals( ClientTags.TAG_DISPLAY_IDEAL, service_id, next_tag_ids )
+
+ tag_ids_seen_this_round.update( self.modules_tag_siblings.GetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, service_id, ideal_tag_ids ) )
+
+ with self._MakeTemporaryIntegerTable( next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name:
+
+ searched_tag_ids.update( next_tag_ids )
+
+ # keep these separate--older sqlite can't do cross join to an OR ON
+
+ # temp tag_ids to parents
+ queries = [
+ 'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parents ON ( child_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
+ 'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parents ON ( parent_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
+ 'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parent_petitions ON ( child_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ),
+ 'SELECT status, child_tag_id, parent_tag_id FROM {} CROSS JOIN tag_parent_petitions ON ( parent_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name )
+ ]
+
+ query = ' UNION '.join( queries )
+
+ for row in self._Execute( query, ( service_id, service_id, service_id, service_id ) ):
+
+ result_rows.add( row )
+
+ ( status, child_tag_id, parent_tag_id ) = row
+
+ tag_ids_seen_this_round.update( ( child_tag_id, parent_tag_id ) )
+
+
+
+ next_tag_ids = tag_ids_seen_this_round.difference( searched_tag_ids )
+
+
+ unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( child_tag_id, parent_tag_id ) ) for ( status, child_tag_id, parent_tag_id ) in result_rows )
+
+ statuses_to_pair_ids = collections.defaultdict( list )
+
+ statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } )
+
+ return statuses_to_pair_ids
+
+
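One detail in GetTagParentsIdsChains that is easy to misread: the four sub-selects are joined with ' UNION ', so the bound service_id has to be supplied once per sub-select, which is why the execute call passes a four-tuple. A toy sqlite3 demonstration of the same pattern, with an illustrative table:

    import sqlite3

    db = sqlite3.connect( ':memory:' )

    db.execute( 'CREATE TABLE t ( service_id INTEGER, a INTEGER, b INTEGER );' )
    db.executemany( 'INSERT INTO t VALUES ( ?, ?, ? );', [ ( 1, 10, 20 ), ( 1, 20, 30 ), ( 2, 40, 50 ) ] )

    queries = [
        'SELECT a FROM t WHERE service_id = ?',
        'SELECT b FROM t WHERE service_id = ?'
    ]

    query = ' UNION '.join( queries )

    # one '?' per sub-select, so the parameter tuple repeats the service_id
    print( db.execute( query, ( 1, 1 ) ).fetchall() ) # [(10,), (20,), (30,)] (order not guaranteed)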
+ def GetTagsToAncestors( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ):
+
+ if len( ideal_tag_ids ) == 0:
+
+ return {}
+
+ elif len( ideal_tag_ids ) == 1:
+
+ ( ideal_tag_id, ) = ideal_tag_ids
+
+ ancestors = self.GetAncestors( display_type, tag_service_id, ideal_tag_id )
+
+ return { ideal_tag_id : ancestors }
+
+
+ cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
+
+ with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'child_tag_id' ) as temp_table_name:
+
+ tag_ids_to_ancestors = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {} CROSS JOIN {} USING ( child_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) )
+
+
+ for tag_id in ideal_tag_ids:
+
+ if tag_id not in tag_ids_to_ancestors:
+
+ tag_ids_to_ancestors[ tag_id ] = set()
+
+
+
+ return tag_ids_to_ancestors
+
+
+ def GetTagsToDescendants( self, display_type: int, tag_service_id: int, ideal_tag_ids: typing.Collection[ int ] ):
+
+ if len( ideal_tag_ids ) == 0:
+
+ return {}
+
+ elif len( ideal_tag_ids ) == 1:
+
+ ( ideal_tag_id, ) = ideal_tag_ids
+
+ descendants = self.GetDescendants( display_type, tag_service_id, ideal_tag_id )
+
+ return { ideal_tag_id : descendants }
+
+
+ cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
+
+ with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ancestor_tag_id' ) as temp_table_name:
+
+ tag_ids_to_descendants = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT ancestor_tag_id, child_tag_id FROM {} CROSS JOIN {} USING ( ancestor_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) )
+
+
+ for ideal_tag_id in ideal_tag_ids:
+
+ if ideal_tag_id not in tag_ids_to_descendants:
+
+ tag_ids_to_descendants[ ideal_tag_id ] = set()
+
+
+
+ return tag_ids_to_descendants
+
+
+ def IdealiseStatusesToPairIds( self, tag_service_id, unideal_statuses_to_pair_ids ):
+
+ all_tag_ids = set( itertools.chain.from_iterable( ( itertools.chain.from_iterable( pair_ids ) for pair_ids in unideal_statuses_to_pair_ids.values() ) ) )
+
+ tag_ids_to_ideal_tag_ids = self.modules_tag_siblings.GetTagsToIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, all_tag_ids )
+
+ ideal_statuses_to_pair_ids = collections.defaultdict( list )
+
+ for ( status, pair_ids ) in unideal_statuses_to_pair_ids.items():
+
+ ideal_pair_ids = sorted( ( ( tag_ids_to_ideal_tag_ids[ child_tag_id ], tag_ids_to_ideal_tag_ids[ parent_tag_id ] ) for ( child_tag_id, parent_tag_id ) in pair_ids ) )
+
+ ideal_statuses_to_pair_ids[ status ] = ideal_pair_ids
+
+
+ return ideal_statuses_to_pair_ids
+
+
+ def IsChained( self, display_type, tag_service_id, ideal_tag_id ):
+
+ cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id )
+
+ return self._Execute( 'SELECT 1 FROM {} WHERE child_tag_id = ?
OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ).fetchone() is not None + + + def NotifyParentAddRowSynced( self, tag_service_id, row ): + + if tag_service_id in self._service_ids_to_display_application_status: + + ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ tag_service_id ] + + parent_rows_to_add.discard( row ) + + num_actual_rows += 1 + + self._service_ids_to_display_application_status[ tag_service_id ] = ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) + + + + def NotifyParentDeleteRowSynced( self, tag_service_id, row ): + + if tag_service_id in self._service_ids_to_display_application_status: + + ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ tag_service_id ] + + parent_rows_to_remove.discard( row ) + + num_actual_rows -= 1 + + self._service_ids_to_display_application_status[ tag_service_id ] = ( parent_rows_to_add, parent_rows_to_remove, num_actual_rows, num_ideal_rows ) + + + + def PendTagParents( self, service_id, triples ): + + self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id, reason_id ) in triples ) ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_petitions ( service_id, child_tag_id, parent_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, reason_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id, reason_id ) in triples ) ) + + + def PetitionTagParents( self, service_id, triples ): + + self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id, reason_id ) in triples ) ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_petitions ( service_id, child_tag_id, parent_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, reason_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id, reason_id ) in triples ) ) + + + def RescindPendingTagParents( self, service_id, pairs ): + + self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + + + def RescindPetitionedTagParents( self, service_id, pairs ): + + self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? 
AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + + + def Regen( self, tag_service_ids ): + + for tag_service_id in tag_service_ids: + + cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) + + self._Execute( 'DELETE FROM {};'.format( cache_tag_parents_lookup_table_name ) ) + + applicable_service_ids = self.GetApplicableServiceIds( tag_service_id ) + + tps = ClientTagsHandling.TagParentsStructure() + + for applicable_service_id in applicable_service_ids: + + unideal_statuses_to_pair_ids = self.GetTagParentsIds( service_id = applicable_service_id ) + + # we have to collapse the parent ids according to siblings + + ideal_statuses_to_pair_ids = self.IdealiseStatusesToPairIds( tag_service_id, unideal_statuses_to_pair_ids ) + + # + + petitioned_fast_lookup = set( ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] ) + + for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]: + + if ( child_tag_id, parent_tag_id ) in petitioned_fast_lookup: + + continue + + + tps.AddPair( child_tag_id, parent_tag_id ) + + + for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]: + + tps.AddPair( child_tag_id, parent_tag_id ) + + + + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() ) + + if tag_service_id in self._service_ids_to_display_application_status: + + del self._service_ids_to_display_application_status[ tag_service_id ] + + + + + def RegenChains( self, tag_service_ids, tag_ids ): + + if len( tag_ids ) == 0: + + return + + + for tag_service_id in tag_service_ids: + + cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) + + # it is possible that the parents cache currently contains non-ideal tag_ids + # so, to be safe, we'll also get all sibling chain members + + tag_ids_to_clear_and_regen = set( tag_ids ) + + ideal_tag_ids = self.modules_tag_siblings.GetIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids ) + + tag_ids_to_clear_and_regen.update( self.modules_tag_siblings.GetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, ideal_tag_ids ) ) + + # and now all possible current parent chains based on this + + tag_ids_to_clear_and_regen.update( self.GetChainsMembers( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids_to_clear_and_regen ) ) + + # this should now contain all possible tag_ids that could be in tag parents right now related to what we were given + + self._ExecuteMany( 'DELETE FROM {} WHERE child_tag_id = ? 
OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) ) + + # we wipe them + + applicable_tag_service_ids = self.GetApplicableServiceIds( tag_service_id ) + + tps = ClientTagsHandling.TagParentsStructure() + + for applicable_tag_service_id in applicable_tag_service_ids: + + service_key = self.modules_services.GetService( applicable_tag_service_id ).GetServiceKey() + + unideal_statuses_to_pair_ids = self.GetTagParentsIdsChains( applicable_tag_service_id, tag_ids_to_clear_and_regen ) + + ideal_statuses_to_pair_ids = self.IdealiseStatusesToPairIds( tag_service_id, unideal_statuses_to_pair_ids ) + + # + + petitioned_fast_lookup = set( ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] ) + + for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]: + + if ( child_tag_id, parent_tag_id ) in petitioned_fast_lookup: + + continue + + + tps.AddPair( child_tag_id, parent_tag_id ) + + + for ( child_tag_id, parent_tag_id ) in ideal_statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]: + + tps.AddPair( child_tag_id, parent_tag_id ) + + + + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() ) + + if tag_service_id in self._service_ids_to_display_application_status: + + del self._service_ids_to_display_application_status[ tag_service_id ] + + + + + def SetApplication( self, service_keys_to_applicable_service_keys ): + + if self._service_ids_to_applicable_service_ids is None: + + self.GenerateApplicationDicts() + + + new_service_ids_to_applicable_service_ids = collections.defaultdict( list ) + + for ( master_service_key, applicable_service_keys ) in service_keys_to_applicable_service_keys.items(): + + master_service_id = self.modules_services.GetServiceId( master_service_key ) + applicable_service_ids = [ self.modules_services.GetServiceId( service_key ) for service_key in applicable_service_keys ] + + new_service_ids_to_applicable_service_ids[ master_service_id ] = applicable_service_ids + + + old_and_new_master_service_ids = set( self._service_ids_to_applicable_service_ids.keys() ) + old_and_new_master_service_ids.update( new_service_ids_to_applicable_service_ids.keys() ) + + inserts = [] + + service_ids_to_sync = set() + + for master_service_id in old_and_new_master_service_ids: + + if master_service_id in new_service_ids_to_applicable_service_ids: + + applicable_service_ids = new_service_ids_to_applicable_service_ids[ master_service_id ] + + inserts.extend( ( ( master_service_id, i, applicable_service_id ) for ( i, applicable_service_id ) in enumerate( applicable_service_ids ) ) ) + + if applicable_service_ids != self._service_ids_to_applicable_service_ids[ master_service_id ]: + + service_ids_to_sync.add( master_service_id ) + + + else: + + service_ids_to_sync.add( master_service_id ) + + + + self._Execute( 'DELETE FROM tag_parent_application;' ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? 
);', inserts ) + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + return service_ids_to_sync + + diff --git a/hydrus/client/db/ClientDBTagSiblings.py b/hydrus/client/db/ClientDBTagSiblings.py new file mode 100644 index 00000000..be065748 --- /dev/null +++ b/hydrus/client/db/ClientDBTagSiblings.py @@ -0,0 +1,909 @@ +import collections +import itertools +import sqlite3 +import typing + +from hydrus.core import HydrusConstants as HC +from hydrus.core import HydrusData +from hydrus.core import HydrusDBModule + +from hydrus.client import ClientConstants as CC +from hydrus.client.db import ClientDBDefinitionsCache +from hydrus.client.db import ClientDBMaster +from hydrus.client.db import ClientDBServices +from hydrus.client.metadata import ClientTags +from hydrus.client.metadata import ClientTagsHandling + +def GenerateTagSiblingsLookupCacheTableName( display_type: int, service_id: int ): + + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id ) + + if display_type == ClientTags.TAG_DISPLAY_IDEAL: + + return cache_ideal_tag_siblings_lookup_table_name + + elif display_type == ClientTags.TAG_DISPLAY_ACTUAL: + + return cache_actual_tag_siblings_lookup_table_name + + +def GenerateTagSiblingsLookupCacheTableNames( service_id ): + + cache_ideal_tag_siblings_lookup_table_name = 'external_caches.ideal_tag_siblings_lookup_cache_{}'.format( service_id ) + cache_actual_tag_siblings_lookup_table_name = 'external_caches.actual_tag_siblings_lookup_cache_{}'.format( service_id ) + + return ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) + +class ClientDBTagSiblings( HydrusDBModule.HydrusDBModule ): + + def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_tags: ClientDBMaster.ClientDBMasterTags, modules_tags_local_cache: ClientDBDefinitionsCache.ClientDBCacheLocalTags ): + + self.modules_services = modules_services + self.modules_tags_local_cache = modules_tags_local_cache + self.modules_tags = modules_tags + + self._service_ids_to_display_application_status = {} + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + HydrusDBModule.HydrusDBModule.__init__( self, 'client tag siblings', cursor ) + + + def _GenerateApplicationDicts( self ): + + unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._Execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_sibling_application;' ) ) + + self._service_ids_to_applicable_service_ids = collections.defaultdict( list ) + + self._service_ids_to_applicable_service_ids.update( { master_service_id : [ application_service_id for ( index, application_service_id ) in sorted( index_and_applicable_service_ids ) ] for ( master_service_id, index_and_applicable_service_ids ) in unsorted_dict.items() } ) + + self._service_ids_to_interested_service_ids = collections.defaultdict( set ) + + for ( master_service_id, application_service_ids ) in self._service_ids_to_applicable_service_ids.items(): + + for application_service_id in application_service_ids: + + self._service_ids_to_interested_service_ids[ application_service_id ].add( master_service_id ) + + + + + def _GetInitialIndexGenerationTuples( self ): + + index_generation_tuples = [ + ( 
'tag_siblings', [ 'service_id', 'good_tag_id' ], False ), + ( 'tag_sibling_petitions', [ 'service_id', 'good_tag_id' ], False ), + ] + + return index_generation_tuples + + + def AddTagSiblings( self, service_id, pairs ): + + self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + + + def ClearActual( self, service_id ): + + cache_actual_tag_sibling_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, service_id ) + + self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_sibling_lookup_table_name ) ) + + if service_id in self._service_ids_to_display_application_status: + + del self._service_ids_to_display_application_status[ service_id ] + + + + def CreateInitialTables( self ): + + self._Execute( 'CREATE TABLE tag_siblings ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' ) + self._Execute( 'CREATE TABLE tag_sibling_petitions ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' ) + + self._Execute( 'CREATE TABLE tag_sibling_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' ) + + + def DeleteTagSiblings( self, service_id, pairs ): + + self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_DELETED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + + + def Drop( self, tag_service_id ): + + self._Execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( tag_service_id, ) ) + self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ?;', ( tag_service_id, ) ) + + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) + + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) + + self._Execute( 'DELETE FROM tag_sibling_application WHERE master_service_id = ? 
OR application_service_id = ?;', ( tag_service_id, tag_service_id ) ) + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + + def FilterChained( self, display_type, tag_service_id, tag_ids ): + + if len( tag_ids ) == 0: + + return set() + + elif len( tag_ids ) == 1: + + ( tag_id, ) = tag_ids + + if self.IsChained( display_type, tag_service_id, tag_id ): + + return { tag_id } + + else: + + return set() + + + + # get the tag_ids that are part of a sibling chain + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name: + + # keep these separate--older sqlite can't do cross join to an OR ON + + # temp tags to lookup + chain_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) + chain_tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) ) + + + return chain_tag_ids + + + def FilterChainedIdealsIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ): + + # get the tag_ids that are part of a sibling chain + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + # keep these separate--older sqlite can't do cross join to an OR ON + + # temp tags to lookup + self._Execute( 'INSERT OR IGNORE INTO {} SELECT ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) + self._STI( self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) ) + + + def FilterChainedIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ): + + # get the tag_ids that are part of a sibling chain + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + # keep these separate--older sqlite can't do cross join to an OR ON + + # temp tags to lookup + self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) + self._STI( self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) ) + + + def Generate( self, tag_service_id ): + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) + + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) ) + + self._CreateIndex( cache_actual_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] ) + self._CreateIndex( 
cache_ideal_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] ) + + self._Execute( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( tag_service_id, 0, tag_service_id ) ) + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + self.Regen( ( tag_service_id, ) ) + + + def GetAllTagIds( self, display_type, tag_service_id ): + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + tag_ids = set() + + tag_ids.update( self._STI( self._Execute( 'SELECT DISTINCT bad_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ) ) ) + tag_ids.update( self._STI( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ) ) ) + + return tag_ids + + + def GetApplicableServiceIds( self, tag_service_id ): + + if self._service_ids_to_applicable_service_ids is None: + + self._GenerateApplicationDicts() + + + return self._service_ids_to_applicable_service_ids[ tag_service_id ] + + + def GetApplication( self ): + + if self._service_ids_to_applicable_service_ids is None: + + self._GenerateApplicationDicts() + + + service_ids_to_service_keys = {} + + service_keys_to_applicable_service_keys = {} + + for ( master_service_id, applicable_service_ids ) in self._service_ids_to_applicable_service_ids.items(): + + all_service_ids = [ master_service_id ] + list( applicable_service_ids ) + + for service_id in all_service_ids: + + if service_id not in service_ids_to_service_keys: + + service_ids_to_service_keys[ service_id ] = self.modules_services.GetService( service_id ).GetServiceKey() + + + + service_keys_to_applicable_service_keys[ service_ids_to_service_keys[ master_service_id ] ] = [ service_ids_to_service_keys[ service_id ] for service_id in applicable_service_ids ] + + + return service_keys_to_applicable_service_keys + + + def GetApplicationStatus( self, service_id ): + + if service_id not in self._service_ids_to_display_application_status: + + ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id ) + + actual_sibling_rows = set( self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) + ideal_sibling_rows = set( self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) ) + + sibling_rows_to_remove = actual_sibling_rows.difference( ideal_sibling_rows ) + sibling_rows_to_add = ideal_sibling_rows.difference( actual_sibling_rows ) + + num_actual_rows = len( actual_sibling_rows ) + num_ideal_rows = len( ideal_sibling_rows ) + + self._service_ids_to_display_application_status[ service_id ] = ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) + + + ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ service_id ] + + return ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) + + + def GetChainMembersFromIdeal( self, display_type, tag_service_id, ideal_tag_id ) -> typing.Set[ int ]: + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + sibling_tag_ids = self._STS( self._Execute( 'SELECT bad_tag_id FROM {} WHERE ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( 
ideal_tag_id, ) ) ) + + sibling_tag_ids.add( ideal_tag_id ) + + return sibling_tag_ids + + + def GetChainsMembersFromIdeals( self, display_type, tag_service_id, ideal_tag_ids ) -> typing.Set[ int ]: + + if len( ideal_tag_ids ) == 0: + + return set() + + elif len( ideal_tag_ids ) == 1: + + ( ideal_tag_id, ) = ideal_tag_ids + + return self.GetChainMembersFromIdeal( display_type, tag_service_id, ideal_tag_id ) + + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name: + + # temp tags to lookup + sibling_tag_ids = self._STS( self._Execute( 'SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) + + + sibling_tag_ids.update( ideal_tag_ids ) + + return sibling_tag_ids + + + def GetChainsMembersFromIdealsTables( self, display_type, tag_service_id, ideal_tag_ids_table_name, results_table_name ) -> typing.Set[ int ]: + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT ideal_tag_id FROM {};'.format( results_table_name, ideal_tag_ids_table_name ) ) + + # tags to lookup + self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( results_table_name, ideal_tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) + + + def GetExpectedTableNames( self ) -> typing.Collection[ str ]: + + expected_table_names = [ + 'tag_siblings', + 'tag_sibling_petitions', + 'tag_sibling_application' + ] + + return expected_table_names + + + def GetIdeal( self, display_type, tag_service_id, tag_id ) -> int: + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + result = self._Execute( 'SELECT ideal_tag_id FROM {} WHERE bad_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, ) ).fetchone() + + if result is None: + + return tag_id + + else: + + ( ideal_tag_id, ) = result + + return ideal_tag_id + + + + def GetIdeals( self, display_type, tag_service_id, tag_ids ) -> typing.Set[ int ]: + + if not isinstance( tag_ids, set ): + + tag_ids = set( tag_ids ) + + + if len( tag_ids ) == 0: + + return set() + + elif len( tag_ids ) == 1: + + ( tag_id, ) = tag_ids + + return { self.GetIdeal( display_type, tag_service_id, tag_id ) } + + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name: + + magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END' + + cursor = self._Execute( + 'SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format( + magic_case, + temp_tag_ids_table_name, + cache_tag_siblings_lookup_table_name + ) + ) + + return self._STS( cursor ) + + + ''' + no_ideal_found_tag_ids = set( tag_ids ) + ideal_tag_ids = set() + + with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name: + + # temp tags to lookup + for ( tag_id, ideal_tag_id ) in self._Execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ): + + no_ideal_found_tag_ids.discard( tag_id ) + ideal_tag_ids.add( ideal_tag_id ) + + + ideal_tag_ids.update( 
no_ideal_found_tag_ids ) + + + return ideal_tag_ids + ''' + + + def GetIdealsIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ): + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END' + + cursor = self._Execute( + 'INSERT OR IGNORE INTO {} ( ideal_tag_id ) SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format( + results_table_name, + magic_case, + tag_ids_table_name, + cache_tag_siblings_lookup_table_name + ) + ) + + return self._STS( cursor ) + + + def GetIdealsToChains( self, display_type, tag_service_id, ideal_tag_ids ): + + # this only takes ideal_tag_ids + + if len( ideal_tag_ids ) == 0: + + return {} + + elif len( ideal_tag_ids ) == 1: + + ( ideal_tag_id, ) = ideal_tag_ids + + chain_tag_ids = self.GetChainMembersFromIdeal( display_type, tag_service_id, ideal_tag_id ) + + return { ideal_tag_id : chain_tag_ids } + + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name: + + # temp tags to lookup + ideal_tag_ids_to_chain_members = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT ideal_tag_id, bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) + + + # this returns ideal in the chain, and chains of size 1 + + for ideal_tag_id in ideal_tag_ids: + + ideal_tag_ids_to_chain_members[ ideal_tag_id ].add( ideal_tag_id ) + + + return ideal_tag_ids_to_chain_members + + + def GetInterestedServiceIds( self, tag_service_id ): + + if self._service_ids_to_interested_service_ids is None: + + self._GenerateApplicationDicts() + + + return self._service_ids_to_interested_service_ids[ tag_service_id ] + + + def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]: + + if HC.CONTENT_TYPE_TAG: + + return [ + ( 'tag_siblings', 'bad_tag_id' ), + ( 'tag_siblings', 'good_tag_id' ), + ( 'tag_sibling_petitions', 'bad_tag_id' ), + ( 'tag_sibling_petitions', 'good_tag_id' ) + ] + + + return [] + + + def GetTagSiblingsForTags( self, service_key, tags ): + + if service_key == CC.COMBINED_TAG_SERVICE_KEY: + + tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES ) + + else: + + tag_service_ids = ( self.modules_services.GetServiceId( service_key ), ) + + + existing_tags = { tag for tag in tags if self.modules_tags.TagExists( tag ) } + + existing_tag_ids = set( self.modules_tags_local_cache.GetTagIdsToTags( tags = existing_tags ).keys() ) + + tag_ids_to_chain_tag_ids = collections.defaultdict( set ) + + for tag_service_id in tag_service_ids: + + tag_ids_to_ideal_tag_ids = self.GetTagsToIdeals( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, existing_tag_ids ) + + ideal_tag_ids = set( tag_ids_to_ideal_tag_ids.values() ) + + ideal_tag_ids_to_chain_tag_ids = self.GetIdealsToChains( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, ideal_tag_ids ) + + for tag_id in existing_tag_ids: + + chain_tag_ids = ideal_tag_ids_to_chain_tag_ids[ tag_ids_to_ideal_tag_ids[ tag_id ] ] + + tag_ids_to_chain_tag_ids[ tag_id ].update( chain_tag_ids ) + + + + all_tag_ids = set( tag_ids_to_chain_tag_ids.keys() ) + all_tag_ids.update( itertools.chain.from_iterable( tag_ids_to_chain_tag_ids.values() ) ) + + tag_ids_to_tags = 
self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids ) + + tags_to_siblings = { tag_ids_to_tags[ tag_id ] : { tag_ids_to_tags[ chain_tag_id ] for chain_tag_id in chain_tag_ids } for ( tag_id, chain_tag_ids ) in tag_ids_to_chain_tag_ids.items() } + + for tag in tags: + + if tag not in existing_tags: + + tags_to_siblings[ tag ] = { tag } + + + + return tags_to_siblings + + + def GetTagSiblingsIdeals( self, service_key ): + + tag_service_id = self.modules_services.GetServiceId( service_key ) + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) + + pair_ids = self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ).fetchall() + + all_tag_ids = set( itertools.chain.from_iterable( pair_ids ) ) + + tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids ) + + tags_to_ideals = { tag_ids_to_tags[ bad_tag_id ] : tag_ids_to_tags[ good_tag_id ] for ( bad_tag_id, good_tag_id ) in pair_ids } + + return tags_to_ideals + + + def GetTagsToIdeals( self, display_type, tag_service_id, tag_ids ): + + if not isinstance( tag_ids, set ): + + tag_ids = set( tag_ids ) + + + if len( tag_ids ) == 0: + + return {} + + elif len( tag_ids ) == 1: + + ( tag_id, ) = tag_ids + + return { tag_id : self.GetIdeal( display_type, tag_service_id, tag_id ) } + + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + no_ideal_found_tag_ids = set( tag_ids ) + tag_ids_to_ideal_tag_ids = {} + + with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name: + + # temp tags to lookup + for ( tag_id, ideal_tag_id ) in self._Execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ): + + no_ideal_found_tag_ids.discard( tag_id ) + tag_ids_to_ideal_tag_ids[ tag_id ] = ideal_tag_id + + + tag_ids_to_ideal_tag_ids.update( { tag_id : tag_id for tag_id in no_ideal_found_tag_ids } ) + + + return tag_ids_to_ideal_tag_ids + + + def GetTagSiblings( self, service_key ): + + service_id = self.modules_services.GetServiceId( service_key ) + + statuses_to_pair_ids = self.GetTagSiblingsIds( service_id ) + + all_tag_ids = set() + + for pair_ids in statuses_to_pair_ids.values(): + + for ( bad_tag_id, good_tag_id ) in pair_ids: + + all_tag_ids.add( bad_tag_id ) + all_tag_ids.add( good_tag_id ) + + + + tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = all_tag_ids ) + + statuses_to_pairs = collections.defaultdict( set ) + + statuses_to_pairs.update( { status : { ( tag_ids_to_tags[ bad_tag_id ], tag_ids_to_tags[ good_tag_id ] ) for ( bad_tag_id, good_tag_id ) in pair_ids } for ( status, pair_ids ) in statuses_to_pair_ids.items() } ) + + return statuses_to_pairs + + + def GetTagSiblingsIds( self, service_id ): + + statuses_and_pair_ids = self._Execute( 'SELECT status, bad_tag_id, good_tag_id FROM tag_siblings WHERE service_id = ? 
UNION SELECT status, bad_tag_id, good_tag_id FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall() + + unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( bad_tag_id, good_tag_id ) ) for ( status, bad_tag_id, good_tag_id ) in statuses_and_pair_ids ) + + statuses_to_pair_ids = collections.defaultdict( list ) + + statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } ) + + return statuses_to_pair_ids + + + def GetTagSiblingsIdsChains( self, service_id, tag_ids ): + + done_tag_ids = set() + next_tag_ids = set( tag_ids ) + result_rows = set() + + while len( next_tag_ids ) > 0: + + with self._MakeTemporaryIntegerTable( next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name: + + done_tag_ids.update( next_tag_ids ) + + next_tag_ids = set() + + # keep these separate--older sqlite can't do cross join to an OR ON + + # temp tag_ids to siblings + queries = [ + 'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_siblings ON ( bad_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), + 'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_siblings ON ( good_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), + 'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_sibling_petitions ON ( bad_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ), + 'SELECT status, bad_tag_id, good_tag_id FROM {} CROSS JOIN tag_sibling_petitions ON ( good_tag_id = tag_id ) WHERE service_id = ?'.format( temp_next_tag_ids_table_name ) + ] + + query = ' UNION '.join( queries ) + + for row in self._Execute( query, ( service_id, service_id, service_id, service_id ) ): + + result_rows.add( row ) + + ( status, bad_tag_id, good_tag_id ) = row + + for tag_id in ( bad_tag_id, good_tag_id ): + + if tag_id not in done_tag_ids: + + next_tag_ids.add( tag_id ) + + + + + + + unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( bad_tag_id, good_tag_id ) ) for ( status, bad_tag_id, good_tag_id ) in result_rows ) + + statuses_to_pair_ids = collections.defaultdict( list ) + + statuses_to_pair_ids.update( { status : sorted( pair_ids ) for ( status, pair_ids ) in unsorted_statuses_to_pair_ids.items() } ) + + return statuses_to_pair_ids + + + def IsChained( self, display_type, tag_service_id, tag_id ): + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id ) + + return self._Execute( 'SELECT 1 FROM {} WHERE bad_tag_id = ? 
OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, tag_id ) ).fetchone() is not None + + + def NotifySiblingAddRowSynced( self, tag_service_id, row ): + + if tag_service_id in self._service_ids_to_display_application_status: + + ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ tag_service_id ] + + sibling_rows_to_add.discard( row ) + + num_actual_rows += 1 + + self._service_ids_to_display_application_status[ tag_service_id ] = ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) + + + + def NotifySiblingDeleteRowSynced( self, tag_service_id, row ): + + if tag_service_id in self._service_ids_to_display_application_status: + + ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) = self._service_ids_to_display_application_status[ tag_service_id ] + + sibling_rows_to_remove.discard( row ) + + num_actual_rows -= 1 + + self._service_ids_to_display_application_status[ tag_service_id ] = ( sibling_rows_to_add, sibling_rows_to_remove, num_actual_rows, num_ideal_rows ) + + + + def PendTagSiblings( self, service_id, triples ): + + self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id, reason_id ) in triples ) ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_sibling_petitions ( service_id, bad_tag_id, good_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, reason_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id, reason_id ) in triples ) ) + + + def PetitionTagSiblings( self, service_id, triples ): + + self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( ( service_id, bad_tag_id, good_tag_id ) for ( bad_tag_id, good_tag_id, reason_id ) in triples ) ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_sibling_petitions ( service_id, bad_tag_id, good_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, reason_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id, reason_id ) in triples ) ) + + + def RescindPendingTagSiblings( self, service_id, pairs ): + + self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + + + def RescindPetitionedTagSiblings( self, service_id, pairs ): + + self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ? 
AND status = ?;', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + + + def Regen( self, tag_service_ids ): + + for tag_service_id in tag_service_ids: + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) + + self._Execute( 'DELETE FROM {};'.format( cache_tag_siblings_lookup_table_name ) ) + + applicable_service_ids = self.GetApplicableServiceIds( tag_service_id ) + + tss = ClientTagsHandling.TagSiblingsStructure() + + for applicable_service_id in applicable_service_ids: + + statuses_to_pair_ids = self.GetTagSiblingsIds( service_id = applicable_service_id ) + + petitioned_fast_lookup = set( statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] ) + + for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]: + + if ( bad_tag_id, good_tag_id ) in petitioned_fast_lookup: + + continue + + + tss.AddPair( bad_tag_id, good_tag_id ) + + + for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]: + + tss.AddPair( bad_tag_id, good_tag_id ) + + + + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() ) + + if tag_service_id in self._service_ids_to_display_application_status: + + del self._service_ids_to_display_application_status[ tag_service_id ] + + + + + def RegenChains( self, tag_service_ids, tag_ids ): + + # as this guy can change ideals, the related parent chains need to be regenned afterwards too + + if len( tag_ids ) == 0: + + return + + + for tag_service_id in tag_service_ids: + + cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) + + tag_ids_to_clear_and_regen = set( tag_ids ) + + ideal_tag_ids = self.GetIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, tag_ids ) + + tag_ids_to_clear_and_regen.update( self.GetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, ideal_tag_ids ) ) + + self._ExecuteMany( 'DELETE FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) ) + + applicable_tag_service_ids = self.GetApplicableServiceIds( tag_service_id ) + + tss = ClientTagsHandling.TagSiblingsStructure() + + for applicable_tag_service_id in applicable_tag_service_ids: + + service_key = self.modules_services.GetService( applicable_tag_service_id ).GetServiceKey() + + statuses_to_pair_ids = self.GetTagSiblingsIdsChains( applicable_tag_service_id, tag_ids_to_clear_and_regen ) + + petitioned_fast_lookup = set( statuses_to_pair_ids[ HC.CONTENT_STATUS_PETITIONED ] ) + + for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_CURRENT ]: + + if ( bad_tag_id, good_tag_id ) in petitioned_fast_lookup: + + continue + + + tss.AddPair( bad_tag_id, good_tag_id ) + + + for ( bad_tag_id, good_tag_id ) in statuses_to_pair_ids[ HC.CONTENT_STATUS_PENDING ]: + + tss.AddPair( bad_tag_id, good_tag_id ) + + + + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? 
);'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() ) + + if tag_service_id in self._service_ids_to_display_application_status: + + del self._service_ids_to_display_application_status[ tag_service_id ] + + + + + def SetApplication( self, service_keys_to_applicable_service_keys ): + + if self._service_ids_to_applicable_service_ids is None: + + self._GenerateApplicationDicts() + + + new_service_ids_to_applicable_service_ids = collections.defaultdict( list ) + + for ( master_service_key, applicable_service_keys ) in service_keys_to_applicable_service_keys.items(): + + master_service_id = self.modules_services.GetServiceId( master_service_key ) + applicable_service_ids = [ self.modules_services.GetServiceId( service_key ) for service_key in applicable_service_keys ] + + new_service_ids_to_applicable_service_ids[ master_service_id ] = applicable_service_ids + + + old_and_new_master_service_ids = set( self._service_ids_to_applicable_service_ids.keys() ) + old_and_new_master_service_ids.update( new_service_ids_to_applicable_service_ids.keys() ) + + inserts = [] + + service_ids_to_sync = set() + + for master_service_id in old_and_new_master_service_ids: + + if master_service_id in new_service_ids_to_applicable_service_ids: + + applicable_service_ids = new_service_ids_to_applicable_service_ids[ master_service_id ] + + inserts.extend( ( ( master_service_id, i, applicable_service_id ) for ( i, applicable_service_id ) in enumerate( applicable_service_ids ) ) ) + + if applicable_service_ids != self._service_ids_to_applicable_service_ids[ master_service_id ]: + + service_ids_to_sync.add( master_service_id ) + + + else: + + service_ids_to_sync.add( master_service_id ) + + + + self._Execute( 'DELETE FROM tag_sibling_application;' ) + + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', inserts ) + + self._service_ids_to_applicable_service_ids = None + self._service_ids_to_interested_service_ids = None + + return service_ids_to_sync + + diff --git a/hydrus/client/gui/ClientGUI.py b/hydrus/client/gui/ClientGUI.py index af968139..787edeea 100644 --- a/hydrus/client/gui/ClientGUI.py +++ b/hydrus/client/gui/ClientGUI.py @@ -130,7 +130,7 @@ def THREADUploadPending( service_key ): nums_pending_for_this_service = nums_pending[ service_key ] - content_types_for_this_service = set( HC.REPOSITORY_CONTENT_TYPES[ service_type ] ) + content_types_for_this_service = set( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service_type ] ) if service_type in HC.REPOSITORIES: @@ -3754,7 +3754,7 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ): if result == QW.QDialog.Accepted: - self._controller.Write( 'regenerate_tag_siblings_cache' ) + self._controller.Write( 'regenerate_tag_siblings_and_parents_cache' ) @@ -4913,6 +4913,8 @@ The password is cleartext here but obscured in the entry dialog. 
Enter a blank p HG.query_planner_mode = False + HG.queries_planned = set() + HydrusData.ShowText( 'Query Planning done: {} queries analyzed'.format( HydrusData.ToHumanInt( HG.query_planner_query_count ) ) ) @@ -6397,7 +6399,7 @@ Try to keep this below 10 million!''' services = self._controller.services_manager.GetServices() - petition_permissions = [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.REPOSITORY_CONTENT_TYPES ] + petition_permissions = [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES ] repositories = [ service for service in services if service.GetServiceType() in HC.REPOSITORIES ] diff --git a/hydrus/client/gui/ClientGUIMPV.py b/hydrus/client/gui/ClientGUIMPV.py index 6fc8f4f5..c1a4ed55 100644 --- a/hydrus/client/gui/ClientGUIMPV.py +++ b/hydrus/client/gui/ClientGUIMPV.py @@ -218,7 +218,9 @@ class mpvWidget( QW.QWidget ): return - if self._media is not None and self._player.time_pos <= 1.0: + current_timestamp_s = self._player.time_pos + + if self._media is not None and current_timestamp_s is not None and current_timestamp_s <= 1.0: self._current_seek_to_start_count += 1 diff --git a/hydrus/client/gui/ClientGUITags.py b/hydrus/client/gui/ClientGUITags.py index eff49d1b..b249dc0c 100644 --- a/hydrus/client/gui/ClientGUITags.py +++ b/hydrus/client/gui/ClientGUITags.py @@ -3557,8 +3557,15 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ): if self._i_am_local_tag_service: - for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]: content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_ADD, pair ) ) - for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PETITIONED ]: content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_DELETE, pair ) ) + for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PETITIONED ]: + + content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_DELETE, pair ) ) + + + for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]: + + content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_ADD, pair ) ) + else: @@ -3574,10 +3581,10 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ): new_petitions = current_petitioned.difference( original_petitioned ) rescinded_petitions = original_petitioned.difference( current_petitioned ) - content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PEND, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_pends ) ) + content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_RESCIND_PETITION, pair ) for pair in rescinded_petitions ) ) content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_RESCIND_PEND, pair ) for pair in rescinded_pends ) ) content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PETITION, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_petitions ) ) - content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_RESCIND_PETITION, pair ) for pair in rescinded_petitions ) ) + content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PEND, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_pends ) ) return ( 
self._service_key, content_updates ) @@ -4566,16 +4573,16 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ): if self._i_am_local_tag_service: - for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]: - - content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_ADD, pair ) ) - - for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PETITIONED ]: content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_DELETE, pair ) ) + for pair in self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ]: + + content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_ADD, pair ) ) + + else: current_pending = self._current_statuses_to_pairs[ HC.CONTENT_STATUS_PENDING ] @@ -4590,10 +4597,10 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ): new_petitions = current_petitioned.difference( original_petitioned ) rescinded_petitions = original_petitioned.difference( current_petitioned ) - content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PEND, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_pends ) ) + content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_RESCIND_PETITION, pair ) for pair in rescinded_petitions ) ) content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_RESCIND_PEND, pair ) for pair in rescinded_pends ) ) content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PETITION, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_petitions ) ) - content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_RESCIND_PETITION, pair ) for pair in rescinded_petitions ) ) + content_updates.extend( ( HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PEND, pair, reason = self._pairs_to_reasons[ pair ] ) for pair in new_pends ) ) return ( self._service_key, content_updates ) diff --git a/hydrus/client/gui/canvas/ClientGUICanvasHoverFrames.py b/hydrus/client/gui/canvas/ClientGUICanvasHoverFrames.py index 1d7b7230..b5a56cba 100644 --- a/hydrus/client/gui/canvas/ClientGUICanvasHoverFrames.py +++ b/hydrus/client/gui/canvas/ClientGUICanvasHoverFrames.py @@ -1123,6 +1123,7 @@ class CanvasHoverFrameTop( CanvasHoverFrame ): if do_redraw: + self._ResetText() self._ResetButtons() diff --git a/hydrus/client/gui/pages/ClientGUIManagement.py b/hydrus/client/gui/pages/ClientGUIManagement.py index 0e5ca9d5..939b1f71 100644 --- a/hydrus/client/gui/pages/ClientGUIManagement.py +++ b/hydrus/client/gui/pages/ClientGUIManagement.py @@ -952,6 +952,8 @@ class ManagementPanelDuplicateFilter( ManagementPanel ): self._potential_file_search_currently_happening = False self._maintenance_numbers_need_redrawing = True + self._potential_duplicates_count = 0 + self._have_done_first_maintenance_numbers_show = False new_options = self._controller.new_options @@ -1277,7 +1279,14 @@ class ManagementPanelDuplicateFilter( ManagementPanel ): self._dupe_count_numbers_dirty = True - self._ShowRandomPotentialDupes() + if self._potential_duplicates_count > 1: + + self._ShowRandomPotentialDupes() + + else: + + self._ShowPotentialDupes( [] ) + @@ -1288,22 +1297,20 @@ class ManagementPanelDuplicateFilter( ManagementPanel ): self._UpdateMaintenanceStatus() - def _ShowRandomPotentialDupes( self ): + def 
_ShowPotentialDupes( self, hashes ): ( file_search_context, both_files_match ) = self._GetFileSearchContextAndBothFilesMatch() file_service_key = file_search_context.GetFileServiceKey() - hashes = self._controller.Read( 'random_potential_duplicate_hashes', file_search_context, both_files_match ) - - if len( hashes ) == 0: + if len( hashes ) > 0: - QW.QMessageBox.critical( self, 'Error', 'No files were found. Try refreshing the count, and if this keeps happening, please let hydrus_dev know.' ) + media_results = self._controller.Read( 'media_results', hashes, sorted = True ) - return + else: + + media_results = [] - - media_results = self._controller.Read( 'media_results', hashes, sorted = True ) panel = ClientGUIResults.MediaPanelThumbnails( self._page, self._page_key, file_service_key, media_results ) @@ -1312,6 +1319,20 @@ class ManagementPanelDuplicateFilter( ManagementPanel ): self._page.SwapMediaPanel( panel ) + def _ShowRandomPotentialDupes( self ): + + ( file_search_context, both_files_match ) = self._GetFileSearchContextAndBothFilesMatch() + + hashes = self._controller.Read( 'random_potential_duplicate_hashes', file_search_context, both_files_match ) + + if len( hashes ) == 0: + + HydrusData.ShowText( 'No files were found. Try refreshing the count, and if this keeps happening, please let hydrus_dev know.' ) + + + self._ShowPotentialDupes( hashes ) + + def _UpdateMaintenanceStatus( self ): self._refresh_maintenance_button.setEnabled( not ( self._duplicates_manager_is_fetching_maintenance_numbers or self._potential_file_search_currently_happening ) ) @@ -1408,9 +1429,11 @@ class ManagementPanelDuplicateFilter( ManagementPanel ): def _UpdatePotentialDuplicatesCount( self, potential_duplicates_count ): + self._potential_duplicates_count = potential_duplicates_count + self._num_potential_duplicates.setText( '{} potential pairs.'.format( HydrusData.ToHumanInt( potential_duplicates_count ) ) ) - if potential_duplicates_count > 0: + if self._potential_duplicates_count > 0: self._show_some_dupes.setEnabled( True ) self._launch_filter.setEnabled( True ) diff --git a/hydrus/client/gui/pages/ClientGUIPages.py b/hydrus/client/gui/pages/ClientGUIPages.py index 44710964..4d181e6e 100644 --- a/hydrus/client/gui/pages/ClientGUIPages.py +++ b/hydrus/client/gui/pages/ClientGUIPages.py @@ -110,9 +110,7 @@ class DialogPageChooser( ClientGUIDialogs.Dialog ): self._services = HG.client_controller.services_manager.GetServices() - repository_petition_permissions = [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.REPOSITORY_CONTENT_TYPES ] - - self._petition_service_keys = [ service.GetServiceKey() for service in self._services if service.GetServiceType() in HC.REPOSITORIES and True in ( service.HasPermission( content_type, action ) for ( content_type, action ) in repository_petition_permissions ) ] + self._petition_service_keys = [ service.GetServiceKey() for service in self._services if service.GetServiceType() in HC.REPOSITORIES and True in ( service.HasPermission( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service.GetServiceType() ] ) ] self._InitButtons( 'home' ) @@ -1595,8 +1593,10 @@ class PagesNotebook( QP.TabWidgetWithDnD ): click_over_tab = tab_index != -1 + can_go_home = tab_index > 1 can_go_left = tab_index > 0 can_go_right = tab_index < end_index + can_go_end = tab_index < end_index - 1 click_over_page_of_pages = False @@ -1619,20 +1619,101 @@ class PagesNotebook( QP.TabWidgetWithDnD ): 
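# Editor's note: the hunk below reworks the page tab right-click menu. The
# following is a minimal, self-contained sketch of the de-duplication idea,
# assuming nothing about hydrus internals -- get_close_actions and the
# action_id strings are hypothetical names invented for illustration. On an
# end tab, 'close other pages' already covers 'close pages to the left/right',
# so the directional entries would be duplicates and are collapsed away.

from typing import List, Tuple

def get_close_actions( tab_index: int, num_pages: int ) -> List[ Tuple[ str, str ] ]:
    
    # returns ( action_id, label ) pairs suitable for a 'close' submenu
    
    end_index = num_pages - 1
    
    can_go_left = tab_index > 0
    can_go_right = tab_index < end_index
    
    actions = []
    
    if num_pages > 1:
        
        if can_go_left and can_go_right:
            
            # a middle tab: all three actions do different things
            actions.append( ( 'close_others', 'close other pages' ) )
            actions.append( ( 'close_left', 'close pages to the left' ) )
            actions.append( ( 'close_right', 'close pages to the right' ) )
            
        else:
            
            # an end tab: 'close others' subsumes the only directional action
            label = 'close other page' if num_pages == 2 else 'close other pages'
            
            actions.append( ( 'close_others', label ) )
            
        
    
    return actions

# the rightmost of five tabs: the duplicate 'close pages to the left' is not offered
assert get_close_actions( 4, 5 ) == [ ( 'close_others', 'close other pages' ) ]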
ClientGUIMenus.AppendMenuItem( menu, 'close page', 'Close this page.', self._ClosePage, tab_index ) - if num_pages > 1: + if more_than_one_tab: - ClientGUIMenus.AppendMenuItem( menu, 'close other pages', 'Close all pages but this one.', self._CloseOtherPages, tab_index ) + if not can_go_left or not can_go_right: + + if num_pages == 2: + + label = 'close other page' + description = 'Close the other page.' + + else: + + label = 'close other pages' + description = 'Close all pages but this one.' + + + ClientGUIMenus.AppendMenuItem( menu, label, description, self._CloseOtherPages, tab_index ) + + else: + + close_menu = QW.QMenu( menu ) + + ClientGUIMenus.AppendMenuItem( close_menu, 'other pages', 'Close all pages but this one.', self._CloseOtherPages, tab_index ) + + if can_go_left: + + ClientGUIMenus.AppendMenuItem( close_menu, 'pages to the left', 'Close all pages to the left of this one.', self._CloseLeftPages, tab_index ) + + + if can_go_right: + + ClientGUIMenus.AppendMenuItem( close_menu, 'pages to the right', 'Close all pages to the right of this one.', self._CloseRightPages, tab_index ) + + + ClientGUIMenus.AppendMenu( menu, close_menu, 'close' ) + + + # + + move_menu = QW.QMenu( menu ) + + if can_go_home: + + ClientGUIMenus.AppendMenuItem( move_menu, 'to left end', 'Move this page all the way to the left.', self._ShiftPage, tab_index, new_index=0 ) + if can_go_left: - ClientGUIMenus.AppendMenuItem( menu, 'close pages to the left', 'Close all pages to the left of this one.', self._CloseLeftPages, tab_index ) + ClientGUIMenus.AppendMenuItem( move_menu, 'left', 'Move this page one to the left.', self._ShiftPage, tab_index, delta=-1 ) if can_go_right: - ClientGUIMenus.AppendMenuItem( menu, 'close pages to the right', 'Close all pages to the right of this one.', self._CloseRightPages, tab_index ) + ClientGUIMenus.AppendMenuItem( move_menu, 'right', 'Move this page one to the right.', self._ShiftPage, tab_index, 1 ) + if can_go_end: + + ClientGUIMenus.AppendMenuItem( move_menu, 'to right end', 'Move this page all the way to the right.', self._ShiftPage, tab_index, new_index=end_index ) + + + ClientGUIMenus.AppendMenu( menu, move_menu, 'move page' ) + + # + + selection_index = self.currentIndex() + + can_select_home = selection_index > 1 + can_select_left = selection_index > 0 + can_select_right = selection_index < end_index + can_select_end = selection_index < end_index - 1 + + select_menu = QW.QMenu( menu ) + + if can_select_home: + + ClientGUIMenus.AppendMenuItem( select_menu, 'first page', 'Select the page at the start of these.', self.MoveSelectionEnd, -1 ) + + + if can_select_left: + + ClientGUIMenus.AppendMenuItem( select_menu, 'page to the left', 'Select the page to the left of this one.', self.MoveSelection, -1 ) + + + if can_select_right: + + ClientGUIMenus.AppendMenuItem( select_menu, 'page to the right', 'Select the page to the right of this one.', self.MoveSelection, 1 ) + + + if can_select_end: + + ClientGUIMenus.AppendMenuItem( select_menu, 'last page', 'Select the page at the end of these.', self.MoveSelectionEnd, 1 ) + + + ClientGUIMenus.AppendMenu( menu, select_menu, 'select' ) + ClientGUIMenus.AppendSeparator( menu ) @@ -1651,33 +1732,6 @@ class PagesNotebook( QP.TabWidgetWithDnD ): if more_than_one_tab: - ClientGUIMenus.AppendSeparator( menu ) - - can_home = tab_index > 1 - can_move_left = tab_index > 0 - can_move_right = tab_index < end_index - can_end = tab_index < end_index - 1 - - if can_home: - - ClientGUIMenus.AppendMenuItem( menu, 'move to left end', 'Move this 
page all the way to the left.', self._ShiftPage, tab_index, new_index=0 ) - - - if can_move_left: - - ClientGUIMenus.AppendMenuItem( menu, 'move left', 'Move this page one to the left.', self._ShiftPage, tab_index, delta=-1 ) - - - if can_move_right: - - ClientGUIMenus.AppendMenuItem( menu, 'move right', 'Move this page one to the right.', self._ShiftPage, tab_index, 1 ) - - - if can_end: - - ClientGUIMenus.AppendMenuItem( menu, 'move to right end', 'Move this page all the way to the right.', self._ShiftPage, tab_index, new_index=end_index ) - - ClientGUIMenus.AppendSeparator( menu ) submenu = QW.QMenu( menu ) diff --git a/hydrus/client/gui/services/ClientGUIClientsideServices.py b/hydrus/client/gui/services/ClientGUIClientsideServices.py index 47357dca..9eb008e1 100644 --- a/hydrus/client/gui/services/ClientGUIClientsideServices.py +++ b/hydrus/client/gui/services/ClientGUIClientsideServices.py @@ -2566,7 +2566,7 @@ class ReviewServiceRepositorySubPanel( QW.QWidget ): # - content_types = tuple( HC.REPOSITORY_CONTENT_TYPES[ self._service.GetServiceType() ] ) + content_types = tuple( HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service.GetServiceType() ] ) self._content_types_to_gauges_and_buttons = {} diff --git a/hydrus/client/importing/ClientImportFileSeeds.py b/hydrus/client/importing/ClientImportFileSeeds.py index 154c5a1d..10ec1c46 100644 --- a/hydrus/client/importing/ClientImportFileSeeds.py +++ b/hydrus/client/importing/ClientImportFileSeeds.py @@ -618,20 +618,15 @@ class FileSeed( HydrusSerialisable.SerialisableBase ): raise HydrusExceptions.VetoException( 'Source file does not exist!' ) - if limited_mimes is not None: - - mime = HydrusFileHandling.GetMime( path ) - - if mime not in limited_mimes: - - raise HydrusExceptions.VetoException( 'Not in allowed mimes!' ) - - - ( os_file_handle, temp_path ) = HydrusPaths.GetTempPath() try: + if status_hook is not None: + + status_hook( 'copying file to temp location' ) + + copied = HydrusPaths.MirrorFile( path, temp_path ) if not copied: @@ -639,6 +634,23 @@ class FileSeed( HydrusSerialisable.SerialisableBase ): raise Exception( 'File failed to copy to temp path--see log for error.' ) + if limited_mimes is not None: + + # I think this thing should and will be rolled into file import options late + + if status_hook is not None: + + status_hook( 'testing file type' ) + + + mime = HydrusFileHandling.GetMime( temp_path ) + + if mime not in limited_mimes: + + raise HydrusExceptions.VetoException( 'Not in allowed mimes!' ) + + + self.Import( temp_path, file_import_options, status_hook = status_hook ) finally: diff --git a/hydrus/client/importing/ClientImportFiles.py b/hydrus/client/importing/ClientImportFiles.py index 316c5458..3263a8a1 100644 --- a/hydrus/client/importing/ClientImportFiles.py +++ b/hydrus/client/importing/ClientImportFiles.py @@ -139,21 +139,11 @@ class FileImportJob( object ): HydrusData.ShowText( 'File import job starting work.' 
) - if status_hook is not None: - - status_hook( 'calculating pre-import status' ) - - - self.GeneratePreImportHashAndStatus() + self.GeneratePreImportHashAndStatus( status_hook = status_hook ) if self._pre_import_file_status.ShouldImport( self._file_import_options ): - if status_hook is not None: - - status_hook( 'generating metadata' ) - - - self.GenerateInfo() + self.GenerateInfo( status_hook = status_hook ) try: @@ -178,14 +168,14 @@ class FileImportJob( object ): if status_hook is not None: - status_hook( 'copying file' ) + status_hook( 'copying file into file storage' ) HG.client_controller.client_files_manager.AddFile( hash, mime, self._temp_path, thumbnail_bytes = self._thumbnail_bytes ) if status_hook is not None: - status_hook( 'updating database' ) + status_hook( 'importing to database' ) self._post_import_file_status = HG.client_controller.WriteSynchronous( 'import_file', self ) @@ -210,10 +200,15 @@ class FileImportJob( object ): return self._post_import_file_status - def GeneratePreImportHashAndStatus( self ): + def GeneratePreImportHashAndStatus( self, status_hook = None ): HydrusImageHandling.ConvertToPNGIfBMP( self._temp_path ) + if status_hook is not None: + + status_hook( 'calculating hash' ) + + hash = HydrusFileHandling.GetHashFromPath( self._temp_path ) if HG.file_import_report_mode: @@ -221,6 +216,11 @@ class FileImportJob( object ): HydrusData.ShowText( 'File import job hash: {}'.format( hash.hex() ) ) + if status_hook is not None: + + status_hook( 'checking for file status' ) + + self._pre_import_file_status = HG.client_controller.Read( 'hash_status', 'sha256', hash, prefix = 'file recognised' ) # just in case @@ -234,10 +234,15 @@ class FileImportJob( object ): - def GenerateInfo( self ): + def GenerateInfo( self, status_hook = None ): if self._pre_import_file_status.mime is None: + if status_hook is not None: + + status_hook( 'generating filetype' ) + + mime = HydrusFileHandling.GetMime( self._temp_path ) self._pre_import_file_status.mime = mime @@ -272,6 +277,11 @@ class FileImportJob( object ): + if status_hook is not None: + + status_hook( 'generating file metadata' ) + + self._file_info = HydrusFileHandling.GetFileInfo( self._temp_path, mime ) ( size, mime, width, height, duration, num_frames, has_audio, num_words ) = self._file_info @@ -283,6 +293,11 @@ class FileImportJob( object ): if mime in HC.MIMES_WITH_THUMBNAILS: + if status_hook is not None: + + status_hook( 'generating thumbnail' ) + + if HG.file_import_report_mode: HydrusData.ShowText( 'File import job generating thumbnail' ) @@ -306,6 +321,11 @@ class FileImportJob( object ): if mime in HC.MIMES_WE_CAN_PHASH: + if status_hook is not None: + + status_hook( 'generating similar files metadata' ) + + if HG.file_import_report_mode: HydrusData.ShowText( 'File import job generating phashes' ) @@ -324,6 +344,11 @@ class FileImportJob( object ): HydrusData.ShowText( 'File import job generating other hashes' ) + if status_hook is not None: + + status_hook( 'generating additional hashes' ) + + self._extra_hashes = HydrusFileHandling.GetExtraHashesFromPath( self._temp_path ) self._file_modified_timestamp = HydrusFileHandling.GetFileModifiedTimestamp( self._temp_path ) diff --git a/hydrus/client/media/ClientMedia.py b/hydrus/client/media/ClientMedia.py index 646dbf23..5a9a72fa 100644 --- a/hydrus/client/media/ClientMedia.py +++ b/hydrus/client/media/ClientMedia.py @@ -1710,6 +1710,8 @@ class MediaList( object ): self._sorted_media.sort( sort_key = sort_key, reverse = reverse ) + self._RecalcHashes() + 
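# Editor's note: the ClientMedia.py hunk above adds a _RecalcHashes() call
# after the in-place sort; the changelog ties this to file order not being
# saved correctly in sessions. The sketch below is illustrative only --
# MediaListSketch and its members are hypothetical stand-ins, not hydrus
# names. It shows the failure mode: when a container caches a derived,
# ordered view of its contents, an in-place sort must refresh that cache,
# or anything serialised from it (such as a saved session) keeps the stale
# pre-sort order.

from typing import Callable, List, Tuple

class MediaListSketch( object ):
    
    def __init__( self, media: List[ Tuple[ str, int ] ] ):
        
        # each medium is ( hash, sort_value ), to keep the sketch small
        self._sorted_media = list( media )
        
        # the derived cache a session save would read
        self._hashes = [ h for ( h, v ) in self._sorted_media ]
        
    
    def _RecalcHashes( self ):
        
        self._hashes = [ h for ( h, v ) in self._sorted_media ]
        
    
    def Sort( self, sort_key: Callable, reverse: bool = False ):
        
        self._sorted_media.sort( key = sort_key, reverse = reverse )
        
        # without this, GetHashesForSession() returns the stale order
        self._RecalcHashes()
        
    
    def GetHashesForSession( self ) -> List[ str ]:
        
        return list( self._hashes )
        
    

media_list = MediaListSketch( [ ( 'hash_b', 2 ), ( 'hash_a', 1 ) ] )

media_list.Sort( sort_key = lambda m: m[ 1 ] )

assert media_list.GetHashesForSession() == [ 'hash_a', 'hash_b' ]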
FILE_FILTER_ALL = 0 FILE_FILTER_NOT_SELECTED = 1 diff --git a/hydrus/client/networking/ClientLocalServerResources.py b/hydrus/client/networking/ClientLocalServerResources.py index 91bfda5d..5f0d75cb 100644 --- a/hydrus/client/networking/ClientLocalServerResources.py +++ b/hydrus/client/networking/ClientLocalServerResources.py @@ -39,7 +39,7 @@ LOCAL_BOORU_JSON_BYTE_LIST_PARAMS = set() CLIENT_API_INT_PARAMS = { 'file_id', 'file_sort_type' } CLIENT_API_BYTE_PARAMS = { 'hash', 'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key', 'tag_service_key', 'file_service_key' } -CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain' } +CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain', 'file_service_name', 'tag_service_name' } CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'detailed_url_information', 'simple', 'file_sort_asc' } CLIENT_API_JSON_BYTE_LIST_PARAMS = { 'hashes' } @@ -1694,7 +1694,7 @@ class HydrusResourceClientAPIRestrictedGetFilesSearchFiles( HydrusResourceClient sort_order = CC.SORT_ASC if file_sort_asc else CC.SORT_DESC # newest first - sort_by = ClientMedia.MediaSort( sort_type = ( 'system', file_sort_type ), sort_order = CC.SORT_DESC ) + sort_by = ClientMedia.MediaSort( sort_type = ( 'system', file_sort_type ), sort_order = sort_order ) hash_ids = HG.client_controller.Read( 'file_query_ids', file_search_context, sort_by = sort_by, apply_implicit_limit = False ) diff --git a/hydrus/core/HydrusAudioHandling.py b/hydrus/core/HydrusAudioHandling.py index 8e7d4a1a..f5118155 100644 --- a/hydrus/core/HydrusAudioHandling.py +++ b/hydrus/core/HydrusAudioHandling.py @@ -2,7 +2,6 @@ import re import subprocess from hydrus.core import HydrusData -from hydrus.core import HydrusVideoHandling def ParseFFMPEGAudio( lines ): @@ -41,71 +40,3 @@ def ParseFFMPEGAudio( lines ): return ( audio_found, audio_format ) -def VideoHasAudio( path ): - - info_lines = HydrusVideoHandling.GetFFMPEGInfoLines( path ) - - ( audio_found, audio_format ) = ParseFFMPEGAudio( info_lines ) - - if not audio_found: - - return False - - - # just because video metadata has an audio stream doesn't mean it has audio. some vids have silent audio streams lmao - # so, let's read it as PCM and see if there is any noise - # this obviously only works for single audio stream vids, we'll adapt this if someone discovers a multi-stream mkv with a silent channel that doesn't work here - - cmd = [ HydrusVideoHandling.FFMPEG_PATH ] - - # this is perhaps not sensible for eventual playback and I should rather go for wav file-like and feed into python 'wave' in order to maintain stereo/mono and so on and have easy chunk-reading - - cmd.extend( [ '-i', path, - '-loglevel', 'quiet', - '-f', 's16le', - '-' ] ) - - - sbp_kwargs = HydrusData.GetSubprocessKWArgs() - - HydrusData.CheckProgramIsNotShuttingDown() - - try: - - process = subprocess.Popen( cmd, bufsize = 65536, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) - - except FileNotFoundError as e: - - HydrusData.ShowText( 'Cannot render audio--FFMPEG not found!' 
) - - raise - - - # silent PCM data is just 00 bytes - # every now and then, you'll get a couple ffs for some reason, but this is not legit audio data - - try: - - chunk_of_pcm_data = process.stdout.read( 65536 ) - - while len( chunk_of_pcm_data ) > 0: - - # iterating over bytes gives you ints, recall - if True in ( b != 0 and b != 255 for b in chunk_of_pcm_data ): - - return True - - - chunk_of_pcm_data = process.stdout.read( 65536 ) - - - return False - - finally: - - process.terminate() - - process.stdout.close() - process.stderr.close() - - diff --git a/hydrus/core/HydrusConstants.py b/hydrus/core/HydrusConstants.py index df0693e2..31625f44 100644 --- a/hydrus/core/HydrusConstants.py +++ b/hydrus/core/HydrusConstants.py @@ -81,7 +81,7 @@ options = {} # Misc NETWORK_VERSION = 20 -SOFTWARE_VERSION = 450 +SOFTWARE_VERSION = 451 CLIENT_API_VERSION = 19 SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 ) @@ -90,7 +90,7 @@ HYDRUS_KEY_LENGTH = 32 READ_BLOCK_SIZE = 256 * 1024 -lifetimes = [ ( 'one month', 31 * 86400 ), ( 'three months', 3 * 31 * 86400 ), ( 'six months', 6 * 31 * 86400 ), ( 'one year', 12 * 31 * 86400 ), ( 'two years', 24 * 31 * 86400 ), ( 'five years', 60 * 31 * 86400 ), ( 'does not expire', None ) ] +lifetimes = [ ( 'one month', 30 * 86400 ), ( 'three months', 3 * 30 * 86400 ), ( 'six months', 6 * 30 * 86400 ), ( 'one year', 365 * 86400 ), ( 'two years', 2 * 365 * 86400 ), ( 'five years', 5 * 365 * 86400 ), ( 'does not expire', None ) ] # some typing stuff @@ -176,8 +176,6 @@ content_type_string_lookup[ CONTENT_TYPE_NOTES ] = 'notes' content_type_string_lookup[ CONTENT_TYPE_FILE_VIEWING_STATS ] = 'file viewing stats' content_type_string_lookup[ CONTENT_TYPE_DEFINITIONS ] = 'definitions' -REPOSITORY_CONTENT_TYPES = [ CONTENT_TYPE_FILES, CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS ] - CONTENT_UPDATE_ADD = 0 CONTENT_UPDATE_DELETE = 1 CONTENT_UPDATE_PEND = 2 @@ -441,9 +439,16 @@ ALL_FILE_SERVICES = FILE_SERVICES + ( COMBINED_FILE, ) SERVICES_WITH_THUMBNAILS = [ FILE_REPOSITORY, LOCAL_FILE_DOMAIN ] -REPOSITORY_CONTENT_TYPES = { +SERVICE_TYPES_TO_CONTENT_TYPES = { FILE_REPOSITORY : ( CONTENT_TYPE_FILES, ), - TAG_REPOSITORY : ( CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS ) + LOCAL_FILE_DOMAIN : ( CONTENT_TYPE_FILES, ), + LOCAL_FILE_TRASH_DOMAIN : ( CONTENT_TYPE_FILES, ), + COMBINED_LOCAL_FILE : ( CONTENT_TYPE_FILES, ), + IPFS : ( CONTENT_TYPE_FILES, ), + TAG_REPOSITORY : ( CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS ), + LOCAL_TAG : ( CONTENT_TYPE_MAPPINGS, CONTENT_TYPE_TAG_PARENTS, CONTENT_TYPE_TAG_SIBLINGS ), + LOCAL_RATING_LIKE : ( CONTENT_TYPE_RATINGS, ), + LOCAL_RATING_NUMERICAL : ( CONTENT_TYPE_RATINGS, ) } DELETE_FILES_PETITION = 0 diff --git a/hydrus/core/HydrusController.py b/hydrus/core/HydrusController.py index cca8f4e9..503ee3d6 100644 --- a/hydrus/core/HydrusController.py +++ b/hydrus/core/HydrusController.py @@ -646,6 +646,13 @@ class HydrusController( object ): def PrintQueryPlan( self, query, plan_lines ): + if query in HG.queries_planned: + + return + + + HG.queries_planned.add( query ) + pretty_timestamp = time.strftime( '%Y-%m-%d %H-%M-%S', time.localtime( HG.query_planner_start_time ) ) query_planner_log_filename = '{} query planner - {}.log'.format( self._name, pretty_timestamp ) diff --git a/hydrus/core/HydrusDBBase.py b/hydrus/core/HydrusDBBase.py index 66a63d4b..27eac095 100644 --- a/hydrus/core/HydrusDBBase.py +++ b/hydrus/core/HydrusDBBase.py @@ -148,7 +148,7 @@ class 
DBBase( object ): def _Execute( self, query, *args ) -> sqlite3.Cursor: - if HG.query_planner_mode: + if HG.query_planner_mode and query not in HG.queries_planned: plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), *args ).fetchall() @@ -162,15 +162,18 @@ class DBBase( object ): def _ExecuteMany( self, query, args_iterator ): - if HG.query_planner_mode: + if HG.query_planner_mode and query not in HG.queries_planned: args_iterator = list( args_iterator ) - plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), args_iterator[0] ).fetchall() - - HG.query_planner_query_count += 1 - - HG.client_controller.PrintQueryPlan( query, plan_lines ) + if len( args_iterator ) > 0: + + plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), args_iterator[0] ).fetchall() + + HG.query_planner_query_count += 1 + + HG.client_controller.PrintQueryPlan( query, plan_lines ) + self._c.executemany( query, args_iterator ) diff --git a/hydrus/core/HydrusData.py b/hydrus/core/HydrusData.py index 58a4f20c..f1d538d7 100644 --- a/hydrus/core/HydrusData.py +++ b/hydrus/core/HydrusData.py @@ -285,7 +285,7 @@ def TimeDeltaToPrettyTimeDelta( seconds, show_seconds = True ): HOUR = 60 * MINUTE DAY = 24 * HOUR MONTH = 30 * DAY - YEAR = 12 * MONTH + YEAR = 365 * DAY lines = [] diff --git a/hydrus/core/HydrusFileHandling.py b/hydrus/core/HydrusFileHandling.py index 87e7922a..c35b7851 100644 --- a/hydrus/core/HydrusFileHandling.py +++ b/hydrus/core/HydrusFileHandling.py @@ -193,6 +193,15 @@ def GetFileInfo( path, mime = None, ok_to_look_for_hydrus_updates = False ): num_frames = None num_words = None + if mime in HC.MIMES_THAT_DEFINITELY_HAVE_AUDIO: + + has_audio = True + + else: + + has_audio = False + + if mime in ( HC.IMAGE_JPEG, HC.IMAGE_PNG, HC.IMAGE_GIF, HC.IMAGE_WEBP, HC.IMAGE_TIFF, HC.IMAGE_ICON ): ( ( width, height ), duration, num_frames ) = HydrusImageHandling.GetImageProperties( path, mime ) @@ -203,7 +212,7 @@ def GetFileInfo( path, mime = None, ok_to_look_for_hydrus_updates = False ): elif mime in ( HC.IMAGE_APNG, HC.VIDEO_AVI, HC.VIDEO_FLV, HC.VIDEO_WMV, HC.VIDEO_MOV, HC.VIDEO_MP4, HC.VIDEO_MKV, HC.VIDEO_REALMEDIA, HC.VIDEO_WEBM, HC.VIDEO_MPEG ): - ( ( width, height ), duration, num_frames ) = HydrusVideoHandling.GetFFMPEGVideoProperties( path ) + ( ( width, height ), duration, num_frames, has_audio ) = HydrusVideoHandling.GetFFMPEGVideoProperties( path ) elif mime == HC.APPLICATION_PDF: @@ -222,19 +231,6 @@ def GetFileInfo( path, mime = None, ok_to_look_for_hydrus_updates = False ): duration = int( file_duration_in_s * 1000 ) - if mime in HC.MIMES_THAT_DEFINITELY_HAVE_AUDIO: - - has_audio = True - - elif mime in HC.MIMES_THAT_MAY_HAVE_AUDIO: - - has_audio = HydrusAudioHandling.VideoHasAudio( path ) - - else: - - has_audio = False - - if width is not None and width < 0: width *= -1 diff --git a/hydrus/core/HydrusGlobals.py b/hydrus/core/HydrusGlobals.py index 9a573a65..200f96f4 100644 --- a/hydrus/core/HydrusGlobals.py +++ b/hydrus/core/HydrusGlobals.py @@ -40,6 +40,7 @@ query_planner_mode = False query_planner_start_time = 0 query_planner_query_count = 0 +queries_planned = set() profile_start_time = 0 profile_slow_count = 0 profile_fast_count = 0 diff --git a/hydrus/core/HydrusVideoHandling.py b/hydrus/core/HydrusVideoHandling.py index 46f83bae..ed7f1f0c 100644 --- a/hydrus/core/HydrusVideoHandling.py +++ b/hydrus/core/HydrusVideoHandling.py @@ -292,7 +292,9 @@ def GetFFMPEGVideoProperties( path, force_count_frames_manually = False ): duration_in_ms = int( 
duration * 1000 ) - return ( resolution, duration_in_ms, num_frames ) + has_audio = VideoHasAudio( path, lines_for_first_second ) + + return ( resolution, duration_in_ms, num_frames, has_audio ) def GetMime( path ): @@ -750,6 +752,72 @@ def ParseFFMPEGVideoResolution( lines ): raise HydrusExceptions.DamagedOrUnusualFileException( 'Error parsing resolution!' ) +def VideoHasAudio( path, info_lines ): + + ( audio_found, audio_format ) = HydrusAudioHandling.ParseFFMPEGAudio( info_lines ) + + if not audio_found: + + return False + + + # just because video metadata has an audio stream doesn't mean it has audio. some vids have silent audio streams lmao + # so, let's read it as PCM and see if there is any noise + # this obviously only works for single audio stream vids, we'll adapt this if someone discovers a multi-stream mkv with a silent channel that doesn't work here + + cmd = [ FFMPEG_PATH ] + + # this is perhaps not sensible for eventual playback and I should rather go for wav file-like and feed into python 'wave' in order to maintain stereo/mono and so on and have easy chunk-reading + + cmd.extend( [ '-i', path, + '-loglevel', 'quiet', + '-f', 's16le', + '-' ] ) + + + sbp_kwargs = HydrusData.GetSubprocessKWArgs() + + HydrusData.CheckProgramIsNotShuttingDown() + + try: + + process = subprocess.Popen( cmd, bufsize = 65536, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) + + except FileNotFoundError as e: + + HydrusData.ShowText( 'Cannot render audio--FFMPEG not found!' ) + + raise + + + # silent PCM data is just 00 bytes + # every now and then, you'll get a couple ffs for some reason, but this is not legit audio data + + try: + + chunk_of_pcm_data = process.stdout.read( 65536 ) + + while len( chunk_of_pcm_data ) > 0: + + # iterating over bytes gives you ints, recall + if True in ( b != 0 and b != 255 for b in chunk_of_pcm_data ): + + return True + + + chunk_of_pcm_data = process.stdout.read( 65536 ) + + + return False + + finally: + + process.terminate() + + process.stdout.close() + process.stderr.close() + + # This was built from moviepy's FFMPEG_VideoReader class VideoRendererFFMPEG( object ): diff --git a/hydrus/core/networking/HydrusNetwork.py b/hydrus/core/networking/HydrusNetwork.py index 93631963..8cb1afc8 100644 --- a/hydrus/core/networking/HydrusNetwork.py +++ b/hydrus/core/networking/HydrusNetwork.py @@ -17,7 +17,7 @@ MIN_UPDATE_PERIOD = 600 MAX_UPDATE_PERIOD = 100000 * 100 # three months or so jej MIN_NULLIFICATION_PERIOD = 86400 -MAX_NULLIFICATION_PERIOD = 86400 * 360 +MAX_NULLIFICATION_PERIOD = 86400 * 365 def GenerateDefaultServiceDictionary( service_type ): @@ -1060,7 +1060,7 @@ class AccountType( HydrusSerialisable.SerialisableBase ): if service_type in HC.REPOSITORIES: - for content_type in HC.REPOSITORY_CONTENT_TYPES: + for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ service_type ]: permissions[ content_type ] = HC.PERMISSION_ACTION_MODERATE diff --git a/hydrus/server/networking/ServerServerResources.py b/hydrus/server/networking/ServerServerResources.py index b9b2972c..a0bc6ce8 100644 --- a/hydrus/server/networking/ServerServerResources.py +++ b/hydrus/server/networking/ServerServerResources.py @@ -884,7 +884,7 @@ class HydrusResourceRestrictedNumPetitions( HydrusResourceRestricted ): # further permissions checked in the db - request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.REPOSITORY_CONTENT_TYPES ] ) + 
request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service.GetServiceType() ] ] ) def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ): @@ -1129,7 +1129,7 @@ class HydrusResourceRestrictedUpdate( HydrusResourceRestricted ): # further permissions checked in the db - request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_PETITION ) for content_type in HC.REPOSITORY_CONTENT_TYPES ] ) + request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_PETITION ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service.GetServiceType() ] ] ) @@ -1170,7 +1170,7 @@ class HydrusResourceRestrictedImmediateUpdate( HydrusResourceRestricted ): def _checkAccountPermissions( self, request: HydrusServerRequest.HydrusRequest ): - request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.REPOSITORY_CONTENT_TYPES ] ) + request.hydrus_account.CheckAtLeastOnePermission( [ ( content_type, HC.PERMISSION_ACTION_MODERATE ) for content_type in HC.SERVICE_TYPES_TO_CONTENT_TYPES[ self._service.GetServiceType() ] ] ) def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ): diff --git a/hydrus/test/TestClientAPI.py b/hydrus/test/TestClientAPI.py index 1f5f0e67..a6c33f7c 100644 --- a/hydrus/test/TestClientAPI.py +++ b/hydrus/test/TestClientAPI.py @@ -24,6 +24,7 @@ from hydrus.client import ClientManagers from hydrus.client import ClientSearch from hydrus.client import ClientServices from hydrus.client.importing import ClientImportFiles +from hydrus.client.media import ClientMedia from hydrus.client.media import ClientMediaManagers from hydrus.client.media import ClientMediaResult from hydrus.client.metadata import ClientTags @@ -1809,6 +1810,8 @@ class TestClientAPI( unittest.TestCase ): # search files + HG.test_controller.ClearReads( 'file_query_ids' ) + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) @@ -1833,10 +1836,31 @@ class TestClientAPI( unittest.TestCase ): self.assertEqual( d, expected_answer ) + [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' ) + + ( file_search_context, ) = args + + self.assertEqual( file_search_context.GetFileServiceKey(), CC.LOCAL_FILE_SERVICE_KEY ) + self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY ) + self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } ) + + self.assertIn( 'sort_by', kwargs ) + + sort_by = kwargs[ 'sort_by' ] + + self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_IMPORT_TIME ) ) + self.assertEqual( sort_by.sort_order, CC.SORT_DESC ) + + self.assertIn( 'apply_implicit_limit', kwargs ) + + self.assertEqual( kwargs[ 'apply_implicit_limit' ], False ) + # sort # this just tests if it parses, we don't have a full test for read params yet + HG.test_controller.ClearReads( 'file_query_ids' ) + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) @@ -1855,9 +1879,28 @@ class TestClientAPI( unittest.TestCase ): self.assertEqual( response.status, 200 ) + [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' ) + + ( file_search_context, ) = args + + self.assertEqual( 
diff --git a/hydrus/test/TestClientAPI.py b/hydrus/test/TestClientAPI.py
index 1f5f0e67..a6c33f7c 100644
--- a/hydrus/test/TestClientAPI.py
+++ b/hydrus/test/TestClientAPI.py
@@ -24,6 +24,7 @@ from hydrus.client import ClientManagers
 from hydrus.client import ClientSearch
 from hydrus.client import ClientServices
 from hydrus.client.importing import ClientImportFiles
+from hydrus.client.media import ClientMedia
 from hydrus.client.media import ClientMediaManagers
 from hydrus.client.media import ClientMediaResult
 from hydrus.client.metadata import ClientTags
@@ -1809,6 +1810,8 @@ class TestClientAPI( unittest.TestCase ):
         
         # search files
         
+        HG.test_controller.ClearReads( 'file_query_ids' )
+        
         sample_hash_ids = set( random.sample( hash_ids, 3 ) )
         
         HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
@@ -1833,10 +1836,31 @@ class TestClientAPI( unittest.TestCase ):
         
         self.assertEqual( d, expected_answer )
         
+        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )
+        
+        ( file_search_context, ) = args
+        
+        self.assertEqual( file_search_context.GetFileServiceKey(), CC.LOCAL_FILE_SERVICE_KEY )
+        self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
+        self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )
+        
+        self.assertIn( 'sort_by', kwargs )
+        
+        sort_by = kwargs[ 'sort_by' ]
+        
+        self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_IMPORT_TIME ) )
+        self.assertEqual( sort_by.sort_order, CC.SORT_DESC )
+        
+        self.assertIn( 'apply_implicit_limit', kwargs )
+        
+        self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )
+        
         # sort
         
         # this just tests if it parses, we don't have a full test for read params yet
         
+        HG.test_controller.ClearReads( 'file_query_ids' )
+        
         sample_hash_ids = set( random.sample( hash_ids, 3 ) )
         
         HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) )
@@ -1855,9 +1879,28 @@ class TestClientAPI( unittest.TestCase ):
         
         self.assertEqual( response.status, 200 )
         
+        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )
+        
+        ( file_search_context, ) = args
+        
+        self.assertEqual( file_search_context.GetFileServiceKey(), CC.LOCAL_FILE_SERVICE_KEY )
+        self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
+        self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )
+        
+        self.assertIn( 'sort_by', kwargs )
+        
+        sort_by = kwargs[ 'sort_by' ]
+        
+        self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_FRAMERATE ) )
+        self.assertEqual( sort_by.sort_order, CC.SORT_DESC )
+        
+        self.assertIn( 'apply_implicit_limit', kwargs )
+        
+        self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )
+        
         # sort
-        # this just tests if it parses, we don't have a full test for read params yet
+        
+        HG.test_controller.ClearReads( 'file_query_ids' )
         
         sample_hash_ids = set( random.sample( hash_ids, 3 ) )
@@ -1877,9 +1920,28 @@ class TestClientAPI( unittest.TestCase ):
         
         self.assertEqual( response.status, 200 )
         
+        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )
+        
+        ( file_search_context, ) = args
+        
+        self.assertEqual( file_search_context.GetFileServiceKey(), CC.LOCAL_FILE_SERVICE_KEY )
+        self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
+        self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )
+        
+        self.assertIn( 'sort_by', kwargs )
+        
+        sort_by = kwargs[ 'sort_by' ]
+        
+        self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_FRAMERATE ) )
+        self.assertEqual( sort_by.sort_order, CC.SORT_ASC )
+        
+        self.assertIn( 'apply_implicit_limit', kwargs )
+        
+        self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )
+        
         # file domain
-        # this just tests if it parses, we don't have a full test for read params yet
+        
+        HG.test_controller.ClearReads( 'file_query_ids' )
         
         sample_hash_ids = set( random.sample( hash_ids, 3 ) )
@@ -1904,9 +1966,28 @@ class TestClientAPI( unittest.TestCase ):
         
         self.assertEqual( response.status, 200 )
         
+        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )
+        
+        ( file_search_context, ) = args
+        
+        self.assertEqual( file_search_context.GetFileServiceKey(), CC.TRASH_SERVICE_KEY )
+        self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
+        self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )
+        
+        self.assertIn( 'sort_by', kwargs )
+        
+        sort_by = kwargs[ 'sort_by' ]
+        
+        self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_FRAMERATE ) )
+        self.assertEqual( sort_by.sort_order, CC.SORT_ASC )
+        
+        self.assertIn( 'apply_implicit_limit', kwargs )
+        
+        self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )
+        
         # file and tag domain
-        # this just tests if it parses, we don't have a full test for read params yet
+        
+        HG.test_controller.ClearReads( 'file_query_ids' )
         
         sample_hash_ids = set( random.sample( hash_ids, 3 ) )
@@ -1932,6 +2013,25 @@ class TestClientAPI( unittest.TestCase ):
         
         self.assertEqual( response.status, 200 )
         
+        [ ( args, kwargs ) ] = HG.test_controller.GetRead( 'file_query_ids' )
+        
+        ( file_search_context, ) = args
+        
+        self.assertEqual( file_search_context.GetFileServiceKey(), CC.TRASH_SERVICE_KEY )
+        self.assertEqual( file_search_context.GetTagSearchContext().service_key, CC.COMBINED_TAG_SERVICE_KEY )
+        self.assertEqual( set( file_search_context.GetPredicates() ), { ClientSearch.Predicate( ClientSearch.PREDICATE_TYPE_TAG, tag ) for tag in tags } )
+        
+        self.assertIn( 'sort_by', kwargs )
+        
+        sort_by = kwargs[ 'sort_by' ]
+        
+        self.assertEqual( sort_by.sort_type, ( 'system', CC.SORT_FILES_BY_FRAMERATE ) )
+        self.assertEqual( sort_by.sort_order, CC.SORT_ASC )
+        
+        self.assertIn( 'apply_implicit_limit', kwargs )
+        
+        self.assertEqual( kwargs[ 'apply_implicit_limit' ], False )
+        
         # file and tag domain
         
         # this just tests if it parses, we don't have a full test for read params yet
diff --git a/hydrus/test/TestClientDB.py b/hydrus/test/TestClientDB.py
index 94957bac..b8fb3dda 100644
--- a/hydrus/test/TestClientDB.py
+++ b/hydrus/test/TestClientDB.py
@@ -1515,15 +1515,254 @@ class TestClientDB( unittest.TestCase ):
     
     def test_nums_pending( self ):
         
+        TestClientDB._clear_db()
+        
         result = self._read( 'nums_pending' )
         
         self.assertEqual( result, {} )
         
-        # we can do more testing when I add repo service to this testing framework
+        #
+        
+        services = list( self._read( 'services' ) )
+        
+        tag_sk = HydrusData.GenerateKey()
+        file_sk = HydrusData.GenerateKey()
+        ipfs_sk = HydrusData.GenerateKey()
+        
+        services.append( ClientServices.GenerateService( tag_sk, HC.TAG_REPOSITORY, 'test tag repo' ) )
+        services.append( ClientServices.GenerateService( file_sk, HC.FILE_REPOSITORY, 'test file repo' ) )
+        services.append( ClientServices.GenerateService( ipfs_sk, HC.IPFS, 'test ipfs' ) )
+        
+        self._write( 'update_services', services )
+        
+        #
+        
+        result = self._read( 'nums_pending' )
+        
+        expected_result = {
+            tag_sk: {
+                HC.SERVICE_INFO_NUM_PENDING_MAPPINGS : 0,
+                HC.SERVICE_INFO_NUM_PETITIONED_MAPPINGS : 0,
+                HC.SERVICE_INFO_NUM_PENDING_TAG_SIBLINGS : 0,
+                HC.SERVICE_INFO_NUM_PETITIONED_TAG_SIBLINGS : 0,
+                HC.SERVICE_INFO_NUM_PENDING_TAG_PARENTS : 0,
+                HC.SERVICE_INFO_NUM_PETITIONED_TAG_PARENTS : 0
+            },
+            file_sk: {
+                HC.SERVICE_INFO_NUM_PENDING_FILES: 0,
+                HC.SERVICE_INFO_NUM_PETITIONED_FILES: 0
+            },
+            ipfs_sk: {
+                HC.SERVICE_INFO_NUM_PENDING_FILES: 0,
+                HC.SERVICE_INFO_NUM_PETITIONED_FILES: 0
+            }
+        }
+        
+        self.assertEqual( result, expected_result )
+        
+        #
+        
+        hashes = [ os.urandom( 32 ) for i in range( 64 ) ]
+        
+        tags = [ 'this', 'is', 'a:test' ]
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, HC.CONTENT_UPDATE_PEND, ( tag, hashes ) ) for tag in tags ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        hashes = [ os.urandom( 32 ) for i in range( 64 ) ]
+        
+        tags = [ 'bad tag', 'bad' ]
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, HC.CONTENT_UPDATE_ADD, ( tag, hashes ) ) for tag in tags ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, HC.CONTENT_UPDATE_PETITION, ( tag, hashes ), reason = 'yo' ) for tag in tags ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        pairs = [
+            ( 'sib tag 1a', 'sib tag 1b' ),
+            ( 'sib tag 2a', 'sib tag 2b' ),
+            ( 'sib tag 3a', 'sib tag 3b' ),
+            ( 'sib tag 4a', 'sib tag 4b' )
+        ]
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PEND, pair, reason = 'good sibling m8' ) for pair in pairs ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        pairs = [
+            ( 'samus aran', 'princess peach' ),
+            ( 'lara croft', 'princess peach' )
+        ]
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_ADD, pair ) for pair in pairs ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_PETITION, pair, reason = 'mistake' ) for pair in pairs ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        pairs = [
+            ( 'par tag 1a', 'par tag 1b' ),
+            ( 'par tag 2a', 'par tag 2b' ),
+            ( 'par tag 3a', 'par tag 3b' )
+        ]
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PEND, pair, reason = 'good parent m8' ) for pair in pairs ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        pairs = [
+            ( 'ayanami rei', 'zelda' )
+        ]
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_ADD, pair ) for pair in pairs ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_PETITION, pair, reason = 'mistake' ) for pair in pairs ]
+        
+        service_keys_to_content_updates = { tag_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        hashes = [ os.urandom( 32 ) for i in range( 15 ) ]
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_PEND, hashes ) ]
+        
+        service_keys_to_content_updates = { file_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        hashes = [ os.urandom( 32 ) for i in range( 20 ) ]
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_PEND, hashes ) ]
+        
+        service_keys_to_content_updates = { ipfs_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        test_files = {
+            '5d884d84813beeebd59a35e474fa3e4742d0f2b6679faa7609b245ddbbd05444' : 'muh_jpg.jpg',
+            'cdc67d3b377e6e1397ffa55edc5b50f6bdf4482c7a6102c6f27fa351429d6f49' : 'muh_png.png',
+            '9e7b8b5abc7cb11da32db05671ce926a2a2b701415d1b2cb77a28deea51010c3' : 'muh_apng.png'
+        }
+        
+        file_import_options = HG.client_controller.new_options.GetDefaultFileImportOptions( 'loud' )
+        
+        for ( hash, filename ) in test_files.items():
+            
+            HG.test_controller.SetRead( 'hash_status', ClientImportFiles.FileImportStatus.STATICGetUnknownStatus() )
+            
+            path = os.path.join( HC.STATIC_DIR, 'testing', filename )
+            
+            file_import_job = ClientImportFiles.FileImportJob( path, file_import_options )
+            
+            file_import_job.GeneratePreImportHashAndStatus()
+            
+            file_import_job.GenerateInfo()
+            
+            file_import_status = self._write( 'import_file', file_import_job )
+            
+        
+        hashes = list( [ bytes.fromhex( hh ) for hh in test_files.keys() ] )
+        
+        media_results = self._read( 'media_results', hashes )
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ADD, ( mr.GetFileInfoManager(), 100 ) ) for mr in media_results ]
+        
+        service_keys_to_content_updates = { file_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ADD, ( mr.GetFileInfoManager(), os.urandom( 16 ).hex() ) ) for mr in media_results ]
+        
+        service_keys_to_content_updates = { ipfs_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_PETITION, hashes, reason = 'nope' ) ]
+        
+        service_keys_to_content_updates = { file_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        content_updates = [ HydrusData.ContentUpdate( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_PETITION, hashes ) ]
+        
+        service_keys_to_content_updates = { ipfs_sk : content_updates }
+        
+        self._write( 'content_updates', service_keys_to_content_updates )
+        
+        #
+        
+        result = self._read( 'nums_pending' )
+        
+        expected_result = {
+            tag_sk: {
+                HC.SERVICE_INFO_NUM_PENDING_MAPPINGS : 64 * 3,
+                HC.SERVICE_INFO_NUM_PETITIONED_MAPPINGS : 64 * 2,
+                HC.SERVICE_INFO_NUM_PENDING_TAG_SIBLINGS : 4,
+                HC.SERVICE_INFO_NUM_PETITIONED_TAG_SIBLINGS : 2,
+                HC.SERVICE_INFO_NUM_PENDING_TAG_PARENTS : 3,
+                HC.SERVICE_INFO_NUM_PETITIONED_TAG_PARENTS : 1
+            },
+            file_sk: {
+                HC.SERVICE_INFO_NUM_PENDING_FILES: 15,
+                HC.SERVICE_INFO_NUM_PETITIONED_FILES: 3
+            },
+            ipfs_sk: {
+                HC.SERVICE_INFO_NUM_PENDING_FILES: 20,
+                HC.SERVICE_INFO_NUM_PETITIONED_FILES: 3
+            }
+        }
+        
+        self.assertEqual( result, expected_result )
     
     def test_pending( self ):
         
+        TestClientDB._clear_db()
+        
         service_key = HydrusData.GenerateKey()
         
         services = self._read( 'services' )
@@ -1560,7 +1799,7 @@ class TestClientDB( unittest.TestCase ):
         
         #
         
-        self._write( 'update_services', old_services )
+        TestClientDB._clear_db()
         
     
     def test_pixiv_account( self ):
@@ -1581,6 +1820,8 @@ class TestClientDB( unittest.TestCase ):
     
     def test_services( self ):
         
+        TestClientDB._clear_db()
+        
         result = self._read( 'services', ( HC.LOCAL_FILE_DOMAIN, HC.LOCAL_FILE_TRASH_DOMAIN, HC.COMBINED_LOCAL_FILE, HC.LOCAL_TAG ) )
         
         result_service_keys = { service.GetServiceKey() for service in result }
diff --git a/hydrus/test/TestClientDBTags.py b/hydrus/test/TestClientDBTags.py
index e406fbf2..e48e8a73 100644
--- a/hydrus/test/TestClientDBTags.py
+++ b/hydrus/test/TestClientDBTags.py
@@ -2256,7 +2256,7 @@ class TestClientDBTags( unittest.TestCase ):
         
         if do_regen_sibs:
             
-            self._write( 'regenerate_tag_siblings_cache' )
+            self._write( 'regenerate_tag_siblings_and_parents_cache' )
             
             self._sync_display()
@@ -2485,7 +2485,7 @@ class TestClientDBTags( unittest.TestCase ):
         
         if do_regen_sibs:
             
-            self._write( 'regenerate_tag_siblings_cache' )
+            self._write( 'regenerate_tag_siblings_and_parents_cache' )
             
             self._sync_display()
@@ -2603,7 +2603,7 @@ class TestClientDBTags( unittest.TestCase ):
         
         if do_regen_sibs:
             
-            self._write( 'regenerate_tag_siblings_cache' )
+            self._write( 'regenerate_tag_siblings_and_parents_cache' )
             
             self._sync_display()
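[editor's note: to make the expected_result numbers in test_nums_pending above easier to audit, here is the arithmetic restated from the updates the test writes:]
- pending mappings: 3 tags ('this', 'is', 'a:test') pended over 64 hashes = 64 * 3 = 192
- petitioned mappings: 2 tags ('bad tag', 'bad') added then petitioned over a different 64 hashes = 64 * 2 = 128
- tag siblings: 4 pended pairs; 2 added-then-petitioned pairs (the 'princess peach' pairs)
- tag parents: 3 pended pairs; 1 added-then-petitioned pair ('ayanami rei' -> 'zelda')
- file repo: 15 pended random hashes; 3 petitioned files (the three actually-imported test files)
- ipfs: 20 pended random hashes; 3 petitioned files (the same three, pinned with random multihashes)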
diff --git a/hydrus/test/TestController.py b/hydrus/test/TestController.py
index f4318600..ee16329f 100644
--- a/hydrus/test/TestController.py
+++ b/hydrus/test/TestController.py
@@ -217,15 +217,15 @@ class Controller( object ):
         
         HydrusData.ShowText = show_text
         
-        self._reads = {}
+        self._name_read_responses = {}
         
-        self._reads[ 'local_booru_share_keys' ] = []
-        self._reads[ 'messaging_sessions' ] = []
-        self._reads[ 'options' ] = ClientDefaults.GetClientDefaultOptions()
-        self._reads[ 'file_system_predicates' ] = []
-        self._reads[ 'media_results' ] = []
+        self._name_read_responses[ 'local_booru_share_keys' ] = []
+        self._name_read_responses[ 'messaging_sessions' ] = []
+        self._name_read_responses[ 'options' ] = ClientDefaults.GetClientDefaultOptions()
+        self._name_read_responses[ 'file_system_predicates' ] = []
+        self._name_read_responses[ 'media_results' ] = []
        
-        self._param_reads = {}
+        self._param_read_responses = {}
         
         self.example_tag_repo_service_key = HydrusData.GenerateKey()
@@ -243,7 +243,7 @@ class Controller( object ):
         services.append( ClientServices.GenerateService( LOCAL_RATING_LIKE_SERVICE_KEY, HC.LOCAL_RATING_LIKE, 'example local rating like service' ) )
         services.append( ClientServices.GenerateService( LOCAL_RATING_NUMERICAL_SERVICE_KEY, HC.LOCAL_RATING_NUMERICAL, 'example local rating numerical service' ) )
         
-        self._reads[ 'services' ] = services
+        self._name_read_responses[ 'services' ] = services
         
         client_files_locations = {}
@@ -255,14 +255,15 @@ class Controller( object ):
         
-        self._reads[ 'client_files_locations' ] = client_files_locations
+        self._name_read_responses[ 'client_files_locations' ] = client_files_locations
         
-        self._reads[ 'sessions' ] = []
-        self._reads[ 'tag_parents' ] = {}
-        self._reads[ 'tag_siblings_all_ideals' ] = {}
-        self._reads[ 'inbox_hashes' ] = set()
+        self._name_read_responses[ 'sessions' ] = []
+        self._name_read_responses[ 'tag_parents' ] = {}
+        self._name_read_responses[ 'tag_siblings_all_ideals' ] = {}
+        self._name_read_responses[ 'inbox_hashes' ] = set()
         
-        self._writes = collections.defaultdict( list )
+        self._read_call_args = collections.defaultdict( list )
+        self._write_call_args = collections.defaultdict( list )
         
         self._managers = {}
@@ -487,6 +488,14 @@ class Controller( object ):
         return job
         
    
+    def ClearReads( self, name ):
+        
+        if name in self._read_call_args:
+            
+            del self._read_call_args[ name ]
+            
+        
+    
     def ClearTestDB( self ):
         
         self._test_db = None
@@ -494,9 +503,9 @@ class Controller( object ):
     
     def ClearWrites( self, name ):
         
-        if name in self._writes:
+        if name in self._write_call_args:
            
-            del self._writes[ name ]
+            del self._write_call_args[ name ]
            
        
@@ -584,11 +593,20 @@ class Controller( object ):
         
         return {}
         
    
+    def GetRead( self, name ):
+        
+        read = self._read_call_args[ name ]
+        
+        del self._read_call_args[ name ]
+        
+        return read
+        
+    
     def GetWrite( self, name ):
         
-        write = self._writes[ name ]
+        write = self._write_call_args[ name ]
         
-        del self._writes[ name ]
+        del self._write_call_args[ name ]
         
         return write
@@ -656,6 +674,8 @@ class Controller( object ):
     
     def Read( self, name, *args, **kwargs ):
         
+        self._read_call_args[ name ].append( ( args, kwargs ) )
+        
         if self._test_db is not None:
             
             return self._test_db.Read( name, *args, **kwargs )
@@ -663,9 +683,9 @@ class Controller( object ):
         
         try:
             
-            if ( name, args ) in self._param_reads:
+            if ( name, args ) in self._param_read_responses:
                
-                return self._param_reads[ ( name, args ) ]
+                return self._param_read_responses[ ( name, args ) ]
            
         except:
@@ -673,7 +693,7 @@ class Controller( object ):
             
             pass
             
        
-        return self._reads[ name ]
+        return self._name_read_responses[ name ]
        
    
     def RegisterUIUpdateWindow( self, window ):
@@ -871,12 +891,12 @@ class Controller( object ):
     
     def SetParamRead( self, name, args, value ):
         
-        self._param_reads[ ( name, args ) ] = value
+        self._param_read_responses[ ( name, args ) ] = value
        
    
     def SetRead( self, name, value ):
         
-        self._reads[ name ] = value
+        self._name_read_responses[ name ] = value
        
    
     def SetStatusBarDirty( self ):
@@ -928,12 +948,12 @@ class Controller( object ):
             
             return self._test_db.Write( name, *args, **kwargs )
            
        
-        self._writes[ name ].append( ( args, kwargs ) )
+        self._write_call_args[ name ].append( ( args, kwargs ) )
        
    
     def WriteSynchronous( self, name, *args, **kwargs ):
         
-        self._writes[ name ].append( ( args, kwargs ) )
+        self._write_call_args[ name ].append( ( args, kwargs ) )
         
         if name == 'import_file':
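[editor's note: this TestController change is what powers the new GetRead assertions in TestClientAPI above--the stub controller now records the ( args, kwargs ) of every Read call by name, separately from the canned responses it serves up. A rough standalone sketch of the pattern, with hypothetical names:]

import collections

class RecordingStub( object ):
    
    def __init__( self ):
        
        self._canned_responses = {} # name -> value to return
        self._call_args = collections.defaultdict( list ) # name -> [ ( args, kwargs ), ... ]
        
    
    def SetRead( self, name, value ):
        
        self._canned_responses[ name ] = value
        
    
    def Read( self, name, *args, **kwargs ):
        
        # remember exactly how we were called so the test can assert on it later
        self._call_args[ name ].append( ( args, kwargs ) )
        
        return self._canned_responses[ name ]
        
    
    def GetRead( self, name ):
        
        # pop the recorded calls--this also guarantees a stale record cannot leak into the next check
        calls = self._call_args.pop( name, [] )
        
        return calls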
diff --git a/static/build_files/windows/InnoSetup.iss b/static/build_files/windows/InnoSetup.iss
index ff9c0296..f8309e37 100644
--- a/static/build_files/windows/InnoSetup.iss
+++ b/static/build_files/windows/InnoSetup.iss
@@ -1,7 +1,8 @@
 [Icons]
 Name: {group}\hydrus client; Filename: {app}\client.exe; WorkingDir: {app}; Tasks: programgroupicons
 Name: {group}\hydrus server; Filename: {app}\server.exe; WorkingDir: {app}; Tasks: programgroupicons
-Name: {group}\help; Filename: {app}\help\index.html; WorkingDir: {app}; Tasks: programgroupicons
+;Taking this out to stop anti-virus testbeds pursuing it and launching Edge and detecting Edge update calls as suspicious DNS lmao
+;Name: {group}\help; Filename: {app}\help\index.html; WorkingDir: {app}; Tasks: programgroupicons
 Name: {group}\uninstall hydrus network; Filename: {uninstallexe}; WorkingDir: {app}; Tasks: programgroupicons; IconFilename: {app}\static\cross.ico
 Name: {userdesktop}\hydrus client; Filename: {app}\client.exe; WorkingDir: {app}; Tasks: desktopicons
 Name: {userdesktop}\hydrus server; Filename: {app}\server.exe; WorkingDir: {app}; Tasks: desktopicons
@@ -25,17 +26,18 @@ UninstallDisplayIcon={app}\static\hydrus.ico
 Name: desktopicons; Description: Create desktop icons; Flags: unchecked; Components: install
 Name: programgroupicons; Description: Create program group icons; Components: install
 [Messages]
-SelectDirBrowseLabel=To continue, click Next. If you would like to select a different folder, click Browse. Databases will be created beneath the install dir, so make sure the hard drive has enough spare space for your purposes and your user has permission to write there! If you install to program files, you'll need to run the programs as administrator!
+SelectDirBrowseLabel=To continue, click Next. If you would like to select a different folder, click Browse. By default, databases will be created beneath the install dir, so make sure the hard drive has enough spare space for your purposes and your user has permission to write there! If you install to a protected location like 'Program Files', the database will be created in your User Directory.
 [Components]
 Name: install; Description: Install; Types: install; Flags: fixed
 [Types]
 Name: install; Description: Install
 Name: extract; Description: Extract only
 [Run]
-Filename: {app}\help\index.html; Description: Open help/getting started guide (highly recommended for new users); Flags: postinstall unchecked shellexec
+;Taking this out to stop anti-virus testbeds pursuing it and launching Edge and detecting Edge update calls as suspicious DNS lmao
+;Filename: {app}\help\index.html; Description: Open help/getting started guide (highly recommended for new users); Flags: postinstall unchecked shellexec
 Filename: {app}\client.exe; Description: Open the client; Flags: postinstall nowait unchecked
 [Files]
-Source: dist\Hydrus Network\* ; DestDir: {app}; Flags: ignoreversion recursesubdirs createallsubdirs
+Source: dist\Hydrus Network\*; DestDir: {app}; Flags: ignoreversion recursesubdirs createallsubdirs
 [InstallDelete]
 Name: {app}\Crypto; Type: filesandordirs; Components: install
 Name: {app}\tcl; Type: filesandordirs; Components: install
@@ -66,4 +68,4 @@ Name: {app}\_tkinter.pyd; Type: files; Components: install
 Name: {app}\_yaml.cp36-win_amd64.pyd; Type: files; Components: install
 Name: {app}\_yaml.cp37-win_amd64.pyd; Type: files; Components: install
 Name: {app}\_cffi_backend.cp36-win_amd64.pyd; Type: files; Components: install
-Name: {app}\_distutils_findvs.pyd; Type: files; Components: install
\ No newline at end of file
+Name: {app}\_distutils_findvs.pyd; Type: files; Components: install
-
+
More system predicate types and input formats will be available in future. Please test out the system predicates you want to send. Reverse engineering system predicate data from text is obviously tricky. If a system predicate does not parse, you'll get a 400 response.
The file and tag services are for search domain selection, just like clicking the buttons in the client. They are optional--default is 'my files' and 'all known tags', and you can use either key or name as in GET /get_services, whichever is easiest for your situation.
-file_sort_type is an integer according to this enum (default is import time):
+file_sort_asc is 'true' for ascending, and 'false' for descending. The default is descending.
+file_sort_type is by default import time. It is an integer according to the following enum; I have written the semantic (asc/desc) meaning after each type:
-
-
file_sort_asc is 'true' for ascending, and 'false' for descending. The default is descending. What ascending or descending means in a context can be complicated (e.g. for ratio), so you might want to play around with it or double-check the UI in the client to figure it out.
Response description: The full list of numerical file ids that match the search.
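
[editor's note: a worked example of the parameters described above may help. This is an illustrative sketch, not copied from the help: the port is the Client API default, the access key is a placeholder, and the file_sort_type value of 2 for import time is an assumption--check the enum list above for the real integers.]

import json
import urllib.parse
import urllib.request

# search for blue_eyes AND blonde_hair, sorted by import time, newest first
params = urllib.parse.urlencode( {
    'tags' : json.dumps( [ 'blue_eyes', 'blonde_hair' ] ), # the usual JSON-encoded tag list
    'file_sort_type' : 2, # assumed to mean import time--verify against the enum above
    'file_sort_asc' : 'false' # newest first
} )

request = urllib.request.Request(
    'http://127.0.0.1:45869/get_files/search_files?' + params,
    headers = { 'Hydrus-Client-API-Access-Key' : '0123456789abcdef' * 4 } # placeholder key
)

with urllib.request.urlopen( request ) as response:
    
    result = json.loads( response.read() )
    

# a successful response is JSON with the matching ids, e.g. { 'file_ids' : [ 125462, 4852415, 123 ] }
print( result[ 'file_ids' ] )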