diff --git a/help/Fixing_Hydrus_Random_Crashes_Under_Linux.md b/help/Fixing_Hydrus_Random_Crashes_Under_Linux.md index 45876f07..11d61349 100644 --- a/help/Fixing_Hydrus_Random_Crashes_Under_Linux.md +++ b/help/Fixing_Hydrus_Random_Crashes_Under_Linux.md @@ -1,14 +1,14 @@ -### Virtual Memory Under Linux +# Virtual Memory Under Linux -# Why does hydrus keep crashing under Linux when it has lots of virtual memory? +## Why does hydrus keep crashing under Linux when it has lots of virtual memory? -## Symptoms +### Symptoms - Hydrus crashes without a crash log - Standard error reads `Killed` - System logs say OOMKiller - Programs appear to have very high virtual memory utilization despite low real memory. -## tl;dr :: The fix +### tl;dr :: The fix Add the following line to the end of `/etc/sysctl.conf`. You will need admin, so use @@ -40,7 +40,7 @@ You may add as many swapfiles as you like, and should add a new swapfile before Reboot for all changes to take effect, or use `sysctl` to set `vm` variables. -## Details +### Details Linux's memory allocator is lazy and does not perform opportunistic reclaim. This means that the system will continue to give your process memory from the real and virtual memory pool (swap) until there is none left. Linux will only clean up if the available total real and virtual memory falls below the **watermark** defined in the system control configuration file `/etc/sysctl.conf`. @@ -54,21 +54,17 @@ If `vm.min_free_kbytes` is less than the amount requested and there is no virtu Increase the `vm.min_free_kbytes` value to prevent this scenario. -### The OOM Killer +#### The OOM Killer The OOM killer decides which program to kill to reclaim memory. Since hydrus loves memory, it is usually picked first, even if another program asking for memory caused the OOM condition. Setting the minimum free kilobytes higher avoids running the OOM killer at all, which is always preferable and almost always preventable.
-### Memory Overcommmit +#### Memory Overcommit We mentioned that Linux will keep giving out memory, but actually it is possible for Linux to launch the OOM killer if it just feels like our program is asking for too much memory too quickly. Since hydrus is a heavyweight scientific processing package, we need to turn this feature off. To turn it off, change the value of `vm.overcommit_memory`, which defaults to `0`. Setting `vm.overcommit_memory=1` prevents the OS from using a heuristic; it will simply always give memory to anyone who asks for it. -### What about swappiness? +#### What about swappiness? Swappiness is a setting you might have seen, but it only determines Linux's willingness to spend a little time moving memory you haven't touched in a while out of real memory and into virtual memory. It will not prevent the OOM condition; it only determines how much time to spend moving things into swap. - - -### Virtual Memory Under Linux 2: The rememoryning - # Why does my Linux system stutter or become unresponsive when hydrus has been running a while? You are running out of pages because Linux releases I/O buffer pages only when a file is closed. The OS is waiting for you to hit the watermark (as described in "why is hydrus crashing") to start freeing pages, which causes the chug. When content is written from memory to disk, the page is retained, so if you reread that part of the disk, the OS does not need to touch the disk; it just pulls it from the much faster memory. This is really good for hydrus, which does not close its database files and so eats up pages over time, but it is bad for the responsiveness of other apps, and it will cause hydrus to hold pages after a lengthy operation in anticipation of needing them again, even when it is thereafter idle. You need to set `vm.dirtytime_expire_seconds` to a lower value.
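All of the `vm.*` knobs discussed in this help file are plain `key=value` lines in `/etc/sysctl.conf` (applied at boot, or on demand with `sysctl -p`). As a minimal sketch of that format — a hypothetical helper, not part of hydrus or the kernel tooling — a parser looks like this:

```python
def parse_sysctl(text):
    """Parse sysctl.conf-style text into a dict, ignoring comments and blank lines."""
    settings = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments
        if not line:
            continue
        key, sep, value = line.partition('=')
        if sep:
            settings[key.strip()] = value.strip()
    return settings

conf = parse_sysctl("""
# virtual memory
vm.overcommit_memory=1
vm.watermark_scale_factor=70
""")
assert conf['vm.overcommit_memory'] == '1'
```

The real authority on the file is `sysctl -p /etc/sysctl.conf`, which applies it without a reboot; this sketch only illustrates the shape of the settings the sections above and below recommend.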
@@ -88,8 +84,6 @@ https://www.kernel.org/doc/Documentation/sysctl/vm.txt -### Virtual Memory Under Linux 3: The return of the memory - # Why does everything become clunky for a bit if I have tuned all of the above settings? The kernel launches a process called `kswapd` to swap and reclaim memory pages; its behaviour is governed by the following two values @@ -109,6 +103,7 @@ i.e. with 32GiB (real and virt) of memory, it will try to keep at least 0.224 GiB An example /etc/sysctl.conf section for virtual memory settings. +```ini ######## # virtual memory ######## @@ -132,4 +127,5 @@ vm.watermark_scale_factor=70 #Have the kernel prefer to reclaim I/O pages at 110% of the rate at which it frees other pages. #Don't set this value much over 100 or the kernel will spend all its time reclaiming I/O pages -vm.vfs_cache_pressure=110 \ No newline at end of file +vm.vfs_cache_pressure=110 +``` diff --git a/help/changelog.html b/help/changelog.html index f4cb634a..9067dd14 100755 --- a/help/changelog.html +++ b/help/changelog.html @@ -8,6 +8,44 @@

changelog

File ids are internal and specific to an individual client. For a client, a file with hash H always has the same file id N, but two clients will have different ideas about which N goes with which H. They are a bit faster than hashes to retrieve and search with en masse, which is why they are exposed here.

-

The search will be performed on the 'local files' file domain and 'all known tags' tag domain. At current, they will be sorted in import time order, newest to oldest (if you would like to paginate them before fetching metadata), but sort options will expand in future.

-

This search does not apply the implicit limit that most clients set to all searches (usually 10,000), so if you do system:everything on a client with millions of files, expect to get boshed. Even with a system:limit included, large queries may take several seconds to respond.

+

This search does not apply the implicit limit that most clients set to all searches (usually 10,000), so if you do system:everything on a client with millions of files, expect to get boshed. Even with a system:limit included, complicated queries with large result sets may take several seconds to respond. Just like the client itself.
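The changelog entries above describe the client API's file search. A sketch of how a caller might build such a request, assuming the `/get_files/search_files` endpoint name and the `Hydrus-Client-API-Access-Key` parameter from the client API documentation (verify both against your client's version before relying on them):

```python
import json
from urllib.parse import urlencode

def search_files_url(base_url, tags, access_key):
    # tags is a list like ['system:inbox']; the API takes it as a JSON-encoded string.
    # The access key may alternatively be sent as a request header.
    params = {
        'tags': json.dumps(tags),
        'Hydrus-Client-API-Access-Key': access_key,
    }
    return '{}/get_files/search_files?{}'.format(base_url, urlencode(params))

url = search_files_url('http://127.0.0.1:45869', ['system:everything'], 'f' * 64)
```

The response carries a list of file ids; as the text above explains, those ids are internal and specific to the individual client, so they should be fed straight back to that same client's metadata endpoints rather than stored or shared.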

diff --git a/hydrus/client/ClientImageHandling.py b/hydrus/client/ClientImageHandling.py index a0fcc89f..23c0d7a0 100644 --- a/hydrus/client/ClientImageHandling.py +++ b/hydrus/client/ClientImageHandling.py @@ -49,9 +49,13 @@ def GenerateShapePerceptualHashes( path, mime ): if depth == 4: # doing this on 10000x10000 pngs eats ram like mad - target_resolution = HydrusImageHandling.GetThumbnailResolution( ( x, y ), ( 1024, 1024 ) ) + # we don't want to do GetThumbnailResolution as for extremely wide or tall images, we'll then scale below 32 pixels for one dimension, losing information! + # however, it does not matter if we stretch the image a bit, since we'll be coercing 32x32 in a minute - numpy_image = HydrusImageHandling.ResizeNumPyImage( numpy_image, target_resolution ) + new_x = min( 256, x ) + new_y = min( 256, y ) + + numpy_image = cv2.resize( numpy_image, ( new_x, new_y ), interpolation = cv2.INTER_AREA ) ( y, x, depth ) = numpy_image.shape @@ -59,22 +63,30 @@ def GenerateShapePerceptualHashes( path, mime ): numpy_alpha = numpy_image[ :, :, 3 ] - numpy_alpha_float = numpy_alpha / 255.0 + numpy_image_rgb = numpy_image[ :, :, :3 ] - numpy_image_bgr = numpy_image[ :, :, :3 ] - - numpy_image_gray_bare = cv2.cvtColor( numpy_image_bgr, cv2.COLOR_RGB2GRAY ) + numpy_image_gray_bare = cv2.cvtColor( numpy_image_rgb, cv2.COLOR_RGB2GRAY ) # create a white greyscale canvas - white = numpy.ones( ( y, x ) ) * 255.0 + white = numpy.full( ( y, x ), 255.0 ) - # paste the grayscale image onto the white canvas using: pixel * alpha + white * ( 1 - alpha ) + # paste the grayscale image onto the white canvas using: pixel * alpha_float + white * ( 1 - alpha_float ) - numpy_image_gray = numpy.uint8( ( numpy_image_gray_bare * numpy_alpha_float ) + ( white * ( numpy.ones( ( y, x ) ) - numpy_alpha_float ) ) ) + # note alpha 255 = opaque, alpha 0 = transparent + + # also, note: + # white * ( 1 - alpha_float ) + # = + # 255 * ( 1 - ( alpha / 255 ) ) + # = + # 255 - alpha + + 
numpy_image_gray = numpy.uint8( ( numpy_image_gray_bare * ( numpy_alpha / 255.0 ) ) + ( white - numpy_alpha ) ) else: + # this single step is nice and fast, so we won't scale to 256x256 beforehand numpy_image_gray = cv2.cvtColor( numpy_image, cv2.COLOR_RGB2GRAY ) diff --git a/hydrus/client/ClientServices.py b/hydrus/client/ClientServices.py index 161d25f9..ab18cf1d 100644 --- a/hydrus/client/ClientServices.py +++ b/hydrus/client/ClientServices.py @@ -995,6 +995,11 @@ class ServiceRestricted( ServiceRemote ): self._service_options = dictionary[ 'service_options' ] + def _SetNewServiceOptions( self, service_options ): + + self._service_options = service_options + + def CanSyncAccount( self, including_external_communication = True ): with self._lock: @@ -1269,7 +1274,7 @@ class ServiceRestricted( ServiceRemote ): with self._lock: - self._next_account_sync = HydrusData.GetNow() + self._next_account_sync = HydrusData.GetNow() - 1 self._SetDirty() @@ -1350,7 +1355,9 @@ class ServiceRestricted( ServiceRemote ): with self._lock: - self._service_options = options_response[ 'service_options' ] + service_options = options_response[ 'service_options' ] + + self._SetNewServiceOptions( service_options ) except HydrusExceptions.SerialisationException: @@ -1543,6 +1550,18 @@ class ServiceRepository( ServiceRestricted ): job_key.SetVariable( 'popup_text_2', popup_message ) + def _SetNewServiceOptions( self, service_options ): + + if 'update_period' in service_options and 'update_period' in self._service_options and service_options[ 'update_period' ] != self._service_options[ 'update_period' ]: + + update_period = service_options[ 'update_period' ] + + self._metadata.CalculateNewNextUpdateDue( update_period ) + + + ServiceRestricted._SetNewServiceOptions( self, service_options ) + + def _SyncDownloadMetadata( self ): with self._lock: diff --git a/hydrus/client/db/ClientDB.py b/hydrus/client/db/ClientDB.py index b2080ec7..d70d60a5 100644 --- a/hydrus/client/db/ClientDB.py +++ 
b/hydrus/client/db/ClientDB.py @@ -587,7 +587,7 @@ class DB( HydrusDB.HydrusDB ): tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES ) - with HydrusDB.TemporaryIntegerTable( self._c, new_hash_ids, 'hash_id' ) as temp_hash_id_table_name: + with self._MakeTemporaryIntegerTable( new_hash_ids, 'hash_id' ) as temp_hash_id_table_name: for tag_service_id in tag_service_ids: @@ -599,7 +599,7 @@ class DB( HydrusDB.HydrusDB ): # push the service updates, done - self._c.executemany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', service_info_updates ) + self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', service_info_updates ) @@ -627,8 +627,8 @@ class DB( HydrusDB.HydrusDB ): self._CacheTagsGenerate( file_service_id, service_id ) - self._c.execute( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( service_id, 0, service_id ) ) - self._c.execute( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( service_id, 0, service_id ) ) + self._Execute( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( service_id, 0, service_id ) ) + self._Execute( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', ( service_id, 0, service_id ) ) self._service_ids_to_sibling_applicable_service_ids = None self._service_ids_to_sibling_interested_service_ids = None @@ -699,10 +699,10 @@ class DB( HydrusDB.HydrusDB ): def _AddTagParents( self, service_id, pairs, defer_cache_update = False ): - self._c.executemany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? 
AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - self._c.executemany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PENDING ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - self._c.executemany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( child_tag_id, parent_tag_id ) in pairs ) ) tag_ids = set( itertools.chain.from_iterable( pairs ) ) @@ -714,10 +714,10 @@ class DB( HydrusDB.HydrusDB ): def _AddTagSiblings( self, service_id, pairs, defer_cache_update = False ): - self._c.executemany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ?;', ( ( service_id, bad_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - self._c.executemany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? 
AND bad_tag_id = ?;', ( ( service_id, bad_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, HC.CONTENT_STATUS_PENDING ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - self._c.executemany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_CURRENT ) for ( bad_tag_id, good_tag_id ) in pairs ) ) tag_ids = set( itertools.chain.from_iterable( pairs ) ) @@ -737,13 +737,13 @@ class DB( HydrusDB.HydrusDB ): update_rows = list( service_ids_to_counts.items() ) - self._c.executemany( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', [ ( count, service_id, HC.SERVICE_INFO_NUM_INBOX ) for ( service_id, count ) in update_rows ] ) + self._ExecuteMany( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', [ ( count, service_id, HC.SERVICE_INFO_NUM_INBOX ) for ( service_id, count ) in update_rows ] ) def _Backup( self, path ): - self._CloseDBCursor() + self._CloseDBConnection() try: @@ -804,7 +804,7 @@ class DB( HydrusDB.HydrusDB ): finally: - self._InitDBCursor() + self._InitDBConnection() job_key.SetVariable( 'popup_text_1', 'backup complete!' 
) @@ -836,7 +836,7 @@ class DB( HydrusDB.HydrusDB ): ac_current_counts = collections.Counter() ac_pending_counts = collections.Counter() - with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name: + with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name: display_tag_ids = self._CacheTagDisplayGetImplies( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, storage_tag_id ) @@ -909,7 +909,7 @@ class DB( HydrusDB.HydrusDB ): ac_counts = collections.Counter() - with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name: + with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name: display_tag_ids = self._CacheTagDisplayGetImplies( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, storage_tag_id ) @@ -941,18 +941,27 @@ class DB( HydrusDB.HydrusDB ): - def _CacheCombinedFilesDisplayMappingsClear( self, tag_service_id ): + def _CacheCombinedFilesDisplayMappingsClear( self, tag_service_id, keep_pending = False ): combined_display_ac_cache_table_name = GenerateCombinedFilesMappingsACCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) - self._c.execute( 'DELETE FROM {};'.format( combined_display_ac_cache_table_name ) ) + if keep_pending: + + self._Execute( 'UPDATE {} SET current_count = 0;'.format( combined_display_ac_cache_table_name ) ) + + self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( combined_display_ac_cache_table_name ) ) + + else: + + self._Execute( 'DELETE FROM {};'.format( combined_display_ac_cache_table_name ) ) + def _CacheCombinedFilesDisplayMappingsDrop( self, tag_service_id ): combined_display_ac_cache_table_name = GenerateCombinedFilesMappingsACCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( combined_display_ac_cache_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( 
combined_display_ac_cache_table_name ) ) def _CacheCombinedFilesDisplayMappingsGenerate( self, tag_service_id, status_hook = None ): @@ -960,7 +969,7 @@ class DB( HydrusDB.HydrusDB ): combined_ac_cache_table_name = GenerateCombinedFilesMappingsACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, tag_service_id ) combined_display_ac_cache_table_name = GenerateCombinedFilesMappingsACCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id ) - self._c.execute( 'CREATE TABLE ' + combined_display_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' ) + self._Execute( 'CREATE TABLE ' + combined_display_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' ) # @@ -969,7 +978,7 @@ class DB( HydrusDB.HydrusDB ): status_hook( 'copying storage counts' ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id, current_count, pending_count ) SELECT tag_id, current_count, pending_count FROM {};'.format( combined_display_ac_cache_table_name, combined_ac_cache_table_name ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id, current_count, pending_count ) SELECT tag_id, current_count, pending_count FROM {};'.format( combined_display_ac_cache_table_name, combined_ac_cache_table_name ) ) self.modules_db_maintenance.AnalyzeTable( combined_display_ac_cache_table_name ) @@ -978,7 +987,7 @@ class DB( HydrusDB.HydrusDB ): ac_counts = collections.Counter() - with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name: + with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name: display_tag_ids = self._CacheTagDisplayGetImplies( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, storage_tag_id ) @@ -1022,7 +1031,7 @@ class DB( HydrusDB.HydrusDB ): status_hook( message ) - all_pending_storage_tag_ids = self._STS( self._c.execute( 'SELECT DISTINCT tag_id FROM {};'.format( pending_mappings_table_name ) ) ) + 
all_pending_storage_tag_ids = self._STS( self._Execute( 'SELECT DISTINCT tag_id FROM {};'.format( pending_mappings_table_name ) ) ) storage_tag_ids_to_display_tag_ids = self._CacheTagDisplayGetTagsToImplies( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, all_pending_storage_tag_ids ) @@ -1031,8 +1040,8 @@ class DB( HydrusDB.HydrusDB ): del all_pending_storage_tag_ids del storage_tag_ids_to_display_tag_ids - self._c.execute( 'UPDATE {} SET pending_count = 0 WHERE pending_count > 0;'.format( ac_cache_table_name ) ) - self._c.execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( ac_cache_table_name ) ) + self._Execute( 'UPDATE {} SET pending_count = 0 WHERE pending_count > 0;'.format( ac_cache_table_name ) ) + self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( ac_cache_table_name ) ) all_pending_display_tag_ids_to_implied_by_storage_tag_ids = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, all_pending_display_tag_ids, tags_are_ideal = True ) @@ -1055,14 +1064,14 @@ class DB( HydrusDB.HydrusDB ): ( storage_tag_id, ) = storage_tag_ids - ( pending_delta, ) = self._c.execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {} WHERE tag_id = ?;'.format( pending_mappings_table_name ), ( storage_tag_id, ) ).fetchone() + ( pending_delta, ) = self._Execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {} WHERE tag_id = ?;'.format( pending_mappings_table_name ), ( storage_tag_id, ) ).fetchone() else: - with HydrusDB.TemporaryIntegerTable( self._c, storage_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( storage_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: # temp tags to mappings merged - ( pending_delta, ) = self._c.execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_tag_ids_table_name, pending_mappings_table_name ) ).fetchone() + ( pending_delta, ) = self._Execute( 'SELECT COUNT( DISTINCT hash_id ) FROM 
{} CROSS JOIN {} USING ( tag_id );'.format( temp_tag_ids_table_name, pending_mappings_table_name ) ).fetchone() @@ -1076,7 +1085,7 @@ class DB( HydrusDB.HydrusDB ): ac_counts = collections.Counter() - with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name: + with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name: display_tag_ids = self._CacheTagDisplayGetImplies( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, storage_tag_id ) @@ -1108,20 +1117,29 @@ class DB( HydrusDB.HydrusDB ): - def _CacheCombinedFilesMappingsClear( self, tag_service_id ): + def _CacheCombinedFilesMappingsClear( self, tag_service_id, keep_pending = False ): combined_ac_cache_table_name = GenerateCombinedFilesMappingsACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, tag_service_id ) - self._c.execute( 'DELETE FROM {};'.format( combined_ac_cache_table_name ) ) + if keep_pending: + + self._Execute( 'UPDATE {} SET current_count = 0;'.format( combined_ac_cache_table_name ) ) + + self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( combined_ac_cache_table_name ) ) + + else: + + self._Execute( 'DELETE FROM {};'.format( combined_ac_cache_table_name ) ) + - self._CacheCombinedFilesDisplayMappingsClear( tag_service_id ) + self._CacheCombinedFilesDisplayMappingsClear( tag_service_id, keep_pending = keep_pending ) def _CacheCombinedFilesMappingsDrop( self, tag_service_id ): combined_ac_cache_table_name = GenerateCombinedFilesMappingsACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, tag_service_id ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( combined_ac_cache_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( combined_ac_cache_table_name ) ) self._CacheCombinedFilesDisplayMappingsDrop( tag_service_id ) @@ -1130,25 +1148,25 @@ class DB( HydrusDB.HydrusDB ): combined_ac_cache_table_name = GenerateCombinedFilesMappingsACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, 
tag_service_id ) - self._c.execute( 'CREATE TABLE ' + combined_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' ) + self._Execute( 'CREATE TABLE ' + combined_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' ) # ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( tag_service_id ) - current_mappings_exist = self._c.execute( 'SELECT 1 FROM ' + current_mappings_table_name + ' LIMIT 1;' ).fetchone() is not None - pending_mappings_exist = self._c.execute( 'SELECT 1 FROM ' + pending_mappings_table_name + ' LIMIT 1;' ).fetchone() is not None + current_mappings_exist = self._Execute( 'SELECT 1 FROM ' + current_mappings_table_name + ' LIMIT 1;' ).fetchone() is not None + pending_mappings_exist = self._Execute( 'SELECT 1 FROM ' + pending_mappings_table_name + ' LIMIT 1;' ).fetchone() is not None if current_mappings_exist or pending_mappings_exist: # not worth iterating through all known tags for an empty service for ( group_of_ids, num_done, num_to_do ) in HydrusDB.ReadLargeIdQueryInSeparateChunks( self._c, 'SELECT tag_id FROM tags;', 10000 ): # must be a cleverer way of doing this - with HydrusDB.TemporaryIntegerTable( self._c, group_of_ids, 'tag_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( group_of_ids, 'tag_id' ) as temp_table_name: current_counter = collections.Counter() # temp tags to mappings - for ( tag_id, count ) in self._c.execute( 'SELECT tag_id, COUNT( * ) FROM {} CROSS JOIN {} USING ( tag_id ) GROUP BY ( tag_id );'.format( temp_table_name, current_mappings_table_name ) ): + for ( tag_id, count ) in self._Execute( 'SELECT tag_id, COUNT( * ) FROM {} CROSS JOIN {} USING ( tag_id ) GROUP BY ( tag_id );'.format( temp_table_name, current_mappings_table_name ) ): current_counter[ tag_id ] = count @@ -1156,7 +1174,7 @@ 
class DB( HydrusDB.HydrusDB ): pending_counter = collections.Counter() # temp tags to mappings - for ( tag_id, count ) in self._c.execute( 'SELECT tag_id, COUNT( * ) FROM {} CROSS JOIN {} USING ( tag_id ) GROUP BY ( tag_id );'.format( temp_table_name, pending_mappings_table_name ) ): + for ( tag_id, count ) in self._Execute( 'SELECT tag_id, COUNT( * ) FROM {} CROSS JOIN {} USING ( tag_id ) GROUP BY ( tag_id );'.format( temp_table_name, pending_mappings_table_name ) ): pending_counter[ tag_id ] = count @@ -1169,7 +1187,7 @@ class DB( HydrusDB.HydrusDB ): if len( inserts ) > 0: - self._c.executemany( 'INSERT OR IGNORE INTO {} ( tag_id, current_count, pending_count ) VALUES ( ?, ?, ? );'.format( combined_ac_cache_table_name ), inserts ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( tag_id, current_count, pending_count ) VALUES ( ?, ?, ? );'.format( combined_ac_cache_table_name ), inserts ) @@ -1192,10 +1210,10 @@ class DB( HydrusDB.HydrusDB ): status_hook( message ) - all_pending_storage_tag_ids = self._STS( self._c.execute( 'SELECT DISTINCT tag_id FROM {};'.format( pending_mappings_table_name ) ) ) + all_pending_storage_tag_ids = self._STS( self._Execute( 'SELECT DISTINCT tag_id FROM {};'.format( pending_mappings_table_name ) ) ) - self._c.execute( 'UPDATE {} SET pending_count = 0 WHERE pending_count > 0;'.format( ac_cache_table_name ) ) - self._c.execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( ac_cache_table_name ) ) + self._Execute( 'UPDATE {} SET pending_count = 0 WHERE pending_count > 0;'.format( ac_cache_table_name ) ) + self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( ac_cache_table_name ) ) ac_cache_changes = [] @@ -1210,7 +1228,7 @@ class DB( HydrusDB.HydrusDB ): status_hook( message ) - ( pending_delta, ) = self._c.execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {} WHERE tag_id = ?;'.format( pending_mappings_table_name ), ( storage_tag_id, ) ).fetchone() + ( pending_delta, ) = 
self._Execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {} WHERE tag_id = ?;'.format( pending_mappings_table_name ), ( storage_tag_id, ) ).fetchone() ac_cache_changes.append( ( storage_tag_id, 0, pending_delta ) ) @@ -1262,7 +1280,7 @@ class DB( HydrusDB.HydrusDB ): specific_ac_cache_table_name = GenerateSpecificACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, self.modules_services.combined_local_file_service_id, tag_service_id ) - service_tag_ids = self._STL( self._c.execute( 'SELECT tag_id FROM {} WHERE current_count > 0;'.format( specific_ac_cache_table_name ) ) ) + service_tag_ids = self._STL( self._Execute( 'SELECT tag_id FROM {} WHERE current_count > 0;'.format( specific_ac_cache_table_name ) ) ) tag_ids.update( service_tag_ids ) @@ -1303,9 +1321,9 @@ class DB( HydrusDB.HydrusDB ): for ( tag_id, current_delta, pending_delta ) in ac_cache_changes: - self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id, current_count, pending_count ) VALUES ( ?, ?, ? );'.format( ac_cache_table_name ), ( tag_id, current_delta, pending_delta ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id, current_count, pending_count ) VALUES ( ?, ?, ? );'.format( ac_cache_table_name ), ( tag_id, current_delta, pending_delta ) ) - if HydrusDB.GetRowCount( self._c ) > 0: + if self._GetRowCount() > 0: new_tag_ids.add( tag_id ) @@ -1318,7 +1336,7 @@ class DB( HydrusDB.HydrusDB ): if len( new_tag_ids ) < len( ac_cache_changes ): - self._c.executemany( 'UPDATE {} SET current_count = current_count + ?, pending_count = pending_count + ? WHERE tag_id = ?;'.format( ac_cache_table_name ), ( ( num_current, num_pending, tag_id ) for ( tag_id, num_current, num_pending ) in ac_cache_changes if tag_id not in new_tag_ids ) ) + self._ExecuteMany( 'UPDATE {} SET current_count = current_count + ?, pending_count = pending_count + ? 
WHERE tag_id = ?;'.format( ac_cache_table_name ), ( ( num_current, num_pending, tag_id ) for ( tag_id, num_current, num_pending ) in ac_cache_changes if tag_id not in new_tag_ids ) ) if tag_display_type == ClientTags.TAG_DISPLAY_STORAGE and len( new_tag_ids ) > 0: @@ -1353,7 +1371,7 @@ class DB( HydrusDB.HydrusDB ): ac_cache_table_name = self._CacheMappingsGetACCacheTableName( tag_display_type, file_service_id, tag_service_id ) - return self._c.execute( 'SELECT tag_id, current_count, pending_count FROM {} WHERE tag_id = ?;'.format( ac_cache_table_name ), ( tag_id, ) ).fetchall() + return self._Execute( 'SELECT tag_id, current_count, pending_count FROM {} WHERE tag_id = ?;'.format( ac_cache_table_name ), ( tag_id, ) ).fetchall() def _CacheMappingsGetAutocompleteCountsForTags( self, tag_display_type, file_service_id, tag_service_id, temp_tag_id_table_name ): @@ -1361,7 +1379,7 @@ class DB( HydrusDB.HydrusDB ): ac_cache_table_name = self._CacheMappingsGetACCacheTableName( tag_display_type, file_service_id, tag_service_id ) # temp tags to counts - return self._c.execute( 'SELECT tag_id, current_count, pending_count FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_tag_id_table_name, ac_cache_table_name ) ).fetchall() + return self._Execute( 'SELECT tag_id, current_count, pending_count FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_tag_id_table_name, ac_cache_table_name ) ).fetchall() def _CacheMappingsReduceACCounts( self, tag_display_type, file_service_id, tag_service_id, ac_cache_changes ): @@ -1375,9 +1393,9 @@ class DB( HydrusDB.HydrusDB ): for ( tag_id, current_delta, pending_delta ) in ac_cache_changes: - self._c.execute( 'DELETE FROM {} WHERE tag_id = ? AND current_count = ? AND pending_count = ?;'.format( ac_cache_table_name ), ( tag_id, current_delta, pending_delta ) ) + self._Execute( 'DELETE FROM {} WHERE tag_id = ? AND current_count = ? 
AND pending_count = ?;'.format( ac_cache_table_name ), ( tag_id, current_delta, pending_delta ) ) - if HydrusDB.GetRowCount( self._c ) > 0: + if self._GetRowCount() > 0: deleted_tag_ids.add( tag_id ) @@ -1390,7 +1408,7 @@ class DB( HydrusDB.HydrusDB ): if len( deleted_tag_ids ) < len( ac_cache_changes ): - self._c.executemany( 'UPDATE {} SET current_count = current_count - ?, pending_count = pending_count - ? WHERE tag_id = ?;'.format( ac_cache_table_name ), ( ( current_delta, pending_delta, tag_id ) for ( tag_id, current_delta, pending_delta ) in ac_cache_changes if tag_id not in deleted_tag_ids ) ) + self._ExecuteMany( 'UPDATE {} SET current_count = current_count - ?, pending_count = pending_count - ? WHERE tag_id = ?;'.format( ac_cache_table_name ), ( ( current_delta, pending_delta, tag_id ) for ( tag_id, current_delta, pending_delta ) in ac_cache_changes if tag_id not in deleted_tag_ids ) ) if tag_display_type == ClientTags.TAG_DISPLAY_STORAGE and len( deleted_tag_ids ) > 0: @@ -1475,12 +1493,12 @@ class DB( HydrusDB.HydrusDB ): ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ) # temp hashes to mappings - storage_current_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, cache_current_mappings_table_name ) ).fetchall() + storage_current_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, cache_current_mappings_table_name ) ).fetchall() storage_current_mapping_ids_dict = HydrusData.BuildKeyToSetDict( storage_current_mapping_ids_raw ) # temp hashes to mappings - storage_pending_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, cache_pending_mappings_table_name ) ).fetchall() + 
storage_pending_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, cache_pending_mappings_table_name ) ).fetchall() storage_pending_mapping_ids_dict = HydrusData.BuildKeyToSetDict( storage_pending_mapping_ids_raw ) @@ -1512,7 +1530,7 @@ class DB( HydrusDB.HydrusDB ): if current_delta > 0: - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_display_current_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, display_tag_id ) for hash_id in display_current_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_display_current_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, display_tag_id ) for hash_id in display_current_hash_ids ) ) # @@ -1523,7 +1541,7 @@ class DB( HydrusDB.HydrusDB ): if pending_delta > 0: - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_display_pending_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, display_tag_id ) for hash_id in display_pending_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_display_pending_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, display_tag_id ) for hash_id in display_pending_hash_ids ) ) # @@ -1572,19 +1590,19 @@ class DB( HydrusDB.HydrusDB ): ( add_tag_id, ) = add_tag_ids - self._c.execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT hash_id, ? FROM {} WHERE tag_id = ?;'.format( cache_display_mappings_table_name, cache_mappings_table_name ), ( tag_id, add_tag_id ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT hash_id, ? 
FROM {} WHERE tag_id = ?;'.format( cache_display_mappings_table_name, cache_mappings_table_name ), ( tag_id, add_tag_id ) ) - statuses_to_count_delta[ status ] = HydrusDB.GetRowCount( self._c ) + statuses_to_count_delta[ status ] = self._GetRowCount() else: - with HydrusDB.TemporaryIntegerTable( self._c, add_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( add_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: # for all new implications, get files with those tags and not existing - self._c.execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT hash_id, ? FROM {} CROSS JOIN {} USING ( tag_id );'.format( cache_display_mappings_table_name, temp_tag_ids_table_name, cache_mappings_table_name ), ( tag_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT hash_id, ? FROM {} CROSS JOIN {} USING ( tag_id );'.format( cache_display_mappings_table_name, temp_tag_ids_table_name, cache_mappings_table_name ), ( tag_id, ) ) - statuses_to_count_delta[ status ] = HydrusDB.GetRowCount( self._c ) + statuses_to_count_delta[ status ] = self._GetRowCount() @@ -1612,9 +1630,9 @@ class DB( HydrusDB.HydrusDB ): for display_tag_id in display_tag_ids: - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_display_current_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, display_tag_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_display_current_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? 
);', ( ( hash_id, display_tag_id ) for hash_id in hash_ids ) ) - num_added = HydrusDB.GetRowCount( self._c ) + num_added = self._GetRowCount() if num_added > 0: @@ -1630,15 +1648,25 @@ class DB( HydrusDB.HydrusDB ): - def _CacheSpecificDisplayMappingsClear( self, file_service_id, tag_service_id ): + def _CacheSpecificDisplayMappingsClear( self, file_service_id, tag_service_id, keep_pending = False ): specific_display_ac_cache_table_name = GenerateSpecificACCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, file_service_id, tag_service_id ) ( cache_display_current_mappings_table_name, cache_display_pending_mappings_table_name ) = GenerateSpecificDisplayMappingsCacheTableNames( file_service_id, tag_service_id ) - self._c.execute( 'DELETE FROM {};'.format( specific_display_ac_cache_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_display_current_mappings_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_display_pending_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_display_current_mappings_table_name ) ) + + if keep_pending: + + self._Execute( 'UPDATE {} SET current_count = 0;'.format( specific_display_ac_cache_table_name ) ) + + self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( specific_display_ac_cache_table_name ) ) + + else: + + self._Execute( 'DELETE FROM {};'.format( cache_display_pending_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( specific_display_ac_cache_table_name ) ) + def _CacheSpecificDisplayMappingsDrop( self, file_service_id, tag_service_id ): @@ -1647,9 +1675,9 @@ class DB( HydrusDB.HydrusDB ): ( cache_display_current_mappings_table_name, cache_display_pending_mappings_table_name ) = GenerateSpecificDisplayMappingsCacheTableNames( file_service_id, tag_service_id ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( specific_display_ac_cache_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( 
cache_display_current_mappings_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_display_pending_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( specific_display_ac_cache_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_display_current_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_display_pending_mappings_table_name ) ) def _CacheSpecificDisplayMappingsDeleteFiles( self, file_service_id, tag_service_id, hash_ids, hash_id_table_name ): @@ -1657,12 +1685,12 @@ class DB( HydrusDB.HydrusDB ): ( cache_display_current_mappings_table_name, cache_display_pending_mappings_table_name ) = GenerateSpecificDisplayMappingsCacheTableNames( file_service_id, tag_service_id ) # temp hashes to mappings - current_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_display_current_mappings_table_name ) ).fetchall() + current_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_display_current_mappings_table_name ) ).fetchall() current_mapping_ids_dict = HydrusData.BuildKeyToSetDict( current_mapping_ids_raw ) # temp hashes to mappings - pending_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_display_pending_mappings_table_name ) ).fetchall() + pending_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_display_pending_mappings_table_name ) ).fetchall() pending_mapping_ids_dict = HydrusData.BuildKeyToSetDict( pending_mapping_ids_raw ) @@ -1686,8 +1714,8 @@ class DB( HydrusDB.HydrusDB ): ac_cache_changes.append( ( tag_id, num_current, num_pending ) ) - self._c.executemany( 'DELETE FROM ' + cache_display_current_mappings_table_name + ' WHERE hash_id = 
?;', ( ( hash_id, ) for hash_id in hash_ids ) ) - self._c.executemany( 'DELETE FROM ' + cache_display_pending_mappings_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_display_current_mappings_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_display_pending_mappings_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) if len( ac_cache_changes ) > 0: @@ -1729,9 +1757,9 @@ class DB( HydrusDB.HydrusDB ): # ultimately here, we are doing "delete all display mappings with hash_ids that have a storage mapping for a removee tag and no storage mappings for a keep tag # in order to reduce overhead, we go full meme and do a bunch of different situations - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_removee_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_removee_tag_ids_table_name: - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_keep_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_keep_tag_ids_table_name: if len( removee_tag_ids ) == 1: @@ -1741,7 +1769,7 @@ class DB( HydrusDB.HydrusDB ): else: - self._c.executemany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_removee_tag_ids_table_name ), ( ( removee_tag_id, ) for removee_tag_id in removee_tag_ids ) ) + self._ExecuteMany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_removee_tag_ids_table_name ), ( ( removee_tag_id, ) for removee_tag_id in removee_tag_ids ) ) hash_id_in_storage_remove = 'hash_id IN ( SELECT DISTINCT hash_id FROM {} CROSS JOIN {} USING ( tag_id ) )'.format( temp_removee_tag_ids_table_name, cache_mappings_table_name ) @@ -1772,7 +1800,7 @@ class DB( HydrusDB.HydrusDB ): else: - self._c.executemany( 'INSERT INTO {} ( tag_id ) VALUES ( ? 
);'.format( temp_keep_tag_ids_table_name ), ( ( keep_tag_id, ) for keep_tag_id in keep_tag_ids ) ) + self._ExecuteMany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_keep_tag_ids_table_name ), ( ( keep_tag_id, ) for keep_tag_id in keep_tag_ids ) ) if DoingAFileJoinTagSearchIsFaster( removee_tag_ids_weight, keep_tag_ids_weight ): @@ -1791,9 +1819,9 @@ class DB( HydrusDB.HydrusDB ): query = 'DELETE FROM {} WHERE tag_id = {} AND {};'.format( cache_display_mappings_table_name, tag_id, predicates_phrase ) - self._c.execute( query ) + self._Execute( query ) - statuses_to_count_delta[ status ] = HydrusDB.GetRowCount( self._c ) + statuses_to_count_delta[ status ] = self._GetRowCount() @@ -1830,9 +1858,9 @@ class DB( HydrusDB.HydrusDB ): # nothing else implies this tag on display, so can just straight up delete - self._c.executemany( 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( cache_display_current_mappings_table_name ), ( ( display_tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( cache_display_current_mappings_table_name ), ( ( display_tag_id, hash_id ) for hash_id in hash_ids ) ) - num_deleted = HydrusDB.GetRowCount( self._c ) + num_deleted = self._GetRowCount() else: @@ -1841,13 +1869,13 @@ class DB( HydrusDB.HydrusDB ): mappings_table_name = statuses_to_table_names[ HC.CONTENT_STATUS_CURRENT ] - with HydrusDB.TemporaryIntegerTable( self._c, other_implied_by_tag_ids, 'tag_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( other_implied_by_tag_ids, 'tag_id' ) as temp_table_name: delete = 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ? AND NOT EXISTS ( SELECT 1 FROM {} CROSS JOIN {} USING ( tag_id ) WHERE hash_id = ? 
);'.format( cache_display_current_mappings_table_name, mappings_table_name, temp_table_name ) - self._c.executemany( delete, ( ( display_tag_id, hash_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( delete, ( ( display_tag_id, hash_id, hash_id ) for hash_id in hash_ids ) ) - num_deleted = HydrusDB.GetRowCount( self._c ) + num_deleted = self._GetRowCount() @@ -1872,11 +1900,11 @@ class DB( HydrusDB.HydrusDB ): ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ) ( cache_display_current_mappings_table_name, cache_display_pending_mappings_table_name ) = GenerateSpecificDisplayMappingsCacheTableNames( file_service_id, tag_service_id ) - self._c.execute( 'CREATE TABLE ' + specific_display_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' ) + self._Execute( 'CREATE TABLE ' + specific_display_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' ) - self._c.execute( 'CREATE TABLE ' + cache_display_current_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + cache_display_current_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) - self._c.execute( 'CREATE TABLE ' + cache_display_pending_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + cache_display_pending_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) if status_hook is not None: @@ -1885,9 +1913,9 @@ class DB( HydrusDB.HydrusDB ): combined_ac_cache_table_name = GenerateCombinedFilesMappingsACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, tag_service_id ) - 
self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id, current_count, pending_count ) SELECT tag_id, current_count, pending_count FROM {};'.format( specific_display_ac_cache_table_name, specific_ac_cache_table_name ) ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT hash_id, tag_id FROM {};'.format( cache_display_current_mappings_table_name, cache_current_mappings_table_name ) ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT hash_id, tag_id FROM {};'.format( cache_display_pending_mappings_table_name, cache_pending_mappings_table_name ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id, current_count, pending_count ) SELECT tag_id, current_count, pending_count FROM {};'.format( specific_display_ac_cache_table_name, specific_ac_cache_table_name ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT hash_id, tag_id FROM {};'.format( cache_display_current_mappings_table_name, cache_current_mappings_table_name ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT hash_id, tag_id FROM {};'.format( cache_display_pending_mappings_table_name, cache_pending_mappings_table_name ) ) if status_hook is not None: @@ -1912,9 +1940,9 @@ class DB( HydrusDB.HydrusDB ): for display_tag_id in display_tag_ids: - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_display_pending_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, display_tag_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_display_pending_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? 
);', ( ( hash_id, display_tag_id ) for hash_id in hash_ids ) ) - num_added = HydrusDB.GetRowCount( self._c ) + num_added = self._GetRowCount() if num_added > 0: @@ -1944,7 +1972,7 @@ class DB( HydrusDB.HydrusDB ): status_hook( message ) - all_pending_storage_tag_ids = self._STS( self._c.execute( 'SELECT DISTINCT tag_id FROM {};'.format( cache_pending_mappings_table_name ) ) ) + all_pending_storage_tag_ids = self._STS( self._Execute( 'SELECT DISTINCT tag_id FROM {};'.format( cache_pending_mappings_table_name ) ) ) storage_tag_ids_to_display_tag_ids = self._CacheTagDisplayGetTagsToImplies( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, all_pending_storage_tag_ids ) @@ -1953,10 +1981,10 @@ class DB( HydrusDB.HydrusDB ): del all_pending_storage_tag_ids del storage_tag_ids_to_display_tag_ids - self._c.execute( 'UPDATE {} SET pending_count = 0 WHERE pending_count > 0;'.format( ac_cache_table_name ) ) - self._c.execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( ac_cache_table_name ) ) + self._Execute( 'UPDATE {} SET pending_count = 0 WHERE pending_count > 0;'.format( ac_cache_table_name ) ) + self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( ac_cache_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_display_pending_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_display_pending_mappings_table_name ) ) all_pending_display_tag_ids_to_implied_by_storage_tag_ids = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, all_pending_display_tag_ids, tags_are_ideal = True ) @@ -1977,18 +2005,18 @@ class DB( HydrusDB.HydrusDB ): ( storage_tag_id, ) = storage_tag_ids - self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id, hash_id ) SELECT ?, hash_id FROM {} WHERE tag_id = ?;'.format( cache_display_pending_mappings_table_name, cache_pending_mappings_table_name ), ( display_tag_id, storage_tag_id ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( 
tag_id, hash_id ) SELECT ?, hash_id FROM {} WHERE tag_id = ?;'.format( cache_display_pending_mappings_table_name, cache_pending_mappings_table_name ), ( display_tag_id, storage_tag_id ) ) - pending_delta = HydrusDB.GetRowCount( self._c ) + pending_delta = self._GetRowCount() else: - with HydrusDB.TemporaryIntegerTable( self._c, storage_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( storage_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: # temp tags to mappings merged - self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id, hash_id ) SELECT DISTINCT ?, hash_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( cache_display_pending_mappings_table_name, temp_tag_ids_table_name, cache_pending_mappings_table_name ), ( display_tag_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id, hash_id ) SELECT DISTINCT ?, hash_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( cache_display_pending_mappings_table_name, temp_tag_ids_table_name, cache_pending_mappings_table_name ), ( display_tag_id, ) ) - pending_delta = HydrusDB.GetRowCount( self._c ) + pending_delta = self._GetRowCount() @@ -2019,9 +2047,9 @@ class DB( HydrusDB.HydrusDB ): # nothing else implies this tag on display, so can just straight up delete - self._c.executemany( 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( cache_display_pending_mappings_table_name ), ( ( display_tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE tag_id = ? 
AND hash_id = ?;'.format( cache_display_pending_mappings_table_name ), ( ( display_tag_id, hash_id ) for hash_id in hash_ids ) ) - num_rescinded = HydrusDB.GetRowCount( self._c ) + num_rescinded = self._GetRowCount() else: @@ -2030,15 +2058,15 @@ class DB( HydrusDB.HydrusDB ): mappings_table_name = statuses_to_table_names[ HC.CONTENT_STATUS_PENDING ] - with HydrusDB.TemporaryIntegerTable( self._c, other_implied_by_tag_ids, 'tag_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( other_implied_by_tag_ids, 'tag_id' ) as temp_table_name: # storage mappings to temp other tag ids # delete mappings where it shouldn't exist for other reasons lad delete = 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ? AND NOT EXISTS ( SELECT 1 FROM {} CROSS JOIN {} USING ( tag_id ) WHERE hash_id = ? )'.format( cache_display_pending_mappings_table_name, mappings_table_name, temp_table_name ) - self._c.executemany( delete, ( ( display_tag_id, hash_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( delete, ( ( display_tag_id, hash_id, hash_id ) for hash_id in hash_ids ) ) - num_rescinded = HydrusDB.GetRowCount( self._c ) + num_rescinded = self._GetRowCount() @@ -2060,22 +2088,22 @@ class DB( HydrusDB.HydrusDB ): cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id ) - self._c.execute( 'INSERT OR IGNORE INTO {} SELECT hash_id FROM {};'.format( cache_files_table_name, hash_ids_table_name ) ) + self._Execute( 'INSERT OR IGNORE INTO {} SELECT hash_id FROM {};'.format( cache_files_table_name, hash_ids_table_name ) ) ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ) ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( tag_service_id ) # deleted don't have a/c counts to update, so we 
can do it all in one go here - self._c.execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( cache_deleted_mappings_table_name, hash_ids_table_name, deleted_mappings_table_name ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id, tag_id ) SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( cache_deleted_mappings_table_name, hash_ids_table_name, deleted_mappings_table_name ) ) # temp hashes to mappings - current_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, current_mappings_table_name ) ).fetchall() + current_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, current_mappings_table_name ) ).fetchall() current_mapping_ids_dict = HydrusData.BuildKeyToSetDict( current_mapping_ids_raw ) # temp hashes to mappings - pending_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, pending_mappings_table_name ) ).fetchall() + pending_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, pending_mappings_table_name ) ).fetchall() pending_mapping_ids_dict = HydrusData.BuildKeyToSetDict( pending_mapping_ids_raw ) @@ -2092,9 +2120,9 @@ class DB( HydrusDB.HydrusDB ): if current_delta > 0: - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_current_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, tag_id ) for hash_id in current_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_current_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? 
);', ( ( hash_id, tag_id ) for hash_id in current_hash_ids ) ) - current_delta = HydrusDB.GetRowCount( self._c ) + current_delta = self._GetRowCount() # @@ -2105,9 +2133,9 @@ class DB( HydrusDB.HydrusDB ): if pending_delta > 0: - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_pending_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, tag_id ) for hash_id in pending_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_pending_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, tag_id ) for hash_id in pending_hash_ids ) ) - pending_delta = HydrusDB.GetRowCount( self._c ) + pending_delta = self._GetRowCount() # @@ -2133,19 +2161,19 @@ class DB( HydrusDB.HydrusDB ): # we have to interleave this into the iterator so that if two siblings with the same ideal are pend->currented at once, we remain logic consistent for soletag lookups! self._CacheSpecificDisplayMappingsRescindPendingMappings( file_service_id, tag_service_id, tag_id, filtered_hash_ids ) - self._c.executemany( 'DELETE FROM ' + cache_pending_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_pending_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) - num_pending_rescinded = HydrusDB.GetRowCount( self._c ) + num_pending_rescinded = self._GetRowCount() # - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_current_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_current_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? 
);', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) - num_current_inserted = HydrusDB.GetRowCount( self._c ) + num_current_inserted = self._GetRowCount() # - self._c.executemany( 'DELETE FROM ' + cache_deleted_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_deleted_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) if num_current_inserted > 0: @@ -2165,7 +2193,7 @@ class DB( HydrusDB.HydrusDB ): - def _CacheSpecificMappingsClear( self, file_service_id, tag_service_id ): + def _CacheSpecificMappingsClear( self, file_service_id, tag_service_id, keep_pending = False ): cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id ) @@ -2173,13 +2201,25 @@ class DB( HydrusDB.HydrusDB ): ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ) - self._c.execute( 'DELETE FROM {};'.format( cache_files_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_current_mappings_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_deleted_mappings_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_pending_mappings_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( specific_ac_cache_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_files_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_current_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_deleted_mappings_table_name ) ) - self._CacheSpecificDisplayMappingsClear( file_service_id, tag_service_id ) + if keep_pending: + + self._Execute( 'UPDATE {} SET current_count = 0;'.format( specific_ac_cache_table_name ) ) + + self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 
0;'.format( specific_ac_cache_table_name ) ) + + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id ) SELECT DISTINCT hash_id FROM {};'.format( cache_files_table_name, cache_pending_mappings_table_name ) ) + + else: + + self._Execute( 'DELETE FROM {};'.format( cache_pending_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( specific_ac_cache_table_name ) ) + + + self._CacheSpecificDisplayMappingsClear( file_service_id, tag_service_id, keep_pending = keep_pending ) def _CacheSpecificMappingsDrop( self, file_service_id, tag_service_id ): @@ -2190,11 +2230,11 @@ class DB( HydrusDB.HydrusDB ): ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_files_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_current_mappings_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_deleted_mappings_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_pending_mappings_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( specific_ac_cache_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_files_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_current_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_deleted_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_pending_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( specific_ac_cache_table_name ) ) self._CacheSpecificDisplayMappingsDrop( file_service_id, tag_service_id ) @@ -2205,25 +2245,25 @@ class DB( HydrusDB.HydrusDB ): cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id ) - self._c.executemany( 'DELETE FROM ' + cache_files_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) 
for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_files_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ) # temp hashes to mappings - deleted_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_deleted_mappings_table_name ) ).fetchall() + deleted_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_deleted_mappings_table_name ) ).fetchall() if len( deleted_mapping_ids_raw ) > 0: - self._c.executemany( 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( cache_deleted_mappings_table_name ), deleted_mapping_ids_raw ) + self._ExecuteMany( 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( cache_deleted_mappings_table_name ), deleted_mapping_ids_raw ) # temp hashes to mappings - current_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_current_mappings_table_name ) ).fetchall() + current_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_current_mappings_table_name ) ).fetchall() current_mapping_ids_dict = HydrusData.BuildKeyToSetDict( current_mapping_ids_raw ) # temp hashes to mappings - pending_mapping_ids_raw = self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_pending_mappings_table_name ) ).fetchall() + pending_mapping_ids_raw = self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_id_table_name, cache_pending_mappings_table_name ) ).fetchall() pending_mapping_ids_dict = 
HydrusData.BuildKeyToSetDict( pending_mapping_ids_raw ) @@ -2247,8 +2287,8 @@ class DB( HydrusDB.HydrusDB ): ac_cache_changes.append( ( tag_id, num_current, num_pending ) ) - self._c.executemany( 'DELETE FROM ' + cache_current_mappings_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) - self._c.executemany( 'DELETE FROM ' + cache_pending_mappings_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_current_mappings_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_pending_mappings_table_name + ' WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) if len( ac_cache_changes ) > 0: @@ -2264,13 +2304,13 @@ class DB( HydrusDB.HydrusDB ): self._CacheSpecificDisplayMappingsDeleteMappings( file_service_id, tag_service_id, tag_id, filtered_hash_ids ) - self._c.executemany( 'DELETE FROM ' + cache_current_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_current_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) - num_deleted = HydrusDB.GetRowCount( self._c ) + num_deleted = self._GetRowCount() # - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_deleted_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_deleted_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? 
);', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) if num_deleted > 0: @@ -2289,15 +2329,15 @@ class DB( HydrusDB.HydrusDB ): ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ) - self._c.execute( 'CREATE TABLE ' + cache_files_table_name + ' ( hash_id INTEGER PRIMARY KEY );' ) + self._Execute( 'CREATE TABLE ' + cache_files_table_name + ' ( hash_id INTEGER PRIMARY KEY );' ) - self._c.execute( 'CREATE TABLE ' + specific_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' ) + self._Execute( 'CREATE TABLE ' + specific_ac_cache_table_name + ' ( tag_id INTEGER PRIMARY KEY, current_count INTEGER, pending_count INTEGER );' ) - self._c.execute( 'CREATE TABLE ' + cache_current_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + cache_current_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) - self._c.execute( 'CREATE TABLE ' + cache_deleted_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + cache_deleted_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) - self._c.execute( 'CREATE TABLE ' + cache_pending_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + cache_pending_mappings_table_name + ' ( hash_id INTEGER, tag_id INTEGER, PRIMARY KEY ( hash_id, tag_id ) ) WITHOUT ROWID;' ) self._CreateIndex( cache_current_mappings_table_name, [ 'tag_id', 'hash_id' ], unique = True ) self._CreateIndex( cache_deleted_mappings_table_name, [ 'tag_id', 'hash_id' ], unique = True ) @@ -2311,7 +2351,7 @@ 
class DB( HydrusDB.HydrusDB ): for ( i, block_of_hash_ids ) in enumerate( HydrusData.SplitListIntoChunks( hash_ids, BLOCK_SIZE ) ): - with HydrusDB.TemporaryIntegerTable( self._c, block_of_hash_ids, 'hash_id' ) as temp_hash_id_table_name: + with self._MakeTemporaryIntegerTable( block_of_hash_ids, 'hash_id' ) as temp_hash_id_table_name: self._CacheSpecificMappingsAddFiles( file_service_id, tag_service_id, block_of_hash_ids, temp_hash_id_table_name ) @@ -2331,14 +2371,14 @@ class DB( HydrusDB.HydrusDB ): file_service_ids_to_valid_hash_ids = collections.defaultdict( set ) - with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_table_name: for file_service_id in file_service_ids: cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id ) # temp hashes to files - valid_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_table_name, cache_files_table_name ) ) ) + valid_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_table_name, cache_files_table_name ) ) ) file_service_ids_to_valid_hash_ids[ file_service_id ] = valid_hash_ids @@ -2355,14 +2395,14 @@ class DB( HydrusDB.HydrusDB ): file_service_ids_to_valid_hash_ids = collections.defaultdict( set ) - with HydrusDB.TemporaryIntegerTable( self._c, all_hash_ids, 'hash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( all_hash_ids, 'hash_id' ) as temp_table_name: for file_service_id in file_service_ids: cache_files_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id ) # temp hashes to files - valid_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_table_name, cache_files_table_name ) ) ) + valid_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( 
hash_id );'.format( temp_table_name, cache_files_table_name ) ) ) file_service_ids_to_valid_hash_ids[ file_service_id ] = valid_hash_ids @@ -2377,9 +2417,9 @@ class DB( HydrusDB.HydrusDB ): ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( file_service_id, tag_service_id ) - self._c.executemany( 'INSERT OR IGNORE INTO ' + cache_pending_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + cache_pending_mappings_table_name + ' ( hash_id, tag_id ) VALUES ( ?, ? );', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) - num_added = HydrusDB.GetRowCount( self._c ) + num_added = self._GetRowCount() if num_added > 0: @@ -2407,12 +2447,12 @@ class DB( HydrusDB.HydrusDB ): status_hook( message ) - all_pending_storage_tag_ids = self._STS( self._c.execute( 'SELECT DISTINCT tag_id FROM {};'.format( pending_mappings_table_name ) ) ) + all_pending_storage_tag_ids = self._STS( self._Execute( 'SELECT DISTINCT tag_id FROM {};'.format( pending_mappings_table_name ) ) ) - self._c.execute( 'UPDATE {} SET pending_count = 0 WHERE pending_count > 0;'.format( ac_cache_table_name ) ) - self._c.execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( ac_cache_table_name ) ) + self._Execute( 'UPDATE {} SET pending_count = 0 WHERE pending_count > 0;'.format( ac_cache_table_name ) ) + self._Execute( 'DELETE FROM {} WHERE current_count = 0 AND pending_count = 0;'.format( ac_cache_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_pending_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_pending_mappings_table_name ) ) ac_cache_changes = [] @@ -2427,9 +2467,9 @@ class DB( HydrusDB.HydrusDB ): status_hook( message ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id, hash_id ) SELECT tag_id, hash_id FROM {} 
CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( cache_pending_mappings_table_name, pending_mappings_table_name, cache_files_table_name ), ( storage_tag_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id, hash_id ) SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( cache_pending_mappings_table_name, pending_mappings_table_name, cache_files_table_name ), ( storage_tag_id, ) ) - pending_delta = HydrusDB.GetRowCount( self._c ) + pending_delta = self._GetRowCount() ac_cache_changes.append( ( storage_tag_id, 0, pending_delta ) ) @@ -2449,9 +2489,9 @@ class DB( HydrusDB.HydrusDB ): self._CacheSpecificDisplayMappingsRescindPendingMappings( file_service_id, tag_service_id, tag_id, filtered_hash_ids ) - self._c.executemany( 'DELETE FROM ' + cache_pending_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_pending_mappings_table_name + ' WHERE hash_id = ? 
AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in filtered_hash_ids ) ) - num_deleted = HydrusDB.GetRowCount( self._c ) + num_deleted = self._GetRowCount() if num_deleted > 0: @@ -2533,16 +2573,16 @@ class DB( HydrusDB.HydrusDB ): ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id ) - actual_sibling_rows = set( self._c.execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) - ideal_sibling_rows = set( self._c.execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) ) + actual_sibling_rows = set( self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) + ideal_sibling_rows = set( self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) ) sibling_rows_to_remove = actual_sibling_rows.difference( ideal_sibling_rows ) sibling_rows_to_add = ideal_sibling_rows.difference( actual_sibling_rows ) ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id ) - actual_parent_rows = set( self._c.execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) - ideal_parent_rows = set( self._c.execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) ) + actual_parent_rows = set( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) + ideal_parent_rows = set( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) ) parent_rows_to_remove = actual_parent_rows.difference( ideal_parent_rows ) parent_rows_to_add = ideal_parent_rows.difference( 
actual_parent_rows ) @@ -2605,21 +2645,21 @@ class DB( HydrusDB.HydrusDB ): # all parent definitions are sibling collapsed, so are terminus of their sibling chains # so get all of the parent chain, then get all chains that point to those - with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name: - with HydrusDB.TemporaryIntegerTable( self._c, [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name: self._CacheTagSiblingsGetIdealsIntoTable( display_type, tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name ) - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_parent_chain_members_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_parent_chain_members_table_name: self._CacheTagParentsGetChainsMembersTables( display_type, tag_service_id, temp_ideal_tag_ids_table_name, temp_parent_chain_members_table_name ) - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_chain_members_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_chain_members_table_name: self._CacheTagSiblingsGetChainsMembersFromIdealsTables( display_type, tag_service_id, temp_parent_chain_members_table_name, temp_chain_members_table_name ) - return self._STS( self._c.execute( 'SELECT tag_id FROM {};'.format( temp_chain_members_table_name ) ) ) + return self._STS( self._Execute( 'SELECT tag_id FROM {};'.format( temp_chain_members_table_name ) ) ) @@ -2921,9 +2961,9 @@ class DB( HydrusDB.HydrusDB ): - self._c.execute( 'DELETE FROM tag_sibling_application;' ) + self._Execute( 'DELETE FROM tag_sibling_application;' ) - self._c.executemany( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? 
);', inserts ) + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_sibling_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', inserts ) self._service_ids_to_sibling_applicable_service_ids = None self._service_ids_to_sibling_interested_service_ids = None @@ -2969,9 +3009,9 @@ class DB( HydrusDB.HydrusDB ): - self._c.execute( 'DELETE FROM tag_parent_application;' ) + self._Execute( 'DELETE FROM tag_parent_application;' ) - self._c.executemany( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', inserts ) + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', inserts ) self._service_ids_to_parent_applicable_service_ids = None self._service_ids_to_parent_interested_service_ids = None @@ -3175,7 +3215,7 @@ class DB( HydrusDB.HydrusDB ): previous_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) - self._c.execute( 'DELETE FROM {} WHERE bad_tag_id = ? AND ideal_tag_id = ?;'.format( cache_actual_tag_siblings_lookup_table_name ), smallest_sibling_row ) + self._Execute( 'DELETE FROM {} WHERE bad_tag_id = ? AND ideal_tag_id = ?;'.format( cache_actual_tag_siblings_lookup_table_name ), smallest_sibling_row ) after_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) @@ -3197,7 +3237,7 @@ class DB( HydrusDB.HydrusDB ): previous_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) - self._c.execute( 'DELETE FROM {} WHERE child_tag_id = ? AND ancestor_tag_id = ?;'.format( cache_actual_tag_parents_lookup_table_name ), smallest_parent_row ) + self._Execute( 'DELETE FROM {} WHERE child_tag_id = ? 
AND ancestor_tag_id = ?;'.format( cache_actual_tag_parents_lookup_table_name ), smallest_parent_row ) after_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) @@ -3260,7 +3300,7 @@ class DB( HydrusDB.HydrusDB ): previous_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_actual_tag_siblings_lookup_table_name ), largest_sibling_row ) + self._Execute( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_actual_tag_siblings_lookup_table_name ), largest_sibling_row ) after_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) @@ -3282,7 +3322,7 @@ class DB( HydrusDB.HydrusDB ): previous_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_actual_tag_parents_lookup_table_name ), largest_parent_row ) + self._Execute( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? 
);'.format( cache_actual_tag_parents_lookup_table_name ), largest_parent_row ) after_chain_tag_ids_to_implied_by = self._CacheTagDisplayGetTagsToImpliedBy( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, possibly_affected_tag_ids ) @@ -3381,8 +3421,8 @@ class DB( HydrusDB.HydrusDB ): ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_parents_lookup_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_parents_lookup_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) def _CacheTagParentsFilterChained( self, display_type, tag_service_id, ideal_tag_ids ): @@ -3409,13 +3449,13 @@ class DB( HydrusDB.HydrusDB ): cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - with HydrusDB.TemporaryIntegerTable( self._c, ideal_tag_ids, 'tag_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'tag_id' ) as temp_table_name: # keep these separate--older sqlite can't do cross join to an OR ON # temp tags to lookup - chain_tag_ids = self._STS( self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) - chain_tag_ids.update( self._STI( self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) ) + chain_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) + chain_tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( 
ancestor_tag_id = tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) ) return chain_tag_ids @@ -3425,8 +3465,8 @@ class DB( HydrusDB.HydrusDB ): ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_actual_tag_parents_lookup_table_name ) ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_ideal_tag_parents_lookup_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_actual_tag_parents_lookup_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( child_tag_id INTEGER, ancestor_tag_id INTEGER, PRIMARY KEY ( child_tag_id, ancestor_tag_id ) );'.format( cache_ideal_tag_parents_lookup_table_name ) ) self._CreateIndex( cache_actual_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] ) self._CreateIndex( cache_ideal_tag_parents_lookup_table_name, [ 'ancestor_tag_id' ] ) @@ -3439,7 +3479,7 @@ class DB( HydrusDB.HydrusDB ): def _CacheTagParentsGenerateApplicationDicts( self ): - unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._c.execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_parent_application;' ) ) + unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._Execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_parent_application;' ) ) 
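Every hunk above and below follows the same mechanical substitution: call sites stop touching the raw cursor `self._c` and instead go through thin methods on the DB base class, and the temporary-integer-table context manager loses its explicit cursor argument because the method now owns the cursor. A minimal sketch of what those helpers plausibly look like is below; the method names are taken from the diff itself, but the bodies are invented for illustration (the real HydrusDB implementations carry extra bookkeeping such as query profiling):

```python
import contextlib
import sqlite3

class DBBase:

    # hypothetical minimal base class illustrating the wrapper pattern
    # this changeset applies; bodies are assumptions, names are from the diff

    def __init__( self ):

        self._db = sqlite3.connect( ':memory:' )
        self._c = self._db.cursor()

        self._temp_table_counter = 0

    def _Execute( self, query, *args ):

        # was: self._c.execute( ... ) at every call site
        return self._c.execute( query, *args )

    def _ExecuteMany( self, query, args_iterator ):

        # was: self._c.executemany( ... )
        return self._c.executemany( query, args_iterator )

    def _GetRowCount( self ):

        # was: HydrusDB.GetRowCount( self._c ); sqlite reports -1 when the
        # last statement was not DML, so normalise that to 0
        rowcount = self._c.rowcount

        return rowcount if rowcount != -1 else 0

    @contextlib.contextmanager
    def _MakeTemporaryIntegerTable( self, integer_iterable, column_name ):

        # was: HydrusDB.TemporaryIntegerTable( self._c, ... ); the cursor
        # argument disappears because the method already has self._c
        self._temp_table_counter += 1

        table_name = 'temp_int_table_{}'.format( self._temp_table_counter )

        self._Execute( 'CREATE TEMP TABLE {} ( {} INTEGER PRIMARY KEY );'.format( table_name, column_name ) )

        try:

            self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( {} ) VALUES ( ? );'.format( table_name, column_name ), ( ( i, ) for i in integer_iterable ) )

            yield table_name

        finally:

            self._Execute( 'DROP TABLE {};'.format( table_name ) )
```

Centralising cursor access this way gives one choke point for later instrumentation, which is why the diff can be applied hunk-by-hunk without behavioural change.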
self._service_ids_to_parent_applicable_service_ids = collections.defaultdict( list ) @@ -3460,7 +3500,7 @@ class DB( HydrusDB.HydrusDB ): cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - ancestor_ids = self._STS( self._c.execute( 'SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) ) + ancestor_ids = self._STS( self._Execute( 'SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) ) return ancestor_ids @@ -3494,13 +3534,13 @@ class DB( HydrusDB.HydrusDB ): ( ideal_tag_id, ) = next_search_tag_ids - round_of_tag_ids = self._STS( self._c.execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) ) + round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? 
UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) ) else: - with HydrusDB.TemporaryIntegerTable( self._c, next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name: - round_of_tag_ids = self._STS( self._c.execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) ) + round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) ) @@ -3520,7 +3560,7 @@ class DB( HydrusDB.HydrusDB ): cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - first_ideal_tag_ids = self._STS( self._c.execute( 'SELECT ideal_tag_id FROM {};'.format( ideal_tag_ids_table_name ) ) ) + first_ideal_tag_ids = self._STS( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( ideal_tag_ids_table_name ) ) ) chain_tag_ids = set( first_ideal_tag_ids ) we_have_looked_up = set() @@ -3532,13 +3572,13 @@ class DB( HydrusDB.HydrusDB ): ( ideal_tag_id, ) = next_search_tag_ids - round_of_tag_ids = self._STS( self._c.execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? 
UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) ) + round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ? UNION ALL SELECT ancestor_tag_id FROM {} WHERE child_tag_id = ?;'.format( cache_tag_parents_lookup_table_name, cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ) ) else: - with HydrusDB.TemporaryIntegerTable( self._c, next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( next_search_tag_ids, 'tag_id' ) as temp_next_search_tag_ids_table_name: - round_of_tag_ids = self._STS( self._c.execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) ) + round_of_tag_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} CROSS JOIN {} ON ( ancestor_tag_id = tag_id ) UNION ALL SELECT ancestor_tag_id FROM {} CROSS JOIN {} ON ( child_tag_id = tag_id );'.format( temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name, temp_next_search_tag_ids_table_name, cache_tag_parents_lookup_table_name ) ) ) @@ -3546,7 +3586,7 @@ class DB( HydrusDB.HydrusDB ): if len( new_tag_ids ) > 0: - self._c.executemany( 'INSERT OR IGNORE INTO {} ( tag_id ) VALUES ( ? );', ( ( tag_id, ) for tag_id in round_of_tag_ids.difference( new_tag_ids ) ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( tag_id ) VALUES ( ? 
);', ( ( tag_id, ) for tag_id in round_of_tag_ids.difference( new_tag_ids ) ) ) chain_tag_ids.update( new_tag_ids ) @@ -3561,7 +3601,7 @@ class DB( HydrusDB.HydrusDB ): cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - descendant_ids = self._STS( self._c.execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) ) + descendant_ids = self._STS( self._Execute( 'SELECT child_tag_id FROM {} WHERE ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ) ) ) return descendant_ids @@ -3593,9 +3633,9 @@ class DB( HydrusDB.HydrusDB ): cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - with HydrusDB.TemporaryIntegerTable( self._c, ideal_tag_ids, 'child_tag_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'child_tag_id' ) as temp_table_name: - tag_ids_to_ancestors = HydrusData.BuildKeyToSetDict( self._c.execute( 'SELECT child_tag_id, ancestor_tag_id FROM {} CROSS JOIN {} USING ( child_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) + tag_ids_to_ancestors = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT child_tag_id, ancestor_tag_id FROM {} CROSS JOIN {} USING ( child_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) for tag_id in ideal_tag_ids: @@ -3626,9 +3666,9 @@ class DB( HydrusDB.HydrusDB ): cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - with HydrusDB.TemporaryIntegerTable( self._c, ideal_tag_ids, 'ancestor_tag_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ancestor_tag_id' ) as temp_table_name: - tag_ids_to_descendants = HydrusData.BuildKeyToSetDict( self._c.execute( 'SELECT ancestor_tag_id, child_tag_id FROM {} CROSS JOIN {} USING ( ancestor_tag_id 
);'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) + tag_ids_to_descendants = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT ancestor_tag_id, child_tag_id FROM {} CROSS JOIN {} USING ( ancestor_tag_id );'.format( temp_table_name, cache_tag_parents_lookup_table_name ) ) ) for ideal_tag_id in ideal_tag_ids: @@ -3664,7 +3704,7 @@ class DB( HydrusDB.HydrusDB ): cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( display_type, tag_service_id ) - return self._c.execute( 'SELECT 1 FROM {} WHERE child_tag_id = ? OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ).fetchone() is not None + return self._Execute( 'SELECT 1 FROM {} WHERE child_tag_id = ? OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ideal_tag_id, ideal_tag_id ) ).fetchone() is not None def _CacheTagParentsRegen( self, tag_service_ids ): @@ -3673,7 +3713,7 @@ class DB( HydrusDB.HydrusDB ): cache_tag_parents_lookup_table_name = GenerateTagParentsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id ) - self._c.execute( 'DELETE FROM {};'.format( cache_tag_parents_lookup_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_tag_parents_lookup_table_name ) ) applicable_service_ids = self._CacheTagParentsGetApplicableServiceIds( tag_service_id ) @@ -3707,7 +3747,7 @@ class DB( HydrusDB.HydrusDB ): - self._c.executemany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? 
);'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() ) if tag_service_id in self._service_ids_to_display_application_status: @@ -3742,7 +3782,7 @@ class DB( HydrusDB.HydrusDB ): # this should now contain all possible tag_ids that could be in tag parents right now related to what we were given - self._c.executemany( 'DELETE FROM {} WHERE child_tag_id = ? OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE child_tag_id = ? OR ancestor_tag_id = ?;'.format( cache_tag_parents_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) ) # we wipe them @@ -3778,7 +3818,7 @@ class DB( HydrusDB.HydrusDB ): - self._c.executemany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( child_tag_id, ancestor_tag_id ) VALUES ( ?, ? );'.format( cache_tag_parents_lookup_table_name ), tps.IterateDescendantAncestorPairs() ) if tag_service_id in self._service_ids_to_display_application_status: @@ -3815,9 +3855,9 @@ class DB( HydrusDB.HydrusDB ): for tag_id in tag_ids: - self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id, namespace_id, subtag_id ) SELECT tag_id, namespace_id, subtag_id FROM tags WHERE tag_id = ?;'.format( tags_table_name ), ( tag_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id, namespace_id, subtag_id ) SELECT tag_id, namespace_id, subtag_id FROM tags WHERE tag_id = ?;'.format( tags_table_name ), ( tag_id, ) ) - if HydrusDB.GetRowCount( self._c ) > 0: + if self._GetRowCount() > 0: actually_new_tag_ids.add( tag_id ) @@ -3827,13 +3867,13 @@ class DB( HydrusDB.HydrusDB ): if file_service_id == self.modules_services.combined_file_service_id: - self._c.execute( 'UPDATE service_info SET info = info + ? 
WHERE service_id = ? AND info_type = ?;', ( len( actually_new_tag_ids ), tag_service_id, HC.SERVICE_INFO_NUM_TAGS ) ) + self._Execute( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', ( len( actually_new_tag_ids ), tag_service_id, HC.SERVICE_INFO_NUM_TAGS ) ) - with HydrusDB.TemporaryIntegerTable( self._c, actually_new_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( actually_new_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: # temp tags to fast tag definitions to subtags - subtag_ids_and_subtags = self._c.execute( 'SELECT subtag_id, subtag FROM {} CROSS JOIN {} USING ( tag_id ) CROSS JOIN subtags USING ( subtag_id );'.format( temp_tag_ids_table_name, tags_table_name ) ).fetchall() + subtag_ids_and_subtags = self._Execute( 'SELECT subtag_id, subtag FROM {} CROSS JOIN {} USING ( tag_id ) CROSS JOIN subtags USING ( subtag_id );'.format( temp_tag_ids_table_name, tags_table_name ) ).fetchall() subtags_fts4_table_name = self._CacheTagsGetSubtagsFTS4TableName( file_service_id, tag_service_id ) subtags_searchable_map_table_name = self._CacheTagsGetSubtagsSearchableMapTableName( file_service_id, tag_service_id ) @@ -3847,12 +3887,12 @@ class DB( HydrusDB.HydrusDB ): searchable_subtag_id = self.modules_tags.GetSubtagId( searchable_subtag ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id, searchable_subtag_id ) VALUES ( ?, ? );'.format( subtags_searchable_map_table_name ), ( subtag_id, searchable_subtag_id ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( subtag_id, searchable_subtag_id ) VALUES ( ?, ? );'.format( subtags_searchable_map_table_name ), ( subtag_id, searchable_subtag_id ) ) # - self._c.execute( 'INSERT OR IGNORE INTO {} ( docid, subtag ) VALUES ( ?, ? );'.format( subtags_fts4_table_name ), ( subtag_id, searchable_subtag ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( docid, subtag ) VALUES ( ?, ? 
);'.format( subtags_fts4_table_name ), ( subtag_id, searchable_subtag ) )
 
         if subtag.isdecimal():
 
@@ -3862,7 +3902,7 @@ class DB( HydrusDB.HydrusDB ):
 
             if CanCacheInteger( integer_subtag ):
 
-                self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id, integer_subtag ) VALUES ( ?, ? );'.format( integer_subtags_table_name ), ( subtag_id, integer_subtag ) )
+                self._Execute( 'INSERT OR IGNORE INTO {} ( subtag_id, integer_subtag ) VALUES ( ?, ? );'.format( integer_subtags_table_name ), ( subtag_id, integer_subtag ) )
 
         except ValueError:
 
@@ -3887,38 +3927,38 @@ class DB( HydrusDB.HydrusDB ):
 
         subtags_searchable_map_table_name = self._CacheTagsGetSubtagsSearchableMapTableName( file_service_id, tag_service_id )
         integer_subtags_table_name = self._CacheTagsGetIntegerSubtagsTableName( file_service_id, tag_service_id )
 
-        with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
+        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
 
            # temp tag ids to tag definitions
 
-            subtag_ids = self._STS( self._c.execute( 'SELECT subtag_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_tag_ids_table_name, tags_table_name ) ) )
+            subtag_ids = self._STS( self._Execute( 'SELECT subtag_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_tag_ids_table_name, tags_table_name ) ) )
 
            #
 
-            self._c.executemany( 'DELETE FROM {} WHERE tag_id = ?;'.format( tags_table_name ), ( ( tag_id, ) for tag_id in tag_ids ) )
+            self._ExecuteMany( 'DELETE FROM {} WHERE tag_id = ?;'.format( tags_table_name ), ( ( tag_id, ) for tag_id in tag_ids ) )
 
-            num_deleted = HydrusDB.GetRowCount( self._c )
+            num_deleted = self._GetRowCount()
 
            if num_deleted > 0:
 
                if file_service_id == self.modules_services.combined_file_service_id:
 
-                    self._c.execute( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', ( num_deleted, tag_service_id, HC.SERVICE_INFO_NUM_TAGS ) )
+                    self._Execute( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', ( num_deleted, tag_service_id, HC.SERVICE_INFO_NUM_TAGS ) )
 
            #
 
            # subtags may exist under other namespaces, so exclude those that do
 
-            with HydrusDB.TemporaryIntegerTable( self._c, subtag_ids, 'subtag_id' ) as temp_subtag_ids_table_name:
+            with self._MakeTemporaryIntegerTable( subtag_ids, 'subtag_id' ) as temp_subtag_ids_table_name:
 
-                still_existing_subtag_ids = self._STS( self._c.execute( 'SELECT subtag_id FROM {} CROSS JOIN {} USING ( subtag_id );'.format( temp_subtag_ids_table_name, tags_table_name ) ) )
+                still_existing_subtag_ids = self._STS( self._Execute( 'SELECT subtag_id FROM {} CROSS JOIN {} USING ( subtag_id );'.format( temp_subtag_ids_table_name, tags_table_name ) ) )
 
            deletee_subtag_ids = subtag_ids.difference( still_existing_subtag_ids )
 
-            self._c.executemany( 'DELETE FROM {} WHERE docid = ?;'.format( subtags_fts4_table_name ), ( ( subtag_id, ) for subtag_id in deletee_subtag_ids ) )
-            self._c.executemany( 'DELETE FROM {} WHERE subtag_id = ?;'.format( subtags_searchable_map_table_name ), ( ( subtag_id, ) for subtag_id in deletee_subtag_ids ) )
-            self._c.executemany( 'DELETE FROM {} WHERE subtag_id = ?;'.format( integer_subtags_table_name ), ( ( subtag_id, ) for subtag_id in deletee_subtag_ids ) )
+            self._ExecuteMany( 'DELETE FROM {} WHERE docid = ?;'.format( subtags_fts4_table_name ), ( ( subtag_id, ) for subtag_id in deletee_subtag_ids ) )
+            self._ExecuteMany( 'DELETE FROM {} WHERE subtag_id = ?;'.format( subtags_searchable_map_table_name ), ( ( subtag_id, ) for subtag_id in deletee_subtag_ids ) )
+            self._ExecuteMany( 'DELETE FROM {} WHERE subtag_id = ?;'.format( integer_subtags_table_name ), ( ( subtag_id, ) for subtag_id in deletee_subtag_ids ) )
 
@@ -3927,19 +3967,19 @@ class DB( HydrusDB.HydrusDB ):
 
         tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, tag_service_id )
 
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( tags_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( tags_table_name ) )
 
         subtags_fts4_table_name = self._CacheTagsGetSubtagsFTS4TableName( file_service_id, tag_service_id )
 
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( subtags_fts4_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( subtags_fts4_table_name ) )
 
         subtags_searchable_map_table_name = self._CacheTagsGetSubtagsSearchableMapTableName( file_service_id, tag_service_id )
 
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( subtags_searchable_map_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( subtags_searchable_map_table_name ) )
 
         integer_subtags_table_name = self._CacheTagsGetIntegerSubtagsTableName( file_service_id, tag_service_id )
 
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( integer_subtags_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( integer_subtags_table_name ) )
 
     def _CacheTagsFileServiceIsCoveredByAllLocalFiles( self, file_service_id ):
 
@@ -3956,16 +3996,16 @@ class DB( HydrusDB.HydrusDB ):
 
         subtags_searchable_map_table_name = self._CacheTagsGetSubtagsSearchableMapTableName( file_service_id, tag_service_id )
         integer_subtags_table_name = self._CacheTagsGetIntegerSubtagsTableName( file_service_id, tag_service_id )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );'.format( tags_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );'.format( tags_table_name ) )
         self._CreateIndex( tags_table_name, [ 'namespace_id', 'subtag_id' ], unique = True )
         self._CreateIndex( tags_table_name, [ 'subtag_id' ] )
 
-        self._c.execute( 'CREATE VIRTUAL TABLE IF NOT EXISTS {} USING fts4( subtag );'.format( subtags_fts4_table_name ) )
+        self._Execute( 'CREATE VIRTUAL TABLE IF NOT EXISTS {} USING fts4( subtag );'.format( subtags_fts4_table_name ) )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, searchable_subtag_id INTEGER );'.format( subtags_searchable_map_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, searchable_subtag_id INTEGER );'.format( subtags_searchable_map_table_name ) )
         self._CreateIndex( subtags_searchable_map_table_name, [ 'searchable_subtag_id' ] )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, integer_subtag INTEGER );'.format( integer_subtags_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, integer_subtag INTEGER );'.format( integer_subtags_table_name ) )
         self._CreateIndex( integer_subtags_table_name, [ 'integer_subtag' ] )
 
@@ -4090,7 +4130,7 @@ class DB( HydrusDB.HydrusDB ):
 
         subtags_fts4_table_name = self._CacheTagsGetSubtagsFTS4TableName( file_service_id, tag_service_id )
         subtags_searchable_map_table_name = self._CacheTagsGetSubtagsSearchableMapTableName( file_service_id, tag_service_id )
 
-        self._c.execute( 'DELETE FROM {};'.format( subtags_searchable_map_table_name ) )
+        self._Execute( 'DELETE FROM {};'.format( subtags_searchable_map_table_name ) )
 
         query = 'SELECT docid FROM {};'.format( subtags_fts4_table_name )
 
@@ -4100,7 +4140,7 @@ class DB( HydrusDB.HydrusDB ):
 
         for subtag_id in group_of_subtag_ids:
 
-            result = self._c.execute( 'SELECT subtag FROM subtags WHERE subtag_id = ?;', ( subtag_id, ) ).fetchone()
+            result = self._Execute( 'SELECT subtag FROM subtags WHERE subtag_id = ?;', ( subtag_id, ) ).fetchone()
 
            if result is None:
 
@@ -4115,7 +4155,7 @@ class DB( HydrusDB.HydrusDB ):
 
         searchable_subtag_id = self.modules_tags.GetSubtagId( searchable_subtag )
 
-        self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id, searchable_subtag_id ) VALUES ( ?, ? );'.format( subtags_searchable_map_table_name ), ( subtag_id, searchable_subtag_id ) )
+        self._Execute( 'INSERT OR IGNORE INTO {} ( subtag_id, searchable_subtag_id ) VALUES ( ?, ? );'.format( subtags_searchable_map_table_name ), ( subtag_id, searchable_subtag_id ) )
 
@@ -4137,11 +4177,11 @@ class DB( HydrusDB.HydrusDB ):
 
         subtags_searchable_map_table_name = self._CacheTagsGetSubtagsSearchableMapTableName( file_service_id, tag_service_id )
         integer_subtags_table_name = self._CacheTagsGetIntegerSubtagsTableName( file_service_id, tag_service_id )
 
-        missing_subtag_ids = self._STS( self._c.execute( 'SELECT subtag_id FROM {} EXCEPT SELECT docid FROM {};'.format( tags_table_name, subtags_fts4_table_name ) ) )
+        missing_subtag_ids = self._STS( self._Execute( 'SELECT subtag_id FROM {} EXCEPT SELECT docid FROM {};'.format( tags_table_name, subtags_fts4_table_name ) ) )
 
         for subtag_id in missing_subtag_ids:
 
-            result = self._c.execute( 'SELECT subtag FROM subtags WHERE subtag_id = ?;', ( subtag_id, ) ).fetchone()
+            result = self._Execute( 'SELECT subtag FROM subtags WHERE subtag_id = ?;', ( subtag_id, ) ).fetchone()
 
            if result is None:
 
@@ -4156,12 +4196,12 @@ class DB( HydrusDB.HydrusDB ):
 
         searchable_subtag_id = self.modules_tags.GetSubtagId( searchable_subtag )
 
-        self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id, searchable_subtag_id ) VALUES ( ?, ? );'.format( subtags_searchable_map_table_name ), ( subtag_id, searchable_subtag_id ) )
+        self._Execute( 'INSERT OR IGNORE INTO {} ( subtag_id, searchable_subtag_id ) VALUES ( ?, ? );'.format( subtags_searchable_map_table_name ), ( subtag_id, searchable_subtag_id ) )
 
        #
 
-        self._c.execute( 'INSERT OR IGNORE INTO {} ( docid, subtag ) VALUES ( ?, ? );'.format( subtags_fts4_table_name ), ( subtag_id, searchable_subtag ) )
+        self._Execute( 'INSERT OR IGNORE INTO {} ( docid, subtag ) VALUES ( ?, ? );'.format( subtags_fts4_table_name ), ( subtag_id, searchable_subtag ) )
 
         if subtag.isdecimal():
 
@@ -4171,7 +4211,7 @@ class DB( HydrusDB.HydrusDB ):
 
             if CanCacheInteger( integer_subtag ):
 
-                self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id, integer_subtag ) VALUES ( ?, ? );'.format( integer_subtags_table_name ), ( subtag_id, integer_subtag ) )
+                self._Execute( 'INSERT OR IGNORE INTO {} ( subtag_id, integer_subtag ) VALUES ( ?, ? );'.format( integer_subtags_table_name ), ( subtag_id, integer_subtag ) )
 
         except ValueError:
 
@@ -4201,19 +4241,19 @@ class DB( HydrusDB.HydrusDB ):
 
         chained_tag_ids = self._CacheTagDisplayFilterChained( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id, tag_ids )
         unchained_tag_ids = { tag_id for tag_id in tag_ids if tag_id not in chained_tag_ids }
 
-        with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
+        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
 
-            with HydrusDB.TemporaryIntegerTable( self._c, unchained_tag_ids, 'tag_id' ) as temp_unchained_tag_ids_table_name:
+            with self._MakeTemporaryIntegerTable( unchained_tag_ids, 'tag_id' ) as temp_unchained_tag_ids_table_name:
 
                for file_service_id in file_service_ids:
 
                    tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, tag_service_id )
 
-                    already_exist = self._STS( self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_tag_ids_table_name, tags_table_name ) ) )
+                    already_exist = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_tag_ids_table_name, tags_table_name ) ) )
 
                    ac_cache_table_name = self._CacheMappingsGetACCacheTableName( ClientTags.TAG_DISPLAY_STORAGE, file_service_id, tag_service_id )
 
-                    exist_in_ac_cache_tag_ids = self._STS( self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_unchained_tag_ids_table_name, ac_cache_table_name ) ) )
+                    exist_in_ac_cache_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_unchained_tag_ids_table_name, ac_cache_table_name ) ) )
 
                    should_have = chained_tag_ids.union( exist_in_ac_cache_tag_ids )
 
@@ -4233,8 +4273,8 @@ class DB( HydrusDB.HydrusDB ):
 
         ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id )
 
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_siblings_lookup_table_name ) )
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_siblings_lookup_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_actual_tag_siblings_lookup_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( cache_ideal_tag_siblings_lookup_table_name ) )
 
     def _CacheTagSiblingsFilterChained( self, display_type, tag_service_id, tag_ids ):
 
@@ -4261,13 +4301,13 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
 
-        with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name:
 
            # keep these separate--older sqlite can't do cross join to an OR ON
 
            # temp tags to lookup
 
-            chain_tag_ids = self._STS( self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
-            chain_tag_ids.update( self._STI( self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) )
+            chain_tag_ids = self._STS( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
+            chain_tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) ) )
 
        return chain_tag_ids
 
@@ -4282,8 +4322,8 @@ class DB( HydrusDB.HydrusDB ):
 
        # keep these separate--older sqlite can't do cross join to an OR ON
 
        # temp tags to lookup
 
-        self._c.execute( 'INSERT OR IGNORE INTO {} SELECT ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
-        self._STI( self._c.execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) )
+        self._Execute( 'INSERT OR IGNORE INTO {} SELECT ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
+        self._STI( self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) )
 
     def _CacheTagSiblingsFilterChainedIntoTable( self, display_type, tag_service_id, tag_ids_table_name, results_table_name ):
 
@@ -4295,16 +4335,16 @@ class DB( HydrusDB.HydrusDB ):
 
        # keep these separate--older sqlite can't do cross join to an OR ON
 
        # temp tags to lookup
 
-        self._c.execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
-        self._STI( self._c.execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) )
+        self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
+        self._STI( self._Execute( 'INSERT OR IGNORE INTO {} SELECT tag_id FROM {} CROSS JOIN {} ON ( ideal_tag_id = tag_id );'.format( results_table_name, tag_ids_table_name, cache_tag_siblings_lookup_table_name ) ) )
 
     def _CacheTagSiblingsGenerate( self, tag_service_id ):
 
         ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) )
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) )
 
         self._CreateIndex( cache_actual_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
         self._CreateIndex( cache_ideal_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] )
 
@@ -4317,7 +4357,7 @@ class DB( HydrusDB.HydrusDB ):
 
     def _CacheTagSiblingsGenerateApplicationDicts( self ):
 
-        unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._c.execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_sibling_application;' ) )
+        unsorted_dict = HydrusData.BuildKeyToListDict( ( master_service_id, ( index, application_service_id ) ) for ( master_service_id, index, application_service_id ) in self._Execute( 'SELECT master_service_id, service_index, application_service_id FROM tag_sibling_application;' ) )
 
         self._service_ids_to_sibling_applicable_service_ids = collections.defaultdict( list )
 
@@ -4338,7 +4378,7 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
 
-        sibling_tag_ids = self._STS( self._c.execute( 'SELECT bad_tag_id FROM {} WHERE ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ideal_tag_id, ) ) )
+        sibling_tag_ids = self._STS( self._Execute( 'SELECT bad_tag_id FROM {} WHERE ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ideal_tag_id, ) ) )
 
         sibling_tag_ids.add( ideal_tag_id )
 
@@ -4360,10 +4400,10 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
 
-        with HydrusDB.TemporaryIntegerTable( self._c, ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name:
 
            # temp tags to lookup
 
-            sibling_tag_ids = self._STS( self._c.execute( 'SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
+            sibling_tag_ids = self._STS( self._Execute( 'SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
 
        sibling_tag_ids.update( ideal_tag_ids )
 
@@ -4375,10 +4415,10 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
 
-        self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT ideal_tag_id FROM {};'.format( results_table_name, ideal_tag_ids_table_name ) )
+        self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT ideal_tag_id FROM {};'.format( results_table_name, ideal_tag_ids_table_name ) )
 
        # tags to lookup
 
-        self._c.execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( results_table_name, ideal_tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
+        self._Execute( 'INSERT OR IGNORE INTO {} ( tag_id ) SELECT bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( results_table_name, ideal_tag_ids_table_name, cache_tag_siblings_lookup_table_name ) )
 
     def _CacheTagSiblingsGetApplicableServiceIds( self, tag_service_id ):
 
@@ -4395,7 +4435,7 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
 
-        result = self._c.execute( 'SELECT ideal_tag_id FROM {} WHERE bad_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, ) ).fetchone()
+        result = self._Execute( 'SELECT ideal_tag_id FROM {} WHERE bad_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, ) ).fetchone()
 
         if result is None:
 
@@ -4429,11 +4469,11 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
 
-        with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
+        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
 
            magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END'
 
-            cursor = self._c.execute(
+            cursor = self._Execute(
                'SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format(
                    magic_case,
                    temp_tag_ids_table_name,
 
@@ -4448,10 +4488,10 @@ class DB( HydrusDB.HydrusDB ):
 
         no_ideal_found_tag_ids = set( tag_ids )
         ideal_tag_ids = set()
 
-        with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name:
 
            # temp tags to lookup
 
-            for ( tag_id, ideal_tag_id ) in self._c.execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ):
+            for ( tag_id, ideal_tag_id ) in self._Execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ):
 
                no_ideal_found_tag_ids.discard( tag_id )
                ideal_tag_ids.add( ideal_tag_id )
 
@@ -4470,7 +4510,7 @@ class DB( HydrusDB.HydrusDB ):
 
        magic_case = 'CASE WHEN ideal_tag_id NOT NULL THEN ideal_tag_id ELSE tag_id END'
 
-        cursor = self._c.execute(
+        cursor = self._Execute(
            'INSERT OR IGNORE INTO {} ( ideal_tag_id ) SELECT {} FROM {} LEFT OUTER JOIN {} ON ( tag_id = bad_tag_id );'.format(
                results_table_name,
                magic_case,
 
@@ -4501,10 +4541,10 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
 
-        with HydrusDB.TemporaryIntegerTable( self._c, ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( ideal_tag_ids, 'ideal_tag_id' ) as temp_table_name:
 
            # temp tags to lookup
 
-            ideal_tag_ids_to_chain_members = HydrusData.BuildKeyToSetDict( self._c.execute( 'SELECT ideal_tag_id, bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
+            ideal_tag_ids_to_chain_members = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT ideal_tag_id, bad_tag_id FROM {} CROSS JOIN {} USING ( ideal_tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ) )
 
        # this returns ideal in the chain, and chains of size 1
 
@@ -4584,7 +4624,7 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_ACTUAL, tag_service_id )
 
-        pair_ids = self._c.execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ).fetchall()
+        pair_ids = self._Execute( 'SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_tag_siblings_lookup_table_name ) ).fetchall()
 
         all_tag_ids = set( itertools.chain.from_iterable( pair_ids ) )
 
@@ -4618,10 +4658,10 @@ class DB( HydrusDB.HydrusDB ):
 
         no_ideal_found_tag_ids = set( tag_ids )
         tag_ids_to_ideal_tag_ids = {}
 
-        with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name:
 
            # temp tags to lookup
 
-            for ( tag_id, ideal_tag_id ) in self._c.execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ):
+            for ( tag_id, ideal_tag_id ) in self._Execute( 'SELECT tag_id, ideal_tag_id FROM {} CROSS JOIN {} ON ( bad_tag_id = tag_id );'.format( temp_table_name, cache_tag_siblings_lookup_table_name ) ):
 
                no_ideal_found_tag_ids.discard( tag_id )
                tag_ids_to_ideal_tag_ids[ tag_id ] = ideal_tag_id
 
@@ -4637,7 +4677,7 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( display_type, tag_service_id )
 
-        return self._c.execute( 'SELECT 1 FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, tag_id ) ).fetchone() is not None
+        return self._Execute( 'SELECT 1 FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( tag_id, tag_id ) ).fetchone() is not None
 
     def _CacheTagSiblingsRegen( self, tag_service_ids ):
 
@@ -4646,7 +4686,7 @@ class DB( HydrusDB.HydrusDB ):
 
         cache_tag_siblings_lookup_table_name = GenerateTagSiblingsLookupCacheTableName( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id )
 
-        self._c.execute( 'DELETE FROM {};'.format( cache_tag_siblings_lookup_table_name ) )
+        self._Execute( 'DELETE FROM {};'.format( cache_tag_siblings_lookup_table_name ) )
 
         applicable_service_ids = self._CacheTagSiblingsGetApplicableServiceIds( tag_service_id )
 
@@ -4674,7 +4714,7 @@ class DB( HydrusDB.HydrusDB ):
 
-        self._c.executemany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() )
+        self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() )
 
         if tag_service_id in self._service_ids_to_display_application_status:
 
@@ -4700,7 +4740,7 @@ class DB( HydrusDB.HydrusDB ):
 
         tag_ids_to_clear_and_regen.update( self._CacheTagSiblingsGetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, tag_service_id, ideal_tag_ids ) )
 
-        self._c.executemany( 'DELETE FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) )
+        self._ExecuteMany( 'DELETE FROM {} WHERE bad_tag_id = ? OR ideal_tag_id = ?;'.format( cache_tag_siblings_lookup_table_name ), ( ( tag_id, tag_id ) for tag_id in tag_ids_to_clear_and_regen ) )
 
         applicable_tag_service_ids = self._CacheTagSiblingsGetApplicableServiceIds( tag_service_id )
 
@@ -4730,7 +4770,7 @@ class DB( HydrusDB.HydrusDB ):
 
-        self._c.executemany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() )
+        self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) VALUES ( ?, ? );'.format( cache_tag_siblings_lookup_table_name ), tss.GetBadTagsToIdealTags().items() )
 
         if tag_service_id in self._service_ids_to_display_application_status:
 
@@ -4774,11 +4814,11 @@ class DB( HydrusDB.HydrusDB ):
 
         job_key.SetStatusTitle( prefix_string + 'running' )
         job_key.SetVariable( 'popup_text_1', 'errors found so far: ' + HydrusData.ToHumanInt( num_errors ) )
 
-        db_names = [ name for ( index, name, path ) in self._c.execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ]
+        db_names = [ name for ( index, name, path ) in self._Execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ]
 
         for db_name in db_names:
 
-            for ( text, ) in self._c.execute( 'PRAGMA ' + db_name + '.integrity_check;' ):
+            for ( text, ) in self._Execute( 'PRAGMA ' + db_name + '.integrity_check;' ):
 
                ( i_paused, should_quit ) = job_key.WaitIfNeeded()
 
@@ -4913,7 +4953,7 @@ class DB( HydrusDB.HydrusDB ):
 
     def _ClearOrphanTables( self ):
 
-        service_ids = self._STL( self._c.execute( 'SELECT service_id FROM services;' ) )
+        service_ids = self._STL( self._Execute( 'SELECT service_id FROM services;' ) )
 
         table_prefixes = []
 
@@ -4935,8 +4975,8 @@ class DB( HydrusDB.HydrusDB ):
 
         existing_table_names = set()
 
-        existing_table_names.update( self._STS( self._c.execute( 'SELECT name FROM sqlite_master WHERE type = ?;', ( 'table', ) ) ) )
-        existing_table_names.update( self._STS( self._c.execute( 'SELECT name FROM external_master.sqlite_master WHERE type = ?;', ( 'table', ) ) ) )
+        existing_table_names.update( self._STS( self._Execute( 'SELECT name FROM sqlite_master WHERE type = ?;', ( 'table', ) ) ) )
+        existing_table_names.update( self._STS( self._Execute( 'SELECT name FROM external_master.sqlite_master WHERE type = ?;', ( 'table', ) ) ) )
 
         existing_table_names = { name for name in existing_table_names if True in ( name.startswith( table_prefix ) for table_prefix in table_prefixes ) }
 
@@ -4946,7 +4986,7 @@ class DB( HydrusDB.HydrusDB ):
 
         HydrusData.ShowText( 'Dropping ' + table_name )
 
-        self._c.execute( 'DROP table ' + table_name + ';' )
+        self._Execute( 'DROP table ' + table_name + ';' )
 
@@ -4964,26 +5004,26 @@ class DB( HydrusDB.HydrusDB ):
 
         self.modules_services.CreateInitialTables()
         self.modules_services.CreateInitialIndices()
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS alternate_file_groups ( alternates_group_id INTEGER PRIMARY KEY );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS alternate_file_groups ( alternates_group_id INTEGER PRIMARY KEY );' )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS alternate_file_group_members ( alternates_group_id INTEGER, media_id INTEGER UNIQUE, PRIMARY KEY ( alternates_group_id, media_id ) );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS alternate_file_group_members ( alternates_group_id INTEGER, media_id INTEGER UNIQUE, PRIMARY KEY ( alternates_group_id, media_id ) );' )
 
-        self._c.execute( 'CREATE TABLE client_files_locations ( prefix TEXT, location TEXT );' )
+        self._Execute( 'CREATE TABLE client_files_locations ( prefix TEXT, location TEXT );' )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS ideal_client_files_locations ( location TEXT, weight INTEGER );' )
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS ideal_thumbnail_override_location ( location TEXT );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS ideal_client_files_locations ( location TEXT, weight INTEGER );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS ideal_thumbnail_override_location ( location TEXT );' )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS duplicate_files ( media_id INTEGER PRIMARY KEY, king_hash_id INTEGER UNIQUE );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS duplicate_files ( media_id INTEGER PRIMARY KEY, king_hash_id INTEGER UNIQUE );' )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS duplicate_file_members ( media_id INTEGER, hash_id INTEGER UNIQUE, PRIMARY KEY ( media_id, hash_id ) );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS duplicate_file_members ( media_id INTEGER, hash_id INTEGER UNIQUE, PRIMARY KEY ( media_id, hash_id ) );' )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS duplicate_false_positives ( smaller_alternates_group_id INTEGER, larger_alternates_group_id INTEGER, PRIMARY KEY ( smaller_alternates_group_id, larger_alternates_group_id ) );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS duplicate_false_positives ( smaller_alternates_group_id INTEGER, larger_alternates_group_id INTEGER, PRIMARY KEY ( smaller_alternates_group_id, larger_alternates_group_id ) );' )
 
         self._CreateIndex( 'duplicate_false_positives', [ 'larger_alternates_group_id', 'smaller_alternates_group_id' ], unique = True )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS potential_duplicate_pairs ( smaller_media_id INTEGER, larger_media_id INTEGER, distance INTEGER, PRIMARY KEY ( smaller_media_id, larger_media_id ) );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS potential_duplicate_pairs ( smaller_media_id INTEGER, larger_media_id INTEGER, distance INTEGER, PRIMARY KEY ( smaller_media_id, larger_media_id ) );' )
 
         self._CreateIndex( 'potential_duplicate_pairs', [ 'larger_media_id', 'smaller_media_id' ], unique = True )
 
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS confirmed_alternate_pairs ( smaller_media_id INTEGER, larger_media_id INTEGER, PRIMARY KEY ( smaller_media_id, larger_media_id ) );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS confirmed_alternate_pairs ( smaller_media_id INTEGER, larger_media_id INTEGER, PRIMARY KEY ( smaller_media_id, larger_media_id ) );' )
 
         self._CreateIndex( 'confirmed_alternate_pairs', [ 'larger_media_id', 'smaller_media_id' ], unique = True )
 
         self.modules_files_storage.CreateInitialTables()
 
@@ -4995,66 +5035,66 @@ class DB( HydrusDB.HydrusDB ):
 
         self.modules_files_maintenance.CreateInitialTables()
         self.modules_files_maintenance.CreateInitialIndices()
 
-        self._c.execute( 'CREATE TABLE file_notes ( hash_id INTEGER, name_id INTEGER, note_id INTEGER, PRIMARY KEY ( hash_id, name_id ) );' )
+        self._Execute( 'CREATE TABLE file_notes ( hash_id INTEGER, name_id INTEGER, note_id INTEGER, PRIMARY KEY ( hash_id, name_id ) );' )
 
         self._CreateIndex( 'file_notes', [ 'note_id' ] )
         self._CreateIndex( 'file_notes', [ 'name_id' ] )
 
         self.modules_serialisable.CreateInitialTables()
         self.modules_serialisable.CreateInitialIndices()
 
-        self._c.execute( 'CREATE TABLE local_ratings ( service_id INTEGER, hash_id INTEGER, rating REAL, PRIMARY KEY ( service_id, hash_id ) );' )
+        self._Execute( 'CREATE TABLE local_ratings ( service_id INTEGER, hash_id INTEGER, rating REAL, PRIMARY KEY ( service_id, hash_id ) );' )
 
         self._CreateIndex( 'local_ratings', [ 'hash_id' ] )
         self._CreateIndex( 'local_ratings', [ 'rating' ] )
 
-        self._c.execute( 'CREATE TABLE file_modified_timestamps ( hash_id INTEGER PRIMARY KEY, file_modified_timestamp INTEGER );' )
+        self._Execute( 'CREATE TABLE file_modified_timestamps ( hash_id INTEGER PRIMARY KEY, file_modified_timestamp INTEGER );' )
 
         self._CreateIndex( 'file_modified_timestamps', [ 'file_modified_timestamp' ] )
 
-        self._c.execute( 'CREATE TABLE options ( options TEXT_YAML );', )
+        self._Execute( 'CREATE TABLE options ( options TEXT_YAML );', )
 
-        self._c.execute( 'CREATE TABLE recent_tags ( service_id INTEGER, tag_id INTEGER, timestamp INTEGER, PRIMARY KEY ( service_id, tag_id ) );' )
+        self._Execute( 'CREATE TABLE recent_tags ( service_id INTEGER, tag_id INTEGER, timestamp INTEGER, PRIMARY KEY ( service_id, tag_id ) );' )
 
-        self._c.execute( 'CREATE TABLE remote_thumbnails ( service_id INTEGER, hash_id INTEGER, PRIMARY KEY( service_id, hash_id ) );' )
+        self._Execute( 'CREATE TABLE remote_thumbnails ( service_id INTEGER, hash_id INTEGER, PRIMARY KEY( service_id, hash_id ) );' )
 
-        self._c.execute( 'CREATE TABLE service_filenames ( service_id INTEGER, hash_id INTEGER, filename TEXT, PRIMARY KEY ( service_id, hash_id ) );' )
+        self._Execute( 'CREATE TABLE service_filenames ( service_id INTEGER, hash_id INTEGER, filename TEXT, PRIMARY KEY ( service_id, hash_id ) );' )
 
         self._CreateIndex( 'service_filenames', [ 'hash_id' ] )
 
-        self._c.execute( 'CREATE TABLE service_directories ( service_id INTEGER, directory_id INTEGER, num_files INTEGER, total_size INTEGER, note TEXT, PRIMARY KEY ( service_id, directory_id ) );' )
+        self._Execute( 'CREATE TABLE service_directories ( service_id INTEGER, directory_id INTEGER, num_files INTEGER, total_size INTEGER, note TEXT, PRIMARY KEY ( service_id, directory_id ) );' )
 
         self._CreateIndex( 'service_directories', [ 'directory_id' ] )
 
-        self._c.execute( 'CREATE TABLE service_directory_file_map ( service_id INTEGER, directory_id INTEGER, hash_id INTEGER, PRIMARY KEY ( service_id, directory_id, hash_id ) );' )
+        self._Execute( 'CREATE TABLE service_directory_file_map ( service_id INTEGER, directory_id INTEGER, hash_id INTEGER, PRIMARY KEY ( service_id, directory_id, hash_id ) );' )
 
         self._CreateIndex( 'service_directory_file_map', [ 'directory_id' ] )
         self._CreateIndex( 'service_directory_file_map', [ 'hash_id' ] )
 
-        self._c.execute( 'CREATE TABLE service_info ( service_id INTEGER, info_type INTEGER, info INTEGER, PRIMARY KEY ( service_id, info_type ) );' )
+        self._Execute( 'CREATE TABLE service_info ( service_id INTEGER, info_type INTEGER, info INTEGER, PRIMARY KEY ( service_id, info_type ) );' )
 
-        self._c.execute( 'CREATE TABLE statuses ( status_id INTEGER PRIMARY KEY, status TEXT UNIQUE );' )
+        self._Execute( 'CREATE TABLE statuses ( status_id INTEGER PRIMARY KEY, status TEXT UNIQUE );' )
 
-        self._c.execute( 'CREATE TABLE tag_parents ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' )
+        self._Execute( 'CREATE TABLE tag_parents ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' )
 
         self._CreateIndex( 'tag_parents', [ 'service_id', 'parent_tag_id' ] )
 
-        self._c.execute( 'CREATE TABLE tag_parent_petitions ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' )
+        self._Execute( 'CREATE TABLE tag_parent_petitions ( service_id INTEGER, child_tag_id INTEGER, parent_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, child_tag_id, parent_tag_id, status ) );' )
 
         self._CreateIndex( 'tag_parent_petitions', [ 'service_id', 'parent_tag_id' ] )
 
-        self._c.execute( 'CREATE TABLE tag_parent_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' )
+        self._Execute( 'CREATE TABLE tag_parent_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' )
 
-        self._c.execute( 'CREATE TABLE tag_siblings ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' )
+        self._Execute( 'CREATE TABLE tag_siblings ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' )
 
         self._CreateIndex( 'tag_siblings', [ 'service_id', 'good_tag_id' ] )
 
-        self._c.execute( 'CREATE TABLE tag_sibling_petitions ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' )
+        self._Execute( 'CREATE TABLE tag_sibling_petitions ( service_id INTEGER, bad_tag_id INTEGER, good_tag_id INTEGER, status INTEGER, reason_id INTEGER, PRIMARY KEY ( service_id, bad_tag_id, status ) );' )
 
         self._CreateIndex( 'tag_sibling_petitions', [ 'service_id', 'good_tag_id' ] )
 
-        self._c.execute( 'CREATE TABLE tag_sibling_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' )
+        self._Execute( 'CREATE TABLE tag_sibling_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' )
 
-        self._c.execute( 'CREATE TABLE url_map ( hash_id INTEGER, url_id INTEGER, PRIMARY KEY ( hash_id, url_id ) );' )
+        self._Execute( 'CREATE TABLE url_map ( hash_id INTEGER, url_id INTEGER, PRIMARY KEY ( hash_id, url_id ) );' )
 
         self._CreateIndex( 'url_map', [ 'url_id' ] )
 
-        self._c.execute( 'CREATE TABLE file_viewing_stats ( hash_id INTEGER PRIMARY KEY, preview_views INTEGER, preview_viewtime INTEGER, media_views INTEGER, media_viewtime INTEGER );' )
+        self._Execute( 'CREATE TABLE file_viewing_stats ( hash_id INTEGER PRIMARY KEY, preview_views INTEGER, preview_viewtime INTEGER, media_views INTEGER, media_viewtime INTEGER );' )
 
         self._CreateIndex( 'file_viewing_stats', [ 'preview_views' ] )
         self._CreateIndex( 'file_viewing_stats', [ 'preview_viewtime' ] )
         self._CreateIndex( 'file_viewing_stats', [ 'media_views' ] )
         self._CreateIndex( 'file_viewing_stats', [ 'media_viewtime' ] )
 
-        self._c.execute( 'CREATE TABLE version ( version INTEGER );' )
+        self._Execute( 'CREATE TABLE version ( version INTEGER );' )
 
        # caches
 
@@ -5091,11 +5131,11 @@ class DB( HydrusDB.HydrusDB ):
 
         for prefix in HydrusData.IterateHexPrefixes():
 
-            self._c.execute( 'INSERT INTO client_files_locations ( prefix, location ) VALUES ( ?, ? );', ( 'f' + prefix, location ) )
-            self._c.execute( 'INSERT INTO client_files_locations ( prefix, location ) VALUES ( ?, ? );', ( 't' + prefix, location ) )
+            self._Execute( 'INSERT INTO client_files_locations ( prefix, location ) VALUES ( ?, ? );', ( 'f' + prefix, location ) )
+            self._Execute( 'INSERT INTO client_files_locations ( prefix, location ) VALUES ( ?, ? );', ( 't' + prefix, location ) )
 
-        self._c.execute( 'INSERT INTO ideal_client_files_locations ( location, weight ) VALUES ( ?, ? );', ( location, 1 ) )
+        self._Execute( 'INSERT INTO ideal_client_files_locations ( location, weight ) VALUES ( ?, ?
);', ( location, 1 ) ) init_service_info = [] @@ -5117,7 +5157,7 @@ class DB( HydrusDB.HydrusDB ): self._AddService( service_key, service_type, name, dictionary ) - self._c.executemany( 'INSERT INTO yaml_dumps VALUES ( ?, ?, ? );', ( ( ClientDBSerialisable.YAML_DUMP_ID_IMAGEBOARD, name, imageboards ) for ( name, imageboards ) in ClientDefaults.GetDefaultImageboards() ) ) + self._ExecuteMany( 'INSERT INTO yaml_dumps VALUES ( ?, ?, ? );', ( ( ClientDBSerialisable.YAML_DUMP_ID_IMAGEBOARD, name, imageboards ) for ( name, imageboards ) in ClientDefaults.GetDefaultImageboards() ) ) new_options = ClientOptions.ClientOptions() @@ -5223,11 +5263,11 @@ class DB( HydrusDB.HydrusDB ): self.modules_serialisable.SetJSONDump( column_list_manager ) - self._c.execute( 'INSERT INTO namespaces ( namespace_id, namespace ) VALUES ( ?, ? );', ( 1, '' ) ) + self._Execute( 'INSERT INTO namespaces ( namespace_id, namespace ) VALUES ( ?, ? );', ( 1, '' ) ) - self._c.execute( 'INSERT INTO version ( version ) VALUES ( ? );', ( HC.SOFTWARE_VERSION, ) ) + self._Execute( 'INSERT INTO version ( version ) VALUES ( ? );', ( HC.SOFTWARE_VERSION, ) ) - self._c.executemany( 'INSERT INTO json_dumps_named VALUES ( ?, ?, ?, ?, ? );', ClientDefaults.GetDefaultScriptRows() ) + self._ExecuteMany( 'INSERT INTO json_dumps_named VALUES ( ?, ?, ?, ?, ? );', ClientDefaults.GetDefaultScriptRows() ) def _CreateDBCaches( self ): @@ -5254,22 +5294,22 @@ class DB( HydrusDB.HydrusDB ): if media_min is not None: - self._c.execute( 'UPDATE file_viewing_stats SET media_views = CAST( media_viewtime / ? AS INTEGER ) WHERE media_views * ? > media_viewtime;', ( media_min, media_min ) ) + self._Execute( 'UPDATE file_viewing_stats SET media_views = CAST( media_viewtime / ? AS INTEGER ) WHERE media_views * ? > media_viewtime;', ( media_min, media_min ) ) if media_max is not None: - self._c.execute( 'UPDATE file_viewing_stats SET media_viewtime = media_views * ? 
WHERE media_viewtime > media_views * ?;', ( media_max, media_max ) ) + self._Execute( 'UPDATE file_viewing_stats SET media_viewtime = media_views * ? WHERE media_viewtime > media_views * ?;', ( media_max, media_max ) ) if preview_min is not None: - self._c.execute( 'UPDATE file_viewing_stats SET preview_views = CAST( preview_viewtime / ? AS INTEGER ) WHERE preview_views * ? > preview_viewtime;', ( preview_min, preview_min ) ) + self._Execute( 'UPDATE file_viewing_stats SET preview_views = CAST( preview_viewtime / ? AS INTEGER ) WHERE preview_views * ? > preview_viewtime;', ( preview_min, preview_min ) ) if preview_max is not None: - self._c.execute( 'UPDATE file_viewing_stats SET preview_viewtime = preview_views * ? WHERE preview_viewtime > preview_views * ?;', ( preview_max, preview_max ) ) + self._Execute( 'UPDATE file_viewing_stats SET preview_viewtime = preview_views * ? WHERE preview_viewtime > preview_views * ?;', ( preview_max, preview_max ) ) @@ -5354,7 +5394,7 @@ class DB( HydrusDB.HydrusDB ): tag_service_ids = self.modules_services.GetServiceIds( HC.REAL_TAG_SERVICES ) - with HydrusDB.TemporaryIntegerTable( self._c, existing_hash_ids, 'hash_id' ) as temp_hash_id_table_name: + with self._MakeTemporaryIntegerTable( existing_hash_ids, 'hash_id' ) as temp_hash_id_table_name: for tag_service_id in tag_service_ids: @@ -5398,7 +5438,7 @@ class DB( HydrusDB.HydrusDB ): # push the info updates, notify - self._c.executemany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', service_info_updates ) + self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? 
AND info_type = ?;', service_info_updates ) def _DeletePending( self, service_key ): @@ -5411,14 +5451,14 @@ class DB( HydrusDB.HydrusDB ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( service_id ) - pending_rescinded_mappings_ids = list( HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT tag_id, hash_id FROM ' + pending_mappings_table_name + ';' ) ).items() ) + pending_rescinded_mappings_ids = list( HydrusData.BuildKeyToListDict( self._Execute( 'SELECT tag_id, hash_id FROM ' + pending_mappings_table_name + ';' ) ).items() ) - petitioned_rescinded_mappings_ids = list( HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT tag_id, hash_id FROM ' + petitioned_mappings_table_name + ';' ) ).items() ) + petitioned_rescinded_mappings_ids = list( HydrusData.BuildKeyToListDict( self._Execute( 'SELECT tag_id, hash_id FROM ' + petitioned_mappings_table_name + ';' ) ).items() ) self._UpdateMappings( service_id, pending_rescinded_mappings_ids = pending_rescinded_mappings_ids, petitioned_rescinded_mappings_ids = petitioned_rescinded_mappings_ids ) - self._c.execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, ) ) elif service.GetServiceType() in ( HC.FILE_REPOSITORY, HC.IPFS ): @@ -5481,9 +5521,9 @@ class DB( HydrusDB.HydrusDB ): # however, this seemed to cause some immense temp drive space bloat when dropping the mapping tables, as there seems to be a trigger/foreign reference check for every row to be deleted # so now we just blat all tables and trust in the Lord that we don't forget to add any new ones in future - self._c.execute( 
'DELETE FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM recent_tags WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM service_info WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM recent_tags WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM service_info WHERE service_id = ?;', ( service_id, ) ) self._DeleteServiceDropFiles( service_id, service_type ) @@ -5505,8 +5545,8 @@ class DB( HydrusDB.HydrusDB ): interested_service_ids.discard( service_id ) # lmao, not any more! # we have to do this after the above since we need to know _what_ to regen with these new application rules in a sec - self._c.execute( 'DELETE FROM tag_sibling_application WHERE master_service_id = ? OR application_service_id = ?;', ( service_id, service_id ) ) - self._c.execute( 'DELETE FROM tag_parent_application WHERE master_service_id = ? OR application_service_id = ?;', ( service_id, service_id ) ) + self._Execute( 'DELETE FROM tag_sibling_application WHERE master_service_id = ? OR application_service_id = ?;', ( service_id, service_id ) ) + self._Execute( 'DELETE FROM tag_parent_application WHERE master_service_id = ? OR application_service_id = ?;', ( service_id, service_id ) ) self._service_ids_to_sibling_applicable_service_ids = None self._service_ids_to_sibling_interested_service_ids = None @@ -5551,22 +5591,22 @@ class DB( HydrusDB.HydrusDB ): directory_id = self.modules_texts.GetTextId( dirname ) - self._c.execute( 'DELETE FROM service_directories WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) ) - self._c.execute( 'DELETE FROM service_directory_file_map WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) ) + self._Execute( 'DELETE FROM service_directories WHERE service_id = ? 
AND directory_id = ?;', ( service_id, directory_id ) ) + self._Execute( 'DELETE FROM service_directory_file_map WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) ) def _DeleteServiceDropFiles( self, service_id, service_type ): if service_type == HC.FILE_REPOSITORY: - self._c.execute( 'DELETE FROM remote_thumbnails WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM remote_thumbnails WHERE service_id = ?;', ( service_id, ) ) if service_type == HC.IPFS: - self._c.execute( 'DELETE FROM service_filenames WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM service_directories WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM service_directory_file_map WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM service_filenames WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM service_directories WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM service_directory_file_map WHERE service_id = ?;', ( service_id, ) ) if service_type in HC.SPECIFIC_FILE_SERVICES: @@ -5604,16 +5644,16 @@ class DB( HydrusDB.HydrusDB ): def _DeleteServiceDropTagParents( self, service_id ): - self._c.execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, ) ) self._CacheTagParentsDrop( service_id ) def _DeleteServiceDropTagSiblings( self, service_id ): - self._c.execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_sibling_petitions 
WHERE service_id = ?;', ( service_id, ) ) self._CacheTagSiblingsDrop( service_id ) @@ -5643,17 +5683,17 @@ class DB( HydrusDB.HydrusDB ): predicates_string = '' - self._c.execute( 'DELETE FROM service_info{};'.format( predicates_string ) ) + self._Execute( 'DELETE FROM service_info{};'.format( predicates_string ) ) self.pub_after_job( 'notify_new_pending' ) def _DeleteTagParents( self, service_id, pairs, defer_cache_update = False ): - self._c.executemany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - self._c.executemany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_parents WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( ( service_id, child_tag_id, parent_tag_id ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) - self._c.executemany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_DELETED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parents ( service_id, child_tag_id, parent_tag_id, status ) VALUES ( ?, ?, ?, ? 
);', ( ( service_id, child_tag_id, parent_tag_id, HC.CONTENT_STATUS_DELETED ) for ( child_tag_id, parent_tag_id ) in pairs ) ) tag_ids = set( itertools.chain.from_iterable( pairs ) ) @@ -5665,10 +5705,10 @@ class DB( HydrusDB.HydrusDB ): def _DeleteTagSiblings( self, service_id, pairs, defer_cache_update = False ): - self._c.executemany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ?;', ( ( service_id, bad_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - self._c.executemany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_siblings WHERE service_id = ? AND bad_tag_id = ?;', ( ( service_id, bad_tag_id ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + self._ExecuteMany( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( ( service_id, bad_tag_id, HC.CONTENT_STATUS_PETITIONED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) - self._c.executemany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_DELETED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_siblings ( service_id, bad_tag_id, good_tag_id, status ) VALUES ( ?, ?, ?, ? );', ( ( service_id, bad_tag_id, good_tag_id, HC.CONTENT_STATUS_DELETED ) for ( bad_tag_id, good_tag_id ) in pairs ) ) tag_ids = set( itertools.chain.from_iterable( pairs ) ) @@ -5763,7 +5803,7 @@ class DB( HydrusDB.HydrusDB ): if len( inserts ) > 0: - self._c.executemany( 'INSERT OR IGNORE INTO potential_duplicate_pairs ( smaller_media_id, larger_media_id, distance ) VALUES ( ?, ?, ? );', inserts ) + self._ExecuteMany( 'INSERT OR IGNORE INTO potential_duplicate_pairs ( smaller_media_id, larger_media_id, distance ) VALUES ( ?, ?, ? 
);', inserts ) @@ -5777,7 +5817,7 @@ class DB( HydrusDB.HydrusDB ): smaller_alternates_group_id = min( alternates_group_id_a, alternates_group_id_b ) larger_alternates_group_id = max( alternates_group_id_a, alternates_group_id_b ) - result = self._c.execute( 'SELECT 1 FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? AND larger_alternates_group_id = ?;', ( smaller_alternates_group_id, larger_alternates_group_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? AND larger_alternates_group_id = ?;', ( smaller_alternates_group_id, larger_alternates_group_id ) ).fetchone() false_positive_pair_found = result is not None @@ -5786,16 +5826,13 @@ class DB( HydrusDB.HydrusDB ): def _DuplicatesClearAllFalsePositiveRelations( self, alternates_group_id ): - self._c.execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id, alternates_group_id ) ) + self._Execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id, alternates_group_id ) ) media_ids = self._DuplicatesGetAlternateMediaIds( alternates_group_id ) - for media_id in media_ids: - - hash_ids = self._DuplicatesGetDuplicateHashIds( media_id ) - - self.modules_similar_files.ResetSearch( hash_ids ) - + hash_ids = self._DuplicatesGetDuplicatesHashIds( media_ids ) + + self.modules_similar_files.ResetSearch( hash_ids ) def _DuplicatesClearAllFalsePositiveRelationsFromHashes( self, hashes ): @@ -5827,19 +5864,16 @@ class DB( HydrusDB.HydrusDB ): smaller_alternates_group_id = min( alternates_group_id_a, alternates_group_id_b ) larger_alternates_group_id = max( alternates_group_id_a, alternates_group_id_b ) - self._c.execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? 
AND larger_alternates_group_id = ?;', ( smaller_alternates_group_id, larger_alternates_group_id ) ) + self._Execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? AND larger_alternates_group_id = ?;', ( smaller_alternates_group_id, larger_alternates_group_id ) ) for alternates_group_id in alternates_group_ids: media_ids = self._DuplicatesGetAlternateMediaIds( alternates_group_id ) - for media_id in media_ids: - - hash_ids = self._DuplicatesGetDuplicateHashIds( media_id ) - - self.modules_similar_files.ResetSearch( hash_ids ) - + hash_ids = self._DuplicatesGetDuplicatesHashIds( media_ids ) + + self.modules_similar_files.ResetSearch( hash_ids ) @@ -5878,13 +5912,13 @@ class DB( HydrusDB.HydrusDB ): all_media_ids.update( media_ids_a ) all_media_ids.update( media_ids_b ) - with HydrusDB.TemporaryIntegerTable( self._c, all_media_ids, 'media_id' ) as temp_media_ids_table_name: + with self._MakeTemporaryIntegerTable( all_media_ids, 'media_id' ) as temp_media_ids_table_name: # keep these separate--older sqlite can't do cross join to an OR ON # temp media ids to potential pairs - potential_duplicate_pairs = set( self._c.execute( 'SELECT smaller_media_id, larger_media_id FROM {} CROSS JOIN potential_duplicate_pairs ON ( smaller_media_id = media_id );'.format( temp_media_ids_table_name ) ).fetchall() ) - potential_duplicate_pairs.update( self._c.execute( 'SELECT smaller_media_id, larger_media_id FROM {} CROSS JOIN potential_duplicate_pairs ON ( larger_media_id = media_id );'.format( temp_media_ids_table_name ) ).fetchall() ) + potential_duplicate_pairs = set( self._Execute( 'SELECT smaller_media_id, larger_media_id FROM {} CROSS JOIN potential_duplicate_pairs ON ( smaller_media_id = media_id );'.format( temp_media_ids_table_name ) ).fetchall() ) + potential_duplicate_pairs.update( self._Execute( 'SELECT smaller_media_id, larger_media_id FROM {} CROSS JOIN potential_duplicate_pairs ON ( larger_media_id = media_id );'.format( 
temp_media_ids_table_name ) ).fetchall() ) deletees = [] @@ -5899,7 +5933,7 @@ class DB( HydrusDB.HydrusDB ): if len( deletees ) > 0: - self._c.executemany( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', deletees ) + self._ExecuteMany( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', deletees ) @@ -5917,20 +5951,15 @@ class DB( HydrusDB.HydrusDB ): media_ids = set() - for ( smaller_media_id, larger_media_id ) in self._c.execute( 'SELECT smaller_media_id, larger_media_id FROM potential_duplicate_pairs;' ): + for ( smaller_media_id, larger_media_id ) in self._Execute( 'SELECT smaller_media_id, larger_media_id FROM potential_duplicate_pairs;' ): media_ids.add( smaller_media_id ) media_ids.add( larger_media_id ) - hash_ids = set() + hash_ids = self._DuplicatesGetDuplicatesHashIds( media_ids ) - for media_id in media_ids: - - hash_ids.update( self._DuplicatesGetDuplicateHashIds( media_id ) ) - - - self._c.execute( 'DELETE FROM potential_duplicate_pairs;' ) + self._Execute( 'DELETE FROM potential_duplicate_pairs;' ) self.modules_similar_files.ResetSearch( hash_ids ) @@ -5969,12 +5998,12 @@ class DB( HydrusDB.HydrusDB ): self._DuplicatesRemoveAlternateMember( media_id ) - self._c.execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( media_id, media_id ) ) + self._Execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? 
OR larger_media_id = ?;', ( media_id, media_id ) ) hash_ids = self._DuplicatesGetDuplicateHashIds( media_id ) - self._c.execute( 'DELETE FROM duplicate_file_members WHERE media_id = ?;', ( media_id, ) ) - self._c.execute( 'DELETE FROM duplicate_files WHERE media_id = ?;', ( media_id, ) ) + self._Execute( 'DELETE FROM duplicate_file_members WHERE media_id = ?;', ( media_id, ) ) + self._Execute( 'DELETE FROM duplicate_files WHERE media_id = ?;', ( media_id, ) ) self.modules_similar_files.ResetSearch( hash_ids ) @@ -6018,7 +6047,7 @@ class DB( HydrusDB.HydrusDB ): def _DuplicatesGetAlternatesGroupId( self, media_id, do_not_create = False ): - result = self._c.execute( 'SELECT alternates_group_id FROM alternate_file_group_members WHERE media_id = ?;', ( media_id, ) ).fetchone() + result = self._Execute( 'SELECT alternates_group_id FROM alternate_file_group_members WHERE media_id = ?;', ( media_id, ) ).fetchone() if result is None: @@ -6027,11 +6056,11 @@ class DB( HydrusDB.HydrusDB ): return None - self._c.execute( 'INSERT INTO alternate_file_groups DEFAULT VALUES;' ) + self._Execute( 'INSERT INTO alternate_file_groups DEFAULT VALUES;' ) - alternates_group_id = self._c.lastrowid + alternates_group_id = self._GetLastRowId() - self._c.execute( 'INSERT INTO alternate_file_group_members ( alternates_group_id, media_id ) VALUES ( ?, ? );', ( alternates_group_id, media_id ) ) + self._Execute( 'INSERT INTO alternate_file_group_members ( alternates_group_id, media_id ) VALUES ( ?, ? 
);', ( alternates_group_id, media_id ) ) else: @@ -6043,7 +6072,7 @@ class DB( HydrusDB.HydrusDB ): def _DuplicatesGetAlternateMediaIds( self, alternates_group_id ): - media_ids = self._STS( self._c.execute( 'SELECT media_id FROM alternate_file_group_members WHERE alternates_group_id = ?;', ( alternates_group_id, ) ) ) + media_ids = self._STS( self._Execute( 'SELECT media_id FROM alternate_file_group_members WHERE alternates_group_id = ?;', ( alternates_group_id, ) ) ) return media_ids @@ -6091,13 +6120,30 @@ class DB( HydrusDB.HydrusDB ): if file_service_id is None or file_service_id == self.modules_services.combined_file_service_id: - hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM duplicate_file_members WHERE media_id = ?;', ( media_id, ) ) ) + hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM duplicate_file_members WHERE media_id = ?;', ( media_id, ) ) ) else: table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( file_service_id, 'duplicate_file_members' ) - hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} WHERE media_id = ?;'.format( table_join ), ( media_id, ) ) ) + hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} WHERE media_id = ?;'.format( table_join ), ( media_id, ) ) ) + + + return hash_ids + + + def _DuplicatesGetDuplicatesHashIds( self, media_ids, file_service_id = None ): + + with self._MakeTemporaryIntegerTable( media_ids, 'media_id' ) as temp_media_ids_table_name: + + table_join = '{} CROSS JOIN {} USING ( media_id )'.format( temp_media_ids_table_name, 'duplicate_file_members' ) + + if file_service_id is not None and file_service_id != self.modules_services.combined_file_service_id: + + table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( file_service_id, table_join ) + + + hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) ) return hash_ids @@ -6107,7 +6153,7 @@ class DB( HydrusDB.HydrusDB ): false_positive_alternates_group_ids = set() - 
results = self._c.execute( 'SELECT smaller_alternates_group_id, larger_alternates_group_id FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id, alternates_group_id ) ).fetchall() + results = self._Execute( 'SELECT smaller_alternates_group_id, larger_alternates_group_id FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id, alternates_group_id ) ).fetchall() for ( smaller_alternates_group_id, larger_alternates_group_id ) in results: @@ -6134,7 +6180,7 @@ class DB( HydrusDB.HydrusDB ): table_join = self._DuplicatesGetPotentialDuplicatePairsTableJoinOnFileService( file_service_key ) - ( num_potentials, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT smaller_media_id, larger_media_id FROM {} WHERE smaller_media_id = ? OR larger_media_id = ? );'.format( table_join ), ( media_id, media_id, ) ).fetchone() + ( num_potentials, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT smaller_media_id, larger_media_id FROM {} WHERE smaller_media_id = ? OR larger_media_id = ? );'.format( table_join ), ( media_id, media_id, ) ).fetchone() if num_potentials > 0: @@ -6175,7 +6221,7 @@ class DB( HydrusDB.HydrusDB ): smaller_media_id = min( media_id, alt_media_id ) larger_media_id = max( media_id, alt_media_id ) - result = self._c.execute( 'SELECT 1 FROM confirmed_alternate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM confirmed_alternate_pairs WHERE smaller_media_id = ? 
AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ).fetchone() if result is not None: @@ -6261,7 +6307,7 @@ class DB( HydrusDB.HydrusDB ): if alternates_group_id is not None: - alternates_media_ids = self._STS( self._c.execute( 'SELECT media_id FROM alternate_file_group_members WHERE alternates_group_id = ?;', ( alternates_group_id, ) ) ) + alternates_media_ids = self._STS( self._Execute( 'SELECT media_id FROM alternate_file_group_members WHERE alternates_group_id = ?;', ( alternates_group_id, ) ) ) alternates_media_ids.discard( media_id ) @@ -6315,7 +6361,7 @@ class DB( HydrusDB.HydrusDB ): table_join = self._DuplicatesGetPotentialDuplicatePairsTableJoinOnFileService( file_service_key ) - for ( smaller_media_id, larger_media_id ) in self._c.execute( 'SELECT smaller_media_id, larger_media_id FROM {} WHERE smaller_media_id = ? OR larger_media_id = ?;'.format( table_join ), ( media_id, media_id ) ).fetchall(): + for ( smaller_media_id, larger_media_id ) in self._Execute( 'SELECT smaller_media_id, larger_media_id FROM {} WHERE smaller_media_id = ? 
OR larger_media_id = ?;'.format( table_join ), ( media_id, media_id ) ).fetchall(): if smaller_media_id != media_id: @@ -6394,7 +6440,7 @@ class DB( HydrusDB.HydrusDB ): query = 'SELECT smaller_alternates_group_id, larger_alternates_group_id FROM duplicate_false_positives;' - for ( alternates_group_id_a, alternates_group_id_b ) in self._c.execute( query ): + for ( alternates_group_id_a, alternates_group_id_b ) in self._Execute( query ): alternates_group_ids_to_false_positives[ alternates_group_id_a ].append( alternates_group_id_b ) alternates_group_ids_to_false_positives[ alternates_group_id_b ].append( alternates_group_id_a ) @@ -6437,10 +6483,7 @@ class DB( HydrusDB.HydrusDB ): media_ids = self._DuplicatesGetAlternateMediaIds( alternates_group_id ) - for media_id in media_ids: - - hash_ids.update( self._DuplicatesGetDuplicateHashIds( media_id, file_service_id = file_service_id ) ) - + hash_ids = self._DuplicatesGetDuplicatesHashIds( media_ids, file_service_id = file_service_id ) @@ -6450,7 +6493,7 @@ class DB( HydrusDB.HydrusDB ): query = 'SELECT alternates_group_id, COUNT( * ) FROM alternate_file_group_members GROUP BY alternates_group_id;' - results = self._c.execute( query ).fetchall() + results = self._Execute( query ).fetchall() for ( alternates_group_id, count ) in results: @@ -6498,7 +6541,7 @@ class DB( HydrusDB.HydrusDB ): media_ids = [] - for ( media_id, count ) in self._c.execute( query ): + for ( media_id, count ) in self._Execute( query ): count -= 1 @@ -6508,9 +6551,7 @@ class DB( HydrusDB.HydrusDB ): - select_statement = 'SELECT hash_id FROM duplicate_file_members WHERE media_id = ?;' - - hash_ids = self._STS( self._ExecuteManySelectSingleParam( select_statement, media_ids ) ) + hash_ids = self._DuplicatesGetDuplicatesHashIds( media_ids, file_service_id = file_service_id ) elif dupe_type == HC.DUPLICATE_POTENTIAL: @@ -6523,24 +6564,19 @@ class DB( HydrusDB.HydrusDB ): media_ids_to_counts = collections.Counter() - for ( media_id, count ) in 
self._c.execute( smaller_query ): + for ( media_id, count ) in self._Execute( smaller_query ): media_ids_to_counts[ media_id ] += count - for ( media_id, count ) in self._c.execute( larger_query ): + for ( media_id, count ) in self._Execute( larger_query ): media_ids_to_counts[ media_id ] += count media_ids = [ media_id for ( media_id, count ) in media_ids_to_counts.items() if filter_func( count ) ] - hash_ids = set() - - for media_id in media_ids: - - hash_ids.update( self._DuplicatesGetDuplicateHashIds( media_id, file_service_id = file_service_id ) ) - + hash_ids = self._DuplicatesGetDuplicatesHashIds( media_ids, file_service_id = file_service_id ) return hash_ids @@ -6548,14 +6584,14 @@ class DB( HydrusDB.HydrusDB ): def _DuplicatesGetKingHashId( self, media_id ): - ( king_hash_id, ) = self._c.execute( 'SELECT king_hash_id FROM duplicate_files WHERE media_id = ?;', ( media_id, ) ).fetchone() + ( king_hash_id, ) = self._Execute( 'SELECT king_hash_id FROM duplicate_files WHERE media_id = ?;', ( media_id, ) ).fetchone() return king_hash_id def _DuplicatesGetMediaId( self, hash_id, do_not_create = False ): - result = self._c.execute( 'SELECT media_id FROM duplicate_file_members WHERE hash_id = ?;', ( hash_id, ) ).fetchone() + result = self._Execute( 'SELECT media_id FROM duplicate_file_members WHERE hash_id = ?;', ( hash_id, ) ).fetchone() if result is None: @@ -6564,11 +6600,11 @@ class DB( HydrusDB.HydrusDB ): return None - self._c.execute( 'INSERT INTO duplicate_files ( king_hash_id ) VALUES ( ? );', ( hash_id, ) ) + self._Execute( 'INSERT INTO duplicate_files ( king_hash_id ) VALUES ( ? );', ( hash_id, ) ) - media_id = self._c.lastrowid + media_id = self._GetLastRowId() - self._c.execute( 'INSERT INTO duplicate_file_members ( media_id, hash_id ) VALUES ( ?, ? );', ( media_id, hash_id ) ) + self._Execute( 'INSERT INTO duplicate_file_members ( media_id, hash_id ) VALUES ( ?, ? 
);', ( media_id, hash_id ) ) else: @@ -6627,7 +6663,7 @@ class DB( HydrusDB.HydrusDB ): is_complicated_search = False - with HydrusDB.TemporaryIntegerTable( self._c, [], 'hash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( [], 'hash_id' ) as temp_table_name: # first we get a sample of current potential pairs in the db, given our limiting search context @@ -6653,7 +6689,7 @@ class DB( HydrusDB.HydrusDB ): preferred_hash_ids = query_hash_ids - self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) ) self._AnalyzeTempTable( temp_table_name ) @@ -6663,7 +6699,7 @@ class DB( HydrusDB.HydrusDB ): potential_media_ids = set() # distinct important here for the search results table join - for ( smaller_media_id, larger_media_id ) in self._c.execute( 'SELECT DISTINCT smaller_media_id, larger_media_id FROM {};'.format( table_join ) ): + for ( smaller_media_id, larger_media_id ) in self._Execute( 'SELECT DISTINCT smaller_media_id, larger_media_id FROM {};'.format( table_join ) ): potential_media_ids.add( smaller_media_id ) potential_media_ids.add( larger_media_id ) @@ -6725,7 +6761,7 @@ class DB( HydrusDB.HydrusDB ): file_service_id = self.modules_services.GetServiceId( file_service_key ) - with HydrusDB.TemporaryIntegerTable( self._c, [], 'hash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( [], 'hash_id' ) as temp_table_name: allowed_hash_ids = None preferred_hash_ids = None @@ -6747,7 +6783,7 @@ class DB( HydrusDB.HydrusDB ): preferred_hash_ids = query_hash_ids - self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? 
);'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) ) self._AnalyzeTempTable( temp_table_name ) @@ -6755,7 +6791,7 @@ class DB( HydrusDB.HydrusDB ): # distinct important here for the search results table join - result = self._c.execute( 'SELECT DISTINCT smaller_media_id, larger_media_id, distance FROM {} LIMIT 2500;'.format( table_join ) ).fetchall() + result = self._Execute( 'SELECT DISTINCT smaller_media_id, larger_media_id, distance FROM {} LIMIT 2500;'.format( table_join ) ).fetchall() MAX_BATCH_SIZE = HG.client_controller.new_options.GetInteger( 'duplicate_filter_max_batch_size' ) @@ -6868,7 +6904,7 @@ class DB( HydrusDB.HydrusDB ): file_service_key = file_search_context.GetFileServiceKey() - with HydrusDB.TemporaryIntegerTable( self._c, [], 'hash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( [], 'hash_id' ) as temp_table_name: if file_search_context.IsJustSystemEverything() or file_search_context.HasNoPredicates(): @@ -6878,7 +6914,7 @@ class DB( HydrusDB.HydrusDB ): query_hash_ids = self._GetHashIdsFromQuery( file_search_context, apply_implicit_limit = False ) - self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? 
);'.format( temp_table_name ), ( ( hash_id, ) for hash_id in query_hash_ids ) ) self._AnalyzeTempTable( temp_table_name ) @@ -6886,7 +6922,7 @@ class DB( HydrusDB.HydrusDB ): # distinct important here for the search results table join - ( potential_duplicates_count, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT smaller_media_id, larger_media_id FROM {} );'.format( table_join ) ).fetchone() + ( potential_duplicates_count, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT smaller_media_id, larger_media_id FROM {} );'.format( table_join ) ).fetchone() return potential_duplicates_count @@ -6916,7 +6952,7 @@ class DB( HydrusDB.HydrusDB ): smaller_media_id = min( media_id_a, media_id_b ) larger_media_id = max( media_id_a, media_id_b ) - result = self._c.execute( 'SELECT 1 FROM confirmed_alternate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM confirmed_alternate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ).fetchone() return result is not None @@ -6959,26 +6995,26 @@ class DB( HydrusDB.HydrusDB ): smaller_alternates_group_id = min( alternates_group_id, mergee_alternates_group_id ) larger_alternates_group_id = max( alternates_group_id, mergee_alternates_group_id ) - self._c.execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? AND larger_alternates_group_id = ?;', ( smaller_alternates_group_id, larger_alternates_group_id ) ) + self._Execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? AND larger_alternates_group_id = ?;', ( smaller_alternates_group_id, larger_alternates_group_id ) ) self._DuplicatesSetAlternates( superior_media_id, mergee_media_id ) - self._c.execute( 'UPDATE duplicate_file_members SET media_id = ? 
WHERE media_id = ?;', ( superior_media_id, mergee_media_id ) ) + self._Execute( 'UPDATE duplicate_file_members SET media_id = ? WHERE media_id = ?;', ( superior_media_id, mergee_media_id ) ) smaller_media_id = min( superior_media_id, mergee_media_id ) larger_media_id = max( superior_media_id, mergee_media_id ) # ensure the potential merge pair is gone - self._c.execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ) + self._Execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ) # now merge potentials from the old to the new--however this has complicated tests to stop confirmed alts and so on, so can't just update ids - existing_potential_info_of_mergee_media_id = self._c.execute( 'SELECT smaller_media_id, larger_media_id, distance FROM potential_duplicate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( mergee_media_id, mergee_media_id ) ).fetchall() + existing_potential_info_of_mergee_media_id = self._Execute( 'SELECT smaller_media_id, larger_media_id, distance FROM potential_duplicate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( mergee_media_id, mergee_media_id ) ).fetchall() - self._c.execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( mergee_media_id, mergee_media_id ) ) + self._Execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( mergee_media_id, mergee_media_id ) ) for ( smaller_media_id, larger_media_id, distance ) in existing_potential_info_of_mergee_media_id: @@ -7000,24 +7036,24 @@ class DB( HydrusDB.HydrusDB ): # ensure any previous confirmed alt pair is gone - self._c.execute( 'DELETE FROM confirmed_alternate_pairs WHERE smaller_media_id = ? 
AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ) + self._Execute( 'DELETE FROM confirmed_alternate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ) # now merge confirmed alts from the old to the new - self._c.execute( 'UPDATE OR IGNORE confirmed_alternate_pairs SET smaller_media_id = ? WHERE smaller_media_id = ?;', ( superior_media_id, mergee_media_id ) ) - self._c.execute( 'UPDATE OR IGNORE confirmed_alternate_pairs SET larger_media_id = ? WHERE larger_media_id = ?;', ( superior_media_id, mergee_media_id ) ) + self._Execute( 'UPDATE OR IGNORE confirmed_alternate_pairs SET smaller_media_id = ? WHERE smaller_media_id = ?;', ( superior_media_id, mergee_media_id ) ) + self._Execute( 'UPDATE OR IGNORE confirmed_alternate_pairs SET larger_media_id = ? WHERE larger_media_id = ?;', ( superior_media_id, mergee_media_id ) ) # and clear out potentials that are now invalid - confirmed_alternate_pairs = self._c.execute( 'SELECT smaller_media_id, larger_media_id FROM confirmed_alternate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( superior_media_id, superior_media_id ) ).fetchall() + confirmed_alternate_pairs = self._Execute( 'SELECT smaller_media_id, larger_media_id FROM confirmed_alternate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( superior_media_id, superior_media_id ) ).fetchall() - self._c.executemany( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', confirmed_alternate_pairs ) + self._ExecuteMany( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? 
AND larger_media_id = ?;', confirmed_alternate_pairs ) # clear out empty records - self._c.execute( 'DELETE FROM alternate_file_group_members WHERE media_id = ?;', ( mergee_media_id, ) ) + self._Execute( 'DELETE FROM alternate_file_group_members WHERE media_id = ?;', ( mergee_media_id, ) ) - self._c.execute( 'DELETE FROM duplicate_files WHERE media_id = ?;', ( mergee_media_id, ) ) + self._Execute( 'DELETE FROM duplicate_files WHERE media_id = ?;', ( mergee_media_id, ) ) def _DuplicatesRemoveAlternateMember( self, media_id ): @@ -7028,15 +7064,15 @@ class DB( HydrusDB.HydrusDB ): alternates_media_ids = self._DuplicatesGetAlternateMediaIds( alternates_group_id ) - self._c.execute( 'DELETE FROM alternate_file_group_members WHERE media_id = ?;', ( media_id, ) ) + self._Execute( 'DELETE FROM alternate_file_group_members WHERE media_id = ?;', ( media_id, ) ) - self._c.execute( 'DELETE FROM confirmed_alternate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( media_id, media_id ) ) + self._Execute( 'DELETE FROM confirmed_alternate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( media_id, media_id ) ) if len( alternates_media_ids ) == 1: # i.e. what we just removed was the last of the group - self._c.execute( 'DELETE FROM alternate_file_groups WHERE alternates_group_id = ?;', ( alternates_group_id, ) ) + self._Execute( 'DELETE FROM alternate_file_groups WHERE alternates_group_id = ?;', ( alternates_group_id, ) ) - self._c.execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id, alternates_group_id ) ) + self._Execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? 
OR larger_alternates_group_id = ?;', ( alternates_group_id, alternates_group_id ) ) hash_ids = self._DuplicatesGetDuplicateHashIds( media_id ) @@ -7074,7 +7110,7 @@ class DB( HydrusDB.HydrusDB ): else: - self._c.execute( 'DELETE FROM duplicate_file_members WHERE hash_id = ?;', ( hash_id, ) ) + self._Execute( 'DELETE FROM duplicate_file_members WHERE hash_id = ?;', ( hash_id, ) ) self.modules_similar_files.ResetSearch( ( hash_id, ) ) @@ -7097,7 +7133,7 @@ class DB( HydrusDB.HydrusDB ): if media_id is not None: - self._c.execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( media_id, media_id ) ) + self._Execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? OR larger_media_id = ?;', ( media_id, media_id ) ) @@ -7133,7 +7169,7 @@ class DB( HydrusDB.HydrusDB ): smaller_media_id = min( media_id_a, media_id_b ) larger_media_id = max( media_id_a, media_id_b ) - self._c.execute( 'INSERT OR IGNORE INTO confirmed_alternate_pairs ( smaller_media_id, larger_media_id ) VALUES ( ?, ? );', ( smaller_media_id, larger_media_id ) ) + self._Execute( 'INSERT OR IGNORE INTO confirmed_alternate_pairs ( smaller_media_id, larger_media_id ) VALUES ( ?, ? );', ( smaller_media_id, larger_media_id ) ) if alternates_group_id_a == alternates_group_id_b: @@ -7144,7 +7180,7 @@ class DB( HydrusDB.HydrusDB ): # first, for all false positive relationships that A already has, clear out potentials between B and those fps before it moves over - false_positive_pairs = self._c.execute( 'SELECT smaller_alternates_group_id, larger_alternates_group_id FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id_a, alternates_group_id_a ) ) + false_positive_pairs = self._Execute( 'SELECT smaller_alternates_group_id, larger_alternates_group_id FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? 
OR larger_alternates_group_id = ?;', ( alternates_group_id_a, alternates_group_id_a ) ) for ( smaller_false_positive_alternates_group_id, larger_false_positive_alternates_group_id ) in false_positive_pairs: @@ -7160,13 +7196,13 @@ class DB( HydrusDB.HydrusDB ): # first, update all B to A - self._c.execute( 'UPDATE alternate_file_group_members SET alternates_group_id = ? WHERE alternates_group_id = ?;', ( alternates_group_id_a, alternates_group_id_b ) ) + self._Execute( 'UPDATE alternate_file_group_members SET alternates_group_id = ? WHERE alternates_group_id = ?;', ( alternates_group_id_a, alternates_group_id_b ) ) # move false positive records for B to A - false_positive_pairs = self._c.execute( 'SELECT smaller_alternates_group_id, larger_alternates_group_id FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id_b, alternates_group_id_b ) ) + false_positive_pairs = self._Execute( 'SELECT smaller_alternates_group_id, larger_alternates_group_id FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id_b, alternates_group_id_b ) ) - self._c.execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? OR larger_alternates_group_id = ?;', ( alternates_group_id_b, alternates_group_id_b ) ) + self._Execute( 'DELETE FROM duplicate_false_positives WHERE smaller_alternates_group_id = ? 
OR larger_alternates_group_id = ?;', ( alternates_group_id_b, alternates_group_id_b ) ) for ( smaller_false_positive_alternates_group_id, larger_false_positive_alternates_group_id ) in false_positive_pairs: @@ -7182,7 +7218,7 @@ class DB( HydrusDB.HydrusDB ): # remove master record - self._c.execute( 'DELETE FROM alternate_file_groups WHERE alternates_group_id = ?;', ( alternates_group_id_b, ) ) + self._Execute( 'DELETE FROM alternate_file_groups WHERE alternates_group_id = ?;', ( alternates_group_id_b, ) ) # pubsub to refresh alternates info for alternates_group_id_a and _b goes here @@ -7207,7 +7243,7 @@ class DB( HydrusDB.HydrusDB ): # this shouldn't be strictly needed, but lets do it here anyway to catch unforeseen problems # it is ok to remove this even if we are just about to add it back in--this clears out invalid pairs and increases priority with distance 0 - self._c.execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ) + self._Execute( 'DELETE FROM potential_duplicate_pairs WHERE smaller_media_id = ? AND larger_media_id = ?;', ( smaller_media_id, larger_media_id ) ) if hash_id_a == hash_id_b: @@ -7333,12 +7369,12 @@ class DB( HydrusDB.HydrusDB ): smaller_alternates_group_id = min( alternates_group_id_a, alternates_group_id_b ) larger_alternates_group_id = max( alternates_group_id_a, alternates_group_id_b ) - self._c.execute( 'INSERT OR IGNORE INTO duplicate_false_positives ( smaller_alternates_group_id, larger_alternates_group_id ) VALUES ( ?, ? );', ( smaller_alternates_group_id, larger_alternates_group_id ) ) + self._Execute( 'INSERT OR IGNORE INTO duplicate_false_positives ( smaller_alternates_group_id, larger_alternates_group_id ) VALUES ( ?, ? );', ( smaller_alternates_group_id, larger_alternates_group_id ) ) def _DuplicatesSetKing( self, king_hash_id, media_id ): - self._c.execute( 'UPDATE duplicate_files SET king_hash_id = ? 
WHERE media_id = ?;', ( king_hash_id, media_id ) ) + self._Execute( 'UPDATE duplicate_files SET king_hash_id = ? WHERE media_id = ?;', ( king_hash_id, media_id ) ) def _DuplicatesSetKingFromHash( self, hash ): @@ -7358,7 +7394,7 @@ class DB( HydrusDB.HydrusDB ): tag_ids = set( tag_ids_to_tags.keys() ) - with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_id_table_name: + with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_id_table_name: counts = self._CacheMappingsGetAutocompleteCountsForTags( ClientTags.TAG_DISPLAY_STORAGE, self.modules_services.combined_file_service_id, service_id, temp_tag_id_table_name ) @@ -7393,7 +7429,7 @@ class DB( HydrusDB.HydrusDB ): if action == HC.CONTENT_UPDATE_ADD: - result = self._c.execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( current_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( current_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() if result is None: @@ -7406,7 +7442,7 @@ class DB( HydrusDB.HydrusDB ): elif action == HC.CONTENT_UPDATE_DELETE: - result = self._c.execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( deleted_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( deleted_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() if result is None: @@ -7419,11 +7455,11 @@ class DB( HydrusDB.HydrusDB ): elif action == HC.CONTENT_UPDATE_PEND: - result = self._c.execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( current_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( current_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() if result is None: - result = self._c.execute( 'SELECT 1 FROM {} WHERE tag_id = ? 
AND hash_id = ?;'.format( pending_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( pending_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() if result is None: @@ -7441,7 +7477,7 @@ class DB( HydrusDB.HydrusDB ): elif action == HC.CONTENT_UPDATE_RESCIND_PEND: - result = self._c.execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( pending_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( pending_mappings_table_name ), ( tag_id, hash_id ) ).fetchone() if result is None: @@ -7455,30 +7491,30 @@ class DB( HydrusDB.HydrusDB ): else: - with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name: + with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name: if action == HC.CONTENT_UPDATE_ADD: - existing_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, current_mappings_table_name ), ( tag_id, ) ) ) + existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, current_mappings_table_name ), ( tag_id, ) ) ) valid_hash_ids = set( hash_ids ).difference( existing_hash_ids ) elif action == HC.CONTENT_UPDATE_DELETE: - existing_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, deleted_mappings_table_name ), ( tag_id, ) ) ) + existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, deleted_mappings_table_name ), ( tag_id, ) ) ) valid_hash_ids = set( hash_ids ).difference( existing_hash_ids ) elif action == HC.CONTENT_UPDATE_PEND: 
- existing_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, current_mappings_table_name ), ( tag_id, ) ) ) - existing_hash_ids.update( self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, pending_mappings_table_name ), ( tag_id, ) ) ) ) + existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, current_mappings_table_name ), ( tag_id, ) ) ) + existing_hash_ids.update( self._STI( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, pending_mappings_table_name ), ( tag_id, ) ) ) ) valid_hash_ids = set( hash_ids ).difference( existing_hash_ids ) elif action == HC.CONTENT_UPDATE_RESCIND_PEND: - valid_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, pending_mappings_table_name ), ( tag_id, ) ) ) + valid_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = ?;'.format( temp_hash_ids_table_name, pending_mappings_table_name ), ( tag_id, ) ) ) @@ -7567,7 +7603,7 @@ class DB( HydrusDB.HydrusDB ): both_current_and_pending_mappings = list( HydrusData.BuildKeyToSetDict( - self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( tag_id, hash_id );'.format( pending_mappings_table_name, current_mappings_table_name ) ) + self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( tag_id, hash_id );'.format( pending_mappings_table_name, current_mappings_table_name ) ) ).items() ) @@ -7579,7 +7615,7 @@ class DB( HydrusDB.HydrusDB ): both_deleted_and_petitioned_mappings = list( HydrusData.BuildKeyToSetDict( - self._c.execute( 'SELECT tag_id, hash_id FROM {} CROSS 
JOIN {} USING ( tag_id, hash_id );'.format( petitioned_mappings_table_name, deleted_mappings_table_name ) ) + self._Execute( 'SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( tag_id, hash_id );'.format( petitioned_mappings_table_name, deleted_mappings_table_name ) ) ).items() ) @@ -7596,7 +7632,7 @@ class DB( HydrusDB.HydrusDB ): else: - self._c.execute( 'DELETE FROM service_info where info_type IN ( ?, ? );', ( HC.SERVICE_INFO_NUM_PENDING_MAPPINGS, HC.SERVICE_INFO_NUM_PETITIONED_MAPPINGS ) ) + self._Execute( 'DELETE FROM service_info where info_type IN ( ?, ? );', ( HC.SERVICE_INFO_NUM_PENDING_MAPPINGS, HC.SERVICE_INFO_NUM_PETITIONED_MAPPINGS ) ) self._controller.pub( 'notify_new_pending' ) @@ -7846,7 +7882,7 @@ class DB( HydrusDB.HydrusDB ): if tag_ids_table_name is None: - with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_id_table_name: + with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_id_table_name: for search_tag_service_id in search_tag_service_ids: @@ -8072,7 +8108,7 @@ class DB( HydrusDB.HydrusDB ): for search_tag_service_id in search_tag_service_ids: - cursor = self._c.execute( 'SELECT tag_id FROM {};'.format( self._CacheTagsGetTagsTableName( file_service_id, search_tag_service_id ) ) ) + cursor = self._Execute( 'SELECT tag_id FROM {};'.format( self._CacheTagsGetTagsTableName( file_service_id, search_tag_service_id ) ) ) cancelled_hook = None @@ -8098,7 +8134,7 @@ class DB( HydrusDB.HydrusDB ): else: - with HydrusDB.TemporaryIntegerTable( self._c, [], 'subtag_id' ) as temp_subtag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'subtag_id' ) as temp_subtag_ids_table_name: self._GetSubtagIdsFromWildcardIntoTable( file_service_id, tag_service_id, half_complete_searchable_subtag, temp_subtag_ids_table_name, job_key = job_key ) @@ -8108,7 +8144,7 @@ class DB( HydrusDB.HydrusDB ): else: - with HydrusDB.TemporaryIntegerTable( self._c, namespace_ids, 'namespace_id' ) as 
temp_namespace_ids_table_name: + with self._MakeTemporaryIntegerTable( namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name: tag_ids = self._GetTagIdsFromNamespaceIdsSubtagIdsTables( file_service_id, tag_service_id, temp_namespace_ids_table_name, temp_subtag_ids_table_name, job_key = job_key ) @@ -8138,7 +8174,7 @@ class DB( HydrusDB.HydrusDB ): for batch_of_tag_ids in HydrusData.SplitListIntoChunks( tag_ids_without_siblings, 10240 ): - with HydrusDB.TemporaryIntegerTable( self._c, batch_of_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( batch_of_tag_ids, 'tag_id' ) as temp_tag_ids_table_name: for sibling_tag_service_id in sibling_tag_service_ids: @@ -8147,15 +8183,15 @@ class DB( HydrusDB.HydrusDB ): return set() - with HydrusDB.TemporaryIntegerTable( self._c, [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'ideal_tag_id' ) as temp_ideal_tag_ids_table_name: self._CacheTagSiblingsFilterChainedIdealsIntoTable( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, temp_tag_ids_table_name, temp_ideal_tag_ids_table_name ) - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_chained_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_chained_tag_ids_table_name: self._CacheTagSiblingsGetChainsMembersFromIdealsTables( ClientTags.TAG_DISPLAY_ACTUAL, sibling_tag_service_id, temp_ideal_tag_ids_table_name, temp_chained_tag_ids_table_name ) - tag_ids.update( self._STI( self._c.execute( 'SELECT tag_id FROM {};'.format( temp_chained_tag_ids_table_name ) ) ) ) + tag_ids.update( self._STI( self._Execute( 'SELECT tag_id FROM {};'.format( temp_chained_tag_ids_table_name ) ) ) ) @@ -8284,19 +8320,49 @@ class DB( HydrusDB.HydrusDB ): boned_stats = {} - current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT ) - - ( num_total, size_total ) = 
self._c.execute( 'SELECT COUNT( hash_id ), SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( current_files_table_name ) ).fetchone() - ( num_inbox, size_inbox ) = self._c.execute( 'SELECT COUNT( hash_id ), SUM( size ) FROM files_info NATURAL JOIN {} NATURAL JOIN file_inbox;'.format( current_files_table_name ) ).fetchone() - - if size_total is None: + with self._MakeTemporaryIntegerTable( [], 'hash_id' ) as temp_hash_id_table_name: - size_total = 0 + current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT ) + + self._Execute( 'INSERT INTO {} ( hash_id ) SELECT hash_id FROM {};'.format( temp_hash_id_table_name, current_files_table_name ) ) + + for service_id in ( self.modules_services.trash_service_id, self.modules_services.local_update_service_id ): + + current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT ) + + self._Execute( 'DELETE FROM {} WHERE hash_id IN ( SELECT hash_id FROM {} );'.format( temp_hash_id_table_name, current_files_table_name ) ) + + + ( num_total, size_total ) = self._Execute( 'SELECT COUNT( hash_id ), SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_id_table_name ) ).fetchone() + ( num_inbox, size_inbox ) = self._Execute( 'SELECT COUNT( hash_id ), SUM( size ) FROM files_info NATURAL JOIN {} NATURAL JOIN file_inbox;'.format( temp_hash_id_table_name ) ).fetchone() + + if size_total is None: + + size_total = 0 + + + if size_inbox is None: + + size_inbox = 0 + - if size_inbox is None: + with self._MakeTemporaryIntegerTable( [], 'hash_id' ) as temp_hash_id_table_name: - size_inbox = 0 + deleted_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_DELETED ) + + self._Execute( 'INSERT INTO {} ( hash_id ) SELECT hash_id FROM {};'.format( temp_hash_id_table_name, 
deleted_files_table_name ) ) + + current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.trash_service_id, HC.CONTENT_STATUS_CURRENT ) + + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id ) SELECT hash_id FROM {};'.format( temp_hash_id_table_name, current_files_table_name ) ) + + ( num_deleted, size_deleted ) = self._Execute( 'SELECT COUNT( hash_id ), SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_id_table_name ) ).fetchone() + + if size_deleted is None: + + size_deleted = 0 + num_archive = num_total - num_inbox @@ -8304,10 +8370,12 @@ class DB( HydrusDB.HydrusDB ): boned_stats[ 'num_inbox' ] = num_inbox boned_stats[ 'num_archive' ] = num_archive + boned_stats[ 'num_deleted' ] = num_deleted boned_stats[ 'size_inbox' ] = size_inbox boned_stats[ 'size_archive' ] = size_archive + boned_stats[ 'size_deleted' ] = size_deleted - total_viewtime = self._c.execute( 'SELECT SUM( media_views ), SUM( media_viewtime ), SUM( preview_views ), SUM( preview_viewtime ) FROM file_viewing_stats;' ).fetchone() + total_viewtime = self._Execute( 'SELECT SUM( media_views ), SUM( media_viewtime ), SUM( preview_views ), SUM( preview_viewtime ) FROM file_viewing_stats;' ).fetchone() if total_viewtime is None: @@ -8323,14 +8391,50 @@ class DB( HydrusDB.HydrusDB ): + # + + earliest_import_time = 0 + + current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT ) + + result = self._Execute( 'SELECT MIN( timestamp ) FROM {};'.format( current_files_table_name ) ).fetchone() + + if result is not None and result[0] is not None: + + earliest_import_time = result[0] + + + deleted_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_DELETED ) + + result = self._Execute( 'SELECT MIN( original_timestamp ) FROM {};'.format( deleted_files_table_name ) 
+            ).fetchone()
+            
+            if result is not None and result[0] is not None:
+                
+                if earliest_import_time == 0:
+                    
+                    earliest_import_time = result[0]
+                    
+                else:
+                    
+                    earliest_import_time = min( earliest_import_time, result[0] )
+                    
+                
+            
+        if earliest_import_time > 0:
+            
+            boned_stats[ 'earliest_import_time' ] = earliest_import_time
+        
+        #
+        
         boned_stats[ 'total_viewtime' ] = total_viewtime
         
-        total_alternate_files = sum( ( count for ( alternates_group_id, count ) in self._c.execute( 'SELECT alternates_group_id, COUNT( * ) FROM alternate_file_group_members GROUP BY alternates_group_id;' ) if count > 1 ) )
-        total_duplicate_files = sum( ( count for ( media_id, count ) in self._c.execute( 'SELECT media_id, COUNT( * ) FROM duplicate_file_members GROUP BY media_id;' ) if count > 1 ) )
+        total_alternate_files = sum( ( count for ( alternates_group_id, count ) in self._Execute( 'SELECT alternates_group_id, COUNT( * ) FROM alternate_file_group_members GROUP BY alternates_group_id;' ) if count > 1 ) )
+        total_duplicate_files = sum( ( count for ( media_id, count ) in self._Execute( 'SELECT media_id, COUNT( * ) FROM duplicate_file_members GROUP BY media_id;' ) if count > 1 ) )
         
         table_join = self._DuplicatesGetPotentialDuplicatePairsTableJoinOnFileService( CC.COMBINED_LOCAL_FILE_SERVICE_KEY )
         
-        ( total_potential_pairs, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT smaller_media_id, larger_media_id FROM {} );'.format( table_join ) ).fetchone()
+        ( total_potential_pairs, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT smaller_media_id, larger_media_id FROM {} );'.format( table_join ) ).fetchone()
         
         boned_stats[ 'total_alternate_files' ] = total_alternate_files
         boned_stats[ 'total_duplicate_files' ] = total_duplicate_files
@@ -8341,7 +8445,7 @@ class DB( HydrusDB.HydrusDB ):
     
     def _GetClientFilesLocations( self ):
         
-        result = { prefix : HydrusPaths.ConvertPortablePathToAbsPath( location ) for ( prefix, location ) in self._c.execute( 'SELECT prefix, location FROM client_files_locations;' ) }
+        result = { prefix : HydrusPaths.ConvertPortablePathToAbsPath( location ) for ( prefix, location ) in self._Execute( 'SELECT prefix, location FROM client_files_locations;' ) }
         
         if len( result ) < 512:
@@ -8361,8 +8465,8 @@ class DB( HydrusDB.HydrusDB ):
         
         for prefix in HydrusData.IterateHexPrefixes():
             
-            self._c.execute( 'INSERT OR IGNORE INTO client_files_locations ( prefix, location ) VALUES ( ?, ? );', ( 'f' + prefix, location ) )
-            self._c.execute( 'INSERT OR IGNORE INTO client_files_locations ( prefix, location ) VALUES ( ?, ? );', ( 't' + prefix, location ) )
+            self._Execute( 'INSERT OR IGNORE INTO client_files_locations ( prefix, location ) VALUES ( ?, ? );', ( 'f' + prefix, location ) )
+            self._Execute( 'INSERT OR IGNORE INTO client_files_locations ( prefix, location ) VALUES ( ?, ? );', ( 't' + prefix, location ) )
@@ -8373,7 +8477,7 @@ class DB( HydrusDB.HydrusDB ):
         
         hash_id = self.modules_hashes_local_cache.GetHashId( hash )
         
-        names_to_notes = { name : note for ( name, note ) in self._c.execute( 'SELECT label, note FROM file_notes, labels, notes ON ( file_notes.name_id = labels.label_id AND file_notes.note_id = notes.note_id ) WHERE hash_id = ?;', ( hash_id, ) ) }
+        names_to_notes = { name : note for ( name, note ) in self._Execute( 'SELECT label, note FROM file_notes, labels, notes ON ( file_notes.name_id = labels.label_id AND file_notes.note_id = notes.note_id ) WHERE hash_id = ?;', ( hash_id, ) ) }
         
         return names_to_notes
@@ -8496,7 +8600,7 @@ class DB( HydrusDB.HydrusDB ):
     
     def _GetForceRefreshTagsManagers( self, hash_ids, hash_ids_to_current_file_service_ids = None ):
         
-        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_table_name:
             
             self._AnalyzeTempTable( temp_table_name )
@@ -8528,7 +8632,7 @@ class DB( HydrusDB.HydrusDB ):
         
         else:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, batch_of_hash_ids, 'hash_id' ) as temp_batch_hash_ids_table_name:
+            with self._MakeTemporaryIntegerTable( batch_of_hash_ids, 'hash_id' ) as temp_batch_hash_ids_table_name:
                 
                 ( batch_of_storage_tag_data, batch_of_display_tag_data ) = self._GetForceRefreshTagsManagersWithTableHashIdsTagData( common_file_service_id, tag_service_ids, temp_batch_hash_ids_table_name )
@@ -8594,7 +8698,7 @@ class DB( HydrusDB.HydrusDB ):
         for ( status, mappings_table_name ) in statuses_to_table_names.items():
             
             # temp hashes to mappings
-            storage_tag_data.extend( ( hash_id, ( tag_service_id, status, tag_id ) ) for ( hash_id, tag_id ) in self._c.execute( 'SELECT hash_id, tag_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, mappings_table_name ) ) )
+            storage_tag_data.extend( ( hash_id, ( tag_service_id, status, tag_id ) ) for ( hash_id, tag_id ) in self._Execute( 'SELECT hash_id, tag_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, mappings_table_name ) ) )
         
         if common_file_service_id != self.modules_services.combined_file_service_id:
@@ -8602,8 +8706,8 @@ class DB( HydrusDB.HydrusDB ):
             
             ( cache_current_display_mappings_table_name, cache_pending_display_mappings_table_name ) = GenerateSpecificDisplayMappingsCacheTableNames( common_file_service_id, tag_service_id )
             
             # temp hashes to mappings
-            display_tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_CURRENT, tag_id ) ) for ( hash_id, tag_id ) in self._c.execute( 'SELECT hash_id, tag_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, cache_current_display_mappings_table_name ) ) )
-            display_tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_PENDING, tag_id ) ) for ( hash_id, tag_id ) in self._c.execute( 'SELECT hash_id, tag_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, cache_pending_display_mappings_table_name ) ) )
+            display_tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_CURRENT, tag_id ) ) for ( hash_id, tag_id ) in self._Execute( 'SELECT hash_id, tag_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, cache_current_display_mappings_table_name ) ) )
+            display_tag_data.extend( ( hash_id, ( tag_service_id, HC.CONTENT_STATUS_PENDING, tag_id ) ) for ( hash_id, tag_id ) in self._Execute( 'SELECT hash_id, tag_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( hash_ids_table_name, cache_pending_display_mappings_table_name ) ) )
@@ -8640,7 +8744,7 @@ class DB( HydrusDB.HydrusDB ):
         
         namespace_ids = self._GetNamespaceIdsFromWildcard( namespace_wildcard )
         
-        with HydrusDB.TemporaryIntegerTable( self._c, namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
+        with self._MakeTemporaryIntegerTable( namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
             
             # reason why I JOIN each table rather than join just the UNION is based on previous hell with having query planner figure out a "( UNION ) NATURAL JOIN stuff" situation
             # although this sometimes makes certifiable 2KB ( 6 UNION * 4-table ) queries, it actually works fast
@@ -8657,7 +8761,7 @@ class DB( HydrusDB.HydrusDB ):
         
         for group_of_hash_ids in HydrusData.SplitIteratorIntoChunks( hash_ids, BLOCK_SIZE ):
             
-            with HydrusDB.TemporaryIntegerTable( self._c, group_of_hash_ids, 'hash_id' ) as hash_ids_table_name:
+            with self._MakeTemporaryIntegerTable( group_of_hash_ids, 'hash_id' ) as hash_ids_table_name:
                 
                 if namespace_wildcard is None:
@@ -8674,7 +8778,7 @@ class DB( HydrusDB.HydrusDB ):
                 
                 query = 'SELECT hash_id, COUNT( tag_id ) FROM {} GROUP BY hash_id;'.format( unions )
                 
-                cursor = self._c.execute( query )
+                cursor = self._Execute( query )
                 
                 cancelled_hook = None
@@ -8748,7 +8852,7 @@ class DB( HydrusDB.HydrusDB ):
         
         select_statement = 'SELECT hash_id FROM file_viewing_stats WHERE ' + test_phrase + ';'
         
-        hash_ids = self._STS( self._c.execute( select_statement ) )
+        hash_ids = self._STS( self._Execute( select_statement ) )
         
         return hash_ids
@@ -8778,7 +8882,7 @@ class DB( HydrusDB.HydrusDB ):
         
         label_id = self.modules_texts.GetLabelId( name )
         
         # as note name is rare, we force this to run opposite to typical: notes to temp hashes
         
-        return self._STS( self._c.execute( 'SELECT hash_id FROM file_notes CROSS JOIN {} USING ( hash_id ) WHERE name_id = ?;'.format( hash_ids_table_name ), ( label_id, ) ) )
+        return self._STS( self._Execute( 'SELECT hash_id FROM file_notes CROSS JOIN {} USING ( hash_id ) WHERE name_id = ?;'.format( hash_ids_table_name ), ( label_id, ) ) )
     
     def _GetHashIdsFromNumNotes( self, min_num_notes: typing.Optional[ int ], max_num_notes: typing.Optional[ int ], hash_ids_table_name: str ):
@@ -8788,7 +8892,7 @@ class DB( HydrusDB.HydrusDB ):
         
         if has_notes or not_has_notes:
             
-            has_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} WHERE EXISTS ( SELECT 1 FROM file_notes WHERE file_notes.hash_id = {}.hash_id );'.format( hash_ids_table_name, hash_ids_table_name ) ) )
+            has_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} WHERE EXISTS ( SELECT 1 FROM file_notes WHERE file_notes.hash_id = {}.hash_id );'.format( hash_ids_table_name, hash_ids_table_name ) ) )
             
             if has_notes:
@@ -8796,7 +8900,7 @@ class DB( HydrusDB.HydrusDB ):
             
             else:
                 
-                all_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {};'.format( hash_ids_table_name ) ) )
+                all_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( hash_ids_table_name ) ) )
                 
                 hash_ids = all_hash_ids.difference( has_hash_ids )
@@ -8819,7 +8923,7 @@ class DB( HydrusDB.HydrusDB ):
             
             # temp hashes to notes
             query = 'SELECT hash_id, COUNT( * ) FROM {} CROSS JOIN file_notes USING ( hash_id ) GROUP BY hash_id;'.format( hash_ids_table_name )
             
-            hash_ids = { hash_id for ( hash_id, count ) in self._c.execute( query ) if filt( count ) }
+            hash_ids = { hash_id for ( hash_id, count ) in self._Execute( query ) if filt( count ) }
         
         return hash_ids
@@ -9196,7 +9300,7 @@ class DB( HydrusDB.HydrusDB ):
                 
                 current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( file_service_id, HC.CONTENT_STATUS_CURRENT )
                 
-                import_timestamp_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} WHERE {};'.format( current_files_table_name, pred_string ) ) )
+                import_timestamp_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} WHERE {};'.format( current_files_table_name, pred_string ) ) )
                 
                 query_hash_ids = intersection_update_qhi( query_hash_ids, import_timestamp_hash_ids )
@@ -9213,7 +9317,7 @@ class DB( HydrusDB.HydrusDB ):
             
             pred_string = ' AND '.join( modified_timestamp_predicates )
             
-            modified_timestamp_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM file_modified_timestamps WHERE {};'.format( pred_string ) ) )
+            modified_timestamp_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM file_modified_timestamps WHERE {};'.format( pred_string ) ) )
             
             query_hash_ids = intersection_update_qhi( query_hash_ids, modified_timestamp_hash_ids )
@@ -9251,7 +9355,7 @@ class DB( HydrusDB.HydrusDB ):
             
             if value == 'rated':
                 
-                rating_hash_ids = self._STI( self._c.execute( 'SELECT hash_id FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) )
+                rating_hash_ids = self._STI( self._Execute( 'SELECT hash_id FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) )
                 
                 query_hash_ids = intersection_update_qhi( query_hash_ids, rating_hash_ids )
@@ -9294,7 +9398,7 @@ class DB( HydrusDB.HydrusDB ):
                 
                 predicate = str( value - half_a_star_value ) + ' < rating AND rating <= ' + str( value + half_a_star_value )
                 
-                rating_hash_ids = self._STI( self._c.execute( 'SELECT hash_id FROM local_ratings WHERE service_id = ? AND ' + predicate + ';', ( service_id, ) ) )
+                rating_hash_ids = self._STI( self._Execute( 'SELECT hash_id FROM local_ratings WHERE service_id = ? AND ' + predicate + ';', ( service_id, ) ) )
                 
                 query_hash_ids = intersection_update_qhi( query_hash_ids, rating_hash_ids )
@@ -9376,7 +9480,7 @@ class DB( HydrusDB.HydrusDB ):
             
             else:
                 
-                with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+                with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                     
                     self._AnalyzeTempTable( temp_table_name )
@@ -9402,7 +9506,7 @@ class DB( HydrusDB.HydrusDB ):
             
             else:
                 
-                with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+                with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                    
                    self._AnalyzeTempTable( temp_table_name )
@@ -9428,7 +9532,7 @@ class DB( HydrusDB.HydrusDB ):
             
             else:
                 
-                with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+                with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                    
                    self._AnalyzeTempTable( temp_table_name )
@@ -9483,21 +9587,21 @@ class DB( HydrusDB.HydrusDB ):
         
         if query_hash_ids is None:
             
-            query_hash_ids = intersection_update_qhi( query_hash_ids, self._STS( self._c.execute( 'SELECT hash_id FROM {} NATURAL JOIN files_info WHERE {};'.format( current_files_table_name, ' AND '.join( files_info_predicates ) ) ) ) )
+            query_hash_ids = intersection_update_qhi( query_hash_ids, self._STS( self._Execute( 'SELECT hash_id FROM {} NATURAL JOIN files_info WHERE {};'.format( current_files_table_name, ' AND '.join( files_info_predicates ) ) ) ) )
             
         else:
             
             if is_inbox and len( query_hash_ids ) == len( self.modules_files_metadata_basic.inbox_hash_ids ):
                 
-                query_hash_ids = intersection_update_qhi( query_hash_ids, self._STS( self._c.execute( 'SELECT hash_id FROM {} NATURAL JOIN {} NATURAL JOIN files_info WHERE {};'.format( 'file_inbox', current_files_table_name, ' AND '.join( files_info_predicates ) ) ) ) )
+                query_hash_ids = intersection_update_qhi( query_hash_ids, self._STS( self._Execute( 'SELECT hash_id FROM {} NATURAL JOIN {} NATURAL JOIN files_info WHERE {};'.format( 'file_inbox', current_files_table_name, ' AND '.join( files_info_predicates ) ) ) ) )
                 
             else:
                 
-                with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+                with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                     
                     self._AnalyzeTempTable( temp_table_name )
                     
-                    query_hash_ids = intersection_update_qhi( query_hash_ids, self._STS( self._c.execute( 'SELECT hash_id FROM {} NATURAL JOIN {} NATURAL JOIN files_info WHERE {};'.format( temp_table_name, current_files_table_name, ' AND '.join( files_info_predicates ) ) ) ) )
+                    query_hash_ids = intersection_update_qhi( query_hash_ids, self._STS( self._Execute( 'SELECT hash_id FROM {} NATURAL JOIN {} NATURAL JOIN files_info WHERE {};'.format( temp_table_name, current_files_table_name, ' AND '.join( files_info_predicates ) ) ) ) )
@@ -9523,7 +9627,7 @@ class DB( HydrusDB.HydrusDB ):
         
         if there_are_simple_files_info_preds_to_search_for and not done_files_info_predicates:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                 
                 self._AnalyzeTempTable( temp_table_name )
@@ -9531,7 +9635,7 @@ class DB( HydrusDB.HydrusDB ):
                 
                 select = 'SELECT hash_id FROM {} NATURAL JOIN files_info WHERE {};'.format( temp_table_name, predicate_string )
                 
-                files_info_hash_ids = self._STI( self._c.execute( select ) )
+                files_info_hash_ids = self._STI( self._Execute( select ) )
                 
                 query_hash_ids = intersection_update_qhi( query_hash_ids, files_info_hash_ids )
@@ -9567,7 +9671,7 @@ class DB( HydrusDB.HydrusDB ):
         
         for tag in tags_to_exclude:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                
                self._AnalyzeTempTable( temp_table_name )
@@ -9584,7 +9688,7 @@ class DB( HydrusDB.HydrusDB ):
         
         for namespace in namespaces_to_exclude:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                
                self._AnalyzeTempTable( temp_table_name )
@@ -9601,7 +9705,7 @@ class DB( HydrusDB.HydrusDB ):
         
         for wildcard in wildcards_to_exclude:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                
                self._AnalyzeTempTable( temp_table_name )
@@ -9669,7 +9773,7 @@ class DB( HydrusDB.HydrusDB ):
             
             if value == 'not rated':
                 
-                query_hash_ids.difference_update( self._STI( self._c.execute( 'SELECT hash_id FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) ) )
+                query_hash_ids.difference_update( self._STI( self._Execute( 'SELECT hash_id FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) ) )
@@ -9727,7 +9831,7 @@ class DB( HydrusDB.HydrusDB ):
         
         if min_num_notes is not None or max_num_notes is not None:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                
                self._AnalyzeTempTable( temp_table_name )
@@ -9743,7 +9847,7 @@ class DB( HydrusDB.HydrusDB ):
         
         for note_name in inclusive_note_names:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                
                self._AnalyzeTempTable( temp_table_name )
@@ -9760,7 +9864,7 @@ class DB( HydrusDB.HydrusDB ):
         
         for note_name in exclusive_note_names:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                
                self._AnalyzeTempTable( temp_table_name )
@@ -9839,7 +9943,7 @@ class DB( HydrusDB.HydrusDB ):
             
             else:
                 
-                with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+                with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                    
                    self._AnalyzeTempTable( temp_table_name )
@@ -9873,7 +9977,7 @@ class DB( HydrusDB.HydrusDB ):
         
         megalambda = lambda x: False not in ( l( x ) for l in lambdas )
         
-        with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
            
            self._AnalyzeTempTable( temp_table_name )
@@ -9931,7 +10035,7 @@ class DB( HydrusDB.HydrusDB ):
             
             ( namespace, num ) = simple_preds[ 'min_tag_as_number' ]
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                
                self._AnalyzeTempTable( temp_table_name )
@@ -9945,7 +10049,7 @@ class DB( HydrusDB.HydrusDB ):
             
             ( namespace, num ) = simple_preds[ 'max_tag_as_number' ]
             
-            with HydrusDB.TemporaryIntegerTable( self._c, query_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( query_hash_ids, 'hash_id' ) as temp_table_name:
                
                self._AnalyzeTempTable( temp_table_name )
@@ -10098,14 +10202,14 @@ class DB( HydrusDB.HydrusDB ):
                 
                 for query in queries:
                     
-                    cursor = self._c.execute( query, ( tag_id, ) )
+                    cursor = self._Execute( query, ( tag_id, ) )
                     
                     result_hash_ids.update( self._STI( HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook ) ) )
             
         else:
             
-            with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
+            with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_tag_ids_table_name:
                 
                 if do_hash_table_join:
@@ -10120,7 +10224,7 @@ class DB( HydrusDB.HydrusDB ):
                 
                 for query in queries:
                     
-                    cursor = self._c.execute( query )
+                    cursor = self._Execute( query )
                     
                     result_hash_ids.update( self._STI( HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook ) ) )
@@ -10145,7 +10249,7 @@ class DB( HydrusDB.HydrusDB ):
         
         select = 'SELECT hash_id FROM {} WHERE url = ?;'.format( table_name )
         
-        result_hash_ids = self._STS( self._c.execute( select, ( url, ) ) )
+        result_hash_ids = self._STS( self._Execute( select, ( url, ) ) )
         
         return result_hash_ids
@@ -10166,7 +10270,7 @@ class DB( HydrusDB.HydrusDB ):
         
         result_hash_ids = set()
         
-        with HydrusDB.TemporaryIntegerTable( self._c, domain_ids, 'domain_id' ) as temp_domain_table_name:
+        with self._MakeTemporaryIntegerTable( domain_ids, 'domain_id' ) as temp_domain_table_name:
             
             if hash_ids_table_name is not None and hash_ids is not None and len( hash_ids ) < 50000:
@@ -10180,7 +10284,7 @@ class DB( HydrusDB.HydrusDB ):
                
                select = 'SELECT hash_id, url FROM {} CROSS JOIN urls USING ( domain_id ) CROSS JOIN url_map USING ( url_id );'.format( temp_domain_table_name )
             
-            for ( hash_id, url ) in self._c.execute( select ):
+            for ( hash_id, url ) in self._Execute( select ):
                 
                 # this is actually insufficient, as more detailed url classes may match
                 if hash_id not in result_hash_ids and url_class.Matches( url ):
@@ -10201,7 +10305,7 @@ class DB( HydrusDB.HydrusDB ):
         
         result_hash_ids = set()
         
-        with HydrusDB.TemporaryIntegerTable( self._c, domain_ids, 'domain_id' ) as temp_domain_table_name:
+        with self._MakeTemporaryIntegerTable( domain_ids, 'domain_id' ) as temp_domain_table_name:
            
            if hash_ids_table_name is not None and hash_ids is not None and len( hash_ids ) < 50000:
@@ -10215,7 +10319,7 @@ class DB( HydrusDB.HydrusDB ):
             
             select = 'SELECT hash_id FROM {} CROSS JOIN urls USING ( domain_id ) CROSS JOIN url_map USING ( url_id );'.format( temp_domain_table_name )
             
-            result_hash_ids = self._STS( self._c.execute( select ) )
+            result_hash_ids = self._STS( self._Execute( select ) )
         
         return result_hash_ids
@@ -10237,7 +10341,7 @@ class DB( HydrusDB.HydrusDB ):
         
         result_hash_ids = set()
         
-        for ( hash_id, url ) in self._c.execute( select ):
+        for ( hash_id, url ) in self._Execute( select ):
             
             if hash_id not in result_hash_ids and re.search( regex, url ) is not None:
@@ -10271,7 +10375,7 @@ class DB( HydrusDB.HydrusDB ):
         
         file_service_id = self.modules_services.GetServiceId( file_service_key )
         tag_service_id = self.modules_services.GetServiceId( tag_search_context.service_key )
         
-        with HydrusDB.TemporaryIntegerTable( self._c, [], 'subtag_id' ) as temp_subtag_ids_table_name:
+        with self._MakeTemporaryIntegerTable( [], 'subtag_id' ) as temp_subtag_ids_table_name:
             
             self._GetSubtagIdsFromWildcardIntoTable( file_service_id, tag_service_id, subtag_wildcard, temp_subtag_ids_table_name, job_key = job_key )
@@ -10279,7 +10383,7 @@ class DB( HydrusDB.HydrusDB ):
             
             possible_namespace_ids = self._GetNamespaceIdsFromWildcard( namespace_wildcard )
             
-            with HydrusDB.TemporaryIntegerTable( self._c, possible_namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
+            with self._MakeTemporaryIntegerTable( possible_namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
                 
                 return self._GetHashIdsFromNamespaceIdsSubtagIdsTables( tag_display_type, file_service_key, tag_search_context, temp_namespace_ids_table_name, temp_subtag_ids_table_name, hash_ids = hash_ids, hash_ids_table_name = hash_ids_table_name, job_key = job_key )
@@ -10315,7 +10419,7 @@ class DB( HydrusDB.HydrusDB ):
             
             hash_ids_table_name = GenerateSpecificFilesTableName( file_service_id, tag_service_id )
         
-        with HydrusDB.TemporaryIntegerTable( self._c, namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
+        with self._MakeTemporaryIntegerTable( namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name:
             
             mapping_and_tag_table_names = self._GetMappingAndTagTables( tag_display_type, file_service_key, tag_search_context )
@@ -10349,7 +10453,7 @@ class DB( HydrusDB.HydrusDB ):
             
             for query in queries:
                 
-                nonzero_tag_hash_ids.update( self._STI( self._c.execute( query ) ) )
+                nonzero_tag_hash_ids.update( self._STI( self._Execute( query ) ) )
                 
                 if job_key is not None and job_key.IsCancelled():
@@ -10381,7 +10485,7 @@ class DB( HydrusDB.HydrusDB ):
             
             integer_subtags_table_name = self._CacheTagsGetIntegerSubtagsTableName( file_service_id, search_tag_service_id )
             
-            some_possible_subtag_ids = self._STS( self._c.execute( 'SELECT subtag_id FROM {} WHERE integer_subtag {} {};'.format( integer_subtags_table_name, operator, num ) ) )
+            some_possible_subtag_ids = self._STS( self._Execute( 'SELECT subtag_id FROM {} WHERE integer_subtag {} {};'.format( integer_subtags_table_name, operator, num ) ) )
             
             possible_subtag_ids.update( some_possible_subtag_ids )
@@ -10493,14 +10597,14 @@ class DB( HydrusDB.HydrusDB ):
         
         locations_to_ideal_weights = {}
         
-        for ( portable_location, weight ) in self._c.execute( 'SELECT location, weight FROM ideal_client_files_locations;' ):
+        for ( portable_location, weight ) in self._Execute( 'SELECT location, weight FROM ideal_client_files_locations;' ):
             
             abs_location = HydrusPaths.ConvertPortablePathToAbsPath( portable_location )
             
             locations_to_ideal_weights[ abs_location ] = weight
         
-        result = self._c.execute( 'SELECT location FROM ideal_thumbnail_override_location;' ).fetchone()
+        result = self._Execute( 'SELECT location FROM ideal_thumbnail_override_location;' ).fetchone()
         
         if result is None:
@@ -10721,11 +10825,11 @@ class DB( HydrusDB.HydrusDB ):
         
         missing_hash_ids_to_hashes = self.modules_hashes_local_cache.GetHashIdsToHashes( hash_ids = missing_hash_ids )
         
-        with HydrusDB.TemporaryIntegerTable( self._c, missing_hash_ids, 'hash_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( missing_hash_ids, 'hash_id' ) as temp_table_name:
             
             # everything here is temp hashes to metadata
             
-            hash_ids_to_info = { hash_id : ClientMediaManagers.FileInfoManager( hash_id, missing_hash_ids_to_hashes[ hash_id ], size, mime, width, height, duration, num_frames, has_audio, num_words ) for ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) in self._c.execute( 'SELECT * FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_table_name ) ) }
+            hash_ids_to_info = { hash_id : ClientMediaManagers.FileInfoManager( hash_id, missing_hash_ids_to_hashes[ hash_id ], size, mime, width, height, duration, num_frames, has_audio, num_words ) for ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) in self._Execute( 'SELECT * FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_table_name ) ) }
             
             ( hash_ids_to_current_file_service_ids_and_timestamps, hash_ids_to_deleted_file_service_ids_and_timestamps,
@@ -10733,17 +10837,17 @@ class DB( HydrusDB.HydrusDB ):
             hash_ids_to_petitioned_file_service_ids ) = self.modules_files_storage.GetHashIdsToServiceInfoDicts( temp_table_name )
             
-            hash_ids_to_urls = HydrusData.BuildKeyToSetDict( self._c.execute( 'SELECT hash_id, url FROM {} CROSS JOIN url_map USING ( hash_id ) CROSS JOIN urls USING ( url_id );'.format( temp_table_name ) ) )
+            hash_ids_to_urls = HydrusData.BuildKeyToSetDict( self._Execute( 'SELECT hash_id, url FROM {} CROSS JOIN url_map USING ( hash_id ) CROSS JOIN urls USING ( url_id );'.format( temp_table_name ) ) )
             
-            hash_ids_to_service_ids_and_filenames = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, filename ) ) for ( hash_id, service_id, filename ) in self._c.execute( 'SELECT hash_id, service_id, filename FROM {} CROSS JOIN service_filenames USING ( hash_id );'.format( temp_table_name ) ) ) )
+            hash_ids_to_service_ids_and_filenames = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, filename ) ) for ( hash_id, service_id, filename ) in self._Execute( 'SELECT hash_id, service_id, filename FROM {} CROSS JOIN service_filenames USING ( hash_id );'.format( temp_table_name ) ) ) )
             
-            hash_ids_to_local_ratings = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, rating ) ) for ( service_id, hash_id, rating ) in self._c.execute( 'SELECT service_id, hash_id, rating FROM {} CROSS JOIN local_ratings USING ( hash_id );'.format( temp_table_name ) ) ) )
+            hash_ids_to_local_ratings = HydrusData.BuildKeyToListDict( ( ( hash_id, ( service_id, rating ) ) for ( service_id, hash_id, rating ) in self._Execute( 'SELECT service_id, hash_id, rating FROM {} CROSS JOIN local_ratings USING ( hash_id );'.format( temp_table_name ) ) ) )
            
-            hash_ids_to_names_and_notes = HydrusData.BuildKeyToListDict( ( ( hash_id, ( name, note ) ) for ( hash_id, name, note ) in self._c.execute( 'SELECT file_notes.hash_id, label, note FROM {} CROSS JOIN file_notes USING ( hash_id ), labels, notes ON ( file_notes.name_id = labels.label_id AND file_notes.note_id = notes.note_id );'.format( temp_table_name ) ) ) )
+            hash_ids_to_names_and_notes = HydrusData.BuildKeyToListDict( ( ( hash_id, ( name, note ) ) for ( hash_id, name, note ) in self._Execute( 'SELECT file_notes.hash_id, label, note FROM {} CROSS JOIN file_notes USING ( hash_id ), labels, notes ON ( file_notes.name_id = labels.label_id AND file_notes.note_id = notes.note_id );'.format( temp_table_name ) ) ) )
            
-            hash_ids_to_file_viewing_stats_managers = { hash_id : ClientMediaManagers.FileViewingStatsManager( preview_views, preview_viewtime, media_views, media_viewtime ) for ( hash_id, preview_views, preview_viewtime, media_views, media_viewtime ) in self._c.execute( 'SELECT hash_id, preview_views, preview_viewtime, media_views, media_viewtime FROM {} CROSS JOIN file_viewing_stats USING ( hash_id );'.format( temp_table_name ) ) }
+            hash_ids_to_file_viewing_stats_managers = { hash_id : ClientMediaManagers.FileViewingStatsManager( preview_views, preview_viewtime, media_views, media_viewtime ) for ( hash_id, preview_views, preview_viewtime, media_views, media_viewtime ) in self._Execute( 'SELECT hash_id, preview_views, preview_viewtime, media_views, media_viewtime FROM {} CROSS JOIN file_viewing_stats USING ( hash_id );'.format( temp_table_name ) ) }
            
-            hash_ids_to_file_modified_timestamps = dict( self._c.execute( 'SELECT hash_id, file_modified_timestamp FROM {} CROSS JOIN file_modified_timestamps USING ( hash_id );'.format( temp_table_name ) ) )
+            hash_ids_to_file_modified_timestamps = dict( self._Execute( 'SELECT hash_id, file_modified_timestamp FROM {} CROSS JOIN file_modified_timestamps USING ( hash_id );'.format( temp_table_name ) ) )
             
             hash_ids_to_current_file_service_ids = { hash_id : [ file_service_id for ( file_service_id, timestamp ) in file_service_ids_and_timestamps ] for ( hash_id, file_service_ids_and_timestamps ) in hash_ids_to_current_file_service_ids_and_timestamps.items() }
@@ -10872,13 +10976,13 @@ class DB( HydrusDB.HydrusDB ):
         
         if namespace_wildcard == '*':
             
-            return self._STL( self._c.execute( 'SELECT namespace_id FROM namespaces;' ) )
+            return self._STL( self._Execute( 'SELECT namespace_id FROM namespaces;' ) )
             
         elif '*' in namespace_wildcard:
            
            like_param = ConvertWildcardToSQLiteLikeParameter( namespace_wildcard )
            
-            return self._STL( self._c.execute( 'SELECT namespace_id FROM namespaces WHERE namespace LIKE ?;', ( like_param, ) ) )
+            return self._STL( self._Execute( 'SELECT namespace_id FROM namespaces WHERE namespace LIKE ?;', ( like_param, ) ) )
            
        else:
@@ -10925,13 +11029,13 @@ class DB( HydrusDB.HydrusDB ):
     
     def _GetOptions( self ):
         
-        result = self._c.execute( 'SELECT options FROM options;' ).fetchone()
+        result = self._Execute( 'SELECT options FROM options;' ).fetchone()
         
         if result is None:
             
             options = ClientDefaults.GetClientDefaultOptions()
             
-            self._c.execute( 'INSERT INTO options ( options ) VALUES ( ? );', ( options, ) )
+            self._Execute( 'INSERT INTO options ( options ) VALUES ( ? );', ( options, ) )
             
         else:
@@ -10970,7 +11074,7 @@ class DB( HydrusDB.HydrusDB ):
         
         ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( service_id )
         
-        pending_dict = HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT tag_id, hash_id FROM ' + pending_mappings_table_name + ' ORDER BY tag_id LIMIT 100;' ) )
+        pending_dict = HydrusData.BuildKeyToListDict( self._Execute( 'SELECT tag_id, hash_id FROM ' + pending_mappings_table_name + ' ORDER BY tag_id LIMIT 100;' ) )
         
         pending_mapping_ids = list( pending_dict.items() )
@@ -11004,7 +11108,7 @@ class DB( HydrusDB.HydrusDB ):
         
         if account.HasPermission( HC.CONTENT_TYPE_MAPPINGS, HC.PERMISSION_ACTION_PETITION ):
             
-            petitioned_dict = HydrusData.BuildKeyToListDict( [ ( ( tag_id, reason_id ), hash_id ) for ( tag_id, hash_id, reason_id ) in self._c.execute( 'SELECT tag_id, hash_id, reason_id FROM ' + petitioned_mappings_table_name + ' ORDER BY reason_id LIMIT 100;' ) ] )
+            petitioned_dict = HydrusData.BuildKeyToListDict( [ ( ( tag_id, reason_id ), hash_id ) for ( tag_id, hash_id, reason_id ) in self._Execute( 'SELECT tag_id, hash_id, reason_id FROM ' + petitioned_mappings_table_name + ' ORDER BY reason_id LIMIT 100;' ) ] )
             
             petitioned_mapping_ids = list( petitioned_dict.items() )
@@ -11047,7 +11151,7 @@ class DB( HydrusDB.HydrusDB ):
         
         if account.HasPermission( HC.CONTENT_TYPE_TAG_PARENTS, HC.PERMISSION_ACTION_PETITION ):
             
-            pending = self._c.execute( 'SELECT child_tag_id, parent_tag_id, reason_id FROM tag_parent_petitions WHERE service_id = ? AND status = ? ORDER BY reason_id LIMIT 1;', ( service_id, HC.CONTENT_STATUS_PENDING ) ).fetchall()
+            pending = self._Execute( 'SELECT child_tag_id, parent_tag_id, reason_id FROM tag_parent_petitions WHERE service_id = ? AND status = ? ORDER BY reason_id LIMIT 1;', ( service_id, HC.CONTENT_STATUS_PENDING ) ).fetchall()
             
             for ( child_tag_id, parent_tag_id, reason_id ) in pending:
@@ -11061,7 +11165,7 @@ class DB( HydrusDB.HydrusDB ):
                 
                 client_to_server_update.AddContent( HC.CONTENT_UPDATE_PEND, content, reason )
             
-            petitioned = self._c.execute( 'SELECT child_tag_id, parent_tag_id, reason_id FROM tag_parent_petitions WHERE service_id = ? AND status = ? ORDER BY reason_id LIMIT 100;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ).fetchall()
+            petitioned = self._Execute( 'SELECT child_tag_id, parent_tag_id, reason_id FROM tag_parent_petitions WHERE service_id = ? AND status = ? ORDER BY reason_id LIMIT 100;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ).fetchall()
             
             for ( child_tag_id, parent_tag_id, reason_id ) in petitioned:
@@ -11081,7 +11185,7 @@ class DB( HydrusDB.HydrusDB ):
         
         if account.HasPermission( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.PERMISSION_ACTION_PETITION ):
             
-            pending = self._c.execute( 'SELECT bad_tag_id, good_tag_id, reason_id FROM tag_sibling_petitions WHERE service_id = ? AND status = ? ORDER BY reason_id LIMIT 100;', ( service_id, HC.CONTENT_STATUS_PENDING ) ).fetchall()
+            pending = self._Execute( 'SELECT bad_tag_id, good_tag_id, reason_id FROM tag_sibling_petitions WHERE service_id = ? AND status = ? ORDER BY reason_id LIMIT 100;', ( service_id, HC.CONTENT_STATUS_PENDING ) ).fetchall()
             
             for ( bad_tag_id, good_tag_id, reason_id ) in pending:
@@ -11095,7 +11199,7 @@ class DB( HydrusDB.HydrusDB ):
                 
                 client_to_server_update.AddContent( HC.CONTENT_UPDATE_PEND, content, reason )
             
-            petitioned = self._c.execute( 'SELECT bad_tag_id, good_tag_id, reason_id FROM tag_sibling_petitions WHERE service_id = ? AND status = ? ORDER BY reason_id LIMIT 100;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ).fetchall()
+            petitioned = self._Execute( 'SELECT bad_tag_id, good_tag_id, reason_id FROM tag_sibling_petitions WHERE service_id = ? AND status = ? ORDER BY reason_id LIMIT 100;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ).fetchall()
             
             for ( bad_tag_id, good_tag_id, reason_id ) in petitioned:
@@ -11222,7 +11326,7 @@ class DB( HydrusDB.HydrusDB ):
         
         # we could be clever and do LIMIT and ORDER BY in the delete, but not all compilations of SQLite have that turned on, so let's KISS
         
-        tag_ids_to_timestamp = { tag_id : timestamp for ( tag_id, timestamp ) in self._c.execute( 'SELECT tag_id, timestamp FROM recent_tags WHERE service_id = ?;', ( service_id, ) ) }
+        tag_ids_to_timestamp = { tag_id : timestamp for ( tag_id, timestamp ) in self._Execute( 'SELECT tag_id, timestamp FROM recent_tags WHERE service_id = ?;', ( service_id, ) ) }
         
         def sort_key( key ):
@@ -11244,7 +11348,7 @@ class DB( HydrusDB.HydrusDB ):
         
         if len( decayed ) > 0:
             
-            self._c.executemany( 'DELETE FROM recent_tags WHERE service_id = ? AND tag_id = ?;', ( ( service_id, tag_id ) for tag_id in decayed ) )
+            self._ExecuteMany( 'DELETE FROM recent_tags WHERE service_id = ? AND tag_id = ?;', ( ( service_id, tag_id ) for tag_id in decayed ) )
         
         sorted_recent_tag_ids = newest_first[ : num_we_want ]
@@ -11273,10 +11377,10 @@ class DB( HydrusDB.HydrusDB ):
         
         hash_ids_counter = collections.Counter()
         
-        with HydrusDB.TemporaryIntegerTable( self._c, tag_ids, 'tag_id' ) as temp_table_name:
+        with self._MakeTemporaryIntegerTable( tag_ids, 'tag_id' ) as temp_table_name:
             
             # temp tags to mappings
-            cursor = self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_table_name, current_mappings_table_name ) )
+            cursor = self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( tag_id );'.format( temp_table_name, current_mappings_table_name ) )
             
             cancelled_hook = lambda: HydrusData.TimeHasPassedPrecise( stop_time_for_finding_files )
@@ -11313,7 +11417,7 @@ class DB( HydrusDB.HydrusDB ):
         
         for hash_id in hash_ids:
             
-            for tag_id in self._STI( self._c.execute( 'SELECT tag_id FROM ' + current_mappings_table_name + ' WHERE hash_id = ?;', ( hash_id, ) ) ):
+            for tag_id in self._STI( self._Execute( 'SELECT tag_id FROM ' + current_mappings_table_name + ' WHERE hash_id = ?;', ( hash_id, ) ) ):
                 
                 counter[ tag_id ] += 1
@@ -11352,7 +11456,7 @@ class DB( HydrusDB.HydrusDB ):
         
         current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
         
-        needed_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {} NATURAL JOIN files_info WHERE mime IN {} EXCEPT SELECT hash_id FROM remote_thumbnails WHERE service_id = ?;'.format( current_files_table_name, HydrusData.SplayListForDB( HC.MIMES_WITH_THUMBNAILS ) ), ( service_id, ) ) )
+        needed_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {} NATURAL JOIN files_info WHERE mime IN {} EXCEPT SELECT hash_id FROM remote_thumbnails WHERE service_id = ?;'.format( current_files_table_name, HydrusData.SplayListForDB( HC.MIMES_WITH_THUMBNAILS ) ), ( service_id, ) ) )
         
         needed_hashes = []
@@ -11364,7 +11468,7 @@ class DB( HydrusDB.HydrusDB ):
             
             if client_files_manager.LocklessHasThumbnail( hash ):
                 
-                self._c.execute( 'INSERT OR IGNORE INTO remote_thumbnails ( service_id, hash_id ) VALUES ( ?, ? );', ( service_id, hash_id ) )
+                self._Execute( 'INSERT OR IGNORE INTO remote_thumbnails ( service_id, hash_id ) VALUES ( ?, ? );', ( service_id, hash_id ) )
                 
            else:
@@ -11385,7 +11489,7 @@ class DB( HydrusDB.HydrusDB ):
         
         service_id = self.modules_services.GetServiceId( service_key )
         directory_id = self.modules_texts.GetTextId( dirname )
         
-        hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM service_directory_file_map WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) ) )
+        hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM service_directory_file_map WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) ) )
         
         hashes = self.modules_hashes_local_cache.GetHashes( hash_ids )
@@ -11396,7 +11500,7 @@ class DB( HydrusDB.HydrusDB ):
         
         service_id = self.modules_services.GetServiceId( service_key )
         
-        incomplete_info = self._c.execute( 'SELECT directory_id, num_files, total_size, note FROM service_directories WHERE service_id = ?;', ( service_id, ) ).fetchall()
+        incomplete_info = self._Execute( 'SELECT directory_id, num_files, total_size, note FROM service_directories WHERE service_id = ?;', ( service_id, ) ).fetchall()
         
         info = [ ( self.modules_texts.GetText( directory_id ), num_files, total_size, note ) for ( directory_id, num_files, total_size, note ) in incomplete_info ]
@@ -11405,7 +11509,7 @@ class DB( HydrusDB.HydrusDB ):
     
     def _GetServiceFilename( self, service_id, hash_id ):
         
-        result = self._c.execute( 'SELECT filename FROM service_filenames WHERE service_id = ? AND hash_id = ?;', ( service_id, hash_id ) ).fetchone()
+        result = self._Execute( 'SELECT filename FROM service_filenames WHERE service_id = ? AND hash_id = ?;', ( service_id, hash_id ) ).fetchone()
         
         if result is None:
@@ -11422,7 +11526,7 @@ class DB( HydrusDB.HydrusDB ):
         
         service_id = self.modules_services.GetServiceId( service_key )
         hash_ids = self.modules_hashes_local_cache.GetHashIds( hashes )
         
-        result = sorted( ( filename for ( filename, ) in self._c.execute( 'SELECT filename FROM service_filenames WHERE service_id = ? AND hash_id IN ' + HydrusData.SplayListForDB( hash_ids ) + ';', ( service_id, ) ) ) )
+        result = sorted( ( filename for ( filename, ) in self._Execute( 'SELECT filename FROM service_filenames WHERE service_id = ? AND hash_id IN ' + HydrusData.SplayListForDB( hash_ids ) + ';', ( service_id, ) ) ) )
         
         return result
@@ -11477,7 +11581,7 @@ class DB( HydrusDB.HydrusDB ):
         
         info_types = set( info_types )
         
-        results = { info_type : info for ( info_type, info ) in self._c.execute( 'SELECT info_type, info FROM service_info WHERE service_id = ?
AND info_type IN ' + HydrusData.SplayListForDB( info_types ) + ';', ( service_id, ) ) } + results = { info_type : info for ( info_type, info ) in self._Execute( 'SELECT info_type, info FROM service_info WHERE service_id = ? AND info_type IN ' + HydrusData.SplayListForDB( info_types ) + ';', ( service_id, ) ) } if len( results ) != len( info_types ): @@ -11543,7 +11647,7 @@ class DB( HydrusDB.HydrusDB ): tags_table_name = self._CacheTagsGetTagsTableName( self.modules_services.combined_file_service_id, service_id ) - ( info, ) = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( tags_table_name ) ).fetchone() + ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( tags_table_name ) ).fetchone() elif info_type == HC.SERVICE_INFO_NUM_MAPPINGS: @@ -11551,7 +11655,7 @@ class DB( HydrusDB.HydrusDB ): column_name = 'current_count' - result = self._c.execute( 'SELECT SUM( {} ) FROM {};'.format( column_name, ac_cache_table_name ) ).fetchone() + result = self._Execute( 'SELECT SUM( {} ) FROM {};'.format( column_name, ac_cache_table_name ) ).fetchone() if result is None or result[0] is None: @@ -11582,33 +11686,33 @@ class DB( HydrusDB.HydrusDB ): elif info_type == HC.SERVICE_INFO_NUM_PENDING_TAG_SIBLINGS: - ( info, ) = self._c.execute( 'SELECT COUNT( * ) FROM tag_sibling_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PENDING ) ).fetchone() + ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM tag_sibling_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PENDING ) ).fetchone() elif info_type == HC.SERVICE_INFO_NUM_PETITIONED_TAG_SIBLINGS: - ( info, ) = self._c.execute( 'SELECT COUNT( * ) FROM tag_sibling_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ).fetchone() + ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM tag_sibling_petitions WHERE service_id = ? 
AND status = ?;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ).fetchone() elif info_type == HC.SERVICE_INFO_NUM_PENDING_TAG_PARENTS: - ( info, ) = self._c.execute( 'SELECT COUNT( * ) FROM tag_parent_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PENDING ) ).fetchone() + ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM tag_parent_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PENDING ) ).fetchone() elif info_type == HC.SERVICE_INFO_NUM_PETITIONED_TAG_PARENTS: - ( info, ) = self._c.execute( 'SELECT COUNT( * ) FROM tag_parent_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ).fetchone() + ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM tag_parent_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ).fetchone() elif service_type in ( HC.LOCAL_RATING_LIKE, HC.LOCAL_RATING_NUMERICAL ): if info_type == HC.SERVICE_INFO_NUM_FILES: - ( info, ) = self._c.execute( 'SELECT COUNT( * ) FROM local_ratings WHERE service_id = ?;', ( service_id, ) ).fetchone() + ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM local_ratings WHERE service_id = ?;', ( service_id, ) ).fetchone() elif service_type == HC.LOCAL_BOORU: if info_type == HC.SERVICE_INFO_NUM_SHARES: - ( info, ) = self._c.execute( 'SELECT COUNT( * ) FROM yaml_dumps WHERE dump_type = ?;', ( ClientDBSerialisable.YAML_DUMP_ID_LOCAL_BOORU, ) ).fetchone() + ( info, ) = self._Execute( 'SELECT COUNT( * ) FROM yaml_dumps WHERE dump_type = ?;', ( ClientDBSerialisable.YAML_DUMP_ID_LOCAL_BOORU, ) ).fetchone() @@ -11619,7 +11723,7 @@ class DB( HydrusDB.HydrusDB ): if save_it: - self._c.execute( 'INSERT INTO service_info ( service_id, info_type, info ) VALUES ( ?, ?, ? );', ( service_id, info_type, info ) ) + self._Execute( 'INSERT INTO service_info ( service_id, info_type, info ) VALUES ( ?, ?, ? 
);', ( service_id, info_type, info ) ) results[ info_type ] = info @@ -11631,13 +11735,13 @@ class DB( HydrusDB.HydrusDB ): def _GetSiteId( self, name ): - result = self._c.execute( 'SELECT site_id FROM imageboard_sites WHERE name = ?;', ( name, ) ).fetchone() + result = self._Execute( 'SELECT site_id FROM imageboard_sites WHERE name = ?;', ( name, ) ).fetchone() if result is None: - self._c.execute( 'INSERT INTO imageboard_sites ( name ) VALUES ( ? );', ( name, ) ) + self._Execute( 'INSERT INTO imageboard_sites ( name ) VALUES ( ? );', ( name, ) ) - site_id = self._c.lastrowid + site_id = self._GetLastRowId() else: @@ -11694,7 +11798,7 @@ class DB( HydrusDB.HydrusDB ): if subtag_wildcard == '*': # hellmode, but shouldn't be called normally - cursor = self._c.execute( 'SELECT docid FROM {};'.format( subtags_fts4_table_name ) ) + cursor = self._Execute( 'SELECT docid FROM {};'.format( subtags_fts4_table_name ) ) elif ClientSearch.IsComplexWildcard( subtag_wildcard ) or not wildcard_has_fts4_searchable_characters: @@ -11712,7 +11816,7 @@ class DB( HydrusDB.HydrusDB ): query = 'SELECT docid FROM {} WHERE subtag LIKE ?;'.format( subtags_fts4_table_name ) - cursor = self._c.execute( query, ( like_param, ) ) + cursor = self._Execute( query, ( like_param, ) ) else: @@ -11724,7 +11828,7 @@ class DB( HydrusDB.HydrusDB ): query = 'SELECT docid FROM {} WHERE subtag MATCH ? 
AND subtag LIKE ?;'.format( subtags_fts4_table_name ) - cursor = self._c.execute( query, ( prefix_fts4_wildcard_param, like_param ) ) + cursor = self._Execute( query, ( prefix_fts4_wildcard_param, like_param ) ) else: @@ -11736,7 +11840,7 @@ class DB( HydrusDB.HydrusDB ): subtags_fts4_param = '"{}"'.format( subtag_wildcard ) - cursor = self._c.execute( 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name ), ( subtags_fts4_param, ) ) + cursor = self._Execute( 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name ), ( subtags_fts4_param, ) ) cancelled_hook = None @@ -11766,7 +11870,7 @@ class DB( HydrusDB.HydrusDB ): searchable_subtag_id = self.modules_tags.GetSubtagId( searchable_subtag ) - loop_of_subtag_ids = self._STS( self._c.execute( 'SELECT subtag_id FROM {} WHERE searchable_subtag_id = ?;'.format( subtags_searchable_map_table_name ), ( searchable_subtag_id, ) ) ) + loop_of_subtag_ids = self._STS( self._Execute( 'SELECT subtag_id FROM {} WHERE searchable_subtag_id = ?;'.format( subtags_searchable_map_table_name ), ( searchable_subtag_id, ) ) ) loop_of_subtag_ids.add( searchable_subtag_id ) @@ -11809,7 +11913,7 @@ class DB( HydrusDB.HydrusDB ): if subtag_wildcard == '*': # hellmode, but shouldn't be called normally - cursor = self._c.execute( 'SELECT docid FROM {};'.format( subtags_fts4_table_name ) ) + cursor = self._Execute( 'SELECT docid FROM {};'.format( subtags_fts4_table_name ) ) elif ClientSearch.IsComplexWildcard( subtag_wildcard ) or not wildcard_has_fts4_searchable_characters: @@ -11827,7 +11931,7 @@ class DB( HydrusDB.HydrusDB ): query = 'SELECT docid FROM {} WHERE subtag LIKE ?;'.format( subtags_fts4_table_name ) - cursor = self._c.execute( query, ( like_param, ) ) + cursor = self._Execute( query, ( like_param, ) ) else: @@ -11839,7 +11943,7 @@ class DB( HydrusDB.HydrusDB ): query = 'SELECT docid FROM {} WHERE subtag MATCH ? 
AND subtag LIKE ?;'.format( subtags_fts4_table_name ) - cursor = self._c.execute( query, ( prefix_fts4_wildcard_param, like_param ) ) + cursor = self._Execute( query, ( prefix_fts4_wildcard_param, like_param ) ) else: @@ -11851,7 +11955,7 @@ class DB( HydrusDB.HydrusDB ): subtags_fts4_param = '"{}"'.format( subtag_wildcard ) - cursor = self._c.execute( 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name ), ( subtags_fts4_param, ) ) + cursor = self._Execute( 'SELECT docid FROM {} WHERE subtag MATCH ?;'.format( subtags_fts4_table_name ), ( subtags_fts4_param, ) ) cancelled_hook = None @@ -11863,7 +11967,7 @@ class DB( HydrusDB.HydrusDB ): loop_of_subtag_id_tuples = HydrusDB.ReadFromCancellableCursor( cursor, 1024, cancelled_hook = cancelled_hook ) - self._c.executemany( 'INSERT OR IGNORE INTO {} ( subtag_id ) VALUES ( ? );'.format( subtag_id_table_name ), loop_of_subtag_id_tuples ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( subtag_id ) VALUES ( ? );'.format( subtag_id_table_name ), loop_of_subtag_id_tuples ) else: @@ -11883,15 +11987,15 @@ class DB( HydrusDB.HydrusDB ): searchable_subtag_id = self.modules_tags.GetSubtagId( searchable_subtag ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id ) VALUES ( ? );'.format( subtag_id_table_name ), ( searchable_subtag_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( subtag_id ) VALUES ( ? 
);'.format( subtag_id_table_name ), ( searchable_subtag_id, ) ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( subtag_id ) SELECT subtag_id FROM {} WHERE searchable_subtag_id = ?;'.format( subtag_id_table_name, subtags_searchable_map_table_name ), ( searchable_subtag_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( subtag_id ) SELECT subtag_id FROM {} WHERE searchable_subtag_id = ?;'.format( subtag_id_table_name, subtags_searchable_map_table_name ), ( searchable_subtag_id, ) ) if job_key is not None and job_key.IsCancelled(): - self._c.execute( 'DELETE FROM {};'.format( subtag_id_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( subtag_id_table_name ) ) return @@ -11907,7 +12011,7 @@ class DB( HydrusDB.HydrusDB ): final_result_tag_ids = set() - with HydrusDB.TemporaryIntegerTable( self._c, namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name: + with self._MakeTemporaryIntegerTable( namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name: if tag_service_id == self.modules_services.combined_tag_service_id: @@ -11926,12 +12030,12 @@ class DB( HydrusDB.HydrusDB ): ( namespace_id, ) = namespace_ids - cursor = self._c.execute( 'SELECT tag_id FROM {} WHERE namespace_id = ?;'.format( tags_table_name ), ( namespace_id, ) ) + cursor = self._Execute( 'SELECT tag_id FROM {} WHERE namespace_id = ?;'.format( tags_table_name ), ( namespace_id, ) ) else: # temp namespaces to tags - cursor = self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( namespace_id );'.format( temp_namespace_ids_table_name, tags_table_name ) ) + cursor = self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( namespace_id );'.format( temp_namespace_ids_table_name, tags_table_name ) ) cancelled_hook = None @@ -11965,9 +12069,9 @@ class DB( HydrusDB.HydrusDB ): return set() - with HydrusDB.TemporaryIntegerTable( self._c, subtag_ids, 'subtag_id' ) as temp_subtag_ids_table_name: + with self._MakeTemporaryIntegerTable( subtag_ids, 'subtag_id' ) as 
temp_subtag_ids_table_name: - with HydrusDB.TemporaryIntegerTable( self._c, namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name: + with self._MakeTemporaryIntegerTable( namespace_ids, 'namespace_id' ) as temp_namespace_ids_table_name: return self._GetTagIdsFromNamespaceIdsSubtagIdsTables( file_service_id, tag_service_id, temp_namespace_ids_table_name, temp_subtag_ids_table_name, job_key = job_key ) @@ -11992,7 +12096,7 @@ class DB( HydrusDB.HydrusDB ): tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, search_tag_service_id ) # temp subtags to tags to temp namespaces - cursor = self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id ) CROSS JOIN {} USING ( namespace_id );'.format( subtag_ids_table_name, tags_table_name, namespace_ids_table_name ) ) + cursor = self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id ) CROSS JOIN {} USING ( namespace_id );'.format( subtag_ids_table_name, tags_table_name, namespace_ids_table_name ) ) cancelled_hook = None @@ -12024,7 +12128,7 @@ class DB( HydrusDB.HydrusDB ): return set() - with HydrusDB.TemporaryIntegerTable( self._c, subtag_ids, 'subtag_id' ) as temp_subtag_ids_table_name: + with self._MakeTemporaryIntegerTable( subtag_ids, 'subtag_id' ) as temp_subtag_ids_table_name: return self._GetTagIdsFromSubtagIdsTable( file_service_id, tag_service_id, temp_subtag_ids_table_name, job_key = job_key ) @@ -12048,7 +12152,7 @@ class DB( HydrusDB.HydrusDB ): tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, search_tag_service_id ) # temp subtags to tags - cursor = self._c.execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id );'.format( subtag_ids_table_name, tags_table_name ) ) + cursor = self._Execute( 'SELECT tag_id FROM {} CROSS JOIN {} USING ( subtag_id );'.format( subtag_ids_table_name, tags_table_name ) ) cancelled_hook = None @@ -12101,7 +12205,7 @@ class DB( HydrusDB.HydrusDB ): def _GetTagParentsIds( self, service_id ): - 
statuses_and_pair_ids = self._c.execute( 'SELECT status, child_tag_id, parent_tag_id FROM tag_parents WHERE service_id = ? UNION SELECT status, child_tag_id, parent_tag_id FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall() + statuses_and_pair_ids = self._Execute( 'SELECT status, child_tag_id, parent_tag_id FROM tag_parents WHERE service_id = ? UNION SELECT status, child_tag_id, parent_tag_id FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall() unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( child_tag_id, parent_tag_id ) ) for ( status, child_tag_id, parent_tag_id ) in statuses_and_pair_ids ) @@ -12130,7 +12234,7 @@ class DB( HydrusDB.HydrusDB ): tag_ids_seen_this_round.update( self._CacheTagSiblingsGetChainsMembersFromIdeals( ClientTags.TAG_DISPLAY_IDEAL, service_id, ideal_tag_ids ) ) - with HydrusDB.TemporaryIntegerTable( self._c, next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name: searched_tag_ids.update( next_tag_ids ) @@ -12146,7 +12250,7 @@ class DB( HydrusDB.HydrusDB ): query = ' UNION '.join( queries ) - for row in self._c.execute( query, ( service_id, service_id, service_id, service_id ) ): + for row in self._Execute( query, ( service_id, service_id, service_id, service_id ) ): result_rows.add( row ) @@ -12196,7 +12300,7 @@ class DB( HydrusDB.HydrusDB ): def _GetTagSiblingsIds( self, service_id ): - statuses_and_pair_ids = self._c.execute( 'SELECT status, bad_tag_id, good_tag_id FROM tag_siblings WHERE service_id = ? UNION SELECT status, bad_tag_id, good_tag_id FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall() + statuses_and_pair_ids = self._Execute( 'SELECT status, bad_tag_id, good_tag_id FROM tag_siblings WHERE service_id = ? 
UNION SELECT status, bad_tag_id, good_tag_id FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, service_id ) ).fetchall() unsorted_statuses_to_pair_ids = HydrusData.BuildKeyToListDict( ( status, ( bad_tag_id, good_tag_id ) ) for ( status, bad_tag_id, good_tag_id ) in statuses_and_pair_ids ) @@ -12215,7 +12319,7 @@ class DB( HydrusDB.HydrusDB ): while len( next_tag_ids ) > 0: - with HydrusDB.TemporaryIntegerTable( self._c, next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( next_tag_ids, 'tag_id' ) as temp_next_tag_ids_table_name: done_tag_ids.update( next_tag_ids ) @@ -12233,7 +12337,7 @@ class DB( HydrusDB.HydrusDB ): query = ' UNION '.join( queries ) - for row in self._c.execute( query, ( service_id, service_id, service_id, service_id ) ): + for row in self._Execute( query, ( service_id, service_id, service_id, service_id ) ): result_rows.add( row ) @@ -12283,7 +12387,7 @@ class DB( HydrusDB.HydrusDB ): current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.trash_service_id, HC.CONTENT_STATUS_CURRENT ) - hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {}{}{};'.format( current_files_table_name, age_phrase, limit_phrase ) ) ) + hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {}{}{};'.format( current_files_table_name, age_phrase, limit_phrase ) ) ) hash_ids = self._FilterForFileDeleteLock( self.modules_services.trash_service_id, hash_ids ) @@ -12323,7 +12427,7 @@ class DB( HydrusDB.HydrusDB ): for search_url in search_urls: - results = self._STS( self._c.execute( 'SELECT hash_id FROM url_map NATURAL JOIN urls WHERE url = ?;', ( search_url, ) ) ) + results = self._STS( self._Execute( 'SELECT hash_id FROM url_map NATURAL JOIN urls WHERE url = ?;', ( search_url, ) ) ) hash_ids.update( results ) @@ -12346,9 +12450,9 @@ class DB( HydrusDB.HydrusDB ): count = 0 - with HydrusDB.TemporaryIntegerTable( self._c, with_these_tag_ids, 'tag_id' ) as 
temp_with_these_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( with_these_tag_ids, 'tag_id' ) as temp_with_these_tag_ids_table_name: - with HydrusDB.TemporaryIntegerTable( self._c, without_these_tag_ids, 'tag_id' ) as temp_without_these_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( without_these_tag_ids, 'tag_id' ) as temp_without_these_tag_ids_table_name: for ( file_service_id, batch_of_hash_ids ) in file_service_ids_to_hash_ids.items(): @@ -12358,7 +12462,7 @@ class DB( HydrusDB.HydrusDB ): else: - with HydrusDB.TemporaryIntegerTable( self._c, batch_of_hash_ids, 'hash_id' ) as temp_batch_hash_ids_table_name: + with self._MakeTemporaryIntegerTable( batch_of_hash_ids, 'hash_id' ) as temp_batch_hash_ids_table_name: subcount = self._GetWithAndWithoutTagsForFilesFileCountFileService( status, file_service_id, tag_service_id, with_these_tag_ids, temp_with_these_tag_ids_table_name, without_these_tag_ids, temp_without_these_tag_ids_table_name, batch_of_hash_ids, temp_batch_hash_ids_table_name ) @@ -12409,12 +12513,11 @@ class DB( HydrusDB.HydrusDB ): hash_ids_weight = len( hash_ids ) - # ultimately here, we are doing "delete all display mappings with hash_ids that have a storage mapping for a removee tag and no storage mappings for a keep tag # in order to reduce overhead, we go full meme and do a bunch of different situations - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_with_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_with_tag_ids_table_name: - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_without_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_without_tag_ids_table_name: if DoingAFileJoinTagSearchIsFaster( hash_ids_weight, with_tag_ids_weight ): @@ -12436,22 +12539,25 @@ class DB( HydrusDB.HydrusDB ): else: - select_with_hash_ids_on_storage = 'SELECT hash_id FROM {} WHERE tag_id = {}'.format( mappings_table_name, 
with_tag_id ) + # mappings to temp files + select_with_hash_ids_on_storage = 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) WHERE tag_id = {}'.format( mappings_table_name, hash_ids_table_name, with_tag_id ) else: - self._c.executemany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_with_tag_ids_table_name ), ( ( with_tag_id, ) for with_tag_id in with_tag_ids ) ) + # distinct as with many tags hashes can appear twice (e.g. two siblings on the same file) + + self._ExecuteMany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_with_tag_ids_table_name ), ( ( with_tag_id, ) for with_tag_id in with_tag_ids ) ) if DoingAFileJoinTagSearchIsFaster( hash_ids_weight, with_tag_ids_weight ): - # temp files to mappings to tags + # temp files to mappings to temp tags select_with_hash_ids_on_storage = 'SELECT DISTINCT hash_id FROM {} CROSS JOIN {} USING ( hash_id ) CROSS JOIN {} USING ( tag_id )'.format( hash_ids_table_name, mappings_table_name, temp_with_tag_ids_table_name ) else: - # temp tags to mappings - select_with_hash_ids_on_storage = 'SELECT DISTINCT hash_id FROM {} CROSS JOIN {} USING ( tag_id )'.format( temp_with_tag_ids_table_name, mappings_table_name ) + # temp tags to mappings to temp files + select_with_hash_ids_on_storage = 'SELECT DISTINCT hash_id FROM {} CROSS JOIN {} USING ( tag_id ) CROSS JOIN {} USING ( hash_id )'.format( temp_with_tag_ids_table_name, mappings_table_name, hash_ids_table_name ) @@ -12462,9 +12568,9 @@ class DB( HydrusDB.HydrusDB ): else: # WARNING, WARNING: Big Brain Query, potentially great/awful - # note that in the 'clever/file join' situation, the number of total mappings is many, but we are deleting a few - # we want to precisely scan the status of the potential hashes to delete, not scan through them all to see what not to do - # therefore, we do NOT EXISTS, which just scans the parts, rather than NOT IN, which does the whole query and then checks against all results + # note that in the 'clever/file join' situation, 
the number of total mappings is many, but we are dealing with a few files + # in that situation, we want to say 'for every file in this list, check if it exists'. this is the 'NOT EXISTS' thing + # when we have lots of files, tag lookups are generally faster, so easier just to search by that tag in one go and check each file against that subquery result. this is 'hash_id NOT IN' if len( without_tag_ids ) == 1: @@ -12472,6 +12578,7 @@ class DB( HydrusDB.HydrusDB ): if DoingAFileJoinTagSearchIsFaster( select_with_weight, without_tag_ids_weight ): + # (files to) mappings hash_id_not_in_storage_without = 'NOT EXISTS ( SELECT 1 FROM {} as mt2 WHERE mt1.hash_id = mt2.hash_id and tag_id = {} )'.format( mappings_table_name, without_tag_id ) else: @@ -12481,7 +12588,7 @@ class DB( HydrusDB.HydrusDB ): else: - self._c.executemany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_without_tag_ids_table_name ), ( ( without_tag_id, ) for without_tag_id in without_tag_ids ) ) + self._ExecuteMany( 'INSERT INTO {} ( tag_id ) VALUES ( ? 
);'.format( temp_without_tag_ids_table_name ), ( ( without_tag_id, ) for without_tag_id in without_tag_ids ) ) if DoingAFileJoinTagSearchIsFaster( select_with_weight, without_tag_ids_weight ): @@ -12490,7 +12597,7 @@ class DB( HydrusDB.HydrusDB ): else: - # temp tags to mappings + # temp tags to mappings to temp files hash_id_not_in_storage_without = 'hash_id NOT IN ( SELECT DISTINCT hash_id FROM {} CROSS JOIN {} USING ( tag_id ) )'.format( temp_without_tag_ids_table_name, mappings_table_name ) @@ -12500,7 +12607,7 @@ class DB( HydrusDB.HydrusDB ): query = 'SELECT COUNT ( * ) FROM {};'.format( table_phrase ) - ( count, ) = self._c.execute( query ).fetchone() + ( count, ) = self._Execute( query ).fetchone() return count @@ -12540,9 +12647,9 @@ class DB( HydrusDB.HydrusDB ): # ultimately here, we are doing "delete all display mappings with hash_ids that have a storage mapping for a removee tag and no storage mappings for a keep tag # in order to reduce overhead, we go full meme and do a bunch of different situations - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_with_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_with_tag_ids_table_name: - with HydrusDB.TemporaryIntegerTable( self._c, [], 'tag_id' ) as temp_without_tag_ids_table_name: + with self._MakeTemporaryIntegerTable( [], 'tag_id' ) as temp_without_tag_ids_table_name: if len( with_tag_ids ) == 1: @@ -12552,7 +12659,7 @@ class DB( HydrusDB.HydrusDB ): else: - self._c.executemany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_with_tag_ids_table_name ), ( ( with_tag_id, ) for with_tag_id in with_tag_ids ) ) + self._ExecuteMany( 'INSERT INTO {} ( tag_id ) VALUES ( ? 
);'.format( temp_with_tag_ids_table_name ), ( ( with_tag_id, ) for with_tag_id in with_tag_ids ) ) # temp tags to mappings select_with_hash_ids_on_storage = 'SELECT DISTINCT hash_id FROM {} CROSS JOIN {} USING ( tag_id )'.format( temp_with_tag_ids_table_name, mappings_table_name ) @@ -12584,7 +12691,7 @@ class DB( HydrusDB.HydrusDB ): else: - self._c.executemany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_without_tag_ids_table_name ), ( ( without_tag_id, ) for without_tag_id in without_tag_ids ) ) + self._ExecuteMany( 'INSERT INTO {} ( tag_id ) VALUES ( ? );'.format( temp_without_tag_ids_table_name ), ( ( without_tag_id, ) for without_tag_id in without_tag_ids ) ) if DoingAFileJoinTagSearchIsFaster( with_tag_ids_weight, without_tag_ids_weight ): @@ -12603,7 +12710,7 @@ class DB( HydrusDB.HydrusDB ): query = 'SELECT COUNT ( * ) FROM {};'.format( table_phrase ) - ( count, ) = self._c.execute( query ).fetchone() + ( count, ) = self._Execute( query ).fetchone() statuses_to_count[ status ] = count @@ -12681,7 +12788,7 @@ class DB( HydrusDB.HydrusDB ): def _HashExists( self, hash ): - result = self._c.execute( 'SELECT 1 FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone() if result is None: @@ -12751,7 +12858,7 @@ class DB( HydrusDB.HydrusDB ): file_modified_timestamp = file_import_job.GetFileModifiedTimestamp() - self._c.execute( 'REPLACE INTO file_modified_timestamps ( hash_id, file_modified_timestamp ) VALUES ( ?, ? );', ( hash_id, file_modified_timestamp ) ) + self._Execute( 'REPLACE INTO file_modified_timestamps ( hash_id, file_modified_timestamp ) VALUES ( ?, ? );', ( hash_id, file_modified_timestamp ) ) # @@ -12864,7 +12971,7 @@ class DB( HydrusDB.HydrusDB ): if len( service_ids_to_counts ) > 0: - self._c.executemany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? 
AND info_type = ?;', [ ( count, service_id, HC.SERVICE_INFO_NUM_INBOX ) for ( service_id, count ) in service_ids_to_counts.items() ] ) + self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', [ ( count, service_id, HC.SERVICE_INFO_NUM_INBOX ) for ( service_id, count ) in service_ids_to_counts.items() ] ) @@ -13044,7 +13151,7 @@ class DB( HydrusDB.HydrusDB ): def _MigrationClearJob( self, database_temp_job_name ): - self._c.execute( 'DROP TABLE {};'.format( database_temp_job_name ) ) + self._Execute( 'DROP TABLE {};'.format( database_temp_job_name ) ) def _MigrationGetMappings( self, database_temp_job_name, file_service_key, tag_service_key, hash_type, tag_filter, content_statuses ): @@ -13073,7 +13180,7 @@ class DB( HydrusDB.HydrusDB ): while not we_should_stop: - result = self._c.execute( 'SELECT hash_id FROM {};'.format( database_temp_job_name ) ).fetchone() + result = self._Execute( 'SELECT hash_id FROM {};'.format( database_temp_job_name ) ).fetchone() if result is None: @@ -13082,7 +13189,7 @@ class DB( HydrusDB.HydrusDB ): ( hash_id, ) = result - self._c.execute( 'DELETE FROM {} WHERE hash_id = ?;'.format( database_temp_job_name ), ( hash_id, ) ) + self._Execute( 'DELETE FROM {} WHERE hash_id = ?;'.format( database_temp_job_name ), ( hash_id, ) ) if hash_type == 'sha256': @@ -13104,7 +13211,7 @@ class DB( HydrusDB.HydrusDB ): for select_query in select_queries: - tag_ids = self._STL( self._c.execute( select_query, ( hash_id, ) ) ) + tag_ids = self._STL( self._Execute( select_query, ( hash_id, ) ) ) tag_ids_to_tags = self.modules_tags_local_cache.GetTagIdsToTags( tag_ids = tag_ids ) @@ -13137,7 +13244,7 @@ class DB( HydrusDB.HydrusDB ): while not we_should_stop: - result = self._c.execute( 'SELECT left_tag_id, right_tag_id FROM {};'.format( database_temp_job_name ) ).fetchone() + result = self._Execute( 'SELECT left_tag_id, right_tag_id FROM {};'.format( database_temp_job_name ) ).fetchone() if result is None: @@ 
-13146,7 +13253,7 @@ class DB( HydrusDB.HydrusDB ): ( left_tag_id, right_tag_id ) = result - self._c.execute( 'DELETE FROM {} WHERE left_tag_id = ? AND right_tag_id = ?;'.format( database_temp_job_name ), ( left_tag_id, right_tag_id ) ) + self._Execute( 'DELETE FROM {} WHERE left_tag_id = ? AND right_tag_id = ?;'.format( database_temp_job_name ), ( left_tag_id, right_tag_id ) ) left_tag = self.modules_tags_local_cache.GetTag( left_tag_id ) @@ -13174,13 +13281,13 @@ class DB( HydrusDB.HydrusDB ): file_service_id = self.modules_services.GetServiceId( file_service_key ) - self._c.execute( 'CREATE TABLE durable_temp.{} ( hash_id INTEGER PRIMARY KEY );'.format( database_temp_job_name ) ) + self._Execute( 'CREATE TABLE durable_temp.{} ( hash_id INTEGER PRIMARY KEY );'.format( database_temp_job_name ) ) if hashes is not None: hash_ids = self.modules_hashes_local_cache.GetHashIds( hashes ) - self._c.executemany( 'INSERT INTO {} ( hash_id ) VALUES ( ? );'.format( database_temp_job_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'INSERT INTO {} ( hash_id ) VALUES ( ? 
);'.format( database_temp_job_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) else: @@ -13262,14 +13369,14 @@ class DB( HydrusDB.HydrusDB ): for select_subquery in select_subqueries: - self._c.execute( 'INSERT OR IGNORE INTO {} ( hash_id ) {};'.format( database_temp_job_name, select_subquery ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id ) {};'.format( database_temp_job_name, select_subquery ) ) def _MigrationStartPairsJob( self, database_temp_job_name, tag_service_key, content_type, content_statuses ): - self._c.execute( 'CREATE TABLE durable_temp.{} ( left_tag_id INTEGER, right_tag_id INTEGER, PRIMARY KEY ( left_tag_id, right_tag_id ) );'.format( database_temp_job_name ) ) + self._Execute( 'CREATE TABLE durable_temp.{} ( left_tag_id INTEGER, right_tag_id INTEGER, PRIMARY KEY ( left_tag_id, right_tag_id ) );'.format( database_temp_job_name ) ) tag_service_id = self.modules_services.GetServiceId( tag_service_key ) @@ -13288,7 +13395,7 @@ class DB( HydrusDB.HydrusDB ): for source_table_name in source_table_names: - self._c.execute( 'INSERT OR IGNORE INTO {} ( left_tag_id, right_tag_id ) SELECT {}, {} FROM {} WHERE service_id = ? AND status IN {};'.format( database_temp_job_name, left_column_name, right_column_name, source_table_name, HydrusData.SplayListForDB( content_statuses ) ), ( tag_service_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( left_tag_id, right_tag_id ) SELECT {}, {} FROM {} WHERE service_id = ? 
AND status IN {};'.format( database_temp_job_name, left_column_name, right_column_name, source_table_name, HydrusData.SplayListForDB( content_statuses ) ), ( tag_service_id, ) ) @@ -13306,7 +13413,7 @@ class DB( HydrusDB.HydrusDB ): num_done = 0 still_work_to_do = True - group_of_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM shape_search_cache WHERE searched_distance IS NULL or searched_distance < ?;', ( search_distance, ) ).fetchmany( 10 ) ) + group_of_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM shape_search_cache WHERE searched_distance IS NULL or searched_distance < ?;', ( search_distance, ) ).fetchmany( 10 ) ) while len( group_of_hash_ids ) > 0: @@ -13344,12 +13451,12 @@ class DB( HydrusDB.HydrusDB ): self._DuplicatesAddPotentialDuplicates( media_id, potential_duplicate_media_ids_and_distances ) - self._c.execute( 'UPDATE shape_search_cache SET searched_distance = ? WHERE hash_id = ?;', ( search_distance, hash_id ) ) + self._Execute( 'UPDATE shape_search_cache SET searched_distance = ? WHERE hash_id = ?;', ( search_distance, hash_id ) ) num_done += 1 - group_of_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM shape_search_cache WHERE searched_distance IS NULL or searched_distance < ?;', ( search_distance, ) ).fetchmany( 10 ) ) + group_of_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM shape_search_cache WHERE searched_distance IS NULL or searched_distance < ?;', ( search_distance, ) ).fetchmany( 10 ) ) still_work_to_do = False @@ -13422,7 +13529,7 @@ class DB( HydrusDB.HydrusDB ): service_ids_to_nums_cleared = self.modules_files_storage.ClearLocalDeleteRecord( hash_ids ) - self._c.executemany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', ( ( -num_cleared, clear_service_id, HC.SERVICE_INFO_NUM_DELETED_FILES ) for ( clear_service_id, num_cleared ) in service_ids_to_nums_cleared.items() ) ) + self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? 
AND info_type = ?;', ( ( -num_cleared, clear_service_id, HC.SERVICE_INFO_NUM_DELETED_FILES ) for ( clear_service_id, num_cleared ) in service_ids_to_nums_cleared.items() ) ) elif action == HC.CONTENT_UPDATE_ADD: @@ -13569,7 +13676,7 @@ class DB( HydrusDB.HydrusDB ): url_ids = { self.modules_urls.GetURLId( url ) for url in urls } hash_ids = self.modules_hashes_local_cache.GetHashIds( hashes ) - self._c.executemany( 'INSERT OR IGNORE INTO url_map ( hash_id, url_id ) VALUES ( ?, ? );', itertools.product( hash_ids, url_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO url_map ( hash_id, url_id ) VALUES ( ?, ? );', itertools.product( hash_ids, url_ids ) ) elif action == HC.CONTENT_UPDATE_DELETE: @@ -13578,7 +13685,7 @@ class DB( HydrusDB.HydrusDB ): url_ids = { self.modules_urls.GetURLId( url ) for url in urls } hash_ids = self.modules_hashes_local_cache.GetHashIds( hashes ) - self._c.executemany( 'DELETE FROM url_map WHERE hash_id = ? AND url_id = ?;', itertools.product( hash_ids, url_ids ) ) + self._ExecuteMany( 'DELETE FROM url_map WHERE hash_id = ? AND url_id = ?;', itertools.product( hash_ids, url_ids ) ) elif data_type == HC.CONTENT_TYPE_FILE_VIEWING_STATS: @@ -13589,7 +13696,7 @@ class DB( HydrusDB.HydrusDB ): if action == 'clear': - self._c.execute( 'DELETE FROM file_viewing_stats;' ) + self._Execute( 'DELETE FROM file_viewing_stats;' ) elif action == HC.CONTENT_UPDATE_ADD: @@ -13598,9 +13705,9 @@ class DB( HydrusDB.HydrusDB ): hash_id = self.modules_hashes_local_cache.GetHashId( hash ) - self._c.execute( 'INSERT OR IGNORE INTO file_viewing_stats ( hash_id, preview_views, preview_viewtime, media_views, media_viewtime ) VALUES ( ?, ?, ?, ?, ? );', ( hash_id, 0, 0, 0, 0 ) ) + self._Execute( 'INSERT OR IGNORE INTO file_viewing_stats ( hash_id, preview_views, preview_viewtime, media_views, media_viewtime ) VALUES ( ?, ?, ?, ?, ? 
);', ( hash_id, 0, 0, 0, 0 ) ) - self._c.execute( 'UPDATE file_viewing_stats SET preview_views = preview_views + ?, preview_viewtime = preview_viewtime + ?, media_views = media_views + ?, media_viewtime = media_viewtime + ? WHERE hash_id = ?;', ( preview_views_delta, preview_viewtime_delta, media_views_delta, media_viewtime_delta, hash_id ) ) + self._Execute( 'UPDATE file_viewing_stats SET preview_views = preview_views + ?, preview_viewtime = preview_viewtime + ?, media_views = media_views + ?, media_viewtime = media_viewtime + ? WHERE hash_id = ?;', ( preview_views_delta, preview_viewtime_delta, media_views_delta, media_viewtime_delta, hash_id ) ) elif action == HC.CONTENT_UPDATE_DELETE: @@ -13608,7 +13715,7 @@ class DB( HydrusDB.HydrusDB ): hash_ids = self.modules_hashes_local_cache.GetHashIds( hashes ) - self._c.executemany( 'DELETE FROM file_viewing_stats WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM file_viewing_stats WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) @@ -13678,9 +13785,9 @@ class DB( HydrusDB.HydrusDB ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = ClientDBMappingsStorage.GenerateMappingsTableNames( service_id ) - self._c.executemany( 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( deleted_mappings_table_name ), ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE tag_id = ? AND hash_id = ?;'.format( deleted_mappings_table_name ), ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) - self._c.execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_DELETED_MAPPINGS ) ) + self._Execute( 'DELETE FROM service_info WHERE service_id = ? 
AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_DELETED_MAPPINGS ) ) cache_file_service_ids = self.modules_services.GetServiceIds( HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES ) @@ -13688,7 +13795,7 @@ class DB( HydrusDB.HydrusDB ): ( cache_current_mappings_table_name, cache_deleted_mappings_table_name, cache_pending_mappings_table_name ) = GenerateSpecificMappingsCacheTableNames( cache_file_service_id, service_id ) - self._c.executemany( 'DELETE FROM ' + cache_deleted_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + cache_deleted_mappings_table_name + ' WHERE hash_id = ? AND tag_id = ?;', ( ( hash_id, tag_id ) for hash_id in hash_ids ) ) @@ -13750,9 +13857,9 @@ class DB( HydrusDB.HydrusDB ): reason_id = self.modules_texts.GetTextId( reason ) - self._c.execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( service_id, child_tag_id, parent_tag_id ) ) + self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ?;', ( service_id, child_tag_id, parent_tag_id ) ) - self._c.execute( 'INSERT OR IGNORE INTO tag_parent_petitions ( service_id, child_tag_id, parent_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, child_tag_id, parent_tag_id, reason_id, new_status ) ) + self._Execute( 'INSERT OR IGNORE INTO tag_parent_petitions ( service_id, child_tag_id, parent_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, child_tag_id, parent_tag_id, reason_id, new_status ) ) changed_parent_tag_ids.update( ( child_tag_id, parent_tag_id ) ) @@ -13782,7 +13889,7 @@ class DB( HydrusDB.HydrusDB ): continue - self._c.execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? 
AND status = ?;', ( service_id, child_tag_id, parent_tag_id, deletee_status ) ) + self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND child_tag_id = ? AND parent_tag_id = ? AND status = ?;', ( service_id, child_tag_id, parent_tag_id, deletee_status ) ) changed_parent_tag_ids.update( ( child_tag_id, parent_tag_id ) ) @@ -13849,9 +13956,9 @@ class DB( HydrusDB.HydrusDB ): reason_id = self.modules_texts.GetTextId( reason ) - self._c.execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( service_id, bad_tag_id, good_tag_id ) ) + self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND good_tag_id = ?;', ( service_id, bad_tag_id, good_tag_id ) ) - self._c.execute( 'INSERT OR IGNORE INTO tag_sibling_petitions ( service_id, bad_tag_id, good_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, bad_tag_id, good_tag_id, reason_id, new_status ) ) + self._Execute( 'INSERT OR IGNORE INTO tag_sibling_petitions ( service_id, bad_tag_id, good_tag_id, reason_id, status ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, bad_tag_id, good_tag_id, reason_id, new_status ) ) changed_sibling_tag_ids.update( ( bad_tag_id, good_tag_id ) ) @@ -13881,7 +13988,7 @@ class DB( HydrusDB.HydrusDB ): continue - self._c.execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( service_id, bad_tag_id, deletee_status ) ) + self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? AND bad_tag_id = ? AND status = ?;', ( service_id, bad_tag_id, deletee_status ) ) changed_sibling_tag_ids.update( ( bad_tag_id, good_tag_id ) ) @@ -13905,18 +14012,18 @@ class DB( HydrusDB.HydrusDB ): ratings_added = 0 - self._c.executemany( 'DELETE FROM local_ratings WHERE service_id = ? AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM local_ratings WHERE service_id = ? 
AND hash_id = ?;', ( ( service_id, hash_id ) for hash_id in hash_ids ) ) - ratings_added -= HydrusDB.GetRowCount( self._c ) + ratings_added -= self._GetRowCount() if rating is not None: - self._c.executemany( 'INSERT INTO local_ratings ( service_id, hash_id, rating ) VALUES ( ?, ?, ? );', [ ( service_id, hash_id, rating ) for hash_id in hash_ids ] ) + self._ExecuteMany( 'INSERT INTO local_ratings ( service_id, hash_id, rating ) VALUES ( ?, ?, ? );', [ ( service_id, hash_id, rating ) for hash_id in hash_ids ] ) - ratings_added += HydrusDB.GetRowCount( self._c ) + ratings_added += self._GetRowCount() - self._c.execute( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', ( ratings_added, service_id, HC.SERVICE_INFO_NUM_FILES ) ) + self._Execute( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', ( ratings_added, service_id, HC.SERVICE_INFO_NUM_FILES ) ) elif action == HC.CONTENT_UPDATE_ADVANCED: @@ -13927,27 +14034,27 @@ class DB( HydrusDB.HydrusDB ): deleted_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_DELETED ) - self._c.execute( 'DELETE FROM local_ratings WHERE service_id = ? and hash_id IN ( SELECT hash_id FROM {} );'.format( deleted_files_table_name ), ( service_id, ) ) + self._Execute( 'DELETE FROM local_ratings WHERE service_id = ? and hash_id IN ( SELECT hash_id FROM {} );'.format( deleted_files_table_name ), ( service_id, ) ) - ratings_deleted = HydrusDB.GetRowCount( self._c ) + ratings_deleted = self._GetRowCount() - self._c.execute( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', ( ratings_deleted, service_id, HC.SERVICE_INFO_NUM_FILES ) ) + self._Execute( 'UPDATE service_info SET info = info - ? WHERE service_id = ? 
AND info_type = ?;', ( ratings_deleted, service_id, HC.SERVICE_INFO_NUM_FILES ) ) elif action == 'delete_for_non_local_files': current_files_table_name = ClientDBFilesStorage.GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT ) - self._c.execute( 'DELETE FROM local_ratings WHERE local_ratings.service_id = ? and hash_id NOT IN ( SELECT hash_id FROM {} );'.format( current_files_table_name ), ( service_id, ) ) + self._Execute( 'DELETE FROM local_ratings WHERE local_ratings.service_id = ? and hash_id NOT IN ( SELECT hash_id FROM {} );'.format( current_files_table_name ), ( service_id, ) ) - ratings_deleted = HydrusDB.GetRowCount( self._c ) + ratings_deleted = self._GetRowCount() - self._c.execute( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', ( ratings_deleted, service_id, HC.SERVICE_INFO_NUM_FILES ) ) + self._Execute( 'UPDATE service_info SET info = info - ? WHERE service_id = ? AND info_type = ?;', ( ratings_deleted, service_id, HC.SERVICE_INFO_NUM_FILES ) ) elif action == 'delete_for_all_files': - self._c.execute( 'DELETE FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM local_ratings WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'UPDATE service_info SET info = ? WHERE service_id = ? AND info_type = ?;', ( 0, service_id, HC.SERVICE_INFO_NUM_FILES ) ) + self._Execute( 'UPDATE service_info SET info = ? WHERE service_id = ? AND info_type = ?;', ( 0, service_id, HC.SERVICE_INFO_NUM_FILES ) ) @@ -13960,13 +14067,13 @@ class DB( HydrusDB.HydrusDB ): hash_id = self.modules_hashes_local_cache.GetHashId( hash ) name_id = self.modules_texts.GetLabelId( name ) - self._c.execute( 'DELETE FROM file_notes WHERE hash_id = ? AND name_id = ?;', ( hash_id, name_id ) ) + self._Execute( 'DELETE FROM file_notes WHERE hash_id = ? 
AND name_id = ?;', ( hash_id, name_id ) ) if len( note ) > 0: note_id = self.modules_texts.GetNoteId( note ) - self._c.execute( 'INSERT OR IGNORE INTO file_notes ( hash_id, name_id, note_id ) VALUES ( ?, ?, ? );', ( hash_id, name_id, note_id ) ) + self._Execute( 'INSERT OR IGNORE INTO file_notes ( hash_id, name_id, note_id ) VALUES ( ?, ?, ? );', ( hash_id, name_id, note_id ) ) elif action == HC.CONTENT_UPDATE_DELETE: @@ -13976,7 +14083,7 @@ class DB( HydrusDB.HydrusDB ): hash_id = self.modules_hashes_local_cache.GetHashId( hash ) name_id = self.modules_texts.GetLabelId( name ) - self._c.execute( 'DELETE FROM file_notes WHERE hash_id = ? AND name_id = ?;', ( hash_id, name_id ) ) + self._Execute( 'DELETE FROM file_notes WHERE hash_id = ? AND name_id = ?;', ( hash_id, name_id ) ) @@ -14333,7 +14440,7 @@ class DB( HydrusDB.HydrusDB ): if tags is None: - self._c.execute( 'DELETE FROM recent_tags WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM recent_tags WHERE service_id = ?;', ( service_id, ) ) else: @@ -14341,7 +14448,7 @@ class DB( HydrusDB.HydrusDB ): tag_ids = [ self.modules_tags.GetTagId( tag ) for tag in tags ] - self._c.executemany( 'REPLACE INTO recent_tags ( service_id, tag_id, timestamp ) VALUES ( ?, ?, ? );', ( ( service_id, tag_id, now ) for tag_id in tag_ids ) ) + self._ExecuteMany( 'REPLACE INTO recent_tags ( service_id, tag_id, timestamp ) VALUES ( ?, ?, ? 
);', ( ( service_id, tag_id, now ) for tag_id in tag_ids ) ) @@ -14441,7 +14548,7 @@ class DB( HydrusDB.HydrusDB ): temp_all_useful_definition_ids_table_name = 'durable_temp.all_useful_definition_ids_{}'.format( os.urandom( 8 ).hex() ) - self._c.execute( 'CREATE TABLE {} ( {} INTEGER PRIMARY KEY );'.format( temp_all_useful_definition_ids_table_name, definition_column_name ) ) + self._Execute( 'CREATE TABLE {} ( {} INTEGER PRIMARY KEY );'.format( temp_all_useful_definition_ids_table_name, definition_column_name ) ) try: @@ -14456,15 +14563,15 @@ class DB( HydrusDB.HydrusDB ): table_name ) - self._c.execute( query ) + self._Execute( query ) - num_to_do += HydrusDB.GetRowCount( self._c ) + num_to_do += self._GetRowCount() num_missing = 0 num_recovered = 0 - batch_of_definition_ids = self._c.execute( 'SELECT {} FROM {} LIMIT 1024;'.format( definition_column_name, temp_all_useful_definition_ids_table_name ) ) + batch_of_definition_ids = self._Execute( 'SELECT {} FROM {} LIMIT 1024;'.format( definition_column_name, temp_all_useful_definition_ids_table_name ) ) while len( batch_of_definition_ids ) > 1024: @@ -14476,7 +14583,7 @@ class DB( HydrusDB.HydrusDB ): hash = self.modules_hashes_local_cache.GetHash( definition_id ) - self._c.execute( 'INSERT OR IGNORE INTO hashes ( hash_id, hash ) VALUES ( ?, ? );', ( definition_id, sqlite3.Binary( hash ) ) ) + self._Execute( 'INSERT OR IGNORE INTO hashes ( hash_id, hash ) VALUES ( ?, ? 
);', ( definition_id, sqlite3.Binary( hash ) ) ) HydrusData.Print( '{} {} had no master definition, but I was able to recover from the local cache'.format( definition_column_name, definition_id ) ) @@ -14488,7 +14595,7 @@ class DB( HydrusDB.HydrusDB ): for ( table_name, column_name ) in all_tables_and_columns: - self._c.execute( 'DELETE FROM {} WHERE {} = ?;'.format( table_name, column_name ), ( definition_id, ) ) + self._Execute( 'DELETE FROM {} WHERE {} = ?;'.format( table_name, column_name ), ( definition_id, ) ) # tell user they will want to run clear orphan files, reset service cache info, and may need to recalc some autocomplete counts depending on total missing definitions @@ -14499,12 +14606,12 @@ class DB( HydrusDB.HydrusDB ): - batch_of_definition_ids = self._c.execute( 'SELECT {} FROM {} LIMIT 1024;'.format( definition_column_name, temp_all_useful_definition_ids_table_name ) ) + batch_of_definition_ids = self._Execute( 'SELECT {} FROM {} LIMIT 1024;'.format( definition_column_name, temp_all_useful_definition_ids_table_name ) ) finally: - self._c.execute( 'DROP TABLE {};'.format( temp_all_useful_definition_ids_table_name ) ) + self._Execute( 'DROP TABLE {};'.format( temp_all_useful_definition_ids_table_name ) ) @@ -14748,13 +14855,13 @@ class DB( HydrusDB.HydrusDB ): tag_ids_in_dispute = set() - tag_ids_in_dispute.update( self._STS( self._c.execute( 'SELECT DISTINCT bad_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._c.execute( 'SELECT ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._c.execute( 'SELECT DISTINCT child_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._c.execute( 'SELECT DISTINCT ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self._STS( self._Execute( 
'SELECT DISTINCT bad_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT DISTINCT child_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT DISTINCT ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) if tag_service_id in self._service_ids_to_display_application_status: @@ -14946,8 +15053,8 @@ class DB( HydrusDB.HydrusDB ): ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( tag_service_id ) - self._c.execute( 'DELETE FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) if tag_service_id in self._service_ids_to_display_application_status: @@ -15165,7 +15272,7 @@ class DB( HydrusDB.HydrusDB ): portable_dest = HydrusPaths.ConvertAbsPathToPortablePath( dest ) - self._c.execute( 'UPDATE client_files_locations SET location = ? 
WHERE prefix = ?;', ( portable_dest, prefix ) ) + self._Execute( 'UPDATE client_files_locations SET location = ? WHERE prefix = ?;', ( portable_dest, prefix ) ) if os.path.exists( full_source ): @@ -15184,7 +15291,7 @@ class DB( HydrusDB.HydrusDB ): portable_correct_location = HydrusPaths.ConvertAbsPathToPortablePath( correct_location ) - self._c.execute( 'UPDATE client_files_locations SET location = ? WHERE prefix = ?;', ( portable_correct_location, prefix ) ) + self._Execute( 'UPDATE client_files_locations SET location = ? WHERE prefix = ?;', ( portable_correct_location, prefix ) ) @@ -15194,7 +15301,7 @@ class DB( HydrusDB.HydrusDB ): self._controller.frame_splash_status.SetText( 'checking database' ) - ( version, ) = self._c.execute( 'SELECT version FROM version;' ).fetchone() + ( version, ) = self._Execute( 'SELECT version FROM version;' ).fetchone() HydrusDB.HydrusDB._RepairDB( self ) @@ -15205,7 +15312,7 @@ class DB( HydrusDB.HydrusDB ): # master - existing_master_tables = self._STS( self._c.execute( 'SELECT name FROM external_master.sqlite_master WHERE type = ?;', ( 'table', ) ) ) + existing_master_tables = self._STS( self._Execute( 'SELECT name FROM external_master.sqlite_master WHERE type = ?;', ( 'table', ) ) ) main_master_tables = set() @@ -15246,7 +15353,7 @@ class DB( HydrusDB.HydrusDB ): BlockingSafeShowMessage( message ) - self._c.execute( 'CREATE TABLE external_master.local_hashes ( hash_id INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );' ) + self._Execute( 'CREATE TABLE external_master.local_hashes ( hash_id INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );' ) self._CreateIndex( 'external_master.local_hashes', [ 'md5' ] ) @@ -15255,7 +15362,7 @@ class DB( HydrusDB.HydrusDB ): # mappings - existing_mapping_tables = self._STS( self._c.execute( 'SELECT name FROM external_mappings.sqlite_master WHERE type = ?;', ( 'table', ) ) ) + existing_mapping_tables = self._STS( self._Execute( 'SELECT name FROM 
external_mappings.sqlite_master WHERE type = ?;', ( 'table', ) ) ) main_mappings_tables = set() @@ -15286,7 +15393,7 @@ class DB( HydrusDB.HydrusDB ): # caches - existing_cache_tables = self._STS( self._c.execute( 'SELECT name FROM external_caches.sqlite_master WHERE type = ?;', ( 'table', ) ) ) + existing_cache_tables = self._STS( self._Execute( 'SELECT name FROM external_caches.sqlite_master WHERE type = ?;', ( 'table', ) ) ) main_cache_tables = set() @@ -15649,7 +15756,7 @@ class DB( HydrusDB.HydrusDB ): tags_table_name = self._CacheTagsGetTagsTableName( file_service_id, tag_service_id ) - result = self._c.execute( 'SELECT 1 FROM {} WHERE tag_id = ?;'.format( tags_table_name ), ( tag_id, ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM {} WHERE tag_id = ?;'.format( tags_table_name ), ( tag_id, ) ).fetchone() if result is not None: @@ -15737,22 +15844,22 @@ class DB( HydrusDB.HydrusDB ): - with HydrusDB.TemporaryIntegerTable( self._c, group_of_hash_ids, 'hash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( group_of_hash_ids, 'hash_id' ) as temp_table_name: # temp hashes to mappings insert_template = 'INSERT OR IGNORE INTO {} ( tag_id, hash_id ) SELECT tag_id, hash_id FROM {} CROSS JOIN {} USING ( hash_id );' - self._c.execute( insert_template.format( current_mappings_table_name, temp_table_name, cache_current_mappings_table_name ) ) + self._Execute( insert_template.format( current_mappings_table_name, temp_table_name, cache_current_mappings_table_name ) ) - num_rows_recovered += HydrusDB.GetRowCount( self._c ) + num_rows_recovered += self._GetRowCount() - self._c.execute( insert_template.format( deleted_mappings_table_name, temp_table_name, cache_deleted_mappings_table_name ) ) + self._Execute( insert_template.format( deleted_mappings_table_name, temp_table_name, cache_deleted_mappings_table_name ) ) - num_rows_recovered += HydrusDB.GetRowCount( self._c ) + num_rows_recovered += self._GetRowCount() - self._c.execute( 
insert_template.format( pending_mappings_table_name, temp_table_name, cache_pending_mappings_table_name ) ) + self._Execute( insert_template.format( pending_mappings_table_name, temp_table_name, cache_pending_mappings_table_name ) ) - num_rows_recovered += HydrusDB.GetRowCount( self._c ) + num_rows_recovered += self._GetRowCount() @@ -15900,6 +16007,8 @@ class DB( HydrusDB.HydrusDB ): try: + service_info_types_to_delete = [] + job_key.SetVariable( 'popup_text_1', '{}: calculating'.format( prefix ) ) self._controller.pub( 'modal_message', job_key ) @@ -15908,11 +16017,13 @@ class DB( HydrusDB.HydrusDB ): if HC.CONTENT_TYPE_FILES in content_types: - self._c.execute( 'DELETE FROM remote_thumbnails WHERE service_id = ?;', ( service_id, ) ) + service_info_types_to_delete.extend( { HC.SERVICE_INFO_NUM_FILES, HC.SERVICE_INFO_NUM_VIEWABLE_FILES, HC.SERVICE_INFO_TOTAL_SIZE, HC.SERVICE_INFO_NUM_DELETED_FILES } ) + + self._Execute( 'DELETE FROM remote_thumbnails WHERE service_id = ?;', ( service_id, ) ) if service_type in HC.SPECIFIC_FILE_SERVICES: - self.modules_files_storage.ClearFilesTables( service_id ) + self.modules_files_storage.ClearFilesTables( service_id, keep_pending = True ) if service_type in HC.AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES: @@ -15921,7 +16032,7 @@ class DB( HydrusDB.HydrusDB ): for tag_service_id in tag_service_ids: - self._CacheSpecificMappingsClear( service_id, tag_service_id ) + self._CacheSpecificMappingsClear( service_id, tag_service_id, keep_pending = True ) # not clear since siblings and parents can contribute self._CacheTagsDrop( service_id, tag_service_id ) @@ -15933,11 +16044,13 @@ class DB( HydrusDB.HydrusDB ): if HC.CONTENT_TYPE_MAPPINGS in content_types: + service_info_types_to_delete.extend( { HC.SERVICE_INFO_NUM_FILES, HC.SERVICE_INFO_NUM_TAGS, HC.SERVICE_INFO_NUM_MAPPINGS, HC.SERVICE_INFO_NUM_DELETED_MAPPINGS } ) + if service_type in HC.REAL_TAG_SERVICES: self.modules_mappings_storage.ClearMappingsTables( service_id ) - 
self._CacheCombinedFilesMappingsClear( service_id ) + self._CacheCombinedFilesMappingsClear( service_id, keep_pending = True ) self._CacheTagsDrop( self.modules_services.combined_file_service_id, service_id ) self._CacheTagsGenerate( self.modules_services.combined_file_service_id, service_id ) @@ -15947,7 +16060,7 @@ class DB( HydrusDB.HydrusDB ): for file_service_id in file_service_ids: - self._CacheSpecificMappingsClear( file_service_id, service_id ) + self._CacheSpecificMappingsClear( file_service_id, service_id, keep_pending = True ) # not clear since siblings and parents can contribute self._CacheTagsDrop( file_service_id, service_id ) @@ -15959,23 +16072,23 @@ class DB( HydrusDB.HydrusDB ): if HC.CONTENT_TYPE_TAG_PARENTS in content_types: - self._c.execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_parents WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_parent_petitions WHERE service_id = ? AND status = ?;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ) ( cache_ideal_tag_parents_lookup_table_name, cache_actual_tag_parents_lookup_table_name ) = GenerateTagParentsLookupCacheTableNames( service_id ) # do not delete from actual! - self._c.execute( 'DELETE FROM {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_ideal_tag_parents_lookup_table_name ) ) if HC.CONTENT_TYPE_TAG_SIBLINGS in content_types: - self._c.execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( service_id, ) ) - self._c.execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_siblings WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM tag_sibling_petitions WHERE service_id = ? 
AND status = ?;', ( service_id, HC.CONTENT_STATUS_PETITIONED ) ) ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( service_id ) - self._c.execute( 'DELETE FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name ) ) # @@ -15997,6 +16110,8 @@ class DB( HydrusDB.HydrusDB ): + self._ExecuteMany( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( ( service_id, info_type ) for info_type in service_info_types_to_delete ) ) + self.modules_repositories.ReprocessRepository( service_key, content_types ) self.pub_after_job( 'notify_new_services_data' ) @@ -16029,7 +16144,7 @@ class DB( HydrusDB.HydrusDB ): try: - self._c.execute( 'UPDATE options SET options = ?;', ( options, ) ) + self._Execute( 'UPDATE options SET options = ?;', ( options, ) ) except: @@ -16050,22 +16165,22 @@ class DB( HydrusDB.HydrusDB ): raise Exception( 'No locations passed in ideal locations list!' ) - self._c.execute( 'DELETE FROM ideal_client_files_locations;' ) + self._Execute( 'DELETE FROM ideal_client_files_locations;' ) for ( abs_location, weight ) in locations_to_ideal_weights.items(): portable_location = HydrusPaths.ConvertAbsPathToPortablePath( abs_location ) - self._c.execute( 'INSERT INTO ideal_client_files_locations ( location, weight ) VALUES ( ?, ? );', ( portable_location, weight ) ) + self._Execute( 'INSERT INTO ideal_client_files_locations ( location, weight ) VALUES ( ?, ? 
);', ( portable_location, weight ) ) - self._c.execute( 'DELETE FROM ideal_thumbnail_override_location;' ) + self._Execute( 'DELETE FROM ideal_thumbnail_override_location;' ) if ideal_thumbnail_override_location is not None: portable_ideal_thumbnail_override_location = HydrusPaths.ConvertAbsPathToPortablePath( ideal_thumbnail_override_location ) - self._c.execute( 'INSERT INTO ideal_thumbnail_override_location ( location ) VALUES ( ? );', ( portable_ideal_thumbnail_override_location, ) ) + self._Execute( 'INSERT INTO ideal_thumbnail_override_location ( location ) VALUES ( ? );', ( portable_ideal_thumbnail_override_location, ) ) @@ -16085,19 +16200,19 @@ class DB( HydrusDB.HydrusDB ): def _SetServiceFilename( self, service_id, hash_id, filename ): - self._c.execute( 'REPLACE INTO service_filenames ( service_id, hash_id, filename ) VALUES ( ?, ?, ? );', ( service_id, hash_id, filename ) ) + self._Execute( 'REPLACE INTO service_filenames ( service_id, hash_id, filename ) VALUES ( ?, ?, ? );', ( service_id, hash_id, filename ) ) def _SetServiceDirectory( self, service_id, hash_ids, dirname, note ): directory_id = self.modules_texts.GetTextId( dirname ) - self._c.execute( 'DELETE FROM service_directories WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) ) - self._c.execute( 'DELETE FROM service_directory_file_map WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) ) + self._Execute( 'DELETE FROM service_directories WHERE service_id = ? AND directory_id = ?;', ( service_id, directory_id ) ) + self._Execute( 'DELETE FROM service_directory_file_map WHERE service_id = ? 
AND directory_id = ?;', ( service_id, directory_id ) ) num_files = len( hash_ids ) - result = self._c.execute( 'SELECT SUM( size ) FROM files_info WHERE hash_id IN ' + HydrusData.SplayListForDB( hash_ids ) + ';' ).fetchone() + result = self._Execute( 'SELECT SUM( size ) FROM files_info WHERE hash_id IN ' + HydrusData.SplayListForDB( hash_ids ) + ';' ).fetchone() if result is None: @@ -16108,8 +16223,8 @@ class DB( HydrusDB.HydrusDB ): ( total_size, ) = result - self._c.execute( 'INSERT INTO service_directories ( service_id, directory_id, num_files, total_size, note ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, directory_id, num_files, total_size, note ) ) - self._c.executemany( 'INSERT INTO service_directory_file_map ( service_id, directory_id, hash_id ) VALUES ( ?, ?, ? );', ( ( service_id, directory_id, hash_id ) for hash_id in hash_ids ) ) + self._Execute( 'INSERT INTO service_directories ( service_id, directory_id, num_files, total_size, note ) VALUES ( ?, ?, ?, ?, ? );', ( service_id, directory_id, num_files, total_size, note ) ) + self._ExecuteMany( 'INSERT INTO service_directory_file_map ( service_id, directory_id, hash_id ) VALUES ( ?, ?, ? 
);', ( ( service_id, directory_id, hash_id ) for hash_id in hash_ids ) ) def _TryToSortHashIds( self, file_service_id, hash_ids, sort_by: ClientMedia.MediaSort ): @@ -16373,7 +16488,7 @@ class DB( HydrusDB.HydrusDB ): if version == 411: - result = self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'current_files_hash_id_index', ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'current_files_hash_id_index', ) ).fetchone() if result is None: @@ -16398,7 +16513,7 @@ class DB( HydrusDB.HydrusDB ): self._controller.frame_splash_status.SetSubtext( 'dropping old tag index' ) - self._c.execute( 'DROP INDEX IF EXISTS tags_subtag_id_namespace_id_index;' ) + self._Execute( 'DROP INDEX IF EXISTS tags_subtag_id_namespace_id_index;' ) self._controller.frame_splash_status.SetSubtext( 'creating first new tag index' ) @@ -16414,18 +16529,18 @@ class DB( HydrusDB.HydrusDB ): BlockingSafeShowMessage( message ) - self._c.execute( 'ALTER TABLE tags RENAME TO tags_old;' ) + self._Execute( 'ALTER TABLE tags RENAME TO tags_old;' ) self._controller.frame_splash_status.SetSubtext( 'running deduplication' ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );' ) self._CreateIndex( 'external_master.tags', [ 'namespace_id', 'subtag_id' ], unique = True ) - self._c.execute( 'INSERT OR IGNORE INTO tags SELECT * FROM tags_old;' ) + self._Execute( 'INSERT OR IGNORE INTO tags SELECT * FROM tags_old;' ) self._controller.frame_splash_status.SetSubtext( 'cleaning up deduplication' ) - self._c.execute( 'DROP TABLE tags_old;' ) + self._Execute( 'DROP TABLE tags_old;' ) message = 'Ok, looks like the deduplication worked! There is a small chance you will get a tag definition error notification in the coming days or months. 
Do not worry too much--hydrus will generally fix itself, but let hydev know if it causes a bigger problem.' @@ -16490,7 +16605,7 @@ class DB( HydrusDB.HydrusDB ): old_tag_siblings_lookup_base_name = 'tag_siblings_lookup_cache_{}'.format( tag_service_id ) - result = self._c.execute( 'SELECT 1 FROM external_caches.sqlite_master WHERE name = ?;', ( old_tag_siblings_lookup_base_name, ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM external_caches.sqlite_master WHERE name = ?;', ( old_tag_siblings_lookup_base_name, ) ).fetchone() if result is not None: @@ -16498,11 +16613,11 @@ class DB( HydrusDB.HydrusDB ): ( cache_ideal_tag_siblings_lookup_table_name, cache_actual_tag_siblings_lookup_table_name ) = GenerateTagSiblingsLookupCacheTableNames( tag_service_id ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_actual_tag_siblings_lookup_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( bad_tag_id INTEGER PRIMARY KEY, ideal_tag_id INTEGER );'.format( cache_ideal_tag_siblings_lookup_table_name ) ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name, old_tag_siblings_lookup_name ) ) - self._c.execute( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name, old_tag_siblings_lookup_name ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name, 
old_tag_siblings_lookup_name ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( bad_tag_id, ideal_tag_id ) SELECT bad_tag_id, ideal_tag_id FROM {};'.format( cache_ideal_tag_siblings_lookup_table_name, old_tag_siblings_lookup_name ) ) self._CreateIndex( cache_actual_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] ) self._CreateIndex( cache_ideal_tag_siblings_lookup_table_name, [ 'ideal_tag_id' ] ) @@ -16510,13 +16625,13 @@ class DB( HydrusDB.HydrusDB ): self.modules_db_maintenance.AnalyzeTable( cache_actual_tag_siblings_lookup_table_name ) self.modules_db_maintenance.AnalyzeTable( cache_ideal_tag_siblings_lookup_table_name ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( old_tag_siblings_lookup_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( old_tag_siblings_lookup_name ) ) # - result = self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'tag_parent_application', ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'tag_parent_application', ) ).fetchone() if result is None: @@ -16524,11 +16639,11 @@ class DB( HydrusDB.HydrusDB ): try: - self._c.execute( 'CREATE TABLE IF NOT EXISTS tag_parent_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS tag_parent_application ( master_service_id INTEGER, service_index INTEGER, application_service_id INTEGER, PRIMARY KEY ( master_service_id, service_index ) );' ) inserts = [ ( tag_service_id, 0, tag_service_id ) for tag_service_id in tag_service_ids ] - self._c.executemany( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? );', inserts ) + self._ExecuteMany( 'INSERT OR IGNORE INTO tag_parent_application ( master_service_id, service_index, application_service_id ) VALUES ( ?, ?, ? 
);', inserts ) except Exception as e: @@ -16749,8 +16864,8 @@ class DB( HydrusDB.HydrusDB ): # - result_master = self._c.execute( 'SELECT 1 FROM external_master.sqlite_master WHERE name = ?;', ( 'subtags_fts4', ) ).fetchone() - result_caches = self._c.execute( 'SELECT 1 FROM external_caches.sqlite_master WHERE name = ?;', ( 'subtags_fts4', ) ).fetchone() + result_master = self._Execute( 'SELECT 1 FROM external_master.sqlite_master WHERE name = ?;', ( 'subtags_fts4', ) ).fetchone() + result_caches = self._Execute( 'SELECT 1 FROM external_caches.sqlite_master WHERE name = ?;', ( 'subtags_fts4', ) ).fetchone() if result_master is not None or result_caches is not None: @@ -16758,15 +16873,15 @@ class DB( HydrusDB.HydrusDB ): self._controller.frame_splash_status.SetText( 'dropping old cache - subtags fts4' ) - self._c.execute( 'DROP TABLE IF EXISTS subtags_fts4;' ) + self._Execute( 'DROP TABLE IF EXISTS subtags_fts4;' ) self._controller.frame_splash_status.SetText( 'dropping old cache - subtags searchable map' ) - self._c.execute( 'DROP TABLE IF EXISTS subtags_searchable_map;' ) + self._Execute( 'DROP TABLE IF EXISTS subtags_searchable_map;' ) self._controller.frame_splash_status.SetText( 'dropping old cache - integer subtags' ) - self._c.execute( 'DROP TABLE IF EXISTS integer_subtags;' ) + self._Execute( 'DROP TABLE IF EXISTS integer_subtags;' ) self.modules_services.combined_file_service_id = self.modules_services.GetServiceId( CC.COMBINED_FILE_SERVICE_KEY ) @@ -17098,7 +17213,7 @@ class DB( HydrusDB.HydrusDB ): subtags_searchable_map_table_name = self._CacheTagsGetSubtagsSearchableMapTableName( file_service_id, tag_service_id ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, searchable_subtag_id INTEGER );'.format( subtags_searchable_map_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( subtag_id INTEGER PRIMARY KEY, searchable_subtag_id INTEGER );'.format( subtags_searchable_map_table_name ) ) self._CreateIndex( 
subtags_searchable_map_table_name, [ 'searchable_subtag_id' ] ) @@ -17131,10 +17246,10 @@ class DB( HydrusDB.HydrusDB ): tag_ids_in_dispute = set() - tag_ids_in_dispute.update( self._STS( self._c.execute( 'SELECT DISTINCT bad_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._c.execute( 'SELECT ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._c.execute( 'SELECT DISTINCT child_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) - tag_ids_in_dispute.update( self._STS( self._c.execute( 'SELECT DISTINCT ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT DISTINCT bad_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT ideal_tag_id FROM {};'.format( cache_actual_tag_siblings_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT DISTINCT child_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) + tag_ids_in_dispute.update( self._STS( self._Execute( 'SELECT DISTINCT ancestor_tag_id FROM {};'.format( cache_actual_tag_parents_lookup_table_name ) ) ) ) if len( tag_ids_in_dispute ) > 0: @@ -17340,7 +17455,7 @@ class DB( HydrusDB.HydrusDB ): if version == 436: - result = self._c.execute( 'SELECT sql FROM sqlite_master WHERE name = ?;', ( 'deleted_files', ) ).fetchone() + result = self._Execute( 'SELECT sql FROM sqlite_master WHERE name = ?;', ( 'deleted_files', ) ).fetchone() if result is None: @@ -17351,20 +17466,20 @@ class DB( HydrusDB.HydrusDB ): if 'timestamp' not in s: - self._c.execute( 'ALTER TABLE deleted_files ADD COLUMN timestamp INTEGER;' ) - self._c.execute( 'ALTER TABLE deleted_files ADD COLUMN original_timestamp INTEGER;' ) + self._Execute( 'ALTER 
TABLE deleted_files ADD COLUMN timestamp INTEGER;' ) + self._Execute( 'ALTER TABLE deleted_files ADD COLUMN original_timestamp INTEGER;' ) - self._c.execute( 'UPDATE deleted_files SET timestamp = ?, original_timestamp = ?;', ( None, None ) ) + self._Execute( 'UPDATE deleted_files SET timestamp = ?, original_timestamp = ?;', ( None, None ) ) my_files_service_id = self.modules_services.GetServiceId( CC.LOCAL_FILE_SERVICE_KEY ) - self._c.execute( 'INSERT OR IGNORE INTO deleted_files ( service_id, hash_id, timestamp, original_timestamp ) SELECT ?, hash_id, timestamp, original_timestamp FROM deleted_files WHERE service_id = ?;', ( my_files_service_id, self.modules_services.combined_local_file_service_id ) ) - self._c.execute( 'INSERT OR IGNORE INTO deleted_files ( service_id, hash_id, timestamp, original_timestamp ) SELECT ?, hash_id, ?, timestamp FROM current_files WHERE service_id = ?;', ( my_files_service_id, None, self.modules_services.trash_service_id ) ) + self._Execute( 'INSERT OR IGNORE INTO deleted_files ( service_id, hash_id, timestamp, original_timestamp ) SELECT ?, hash_id, timestamp, original_timestamp FROM deleted_files WHERE service_id = ?;', ( my_files_service_id, self.modules_services.combined_local_file_service_id ) ) + self._Execute( 'INSERT OR IGNORE INTO deleted_files ( service_id, hash_id, timestamp, original_timestamp ) SELECT ?, hash_id, ?, timestamp FROM current_files WHERE service_id = ?;', ( my_files_service_id, None, self.modules_services.trash_service_id ) ) self._CreateIndex( 'deleted_files', [ 'timestamp' ] ) self._CreateIndex( 'deleted_files', [ 'original_timestamp' ] ) - self._c.execute( 'DELETE FROM service_info WHERE info_type = ?;', ( HC.SERVICE_INFO_NUM_DELETED_FILES, ) ) + self._Execute( 'DELETE FROM service_info WHERE info_type = ?;', ( HC.SERVICE_INFO_NUM_DELETED_FILES, ) ) self.modules_db_maintenance.AnalyzeTable( 'deleted_files' ) @@ -17450,7 +17565,7 @@ class DB( HydrusDB.HydrusDB ): if version == 441: - result = 
self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'json_dumps_hashed', ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'json_dumps_hashed', ) ).fetchone() if result is None: @@ -17458,7 +17573,7 @@ class DB( HydrusDB.HydrusDB ): legacy_dump_type = HydrusSerialisable.SERIALISABLE_TYPE_GUI_SESSION_LEGACY - result = self._c.execute( 'SELECT SUM( LENGTH( dump ) ) FROM json_dumps_named WHERE dump_type = ?;', ( legacy_dump_type, ) ).fetchone() + result = self._Execute( 'SELECT SUM( LENGTH( dump ) ) FROM json_dumps_named WHERE dump_type = ?;', ( legacy_dump_type, ) ).fetchone() if result is None or result[0] is None: @@ -17484,9 +17599,9 @@ class DB( HydrusDB.HydrusDB ): one_worked_ok = False - self._c.execute( 'CREATE TABLE json_dumps_hashed ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );' ) + self._Execute( 'CREATE TABLE json_dumps_hashed ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );' ) - names_and_timestamps = self._c.execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( legacy_dump_type, ) ).fetchall() + names_and_timestamps = self._Execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( legacy_dump_type, ) ).fetchall() from hydrus.client.gui.pages import ClientGUISessionLegacy @@ -17496,7 +17611,7 @@ class DB( HydrusDB.HydrusDB ): self._controller.frame_splash_status.SetSubtext( 'converting "{}" "{}"\u2026'.format( name, HydrusData.ConvertTimestampToPrettyTime( timestamp ) ) ) - ( dump_version, dump ) = self._c.execute( 'SELECT version, dump FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( legacy_dump_type, name, timestamp ) ).fetchone() + ( dump_version, dump ) = self._Execute( 'SELECT version, dump FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? 
AND timestamp = ?;', ( legacy_dump_type, name, timestamp ) ).fetchone() try: @@ -17549,7 +17664,7 @@ class DB( HydrusDB.HydrusDB ): self.modules_serialisable.SetJSONDump( session, force_timestamp = timestamp ) - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( legacy_dump_type, name, timestamp ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( legacy_dump_type, name, timestamp ) ) one_worked_ok = True @@ -17559,7 +17674,7 @@ class DB( HydrusDB.HydrusDB ): raise Exception( 'When trying to update your sessions to the new format, none of them converted correctly! Rather than send you into an empty and potentially non-functional client, the update is now being abandoned. Please roll back to v441 and let hydev know!' ) - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ?;', ( legacy_dump_type, ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ?;', ( legacy_dump_type, ) ) self._controller.frame_splash_status.SetSubtext( 'session converting finished' ) @@ -17616,17 +17731,17 @@ class DB( HydrusDB.HydrusDB ): if version == 446: - result = self._c.execute( 'SELECT 1 FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( 32, 'gelbooru md5' ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( 32, 'gelbooru md5' ) ).fetchone() if result is not None: try: - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( 32, 'gelbooru md5' ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? 
AND dump_name = ?;', ( 32, 'gelbooru md5' ) ) script_info = ( 32, 'gelbooru md5', 2, HydrusData.GetNow(), '''["http://gelbooru.com/index.php", 0, 1, [55, 1, [[[4, "hex"]], "some hash bytes"]], "md5", {"s": "list", "page": "post"}, [[30, 6, ["we got sent back to main gallery page -- title test", 8, [27, 7, [[26, 1, [[62, 2, [0, "head", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "title", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], [true, [51, 1, [2, "Image List", null, null, "Image List"]]]]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-general"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], ""]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-copyright"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "series"]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-artist"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "creator"]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-character"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "character"]], [30, 6, ["we got sent back to main gallery page -- page links exist", 8, [27, 7, [[26, 1, [[62, 2, [0, "div", {"id": "paginator"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, null, null, false, [51, 1, [3, "", null, null, 
"example string"]]]]]], 2, "class", [84, 1, [26, 1, []]]]], [true, [51, 1, [3, "", null, null, "pagination"]]]]]]]''' ) - self._c.execute( 'REPLACE INTO json_dumps_named VALUES ( ?, ?, ?, ?, ? );', script_info ) + self._Execute( 'REPLACE INTO json_dumps_named VALUES ( ?, ?, ?, ?, ? );', script_info ) except Exception as e: @@ -17640,7 +17755,7 @@ class DB( HydrusDB.HydrusDB ): # - result = self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'current_files', ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'current_files', ) ).fetchone() if result is not None: @@ -17656,10 +17771,10 @@ class DB( HydrusDB.HydrusDB ): ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = ClientDBFilesStorage.GenerateFilesTableNames( service_id ) - self._c.execute( 'INSERT INTO {} ( hash_id, timestamp ) SELECT hash_id, timestamp FROM current_files WHERE service_id = ?;'.format( current_files_table_name ), ( service_id, ) ) - self._c.execute( 'INSERT INTO {} ( hash_id, timestamp, original_timestamp ) SELECT hash_id, timestamp, original_timestamp FROM deleted_files WHERE service_id = ?;'.format( deleted_files_table_name ), ( service_id, ) ) - self._c.execute( 'INSERT INTO {} ( hash_id ) SELECT hash_id FROM file_transfers WHERE service_id = ?;'.format( pending_files_table_name ), ( service_id, ) ) - self._c.execute( 'INSERT INTO {} ( hash_id, reason_id ) SELECT hash_id, reason_id FROM file_petitions WHERE service_id = ?;'.format( petitioned_files_table_name ), ( service_id, ) ) + self._Execute( 'INSERT INTO {} ( hash_id, timestamp ) SELECT hash_id, timestamp FROM current_files WHERE service_id = ?;'.format( current_files_table_name ), ( service_id, ) ) + self._Execute( 'INSERT INTO {} ( hash_id, timestamp, original_timestamp ) SELECT hash_id, timestamp, original_timestamp FROM deleted_files WHERE service_id = ?;'.format( deleted_files_table_name ), ( service_id, ) ) + 
self._Execute( 'INSERT INTO {} ( hash_id ) SELECT hash_id FROM file_transfers WHERE service_id = ?;'.format( pending_files_table_name ), ( service_id, ) ) + self._Execute( 'INSERT INTO {} ( hash_id, reason_id ) SELECT hash_id, reason_id FROM file_petitions WHERE service_id = ?;'.format( petitioned_files_table_name ), ( service_id, ) ) self.modules_db_maintenance.AnalyzeTable( current_files_table_name ) self.modules_db_maintenance.AnalyzeTable( deleted_files_table_name ) @@ -17667,10 +17782,10 @@ class DB( HydrusDB.HydrusDB ): self.modules_db_maintenance.AnalyzeTable( petitioned_files_table_name ) - self._c.execute( 'DROP TABLE current_files;' ) - self._c.execute( 'DROP TABLE deleted_files;' ) - self._c.execute( 'DROP TABLE file_transfers;' ) - self._c.execute( 'DROP TABLE file_petitions;' ) + self._Execute( 'DROP TABLE current_files;' ) + self._Execute( 'DROP TABLE deleted_files;' ) + self._Execute( 'DROP TABLE file_transfers;' ) + self._Execute( 'DROP TABLE file_petitions;' ) except Exception as e: @@ -17693,7 +17808,7 @@ class DB( HydrusDB.HydrusDB ): table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.combined_local_file_service_id, 'files_info' ) - hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {} WHERE mime = ?;'.format( table_join ), ( HC.APPLICATION_PSD, ) ) ) + hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {} WHERE mime = ?;'.format( table_join ), ( HC.APPLICATION_PSD, ) ) ) from hydrus.client import ClientFiles @@ -17711,38 +17826,40 @@ class DB( HydrusDB.HydrusDB ): if version == 448: + self._controller.frame_splash_status.SetSubtext( 'updating repository update storage' ) + for service_id in self.modules_services.GetServiceIds( HC.REPOSITORIES ): service_type = self.modules_services.GetService( service_id ).GetServiceType() ( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = 
ClientDBRepositories.GenerateRepositoryUpdatesTableNames( service_id ) - result = self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( repository_unregistered_updates_table_name, ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( repository_unregistered_updates_table_name, ) ).fetchone() if result is not None: continue - all_data = self._c.execute( 'SELECT update_index, hash_id, processed FROM {};'.format( repository_updates_table_name ) ).fetchall() + all_data = self._Execute( 'SELECT update_index, hash_id, processed FROM {};'.format( repository_updates_table_name ) ).fetchall() - self._c.execute( 'DROP TABLE {};'.format( repository_updates_table_name ) ) + self._Execute( 'DROP TABLE {};'.format( repository_updates_table_name ) ) # - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( update_index INTEGER, hash_id INTEGER, PRIMARY KEY ( update_index, hash_id ) );'.format( repository_updates_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( update_index INTEGER, hash_id INTEGER, PRIMARY KEY ( update_index, hash_id ) );'.format( repository_updates_table_name ) ) self._CreateIndex( repository_updates_table_name, [ 'hash_id' ] ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( repository_unregistered_updates_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( repository_unregistered_updates_table_name ) ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, content_type INTEGER, processed INTEGER_BOOLEAN, PRIMARY KEY ( hash_id, content_type ) );'.format( repository_updates_processed_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, content_type INTEGER, processed INTEGER_BOOLEAN, PRIMARY KEY ( hash_id, content_type ) );'.format( repository_updates_processed_table_name ) ) self._CreateIndex( repository_updates_processed_table_name, [ 'content_type' ] ) # for ( 
update_index, hash_id, processed ) in all_data: - self._c.execute( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), ( update_index, hash_id ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), ( update_index, hash_id ) ) try: @@ -17750,7 +17867,7 @@ class DB( HydrusDB.HydrusDB ): except HydrusExceptions.DataMissing: - self._c.execute( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( hash_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( hash_id, ) ) continue @@ -17771,12 +17888,14 @@ class DB( HydrusDB.HydrusDB ): - self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id, content_type, processed ) VALUES ( ?, ?, ? );'.format( repository_updates_processed_table_name ), ( ( hash_id, content_type, processed ) for content_type in content_types ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id, content_type, processed ) VALUES ( ?, ?, ? 
);'.format( repository_updates_processed_table_name ), ( ( hash_id, content_type, processed ) for content_type in content_types ) ) self.modules_repositories.DoOutstandingUpdateRegistration() + self._controller.frame_splash_status.SetSubtext( 'resetting siblings and parents' ) + for service in self.modules_services.GetServices( ( HC.TAG_REPOSITORY, ) ): service_key = service.GetServiceKey() @@ -17787,7 +17906,7 @@ class DB( HydrusDB.HydrusDB ): self._controller.frame_splash_status.SetTitleText( 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) ) - self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) ) + self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) ) def _UpdateMappings( self, tag_service_id, mappings_ids = None, deleted_mappings_ids = None, pending_mappings_ids = None, pending_rescinded_mappings_ids = None, petitioned_mappings_ids = None, petitioned_rescinded_mappings_ids = None ): @@ -17828,11 +17947,11 @@ class DB( HydrusDB.HydrusDB ): filtered_hashes_generator = self._CacheSpecificMappingsGetFilteredHashesGenerator( file_service_ids, tag_service_id, hash_ids_being_altered ) - self._c.execute( 'CREATE TABLE mem.temp_hash_ids ( hash_id INTEGER );' ) + self._Execute( 'CREATE TABLE mem.temp_hash_ids ( hash_id INTEGER );' ) - self._c.executemany( 'INSERT INTO temp_hash_ids ( hash_id ) VALUES ( ? );', ( ( hash_id, ) for hash_id in hash_ids_being_altered ) ) + self._ExecuteMany( 'INSERT INTO temp_hash_ids ( hash_id ) VALUES ( ? 
);', ( ( hash_id, ) for hash_id in hash_ids_being_altered ) ) - pre_existing_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM temp_hash_ids WHERE EXISTS ( SELECT 1 FROM {} WHERE hash_id = temp_hash_ids.hash_id );'.format( current_mappings_table_name ) ) ) + pre_existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM temp_hash_ids WHERE EXISTS ( SELECT 1 FROM {} WHERE hash_id = temp_hash_ids.hash_id );'.format( current_mappings_table_name ) ) ) num_files_added = len( hash_ids_being_added.difference( pre_existing_hash_ids ) ) @@ -17852,17 +17971,17 @@ class DB( HydrusDB.HydrusDB ): self._CacheCombinedFilesDisplayMappingsAddMappingsForChained( tag_service_id, tag_id, hash_ids ) - self._c.executemany( 'DELETE FROM ' + deleted_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + deleted_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) - num_deleted_deleted = HydrusDB.GetRowCount( self._c ) + num_deleted_deleted = self._GetRowCount() - self._c.executemany( 'DELETE FROM ' + pending_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + pending_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) - num_pending_deleted = HydrusDB.GetRowCount( self._c ) + num_pending_deleted = self._GetRowCount() - self._c.executemany( 'INSERT OR IGNORE INTO ' + current_mappings_table_name + ' VALUES ( ?, ? );', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + current_mappings_table_name + ' VALUES ( ?, ? 
);', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) - num_current_inserted = HydrusDB.GetRowCount( self._c ) + num_current_inserted = self._GetRowCount() change_in_num_deleted_mappings -= num_deleted_deleted change_in_num_pending_mappings -= num_pending_deleted @@ -17888,17 +18007,17 @@ class DB( HydrusDB.HydrusDB ): self._CacheCombinedFilesDisplayMappingsDeleteMappingsForChained( tag_service_id, tag_id, hash_ids ) - self._c.executemany( 'DELETE FROM ' + current_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + current_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) - num_current_deleted = HydrusDB.GetRowCount( self._c ) + num_current_deleted = self._GetRowCount() - self._c.executemany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) - num_petitions_deleted = HydrusDB.GetRowCount( self._c ) + num_petitions_deleted = self._GetRowCount() - self._c.executemany( 'INSERT OR IGNORE INTO ' + deleted_mappings_table_name + ' VALUES ( ?, ? );', ( ( tag_id, hash_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + deleted_mappings_table_name + ' VALUES ( ?, ? 
);', ( ( tag_id, hash_id ) for hash_id in hash_ids ) )
                
-                num_deleted_inserted = HydrusDB.GetRowCount( self._c )
+                num_deleted_inserted = self._GetRowCount()
                
                change_in_num_mappings -= num_current_deleted
                change_in_num_petitioned_mappings -= num_petitions_deleted
                
@@ -17924,9 +18043,9 @@ class DB( HydrusDB.HydrusDB ):
                
                self._CacheCombinedFilesDisplayMappingsPendMappingsForChained( tag_service_id, tag_id, hash_ids )
                
-                self._c.executemany( 'INSERT OR IGNORE INTO ' + pending_mappings_table_name + ' VALUES ( ?, ? );', ( ( tag_id, hash_id ) for hash_id in hash_ids ) )
+                self._ExecuteMany( 'INSERT OR IGNORE INTO ' + pending_mappings_table_name + ' VALUES ( ?, ? );', ( ( tag_id, hash_id ) for hash_id in hash_ids ) )
                
-                num_pending_inserted = HydrusDB.GetRowCount( self._c )
+                num_pending_inserted = self._GetRowCount()
                
                change_in_num_pending_mappings += num_pending_inserted
                
@@ -17950,9 +18069,9 @@ class DB( HydrusDB.HydrusDB ):
                
                self._CacheCombinedFilesDisplayMappingsRescindPendingMappingsForChained( tag_service_id, tag_id, hash_ids )
                
-                self._c.executemany( 'DELETE FROM ' + pending_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) )
+                self._ExecuteMany( 'DELETE FROM ' + pending_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) )
                
-                num_pending_deleted = HydrusDB.GetRowCount( self._c )
+                num_pending_deleted = self._GetRowCount()
                
                change_in_num_pending_mappings -= num_pending_deleted
                
@@ -17969,9 +18088,9 @@ class DB( HydrusDB.HydrusDB ):
            
            #
            
-            post_existing_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM temp_hash_ids WHERE EXISTS ( SELECT 1 FROM {} WHERE hash_id = temp_hash_ids.hash_id );'.format( current_mappings_table_name ) ) )
+            post_existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM temp_hash_ids WHERE EXISTS ( SELECT 1 FROM {} WHERE hash_id = temp_hash_ids.hash_id );'.format( current_mappings_table_name ) ) )
            
-            self._c.execute( 'DROP TABLE temp_hash_ids;' )
+            self._Execute( 'DROP TABLE temp_hash_ids;' )
            
            num_files_removed = len( pre_existing_hash_ids.intersection( hash_ids_being_removed ).difference( post_existing_hash_ids ) )
            
@@ -17979,18 +18098,18 @@ class DB( HydrusDB.HydrusDB ):
            
            for ( tag_id, hash_ids, reason_id ) in petitioned_mappings_ids:
                
-                self._c.executemany( 'INSERT OR IGNORE INTO ' + petitioned_mappings_table_name + ' VALUES ( ?, ?, ? );', [ ( tag_id, hash_id, reason_id ) for hash_id in hash_ids ] )
+                self._ExecuteMany( 'INSERT OR IGNORE INTO ' + petitioned_mappings_table_name + ' VALUES ( ?, ?, ? );', [ ( tag_id, hash_id, reason_id ) for hash_id in hash_ids ] )
                
-                num_petitions_inserted = HydrusDB.GetRowCount( self._c )
+                num_petitions_inserted = self._GetRowCount()
                
                change_in_num_petitioned_mappings += num_petitions_inserted
                
            
            for ( tag_id, hash_ids ) in petitioned_rescinded_mappings_ids:
                
-                self._c.executemany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) )
+                self._ExecuteMany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE tag_id = ? AND hash_id = ?;', ( ( tag_id, hash_id ) for hash_id in hash_ids ) )
                
-                num_petitions_deleted = HydrusDB.GetRowCount( self._c )
+                num_petitions_deleted = self._GetRowCount()
                
                change_in_num_petitioned_mappings -= num_petitions_deleted
                
@@ -18003,7 +18122,7 @@ class DB( HydrusDB.HydrusDB ):
        if change_in_num_petitioned_mappings != 0: service_info_updates.append( ( change_in_num_petitioned_mappings, tag_service_id, HC.SERVICE_INFO_NUM_PETITIONED_MAPPINGS ) )
        if change_in_num_files != 0: service_info_updates.append( ( change_in_num_files, tag_service_id, HC.SERVICE_INFO_NUM_FILES ) )
        
-        if len( service_info_updates ) > 0: self._c.executemany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', service_info_updates )
+        if len( service_info_updates ) > 0: self._ExecuteMany( 'UPDATE service_info SET info = info + ? WHERE service_id = ? AND info_type = ?;', service_info_updates )
        
    
    def _UpdateServerServices( self, admin_service_key, serverside_services, service_keys_to_access_keys, deletee_service_keys ):
        
@@ -18189,7 +18308,7 @@ class DB( HydrusDB.HydrusDB ):
        
        job_key.SetStatusTitle( 'database maintenance - vacuum' )
        
-        self._CloseDBCursor()
+        self._CloseDBConnection()
        
        try:
            
@@ -18231,7 +18350,7 @@ class DB( HydrusDB.HydrusDB ):
                
                HydrusData.ShowText( text )
                
-                self._InitDBCursor()
+                self._InitDBConnection()
                
                return
                
@@ -18241,7 +18360,7 @@ class DB( HydrusDB.HydrusDB ):
            
        finally:
            
-            self._InitDBCursor()
+            self._InitDBConnection()
            
        
        self.modules_db_maintenance.RegisterSuccessfulVacuum( name )
        
diff --git a/hydrus/client/db/ClientDBDefinitionsCache.py b/hydrus/client/db/ClientDBDefinitionsCache.py
index 6d596680..982921bf 100644
--- a/hydrus/client/db/ClientDBDefinitionsCache.py
+++ b/hydrus/client/db/ClientDBDefinitionsCache.py
@@ -47,14 +47,14 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
            
            ( uncached_hash_id, ) = uncached_hash_ids
            
            # this makes 0 or 1 rows, so do fetchall rather than fetchone
-            local_uncached_hash_ids_to_hashes = { hash_id : hash for ( hash_id, hash ) in self._c.execute( 'SELECT hash_id, hash FROM local_hashes_cache WHERE hash_id = ?;', ( uncached_hash_id, ) ) }
+            local_uncached_hash_ids_to_hashes = { hash_id : hash for ( hash_id, hash ) in self._Execute( 'SELECT hash_id, hash FROM local_hashes_cache WHERE hash_id = ?;', ( uncached_hash_id, ) ) }
            
        else:
            
-            with HydrusDB.TemporaryIntegerTable( self._c, uncached_hash_ids, 'hash_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( uncached_hash_ids, 'hash_id' ) as temp_table_name:
                
                # temp hash_ids to actual hashes
-                local_uncached_hash_ids_to_hashes = { hash_id : hash for ( hash_id, hash ) in self._c.execute( 'SELECT hash_id, hash FROM {} CROSS JOIN local_hashes_cache USING ( hash_id );'.format( temp_table_name ) ) }
+                local_uncached_hash_ids_to_hashes = { hash_id : hash for ( hash_id, hash ) in self._Execute( 'SELECT hash_id, hash FROM {} CROSS JOIN local_hashes_cache USING ( hash_id );'.format( temp_table_name ) ) }
                
            
@@ -73,24 +73,24 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
    
    def CreateInitialTables( self ):
        
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_hashes_cache ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_hashes_cache ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
        
    
    def AddHashIdsToCache( self, hash_ids ):
        
        hash_ids_to_hashes = self.modules_hashes.GetHashIdsToHashes( hash_ids = hash_ids )
        
-        self._c.executemany( 'INSERT OR IGNORE INTO local_hashes_cache ( hash_id, hash ) VALUES ( ?, ? );', ( ( hash_id, sqlite3.Binary( hash ) ) for ( hash_id, hash ) in hash_ids_to_hashes.items() ) )
+        self._ExecuteMany( 'INSERT OR IGNORE INTO local_hashes_cache ( hash_id, hash ) VALUES ( ?, ? );', ( ( hash_id, sqlite3.Binary( hash ) ) for ( hash_id, hash ) in hash_ids_to_hashes.items() ) )
        
    
    def ClearCache( self ):
        
-        self._c.execute( 'DELETE FROM local_hashes_cache;' )
+        self._Execute( 'DELETE FROM local_hashes_cache;' )
        
    
    def DropHashIdsFromCache( self, hash_ids ):
        
-        self._c.executemany( 'DELETE FROM local_hashes_cache WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
+        self._ExecuteMany( 'DELETE FROM local_hashes_cache WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
        
    
    def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
        
@@ -118,7 +118,7 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
    
    def GetHashId( self, hash ) -> int:
        
-        result = self._c.execute( 'SELECT hash_id FROM local_hashes_cache WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+        result = self._Execute( 'SELECT hash_id FROM local_hashes_cache WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
        
        if result is None:
            
@@ -144,7 +144,7 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
                
                continue
                
            
-            result = self._c.execute( 'SELECT hash_id FROM local_hashes_cache WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+            result = self._Execute( 'SELECT hash_id FROM local_hashes_cache WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
            
            if result is None:
                
@@ -191,7 +191,7 @@ class ClientDBCacheLocalHashes( HydrusDBModule.HydrusDBModule ):
    
    def HasHashId( self, hash_id: int ):
        
-        result = self._c.execute( 'SELECT 1 FROM local_hashes_cache WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT 1 FROM local_hashes_cache WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
        
        return result is not None
        
@@ -235,14 +235,14 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
            
            ( uncached_tag_id, ) = uncached_tag_ids
            
            # this makes 0 or 1 rows, so do fetchall rather than fetchone
-            local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._c.execute( 'SELECT tag_id, tag FROM local_tags_cache WHERE tag_id = ?;', ( uncached_tag_id, ) ) }
+            local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._Execute( 'SELECT tag_id, tag FROM local_tags_cache WHERE tag_id = ?;', ( uncached_tag_id, ) ) }
            
        else:
            
-            with HydrusDB.TemporaryIntegerTable( self._c, uncached_tag_ids, 'tag_id' ) as temp_table_name:
+            with self._MakeTemporaryIntegerTable( uncached_tag_ids, 'tag_id' ) as temp_table_name:
                
                # temp tag_ids to actual tags
-                local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._c.execute( 'SELECT tag_id, tag FROM {} CROSS JOIN local_tags_cache USING ( tag_id );'.format( temp_table_name ) ) }
+                local_uncached_tag_ids_to_tags = { tag_id : tag for ( tag_id, tag ) in self._Execute( 'SELECT tag_id, tag FROM {} CROSS JOIN local_tags_cache USING ( tag_id );'.format( temp_table_name ) ) }
                
            
@@ -261,24 +261,24 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
    
    def CreateInitialTables( self ):
        
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_tags_cache ( tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.local_tags_cache ( tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
        
    
    def AddTagIdsToCache( self, tag_ids ):
        
        tag_ids_to_tags = self.modules_tags.GetTagIdsToTags( tag_ids = tag_ids )
        
-        self._c.executemany( 'INSERT OR IGNORE INTO local_tags_cache ( tag_id, tag ) VALUES ( ?, ? );', tag_ids_to_tags.items() )
+        self._ExecuteMany( 'INSERT OR IGNORE INTO local_tags_cache ( tag_id, tag ) VALUES ( ?, ? );', tag_ids_to_tags.items() )
        
    
    def ClearCache( self ):
        
-        self._c.execute( 'DELETE FROM local_tags_cache;' )
+        self._Execute( 'DELETE FROM local_tags_cache;' )
        
    
    def DropTagIdsFromCache( self, tag_ids ):
        
-        self._c.executemany( 'DELETE FROM local_tags_cache WHERE tag_id = ?;', ( ( tag_id, ) for tag_id in tag_ids ) )
+        self._ExecuteMany( 'DELETE FROM local_tags_cache WHERE tag_id = ?;', ( ( tag_id, ) for tag_id in tag_ids ) )
        
    
    def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
        
@@ -317,7 +317,7 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
            
            raise HydrusExceptions.TagSizeException( '"{}" tag seems not valid--when cleaned, it ends up with zero size!'.format( tag ) )
            
        
-        result = self._c.execute( 'SELECT tag_id FROM local_tags_cache WHERE tag = ?;', ( tag, ) ).fetchone()
+        result = self._Execute( 'SELECT tag_id FROM local_tags_cache WHERE tag = ?;', ( tag, ) ).fetchone()
        
        if result is None:
            
@@ -349,7 +349,7 @@ class ClientDBCacheLocalTags( HydrusDBModule.HydrusDBModule ):
    
    def UpdateTagInCache( self, tag_id, tag ):
        
-        self._c.execute( 'UPDATE local_tags_cache SET tag = ? WHERE tag_id = ?;', ( tag, tag_id ) )
+        self._Execute( 'UPDATE local_tags_cache SET tag = ? WHERE tag_id = ?;', ( tag, tag_id ) )
        
        if tag_id in self._tag_ids_to_tags_cache:
            
diff --git a/hydrus/client/db/ClientDBFilesMaintenance.py b/hydrus/client/db/ClientDBFilesMaintenance.py
index e3df8a69..6328592d 100644
--- a/hydrus/client/db/ClientDBFilesMaintenance.py
+++ b/hydrus/client/db/ClientDBFilesMaintenance.py
@@ -47,12 +47,12 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
        
        for deletee_job_type in deletee_job_types:
            
-            self._c.executemany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ? AND job_type = ?;', ( ( hash_id, deletee_job_type ) for hash_id in hash_ids ) )
+            self._ExecuteMany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ? AND job_type = ?;', ( ( hash_id, deletee_job_type ) for hash_id in hash_ids ) )
            
        
        #
        
-        self._c.executemany( 'REPLACE INTO file_maintenance_jobs ( hash_id, job_type, time_can_start ) VALUES ( ?, ?, ? );', ( ( hash_id, job_type, time_can_start ) for hash_id in hash_ids ) )
+        self._ExecuteMany( 'REPLACE INTO file_maintenance_jobs ( hash_id, job_type, time_can_start ) VALUES ( ?, ?, ? );', ( ( hash_id, job_type, time_can_start ) for hash_id in hash_ids ) )
        
    
    def AddJobsHashes( self, hashes, job_type, time_can_start = 0 ):
        
@@ -64,17 +64,17 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
    
    def CancelFiles( self, hash_ids ):
        
-        self._c.executemany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
+        self._ExecuteMany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) )
        
    
    def CancelJobs( self, job_type ):
        
-        self._c.execute( 'DELETE FROM file_maintenance_jobs WHERE job_type = ?;', ( job_type, ) )
+        self._Execute( 'DELETE FROM file_maintenance_jobs WHERE job_type = ?;', ( job_type, ) )
        
    
    def CreateInitialTables( self ):
        
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.file_maintenance_jobs ( hash_id INTEGER, job_type INTEGER, time_can_start INTEGER, PRIMARY KEY ( hash_id, job_type ) );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.file_maintenance_jobs ( hash_id INTEGER, job_type INTEGER, time_can_start INTEGER, PRIMARY KEY ( hash_id, job_type ) );' )
        
    
    def ClearJobs( self, cleared_job_tuples ):
        
@@ -108,7 +108,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
                
                self.AddJobs( { hash_id }, ClientFiles.REGENERATE_FILE_DATA_JOB_OTHER_HASHES )
                
            
-            result = self._c.execute( 'SELECT 1 FROM file_modified_timestamps WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
+            result = self._Execute( 'SELECT 1 FROM file_modified_timestamps WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
            
            if result is None:
                
@@ -131,7 +131,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
                
                file_modified_timestamp = additional_data
                
-                self._c.execute( 'REPLACE INTO file_modified_timestamps ( hash_id, file_modified_timestamp ) VALUES ( ?, ? );', ( hash_id, file_modified_timestamp ) )
+                self._Execute( 'REPLACE INTO file_modified_timestamps ( hash_id, file_modified_timestamp ) VALUES ( ?, ? );', ( hash_id, file_modified_timestamp ) )
                
                new_file_info.add( ( hash_id, hash ) )
                
@@ -170,7 +170,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
                
                job_types_to_delete.extend( ClientFiles.regen_file_enum_to_overruled_jobs[ job_type ] )
                
            
-            self._c.executemany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ? AND job_type = ?;', ( ( hash_id, job_type_to_delete ) for job_type_to_delete in job_types_to_delete ) )
+            self._ExecuteMany( 'DELETE FROM file_maintenance_jobs WHERE hash_id = ? AND job_type = ?;', ( ( hash_id, job_type_to_delete ) for job_type_to_delete in job_types_to_delete ) )
            
        
        if len( new_file_info ) > 0:
            
@@ -216,7 +216,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
        
        for job_type in possible_job_types:
            
-            hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM file_maintenance_jobs WHERE job_type = ? AND time_can_start < ? LIMIT ?;', ( job_type, HydrusData.GetNow(), 256 ) ) )
+            hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM file_maintenance_jobs WHERE job_type = ? AND time_can_start < ? LIMIT ?;', ( job_type, HydrusData.GetNow(), 256 ) ) )
            
            if len( hash_ids ) > 0:
                
@@ -231,7 +231,7 @@ class ClientDBFilesMaintenance( HydrusDBModule.HydrusDBModule ):
    
    def GetJobCounts( self ):
        
-        result = self._c.execute( 'SELECT job_type, COUNT( * ) FROM file_maintenance_jobs WHERE time_can_start < ? GROUP BY job_type;', ( HydrusData.GetNow(), ) ).fetchall()
+        result = self._Execute( 'SELECT job_type, COUNT( * ) FROM file_maintenance_jobs WHERE time_can_start < ? GROUP BY job_type;', ( HydrusData.GetNow(), ) ).fetchall()
        
        job_types_to_count = dict( result )
        
diff --git a/hydrus/client/db/ClientDBFilesMetadataBasic.py b/hydrus/client/db/ClientDBFilesMetadataBasic.py
index fe1a3846..fdbb2260 100644
--- a/hydrus/client/db/ClientDBFilesMetadataBasic.py
+++ b/hydrus/client/db/ClientDBFilesMetadataBasic.py
@@ -34,9 +34,9 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
    
    def _InitCaches( self ):
        
-        if self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'file_inbox', ) ).fetchone() is not None:
+        if self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'file_inbox', ) ).fetchone() is not None:
            
-            self.inbox_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM file_inbox;' ) )
+            self.inbox_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM file_inbox;' ) )
            
        
@@ -52,7 +52,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
        
        # hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words
        
-        self._c.executemany( insert_phrase + ' files_info ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ? );', rows )
+        self._ExecuteMany( insert_phrase + ' files_info ( hash_id, size, mime, width, height, duration, num_frames, has_audio, num_words ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ? );', rows )
        
    
    def ArchiveFiles( self, hash_ids: typing.Collection[ int ] ) -> typing.Set[ int ]:
        
@@ -66,7 +66,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
        
        if len( archiveable_hash_ids ) > 0:
            
-            self._c.executemany( 'DELETE FROM file_inbox WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in archiveable_hash_ids ) )
+            self._ExecuteMany( 'DELETE FROM file_inbox WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in archiveable_hash_ids ) )
            
            self.inbox_hash_ids.difference_update( archiveable_hash_ids )
            
@@ -76,8 +76,8 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
    
    def CreateInitialTables( self ):
        
-        self._c.execute( 'CREATE TABLE file_inbox ( hash_id INTEGER PRIMARY KEY );' )
-        self._c.execute( 'CREATE TABLE files_info ( hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, has_audio INTEGER_BOOLEAN, num_words INTEGER );' )
+        self._Execute( 'CREATE TABLE file_inbox ( hash_id INTEGER PRIMARY KEY );' )
+        self._Execute( 'CREATE TABLE files_info ( hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, has_audio INTEGER_BOOLEAN, num_words INTEGER );' )
        
    
    def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
        
@@ -92,7 +92,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
    
    def GetMime( self, hash_id: int ) -> int:
        
-        result = self._c.execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
        
        if result is None:
            
@@ -110,13 +110,13 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
            
            ( hash_id, ) = hash_ids
            
-            result = self._STL( self._c.execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ) )
+            result = self._STL( self._Execute( 'SELECT mime FROM files_info WHERE hash_id = ?;', ( hash_id, ) ) )
            
        else:
            
-            with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+            with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
                
-                result = self._STL( self._c.execute( 'SELECT mime FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ) )
+                result = self._STL( self._Execute( 'SELECT mime FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ) )
                
            
@@ -125,7 +125,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
    
    def GetResolution( self, hash_id: int ):
        
-        result = self._c.execute( 'SELECT width, height FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT width, height FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
        
        if result is None:
            
@@ -151,13 +151,13 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
            
            ( hash_id, ) = hash_ids
            
-            result = self._c.execute( 'SELECT size FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
+            result = self._Execute( 'SELECT size FROM files_info WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
            
        else:
            
-            with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+            with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
                
-                result = self._c.execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ).fetchone()
+                result = self._Execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ).fetchone()
                
            
@@ -182,7 +182,7 @@ class ClientDBFilesMetadataBasic( HydrusDBModule.HydrusDBModule ):
        
        if len( inboxable_hash_ids ) > 0:
            
-            self._c.executemany( 'INSERT OR IGNORE INTO file_inbox VALUES ( ? );', ( ( hash_id, ) for hash_id in inboxable_hash_ids ) )
+            self._ExecuteMany( 'INSERT OR IGNORE INTO file_inbox VALUES ( ? );', ( ( hash_id, ) for hash_id in inboxable_hash_ids ) )
            
            self.inbox_hash_ids.update( inboxable_hash_ids )
            
diff --git a/hydrus/client/db/ClientDBFilesStorage.py b/hydrus/client/db/ClientDBFilesStorage.py
index 3fe2b8a4..920b893d 100644
--- a/hydrus/client/db/ClientDBFilesStorage.py
+++ b/hydrus/client/db/ClientDBFilesStorage.py
@@ -66,11 +66,11 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
-        self._c.executemany( 'INSERT OR IGNORE INTO {} VALUES ( ?, ? );'.format( current_files_table_name ), ( ( hash_id, timestamp ) for ( hash_id, timestamp ) in insert_rows ) )
+        self._ExecuteMany( 'INSERT OR IGNORE INTO {} VALUES ( ?, ? );'.format( current_files_table_name ), ( ( hash_id, timestamp ) for ( hash_id, timestamp ) in insert_rows ) )
        
-        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for ( hash_id, timestamp ) in insert_rows ) )
+        self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for ( hash_id, timestamp ) in insert_rows ) )
        
-        pending_changed = HydrusDB.GetRowCount( self._c ) > 0
+        pending_changed = self._GetRowCount() > 0
        
        return pending_changed
        
@@ -79,21 +79,26 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
-        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
+        self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
        
-        num_deleted = HydrusDB.GetRowCount( self._c )
+        num_deleted = self._GetRowCount()
        
        return num_deleted
        
    
-    def ClearFilesTables( self, service_id: int ):
+    def ClearFilesTables( self, service_id: int, keep_pending = False ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
-        self._c.execute( 'DELETE FROM {};'.format( current_files_table_name ) )
-        self._c.execute( 'DELETE FROM {};'.format( deleted_files_table_name ) )
-        self._c.execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
-        self._c.execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
+        self._Execute( 'DELETE FROM {};'.format( current_files_table_name ) )
+        self._Execute( 'DELETE FROM {};'.format( deleted_files_table_name ) )
+        
+        if not keep_pending:
+            
+            self._Execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
+            
+        
+        self._Execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
        
    
    def ClearLocalDeleteRecord( self, hash_ids = None ):
        
@@ -112,14 +117,14 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
                
                deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
                
-                self._c.execute( 'DELETE FROM {} WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( deleted_files_table_name, trash_current_files_table_name ) )
+                self._Execute( 'DELETE FROM {} WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( deleted_files_table_name, trash_current_files_table_name ) )
                
-                num_cleared = HydrusDB.GetRowCount( self._c )
+                num_cleared = self._GetRowCount()
                
                service_ids_to_nums_cleared[ service_id ] = num_cleared
                
            
-            self._c.execute( 'DELETE FROM local_file_deletion_reasons WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( trash_current_files_table_name ) )
+            self._Execute( 'DELETE FROM local_file_deletion_reasons WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( trash_current_files_table_name ) )
            
        else:
            
@@ -133,14 +138,14 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
                
                deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
                
-                self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
+                self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
                
-                num_cleared = HydrusDB.GetRowCount( self._c )
+                num_cleared = self._GetRowCount()
                
                service_ids_to_nums_cleared[ service_id ] = num_cleared
                
            
-            self._c.executemany( 'DELETE FROM local_file_deletion_reasons WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
+            self._ExecuteMany( 'DELETE FROM local_file_deletion_reasons WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
            
        
@@ -149,25 +154,25 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
    
    def CreateInitialTables( self ):
        
-        self._c.execute( 'CREATE TABLE local_file_deletion_reasons ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );' )
+        self._Execute( 'CREATE TABLE local_file_deletion_reasons ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );' )
        
    
    def DeletePending( self, service_id: int ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
-        self._c.execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
-        self._c.execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
+        self._Execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
+        self._Execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
        
    
    def DropFilesTables( self, service_id: int ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( current_files_table_name ) )
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( deleted_files_table_name ) )
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( pending_files_table_name ) )
-        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_files_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( current_files_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( deleted_files_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( pending_files_table_name ) )
+        self._Execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_files_table_name ) )
        
    
    def FilterAllCurrentHashIds( self, hash_ids, just_these_service_ids = None ):
        
@@ -183,13 +188,13 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        current_hash_ids = set()
        
-        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+        with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            for service_id in service_ids:
                
                current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
                
-                hash_id_iterator = self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
+                hash_id_iterator = self._STI( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
                
                current_hash_ids.update( hash_id_iterator )
                
@@ -211,13 +216,13 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        pending_hash_ids = set()
        
-        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+        with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            for service_id in service_ids:
                
                pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
                
-                hash_id_iterator = self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
+                hash_id_iterator = self._STI( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
                
                pending_hash_ids.update( hash_id_iterator )
                
@@ -233,11 +238,11 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
            
            return set( hash_ids )
            
        
-        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+        with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
            
-            current_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
+            current_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
            
        
        return current_hash_ids
        
@@ -250,11 +255,11 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
            
            return set( hash_ids )
            
        
-        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+        with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
            
-            pending_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
+            pending_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
            
        
        return pending_hash_ids
        
@@ -264,16 +269,16 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER );'.format( current_files_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER );'.format( current_files_table_name ) )
        self._CreateIndex( current_files_table_name, [ 'timestamp' ] )
        
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER, original_timestamp INTEGER );'.format( deleted_files_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER, original_timestamp INTEGER );'.format( deleted_files_table_name ) )
        self._CreateIndex( deleted_files_table_name, [ 'timestamp' ] )
        self._CreateIndex( deleted_files_table_name, [ 'original_timestamp' ] )
        
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( pending_files_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( pending_files_table_name ) )
        
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );'.format( petitioned_files_table_name ) )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );'.format( petitioned_files_table_name ) )
        self._CreateIndex( petitioned_files_table_name, [ 'reason_id' ] )
        
@@ -281,7 +286,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
        
-        result = self._c.execute( 'SELECT hash_id FROM {};'.format( pending_files_table_name ) ).fetchone()
+        result = self._Execute( 'SELECT hash_id FROM {};'.format( pending_files_table_name ) ).fetchone()
        
        if result is None:
            
@@ -299,7 +304,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
        
-        result = self._c.execute( 'SELECT hash_id FROM {};'.format( petitioned_files_table_name ) ).fetchone()
+        result = self._Execute( 'SELECT hash_id FROM {};'.format( petitioned_files_table_name ) ).fetchone()
        
        if result is None:
            
@@ -320,11 +325,11 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        if only_viewable:
            
            # hashes to mimes
-            result = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN files_info USING ( hash_id ) WHERE mime IN {};'.format( current_files_table_name, HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) ) ).fetchone()
+            result = self._Execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN files_info USING ( hash_id ) WHERE mime IN {};'.format( current_files_table_name, HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) ) ).fetchone()
            
        else:
            
-            result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( current_files_table_name ) ).fetchone()
+            result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( current_files_table_name ) ).fetchone()
            
        
        ( count, ) = result
        
@@ -336,7 +341,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
-        result = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN file_inbox USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
+        result = self._Execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN file_inbox USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
        
        ( count, ) = result
        
@@ -347,7 +352,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
-        hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {};'.format( current_files_table_name ) ) )
+        hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {};'.format( current_files_table_name ) ) )
        
        return hash_ids
        
@@ -357,7 +362,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
        # hashes to size
-        result = self._c.execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
+        result = self._Execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
        
        ( count, ) = result
        
@@ -368,9 +373,9 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
-        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+        with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
-            rows = dict( self._c.execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
+            rows = dict( self._Execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
            
        
        return rows
        
@@ -387,7 +392,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
-        result = self._c.execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( hash_id, ) ).fetchone()
        
        if result is None:
            
@@ -405,7 +410,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
-        result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_files_table_name ) ).fetchone()
+        result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_files_table_name ) ).fetchone()
        
        ( count, ) = result
        
@@ -415,7 +420,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
    
    def GetDeletionStatus( self, service_id, hash_id ):
        
        # can have a value here and just be in trash, so we fetch it whatever the end result
-        result = self._c.execute( 'SELECT reason_id FROM local_file_deletion_reasons WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT reason_id FROM local_file_deletion_reasons WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
        
        if result is None:
            
@@ -433,7 +438,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
        
        is_deleted = False
        timestamp = None
        
-        result = self._c.execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( hash_id, ) ).fetchone()
        
        if result is not None:
            
@@ -462,7 +467,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
                
                current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
                
-                for hash_id in self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) ):
+                for hash_id in self._STI( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) ):
                    
                    hash_ids_to_current_file_service_ids[ hash_id ].append( service_id )
                    
@@ -482,22 +487,22 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
                
                ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
                
-                for ( hash_id, timestamp ) in self._c.execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ):
+                for ( hash_id, timestamp ) in self._Execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ):
                    
                    hash_ids_to_current_file_service_ids_and_timestamps[ hash_id ].append( ( service_id, timestamp ) )
                    
-                for ( hash_id, timestamp, original_timestamp ) in self._c.execute( 'SELECT hash_id, timestamp,
original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ): + for ( hash_id, timestamp, original_timestamp ) in self._Execute( 'SELECT hash_id, timestamp, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ): hash_ids_to_deleted_file_service_ids_and_timestamps[ hash_id ].append( ( service_id, timestamp, original_timestamp ) ) - for hash_id in self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ): + for hash_id in self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ): hash_ids_to_pending_file_service_ids[ hash_id ].append( service_id ) - for hash_id in self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, petitioned_files_table_name ) ): + for hash_id in self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, petitioned_files_table_name ) ): hash_ids_to_petitioned_file_service_ids[ hash_id ].append( service_id ) @@ -516,7 +521,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT ) combined_local_current_files_table_name = GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT ) - ( num_local, ) = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( current_files_table_name, combined_local_current_files_table_name ) ).fetchone() + ( num_local, ) = self._Execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( current_files_table_name, combined_local_current_files_table_name ) ).fetchone() return num_local @@ -525,7 +530,7 @@ class ClientDBFilesStorage( 
HydrusDBModule.HydrusDBModule ): pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING ) - result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( pending_files_table_name ) ).fetchone() + result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( pending_files_table_name ) ).fetchone() ( count, ) = result @@ -536,7 +541,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED ) - result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_files_table_name ) ).fetchone() + result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_files_table_name ) ).fetchone() ( count, ) = result @@ -545,7 +550,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): def GetServiceIdCounts( self, hash_ids ) -> typing.Dict[ int, int ]: - with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name: + with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name: service_ids_to_counts = {} @@ -554,7 +559,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT ) # temp hashes to files - ( count, ) = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ).fetchone() + ( count, ) = self._Execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ).fetchone() service_ids_to_counts[ service_id ] = count @@ -567,7 +572,7 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED ) - petitioned_rows = list( HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT reason_id, hash_id FROM {} ORDER BY 
reason_id LIMIT 100;'.format( petitioned_files_table_name ) ) ).items() ) + petitioned_rows = list( HydrusData.BuildKeyToListDict( self._Execute( 'SELECT reason_id, hash_id FROM {} ORDER BY reason_id LIMIT 100;'.format( petitioned_files_table_name ) ) ).items() ) return petitioned_rows @@ -598,9 +603,9 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED ) - with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name: + with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name: - rows = self._c.execute( 'SELECT hash_id, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ).fetchall() + rows = self._Execute( 'SELECT hash_id, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ).fetchall() return rows @@ -610,16 +615,16 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING ) - self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? 
);'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) def PetitionFiles( self, service_id, reason_id, hash_ids ): petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED ) - self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) - self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id, reason_id ) VALUES ( ?, ? );'.format( petitioned_files_table_name ), ( ( hash_id, reason_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id, reason_id ) VALUES ( ?, ? );'.format( petitioned_files_table_name ), ( ( hash_id, reason_id ) for hash_id in hash_ids ) ) def RecordDeleteFiles( self, service_id, insert_rows ): @@ -628,12 +633,12 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): now = HydrusData.GetNow() - self._c.executemany( + self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id, timestamp, original_timestamp ) VALUES ( ?, ?, ? 
);'.format( deleted_files_table_name ), ( ( hash_id, now, original_timestamp ) for ( hash_id, original_timestamp ) in insert_rows ) ) - num_new_deleted_files = HydrusDB.GetRowCount( self._c ) + num_new_deleted_files = self._GetRowCount() return num_new_deleted_files @@ -642,25 +647,25 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING ) - self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) def RescindPetitionFiles( self, service_id, hash_ids ): petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED ) - self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) def RemoveFiles( self, service_id, hash_ids ): ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id ) - self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) - self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) ) - pending_changed = HydrusDB.GetRowCount( self._c ) > 0 + pending_changed = self._GetRowCount() > 0 
return pending_changed @@ -669,5 +674,5 @@ class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ): reason_id = self.modules_texts.GetTextId( reason ) - self._c.executemany( 'REPLACE INTO local_file_deletion_reasons ( hash_id, reason_id ) VALUES ( ?, ? );', ( ( hash_id, reason_id ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'REPLACE INTO local_file_deletion_reasons ( hash_id, reason_id ) VALUES ( ?, ? );', ( ( hash_id, reason_id ) for hash_id in hash_ids ) ) diff --git a/hydrus/client/db/ClientDBMaintenance.py b/hydrus/client/db/ClientDBMaintenance.py index 0324717e..301c8ff9 100644 --- a/hydrus/client/db/ClientDBMaintenance.py +++ b/hydrus/client/db/ClientDBMaintenance.py @@ -30,7 +30,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): def _TableHasAtLeastRowCount( self, name, row_count ): - cursor = self._c.execute( 'SELECT 1 FROM {};'.format( name ) ) + cursor = self._Execute( 'SELECT 1 FROM {};'.format( name ) ) for i in range( row_count ): @@ -47,7 +47,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): def _TableIsEmpty( self, name ): - result = self._c.execute( 'SELECT 1 FROM {};'.format( name ) ) + result = self._Execute( 'SELECT 1 FROM {};'.format( name ) ) return result is None @@ -95,7 +95,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): - self._c.execute( 'ANALYZE sqlite_master;' ) # this reloads the current stats into the query planner + self._Execute( 'ANALYZE sqlite_master;' ) # this reloads the current stats into the query planner job_key.SetVariable( 'popup_text_1', 'done!' 
) @@ -114,7 +114,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): do_it = True - result = self._c.execute( 'SELECT num_rows FROM analyze_timestamps WHERE name = ?;', ( name, ) ).fetchone() + result = self._Execute( 'SELECT num_rows FROM analyze_timestamps WHERE name = ?;', ( name, ) ).fetchone() if result is not None: @@ -130,22 +130,22 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): if do_it: - self._c.execute( 'ANALYZE ' + name + ';' ) + self._Execute( 'ANALYZE ' + name + ';' ) - ( num_rows, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + name + ';' ).fetchone() + ( num_rows, ) = self._Execute( 'SELECT COUNT( * ) FROM ' + name + ';' ).fetchone() - self._c.execute( 'DELETE FROM analyze_timestamps WHERE name = ?;', ( name, ) ) + self._Execute( 'DELETE FROM analyze_timestamps WHERE name = ?;', ( name, ) ) - self._c.execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, num_rows, timestamp ) VALUES ( ?, ?, ? );', ( name, num_rows, HydrusData.GetNow() ) ) + self._Execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, num_rows, timestamp ) VALUES ( ?, ?, ? 
);', ( name, num_rows, HydrusData.GetNow() ) ) def CreateInitialTables( self ): - self._c.execute( 'CREATE TABLE last_shutdown_work_time ( last_shutdown_work_time INTEGER );' ) + self._Execute( 'CREATE TABLE last_shutdown_work_time ( last_shutdown_work_time INTEGER );' ) - self._c.execute( 'CREATE TABLE analyze_timestamps ( name TEXT, num_rows INTEGER, timestamp INTEGER );' ) - self._c.execute( 'CREATE TABLE vacuum_timestamps ( name TEXT, timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE analyze_timestamps ( name TEXT, num_rows INTEGER, timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE vacuum_timestamps ( name TEXT, timestamp INTEGER );' ) def GetExpectedTableNames( self ) -> typing.Collection[ str ]: @@ -161,7 +161,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): def GetLastShutdownWorkTime( self ): - result = self._c.execute( 'SELECT last_shutdown_work_time FROM last_shutdown_work_time;' ).fetchone() + result = self._Execute( 'SELECT last_shutdown_work_time FROM last_shutdown_work_time;' ).fetchone() if result is None: @@ -175,13 +175,13 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): def GetTableNamesDueAnalysis( self, force_reanalyze = False ): - db_names = [ name for ( index, name, path ) in self._c.execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ] + db_names = [ name for ( index, name, path ) in self._Execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ] all_names = set() for db_name in db_names: - all_names.update( ( name for ( name, ) in self._c.execute( 'SELECT name FROM {}.sqlite_master WHERE type = ?;'.format( db_name ), ( 'table', ) ) ) ) + all_names.update( ( name for ( name, ) in self._Execute( 'SELECT name FROM {}.sqlite_master WHERE type = ?;'.format( db_name ), ( 'table', ) ) ) ) all_names.discard( 'sqlite_stat1' ) @@ -203,7 +203,7 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): boundaries.append( ( 100000, False, 3 * 30 * 
86400 ) ) # anything bigger than 100k rows will now not be analyzed - existing_names_to_info = { name : ( num_rows, timestamp ) for ( name, num_rows, timestamp ) in self._c.execute( 'SELECT name, num_rows, timestamp FROM analyze_timestamps;' ) } + existing_names_to_info = { name : ( num_rows, timestamp ) for ( name, num_rows, timestamp ) in self._Execute( 'SELECT name, num_rows, timestamp FROM analyze_timestamps;' ) } names_to_analyze = [] @@ -268,11 +268,11 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): path = os.path.join( self._db_dir, filename ) - ( page_size, ) = self._c.execute( 'PRAGMA {}.page_size;'.format( name ) ).fetchone() - ( page_count, ) = self._c.execute( 'PRAGMA {}.page_count;'.format( name ) ).fetchone() - ( freelist_count, ) = self._c.execute( 'PRAGMA {}.freelist_count;'.format( name ) ).fetchone() + ( page_size, ) = self._Execute( 'PRAGMA {}.page_size;'.format( name ) ).fetchone() + ( page_count, ) = self._Execute( 'PRAGMA {}.page_count;'.format( name ) ).fetchone() + ( freelist_count, ) = self._Execute( 'PRAGMA {}.freelist_count;'.format( name ) ).fetchone() - result = self._c.execute( 'SELECT timestamp FROM vacuum_timestamps WHERE name = ?;', ( name, ) ).fetchone() + result = self._Execute( 'SELECT timestamp FROM vacuum_timestamps WHERE name = ?;', ( name, ) ).fetchone() if result is None: @@ -299,14 +299,14 @@ class ClientDBMaintenance( HydrusDBModule.HydrusDBModule ): def RegisterShutdownWork( self ): - self._c.execute( 'DELETE from last_shutdown_work_time;' ) + self._Execute( 'DELETE from last_shutdown_work_time;' ) - self._c.execute( 'INSERT INTO last_shutdown_work_time ( last_shutdown_work_time ) VALUES ( ? );', ( HydrusData.GetNow(), ) ) + self._Execute( 'INSERT INTO last_shutdown_work_time ( last_shutdown_work_time ) VALUES ( ? 
);', ( HydrusData.GetNow(), ) ) def RegisterSuccessfulVacuum( self, name: str ): - self._c.execute( 'DELETE FROM vacuum_timestamps WHERE name = ?;', ( name, ) ) + self._Execute( 'DELETE FROM vacuum_timestamps WHERE name = ?;', ( name, ) ) - self._c.execute( 'INSERT OR IGNORE INTO vacuum_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) ) + self._Execute( 'INSERT OR IGNORE INTO vacuum_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) ) diff --git a/hydrus/client/db/ClientDBMappingsStorage.py b/hydrus/client/db/ClientDBMappingsStorage.py index 38a94045..ec78d50e 100644 --- a/hydrus/client/db/ClientDBMappingsStorage.py +++ b/hydrus/client/db/ClientDBMappingsStorage.py @@ -52,36 +52,36 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id ) - self._c.execute( 'DELETE FROM {};'.format( current_mappings_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( deleted_mappings_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( pending_mappings_table_name ) ) - self._c.execute( 'DELETE FROM {};'.format( petitioned_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( current_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( deleted_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( pending_mappings_table_name ) ) + self._Execute( 'DELETE FROM {};'.format( petitioned_mappings_table_name ) ) def DropMappingsTables( self, service_id: int ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( current_mappings_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( deleted_mappings_table_name ) ) - 
self._c.execute( 'DROP TABLE IF EXISTS {};'.format( pending_mappings_table_name ) ) - self._c.execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( current_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( deleted_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( pending_mappings_table_name ) ) + self._Execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_mappings_table_name ) ) def GenerateMappingsTables( self, service_id: int ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( current_mappings_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( current_mappings_table_name ) ) self._CreateIndex( current_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( deleted_mappings_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( deleted_mappings_table_name ) ) self._CreateIndex( deleted_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( pending_mappings_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( pending_mappings_table_name ) ) self._CreateIndex( pending_mappings_table_name, 
[ 'hash_id', 'tag_id' ], unique = True ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( petitioned_mappings_table_name ) ) + self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( tag_id INTEGER, hash_id INTEGER, reason_id INTEGER, PRIMARY KEY ( tag_id, hash_id ) ) WITHOUT ROWID;'.format( petitioned_mappings_table_name ) ) self._CreateIndex( petitioned_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True ) @@ -89,7 +89,7 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id ) - result = self._c.execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {};'.format( current_mappings_table_name ) ).fetchone() + result = self._Execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {};'.format( current_mappings_table_name ) ).fetchone() ( count, ) = result @@ -100,7 +100,7 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id ) - result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_mappings_table_name ) ).fetchone() + result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_mappings_table_name ) ).fetchone() ( count, ) = result @@ -111,7 +111,7 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id ) - result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( pending_mappings_table_name ) ).fetchone() + result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( pending_mappings_table_name ) ).fetchone() ( count, ) = 
result @@ -122,7 +122,7 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ): ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id ) - result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_mappings_table_name ) ).fetchone() + result = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_mappings_table_name ) ).fetchone() ( count, ) = result diff --git a/hydrus/client/db/ClientDBMaster.py b/hydrus/client/db/ClientDBMaster.py index fbcb3f43..e2bcb024 100644 --- a/hydrus/client/db/ClientDBMaster.py +++ b/hydrus/client/db/ClientDBMaster.py @@ -53,14 +53,14 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ): ( uncached_hash_id, ) = uncached_hash_ids - rows = self._c.execute( 'SELECT hash_id, hash FROM hashes WHERE hash_id = ?;', ( uncached_hash_id, ) ).fetchall() + rows = self._Execute( 'SELECT hash_id, hash FROM hashes WHERE hash_id = ?;', ( uncached_hash_id, ) ).fetchall() else: - with HydrusDB.TemporaryIntegerTable( self._c, uncached_hash_ids, 'hash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( uncached_hash_ids, 'hash_id' ) as temp_table_name: # temp hash_ids to actual hashes - rows = self._c.execute( 'SELECT hash_id, hash FROM {} CROSS JOIN hashes USING ( hash_id );'.format( temp_table_name ) ).fetchall() + rows = self._Execute( 'SELECT hash_id, hash FROM {} CROSS JOIN hashes USING ( hash_id );'.format( temp_table_name ) ).fetchall() @@ -100,9 +100,9 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ): def CreateInitialTables( self ): - self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.hashes ( hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.local_hashes ( hash_id 
INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.local_hashes ( hash_id INTEGER PRIMARY KEY, md5 BLOB_BYTES, sha1 BLOB_BYTES, sha512 BLOB_BYTES );' ) def GetExpectedTableNames( self ) -> typing.Collection[ str ]: @@ -117,7 +117,7 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ): def GetExtraHash( self, hash_type, hash_id ) -> bytes: - result = self._c.execute( 'SELECT {} FROM local_hashes WHERE hash_id = ?;'.format( hash_type ), ( hash_id, ) ).fetchone() + result = self._Execute( 'SELECT {} FROM local_hashes WHERE hash_id = ?;'.format( hash_type ), ( hash_id, ) ).fetchone() if result is None: @@ -146,7 +146,7 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ): continue - result = self._c.execute( 'SELECT hash_id FROM local_hashes WHERE {} = ?;'.format( given_hash_type ), ( sqlite3.Binary( given_hash ), ) ).fetchone() + result = self._Execute( 'SELECT hash_id FROM local_hashes WHERE {} = ?;'.format( given_hash_type ), ( sqlite3.Binary( given_hash ), ) ).fetchone() if result is not None: @@ -163,7 +163,7 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ): else: - desired_hashes = [ desired_hash for ( desired_hash, ) in self._c.execute( 'SELECT {} FROM local_hashes WHERE hash_id IN {};'.format( desired_hash_type, HydrusData.SplayListForDB( hash_ids ) ) ) ] + desired_hashes = [ desired_hash for ( desired_hash, ) in self._Execute( 'SELECT {} FROM local_hashes WHERE hash_id IN {};'.format( desired_hash_type, HydrusData.SplayListForDB( hash_ids ) ) ) ] return desired_hashes @@ -185,13 +185,13 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ): def GetHashId( self, hash ) -> int: - result = self._c.execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone() + result = self._Execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone() if result is None: - 
- self._c.execute( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( sqlite3.Binary( hash ), ) )
+ self._Execute( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( sqlite3.Binary( hash ), ) )
- hash_id = self._c.lastrowid
+ hash_id = self._GetLastRowId()
else:
@@ -205,15 +205,15 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
if hash_type == 'md5':
- result = self._c.execute( 'SELECT hash_id FROM local_hashes WHERE md5 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+ result = self._Execute( 'SELECT hash_id FROM local_hashes WHERE md5 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
elif hash_type == 'sha1':
- result = self._c.execute( 'SELECT hash_id FROM local_hashes WHERE sha1 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+ result = self._Execute( 'SELECT hash_id FROM local_hashes WHERE sha1 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
elif hash_type == 'sha512':
- result = self._c.execute( 'SELECT hash_id FROM local_hashes WHERE sha512 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+ result = self._Execute( 'SELECT hash_id FROM local_hashes WHERE sha512 = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
if result is None:
@@ -238,7 +238,7 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
continue
- result = self._c.execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+ result = self._Execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
if result is None:
@@ -254,11 +254,11 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
if len( hashes_not_in_db ) > 0:
- self._c.executemany( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( ( sqlite3.Binary( hash ), ) for hash in hashes_not_in_db ) )
+ self._ExecuteMany( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( ( sqlite3.Binary( hash ), ) for hash in hashes_not_in_db ) )
for hash in hashes_not_in_db:
- ( hash_id, ) = self._c.execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+ ( hash_id, ) = self._Execute( 'SELECT hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
hash_ids.add( hash_id )
@@ -295,21 +295,21 @@ class ClientDBMasterHashes( HydrusDBModule.HydrusDBModule ):
def HasExtraHashes( self, hash_id ):
- result = self._c.execute( 'SELECT 1 FROM local_hashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
+ result = self._Execute( 'SELECT 1 FROM local_hashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
return result is not None
def HasHashId( self, hash_id: int ):
- result = self._c.execute( 'SELECT 1 FROM hashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
+ result = self._Execute( 'SELECT 1 FROM hashes WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
return result is not None
def SetExtraHashes( self, hash_id, md5, sha1, sha512 ):
- self._c.execute( 'INSERT OR IGNORE INTO local_hashes ( hash_id, md5, sha1, sha512 ) VALUES ( ?, ?, ?, ? );', ( hash_id, sqlite3.Binary( md5 ), sqlite3.Binary( sha1 ), sqlite3.Binary( sha512 ) ) )
+ self._Execute( 'INSERT OR IGNORE INTO local_hashes ( hash_id, md5, sha1, sha512 ) VALUES ( ?, ?, ?, ? );', ( hash_id, sqlite3.Binary( md5 ), sqlite3.Binary( sha1 ), sqlite3.Binary( sha512 ) ) )
class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
@@ -328,13 +328,13 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
- self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.labels ( label_id INTEGER PRIMARY KEY, label TEXT UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.labels ( label_id INTEGER PRIMARY KEY, label TEXT UNIQUE );' )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.notes ( note_id INTEGER PRIMARY KEY, note TEXT UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.notes ( note_id INTEGER PRIMARY KEY, note TEXT UNIQUE );' )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.texts ( text_id INTEGER PRIMARY KEY, text TEXT UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.texts ( text_id INTEGER PRIMARY KEY, text TEXT UNIQUE );' )
- self._c.execute( 'CREATE VIRTUAL TABLE IF NOT EXISTS external_caches.notes_fts4 USING fts4( note );' )
+ self._Execute( 'CREATE VIRTUAL TABLE IF NOT EXISTS external_caches.notes_fts4 USING fts4( note );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@@ -351,13 +351,13 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def GetLabelId( self, label ):
- result = self._c.execute( 'SELECT label_id FROM labels WHERE label = ?;', ( label, ) ).fetchone()
+ result = self._Execute( 'SELECT label_id FROM labels WHERE label = ?;', ( label, ) ).fetchone()
if result is None:
- self._c.execute( 'INSERT INTO labels ( label ) VALUES ( ? );', ( label, ) )
+ self._Execute( 'INSERT INTO labels ( label ) VALUES ( ? );', ( label, ) )
- label_id = self._c.lastrowid
+ label_id = self._GetLastRowId()
else:
@@ -369,15 +369,15 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def GetNoteId( self, note: str ) -> int:
- result = self._c.execute( 'SELECT note_id FROM notes WHERE note = ?;', ( note, ) ).fetchone()
+ result = self._Execute( 'SELECT note_id FROM notes WHERE note = ?;', ( note, ) ).fetchone()
if result is None:
- self._c.execute( 'INSERT INTO notes ( note ) VALUES ( ? );', ( note, ) )
+ self._Execute( 'INSERT INTO notes ( note ) VALUES ( ? );', ( note, ) )
- note_id = self._c.lastrowid
+ note_id = self._GetLastRowId()
- self._c.execute( 'REPLACE INTO notes_fts4 ( docid, note ) VALUES ( ?, ? );', ( note_id, note ) )
+ self._Execute( 'REPLACE INTO notes_fts4 ( docid, note ) VALUES ( ?, ? );', ( note_id, note ) )
else:
@@ -394,7 +394,7 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def GetText( self, text_id ):
- result = self._c.execute( 'SELECT text FROM texts WHERE text_id = ?;', ( text_id, ) ).fetchone()
+ result = self._Execute( 'SELECT text FROM texts WHERE text_id = ?;', ( text_id, ) ).fetchone()
if result is None:
@@ -408,13 +408,13 @@ class ClientDBMasterTexts( HydrusDBModule.HydrusDBModule ):
def GetTextId( self, text ):
- result = self._c.execute( 'SELECT text_id FROM texts WHERE text = ?;', ( text, ) ).fetchone()
+ result = self._Execute( 'SELECT text_id FROM texts WHERE text = ?;', ( text, ) ).fetchone()
if result is None:
- self._c.execute( 'INSERT INTO texts ( text ) VALUES ( ? );', ( text, ) )
+ self._Execute( 'INSERT INTO texts ( text ) VALUES ( ? );', ( text, ) )
- text_id = self._c.lastrowid
+ text_id = self._GetLastRowId()
else:
@@ -465,14 +465,14 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
( uncached_tag_id, ) = uncached_tag_ids
- rows = self._c.execute( 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags WHERE tag_id = ?;', ( uncached_tag_id, ) ).fetchall()
+ rows = self._Execute( 'SELECT tag_id, namespace, subtag FROM tags NATURAL JOIN namespaces NATURAL JOIN subtags WHERE tag_id = ?;', ( uncached_tag_id, ) ).fetchall()
else:
- with HydrusDB.TemporaryIntegerTable( self._c, uncached_tag_ids, 'tag_id' ) as temp_table_name:
+ with self._MakeTemporaryIntegerTable( uncached_tag_ids, 'tag_id' ) as temp_table_name:
# temp tag_ids to tags to subtags and namespaces
- rows = self._c.execute( 'SELECT tag_id, namespace, subtag FROM {} CROSS JOIN tags USING ( tag_id ) CROSS JOIN subtags USING ( subtag_id ) CROSS JOIN namespaces USING ( namespace_id );'.format( temp_table_name ) ).fetchall()
+ rows = self._Execute( 'SELECT tag_id, namespace, subtag FROM {} CROSS JOIN tags USING ( tag_id ) CROSS JOIN subtags USING ( subtag_id ) CROSS JOIN namespaces USING ( namespace_id );'.format( temp_table_name ) ).fetchall()
@@ -491,7 +491,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
namespace_id = self.GetNamespaceId( namespace )
subtag_id = self.GetSubtagId( subtag )
- self._c.execute( 'REPLACE INTO tags ( tag_id, namespace_id, subtag_id ) VALUES ( ?, ?, ? );', ( tag_id, namespace_id, subtag_id ) )
+ self._Execute( 'REPLACE INTO tags ( tag_id, namespace_id, subtag_id ) VALUES ( ?, ?, ? );', ( tag_id, namespace_id, subtag_id ) )
uncached_tag_ids_to_tags[ tag_id ] = tag
@@ -504,11 +504,11 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
- self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.namespaces ( namespace_id INTEGER PRIMARY KEY, namespace TEXT UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.namespaces ( namespace_id INTEGER PRIMARY KEY, namespace TEXT UNIQUE );' )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.subtags ( subtag_id INTEGER PRIMARY KEY, subtag TEXT UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.subtags ( subtag_id INTEGER PRIMARY KEY, subtag TEXT UNIQUE );' )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( tag_id INTEGER PRIMARY KEY, namespace_id INTEGER, subtag_id INTEGER );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@@ -528,19 +528,19 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
if self.null_namespace_id is None:
- ( self.null_namespace_id, ) = self._c.execute( 'SELECT namespace_id FROM namespaces WHERE namespace = ?;', ( '', ) ).fetchone()
+ ( self.null_namespace_id, ) = self._Execute( 'SELECT namespace_id FROM namespaces WHERE namespace = ?;', ( '', ) ).fetchone()
return self.null_namespace_id
- result = self._c.execute( 'SELECT namespace_id FROM namespaces WHERE namespace = ?;', ( namespace, ) ).fetchone()
+ result = self._Execute( 'SELECT namespace_id FROM namespaces WHERE namespace = ?;', ( namespace, ) ).fetchone()
if result is None:
- self._c.execute( 'INSERT INTO namespaces ( namespace ) VALUES ( ? );', ( namespace, ) )
+ self._Execute( 'INSERT INTO namespaces ( namespace ) VALUES ( ? );', ( namespace, ) )
- namespace_id = self._c.lastrowid
+ namespace_id = self._GetLastRowId()
else:
@@ -552,13 +552,13 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
def GetSubtagId( self, subtag ) -> int:
- result = self._c.execute( 'SELECT subtag_id FROM subtags WHERE subtag = ?;', ( subtag, ) ).fetchone()
+ result = self._Execute( 'SELECT subtag_id FROM subtags WHERE subtag = ?;', ( subtag, ) ).fetchone()
if result is None:
- self._c.execute( 'INSERT INTO subtags ( subtag ) VALUES ( ? );', ( subtag, ) )
+ self._Execute( 'INSERT INTO subtags ( subtag ) VALUES ( ? );', ( subtag, ) )
- subtag_id = self._c.lastrowid
+ subtag_id = self._GetLastRowId()
else:
@@ -602,13 +602,13 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
namespace_id = self.GetNamespaceId( namespace )
subtag_id = self.GetSubtagId( subtag )
- result = self._c.execute( 'SELECT tag_id FROM tags WHERE namespace_id = ? AND subtag_id = ?;', ( namespace_id, subtag_id ) ).fetchone()
+ result = self._Execute( 'SELECT tag_id FROM tags WHERE namespace_id = ? AND subtag_id = ?;', ( namespace_id, subtag_id ) ).fetchone()
if result is None:
- self._c.execute( 'INSERT INTO tags ( namespace_id, subtag_id ) VALUES ( ?, ? );', ( namespace_id, subtag_id ) )
+ self._Execute( 'INSERT INTO tags ( namespace_id, subtag_id ) VALUES ( ?, ? );', ( namespace_id, subtag_id ) )
- tag_id = self._c.lastrowid
+ tag_id = self._GetLastRowId()
else:
@@ -641,7 +641,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
return True
- result = self._c.execute( 'SELECT 1 FROM namespaces WHERE namespace = ?;', ( namespace, ) ).fetchone()
+ result = self._Execute( 'SELECT 1 FROM namespaces WHERE namespace = ?;', ( namespace, ) ).fetchone()
if result is None:
@@ -664,7 +664,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
return False
- result = self._c.execute( 'SELECT 1 FROM subtags WHERE subtag = ?;', ( subtag, ) ).fetchone()
+ result = self._Execute( 'SELECT 1 FROM subtags WHERE subtag = ?;', ( subtag, ) ).fetchone()
if result is None:
@@ -711,7 +711,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
subtag_id = self.GetSubtagId( subtag )
- result = self._c.execute( 'SELECT 1 FROM tags WHERE namespace_id = ? AND subtag_id = ?;', ( namespace_id, subtag_id ) ).fetchone()
+ result = self._Execute( 'SELECT 1 FROM tags WHERE namespace_id = ? AND subtag_id = ?;', ( namespace_id, subtag_id ) ).fetchone()
if result is None:
@@ -730,7 +730,7 @@ class ClientDBMasterTags( HydrusDBModule.HydrusDBModule ):
def UpdateTagId( self, tag_id, namespace_id, subtag_id ):
- self._c.execute( 'UPDATE tags SET namespace_id = ?, subtag_id = ? WHERE tag_id = ?;', ( namespace_id, subtag_id, tag_id ) )
+ self._Execute( 'UPDATE tags SET namespace_id = ?, subtag_id = ? WHERE tag_id = ?;', ( namespace_id, subtag_id, tag_id ) )
if tag_id in self._tag_ids_to_tags_cache:
@@ -756,9 +756,9 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
- self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.url_domains ( domain_id INTEGER PRIMARY KEY, domain TEXT UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.url_domains ( domain_id INTEGER PRIMARY KEY, domain TEXT UNIQUE );' )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.urls ( url_id INTEGER PRIMARY KEY, domain_id INTEGER, url TEXT UNIQUE );' )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.urls ( url_id INTEGER PRIMARY KEY, domain_id INTEGER, url TEXT UNIQUE );' )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@@ -780,13 +780,13 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
def GetURLDomainId( self, domain ):
- result = self._c.execute( 'SELECT domain_id FROM url_domains WHERE domain = ?;', ( domain, ) ).fetchone()
+ result = self._Execute( 'SELECT domain_id FROM url_domains WHERE domain = ?;', ( domain, ) ).fetchone()
if result is None:
- self._c.execute( 'INSERT INTO url_domains ( domain ) VALUES ( ? );', ( domain, ) )
+ self._Execute( 'INSERT INTO url_domains ( domain ) VALUES ( ? );', ( domain, ) )
- domain_id = self._c.lastrowid
+ domain_id = self._GetLastRowId()
else:
@@ -813,7 +813,7 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
search_phrase = '%.{}'.format( domain )
- for ( domain_id, ) in self._c.execute( 'SELECT domain_id FROM url_domains WHERE domain LIKE ?;', ( search_phrase, ) ):
+ for ( domain_id, ) in self._Execute( 'SELECT domain_id FROM url_domains WHERE domain LIKE ?;', ( search_phrase, ) ):
domain_ids.add( domain_id )
@@ -823,7 +823,7 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
def GetURLId( self, url ):
- result = self._c.execute( 'SELECT url_id FROM urls WHERE url = ?;', ( url, ) ).fetchone()
+ result = self._Execute( 'SELECT url_id FROM urls WHERE url = ?;', ( url, ) ).fetchone()
if result is None:
@@ -838,9 +838,9 @@ class ClientDBMasterURLs( HydrusDBModule.HydrusDBModule ):
domain_id = self.GetURLDomainId( domain )
- self._c.execute( 'INSERT INTO urls ( domain_id, url ) VALUES ( ?, ? );', ( domain_id, url ) )
+ self._Execute( 'INSERT INTO urls ( domain_id, url ) VALUES ( ?, ? );', ( domain_id, url ) )
- url_id = self._c.lastrowid
+ url_id = self._GetLastRowId()
else:
diff --git a/hydrus/client/db/ClientDBRepositories.py b/hydrus/client/db/ClientDBRepositories.py
index 22d8b0aa..078195d4 100644
--- a/hydrus/client/db/ClientDBRepositories.py
+++ b/hydrus/client/db/ClientDBRepositories.py
@@ -125,13 +125,13 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
if hash_ids is None:
- hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {};'.format( repository_unregistered_updates_table_name ) ) )
+ hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( repository_unregistered_updates_table_name ) ) )
else:
- with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+ with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
- hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, repository_unregistered_updates_table_name ) ) )
+ hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, repository_unregistered_updates_table_name ) ) )
@@ -141,9 +141,9 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
service_type = self.modules_services.GetService( service_id ).GetServiceType()
- with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
+ with self._MakeTemporaryIntegerTable( hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
- hash_ids_to_mimes = { hash_id : mime for ( hash_id, mime ) in self._c.execute( 'SELECT hash_id, mime FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ) }
+ hash_ids_to_mimes = { hash_id : mime for ( hash_id, mime ) in self._Execute( 'SELECT hash_id, mime FROM {} CROSS JOIN files_info USING ( hash_id );'.format( temp_hash_ids_table_name ) ) }
if len( hash_ids_to_mimes ) > 0:
@@ -165,8 +165,8 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
inserts.extend( ( ( hash_id, content_type, processed ) for content_type in content_types ) )
- self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id, content_type, processed ) VALUES ( ?, ?, ? );'.format( repository_updates_processed_table_name ), inserts )
- self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in hash_ids_to_mimes.keys() ) )
+ self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id, content_type, processed ) VALUES ( ?, ?, ? );'.format( repository_updates_processed_table_name ), inserts )
+ self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in hash_ids_to_mimes.keys() ) )
@@ -175,7 +175,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
- self._c.executemany( 'UPDATE {} SET processed = ? WHERE content_type = ?;'.format( repository_updates_processed_table_name ), ( ( False, content_type ) for content_type in content_types ) )
+ self._ExecuteMany( 'UPDATE {} SET processed = ? WHERE content_type = ?;'.format( repository_updates_processed_table_name ), ( ( False, content_type ) for content_type in content_types ) )
self._ClearOutstandingWorkCache( service_id )
@@ -186,7 +186,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
- update_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
+ update_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
self.modules_files_maintenance.AddJobs( update_hash_ids, job_type )
@@ -208,9 +208,9 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
- self._c.executemany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
+ self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
- self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
+ self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
self._RegisterUpdates( service_id )
@@ -225,14 +225,14 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
- self._c.execute( 'DROP TABLE IF EXISTS {};'.format( repository_updates_table_name ) )
- self._c.execute( 'DROP TABLE IF EXISTS {};'.format( repository_unregistered_updates_table_name ) )
- self._c.execute( 'DROP TABLE IF EXISTS {};'.format( repository_updates_processed_table_name ) )
+ self._Execute( 'DROP TABLE IF EXISTS {};'.format( repository_updates_table_name ) )
+ self._Execute( 'DROP TABLE IF EXISTS {};'.format( repository_unregistered_updates_table_name ) )
+ self._Execute( 'DROP TABLE IF EXISTS {};'.format( repository_updates_processed_table_name ) )
( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryDefinitionTableNames( service_id )
- self._c.execute( 'DROP TABLE IF EXISTS {};'.format( hash_id_map_table_name ) )
- self._c.execute( 'DROP TABLE IF EXISTS {};'.format( tag_id_map_table_name ) )
+ self._Execute( 'DROP TABLE IF EXISTS {};'.format( hash_id_map_table_name ) )
+ self._Execute( 'DROP TABLE IF EXISTS {};'.format( tag_id_map_table_name ) )
self._ClearOutstandingWorkCache( service_id )
@@ -249,18 +249,18 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( update_index INTEGER, hash_id INTEGER, PRIMARY KEY ( update_index, hash_id ) );'.format( repository_updates_table_name ) )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( update_index INTEGER, hash_id INTEGER, PRIMARY KEY ( update_index, hash_id ) );'.format( repository_updates_table_name ) )
self._CreateIndex( repository_updates_table_name, [ 'hash_id' ] )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( repository_unregistered_updates_table_name ) )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( repository_unregistered_updates_table_name ) )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, content_type INTEGER, processed INTEGER_BOOLEAN, PRIMARY KEY ( hash_id, content_type ) );'.format( repository_updates_processed_table_name ) )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER, content_type INTEGER, processed INTEGER_BOOLEAN, PRIMARY KEY ( hash_id, content_type ) );'.format( repository_updates_processed_table_name ) )
self._CreateIndex( repository_updates_processed_table_name, [ 'content_type' ] )
( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryDefinitionTableNames( service_id )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( service_hash_id INTEGER PRIMARY KEY, hash_id INTEGER );'.format( hash_id_map_table_name ) )
- self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( service_tag_id INTEGER PRIMARY KEY, tag_id INTEGER );'.format( tag_id_map_table_name ) )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( service_hash_id INTEGER PRIMARY KEY, hash_id INTEGER );'.format( hash_id_map_table_name ) )
+ self._Execute( 'CREATE TABLE IF NOT EXISTS {} ( service_tag_id INTEGER PRIMARY KEY, tag_id INTEGER );'.format( tag_id_map_table_name ) )
def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
@@ -277,14 +277,14 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
- ( num_updates, ) = self._c.execute( 'SELECT COUNT( * ) FROM {}'.format( repository_updates_table_name ) ).fetchone()
+ ( num_updates, ) = self._Execute( 'SELECT COUNT( * ) FROM {}'.format( repository_updates_table_name ) ).fetchone()
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
- ( num_local_updates, ) = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( table_join ) ).fetchone()
+ ( num_local_updates, ) = self._Execute( 'SELECT COUNT( * ) FROM {};'.format( table_join ) ).fetchone()
- content_types_to_num_updates = collections.Counter( dict( self._c.execute( 'SELECT content_type, COUNT( * ) FROM {} GROUP BY content_type;'.format( repository_updates_processed_table_name ) ) ) )
- content_types_to_num_processed_updates = collections.Counter( dict( self._c.execute( 'SELECT content_type, COUNT( * ) FROM {} WHERE processed = ? GROUP BY content_type;'.format( repository_updates_processed_table_name ), ( True, ) ) ) )
+ content_types_to_num_updates = collections.Counter( dict( self._Execute( 'SELECT content_type, COUNT( * ) FROM {} GROUP BY content_type;'.format( repository_updates_processed_table_name ) ) ) )
+ content_types_to_num_processed_updates = collections.Counter( dict( self._Execute( 'SELECT content_type, COUNT( * ) FROM {} WHERE processed = ? GROUP BY content_type;'.format( repository_updates_processed_table_name ), ( True, ) ) ) )
# little helpful thing that pays off later
for content_type in content_types_to_num_updates:
@@ -307,17 +307,17 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
- result = self._c.execute( 'SELECT 1 FROM {} WHERE content_type = ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( HC.CONTENT_TYPE_DEFINITIONS, True ) ).fetchone()
+ result = self._Execute( 'SELECT 1 FROM {} WHERE content_type = ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( HC.CONTENT_TYPE_DEFINITIONS, True ) ).fetchone()
this_is_first_definitions_work = result is None
- result = self._c.execute( 'SELECT 1 FROM {} WHERE content_type != ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( HC.CONTENT_TYPE_DEFINITIONS, True ) ).fetchone()
+ result = self._Execute( 'SELECT 1 FROM {} WHERE content_type != ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( HC.CONTENT_TYPE_DEFINITIONS, True ) ).fetchone()
this_is_first_content_work = result is None
min_unregistered_update_index = None
- result = self._c.execute( 'SELECT MIN( update_index ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( repository_unregistered_updates_table_name, repository_updates_table_name ) ).fetchone()
+ result = self._Execute( 'SELECT MIN( update_index ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( repository_unregistered_updates_table_name, repository_updates_table_name ) ).fetchone()
if result is not None:
@@ -336,12 +336,12 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
query = 'SELECT update_index, hash_id, content_type FROM {} CROSS JOIN {} USING ( hash_id ) WHERE {};'.format( repository_updates_processed_table_name, repository_updates_table_name, predicate_phrase )
- rows = self._c.execute( query ).fetchall()
+ rows = self._Execute( query ).fetchall()
update_indices_to_unprocessed_hash_ids = HydrusData.BuildKeyToSetDict( ( ( update_index, hash_id ) for ( update_index, hash_id, content_type ) in rows ) )
hash_ids_to_content_types_to_process = HydrusData.BuildKeyToSetDict( ( ( hash_id, content_type ) for ( update_index, hash_id, content_type ) in rows ) )
- all_hash_ids = set( itertools.chain.from_iterable( update_indices_to_unprocessed_hash_ids.values() ) )
+ all_hash_ids = set( hash_ids_to_content_types_to_process.keys() )
all_local_hash_ids = self.modules_files_storage.FilterCurrentHashIds( self.modules_services.local_update_service_id, all_hash_ids )
@@ -400,11 +400,11 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
- all_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {} ORDER BY update_index ASC;'.format( repository_updates_table_name ) ) )
+ all_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM {} ORDER BY update_index ASC;'.format( repository_updates_table_name ) ) )
table_join = self.modules_files_storage.GetCurrentTableJoinPhrase( self.modules_services.local_update_service_id, repository_updates_table_name )
- existing_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
+ existing_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( table_join ) ) )
needed_hash_ids = [ hash_id for hash_id in all_hash_ids if hash_id not in existing_hash_ids ]
@@ -455,7 +455,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
if content_type not in content_types_to_outstanding_local_processing:
- result = self._STL( self._c.execute( 'SELECT 1 FROM {} WHERE content_type = ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( content_type, False ) ).fetchmany( 20 ) )
+ result = self._STL( self._Execute( 'SELECT 1 FROM {} WHERE content_type = ? AND processed = ?;'.format( repository_updates_processed_table_name ), ( content_type, False ) ).fetchmany( 20 ) )
content_types_to_outstanding_local_processing[ content_type ] = len( result ) >= 20
@@ -473,7 +473,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
hash_id_map_table_name = GenerateRepositoryFileDefinitionTableName( service_id )
- result = self._c.execute( 'SELECT hash_id FROM {} WHERE service_hash_id = ?;'.format( hash_id_map_table_name ), ( service_hash_id, ) ).fetchone()
+ result = self._Execute( 'SELECT hash_id FROM {} WHERE service_hash_id = ?;'.format( hash_id_map_table_name ), ( service_hash_id, ) ).fetchone()
if result is None:
@@ -489,10 +489,10 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
hash_id_map_table_name = GenerateRepositoryFileDefinitionTableName( service_id )
- with HydrusDB.TemporaryIntegerTable( self._c, service_hash_ids, 'service_hash_id' ) as temp_table_name:
+ with self._MakeTemporaryIntegerTable( service_hash_ids, 'service_hash_id' ) as temp_table_name:
# temp service hashes to lookup
- hash_ids_potentially_dupes = self._STL( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( service_hash_id );'.format( temp_table_name, hash_id_map_table_name ) ) )
+ hash_ids_potentially_dupes = self._STL( self._Execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( service_hash_id );'.format( temp_table_name, hash_id_map_table_name ) ) )
# every service_id can only exist once, but technically a hash_id could be mapped to two service_ids
@@ -502,7 +502,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
for service_hash_id in service_hash_ids:
- result = self._c.execute( 'SELECT hash_id FROM {} WHERE service_hash_id = ?;'.format( hash_id_map_table_name ), ( service_hash_id, ) ).fetchone()
+ result = self._Execute( 'SELECT hash_id FROM {} WHERE service_hash_id = ?;'.format( hash_id_map_table_name ), ( service_hash_id, ) ).fetchone()
if result is None:
@@ -522,7 +522,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
tag_id_map_table_name = GenerateRepositoryTagDefinitionTableName( service_id )
- result = self._c.execute( 'SELECT tag_id FROM {} WHERE service_tag_id = ?;'.format( tag_id_map_table_name ), ( service_tag_id, ) ).fetchone()
+ result = self._Execute( 'SELECT tag_id FROM {} WHERE service_tag_id = ?;'.format( tag_id_map_table_name ), ( service_tag_id, ) ).fetchone()
if result is None:
@@ -569,7 +569,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
inserts.append( ( service_hash_id, hash_id ) )
- self._c.executemany( 'REPLACE INTO {} ( service_hash_id, hash_id ) VALUES ( ?, ? );'.format( hash_id_map_table_name ), inserts )
+ self._ExecuteMany( 'REPLACE INTO {} ( service_hash_id, hash_id ) VALUES ( ?, ? );'.format( hash_id_map_table_name ), inserts )
num_rows_processed += len( inserts )
@@ -606,7 +606,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
inserts.append( ( service_tag_id, tag_id ) )
- self._c.executemany( 'REPLACE INTO {} ( service_tag_id, tag_id ) VALUES ( ?, ? );'.format( tag_id_map_table_name ), inserts )
+ self._ExecuteMany( 'REPLACE INTO {} ( service_tag_id, tag_id ) VALUES ( ?, ? );'.format( tag_id_map_table_name ), inserts )
num_rows_processed += len( inserts )
@@ -644,15 +644,15 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
( repository_updates_table_name, repository_unregistered_updates_table_name, repository_updates_processed_table_name ) = GenerateRepositoryUpdatesTableNames( service_id )
- current_update_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {};'.format( repository_updates_table_name ) ) )
+ current_update_hash_ids = self._STS( self._Execute( 'SELECT hash_id FROM {};'.format( repository_updates_table_name ) ) )
all_future_update_hash_ids = self.modules_hashes_local_cache.GetHashIds( metadata.GetUpdateHashes() )
deletee_hash_ids = current_update_hash_ids.difference( all_future_update_hash_ids )
- self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
- self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
- self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_processed_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
+ self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
+ self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
+ self._ExecuteMany( 'DELETE FROM {} WHERE hash_id = ?;'.format( repository_updates_processed_table_name ), ( ( hash_id, ) for hash_id in deletee_hash_ids ) )
inserts = []
@@ -664,7 +664,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
if hash_id in current_update_hash_ids:
- self._c.execute( 'UPDATE {} SET update_index = ? WHERE hash_id = ?;'.format( repository_updates_table_name ), ( update_index, hash_id ) )
+ self._Execute( 'UPDATE {} SET update_index = ? WHERE hash_id = ?;'.format( repository_updates_table_name ), ( update_index, hash_id ) )
else:
@@ -673,8 +673,8 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
- self._c.executemany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
- self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
+ self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( update_index, hash_id ) VALUES ( ?, ? );'.format( repository_updates_table_name ), inserts )
+ self._ExecuteMany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( repository_unregistered_updates_table_name ), ( ( hash_id, ) for ( update_index, hash_id ) in inserts ) )
self._RegisterUpdates( service_id )
@@ -687,7 +687,7 @@ class ClientDBRepositories( HydrusDBModule.HydrusDBModule ):
update_hash_id = self.modules_hashes_local_cache.GetHashId( update_hash )
- self._c.executemany( 'UPDATE {} SET processed = ? WHERE hash_id = ? AND content_type = ?;'.format( repository_updates_processed_table_name ), ( ( True, update_hash_id, content_type ) for content_type in content_types ) )
+ self._ExecuteMany( 'UPDATE {} SET processed = ? WHERE hash_id = ? AND content_type = ?;'.format( repository_updates_processed_table_name ), ( ( True, update_hash_id, content_type ) for content_type in content_types ) )
for content_type in content_types:
diff --git a/hydrus/client/db/ClientDBSerialisable.py b/hydrus/client/db/ClientDBSerialisable.py
index 932f130b..27a07d58 100644
--- a/hydrus/client/db/ClientDBSerialisable.py
+++ b/hydrus/client/db/ClientDBSerialisable.py
@@ -162,32 +162,32 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ):
def CreateInitialTables( self ):
- self._c.execute( 'CREATE TABLE json_dict ( name TEXT PRIMARY KEY, dump BLOB_BYTES );' )
- self._c.execute( 'CREATE TABLE json_dumps ( dump_type INTEGER PRIMARY KEY, version INTEGER, dump BLOB_BYTES );' )
- self._c.execute( 'CREATE TABLE json_dumps_named ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB_BYTES, PRIMARY KEY ( dump_type, dump_name, timestamp ) );' )
- self._c.execute( 'CREATE TABLE json_dumps_hashed ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );' )
+ self._Execute( 'CREATE TABLE json_dict ( name TEXT PRIMARY KEY, dump BLOB_BYTES );' )
+ self._Execute( 'CREATE TABLE json_dumps ( dump_type INTEGER PRIMARY KEY, version INTEGER, dump BLOB_BYTES );' )
+ self._Execute( 'CREATE TABLE json_dumps_named ( dump_type INTEGER, dump_name TEXT, version INTEGER, timestamp INTEGER, dump BLOB_BYTES, PRIMARY KEY ( dump_type, dump_name, timestamp ) );' )
+ self._Execute( 'CREATE TABLE json_dumps_hashed ( hash BLOB_BYTES PRIMARY KEY, dump_type INTEGER, version INTEGER, dump BLOB_BYTES );' )
- self._c.execute( 'CREATE TABLE yaml_dumps ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );' )
+ self._Execute( 'CREATE TABLE yaml_dumps ( dump_type INTEGER, dump_name TEXT, dump TEXT_YAML, PRIMARY KEY ( dump_type, dump_name ) );' )
def DeleteJSONDump( self, dump_type ):
- self._c.execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) )
+ self._Execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) ) def DeleteJSONDumpNamed( self, dump_type, dump_name = None, timestamp = None ): if dump_name is None: - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ) elif timestamp is None: - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) else: - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) ) @@ -195,20 +195,20 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): if dump_name is None: - self._c.execute( 'DELETE FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) + self._Execute( 'DELETE FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) else: if dump_type == YAML_DUMP_ID_LOCAL_BOORU: dump_name = dump_name.hex() - self._c.execute( 'DELETE FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) + self._Execute( 'DELETE FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) if dump_type == YAML_DUMP_ID_LOCAL_BOORU: service_id = self.modules_services.GetServiceId( CC.LOCAL_BOORU_SERVICE_KEY ) - self._c.execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_SHARES ) ) + self._Execute( 'DELETE FROM service_info WHERE service_id = ? 
AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_SHARES ) ) HG.client_controller.pub( 'refresh_local_booru_shares' ) @@ -219,7 +219,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): all_expected_hashes = set() # not the GetJSONDumpNamesToBackupTimestamps call, which excludes the latest save! - names_and_timestamps = self._c.execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( HydrusSerialisable.SERIALISABLE_TYPE_GUI_SESSION_CONTAINER, ) ).fetchall() + names_and_timestamps = self._Execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( HydrusSerialisable.SERIALISABLE_TYPE_GUI_SESSION_CONTAINER, ) ).fetchall() for ( name, timestamp ) in names_and_timestamps: @@ -252,7 +252,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): for hash in hashes: - result = self._c.execute( 'SELECT version, dump_type, dump FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone() + result = self._Execute( 'SELECT version, dump_type, dump FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone() if result is None: @@ -289,7 +289,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): except: - self._c.execute( 'DELETE FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ) + self._Execute( 'DELETE FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ) self._cursor_transaction_wrapper.CommitAndBegin() @@ -341,7 +341,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): def GetJSONDump( self, dump_type ): - result = self._c.execute( 'SELECT version, dump FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) ).fetchone() + result = self._Execute( 'SELECT version, dump FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) ).fetchone() if result is None: @@ -362,7 +362,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): except: - self._c.execute( 'DELETE FROM json_dumps WHERE 
dump_type = ?;', ( dump_type, ) ) + self._Execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) ) self._cursor_transaction_wrapper.CommitAndBegin() @@ -392,7 +392,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): if dump_name is None: - results = self._c.execute( 'SELECT dump_name, version, dump, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ).fetchall() + results = self._Execute( 'SELECT dump_name, version, dump, timestamp FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ).fetchall() objs = [] @@ -411,7 +411,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): except: - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, object_timestamp ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, object_timestamp ) ) self._cursor_transaction_wrapper.CommitAndBegin() @@ -425,11 +425,11 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): if timestamp is None: - result = self._c.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? ORDER BY timestamp DESC;', ( dump_type, dump_name ) ).fetchone() + result = self._Execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? ORDER BY timestamp DESC;', ( dump_type, dump_name ) ).fetchone() else: - result = self._c.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) ).fetchone() + result = self._Execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? 
AND timestamp = ?;', ( dump_type, dump_name, timestamp ) ).fetchone() if result is None: @@ -450,7 +450,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): except: - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, object_timestamp ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, object_timestamp ) ) self._cursor_transaction_wrapper.CommitAndBegin() @@ -463,14 +463,14 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): def GetJSONDumpNames( self, dump_type ): - names = [ name for ( name, ) in self._c.execute( 'SELECT DISTINCT dump_name FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ) ] + names = [ name for ( name, ) in self._Execute( 'SELECT DISTINCT dump_name FROM json_dumps_named WHERE dump_type = ?;', ( dump_type, ) ) ] return names def GetJSONDumpNamesToBackupTimestamps( self, dump_type ): - names_to_backup_timestamps = HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ? ORDER BY timestamp ASC;', ( dump_type, ) ) ) + names_to_backup_timestamps = HydrusData.BuildKeyToListDict( self._Execute( 'SELECT dump_name, timestamp FROM json_dumps_named WHERE dump_type = ? 
ORDER BY timestamp ASC;', ( dump_type, ) ) ) for ( name, timestamp_list ) in list( names_to_backup_timestamps.items() ): @@ -487,7 +487,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): def GetJSONSimple( self, name ): - result = self._c.execute( 'SELECT dump FROM json_dict WHERE name = ?;', ( name, ) ).fetchone() + result = self._Execute( 'SELECT dump FROM json_dict WHERE name = ?;', ( name, ) ).fetchone() if result is None: @@ -515,7 +515,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): if dump_name is None: - result = { dump_name : data for ( dump_name, data ) in self._c.execute( 'SELECT dump_name, dump FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) } + result = { dump_name : data for ( dump_name, data ) in self._Execute( 'SELECT dump_name, dump FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) } if dump_type == YAML_DUMP_ID_LOCAL_BOORU: @@ -526,7 +526,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): if dump_type == YAML_DUMP_ID_LOCAL_BOORU: dump_name = dump_name.hex() - result = self._c.execute( 'SELECT dump FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ).fetchone() + result = self._Execute( 'SELECT dump FROM yaml_dumps WHERE dump_type = ? 
AND dump_name = ?;', ( dump_type, dump_name ) ).fetchone() if result is None: @@ -546,7 +546,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): def GetYAMLDumpNames( self, dump_type ): - names = [ name for ( name, ) in self._c.execute( 'SELECT dump_name FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) ] + names = [ name for ( name, ) in self._Execute( 'SELECT dump_name FROM yaml_dumps WHERE dump_type = ?;', ( dump_type, ) ) ] if dump_type == YAML_DUMP_ID_LOCAL_BOORU: @@ -558,7 +558,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): def HaveHashedJSONDump( self, hash ): - result = self._c.execute( 'SELECT 1 FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM json_dumps_hashed WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone() return result is not None @@ -577,13 +577,13 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): all_expected_hashes = self.GetAllExpectedHashedJSONHashes() - all_stored_hashes = self._STS( self._c.execute( 'SELECT hash FROM json_dumps_hashed;' ) ) + all_stored_hashes = self._STS( self._Execute( 'SELECT hash FROM json_dumps_hashed;' ) ) all_deletee_hashes = all_stored_hashes.difference( all_expected_hashes ) if len( all_deletee_hashes ) > 0: - self._c.executemany( 'DELETE FROM json_dumps_hashed WHERE hash = ?;', ( ( sqlite3.Binary( hash ), ) for hash in all_deletee_hashes ) ) + self._ExecuteMany( 'DELETE FROM json_dumps_hashed WHERE hash = ?;', ( ( sqlite3.Binary( hash ), ) for hash in all_deletee_hashes ) ) maintenance_tracker.NotifyHashedSerialisableMaintenanceDone() @@ -636,7 +636,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): try: - self._c.execute( 'INSERT INTO json_dumps_hashed ( hash, dump_type, version, dump ) VALUES ( ?, ?, ?, ? 
);', ( sqlite3.Binary( hash ), dump_type, version, dump_buffer ) ) + self._Execute( 'INSERT INTO json_dumps_hashed ( hash, dump_type, version, dump ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( hash ), dump_type, version, dump_buffer ) ) except: @@ -703,7 +703,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): if store_backups: - existing_timestamps = sorted( self._STI( self._c.execute( 'SELECT timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) ) ) + existing_timestamps = sorted( self._STI( self._Execute( 'SELECT timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) ) ) if len( existing_timestamps ) > 0: @@ -721,11 +721,11 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): deletee_timestamps.append( object_timestamp ) # if save gets spammed twice in one second, we'll overwrite - self._c.executemany( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', [ ( dump_type, dump_name, timestamp ) for timestamp in deletee_timestamps ] ) + self._ExecuteMany( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', [ ( dump_type, dump_name, timestamp ) for timestamp in deletee_timestamps ] ) else: - self._c.execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) + self._Execute( 'DELETE FROM json_dumps_named WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) else: @@ -737,7 +737,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): try: - self._c.execute( 'INSERT INTO json_dumps_named ( dump_type, dump_name, version, timestamp, dump ) VALUES ( ?, ?, ?, ?, ? );', ( dump_type, dump_name, version, object_timestamp, dump_buffer ) ) + self._Execute( 'INSERT INTO json_dumps_named ( dump_type, dump_name, version, timestamp, dump ) VALUES ( ?, ?, ?, ?, ? 
);', ( dump_type, dump_name, version, object_timestamp, dump_buffer ) ) except: @@ -826,13 +826,13 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): raise Exception( 'Trying to json dump the object ' + str( obj ) + ' caused an error. Its serialisable info has been dumped to the log.' ) - self._c.execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) ) + self._Execute( 'DELETE FROM json_dumps WHERE dump_type = ?;', ( dump_type, ) ) dump_buffer = GenerateBigSQLiteDumpBuffer( dump ) try: - self._c.execute( 'INSERT INTO json_dumps ( dump_type, version, dump ) VALUES ( ?, ?, ? );', ( dump_type, version, dump_buffer ) ) + self._Execute( 'INSERT INTO json_dumps ( dump_type, version, dump ) VALUES ( ?, ?, ? );', ( dump_type, version, dump_buffer ) ) except: @@ -881,7 +881,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): if value is None: - self._c.execute( 'DELETE FROM json_dict WHERE name = ?;', ( name, ) ) + self._Execute( 'DELETE FROM json_dict WHERE name = ?;', ( name, ) ) else: @@ -891,7 +891,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): try: - self._c.execute( 'REPLACE INTO json_dict ( name, dump ) VALUES ( ?, ? );', ( name, dump_buffer ) ) + self._Execute( 'REPLACE INTO json_dict ( name, dump ) VALUES ( ?, ? );', ( name, dump_buffer ) ) except: @@ -910,11 +910,11 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): dump_name = dump_name.hex() - self._c.execute( 'DELETE FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) + self._Execute( 'DELETE FROM yaml_dumps WHERE dump_type = ? AND dump_name = ?;', ( dump_type, dump_name ) ) try: - self._c.execute( 'INSERT INTO yaml_dumps ( dump_type, dump_name, dump ) VALUES ( ?, ?, ? );', ( dump_type, dump_name, data ) ) + self._Execute( 'INSERT INTO yaml_dumps ( dump_type, dump_name, dump ) VALUES ( ?, ?, ? 
);', ( dump_type, dump_name, data ) ) except: @@ -927,7 +927,7 @@ class ClientDBSerialisable( HydrusDBModule.HydrusDBModule ): service_id = self.modules_services.GetServiceId( CC.LOCAL_BOORU_SERVICE_KEY ) - self._c.execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_SHARES ) ) + self._Execute( 'DELETE FROM service_info WHERE service_id = ? AND info_type = ?;', ( service_id, HC.SERVICE_INFO_NUM_SHARES ) ) HG.client_controller.pub( 'refresh_local_booru_shares' ) diff --git a/hydrus/client/db/ClientDBServices.py b/hydrus/client/db/ClientDBServices.py index 31f2454b..077d329f 100644 --- a/hydrus/client/db/ClientDBServices.py +++ b/hydrus/client/db/ClientDBServices.py @@ -37,9 +37,9 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ): def _InitCaches( self ): - if self._c.execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'services', ) ).fetchone() is not None: + if self._Execute( 'SELECT 1 FROM sqlite_master WHERE name = ?;', ( 'services', ) ).fetchone() is not None: - all_data = self._c.execute( 'SELECT service_id, service_key, service_type, name, dictionary_string FROM services;' ).fetchall() + all_data = self._Execute( 'SELECT service_id, service_key, service_type, name, dictionary_string FROM services;' ).fetchall() for ( service_id, service_key, service_type, name, dictionary_string ) in all_data: @@ -62,7 +62,7 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ): def CreateInitialTables( self ): - self._c.execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY AUTOINCREMENT, service_key BLOB_BYTES UNIQUE, service_type INTEGER, name TEXT, dictionary_string TEXT );' ) + self._Execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY AUTOINCREMENT, service_key BLOB_BYTES UNIQUE, service_type INTEGER, name TEXT, dictionary_string TEXT );' ) def GetExpectedTableNames( self ) -> typing.Collection[ str ]: @@ -78,9 +78,9 @@ class ClientDBMasterServices( 
HydrusDBModule.HydrusDBModule ): dictionary_string = dictionary.DumpToString() - self._c.execute( 'INSERT INTO services ( service_key, service_type, name, dictionary_string ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( service_key ), service_type, name, dictionary_string ) ) + self._Execute( 'INSERT INTO services ( service_key, service_type, name, dictionary_string ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( service_key ), service_type, name, dictionary_string ) ) - service_id = self._c.lastrowid + service_id = self._GetLastRowId() service = ClientServices.GenerateService( service_key, service_type, name, dictionary ) @@ -125,7 +125,7 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ): - self._c.execute( 'DELETE FROM services WHERE service_id = ?;', ( service_id, ) ) + self._Execute( 'DELETE FROM services WHERE service_id = ?;', ( service_id, ) ) def GetNonDupeName( self, name ) -> str: @@ -188,7 +188,7 @@ class ClientDBMasterServices( HydrusDBModule.HydrusDBModule ): dictionary_string = dictionary.DumpToString() - self._c.execute( 'UPDATE services SET name = ?, dictionary_string = ? WHERE service_id = ?;', ( name, dictionary_string, service_id ) ) + self._Execute( 'UPDATE services SET name = ?, dictionary_string = ? 
WHERE service_id = ?;', ( name, dictionary_string, service_id ) ) self._service_ids_to_services[ service_id ] = service diff --git a/hydrus/client/db/ClientDBSimilarFiles.py b/hydrus/client/db/ClientDBSimilarFiles.py index 59bad713..dda971ce 100644 --- a/hydrus/client/db/ClientDBSimilarFiles.py +++ b/hydrus/client/db/ClientDBSimilarFiles.py @@ -25,7 +25,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): def _AddLeaf( self, phash_id, phash ): - result = self._c.execute( 'SELECT phash_id FROM shape_vptree WHERE parent_id IS NULL;' ).fetchone() + result = self._Execute( 'SELECT phash_id FROM shape_vptree WHERE parent_id IS NULL;' ).fetchone() if result is None: @@ -46,7 +46,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): ancestor_id = next_ancestor_id - ( ancestor_phash, ancestor_radius, ancestor_inner_id, ancestor_inner_population, ancestor_outer_id, ancestor_outer_population ) = self._c.execute( 'SELECT phash, radius, inner_id, inner_population, outer_id, outer_population FROM shape_perceptual_hashes NATURAL JOIN shape_vptree WHERE phash_id = ?;', ( ancestor_id, ) ).fetchone() + ( ancestor_phash, ancestor_radius, ancestor_inner_id, ancestor_inner_population, ancestor_outer_id, ancestor_outer_population ) = self._Execute( 'SELECT phash, radius, inner_id, inner_population, outer_id, outer_population FROM shape_perceptual_hashes NATURAL JOIN shape_vptree WHERE phash_id = ?;', ( ancestor_id, ) ).fetchone() distance_to_ancestor = HydrusData.Get64BitHammingDistance( phash, ancestor_phash ) @@ -58,7 +58,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): if ancestor_inner_id is None: - self._c.execute( 'UPDATE shape_vptree SET inner_id = ?, radius = ? WHERE phash_id = ?;', ( phash_id, distance_to_ancestor, ancestor_id ) ) + self._Execute( 'UPDATE shape_vptree SET inner_id = ?, radius = ? 
WHERE phash_id = ?;', ( phash_id, distance_to_ancestor, ancestor_id ) ) parent_id = ancestor_id @@ -71,7 +71,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): if ancestor_outer_id is None: - self._c.execute( 'UPDATE shape_vptree SET outer_id = ? WHERE phash_id = ?;', ( phash_id, ancestor_id ) ) + self._Execute( 'UPDATE shape_vptree SET outer_id = ? WHERE phash_id = ?;', ( phash_id, ancestor_id ) ) parent_id = ancestor_id @@ -84,7 +84,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): if smaller / larger < 0.5: - self._c.execute( 'INSERT OR IGNORE INTO shape_maintenance_branch_regen ( phash_id ) VALUES ( ? );', ( ancestor_id, ) ) + self._Execute( 'INSERT OR IGNORE INTO shape_maintenance_branch_regen ( phash_id ) VALUES ( ? );', ( ancestor_id, ) ) # we only do this for the eldest ancestor, as the eventual rebalancing will affect all children @@ -93,8 +93,8 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): - self._c.executemany( 'UPDATE shape_vptree SET inner_population = inner_population + 1 WHERE phash_id = ?;', ( ( ancestor_id, ) for ancestor_id in ancestors_we_are_inside ) ) - self._c.executemany( 'UPDATE shape_vptree SET outer_population = outer_population + 1 WHERE phash_id = ?;', ( ( ancestor_id, ) for ancestor_id in ancestors_we_are_outside ) ) + self._ExecuteMany( 'UPDATE shape_vptree SET inner_population = inner_population + 1 WHERE phash_id = ?;', ( ( ancestor_id, ) for ancestor_id in ancestors_we_are_inside ) ) + self._ExecuteMany( 'UPDATE shape_vptree SET outer_population = outer_population + 1 WHERE phash_id = ?;', ( ( ancestor_id, ) for ancestor_id in ancestors_we_are_outside ) ) radius = None @@ -103,7 +103,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): outer_id = None outer_population = 0 - self._c.execute( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? 
);', ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) ) + self._Execute( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) ) def _GenerateBranch( self, job_key, parent_id, phash_id, phash, children ): @@ -190,7 +190,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): job_key.SetVariable( 'popup_text_2', 'branch constructed, now committing' ) - self._c.executemany( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', insert_rows ) + self._ExecuteMany( 'INSERT OR REPLACE INTO shape_vptree ( phash_id, parent_id, radius, inner_id, inner_population, outer_id, outer_population ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', insert_rows ) def _GetInitialIndexGenerationTuples( self ): @@ -205,13 +205,13 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): def _GetPHashId( self, phash ): - result = self._c.execute( 'SELECT phash_id FROM shape_perceptual_hashes WHERE phash = ?;', ( sqlite3.Binary( phash ), ) ).fetchone() + result = self._Execute( 'SELECT phash_id FROM shape_perceptual_hashes WHERE phash = ?;', ( sqlite3.Binary( phash ), ) ).fetchone() if result is None: - self._c.execute( 'INSERT INTO shape_perceptual_hashes ( phash ) VALUES ( ? );', ( sqlite3.Binary( phash ), ) ) + self._Execute( 'INSERT INTO shape_perceptual_hashes ( phash ) VALUES ( ? 
);', ( sqlite3.Binary( phash ), ) ) - phash_id = self._c.lastrowid + phash_id = self._GetLastRowId() self._AddLeaf( phash_id, phash ) @@ -317,7 +317,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): # grab everything in the branch - ( parent_id, ) = self._c.execute( 'SELECT parent_id FROM shape_vptree WHERE phash_id = ?;', ( phash_id, ) ).fetchone() + ( parent_id, ) = self._Execute( 'SELECT parent_id FROM shape_vptree WHERE phash_id = ?;', ( phash_id, ) ).fetchone() cte_table_name = 'branch ( branch_phash_id )' initial_select = 'SELECT ?' @@ -325,7 +325,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): with_clause = 'WITH RECURSIVE ' + cte_table_name + ' AS ( ' + initial_select + ' UNION ALL ' + recursive_select + ')' - unbalanced_nodes = self._c.execute( with_clause + ' SELECT branch_phash_id, phash FROM branch, shape_perceptual_hashes ON phash_id = branch_phash_id;', ( phash_id, ) ).fetchall() + unbalanced_nodes = self._Execute( with_clause + ' SELECT branch_phash_id, phash FROM branch, shape_perceptual_hashes ON phash_id = branch_phash_id;', ( phash_id, ) ).fetchall() # removal of old branch, maintenance schedule, and orphan phashes @@ -333,18 +333,18 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): unbalanced_phash_ids = { p_id for ( p_id, p_h ) in unbalanced_nodes } - self._c.executemany( 'DELETE FROM shape_vptree WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) ) + self._ExecuteMany( 'DELETE FROM shape_vptree WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) ) - self._c.executemany( 'DELETE FROM shape_maintenance_branch_regen WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) ) + self._ExecuteMany( 'DELETE FROM shape_maintenance_branch_regen WHERE phash_id = ?;', ( ( p_id, ) for p_id in unbalanced_phash_ids ) ) - with HydrusDB.TemporaryIntegerTable( self._c, unbalanced_phash_ids, 'phash_id' ) as temp_phash_ids_table_name: + with 
self._MakeTemporaryIntegerTable( unbalanced_phash_ids, 'phash_id' ) as temp_phash_ids_table_name: - useful_phash_ids = self._STS( self._c.execute( 'SELECT phash_id FROM {} CROSS JOIN shape_perceptual_hash_map USING ( phash_id );'.format( temp_phash_ids_table_name ) ) ) + useful_phash_ids = self._STS( self._Execute( 'SELECT phash_id FROM {} CROSS JOIN shape_perceptual_hash_map USING ( phash_id );'.format( temp_phash_ids_table_name ) ) ) orphan_phash_ids = unbalanced_phash_ids.difference( useful_phash_ids ) - self._c.executemany( 'DELETE FROM shape_perceptual_hashes WHERE phash_id = ?;', ( ( p_id, ) for p_id in orphan_phash_ids ) ) + self._ExecuteMany( 'DELETE FROM shape_perceptual_hashes WHERE phash_id = ?;', ( ( p_id, ) for p_id in orphan_phash_ids ) ) useful_nodes = [ row for row in unbalanced_nodes if row[0] in useful_phash_ids ] @@ -363,7 +363,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): if parent_id is not None: - ( parent_inner_id, ) = self._c.execute( 'SELECT inner_id FROM shape_vptree WHERE phash_id = ?;', ( parent_id, ) ).fetchone() + ( parent_inner_id, ) = self._Execute( 'SELECT inner_id FROM shape_vptree WHERE phash_id = ?;', ( parent_id, ) ).fetchone() if parent_inner_id == phash_id: @@ -374,7 +374,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): query = 'UPDATE shape_vptree SET outer_id = ?, outer_population = ? WHERE phash_id = ?;' - self._c.execute( query, ( new_phash_id, useful_population, parent_id ) ) + self._Execute( query, ( new_phash_id, useful_population, parent_id ) ) if useful_population > 0: @@ -394,11 +394,11 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): phash_ids.add( phash_id ) - self._c.executemany( 'INSERT OR IGNORE INTO shape_perceptual_hash_map ( phash_id, hash_id ) VALUES ( ?, ? );', ( ( phash_id, hash_id ) for phash_id in phash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO shape_perceptual_hash_map ( phash_id, hash_id ) VALUES ( ?, ? 
);', ( ( phash_id, hash_id ) for phash_id in phash_ids ) ) - if HydrusDB.GetRowCount( self._c ) > 0: + if self._GetRowCount() > 0: - self._c.execute( 'REPLACE INTO shape_search_cache ( hash_id, searched_distance ) VALUES ( ?, ? );', ( hash_id, None ) ) + self._Execute( 'REPLACE INTO shape_search_cache ( hash_id, searched_distance ) VALUES ( ?, ? );', ( hash_id, None ) ) return phash_ids @@ -406,31 +406,31 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): def CreateInitialTables( self ): - self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hashes ( phash_id INTEGER PRIMARY KEY, phash BLOB_BYTES UNIQUE );' ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_perceptual_hash_map ( phash_id INTEGER, hash_id INTEGER, PRIMARY KEY ( phash_id, hash_id ) );' ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_vptree ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER, radius INTEGER, inner_id INTEGER, inner_population INTEGER, outer_id INTEGER, outer_population INTEGER );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_vptree ( phash_id INTEGER PRIMARY KEY, parent_id INTEGER, radius INTEGER, inner_id INTEGER, inner_population INTEGER, outer_id INTEGER, outer_population INTEGER );' ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_maintenance_branch_regen ( phash_id INTEGER PRIMARY KEY );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_maintenance_branch_regen ( phash_id INTEGER PRIMARY KEY );' ) - self._c.execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance 
INTEGER );' ) + self._Execute( 'CREATE TABLE IF NOT EXISTS external_caches.shape_search_cache ( hash_id INTEGER PRIMARY KEY, searched_distance INTEGER );' ) def DisassociatePHashes( self, hash_id, phash_ids ): - self._c.executemany( 'DELETE FROM shape_perceptual_hash_map WHERE phash_id = ? AND hash_id = ?;', ( ( phash_id, hash_id ) for phash_id in phash_ids ) ) + self._ExecuteMany( 'DELETE FROM shape_perceptual_hash_map WHERE phash_id = ? AND hash_id = ?;', ( ( phash_id, hash_id ) for phash_id in phash_ids ) ) - useful_phash_ids = { phash for ( phash, ) in self._c.execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE phash_id IN ' + HydrusData.SplayListForDB( phash_ids ) + ';' ) } + useful_phash_ids = { phash for ( phash, ) in self._Execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE phash_id IN ' + HydrusData.SplayListForDB( phash_ids ) + ';' ) } useless_phash_ids = phash_ids.difference( useful_phash_ids ) - self._c.executemany( 'INSERT OR IGNORE INTO shape_maintenance_branch_regen ( phash_id ) VALUES ( ? );', ( ( phash_id, ) for phash_id in useless_phash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO shape_maintenance_branch_regen ( phash_id ) VALUES ( ? 
);', ( ( phash_id, ) for phash_id in useless_phash_ids ) ) def FileIsInSystem( self, hash_id ): - result = self._c.execute( 'SELECT 1 FROM shape_search_cache WHERE hash_id = ?;', ( hash_id, ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM shape_search_cache WHERE hash_id = ?;', ( hash_id, ) ).fetchone() return result is not None @@ -445,7 +445,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): def GetMaintenanceStatus( self ): - searched_distances_to_count = collections.Counter( dict( self._c.execute( 'SELECT searched_distance, COUNT( * ) FROM shape_search_cache GROUP BY searched_distance;' ) ) ) + searched_distances_to_count = collections.Counter( dict( self._Execute( 'SELECT searched_distance, COUNT( * ) FROM shape_search_cache GROUP BY searched_distance;' ) ) ) return searched_distances_to_count @@ -480,7 +480,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): job_key.SetStatusTitle( 'similar files metadata maintenance' ) - rebalance_phash_ids = self._STL( self._c.execute( 'SELECT phash_id FROM shape_maintenance_branch_regen;' ) ) + rebalance_phash_ids = self._STL( self._Execute( 'SELECT phash_id FROM shape_maintenance_branch_regen;' ) ) num_to_do = len( rebalance_phash_ids ) @@ -510,15 +510,15 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): job_key.SetVariable( 'popup_text_1', text ) job_key.SetVariable( 'popup_gauge_1', ( num_done, num_to_do ) ) - with HydrusDB.TemporaryIntegerTable( self._c, rebalance_phash_ids, 'phash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( rebalance_phash_ids, 'phash_id' ) as temp_table_name: # temp phashes to tree - ( biggest_phash_id, ) = self._c.execute( 'SELECT phash_id FROM {} CROSS JOIN shape_vptree USING ( phash_id ) ORDER BY inner_population + outer_population DESC;'.format( temp_table_name ) ).fetchone() + ( biggest_phash_id, ) = self._Execute( 'SELECT phash_id FROM {} CROSS JOIN shape_vptree USING ( phash_id ) ORDER BY inner_population + 
outer_population DESC;'.format( temp_table_name ) ).fetchone() self._RegenerateBranch( job_key, biggest_phash_id ) - rebalance_phash_ids = self._STL( self._c.execute( 'SELECT phash_id FROM shape_maintenance_branch_regen;' ) ) + rebalance_phash_ids = self._STL( self._Execute( 'SELECT phash_id FROM shape_maintenance_branch_regen;' ) ) finally: @@ -541,7 +541,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): search_distance = new_options.GetInteger( 'similar_files_duplicate_pairs_search_distance' ) - ( count, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT 1 FROM shape_search_cache WHERE searched_distance IS NULL or searched_distance < ? LIMIT 100 );', ( search_distance, ) ).fetchone() + ( count, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT 1 FROM shape_search_cache WHERE searched_distance IS NULL or searched_distance < ? LIMIT 100 );', ( search_distance, ) ).fetchone() if count >= 100: @@ -566,13 +566,13 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = ClientDBFilesStorage.GenerateFilesTableNames( self.modules_services.combined_local_file_service_id ) - self._c.execute( 'DELETE FROM shape_perceptual_hash_map WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( current_files_table_name ) ) + self._Execute( 'DELETE FROM shape_perceptual_hash_map WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( current_files_table_name ) ) job_key.SetVariable( 'popup_text_1', 'gathering all leaves' ) - self._c.execute( 'DELETE FROM shape_vptree;' ) + self._Execute( 'DELETE FROM shape_vptree;' ) - all_nodes = self._c.execute( 'SELECT phash_id, phash FROM shape_perceptual_hashes;' ).fetchall() + all_nodes = self._Execute( 'SELECT phash_id, phash FROM shape_perceptual_hashes;' ).fetchall() job_key.SetVariable( 'popup_text_1', HydrusData.ToHumanInt( len( all_nodes ) ) + ' leaves found, now regenerating' ) @@ -593,14 +593,14 
@@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): def ResetSearch( self, hash_ids ): - self._c.executemany( 'UPDATE shape_search_cache SET searched_distance = NULL WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) + self._ExecuteMany( 'UPDATE shape_search_cache SET searched_distance = NULL WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in hash_ids ) ) def Search( self, hash_id, max_hamming_distance ): if max_hamming_distance == 0: - similar_hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM shape_perceptual_hash_map WHERE phash_id IN ( SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ? );', ( hash_id, ) ) ) + similar_hash_ids = self._STL( self._Execute( 'SELECT hash_id FROM shape_perceptual_hash_map WHERE phash_id IN ( SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ? );', ( hash_id, ) ) ) similar_hash_ids_and_distances = [ ( similar_hash_id, 0 ) for similar_hash_id in similar_hash_ids ] @@ -608,7 +608,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): search_radius = max_hamming_distance - top_node_result = self._c.execute( 'SELECT phash_id FROM shape_vptree WHERE parent_id IS NULL;' ).fetchone() + top_node_result = self._Execute( 'SELECT phash_id FROM shape_vptree WHERE parent_id IS NULL;' ).fetchone() if top_node_result is None: @@ -617,7 +617,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): ( root_node_phash_id, ) = top_node_result - search = self._STL( self._c.execute( 'SELECT phash FROM shape_perceptual_hashes NATURAL JOIN shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) ) + search = self._STL( self._Execute( 'SELECT phash FROM shape_perceptual_hashes NATURAL JOIN shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) ) if len( search ) == 0: @@ -655,10 +655,10 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): results = list( self._ExecuteManySelectSingleParam( select_statement, group_of_current_potentials ) ) ''' - with 
HydrusDB.TemporaryIntegerTable( self._c, group_of_current_potentials, 'phash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( group_of_current_potentials, 'phash_id' ) as temp_table_name: # temp phash_ids to actual phashes and tree info - results = self._c.execute( 'SELECT phash_id, phash, radius, inner_id, outer_id FROM {} CROSS JOIN shape_perceptual_hashes USING ( phash_id ) CROSS JOIN shape_vptree USING ( phash_id );'.format( temp_table_name ) ).fetchall() + results = self._Execute( 'SELECT phash_id, phash, radius, inner_id, outer_id FROM {} CROSS JOIN shape_perceptual_hashes USING ( phash_id ) CROSS JOIN shape_vptree USING ( phash_id );'.format( temp_table_name ) ).fetchall() for ( node_phash_id, node_phash, node_radius, inner_phash_id, outer_phash_id ) in results: @@ -728,10 +728,10 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): similar_phash_ids = list( similar_phash_ids_to_distances.keys() ) - with HydrusDB.TemporaryIntegerTable( self._c, similar_phash_ids, 'phash_id' ) as temp_table_name: + with self._MakeTemporaryIntegerTable( similar_phash_ids, 'phash_id' ) as temp_table_name: # temp phashes to hash map - similar_phash_ids_to_hash_ids = HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT phash_id, hash_id FROM {} CROSS JOIN shape_perceptual_hash_map USING ( phash_id );'.format( temp_table_name ) ) ) + similar_phash_ids_to_hash_ids = HydrusData.BuildKeyToListDict( self._Execute( 'SELECT phash_id, hash_id FROM {} CROSS JOIN shape_perceptual_hash_map USING ( phash_id );'.format( temp_table_name ) ) ) similar_hash_ids_to_distances = {} @@ -766,7 +766,7 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): def SetPHashes( self, hash_id, phashes ): - current_phash_ids = self._STS( self._c.execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) ) + current_phash_ids = self._STS( self._Execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) ) 
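The hunks above mechanically reroute raw `self._c.execute(...)` / `self._c.executemany(...)` calls through `self._Execute` / `self._ExecuteMany` wrappers, giving the database module a single choke point through which every query flows. A minimal sketch of that wrapper pattern, with assumed internals (hydrus's real `HydrusDBModule` differs), including the `_STL`/`_STS` one-column helpers seen throughout the diff:

```python
import sqlite3

class DBCursorMixin:
    """Sketch of the wrapper pattern: route every query through one place
    so profiling, logging, or query-plan hooks can be added centrally.
    Internals are assumed; hydrus's actual base class differs."""

    def __init__(self, cursor: sqlite3.Cursor):
        self._c = cursor

    def _Execute(self, query, *args) -> sqlite3.Cursor:
        # one place to hang timing/instrumentation for all single queries
        return self._c.execute(query, *args)

    def _ExecuteMany(self, query, args_iterator):
        # batch counterpart of _Execute
        self._c.executemany(query, args_iterator)

    def _STL(self, iterable_cursor):
        # 'Single To List': flatten one-column rows into a list
        return [item for (item,) in iterable_cursor]

    def _STS(self, iterable_cursor):
        # 'Single To Set': flatten one-column rows into a set
        return {item for (item,) in iterable_cursor}
```

With every call site using the wrappers, a later feature (such as the query planner mode added elsewhere in this commit) only needs to touch `_Execute`/`_ExecuteMany`, not hundreds of call sites.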
if len( current_phash_ids ) > 0: @@ -781,10 +781,10 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ): def StopSearchingFile( self, hash_id ): - phash_ids = self._STS( self._c.execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) ) + phash_ids = self._STS( self._Execute( 'SELECT phash_id FROM shape_perceptual_hash_map WHERE hash_id = ?;', ( hash_id, ) ) ) self.DisassociatePHashes( hash_id, phash_ids ) - self._c.execute( 'DELETE FROM shape_search_cache WHERE hash_id = ?;', ( hash_id, ) ) + self._Execute( 'DELETE FROM shape_search_cache WHERE hash_id = ?;', ( hash_id, ) ) diff --git a/hydrus/client/gui/ClientGUI.py b/hydrus/client/gui/ClientGUI.py index 498a6130..af968139 100644 --- a/hydrus/client/gui/ClientGUI.py +++ b/hydrus/client/gui/ClientGUI.py @@ -4896,6 +4896,26 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p HydrusData.ShowText( 'Profiling done: {} slow jobs, {} fast jobs'.format( HydrusData.ToHumanInt( slow ), HydrusData.ToHumanInt( fast ) ) ) + elif name == 'query_planner_mode': + + if not HG.query_planner_mode: + + now = HydrusData.GetNow() + + HG.query_planner_start_time = now + HG.query_planner_query_count = 0 + + HG.query_planner_mode = True + + HydrusData.ShowText( 'Query Planner mode on!' ) + + else: + + HG.query_planner_mode = False + + HydrusData.ShowText( 'Query Planning done: {} queries analyzed'.format( HydrusData.ToHumanInt( HG.query_planner_query_count ) ) ) + + elif name == 'pubsub_report_mode': HG.pubsub_report_mode = not HG.pubsub_report_mode @@ -6017,10 +6037,13 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p profile_mode_message += os.linesep * 2 profile_mode_message += 'Turn the mode on, do the slow thing for a bit, and then turn it off. In your database directory will be a new profile log, which is really helpful for hydrus dev to figure out what is running slow for you and how to fix it.' 
profile_mode_message += os.linesep * 2 + profile_mode_message += 'A new Query Planner mode also makes very detailed database analysis. This is an alternate profiling mode hydev is testing.' + profile_mode_message += os.linesep * 2 profile_mode_message += 'More information is available in the help, under \'reducing program lag\'.' ClientGUIMenus.AppendMenuItem( profiling, 'what is this?', 'Show profile info.', QW.QMessageBox.information, self, 'Profile modes', profile_mode_message ) ClientGUIMenus.AppendMenuCheckItem( profiling, 'profile mode', 'Run detailed \'profiles\'.', HG.profile_mode, self._SwitchBoolean, 'profile_mode' ) + ClientGUIMenus.AppendMenuCheckItem( profiling, 'query planner mode', 'Run detailed \'query plans\'.', HG.query_planner_mode, self._SwitchBoolean, 'query_planner_mode' ) ClientGUIMenus.AppendMenu( debug, profiling, 'profiling' ) diff --git a/hydrus/client/gui/ClientGUIExport.py b/hydrus/client/gui/ClientGUIExport.py index 60c0c217..4d387e78 100644 --- a/hydrus/client/gui/ClientGUIExport.py +++ b/hydrus/client/gui/ClientGUIExport.py @@ -15,6 +15,7 @@ from hydrus.core import HydrusTags from hydrus.client import ClientConstants as CC from hydrus.client import ClientExporting from hydrus.client import ClientSearch +from hydrus.client import ClientThreading from hydrus.client.gui import ClientGUIDialogsQuick from hydrus.client.gui import ClientGUIFunctions from hydrus.client.gui import ClientGUIScrolledPanels @@ -770,6 +771,8 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ): to_do = self._paths.GetData() + to_do = [ ( ordering_index, media, self._GetPath( media ) ) for ( ordering_index, media ) in to_do ] + num_to_do = len( to_do ) def qt_update_label( text ): @@ -799,19 +802,33 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ): def do_it( directory, neighbouring_txt_tag_service_keys, delete_afterwards, export_symlinks, quit_afterwards ): + job_key = ClientThreading.JobKey( cancellable = True ) + + 
job_key.SetStatusTitle( 'file export' ) + + HG.client_controller.pub( 'message', job_key ) + pauser = HydrusData.BigJobPauser() - for ( index, ( ordering_index, media ) ) in enumerate( to_do ): + for ( index, ( ordering_index, media, path ) ) in enumerate( to_do ): + + if job_key.IsCancelled(): + + break + try: - QP.CallAfter( qt_update_label, HydrusData.ConvertValueRangeToPrettyString(index+1,num_to_do) ) + x_of_y = HydrusData.ConvertValueRangeToPrettyString( index + 1, num_to_do ) + + job_key.SetVariable( 'popup_text_1', 'Done {}'.format( x_of_y ) ) + job_key.SetVariable( 'popup_gauge_1', ( index + 1, num_to_do ) ) + + QP.CallAfter( qt_update_label, x_of_y ) hash = media.GetHash() mime = media.GetMime() - path = self._GetPath( media ) - path = os.path.normpath( path ) if not path.startswith( directory ): @@ -869,7 +886,7 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ): pauser.Pause() - if delete_afterwards: + if not job_key.IsCancelled() and delete_afterwards: QP.CallAfter( qt_update_label, 'deleting' ) @@ -877,11 +894,11 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ): if delete_lock_for_archived_files: - deletee_hashes = { media.GetHash() for ( ordering_index, media ) in to_do if not media.HasArchive() } + deletee_hashes = { media.GetHash() for ( ordering_index, media, path ) in to_do if not media.HasArchive() } else: - deletee_hashes = { media.GetHash() for ( ordering_index, media ) in to_do } + deletee_hashes = { media.GetHash() for ( ordering_index, media, path ) in to_do } chunks_of_hashes = HydrusData.SplitListIntoChunks( deletee_hashes, 64 ) @@ -896,6 +913,13 @@ class ReviewExportFilesPanel( ClientGUIScrolledPanels.ReviewPanel ): + job_key.DeleteVariable( 'popup_gauge_1' ) + job_key.SetVariable( 'popup_text_1', 'Done!' ) + + job_key.Finish() + + job_key.Delete( 5 ) + QP.CallAfter( qt_update_label, 'done!' 
) time.sleep( 1 ) diff --git a/hydrus/client/gui/ClientGUIScrolledPanelsReview.py b/hydrus/client/gui/ClientGUIScrolledPanelsReview.py index 82664c4b..6aff80bc 100644 --- a/hydrus/client/gui/ClientGUIScrolledPanelsReview.py +++ b/hydrus/client/gui/ClientGUIScrolledPanelsReview.py @@ -2585,8 +2585,10 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ): num_inbox = boned_stats[ 'num_inbox' ] num_archive = boned_stats[ 'num_archive' ] + num_deleted = boned_stats[ 'num_deleted' ] size_inbox = boned_stats[ 'size_inbox' ] size_archive = boned_stats[ 'size_archive' ] + size_deleted = boned_stats[ 'size_deleted' ] total_viewtime = boned_stats[ 'total_viewtime' ] total_alternate_files = boned_stats[ 'total_alternate_files' ] total_duplicate_files = boned_stats[ 'total_duplicate_files' ] @@ -2595,6 +2597,9 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ): num_total = num_archive + num_inbox size_total = size_archive + size_inbox + num_supertotal = num_total + num_deleted + size_supertotal = size_total + size_deleted + vbox = QP.VBoxLayout() if num_total < 1000: @@ -2630,13 +2635,21 @@ class ReviewHowBonedAmI( ClientGUIScrolledPanels.ReviewPanel ): else: + notebook = ClientGUICommon.BetterNotebook( self ) + + # + + panel = QW.QWidget( notebook ) + + panel_vbox = QP.VBoxLayout() + average_filesize = size_total // num_total summary_label = 'Total: {} files, totalling {}, averaging {}'.format( HydrusData.ToHumanInt( num_total ), HydrusData.ToHumanBytes( size_total ), HydrusData.ToHumanBytes( average_filesize ) ) - summary_st = ClientGUICommon.BetterStaticText( self, label = summary_label ) + summary_st = ClientGUICommon.BetterStaticText( panel, label = summary_label ) - QP.AddToLayout( vbox, summary_st, CC.FLAGS_CENTER ) + QP.AddToLayout( panel_vbox, summary_st, CC.FLAGS_CENTER ) num_archive_percent = num_archive / num_total size_archive_percent = size_archive / size_total @@ -2644,42 +2657,97 @@ class ReviewHowBonedAmI( 
ClientGUIScrolledPanels.ReviewPanel ): num_inbox_percent = num_inbox / num_total size_inbox_percent = size_inbox / size_total - archive_label = 'Archive: ' + HydrusData.ToHumanInt( num_archive ) + ' files (' + ClientData.ConvertZoomToPercentage( num_archive_percent ) + '), totalling ' + HydrusData.ToHumanBytes( size_archive ) + '(' + ClientData.ConvertZoomToPercentage( size_archive_percent ) + ')' + num_deleted_percent = num_deleted / num_supertotal + size_deleted_percent = size_deleted / size_supertotal - archive_st = ClientGUICommon.BetterStaticText( self, label = archive_label ) + archive_label = 'Archive: {} files ({}), totalling {} ({})'.format( HydrusData.ToHumanInt( num_archive ), ClientData.ConvertZoomToPercentage( num_archive_percent ), HydrusData.ToHumanBytes( size_archive ), ClientData.ConvertZoomToPercentage( size_archive_percent ) ) - inbox_label = 'Inbox: ' + HydrusData.ToHumanInt( num_inbox ) + ' files (' + ClientData.ConvertZoomToPercentage( num_inbox_percent ) + '), totalling ' + HydrusData.ToHumanBytes( size_inbox ) + '(' + ClientData.ConvertZoomToPercentage( size_inbox_percent ) + ')' + archive_st = ClientGUICommon.BetterStaticText( panel, label = archive_label ) - inbox_st = ClientGUICommon.BetterStaticText( self, label = inbox_label ) + inbox_label = 'Inbox: {} files ({}), totalling {} ({})'.format( HydrusData.ToHumanInt( num_inbox ), ClientData.ConvertZoomToPercentage( num_inbox_percent ), HydrusData.ToHumanBytes( size_inbox ), ClientData.ConvertZoomToPercentage( size_inbox_percent ) ) - QP.AddToLayout( vbox, archive_st, CC.FLAGS_CENTER ) - QP.AddToLayout( vbox, inbox_st, CC.FLAGS_CENTER ) + inbox_st = ClientGUICommon.BetterStaticText( panel, label = inbox_label ) + + deleted_label = 'Deleted: {} files ({}), totalling {} ({})'.format( HydrusData.ToHumanInt( num_deleted ), ClientData.ConvertZoomToPercentage( num_deleted_percent ), HydrusData.ToHumanBytes( size_deleted ), ClientData.ConvertZoomToPercentage( size_deleted_percent ) ) + + 
deleted_st = ClientGUICommon.BetterStaticText( panel, label = deleted_label ) + + QP.AddToLayout( panel_vbox, archive_st, CC.FLAGS_CENTER ) + QP.AddToLayout( panel_vbox, inbox_st, CC.FLAGS_CENTER ) + QP.AddToLayout( panel_vbox, deleted_st, CC.FLAGS_CENTER ) + + if 'earliest_import_time' in boned_stats: + + eit = boned_stats[ 'earliest_import_time' ] + + eit_label = 'Earliest file import: {} ({})'.format( HydrusData.ConvertTimestampToPrettyTime( eit ), HydrusData.TimestampToPrettyTimeDelta( eit ) ) + + eit_st = ClientGUICommon.BetterStaticText( panel, label = eit_label ) + + QP.AddToLayout( panel_vbox, eit_st, CC.FLAGS_CENTER ) + + + panel_vbox.addStretch( 1 ) + + panel.setLayout( panel_vbox ) + + notebook.addTab( panel, 'files' ) + + # + + panel = QW.QWidget( notebook ) + + panel_vbox = QP.VBoxLayout() ( media_views, media_viewtime, preview_views, preview_viewtime ) = total_viewtime media_label = 'Total media views: ' + HydrusData.ToHumanInt( media_views ) + ', totalling ' + HydrusData.TimeDeltaToPrettyTimeDelta( media_viewtime ) - media_st = ClientGUICommon.BetterStaticText( self, label = media_label ) + media_st = ClientGUICommon.BetterStaticText( panel, label = media_label ) preview_label = 'Total preview views: ' + HydrusData.ToHumanInt( preview_views ) + ', totalling ' + HydrusData.TimeDeltaToPrettyTimeDelta( preview_viewtime ) - preview_st = ClientGUICommon.BetterStaticText( self, label = preview_label ) + preview_st = ClientGUICommon.BetterStaticText( panel, label = preview_label ) - QP.AddToLayout( vbox, media_st, CC.FLAGS_CENTER ) - QP.AddToLayout( vbox, preview_st, CC.FLAGS_CENTER ) + QP.AddToLayout( panel_vbox, media_st, CC.FLAGS_CENTER ) + QP.AddToLayout( panel_vbox, preview_st, CC.FLAGS_CENTER ) + + panel_vbox.addStretch( 1 ) + + panel.setLayout( panel_vbox ) + + notebook.addTab( panel, 'views' ) + + # + + panel = QW.QWidget( notebook ) + + panel_vbox = QP.VBoxLayout() potentials_label = 'Total duplicate potential pairs: {}'.format( 
HydrusData.ToHumanInt( total_potential_pairs ) ) duplicates_label = 'Total files set duplicate: {}'.format( HydrusData.ToHumanInt( total_duplicate_files ) ) alternates_label = 'Total duplicate file groups set alternate: {}'.format( HydrusData.ToHumanInt( total_alternate_files ) ) - potentials_st = ClientGUICommon.BetterStaticText( self, label = potentials_label ) - duplicates_st = ClientGUICommon.BetterStaticText( self, label = duplicates_label ) - alternates_st = ClientGUICommon.BetterStaticText( self, label = alternates_label ) + potentials_st = ClientGUICommon.BetterStaticText( panel, label = potentials_label ) + duplicates_st = ClientGUICommon.BetterStaticText( panel, label = duplicates_label ) + alternates_st = ClientGUICommon.BetterStaticText( panel, label = alternates_label ) - QP.AddToLayout( vbox, potentials_st, CC.FLAGS_CENTER ) - QP.AddToLayout( vbox, duplicates_st, CC.FLAGS_CENTER ) - QP.AddToLayout( vbox, alternates_st, CC.FLAGS_CENTER ) + QP.AddToLayout( panel_vbox, potentials_st, CC.FLAGS_CENTER ) + QP.AddToLayout( panel_vbox, duplicates_st, CC.FLAGS_CENTER ) + QP.AddToLayout( panel_vbox, alternates_st, CC.FLAGS_CENTER ) + panel_vbox.addStretch( 1 ) + + panel.setLayout( panel_vbox ) + + notebook.addTab( panel, 'duplicates' ) + + # + + QP.AddToLayout( vbox, notebook, CC.FLAGS_EXPAND_PERPENDICULAR ) + + + vbox.addStretch( 1 ) self.widget().setLayout( vbox ) diff --git a/hydrus/client/gui/pages/ClientGUIResults.py b/hydrus/client/gui/pages/ClientGUIResults.py index 49863b86..3396cdc9 100644 --- a/hydrus/client/gui/pages/ClientGUIResults.py +++ b/hydrus/client/gui/pages/ClientGUIResults.py @@ -3793,50 +3793,11 @@ class MediaPanelThumbnails( MediaPanel ): ClientGUIMenus.AppendMenuItem( duplicates_action_submenu, 'set this file as the best quality of its group', 'Set the focused media to be the King of its group.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_SET_FOCUSED_KING ) ) - 
if dissolution_actions_available: - - duplicates_single_dissolution_menu = QW.QMenu( duplicates_action_submenu ) - - if focus_can_be_searched: - - ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'schedule this file to be searched for potentials again', 'Queue this file for another potentials search. Will not remove any existing potentials.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_RESET_FOCUSED_POTENTIAL_SEARCH ) ) - - - if focus_has_potentials: - - ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file\'s potential relationships', 'Clear out this file\'s potential relationships.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_POTENTIALS ) ) - - - if focus_is_in_duplicate_group: - - if not focus_is_definitely_king: - - ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file from its duplicate group', 'Extract this file from its duplicate group and reset its search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_FROM_DUPLICATE_GROUP ) ) - - - ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'dissolve this file\'s duplicate group completely', 'Completely eliminate this file\'s duplicate group and reset all files\' search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_DISSOLVE_FOCUSED_DUPLICATE_GROUP ) ) - - - if focus_is_in_alternate_group: - - ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file from its alternate group', 'Extract this file\'s duplicate group from its alternate group and reset the duplicate group\'s search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( 
CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_FROM_ALTERNATE_GROUP ) ) - - ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'dissolve this file\'s alternate group completely', 'Completely eliminate this file\'s alternate group and all duplicate group members. This resets search status for all involved files.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_DISSOLVE_FOCUSED_ALTERNATE_GROUP ) ) - - - if focus_has_fps: - - ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'delete all false-positive relationships this file\'s alternate group has with other groups', 'Clear out all false-positive relationships this file\'s alternates group has with other groups and resets search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_CLEAR_FOCUSED_FALSE_POSITIVES ) ) - - - ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_single_dissolution_menu, 'remove/reset for this file' ) - - + + ClientGUIMenus.AppendSeparator( duplicates_action_submenu ) if multiple_selected: - ClientGUIMenus.AppendSeparator( duplicates_action_submenu ) - label = 'set this file as better than the ' + HydrusData.ToHumanInt( num_selected - 1 ) + ' other selected' ClientGUIMenus.AppendMenuItem( duplicates_action_submenu, label, 'Set the focused media to be better than the other selected files.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_SET_FOCUSED_BETTER ) ) @@ -3860,13 +3821,75 @@ class MediaPanelThumbnails( MediaPanel ): ClientGUIMenus.AppendMenuItem( duplicates_action_submenu, 'set selected collections as groups of alternates', 'Set files in the selection which are collected together as alternates.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, 
CAC.SIMPLE_DUPLICATE_MEDIA_SET_ALTERNATE_COLLECTIONS ) ) + # + + ClientGUIMenus.AppendSeparator( duplicates_action_submenu ) + + duplicates_edit_action_submenu = QW.QMenu( duplicates_action_submenu ) + + for duplicate_type in ( HC.DUPLICATE_BETTER, HC.DUPLICATE_SAME_QUALITY ): + + ClientGUIMenus.AppendMenuItem( duplicates_edit_action_submenu, 'for ' + HC.duplicate_type_string_lookup[duplicate_type], 'Edit what happens when you set this status.', self._EditDuplicateActionOptions, duplicate_type ) + + + if HG.client_controller.new_options.GetBoolean( 'advanced_mode' ): + + ClientGUIMenus.AppendMenuItem( duplicates_edit_action_submenu, 'for ' + HC.duplicate_type_string_lookup[HC.DUPLICATE_ALTERNATE] + ' (advanced!)', 'Edit what happens when you set this status.', self._EditDuplicateActionOptions, HC.DUPLICATE_ALTERNATE ) + + + ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_edit_action_submenu, 'edit default duplicate metadata merge options' ) + + # + ClientGUIMenus.AppendSeparator( duplicates_action_submenu ) ClientGUIMenus.AppendMenuItem( duplicates_action_submenu, 'set all possible pair combinations as \'potential\' duplicates for the duplicates filter.', 'Queue all these files up in the duplicates filter.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_SET_POTENTIAL ) ) - if advanced_mode: + + if dissolution_actions_available: + + ClientGUIMenus.AppendSeparator( duplicates_action_submenu ) + + duplicates_single_dissolution_menu = QW.QMenu( duplicates_action_submenu ) + + if focus_can_be_searched: - ClientGUIMenus.AppendSeparator( duplicates_action_submenu ) + ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'schedule this file to be searched for potentials again', 'Queue this file for another potentials search. 
Will not remove any existing potentials.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_RESET_FOCUSED_POTENTIAL_SEARCH ) ) + + + if focus_has_potentials: + + ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file\'s potential relationships', 'Clear out this file\'s potential relationships.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_POTENTIALS ) ) + + + if focus_is_in_duplicate_group: + + if not focus_is_definitely_king: + + ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file from its duplicate group', 'Extract this file from its duplicate group and reset its search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_FROM_DUPLICATE_GROUP ) ) + + + ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'dissolve this file\'s duplicate group completely', 'Completely eliminate this file\'s duplicate group and reset all files\' search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_DISSOLVE_FOCUSED_DUPLICATE_GROUP ) ) + + + if focus_is_in_alternate_group: + + ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'remove this file from its alternate group', 'Extract this file\'s duplicate group from its alternate group and reset the duplicate group\'s search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_REMOVE_FOCUSED_FROM_ALTERNATE_GROUP ) ) + + ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'dissolve this file\'s alternate group completely', 'Completely eliminate this file\'s alternate group and all duplicate group members. 
This resets search status for all involved files.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_DISSOLVE_FOCUSED_ALTERNATE_GROUP ) ) + + + if focus_has_fps: + + ClientGUIMenus.AppendMenuItem( duplicates_single_dissolution_menu, 'delete all false-positive relationships this file\'s alternate group has with other groups', 'Clear out all false-positive relationships this file\'s alternates group has with other groups and resets search status.', self.ProcessApplicationCommand, CAC.ApplicationCommand( CAC.APPLICATION_COMMAND_TYPE_SIMPLE, CAC.SIMPLE_DUPLICATE_MEDIA_CLEAR_FOCUSED_FALSE_POSITIVES ) ) + + + ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_single_dissolution_menu, 'remove/reset for this file' ) + + + if multiple_selected: + + if advanced_mode: duplicates_multiple_dissolution_menu = QW.QMenu( duplicates_action_submenu ) @@ -3879,21 +3902,6 @@ class MediaPanelThumbnails( MediaPanel ): ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_multiple_dissolution_menu, 'remove/reset for all selected' ) - duplicates_edit_action_submenu = QW.QMenu( duplicates_action_submenu ) - - for duplicate_type in ( HC.DUPLICATE_BETTER, HC.DUPLICATE_SAME_QUALITY ): - ClientGUIMenus.AppendMenuItem( duplicates_edit_action_submenu, 'for ' + HC.duplicate_type_string_lookup[duplicate_type], 'Edit what happens when you set this status.', self._EditDuplicateActionOptions, duplicate_type ) - - - if HG.client_controller.new_options.GetBoolean( 'advanced_mode' ): - - ClientGUIMenus.AppendMenuItem( duplicates_edit_action_submenu, 'for ' + HC.duplicate_type_string_lookup[HC.DUPLICATE_ALTERNATE] + ' (advanced!)', 'Edit what happens when you set this status.', self._EditDuplicateActionOptions, HC.DUPLICATE_ALTERNATE ) - - - ClientGUIMenus.AppendSeparator( duplicates_action_submenu ) - - ClientGUIMenus.AppendMenu( duplicates_action_submenu, duplicates_edit_action_submenu, 'edit default 
duplicate metadata merge options' ) - ClientGUIMenus.AppendMenu( duplicates_menu, duplicates_action_submenu, 'set relationship' ) diff --git a/hydrus/client/importing/ClientImportGallery.py b/hydrus/client/importing/ClientImportGallery.py index ca281a56..8f296b02 100644 --- a/hydrus/client/importing/ClientImportGallery.py +++ b/hydrus/client/importing/ClientImportGallery.py @@ -1389,7 +1389,10 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ): publish_to_page = False - gallery_import.Start( self._page_key, publish_to_page ) + if self._have_started: + + gallery_import.Start( self._page_key, publish_to_page ) + self._AddGalleryImport( gallery_import ) @@ -1459,7 +1462,10 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ): publish_to_page = False - gallery_import.Start( self._page_key, publish_to_page ) + if self._have_started: + + gallery_import.Start( self._page_key, publish_to_page ) + self._AddGalleryImport( gallery_import ) diff --git a/hydrus/client/importing/ClientImportWatchers.py b/hydrus/client/importing/ClientImportWatchers.py index 6d0b4938..50619848 100644 --- a/hydrus/client/importing/ClientImportWatchers.py +++ b/hydrus/client/importing/ClientImportWatchers.py @@ -241,7 +241,10 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ): publish_to_page = False - watcher.Start( self._page_key, publish_to_page ) + if self._have_started: + + watcher.Start( self._page_key, publish_to_page ) + self._AddWatcher( watcher ) diff --git a/hydrus/client/networking/ClientLocalServerResources.py b/hydrus/client/networking/ClientLocalServerResources.py index 15fa7317..91bfda5d 100644 --- a/hydrus/client/networking/ClientLocalServerResources.py +++ b/hydrus/client/networking/ClientLocalServerResources.py @@ -37,10 +37,10 @@ LOCAL_BOORU_STRING_PARAMS = set() LOCAL_BOORU_JSON_PARAMS = set() LOCAL_BOORU_JSON_BYTE_LIST_PARAMS = set() -CLIENT_API_INT_PARAMS = { 'file_id' } -CLIENT_API_BYTE_PARAMS = { 'hash', 
'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key' } +CLIENT_API_INT_PARAMS = { 'file_id', 'file_sort_type' } +CLIENT_API_BYTE_PARAMS = { 'hash', 'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key', 'tag_service_key', 'file_service_key' } CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain' } -CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'detailed_url_information', 'simple' } +CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'detailed_url_information', 'simple', 'file_sort_asc' } CLIENT_API_JSON_BYTE_LIST_PARAMS = { 'hashes' } def ParseLocalBooruGETArgs( requests_args ): @@ -1580,16 +1580,121 @@ class HydrusResourceClientAPIRestrictedGetFilesSearchFiles( HydrusResourceClient def _threadDoGETJob( self, request: HydrusServerRequest.HydrusRequest ): - # optionally pull this from the params, obviously - location_search_context = ClientSearch.LocationSearchContext( current_service_keys = [ CC.LOCAL_FILE_SERVICE_KEY ] ) + if 'file_service_key' in request.parsed_request_args or 'file_service_name' in request.parsed_request_args: + + if 'file_service_key' in request.parsed_request_args: + + file_service_key = request.parsed_request_args[ 'file_service_key' ] + + else: + + file_service_name = request.parsed_request_args[ 'file_service_name' ] + + try: + + file_service_key = HG.client_controller.services_manager.GetServiceKeyFromName( HC.ALL_FILE_SERVICES, file_service_name ) + + except: + + raise HydrusExceptions.BadRequestException( 'Could not find the service "{}"!'.format( file_service_name ) ) + + + + try: + + service = HG.client_controller.services_manager.GetService( file_service_key ) + + except: + + raise HydrusExceptions.BadRequestException( 'Could not find that file service!' 
) + + + if service.GetServiceType() not in HC.ALL_FILE_SERVICES: + + raise HydrusExceptions.BadRequestException( 'Sorry, that service key did not give a file service!' ) + + + else: + + # I guess ideally we would go for the 'all local services' umbrella, or a list of them, or however we end up doing that + # for now we'll fudge it + + file_service_key = list( HG.client_controller.services_manager.GetServiceKeys( ( HC.LOCAL_FILE_DOMAIN, ) ) )[0] + - tag_search_context = ClientSearch.TagSearchContext( service_key = CC.COMBINED_TAG_SERVICE_KEY ) + if 'tag_service_key' in request.parsed_request_args or 'tag_service_name' in request.parsed_request_args: + + if 'tag_service_key' in request.parsed_request_args: + + tag_service_key = request.parsed_request_args[ 'tag_service_key' ] + + else: + + tag_service_name = request.parsed_request_args[ 'tag_service_name' ] + + try: + + tag_service_key = HG.client_controller.services_manager.GetServiceKeyFromName( HC.ALL_TAG_SERVICES, tag_service_name ) + + except: + + raise HydrusExceptions.BadRequestException( 'Could not find the service "{}"!'.format( tag_service_name ) ) + + + + try: + + service = HG.client_controller.services_manager.GetService( tag_service_key ) + + except: + + raise HydrusExceptions.BadRequestException( 'Could not find that tag service!' ) + + + if service.GetServiceType() not in HC.ALL_TAG_SERVICES: + + raise HydrusExceptions.BadRequestException( 'Sorry, that service key did not give a tag service!' ) + + + else: + + tag_service_key = CC.COMBINED_TAG_SERVICE_KEY + + + if tag_service_key == CC.COMBINED_TAG_SERVICE_KEY and file_service_key == CC.COMBINED_FILE_SERVICE_KEY: + + raise HydrusExceptions.BadRequestException( 'Sorry, search for all known tags over all known files is not supported!' 
) + + + location_search_context = ClientSearch.LocationSearchContext( current_service_keys = [ file_service_key ] ) + tag_search_context = ClientSearch.TagSearchContext( service_key = tag_service_key ) predicates = ParseClientAPISearchPredicates( request ) file_search_context = ClientSearch.FileSearchContext( location_search_context = location_search_context, tag_search_context = tag_search_context, predicates = predicates ) + file_sort_type = CC.SORT_FILES_BY_IMPORT_TIME + + if 'file_sort_type' in request.parsed_request_args: + + file_sort_type = request.parsed_request_args[ 'file_sort_type' ] + + + if file_sort_type not in CC.SYSTEM_SORT_TYPES: + + raise HydrusExceptions.BadRequestException( 'Sorry, did not understand that sort type!' ) + + + file_sort_asc = False + + if 'file_sort_asc' in request.parsed_request_args: + + file_sort_asc = request.parsed_request_args.GetValue( 'file_sort_asc', bool ) + + + sort_order = CC.SORT_ASC if file_sort_asc else CC.SORT_DESC + # newest first - sort_by = ClientMedia.MediaSort( sort_type = ( 'system', CC.SORT_FILES_BY_IMPORT_TIME ), sort_order = CC.SORT_DESC ) + sort_by = ClientMedia.MediaSort( sort_type = ( 'system', file_sort_type ), sort_order = sort_order ) hash_ids = HG.client_controller.Read( 'file_query_ids', file_search_context, sort_by = sort_by, apply_implicit_limit = False ) diff --git a/hydrus/core/HydrusConstants.py b/hydrus/core/HydrusConstants.py index 8a427865..df0693e2 100644 --- a/hydrus/core/HydrusConstants.py +++ b/hydrus/core/HydrusConstants.py @@ -81,8 +81,8 @@ options = {} # Misc NETWORK_VERSION = 20 -SOFTWARE_VERSION = 449 -CLIENT_API_VERSION = 18 +SOFTWARE_VERSION = 450 +CLIENT_API_VERSION = 19 SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 ) @@ -293,10 +293,10 @@ NICE_RESOLUTIONS = {} NICE_RESOLUTIONS[ ( 640, 480 ) ] = '480p' NICE_RESOLUTIONS[ ( 1280, 720 ) ] = '720p' NICE_RESOLUTIONS[ ( 1920, 1080 ) ] = '1080p' -NICE_RESOLUTIONS[ ( 3840, 2060 ) ] = '4k' +NICE_RESOLUTIONS[ ( 3840, 2160 ) ] = '4k'
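The new `file_sort_type`/`file_sort_asc` parameters for `/get_files/search_files` can be exercised with a plain GET request. A minimal sketch of building such a URL, assuming the Client API listens on its default port 45869; the `2` passed as the sort type is only an illustrative value, the real integers are the `CC.SORT_FILES_BY_*` constants in `hydrus/core/HydrusConstants.py`:

```python
import json
from urllib.parse import urlencode

API_BASE = 'http://127.0.0.1:45869'  # default Client API port, an assumption here

def build_search_url( tags, file_sort_type, file_sort_asc ):
    
    # tags and booleans travel JSON-encoded, matching CLIENT_API_JSON_PARAMS;
    # file_sort_type is a plain int, matching CLIENT_API_INT_PARAMS
    params = {
        'tags' : json.dumps( tags ),
        'file_sort_type' : file_sort_type,
        'file_sort_asc' : json.dumps( file_sort_asc )
    }
    
    return API_BASE + '/get_files/search_files?' + urlencode( params )

url = build_search_url( [ 'system:inbox' ], 2, True )
```

A real call would also send the `Hydrus-Client-API-Access-Key` header; that part is omitted in this sketch.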
NICE_RESOLUTIONS[ ( 720, 1280 ) ] = 'vertical 720p' NICE_RESOLUTIONS[ ( 1080, 1920 ) ] = 'vertical 1080p' -NICE_RESOLUTIONS[ ( 2060, 3840 ) ] = 'vertical 4k' +NICE_RESOLUTIONS[ ( 2160, 3840 ) ] = 'vertical 4k' NICE_RATIOS = {} @@ -434,7 +434,10 @@ NONEDITABLE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_FI SPECIFIC_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, FILE_REPOSITORY, IPFS ) AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, FILE_REPOSITORY ) TAG_CACHE_SPECIFIC_FILE_SERVICES = ( COMBINED_LOCAL_FILE, FILE_REPOSITORY ) + ALL_SERVICES = REMOTE_SERVICES + LOCAL_SERVICES + ( COMBINED_FILE, COMBINED_TAG ) +ALL_TAG_SERVICES = REAL_TAG_SERVICES + ( COMBINED_TAG, ) +ALL_FILE_SERVICES = FILE_SERVICES + ( COMBINED_FILE, ) SERVICES_WITH_THUMBNAILS = [ FILE_REPOSITORY, LOCAL_FILE_DOMAIN ] diff --git a/hydrus/core/HydrusController.py b/hydrus/core/HydrusController.py index 81ff9082..cca8f4e9 100644 --- a/hydrus/core/HydrusController.py +++ b/hydrus/core/HydrusController.py @@ -644,6 +644,41 @@ class HydrusController( object ): + def PrintQueryPlan( self, query, plan_lines ): + + pretty_timestamp = time.strftime( '%Y-%m-%d %H-%M-%S', time.localtime( HG.query_planner_start_time ) ) + + query_planner_log_filename = '{} query planner - {}.log'.format( self._name, pretty_timestamp ) + + query_planner_log_path = os.path.join( self.db_dir, query_planner_log_filename ) + + with open( query_planner_log_path, 'a', encoding = 'utf-8' ) as f: + + prefix = time.strftime( '%Y/%m/%d %H:%M:%S: ' ) + + if ' ' in query: + + first_word = query.split( ' ', 1 )[0] + + else: + + first_word = 'unknown' + + + f.write( prefix + first_word ) + f.write( os.linesep ) + f.write( query ) + + if len( plan_lines ) > 0: + + f.write( os.linesep ) + f.write( os.linesep.join( ( str( p ) for p in plan_lines ) ) ) + + + f.write( os.linesep * 2 ) + + + def Read( self, action, *args, **kwargs 
): return self._Read( action, *args, **kwargs ) diff --git a/hydrus/core/HydrusDB.py b/hydrus/core/HydrusDB.py index b5ed4071..07f37d36 100644 --- a/hydrus/core/HydrusDB.py +++ b/hydrus/core/HydrusDB.py @@ -6,6 +6,7 @@ import sqlite3 import traceback import time +from hydrus.core import HydrusDBBase from hydrus.core import HydrusConstants as HC from hydrus.core import HydrusData from hydrus.core import HydrusEncryption @@ -59,13 +60,6 @@ def GetApproxVacuumDuration( db_size ): return approx_vacuum_duration -def GetRowCount( c: sqlite3.Cursor ): - - row_count = c.rowcount - - if row_count == -1: return 0 - else: return row_count - def ReadFromCancellableCursor( cursor, largest_group_size, cancelled_hook = None ): if cancelled_hook is None: @@ -164,11 +158,13 @@ def VacuumDB( db_path ): c.execute( 'PRAGMA journal_mode = {};'.format( HG.db_journal_mode ) ) -class DBCursorTransactionWrapper( object ): +class DBCursorTransactionWrapper( HydrusDBBase.DBBase ): def __init__( self, c: sqlite3.Cursor, transaction_commit_period: int ): - self._c = c + HydrusDBBase.DBBase.__init__( self ) + + self._SetCursor( c ) self._transaction_commit_period = transaction_commit_period @@ -184,8 +180,8 @@ class DBCursorTransactionWrapper( object ): if not self._in_transaction: - self._c.execute( 'BEGIN IMMEDIATE;' ) - self._c.execute( 'SAVEPOINT hydrus_savepoint;' ) + self._Execute( 'BEGIN IMMEDIATE;' ) + self._Execute( 'SAVEPOINT hydrus_savepoint;' ) self._transaction_start_time = HydrusData.GetNow() self._in_transaction = True @@ -197,24 +193,24 @@ class DBCursorTransactionWrapper( object ): if self._in_transaction: - self._c.execute( 'COMMIT;' ) + self._Execute( 'COMMIT;' ) self._in_transaction = False self._transaction_contains_writes = False if HG.db_journal_mode == 'WAL' and HydrusData.TimeHasPassed( self._last_wal_checkpoint_time + 1800 ): - self._c.execute( 'PRAGMA wal_checkpoint(PASSIVE);' ) + self._Execute( 'PRAGMA wal_checkpoint(PASSIVE);' ) self._last_wal_checkpoint_time = 
HydrusData.GetNow() if HydrusData.TimeHasPassed( self._last_mem_refresh_time + 600 ): - self._c.execute( 'DETACH mem;' ) - self._c.execute( 'ATTACH ":memory:" AS mem;' ) + self._Execute( 'DETACH mem;' ) + self._Execute( 'ATTACH ":memory:" AS mem;' ) - TemporaryIntegerTableNameCache.instance().Clear() + HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear() self._last_mem_refresh_time = HydrusData.GetNow() @@ -249,10 +245,10 @@ class DBCursorTransactionWrapper( object ): if self._in_transaction: - self._c.execute( 'ROLLBACK TO hydrus_savepoint;' ) + self._Execute( 'ROLLBACK TO hydrus_savepoint;' ) # any temp int tables created in this lad will be rolled back, so 'initialised' can't be trusted. just reset, no big deal - TemporaryIntegerTableNameCache.instance().Clear() + HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear() # still in transaction # transaction may no longer contain writes, but it isn't important to figure out that it doesn't @@ -265,9 +261,9 @@ class DBCursorTransactionWrapper( object ): def Save( self ): - self._c.execute( 'RELEASE hydrus_savepoint;' ) + self._Execute( 'RELEASE hydrus_savepoint;' ) - self._c.execute( 'SAVEPOINT hydrus_savepoint;' ) + self._Execute( 'SAVEPOINT hydrus_savepoint;' ) def TimeToCommit( self ): @@ -275,7 +271,7 @@ class DBCursorTransactionWrapper( object ): return self._in_transaction and self._transaction_contains_writes and HydrusData.TimeHasPassed( self._transaction_start_time + self._transaction_commit_period ) -class HydrusDB( object ): +class HydrusDB( HydrusDBBase.DBBase ): READ_WRITE_ACTIONS = [] UPDATE_WAIT = 2 @@ -287,13 +283,15 @@ class HydrusDB( object ): raise Exception( 'Sorry, it looks like the db partition has less than 500MB, please free up some space.' 
) + HydrusDBBase.DBBase.__init__( self ) + self._controller = controller self._db_dir = db_dir self._db_name = db_name self._modules = [] - TemporaryIntegerTableNameCache() + HydrusDBBase.TemporaryIntegerTableNameCache() self._ssl_cert_filename = '{}.crt'.format( self._db_name ) self._ssl_key_filename = '{}.key'.format( self._db_name ) @@ -332,7 +330,6 @@ class HydrusDB( object ): self._current_job_name = '' self._db = None - self._c = None self._is_connected = False self._cursor_transaction_wrapper = None @@ -342,12 +339,12 @@ class HydrusDB( object ): # open and close to clean up in case last session didn't close well self._InitDB() - self._CloseDBCursor() + self._CloseDBConnection() self._InitDB() - ( version, ) = self._c.execute( 'SELECT version FROM version;' ).fetchone() + ( version, ) = self._Execute( 'SELECT version FROM version;' ).fetchone() if version > HC.SOFTWARE_VERSION: @@ -405,10 +402,10 @@ class HydrusDB( object ): raise e - ( version, ) = self._c.execute( 'SELECT version FROM version;' ).fetchone() + ( version, ) = self._Execute( 'SELECT version FROM version;' ).fetchone() - self._CloseDBCursor() + self._CloseDBConnection() self._controller.CallToThreadLongRunning( self.MainLoop ) @@ -427,8 +424,8 @@ class HydrusDB( object ): # this is useful to do after populating a temp table so the query planner can decide which index to use in a big join that uses it - self._c.execute( 'ANALYZE {};'.format( temp_table_name ) ) - self._c.execute( 'ANALYZE mem.sqlite_master;' ) # this reloads the current stats into the query planner, may no longer be needed + self._Execute( 'ANALYZE {};'.format( temp_table_name ) ) + self._Execute( 'ANALYZE mem.sqlite_master;' ) # this reloads the current stats into the query planner, may no longer be needed def _AttachExternalDatabases( self ): @@ -442,12 +439,12 @@ class HydrusDB( object ): db_path = os.path.join( self._db_dir, filename ) - self._c.execute( 'ATTACH ? AS ' + name + ';', ( db_path, ) ) + self._Execute( 'ATTACH ? 
AS ' + name + ';', ( db_path, ) ) db_path = os.path.join( self._db_dir, self._durable_temp_db_filename ) - self._c.execute( 'ATTACH ? AS durable_temp;', ( db_path, ) ) + self._Execute( 'ATTACH ? AS durable_temp;', ( db_path, ) ) def _CleanAfterJobWork( self ): @@ -455,9 +452,9 @@ class HydrusDB( object ): self._pubsubs = [] - def _CloseDBCursor( self ): + def _CloseDBConnection( self ): - TemporaryIntegerTableNameCache.instance().Clear() + HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear() if self._db is not None: @@ -466,14 +463,13 @@ class HydrusDB( object ): self._cursor_transaction_wrapper.Commit() - self._c.close() + self._CloseCursor() + self._db.close() - del self._c del self._db self._db = None - self._c = None self._is_connected = False @@ -488,35 +484,6 @@ class HydrusDB( object ): raise NotImplementedError() - def _CreateIndex( self, table_name, columns, unique = False ): - - if '.' in table_name: - - table_name_simple = table_name.split( '.' )[1] - - else: - - table_name_simple = table_name - - - index_name = table_name + '_' + '_'.join( columns ) + '_index' - - if unique: - - create_phrase = 'CREATE UNIQUE INDEX IF NOT EXISTS ' - - else: - - create_phrase = 'CREATE INDEX IF NOT EXISTS ' - - - on_phrase = ' ON ' + table_name_simple + ' (' + ', '.join( columns ) + ');' - - statement = create_phrase + index_name + on_phrase - - self._c.execute( statement ) - - def _DisplayCatastrophicError( self, text ): message = 'The db encountered a serious error! 
This is going to be written to the log as well, but here it is for a screenshot:' @@ -534,30 +501,6 @@ class HydrusDB( object ): - def _ExecuteManySelectSingleParam( self, query, single_param_iterator ): - - select_args_iterator = ( ( param, ) for param in single_param_iterator ) - - return self._ExecuteManySelect( query, select_args_iterator ) - - - def _ExecuteManySelect( self, query, select_args_iterator ): - - # back in python 2, we did batches of 256 hash_ids/whatever at a time in big "hash_id IN (?,?,?,?,...)" predicates. - # this was useful to get over some 100,000 x fetchall() call overhead, but it would sometimes throw the SQLite query planner off and do non-optimal queries - # (basically, the "hash_id in (256)" would weight the hash_id index request x 256 vs another when comparing the sqlite_stat1 tables, which could lead to WEWLAD for some indices with low median very-high mean skewed distribution - # python 3 is better about call overhead, so we'll go back to what is pure - # cursor.executemany SELECT when - - for select_args in select_args_iterator: - - for result in self._c.execute( query, select_args ): - - yield result - - - - def _GenerateDBJob( self, job_type, synchronous, action, *args, **kwargs ): return HydrusData.JobDatabase( job_type, synchronous, action, *args, **kwargs ) @@ -597,9 +540,9 @@ class HydrusDB( object ): - self._InitDBCursor() + self._InitDBConnection() - result = self._c.execute( 'SELECT 1 FROM sqlite_master WHERE type = ? AND name = ?;', ( 'table', 'version' ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM sqlite_master WHERE type = ? 
AND name = ?;', ( 'table', 'version' ) ).fetchone() if result is None: @@ -616,9 +559,9 @@ class HydrusDB( object ): - def _InitDBCursor( self ): + def _InitDBConnection( self ): - self._CloseDBCursor() + self._CloseDBConnection() db_path = os.path.join( self._db_dir, self._db_filenames[ 'main' ] ) @@ -626,7 +569,9 @@ class HydrusDB( object ): self._db = sqlite3.connect( db_path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES ) - self._c = self._db.cursor() + c = self._db.cursor() + + self._SetCursor( c ) self._is_connected = True @@ -636,42 +581,42 @@ class HydrusDB( object ): if HG.no_db_temp_files: - self._c.execute( 'PRAGMA temp_store = 2;' ) # use memory for temp store exclusively + self._Execute( 'PRAGMA temp_store = 2;' ) # use memory for temp store exclusively self._AttachExternalDatabases() - self._c.execute( 'ATTACH ":memory:" AS mem;' ) + self._Execute( 'ATTACH ":memory:" AS mem;' ) except Exception as e: raise HydrusExceptions.DBAccessException( 'Could not connect to database! This could be an issue related to WAL and network storage, or something else. If it is not obvious to you, please let hydrus dev know. 
Error follows:' + os.linesep * 2 + str( e ) ) - TemporaryIntegerTableNameCache.instance().Clear() + HydrusDBBase.TemporaryIntegerTableNameCache.instance().Clear() # durable_temp is not excluded here - db_names = [ name for ( index, name, path ) in self._c.execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp' ) ] + db_names = [ name for ( index, name, path ) in self._Execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp' ) ] for db_name in db_names: # MB -> KB cache_size = HG.db_cache_size * 1024 - self._c.execute( 'PRAGMA {}.cache_size = -{};'.format( db_name, cache_size ) ) + self._Execute( 'PRAGMA {}.cache_size = -{};'.format( db_name, cache_size ) ) - self._c.execute( 'PRAGMA {}.journal_mode = {};'.format( db_name, HG.db_journal_mode ) ) + self._Execute( 'PRAGMA {}.journal_mode = {};'.format( db_name, HG.db_journal_mode ) ) if HG.db_journal_mode in ( 'PERSIST', 'WAL' ): - self._c.execute( 'PRAGMA {}.journal_size_limit = {};'.format( db_name, 1024 ** 3 ) ) # 1GB for now + self._Execute( 'PRAGMA {}.journal_size_limit = {};'.format( db_name, 1024 ** 3 ) ) # 1GB for now - self._c.execute( 'PRAGMA {}.synchronous = {};'.format( db_name, HG.db_synchronous ) ) + self._Execute( 'PRAGMA {}.synchronous = {};'.format( db_name, HG.db_synchronous ) ) try: - self._c.execute( 'SELECT * FROM {}.sqlite_master;'.format( db_name ) ).fetchone() + self._Execute( 'SELECT * FROM {}.sqlite_master;'.format( db_name ) ).fetchone() except sqlite3.OperationalError as e: @@ -771,9 +716,9 @@ class HydrusDB( object ): HydrusData.Print( 'When the transaction failed, attempting to rollback the database failed. Please restart the client as soon as is convenient.' 
) - self._CloseDBCursor() + self._CloseDBConnection() - self._InitDBCursor() + self._InitDBConnection() HydrusData.PrintException( rollback_e ) @@ -815,28 +760,7 @@ class HydrusDB( object ): def _ShrinkMemory( self ): - self._c.execute( 'PRAGMA shrink_memory;' ) - - - def _STI( self, iterable_cursor ): - - # strip singleton tuples to an iterator - - return ( item for ( item, ) in iterable_cursor ) - - - def _STL( self, iterable_cursor ): - - # strip singleton tuples to a list - - return [ item for ( item, ) in iterable_cursor ] - - - def _STS( self, iterable_cursor ): - - # strip singleton tuples to a set - - return { item for ( item, ) in iterable_cursor } + self._Execute( 'PRAGMA shrink_memory;' ) def _UnloadModules( self ): @@ -951,7 +875,7 @@ class HydrusDB( object ): try: - self._InitDBCursor() # have to reinitialise because the thread id has changed + self._InitDBConnection() # have to reinitialise because the thread id has changed self._InitCaches() @@ -1030,7 +954,7 @@ class HydrusDB( object ): if self._pause_and_disconnect: - self._CloseDBCursor() + self._CloseDBConnection() while self._pause_and_disconnect: @@ -1042,11 +966,11 @@ class HydrusDB( object ): time.sleep( 1 ) - self._InitDBCursor() + self._InitDBConnection() - self._CloseDBCursor() + self._CloseDBConnection() temp_path = os.path.join( self._db_dir, self._durable_temp_db_filename ) @@ -1111,100 +1035,3 @@ class HydrusDB( object ): if synchronous: return job.GetResult() -class TemporaryIntegerTableNameCache( object ): - - my_instance = None - - def __init__( self ): - - TemporaryIntegerTableNameCache.my_instance = self - - self._column_names_to_table_names = collections.defaultdict( collections.deque ) - self._column_names_counter = collections.Counter() - - - @staticmethod - def instance() -> 'TemporaryIntegerTableNameCache': - - if TemporaryIntegerTableNameCache.my_instance is None: - - raise Exception( 'TemporaryIntegerTableNameCache is not yet initialised!' 
) - - else: - - return TemporaryIntegerTableNameCache.my_instance - - - - def Clear( self ): - - self._column_names_to_table_names = collections.defaultdict( collections.deque ) - self._column_names_counter = collections.Counter() - - - def GetName( self, column_name ): - - table_names = self._column_names_to_table_names[ column_name ] - - initialised = True - - if len( table_names ) == 0: - - initialised = False - - i = self._column_names_counter[ column_name ] - - table_name = 'mem.temp_int_{}_{}'.format( column_name, i ) - - table_names.append( table_name ) - - self._column_names_counter[ column_name ] += 1 - - - table_name = table_names.pop() - - return ( initialised, table_name ) - - - def ReleaseName( self, column_name, table_name ): - - self._column_names_to_table_names[ column_name ].append( table_name ) - - -class TemporaryIntegerTable( object ): - - def __init__( self, cursor, integer_iterable, column_name ): - - if not isinstance( integer_iterable, set ): - - integer_iterable = set( integer_iterable ) - - - self._cursor = cursor - self._integer_iterable = integer_iterable - self._column_name = column_name - - ( self._initialised, self._table_name ) = TemporaryIntegerTableNameCache.instance().GetName( self._column_name ) - - - def __enter__( self ): - - if not self._initialised: - - self._cursor.execute( 'CREATE TABLE IF NOT EXISTS {} ( {} INTEGER PRIMARY KEY );'.format( self._table_name, self._column_name ) ) - - - self._cursor.executemany( 'INSERT INTO {} ( {} ) VALUES ( ? 
);'.format( self._table_name, self._column_name ), ( ( i, ) for i in self._integer_iterable ) ) - - return self._table_name - - - def __exit__( self, exc_type, exc_val, exc_tb ): - - self._cursor.execute( 'DELETE FROM {};'.format( self._table_name ) ) - - TemporaryIntegerTableNameCache.instance().ReleaseName( self._column_name, self._table_name ) - - return False - - diff --git a/hydrus/core/HydrusDBBase.py b/hydrus/core/HydrusDBBase.py new file mode 100644 index 00000000..66a63d4b --- /dev/null +++ b/hydrus/core/HydrusDBBase.py @@ -0,0 +1,257 @@ +import collections +import sqlite3 + +from hydrus.core import HydrusGlobals as HG + +class TemporaryIntegerTableNameCache( object ): + + my_instance = None + + def __init__( self ): + + TemporaryIntegerTableNameCache.my_instance = self + + self._column_names_to_table_names = collections.defaultdict( collections.deque ) + self._column_names_counter = collections.Counter() + + + @staticmethod + def instance() -> 'TemporaryIntegerTableNameCache': + + if TemporaryIntegerTableNameCache.my_instance is None: + + raise Exception( 'TemporaryIntegerTableNameCache is not yet initialised!' 
) + + else: + + return TemporaryIntegerTableNameCache.my_instance + + + + def Clear( self ): + + self._column_names_to_table_names = collections.defaultdict( collections.deque ) + self._column_names_counter = collections.Counter() + + + def GetName( self, column_name ): + + table_names = self._column_names_to_table_names[ column_name ] + + initialised = True + + if len( table_names ) == 0: + + initialised = False + + i = self._column_names_counter[ column_name ] + + table_name = 'mem.temp_int_{}_{}'.format( column_name, i ) + + table_names.append( table_name ) + + self._column_names_counter[ column_name ] += 1 + + + table_name = table_names.pop() + + return ( initialised, table_name ) + + + def ReleaseName( self, column_name, table_name ): + + self._column_names_to_table_names[ column_name ].append( table_name ) + + +class TemporaryIntegerTable( object ): + + def __init__( self, cursor: sqlite3.Cursor, integer_iterable, column_name ): + + if not isinstance( integer_iterable, set ): + + integer_iterable = set( integer_iterable ) + + + self._cursor = cursor + self._integer_iterable = integer_iterable + self._column_name = column_name + + ( self._initialised, self._table_name ) = TemporaryIntegerTableNameCache.instance().GetName( self._column_name ) + + + def __enter__( self ): + + if not self._initialised: + + self._cursor.execute( 'CREATE TABLE IF NOT EXISTS {} ( {} INTEGER PRIMARY KEY );'.format( self._table_name, self._column_name ) ) + + + self._cursor.executemany( 'INSERT INTO {} ( {} ) VALUES ( ? 
);'.format( self._table_name, self._column_name ), ( ( i, ) for i in self._integer_iterable ) ) + + return self._table_name + + + def __exit__( self, exc_type, exc_val, exc_tb ): + + self._cursor.execute( 'DELETE FROM {};'.format( self._table_name ) ) + + TemporaryIntegerTableNameCache.instance().ReleaseName( self._column_name, self._table_name ) + + return False + + +class DBBase( object ): + + def __init__( self ): + + self._c = None + + + def _CloseCursor( self ): + + if self._c is not None: + + self._c.close() + + del self._c + + self._c = None + + + + def _CreateIndex( self, table_name, columns, unique = False ): + + if unique: + + create_phrase = 'CREATE UNIQUE INDEX IF NOT EXISTS' + + else: + + create_phrase = 'CREATE INDEX IF NOT EXISTS' + + + index_name = self._GenerateIndexName( table_name, columns ) + + if '.' in table_name: + + table_name_simple = table_name.split( '.' )[1] + + else: + + table_name_simple = table_name + + + statement = '{} {} ON {} ({});'.format( create_phrase, index_name, table_name_simple, ', '.join( columns ) ) + + self._Execute( statement ) + + + def _Execute( self, query, *args ) -> sqlite3.Cursor: + + if HG.query_planner_mode: + + plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), *args ).fetchall() + + HG.query_planner_query_count += 1 + + HG.client_controller.PrintQueryPlan( query, plan_lines ) + + + return self._c.execute( query, *args ) + + + def _ExecuteMany( self, query, args_iterator ): + + if HG.query_planner_mode: + + args_iterator = list( args_iterator ) + + if len( args_iterator ) > 0: + + plan_lines = self._c.execute( 'EXPLAIN QUERY PLAN {}'.format( query ), args_iterator[0] ).fetchall() + + HG.query_planner_query_count += 1 + + HG.client_controller.PrintQueryPlan( query, plan_lines ) + + + + self._c.executemany( query, args_iterator ) + + + def _GenerateIndexName( self, table_name, columns ): + + return '{}_{}_index'.format( table_name, '_'.join( columns ) ) + + + def _ExecuteManySelectSingleParam( self, query, single_param_iterator
): + + select_args_iterator = ( ( param, ) for param in single_param_iterator ) + + return self._ExecuteManySelect( query, select_args_iterator ) + + + def _ExecuteManySelect( self, query, select_args_iterator ): + + # back in python 2, we did batches of 256 hash_ids/whatever at a time in big "hash_id IN (?,?,?,?,...)" predicates. + # this was useful to get over some 100,000 x fetchall() call overhead, but it would sometimes throw the SQLite query planner off and do non-optimal queries + # (basically, the "hash_id in (256)" would weight the hash_id index request x 256 vs another when comparing the sqlite_stat1 tables, which could lead to WEWLAD for some indices with low median very-high mean skewed distribution + # python 3 is better about call overhead, so we'll go back to what is pure + # cursor.executemany SELECT when + + for select_args in select_args_iterator: + + for result in self._Execute( query, select_args ): + + yield result + + + + + def _GetLastRowId( self ) -> int: + + return self._c.lastrowid + + + def _GetRowCount( self ): + + row_count = self._c.rowcount + + if row_count == -1: + + return 0 + + else: + + return row_count + + + + def _MakeTemporaryIntegerTable( self, integer_iterable, column_name ): + + return TemporaryIntegerTable( self._c, integer_iterable, column_name ) + + + def _SetCursor( self, c: sqlite3.Cursor ): + + self._c = c + + + def _STI( self, iterable_cursor ): + + # strip singleton tuples to an iterator + + return ( item for ( item, ) in iterable_cursor ) + + + def _STL( self, iterable_cursor ): + + # strip singleton tuples to a list + + return [ item for ( item, ) in iterable_cursor ] + + + def _STS( self, iterable_cursor ): + + # strip singleton tuples to a set + + return { item for ( item, ) in iterable_cursor } + + \ No newline at end of file diff --git a/hydrus/core/HydrusDBModule.py b/hydrus/core/HydrusDBModule.py index f5f3b049..2a6bd143 100644 --- a/hydrus/core/HydrusDBModule.py +++ b/hydrus/core/HydrusDBModule.py @@ -1,41 
+1,17 @@ import sqlite3 import typing -class HydrusDBModule( object ): +from hydrus.core import HydrusDBBase + +class HydrusDBModule( HydrusDBBase.DBBase ): def __init__( self, name, cursor: sqlite3.Cursor ): + HydrusDBBase.DBBase.__init__( self ) + self.name = name - self._c = cursor - - def _CreateIndex( self, table_name, columns, unique = False ): - - if '.' in table_name: - - table_name_simple = table_name.split( '.' )[1] - - else: - - table_name_simple = table_name - - - index_name = self._GenerateIndexName( table_name, columns ) - - if unique: - - create_phrase = 'CREATE UNIQUE INDEX IF NOT EXISTS ' - - else: - - create_phrase = 'CREATE INDEX IF NOT EXISTS ' - - - on_phrase = ' ON ' + table_name_simple + ' (' + ', '.join( columns ) + ');' - - statement = create_phrase + index_name + on_phrase - - self._c.execute( statement ) + self._SetCursor( cursor ) def _GetInitialIndexGenerationTuples( self ): @@ -43,32 +19,6 @@ class HydrusDBModule( object ): raise NotImplementedError() - def _GenerateIndexName( self, table_name, columns ): - - return '{}_{}_index'.format( table_name, '_'.join( columns ) ) - - - def _STI( self, iterable_cursor ): - - # strip singleton tuples to an iterator - - return ( item for ( item, ) in iterable_cursor ) - - - def _STL( self, iterable_cursor ): - - # strip singleton tuples to a list - - return [ item for ( item, ) in iterable_cursor ] - - - def _STS( self, iterable_cursor ): - - # strip singleton tuples to a set - - return { item for ( item, ) in iterable_cursor } - - def CreateInitialIndices( self ): index_generation_tuples = self._GetInitialIndexGenerationTuples() diff --git a/hydrus/core/HydrusGlobals.py b/hydrus/core/HydrusGlobals.py index b2beb5cf..9a573a65 100644 --- a/hydrus/core/HydrusGlobals.py +++ b/hydrus/core/HydrusGlobals.py @@ -36,6 +36,10 @@ menu_profile_min_job_time_ms = 16 pubsub_profile_min_job_time_ms = 5 ui_timer_profile_min_job_time_ms = 5 +query_planner_mode = False + +query_planner_start_time = 0 
+query_planner_query_count = 0
 profile_start_time = 0
 profile_slow_count = 0
 profile_fast_count = 0
diff --git a/hydrus/core/HydrusRatingArchive.py b/hydrus/core/HydrusRatingArchive.py
index f3f13d55..d1427917 100644
--- a/hydrus/core/HydrusRatingArchive.py
+++ b/hydrus/core/HydrusRatingArchive.py
@@ -51,7 +51,7 @@ class HydrusRatingArchive( object ):
         if not os.path.exists( self._path ): create_db = True
         else: create_db = False
-        self._InitDBCursor()
+        self._InitDBConnection()
         if create_db: self._InitDB()
@@ -65,7 +65,7 @@ class HydrusRatingArchive( object ):
         self._c.execute( 'CREATE TABLE ratings ( hash BLOB PRIMARY KEY, rating REAL );' )
-    def _InitDBCursor( self ):
+    def _InitDBConnection( self ):
         self._db = sqlite3.connect( self._path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )
diff --git a/hydrus/core/HydrusTagArchive.py b/hydrus/core/HydrusTagArchive.py
index 9a3c25ab..23bd7172 100644
--- a/hydrus/core/HydrusTagArchive.py
+++ b/hydrus/core/HydrusTagArchive.py
@@ -100,7 +100,7 @@ class HydrusTagArchive( object ):
         if not os.path.exists( self._path ): create_db = True
         else: create_db = False
-        self._InitDBCursor()
+        self._InitDBConnection()
         if create_db: self._InitDB()
@@ -129,7 +129,7 @@ class HydrusTagArchive( object ):
         self._c.execute( 'CREATE UNIQUE INDEX tags_tag_index ON tags ( tag );' )
-    def _InitDBCursor( self ):
+    def _InitDBConnection( self ):
         self._db = sqlite3.connect( self._path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )
@@ -496,7 +496,7 @@ class HydrusTagPairArchive( object ):
         is_new_db = not os.path.exists( self._path )
-        self._InitDBCursor()
+        self._InitDBConnection()
         if is_new_db:
@@ -525,7 +525,7 @@ class HydrusTagPairArchive( object ):
         self._c.execute( 'CREATE UNIQUE INDEX tags_tag_index ON tags ( tag );' )
-    def _InitDBCursor( self ):
+    def _InitDBConnection( self ):
         self._db = sqlite3.connect( self._path, isolation_level = None, detect_types = sqlite3.PARSE_DECLTYPES )
diff --git a/hydrus/core/networking/HydrusNetwork.py b/hydrus/core/networking/HydrusNetwork.py
index 2f00cd13..93631963 100644
--- a/hydrus/core/networking/HydrusNetwork.py
+++ b/hydrus/core/networking/HydrusNetwork.py
@@ -1972,16 +1972,22 @@ class Metadata( HydrusSerialisable.SerialisableBase ):
             self._metadata[ update_index ] = ( update_hashes, begin, end )
-        for update_index in sorted( self._metadata.keys() ):
-
-            ( update_hashes, begin, end ) = self._metadata[ update_index ]
+        self._RecalcHashes()
+
+        self._biggest_end = self._CalculateBiggestEnd()
+
+
+    def _RecalcHashes( self ):
+
+        self._update_hashes = set()
+        self._update_hashes_ordered = []
+
+        for ( update_index, ( update_hashes, begin, end ) ) in sorted( self._metadata.items() ):
             self._update_hashes.update( update_hashes )
             self._update_hashes_ordered.extend( update_hashes )
-        self._biggest_end = self._CalculateBiggestEnd()
-
     def AppendUpdate( self, update_hashes, begin, end, next_update_due ):
@@ -2021,9 +2027,7 @@ class Metadata( HydrusSerialisable.SerialisableBase ):
         with self._lock:
-            data = sorted( self._metadata.items() )
-
-            for ( update_index, ( update_hashes, begin, end ) ) in data:
+            for ( update_index, ( update_hashes, begin, end ) ) in sorted( self._metadata.items() ):
                 if HydrusData.SetsIntersect( hashes, update_hashes ):
@@ -2229,6 +2233,8 @@ class Metadata( HydrusSerialisable.SerialisableBase ):
         self._next_update_due = new_next_update_due
         self._biggest_end = self._CalculateBiggestEnd()
+        self._RecalcHashes()
+
     def UpdateIsEmpty( self, update_index ):
diff --git a/hydrus/external/SystemPredicateParser.py b/hydrus/external/SystemPredicateParser.py
index 2a923943..0ff25728 100644
--- a/hydrus/external/SystemPredicateParser.py
+++ b/hydrus/external/SystemPredicateParser.py
@@ -260,7 +260,7 @@ def parse_value(string, spec):
            return string[len(match[0]):], (hashes, distance)
        raise ValueError("Invalid value, expected a list of hashes with distance")
    elif spec == Value.HASHLIST_WITH_ALGORITHM:
-        match = re.match('(?P<hashes>([0-9a-f]+(\s|,)+)+[0-9a-f]+)((with\s+)?algorithm)?\s*(?P<algorithm>sha256|sha512|md5|sha1|)', string)
+        match = re.match('(?P<hashes>[0-9a-f]+((\s|,)+[0-9a-f]+)*)((with\s+)?algorithm)?\s*(?P<algorithm>sha256|sha512|md5|sha1|)', string)
        if match:
            hashes = set(hsh.strip() for hsh in re.sub('\s', ' ', match['hashes'].replace(',', ' ')).split(' ') if len(hsh) > 0)
            algorithm = match['algorithm'] if len(match['algorithm']) > 0 else 'sha256'
diff --git a/hydrus/server/ServerDB.py b/hydrus/server/ServerDB.py
index 517e4004..e08e6517 100644
--- a/hydrus/server/ServerDB.py
+++ b/hydrus/server/ServerDB.py
@@ -122,9 +122,9 @@ class DB( HydrusDB.HydrusDB ):
         dump = account_type.DumpToString()
-        self._c.execute( 'INSERT INTO account_types ( service_id, dump ) VALUES ( ?, ? );', ( service_id, dump ) )
+        self._Execute( 'INSERT INTO account_types ( service_id, dump ) VALUES ( ?, ? );', ( service_id, dump ) )
-        account_type_id = self._c.lastrowid
+        account_type_id = self._GetLastRowId()
         return account_type_id
@@ -135,7 +135,7 @@ class DB( HydrusDB.HydrusDB ):
         master_hash_id = self._GetMasterHashId( hash )
-        result = self._c.execute( 'SELECT 1 FROM files_info WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT 1 FROM files_info WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
         if result is None:
@@ -175,7 +175,7 @@ class DB( HydrusDB.HydrusDB ):
-        self._c.execute( 'INSERT OR IGNORE INTO files_info ( master_hash_id, size, mime, width, height, duration, num_frames, num_words ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ? );', ( master_hash_id, size, mime, width, height, duration, num_frames, num_words ) )
+        self._Execute( 'INSERT OR IGNORE INTO files_info ( master_hash_id, size, mime, width, height, duration, num_frames, num_words ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ? );', ( master_hash_id, size, mime, width, height, duration, num_frames, num_words ) )
         return master_hash_id
@@ -187,9 +187,9 @@ class DB( HydrusDB.HydrusDB ):
         dictionary_string = dictionary.DumpToString()
-        self._c.execute( 'INSERT INTO services ( service_key, service_type, name, port, dictionary_string ) VALUES ( ?, ?, ?, ?, ? );', ( sqlite3.Binary( service_key ), service_type, name, port, dictionary_string ) )
+        self._Execute( 'INSERT INTO services ( service_key, service_type, name, port, dictionary_string ) VALUES ( ?, ?, ?, ?, ? );', ( sqlite3.Binary( service_key ), service_type, name, port, dictionary_string ) )
-        service_id = self._c.lastrowid
+        service_id = self._GetLastRowId()
         #
@@ -246,22 +246,22 @@ class DB( HydrusDB.HydrusDB ):
         account_id = self._GetAccountId( account_key )
-        self._c.execute( 'INSERT INTO sessions ( session_key, service_id, account_id, expires ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( session_key ), service_id, account_id, expires ) )
+        self._Execute( 'INSERT INTO sessions ( session_key, service_id, account_id, expires ) VALUES ( ?, ?, ?, ? );', ( sqlite3.Binary( session_key ), service_id, account_id, expires ) )
     def _Analyze( self, maintenance_mode = HC.MAINTENANCE_FORCED, stop_time = None ):
         stale_time_delta = 30 * 86400
-        existing_names_to_timestamps = dict( self._c.execute( 'SELECT name, timestamp FROM analyze_timestamps;' ).fetchall() )
+        existing_names_to_timestamps = dict( self._Execute( 'SELECT name, timestamp FROM analyze_timestamps;' ).fetchall() )
-        db_names = [ name for ( index, name, path ) in self._c.execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ]
+        db_names = [ name for ( index, name, path ) in self._Execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ]
         all_names = set()
         for db_name in db_names:
-            all_names.update( ( name for ( name, ) in self._c.execute( 'SELECT name FROM ' + db_name + '.sqlite_master WHERE type = ?;', ( 'table', ) ) ) )
+            all_names.update( ( name for ( name, ) in self._Execute( 'SELECT name FROM ' + db_name + '.sqlite_master WHERE type = ?;', ( 'table', ) ) ) )
         all_names.discard( 'sqlite_stat1' )
@@ -285,11 +285,11 @@ class DB( HydrusDB.HydrusDB ):
         started = HydrusData.GetNowPrecise()
-        self._c.execute( 'ANALYZE ' + name + ';' )
+        self._Execute( 'ANALYZE ' + name + ';' )
-        self._c.execute( 'DELETE FROM analyze_timestamps WHERE name = ?;', ( name, ) )
+        self._Execute( 'DELETE FROM analyze_timestamps WHERE name = ?;', ( name, ) )
-        self._c.execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) )
+        self._Execute( 'INSERT OR IGNORE INTO analyze_timestamps ( name, timestamp ) VALUES ( ?, ? );', ( name, HydrusData.GetNow() ) )
         time_took = HydrusData.GetNowPrecise() - started
@@ -304,7 +304,7 @@ class DB( HydrusDB.HydrusDB ):
-        self._c.execute( 'ANALYZE sqlite_master;' ) # this reloads the current stats into the query planner
+        self._Execute( 'ANALYZE sqlite_master;' ) # this reloads the current stats into the query planner
         finally:
@@ -326,7 +326,7 @@ class DB( HydrusDB.HydrusDB ):
         try:
-            self._CloseDBCursor()
+            self._CloseDBConnection()
             backup_path = os.path.join( self._db_dir, 'server_backup' )
@@ -355,7 +355,7 @@ class DB( HydrusDB.HydrusDB ):
         HydrusData.Print( 'backing up: copying files' )
         HydrusPaths.MirrorTree( self._files_dir, os.path.join( backup_path, 'server_files' ) )
-        self._InitDBCursor()
+        self._InitDBConnection()
         HydrusData.Print( 'backing up: done!' )
@@ -376,34 +376,34 @@ class DB( HydrusDB.HydrusDB ):
         HydrusPaths.MakeSureDirectoryExists( new_dir )
-        self._c.execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY, service_key BLOB_BYTES, service_type INTEGER, name TEXT, port INTEGER, dictionary_string TEXT );' )
+        self._Execute( 'CREATE TABLE services ( service_id INTEGER PRIMARY KEY, service_key BLOB_BYTES, service_type INTEGER, name TEXT, port INTEGER, dictionary_string TEXT );' )
-        self._c.execute( 'CREATE TABLE accounts ( account_id INTEGER PRIMARY KEY, service_id INTEGER, account_key BLOB_BYTES, hashed_access_key BLOB_BYTES, account_type_id INTEGER, created INTEGER, expires INTEGER, dictionary_string TEXT );' )
-        self._c.execute( 'CREATE UNIQUE INDEX accounts_account_key_index ON accounts ( account_key );' )
-        self._c.execute( 'CREATE UNIQUE INDEX accounts_hashed_access_key_index ON accounts ( hashed_access_key );' )
+        self._Execute( 'CREATE TABLE accounts ( account_id INTEGER PRIMARY KEY, service_id INTEGER, account_key BLOB_BYTES, hashed_access_key BLOB_BYTES, account_type_id INTEGER, created INTEGER, expires INTEGER, dictionary_string TEXT );' )
+        self._Execute( 'CREATE UNIQUE INDEX accounts_account_key_index ON accounts ( account_key );' )
+        self._Execute( 'CREATE UNIQUE INDEX accounts_hashed_access_key_index ON accounts ( hashed_access_key );' )
-        self._c.execute( 'CREATE TABLE account_scores ( service_id INTEGER, account_id INTEGER, score_type INTEGER, score INTEGER, PRIMARY KEY ( service_id, account_id, score_type ) );' )
+        self._Execute( 'CREATE TABLE account_scores ( service_id INTEGER, account_id INTEGER, score_type INTEGER, score INTEGER, PRIMARY KEY ( service_id, account_id, score_type ) );' )
-        self._c.execute( 'CREATE TABLE account_types ( account_type_id INTEGER PRIMARY KEY, service_id INTEGER, dump TEXT );' )
+        self._Execute( 'CREATE TABLE account_types ( account_type_id INTEGER PRIMARY KEY, service_id INTEGER, dump TEXT );' )
-        self._c.execute( 'CREATE TABLE analyze_timestamps ( name TEXT, timestamp INTEGER );' )
+        self._Execute( 'CREATE TABLE analyze_timestamps ( name TEXT, timestamp INTEGER );' )
-        self._c.execute( 'CREATE TABLE files_info ( master_hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, num_words INTEGER );' )
+        self._Execute( 'CREATE TABLE files_info ( master_hash_id INTEGER PRIMARY KEY, size INTEGER, mime INTEGER, width INTEGER, height INTEGER, duration INTEGER, num_frames INTEGER, num_words INTEGER );' )
-        self._c.execute( 'CREATE TABLE reasons ( reason_id INTEGER PRIMARY KEY, reason TEXT );' )
-        self._c.execute( 'CREATE UNIQUE INDEX reasons_reason_index ON reasons ( reason );' )
+        self._Execute( 'CREATE TABLE reasons ( reason_id INTEGER PRIMARY KEY, reason TEXT );' )
+        self._Execute( 'CREATE UNIQUE INDEX reasons_reason_index ON reasons ( reason );' )
-        self._c.execute( 'CREATE TABLE registration_keys ( registration_key BLOB_BYTES PRIMARY KEY, service_id INTEGER, account_type_id INTEGER, account_key BLOB_BYTES, access_key BLOB_BYTES UNIQUE, expires INTEGER );' )
+        self._Execute( 'CREATE TABLE registration_keys ( registration_key BLOB_BYTES PRIMARY KEY, service_id INTEGER, account_type_id INTEGER, account_key BLOB_BYTES, access_key BLOB_BYTES UNIQUE, expires INTEGER );' )
-        self._c.execute( 'CREATE TABLE sessions ( session_key BLOB_BYTES, service_id INTEGER, account_id INTEGER, expires INTEGER );' )
+        self._Execute( 'CREATE TABLE sessions ( session_key BLOB_BYTES, service_id INTEGER, account_id INTEGER, expires INTEGER );' )
-        self._c.execute( 'CREATE TABLE version ( version INTEGER, year INTEGER, month INTEGER );' )
+        self._Execute( 'CREATE TABLE version ( version INTEGER, year INTEGER, month INTEGER );' )
         # master
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.hashes ( master_hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.hashes ( master_hash_id INTEGER PRIMARY KEY, hash BLOB_BYTES UNIQUE );' )
-        self._c.execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( master_tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
+        self._Execute( 'CREATE TABLE IF NOT EXISTS external_master.tags ( master_tag_id INTEGER PRIMARY KEY, tag TEXT UNIQUE );' )
         # inserts
@@ -411,7 +411,7 @@ class DB( HydrusDB.HydrusDB ):
         ( current_year, current_month ) = ( current_time_struct.tm_year, current_time_struct.tm_mon )
-        self._c.execute( 'INSERT INTO version ( version, year, month ) VALUES ( ?, ?, ? );', ( HC.SOFTWARE_VERSION, current_year, current_month ) )
+        self._Execute( 'INSERT INTO version ( version, year, month ) VALUES ( ?, ?, ? );', ( HC.SOFTWARE_VERSION, current_year, current_month ) )
         # set up server admin
@@ -445,23 +445,23 @@ class DB( HydrusDB.HydrusDB ):
         ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )
-        self._c.executemany( 'DELETE FROM ' + pending_files_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
-        self._c.executemany( 'DELETE FROM ' + petitioned_files_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + pending_files_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + petitioned_files_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
         ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )
-        self._c.executemany( 'DELETE FROM ' + pending_mappings_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
-        self._c.executemany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + pending_mappings_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
         ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )
-        self._c.executemany( 'DELETE FROM ' + pending_tag_parents_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
-        self._c.executemany( 'DELETE FROM ' + petitioned_tag_parents_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + pending_tag_parents_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + petitioned_tag_parents_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
         ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )
-        self._c.executemany( 'DELETE FROM ' + pending_tag_siblings_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
-        self._c.executemany( 'DELETE FROM ' + petitioned_tag_siblings_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + pending_tag_siblings_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + petitioned_tag_siblings_table_name + ' WHERE account_id = ?;', ( ( subject_account_id, ) for subject_account_id in subject_account_ids ) )
     def _DeleteService( self, service_key ):
@@ -471,13 +471,13 @@ class DB( HydrusDB.HydrusDB ):
         service_id = self._GetServiceId( service_key )
-        self._c.execute( 'DELETE FROM services WHERE service_id = ?;', ( service_id, ) )
+        self._Execute( 'DELETE FROM services WHERE service_id = ?;', ( service_id, ) )
-        self._c.execute( 'DELETE FROM accounts WHERE service_id = ?;', ( service_id, ) )
-        self._c.execute( 'DELETE FROM account_types WHERE service_id = ?;', ( service_id, ) )
-        self._c.execute( 'DELETE FROM account_scores WHERE service_id = ?;', ( service_id, ) )
-        self._c.execute( 'DELETE FROM registration_keys WHERE service_id = ?;', ( service_id, ) )
-        self._c.execute( 'DELETE FROM sessions WHERE service_id = ?;', ( service_id, ) )
+        self._Execute( 'DELETE FROM accounts WHERE service_id = ?;', ( service_id, ) )
+        self._Execute( 'DELETE FROM account_types WHERE service_id = ?;', ( service_id, ) )
+        self._Execute( 'DELETE FROM account_scores WHERE service_id = ?;', ( service_id, ) )
+        self._Execute( 'DELETE FROM registration_keys WHERE service_id = ?;', ( service_id, ) )
+        self._Execute( 'DELETE FROM sessions WHERE service_id = ?;', ( service_id, ) )
         if service_type in HC.REPOSITORIES:
@@ -500,7 +500,7 @@ class DB( HydrusDB.HydrusDB ):
         if account_type.IsNullAccount():
-            result = self._c.execute( 'SELECT 1 FROM accounts WHERE account_type_id = ?;', ( account_type_id, ) ).fetchone()
+            result = self._Execute( 'SELECT 1 FROM accounts WHERE account_type_id = ?;', ( account_type_id, ) ).fetchone()
             if result is not None:
@@ -519,7 +519,7 @@ class DB( HydrusDB.HydrusDB ):
         keys = [ ( force_registration_key, os.urandom( HC.HYDRUS_KEY_LENGTH ), os.urandom( HC.HYDRUS_KEY_LENGTH ) ) for i in range( num ) ]
-        self._c.executemany( 'INSERT INTO registration_keys ( registration_key, service_id, account_type_id, account_key, access_key, expires ) VALUES ( ?, ?, ?, ?, ?, ? );', [ ( sqlite3.Binary( hashlib.sha256( registration_key ).digest() ), service_id, account_type_id, sqlite3.Binary( account_key ), sqlite3.Binary( access_key ), expires ) for ( registration_key, account_key, access_key ) in keys ] )
+        self._ExecuteMany( 'INSERT INTO registration_keys ( registration_key, service_id, account_type_id, account_key, access_key, expires ) VALUES ( ?, ?, ?, ?, ?, ? );', [ ( sqlite3.Binary( hashlib.sha256( registration_key ).digest() ), service_id, account_type_id, sqlite3.Binary( account_key ), sqlite3.Binary( access_key ), expires ) for ( registration_key, account_key, access_key ) in keys ] )
         return [ registration_key for ( registration_key, account_key, access_key ) in keys ]
@@ -533,7 +533,7 @@ class DB( HydrusDB.HydrusDB ):
         registration_key_sha256 = hashlib.sha256( registration_key ).digest()
-        result = self._c.execute( 'SELECT 1 FROM registration_keys WHERE service_id = ? AND registration_key = ?;', ( service_id, sqlite3.Binary( registration_key_sha256 ) ) ).fetchone()
+        result = self._Execute( 'SELECT 1 FROM registration_keys WHERE service_id = ? AND registration_key = ?;', ( service_id, sqlite3.Binary( registration_key_sha256 ) ) ).fetchone()
         if result is None:
@@ -542,14 +542,14 @@ class DB( HydrusDB.HydrusDB ):
         new_access_key = os.urandom( HC.HYDRUS_KEY_LENGTH )
-        self._c.execute( 'UPDATE registration_keys SET access_key = ? WHERE service_id = ? AND registration_key = ?;', ( sqlite3.Binary( new_access_key ), service_id, sqlite3.Binary( registration_key_sha256 ) ) )
+        self._Execute( 'UPDATE registration_keys SET access_key = ? WHERE service_id = ? AND registration_key = ?;', ( sqlite3.Binary( new_access_key ), service_id, sqlite3.Binary( registration_key_sha256 ) ) )
         return new_access_key
     def _GetAccount( self, service_id, account_id ) -> HydrusNetwork.Account:
-        ( account_key, account_type_id, created, expires, dictionary_string ) = self._c.execute( 'SELECT account_key, account_type_id, created, expires, dictionary_string FROM accounts WHERE service_id = ? AND account_id = ?;', ( service_id, account_id ) ).fetchone()
+        ( account_key, account_type_id, created, expires, dictionary_string ) = self._Execute( 'SELECT account_key, account_type_id, created, expires, dictionary_string FROM accounts WHERE service_id = ? AND account_id = ?;', ( service_id, account_id ) ).fetchone()
         account_type = self._GetAccountType( service_id, account_type_id )
@@ -591,11 +591,11 @@ class DB( HydrusDB.HydrusDB ):
         ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )
-        result = self._c.execute( 'SELECT account_id FROM ' + current_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT account_id FROM ' + current_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ).fetchone()
         if result is None:
-            result = self._c.execute( 'SELECT account_id FROM ' + deleted_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ).fetchone()
+            result = self._Execute( 'SELECT account_id FROM ' + deleted_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ).fetchone()
             if result is None:
@@ -642,11 +642,11 @@ class DB( HydrusDB.HydrusDB ):
         ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )
-        result = self._c.execute( 'SELECT account_id FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( service_tag_id, service_hash_id ) ).fetchone()
+        result = self._Execute( 'SELECT account_id FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( service_tag_id, service_hash_id ) ).fetchone()
         if result is None:
-            result = self._c.execute( 'SELECT account_id FROM ' + deleted_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( service_tag_id, service_hash_id ) ).fetchone()
+            result = self._Execute( 'SELECT account_id FROM ' + deleted_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( service_tag_id, service_hash_id ) ).fetchone()
             if result is None:
@@ -679,7 +679,7 @@ class DB( HydrusDB.HydrusDB ):
         service_id = self._GetServiceId( service_key )
-        result = self._c.execute( 'SELECT account_key FROM accounts WHERE service_id = ? AND hashed_access_key = ?;', ( service_id, sqlite3.Binary( hashlib.sha256( access_key ).digest() ), ) ).fetchone()
+        result = self._Execute( 'SELECT account_key FROM accounts WHERE service_id = ? AND hashed_access_key = ?;', ( service_id, sqlite3.Binary( hashlib.sha256( access_key ).digest() ), ) ).fetchone()
         if result is None:
@@ -689,14 +689,14 @@ class DB( HydrusDB.HydrusDB ):
         try:
-            ( account_type_id, account_key, expires ) = self._c.execute( 'SELECT account_type_id, account_key, expires FROM registration_keys WHERE access_key = ?;', ( sqlite3.Binary( access_key ), ) ).fetchone()
+            ( account_type_id, account_key, expires ) = self._Execute( 'SELECT account_type_id, account_key, expires FROM registration_keys WHERE access_key = ?;', ( sqlite3.Binary( access_key ), ) ).fetchone()
         except:
             raise HydrusExceptions.InsufficientCredentialsException( 'The service could not find that account in its database.' )
-        self._c.execute( 'DELETE FROM registration_keys WHERE access_key = ?;', ( sqlite3.Binary( access_key ), ) )
+        self._Execute( 'DELETE FROM registration_keys WHERE access_key = ?;', ( sqlite3.Binary( access_key ), ) )
         #
@@ -712,7 +712,7 @@ class DB( HydrusDB.HydrusDB ):
         dictionary_string = dictionary.DumpToString()
-        self._c.execute( 'INSERT INTO accounts ( service_id, account_key, hashed_access_key, account_type_id, created, expires, dictionary_string ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', ( service_id, sqlite3.Binary( account_key ), sqlite3.Binary( hashed_access_key ), account_type_id, created, expires, dictionary_string ) )
+        self._Execute( 'INSERT INTO accounts ( service_id, account_key, hashed_access_key, account_type_id, created, expires, dictionary_string ) VALUES ( ?, ?, ?, ?, ?, ?, ? );', ( service_id, sqlite3.Binary( account_key ), sqlite3.Binary( hashed_access_key ), account_type_id, created, expires, dictionary_string ) )
         else:
@@ -724,7 +724,7 @@ class DB( HydrusDB.HydrusDB ):
     def _GetAccountKeyFromAccountId( self, account_id ):
-        try: ( account_key, ) = self._c.execute( 'SELECT account_key FROM accounts WHERE account_id = ?;', ( account_id, ) ).fetchone()
+        try: ( account_key, ) = self._Execute( 'SELECT account_key FROM accounts WHERE account_id = ?;', ( account_id, ) ).fetchone()
         except: raise HydrusExceptions.InsufficientCredentialsException( 'The service could not find that account_id in its database.' )
         return account_key
@@ -732,7 +732,7 @@ class DB( HydrusDB.HydrusDB ):
     def _GetAccountId( self, account_key ):
-        result = self._c.execute( 'SELECT account_id FROM accounts WHERE account_key = ?;', ( sqlite3.Binary( account_key ), ) ).fetchone()
+        result = self._Execute( 'SELECT account_id FROM accounts WHERE account_key = ?;', ( sqlite3.Binary( account_key ), ) ).fetchone()
         if result is None:
@@ -806,7 +806,7 @@ class DB( HydrusDB.HydrusDB ):
         service_id = self._GetServiceId( service_key )
-        account_ids = self._STL( self._c.execute( 'SELECT account_id FROM accounts WHERE service_id = ?;', ( service_id, ) ) )
+        account_ids = self._STL( self._Execute( 'SELECT account_id FROM accounts WHERE service_id = ?;', ( service_id, ) ) )
         accounts = [ self._GetAccount( service_id, account_id ) for account_id in account_ids ]
@@ -847,14 +847,14 @@ class DB( HydrusDB.HydrusDB ):
         account_type.ReportAutoCreateAccount()
-        self._c.execute( 'UPDATE account_types SET dump = ? WHERE service_id = ? AND account_type_id = ?;', ( account_type.DumpToString(), service_id, account_type_id ) )
+        self._Execute( 'UPDATE account_types SET dump = ? WHERE service_id = ? AND account_type_id = ?;', ( account_type.DumpToString(), service_id, account_type_id ) )
         return list( self._GenerateRegistrationKeys( service_id, num, account_type_id, expires ) )[0]
     def _GetHash( self, master_hash_id ):
-        result = self._c.execute( 'SELECT hash FROM hashes WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT hash FROM hashes WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
         if result is None:
@@ -875,13 +875,13 @@ class DB( HydrusDB.HydrusDB ):
     def _GetMasterHashId( self, hash ):
-        result = self._c.execute( 'SELECT master_hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+        result = self._Execute( 'SELECT master_hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
         if result is None:
-            self._c.execute( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( sqlite3.Binary( hash ), ) )
+            self._Execute( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( sqlite3.Binary( hash ), ) )
-            master_hash_id = self._c.lastrowid
+            master_hash_id = self._GetLastRowId()
         return master_hash_id
@@ -905,7 +905,7 @@ class DB( HydrusDB.HydrusDB ):
                 continue
-            result = self._c.execute( 'SELECT master_hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+            result = self._Execute( 'SELECT master_hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
             if result is None:
@@ -921,11 +921,11 @@ class DB( HydrusDB.HydrusDB ):
         if len( hashes_not_in_db ) > 0:
-            self._c.executemany( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( ( sqlite3.Binary( hash ), ) for hash in hashes_not_in_db ) )
+            self._ExecuteMany( 'INSERT INTO hashes ( hash ) VALUES ( ? );', ( ( sqlite3.Binary( hash ), ) for hash in hashes_not_in_db ) )
             for hash in hashes_not_in_db:
-                ( master_hash_id, ) = self._c.execute( 'SELECT master_hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+                ( master_hash_id, ) = self._Execute( 'SELECT master_hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
                 master_hash_ids.add( master_hash_id )
@@ -940,13 +940,13 @@ class DB( HydrusDB.HydrusDB ):
         HydrusTags.CheckTagNotEmpty( tag )
-        result = self._c.execute( 'SELECT master_tag_id FROM tags WHERE tag = ?;', ( tag, ) ).fetchone()
+        result = self._Execute( 'SELECT master_tag_id FROM tags WHERE tag = ?;', ( tag, ) ).fetchone()
         if result is None:
-            self._c.execute( 'INSERT INTO tags ( tag ) VALUES ( ? );', ( tag, ) )
+            self._Execute( 'INSERT INTO tags ( tag ) VALUES ( ? );', ( tag, ) )
-            master_tag_id = self._c.lastrowid
+            master_tag_id = self._GetLastRowId()
         return master_tag_id
@@ -962,14 +962,14 @@ class DB( HydrusDB.HydrusDB ):
         service_id = self._GetServiceId( service_key )
-        ( options, ) = self._c.execute( 'SELECT options FROM services WHERE service_id = ?;', ( service_id, ) ).fetchone()
+        ( options, ) = self._Execute( 'SELECT options FROM services WHERE service_id = ?;', ( service_id, ) ).fetchone()
         return options
     def _GetReason( self, reason_id ):
-        result = self._c.execute( 'SELECT reason FROM reasons WHERE reason_id = ?;', ( reason_id, ) ).fetchone()
+        result = self._Execute( 'SELECT reason FROM reasons WHERE reason_id = ?;', ( reason_id, ) ).fetchone()
         if result is None: raise Exception( 'Reason error in database' )
@@ -980,13 +980,13 @@ class DB( HydrusDB.HydrusDB ):
     def _GetReasonId( self, reason ):
-        result = self._c.execute( 'SELECT reason_id FROM reasons WHERE reason = ?;', ( reason, ) ).fetchone()
+        result = self._Execute( 'SELECT reason_id FROM reasons WHERE reason = ?;', ( reason, ) ).fetchone()
         if result is None:
-            self._c.execute( 'INSERT INTO reasons ( reason ) VALUES ( ? );', ( reason, ) )
+            self._Execute( 'INSERT INTO reasons ( reason ) VALUES ( ? );', ( reason, ) )
-            reason_id = self._c.lastrowid
+            reason_id = self._GetLastRowId()
         return reason_id
@@ -1000,7 +1000,7 @@ class DB( HydrusDB.HydrusDB ):
     def _GetServiceId( self, service_key ):
-        result = self._c.execute( 'SELECT service_id FROM services WHERE service_key = ?;', ( sqlite3.Binary( service_key ), ) ).fetchone()
+        result = self._Execute( 'SELECT service_id FROM services WHERE service_key = ?;', ( sqlite3.Binary( service_key ), ) ).fetchone()
         if result is None:
@@ -1014,24 +1014,24 @@ class DB( HydrusDB.HydrusDB ):
     def _GetServiceIds( self, limited_types = HC.ALL_SERVICES ):
-        return [ service_id for ( service_id, ) in self._c.execute( 'SELECT service_id FROM services WHERE service_type IN ' + HydrusData.SplayListForDB( limited_types ) + ';' ) ]
+        return [ service_id for ( service_id, ) in self._Execute( 'SELECT service_id FROM services WHERE service_type IN ' + HydrusData.SplayListForDB( limited_types ) + ';' ) ]
     def _GetServiceKey( self, service_id ):
-        ( service_key, ) = self._c.execute( 'SELECT service_key FROM services WHERE service_id = ?;', ( service_id, ) ).fetchone()
+        ( service_key, ) = self._Execute( 'SELECT service_key FROM services WHERE service_id = ?;', ( service_id, ) ).fetchone()
         return service_key
     def _GetServiceKeys( self, limited_types = HC.ALL_SERVICES ):
-        return [ service_key for ( service_key, ) in self._c.execute( 'SELECT service_key FROM services WHERE service_type IN '+ HydrusData.SplayListForDB( limited_types ) + ';' ) ]
+        return [ service_key for ( service_key, ) in self._Execute( 'SELECT service_key FROM services WHERE service_type IN '+ HydrusData.SplayListForDB( limited_types ) + ';' ) ]
     def _GetServiceType( self, service_id ):
-        result = self._c.execute( 'SELECT service_type FROM services WHERE service_id = ?;', ( service_id, ) ).fetchone()
+        result = self._Execute( 'SELECT service_type FROM services WHERE service_id = ?;', ( service_id, ) ).fetchone()
         if result is None: raise Exception( 'Service id error in database' )
@@ -1044,7 +1044,7 @@ class DB( HydrusDB.HydrusDB ):
         services = []
-        service_info = self._c.execute( 'SELECT service_key, service_type, name, port, dictionary_string FROM services WHERE service_type IN ' + HydrusData.SplayListForDB( limited_types ) + ';' ).fetchall()
+        service_info = self._Execute( 'SELECT service_key, service_type, name, port, dictionary_string FROM services WHERE service_type IN ' + HydrusData.SplayListForDB( limited_types ) + ';' ).fetchall()
         for ( service_key, service_type, name, port, dictionary_string ) in service_info:
@@ -1067,19 +1067,19 @@ class DB( HydrusDB.HydrusDB ):
         now = HydrusData.GetNow()
-        self._c.execute( 'DELETE FROM sessions WHERE ? > expires;', ( now, ) )
+        self._Execute( 'DELETE FROM sessions WHERE ? > expires;', ( now, ) )
         sessions = []
         if service_key is None:
-            results = self._c.execute( 'SELECT session_key, service_id, account_id, expires FROM sessions;' ).fetchall()
+            results = self._Execute( 'SELECT session_key, service_id, account_id, expires FROM sessions;' ).fetchall()
         else:
             service_id = self._GetServiceId( service_key)
-            results = self._c.execute( 'SELECT session_key, service_id, account_id, expires FROM sessions WHERE service_id = ?;', ( service_id, ) ).fetchall()
+            results = self._Execute( 'SELECT session_key, service_id, account_id, expires FROM sessions WHERE service_id = ?;', ( service_id, ) ).fetchall()
         service_ids_to_service_keys = {}
@@ -1108,7 +1108,7 @@ class DB( HydrusDB.HydrusDB ):
         if account_id not in account_ids_to_hashed_access_keys:
-            ( hashed_access_key, ) = self._c.execute( 'SELECT hashed_access_key FROM accounts WHERE account_id = ?;', ( account_id, ) ).fetchone()
+            ( hashed_access_key, ) = self._Execute( 'SELECT hashed_access_key FROM accounts WHERE account_id = ?;', ( account_id, ) ).fetchone()
             account_ids_to_hashed_access_keys[ account_id ] = hashed_access_key
@@ -1123,7 +1123,7 @@ class DB( HydrusDB.HydrusDB ):
     def _GetTag( self, master_tag_id ):
-        result = self._c.execute( 'SELECT tag FROM tags WHERE master_tag_id = ?;', ( master_tag_id, ) ).fetchone()
+        result = self._Execute( 'SELECT tag FROM tags WHERE master_tag_id = ?;', ( master_tag_id, ) ).fetchone()
         if result is None:
@@ -1172,7 +1172,7 @@ class DB( HydrusDB.HydrusDB ):
     def _MasterHashExists( self, hash ):
-        result = self._c.execute( 'SELECT master_hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
+        result = self._Execute( 'SELECT master_hash_id FROM hashes WHERE hash = ?;', ( sqlite3.Binary( hash ), ) ).fetchone()
         if result is None:
@@ -1186,7 +1186,7 @@ class DB( HydrusDB.HydrusDB ):
     def _MasterTagExists( self, tag ):
-        result = self._c.execute( 'SELECT master_tag_id FROM tags WHERE tag = ?;', ( tag, ) ).fetchone()
+        result = self._Execute( 'SELECT master_tag_id FROM tags WHERE tag = ?;', ( tag, ) ).fetchone()
         if result is None:
@@ -1223,7 +1223,7 @@ class DB( HydrusDB.HydrusDB ):
             raise HydrusExceptions.BadRequestException( 'You cannot reassign anyone to the null account!' )
-        self._c.execute( 'UPDATE accounts SET account_type_id = ? WHERE account_id = ?;', ( new_account_type_id, subject_account_id ) )
+        self._Execute( 'UPDATE accounts SET account_type_id = ? WHERE account_id = ?;', ( new_account_type_id, subject_account_id ) )
         HG.server_controller.pub( 'update_session_accounts', service_key, ( subject_account_key, ) )
@@ -1286,9 +1286,9 @@ class DB( HydrusDB.HydrusDB ):
             raise HydrusExceptions.BadRequestException( 'You cannot modify the null account!' )
-        ( current_expires, ) = self._c.execute( 'SELECT expires FROM accounts WHERE account_id = ?;', ( subject_account_id, ) ).fetchone()
+        ( current_expires, ) = self._Execute( 'SELECT expires FROM accounts WHERE account_id = ?;', ( subject_account_id, ) ).fetchone()
-        self._c.execute( 'UPDATE accounts SET expires = ? WHERE account_id = ?;', ( new_expires, subject_account_id ) )
+        self._Execute( 'UPDATE accounts SET expires = ? WHERE account_id = ?;', ( new_expires, subject_account_id ) )
         HG.server_controller.pub( 'update_session_accounts', service_key, ( subject_account_key, ) )
@@ -1433,11 +1433,11 @@ class DB( HydrusDB.HydrusDB ):
         dump = account_type.DumpToString()
-        ( existing_dump, ) = self._c.execute( 'SELECT dump FROM account_types WHERE service_id = ? AND account_type_id = ?;', ( service_id, account_type_id ) ).fetchone()
+        ( existing_dump, ) = self._Execute( 'SELECT dump FROM account_types WHERE service_id = ? AND account_type_id = ?;', ( service_id, account_type_id ) ).fetchone()
         if dump != existing_dump:
-            self._c.execute( 'UPDATE account_types SET dump = ? WHERE service_id = ? AND account_type_id = ?;', ( dump, service_id, account_type_id ) )
+            self._Execute( 'UPDATE account_types SET dump = ? WHERE service_id = ? AND account_type_id = ?;', ( dump, service_id, account_type_id ) )
             HydrusData.Print( 'Account {} updated the account type, "{}".'.format(
@@ -1456,10 +1456,10 @@ class DB( HydrusDB.HydrusDB ):
         deletee_account_type_id = modification_account_type_keys_to_account_type_ids[ deletee_account_type_key ]
         new_account_type_id = modification_account_type_keys_to_account_type_ids[ new_account_type_key ]
-        self._c.execute( 'UPDATE accounts SET account_type_id = ? WHERE service_id = ? AND account_type_id = ?;', ( new_account_type_id, service_id, deletee_account_type_id ) )
-        self._c.execute( 'UPDATE registration_keys SET account_type_id = ? WHERE service_id = ? AND account_type_id = ?;', ( new_account_type_id, service_id, deletee_account_type_id ) )
+        self._Execute( 'UPDATE accounts SET account_type_id = ? WHERE service_id = ? AND account_type_id = ?;', ( new_account_type_id, service_id, deletee_account_type_id ) )
+        self._Execute( 'UPDATE registration_keys SET account_type_id = ? WHERE service_id = ? AND account_type_id = ?;', ( new_account_type_id, service_id, deletee_account_type_id ) )
-        self._c.execute( 'DELETE FROM account_types WHERE service_id = ? AND account_type_id = ?;', ( service_id, deletee_account_type_id ) )
+        self._Execute( 'DELETE FROM account_types WHERE service_id = ? AND account_type_id = ?;', ( service_id, deletee_account_type_id ) )
         deletee_account_type = current_account_type_keys_to_account_types[ deletee_account_type_key ]
         new_account_type = future_account_type_keys_to_account_types[ new_account_type_key ]
@@ -1481,7 +1481,7 @@ class DB( HydrusDB.HydrusDB ):
     def _ModifyServices( self, account, services ):
-        current_service_keys = { service_key for ( service_key, ) in self._c.execute( 'SELECT service_key FROM services;' ) }
+        current_service_keys = { service_key for ( service_key, ) in self._Execute( 'SELECT service_key FROM services;' ) }
         future_service_keys = { service.GetServiceKey() for service in services }
@@ -1507,7 +1507,7 @@ class DB( HydrusDB.HydrusDB ):
         dictionary_string = dictionary.DumpToString()
-        self._c.execute( 'UPDATE services SET name = ?, port = ?, dictionary_string = ? WHERE service_id = ?;', ( name, port, dictionary_string, service_id ) )
+        self._Execute( 'UPDATE services SET name = ?, port = ?, dictionary_string = ?
WHERE service_id = ?;', ( name, port, dictionary_string, service_id ) ) else: @@ -1554,7 +1554,7 @@ class DB( HydrusDB.HydrusDB ): self._account_type_ids_to_account_types = {} self._service_ids_to_account_type_keys_to_account_type_ids = collections.defaultdict( dict ) - data = self._c.execute( 'SELECT account_type_id, service_id, dump FROM account_types;' ).fetchall() + data = self._Execute( 'SELECT account_type_id, service_id, dump FROM account_types;' ).fetchall() for ( account_type_id, service_id, dump ) in data: @@ -1566,7 +1566,7 @@ class DB( HydrusDB.HydrusDB ): if account_type.IsNullAccount(): - result = self._c.execute( 'SELECT account_id FROM accounts WHERE account_type_id = ?;', ( account_type_id, ) ).fetchone() + result = self._Execute( 'SELECT account_id FROM accounts WHERE account_type_id = ?;', ( account_type_id, ) ).fetchone() if result is not None: @@ -1591,10 +1591,10 @@ class DB( HydrusDB.HydrusDB ): ip = file_dict[ 'ip' ] - self._c.execute( 'INSERT INTO ' + ip_addresses_table_name + ' ( master_hash_id, ip, ip_timestamp ) VALUES ( ?, ?, ? );', ( master_hash_id, ip, timestamp ) ) + self._Execute( 'INSERT INTO ' + ip_addresses_table_name + ' ( master_hash_id, ip, ip_timestamp ) VALUES ( ?, ?, ? 
);', ( master_hash_id, ip, timestamp ) ) - result = self._c.execute( 'SELECT 1 FROM ' + current_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM ' + current_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ).fetchone() if result is not None: @@ -1605,12 +1605,12 @@ class DB( HydrusDB.HydrusDB ): #self._RepositoryRewardFilePenders( service_id, service_hash_id, 1 ) - #self._c.execute( 'DELETE FROM ' + pending_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ) - self._c.execute( 'DELETE FROM ' + deleted_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ) + #self._Execute( 'DELETE FROM ' + pending_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ) + self._Execute( 'DELETE FROM ' + deleted_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ) else: - result = self._c.execute( 'SELECT 1 FROM ' + deleted_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM ' + deleted_files_table_name + ' WHERE service_hash_id = ?;', ( service_hash_id, ) ).fetchone() if result is not None: @@ -1618,7 +1618,7 @@ class DB( HydrusDB.HydrusDB ): - self._c.execute( 'INSERT INTO ' + current_files_table_name + ' ( service_hash_id, account_id, file_timestamp ) VALUES ( ?, ?, ? );', ( service_hash_id, account_id, timestamp ) ) + self._Execute( 'INSERT INTO ' + current_files_table_name + ' ( service_hash_id, account_id, file_timestamp ) VALUES ( ?, ?, ? 
);', ( service_hash_id, account_id, timestamp ) ) def _RepositoryAddMappings( self, service_id, account_id, master_tag_id, master_hash_ids, overwrite_deleted, timestamp ): @@ -1632,8 +1632,8 @@ class DB( HydrusDB.HydrusDB ): #self._RepositoryRewardMappingPenders( service_id, service_tag_id, service_hash_ids, 1 ) - #self._c.executemany( 'DELETE FROM ' + pending_mappings_table_name + ' WHERE master_tag_id = ? AND master_hash_id = ?;', ( ( master_tag_id, master_hash_id ) for master_hash_id in master_hash_ids ) ) - self._c.executemany( 'DELETE FROM ' + deleted_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( ( service_tag_id, service_hash_id ) for service_hash_id in service_hash_ids ) ) + #self._ExecuteMany( 'DELETE FROM ' + pending_mappings_table_name + ' WHERE master_tag_id = ? AND master_hash_id = ?;', ( ( master_tag_id, master_hash_id ) for master_hash_id in master_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + deleted_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( ( service_tag_id, service_hash_id ) for service_hash_id in service_hash_ids ) ) else: @@ -1646,7 +1646,7 @@ class DB( HydrusDB.HydrusDB ): # in future, delete from pending with the master ids here - self._c.executemany( 'INSERT OR IGNORE INTO ' + current_mappings_table_name + ' ( service_tag_id, service_hash_id, account_id, mapping_timestamp ) VALUES ( ?, ?, ?, ? );', [ ( service_tag_id, service_hash_id, account_id, timestamp ) for service_hash_id in service_hash_ids ] ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + current_mappings_table_name + ' ( service_tag_id, service_hash_id, account_id, mapping_timestamp ) VALUES ( ?, ?, ?, ? 
);', [ ( service_tag_id, service_hash_id, account_id, timestamp ) for service_hash_id in service_hash_ids ] ) def _RepositoryAddTagParent( self, service_id, account_id, child_master_tag_id, parent_master_tag_id, overwrite_deleted, timestamp ): @@ -1660,12 +1660,12 @@ class DB( HydrusDB.HydrusDB ): self._RepositoryRewardTagParentPenders( service_id, child_master_tag_id, parent_master_tag_id, 1 ) - self._c.execute( 'DELETE FROM ' + pending_tag_parents_table_name + ' WHERE child_master_tag_id = ? AND parent_master_tag_id = ?;', ( child_master_tag_id, parent_master_tag_id ) ) - self._c.execute( 'DELETE FROM ' + deleted_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ) + self._Execute( 'DELETE FROM ' + pending_tag_parents_table_name + ' WHERE child_master_tag_id = ? AND parent_master_tag_id = ?;', ( child_master_tag_id, parent_master_tag_id ) ) + self._Execute( 'DELETE FROM ' + deleted_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ) else: - result = self._c.execute( 'SELECT 1 FROM ' + deleted_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM ' + deleted_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ).fetchone() if result is not None: @@ -1673,7 +1673,7 @@ class DB( HydrusDB.HydrusDB ): - self._c.execute( 'INSERT OR IGNORE INTO ' + current_tag_parents_table_name + ' ( child_service_tag_id, parent_service_tag_id, account_id, parent_timestamp ) VALUES ( ?, ?, ?, ? 
);', ( child_service_tag_id, parent_service_tag_id, account_id, timestamp ) ) + self._Execute( 'INSERT OR IGNORE INTO ' + current_tag_parents_table_name + ' ( child_service_tag_id, parent_service_tag_id, account_id, parent_timestamp ) VALUES ( ?, ?, ?, ? );', ( child_service_tag_id, parent_service_tag_id, account_id, timestamp ) ) def _RepositoryAddTagSibling( self, service_id, account_id, bad_master_tag_id, good_master_tag_id, overwrite_deleted, timestamp ): @@ -1687,12 +1687,12 @@ class DB( HydrusDB.HydrusDB ): self._RepositoryRewardTagSiblingPenders( service_id, bad_master_tag_id, good_master_tag_id, 1 ) - self._c.execute( 'DELETE FROM ' + pending_tag_siblings_table_name + ' WHERE bad_master_tag_id = ? AND good_master_tag_id = ?;', ( bad_master_tag_id, good_master_tag_id ) ) - self._c.execute( 'DELETE FROM ' + deleted_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ) + self._Execute( 'DELETE FROM ' + pending_tag_siblings_table_name + ' WHERE bad_master_tag_id = ? AND good_master_tag_id = ?;', ( bad_master_tag_id, good_master_tag_id ) ) + self._Execute( 'DELETE FROM ' + deleted_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ) else: - result = self._c.execute( 'SELECT 1 FROM ' + deleted_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM ' + deleted_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ).fetchone() if result is not None: @@ -1700,105 +1700,105 @@ class DB( HydrusDB.HydrusDB ): - self._c.execute( 'INSERT OR IGNORE INTO ' + current_tag_siblings_table_name + ' ( bad_service_tag_id, good_service_tag_id, account_id, sibling_timestamp ) VALUES ( ?, ?, ?, ? 
);', ( bad_service_tag_id, good_service_tag_id, account_id, timestamp ) ) + self._Execute( 'INSERT OR IGNORE INTO ' + current_tag_siblings_table_name + ' ( bad_service_tag_id, good_service_tag_id, account_id, sibling_timestamp ) VALUES ( ?, ?, ?, ? );', ( bad_service_tag_id, good_service_tag_id, account_id, timestamp ) ) def _RepositoryCreate( self, service_id ): ( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id ) - self._c.execute( 'CREATE TABLE ' + hash_id_map_table_name + ' ( service_hash_id INTEGER PRIMARY KEY, master_hash_id INTEGER UNIQUE, hash_id_timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE ' + hash_id_map_table_name + ' ( service_hash_id INTEGER PRIMARY KEY, master_hash_id INTEGER UNIQUE, hash_id_timestamp INTEGER );' ) self._CreateIndex( hash_id_map_table_name, [ 'hash_id_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + tag_id_map_table_name + ' ( service_tag_id INTEGER PRIMARY KEY, master_tag_id INTEGER UNIQUE, tag_id_timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE ' + tag_id_map_table_name + ' ( service_tag_id INTEGER PRIMARY KEY, master_tag_id INTEGER UNIQUE, tag_id_timestamp INTEGER );' ) self._CreateIndex( tag_id_map_table_name, [ 'tag_id_timestamp' ] ) # ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id ) - self._c.execute( 'CREATE TABLE ' + current_files_table_name + ' ( service_hash_id INTEGER PRIMARY KEY, account_id INTEGER, file_timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE ' + current_files_table_name + ' ( service_hash_id INTEGER PRIMARY KEY, account_id INTEGER, file_timestamp INTEGER );' ) self._CreateIndex( current_files_table_name, [ 'account_id' ] ) self._CreateIndex( current_files_table_name, [ 'file_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + deleted_files_table_name + ' ( service_hash_id INTEGER PRIMARY KEY, account_id 
INTEGER, file_timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE ' + deleted_files_table_name + ' ( service_hash_id INTEGER PRIMARY KEY, account_id INTEGER, file_timestamp INTEGER );' ) self._CreateIndex( deleted_files_table_name, [ 'account_id' ] ) self._CreateIndex( deleted_files_table_name, [ 'file_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + pending_files_table_name + ' ( master_hash_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( master_hash_id, account_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + pending_files_table_name + ' ( master_hash_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( master_hash_id, account_id ) ) WITHOUT ROWID;' ) self._CreateIndex( pending_files_table_name, [ 'account_id', 'reason_id' ] ) - self._c.execute( 'CREATE TABLE ' + petitioned_files_table_name + ' ( service_hash_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( service_hash_id, account_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + petitioned_files_table_name + ' ( service_hash_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( service_hash_id, account_id ) ) WITHOUT ROWID;' ) self._CreateIndex( petitioned_files_table_name, [ 'account_id', 'reason_id' ] ) - self._c.execute( 'CREATE TABLE ' + ip_addresses_table_name + ' ( master_hash_id INTEGER, ip TEXT, ip_timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE ' + ip_addresses_table_name + ' ( master_hash_id INTEGER, ip TEXT, ip_timestamp INTEGER );' ) # ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id ) - self._c.execute( 'CREATE TABLE ' + current_mappings_table_name + ' ( service_tag_id INTEGER, service_hash_id INTEGER, account_id INTEGER, mapping_timestamp INTEGER, PRIMARY KEY ( service_tag_id, service_hash_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + current_mappings_table_name + ' ( 
service_tag_id INTEGER, service_hash_id INTEGER, account_id INTEGER, mapping_timestamp INTEGER, PRIMARY KEY ( service_tag_id, service_hash_id ) ) WITHOUT ROWID;' ) self._CreateIndex( current_mappings_table_name, [ 'account_id' ] ) self._CreateIndex( current_mappings_table_name, [ 'mapping_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + deleted_mappings_table_name + ' ( service_tag_id INTEGER, service_hash_id INTEGER, account_id INTEGER, mapping_timestamp INTEGER, PRIMARY KEY ( service_tag_id, service_hash_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + deleted_mappings_table_name + ' ( service_tag_id INTEGER, service_hash_id INTEGER, account_id INTEGER, mapping_timestamp INTEGER, PRIMARY KEY ( service_tag_id, service_hash_id ) ) WITHOUT ROWID;' ) self._CreateIndex( deleted_mappings_table_name, [ 'account_id' ] ) self._CreateIndex( deleted_mappings_table_name, [ 'mapping_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + pending_mappings_table_name + ' ( master_tag_id INTEGER, master_hash_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( master_tag_id, master_hash_id, account_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + pending_mappings_table_name + ' ( master_tag_id INTEGER, master_hash_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( master_tag_id, master_hash_id, account_id ) ) WITHOUT ROWID;' ) self._CreateIndex( pending_mappings_table_name, [ 'account_id', 'reason_id' ] ) - self._c.execute( 'CREATE TABLE ' + petitioned_mappings_table_name + ' ( service_tag_id INTEGER, service_hash_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( service_tag_id, service_hash_id, account_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + petitioned_mappings_table_name + ' ( service_tag_id INTEGER, service_hash_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( service_tag_id, service_hash_id, account_id ) ) WITHOUT ROWID;' ) self._CreateIndex( petitioned_mappings_table_name, [ 
'account_id', 'reason_id' ] ) # ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id ) - self._c.execute( 'CREATE TABLE ' + current_tag_parents_table_name + ' ( child_service_tag_id INTEGER, parent_service_tag_id INTEGER, account_id INTEGER, parent_timestamp INTEGER, PRIMARY KEY ( child_service_tag_id, parent_service_tag_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + current_tag_parents_table_name + ' ( child_service_tag_id INTEGER, parent_service_tag_id INTEGER, account_id INTEGER, parent_timestamp INTEGER, PRIMARY KEY ( child_service_tag_id, parent_service_tag_id ) ) WITHOUT ROWID;' ) self._CreateIndex( current_tag_parents_table_name, [ 'account_id' ] ) self._CreateIndex( current_tag_parents_table_name, [ 'parent_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + deleted_tag_parents_table_name + ' ( child_service_tag_id INTEGER, parent_service_tag_id INTEGER, account_id INTEGER, parent_timestamp INTEGER, PRIMARY KEY ( child_service_tag_id, parent_service_tag_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + deleted_tag_parents_table_name + ' ( child_service_tag_id INTEGER, parent_service_tag_id INTEGER, account_id INTEGER, parent_timestamp INTEGER, PRIMARY KEY ( child_service_tag_id, parent_service_tag_id ) ) WITHOUT ROWID;' ) self._CreateIndex( deleted_tag_parents_table_name, [ 'account_id' ] ) self._CreateIndex( deleted_tag_parents_table_name, [ 'parent_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + pending_tag_parents_table_name + ' ( child_master_tag_id INTEGER, parent_master_tag_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( child_master_tag_id, parent_master_tag_id, account_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + pending_tag_parents_table_name + ' ( child_master_tag_id INTEGER, parent_master_tag_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( 
child_master_tag_id, parent_master_tag_id, account_id ) ) WITHOUT ROWID;' ) self._CreateIndex( pending_tag_parents_table_name, [ 'account_id', 'reason_id' ] ) - self._c.execute( 'CREATE TABLE ' + petitioned_tag_parents_table_name + ' ( child_service_tag_id INTEGER, parent_service_tag_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( child_service_tag_id, parent_service_tag_id, account_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + petitioned_tag_parents_table_name + ' ( child_service_tag_id INTEGER, parent_service_tag_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( child_service_tag_id, parent_service_tag_id, account_id ) ) WITHOUT ROWID;' ) self._CreateIndex( petitioned_tag_parents_table_name, [ 'account_id', 'reason_id' ] ) # ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id ) - self._c.execute( 'CREATE TABLE ' + current_tag_siblings_table_name + ' ( bad_service_tag_id INTEGER PRIMARY KEY, good_service_tag_id INTEGER, account_id INTEGER, sibling_timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE ' + current_tag_siblings_table_name + ' ( bad_service_tag_id INTEGER PRIMARY KEY, good_service_tag_id INTEGER, account_id INTEGER, sibling_timestamp INTEGER );' ) self._CreateIndex( current_tag_siblings_table_name, [ 'account_id' ] ) self._CreateIndex( current_tag_siblings_table_name, [ 'sibling_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + deleted_tag_siblings_table_name + ' ( bad_service_tag_id INTEGER PRIMARY KEY, good_service_tag_id INTEGER, account_id INTEGER, sibling_timestamp INTEGER );' ) + self._Execute( 'CREATE TABLE ' + deleted_tag_siblings_table_name + ' ( bad_service_tag_id INTEGER PRIMARY KEY, good_service_tag_id INTEGER, account_id INTEGER, sibling_timestamp INTEGER );' ) self._CreateIndex( deleted_tag_siblings_table_name, [ 'account_id' ] ) self._CreateIndex( 
deleted_tag_siblings_table_name, [ 'sibling_timestamp' ] ) - self._c.execute( 'CREATE TABLE ' + pending_tag_siblings_table_name + ' ( bad_master_tag_id INTEGER, good_master_tag_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( bad_master_tag_id, account_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + pending_tag_siblings_table_name + ' ( bad_master_tag_id INTEGER, good_master_tag_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( bad_master_tag_id, account_id ) ) WITHOUT ROWID;' ) self._CreateIndex( pending_tag_siblings_table_name, [ 'account_id', 'reason_id' ] ) - self._c.execute( 'CREATE TABLE ' + petitioned_tag_siblings_table_name + ' ( bad_service_tag_id INTEGER, good_service_tag_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( bad_service_tag_id, account_id ) ) WITHOUT ROWID;' ) + self._Execute( 'CREATE TABLE ' + petitioned_tag_siblings_table_name + ' ( bad_service_tag_id INTEGER, good_service_tag_id INTEGER, account_id INTEGER, reason_id INTEGER, PRIMARY KEY ( bad_service_tag_id, account_id ) ) WITHOUT ROWID;' ) self._CreateIndex( petitioned_tag_siblings_table_name, [ 'account_id', 'reason_id' ] ) # ( update_table_name ) = GenerateRepositoryUpdateTableName( service_id ) - self._c.execute( 'CREATE TABLE ' + update_table_name + ' ( master_hash_id INTEGER PRIMARY KEY );' ) + self._Execute( 'CREATE TABLE ' + update_table_name + ' ( master_hash_id INTEGER PRIMARY KEY );' ) def _RepositoryCreateUpdate( self, service_key, begin, end ): service_id = self._GetServiceId( service_key ) - ( name, ) = self._c.execute( 'SELECT name FROM services WHERE service_id = ?;', ( service_id, ) ).fetchone() + ( name, ) = self._Execute( 'SELECT name FROM services WHERE service_id = ?;', ( service_id, ) ).fetchone() HydrusData.Print( 'Creating update for ' + repr( name ) + ' from ' + HydrusData.ConvertTimestampToPrettyTime( begin, in_utc = True ) + ' to ' + HydrusData.ConvertTimestampToPrettyTime( end, in_utc = True ) ) @@ 
-1842,7 +1842,7 @@ class DB( HydrusDB.HydrusDB ): master_hash_ids = self._GetMasterHashIds( update_hashes ) - self._c.executemany( 'INSERT OR IGNORE INTO ' + update_table_name + ' ( master_hash_id ) VALUES ( ? );', ( ( master_hash_id, ) for master_hash_id in master_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + update_table_name + ' ( master_hash_id ) VALUES ( ? );', ( ( master_hash_id, ) for master_hash_id in master_hash_ids ) ) HydrusData.Print( 'Update OK. ' + HydrusData.ToHumanInt( total_definition_rows ) + ' definition rows and ' + HydrusData.ToHumanInt( total_content_rows ) + ' content rows in ' + HydrusData.ToHumanInt( len( updates ) ) + ' update files.' ) @@ -1860,10 +1860,10 @@ class DB( HydrusDB.HydrusDB ): self._RepositoryRewardFilePetitioners( service_id, valid_service_hash_ids, 1 ) - self._c.executemany( 'DELETE FROM ' + current_files_table_name + ' WHERE service_hash_id = ?', ( ( service_hash_id, ) for service_hash_id in valid_service_hash_ids ) ) - self._c.executemany( 'DELETE FROM ' + petitioned_files_table_name + ' WHERE service_hash_id = ?', ( ( service_hash_id, ) for service_hash_id in valid_service_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + current_files_table_name + ' WHERE service_hash_id = ?', ( ( service_hash_id, ) for service_hash_id in valid_service_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + petitioned_files_table_name + ' WHERE service_hash_id = ?', ( ( service_hash_id, ) for service_hash_id in valid_service_hash_ids ) ) - self._c.executemany( 'INSERT OR IGNORE INTO ' + deleted_files_table_name + ' ( service_hash_id, account_id, file_timestamp ) VALUES ( ?, ?, ? );', ( ( service_hash_id, account_id, timestamp ) for service_hash_id in valid_service_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + deleted_files_table_name + ' ( service_hash_id, account_id, file_timestamp ) VALUES ( ?, ?, ? 
);', ( ( service_hash_id, account_id, timestamp ) for service_hash_id in valid_service_hash_ids ) ) def _RepositoryDeleteMappings( self, service_id, account_id, service_tag_id, service_hash_ids, timestamp ): @@ -1876,10 +1876,10 @@ class DB( HydrusDB.HydrusDB ): self._RepositoryRewardMappingPetitioners( service_id, service_tag_id, valid_service_hash_ids, 1 ) - self._c.executemany( 'DELETE FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( ( service_tag_id, service_hash_id ) for service_hash_id in valid_service_hash_ids ) ) - self._c.executemany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( ( service_tag_id, service_hash_id ) for service_hash_id in valid_service_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( ( service_tag_id, service_hash_id ) for service_hash_id in valid_service_hash_ids ) ) + self._ExecuteMany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( ( service_tag_id, service_hash_id ) for service_hash_id in valid_service_hash_ids ) ) - self._c.executemany( 'INSERT OR IGNORE INTO ' + deleted_mappings_table_name + ' ( service_tag_id, service_hash_id, account_id, mapping_timestamp ) VALUES ( ?, ?, ?, ? );', ( ( service_tag_id, service_hash_id, account_id, timestamp ) for service_hash_id in valid_service_hash_ids ) ) + self._ExecuteMany( 'INSERT OR IGNORE INTO ' + deleted_mappings_table_name + ' ( service_tag_id, service_hash_id, account_id, mapping_timestamp ) VALUES ( ?, ?, ?, ? 
);', ( ( service_tag_id, service_hash_id, account_id, timestamp ) for service_hash_id in valid_service_hash_ids ) ) def _RepositoryDeleteTagParent( self, service_id, account_id, child_service_tag_id, parent_service_tag_id, timestamp ): @@ -1888,10 +1888,10 @@ class DB( HydrusDB.HydrusDB ): self._RepositoryRewardTagParentPetitioners( service_id, child_service_tag_id, parent_service_tag_id, 1 ) - self._c.execute( 'DELETE FROM ' + current_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ) - self._c.execute( 'DELETE FROM ' + petitioned_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ) + self._Execute( 'DELETE FROM ' + current_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ) + self._Execute( 'DELETE FROM ' + petitioned_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ) - self._c.execute( 'INSERT OR IGNORE INTO ' + deleted_tag_parents_table_name + ' ( child_service_tag_id, parent_service_tag_id, account_id, parent_timestamp ) VALUES ( ?, ?, ?, ? );', ( child_service_tag_id, parent_service_tag_id, account_id, timestamp ) ) + self._Execute( 'INSERT OR IGNORE INTO ' + deleted_tag_parents_table_name + ' ( child_service_tag_id, parent_service_tag_id, account_id, parent_timestamp ) VALUES ( ?, ?, ?, ? 
);', ( child_service_tag_id, parent_service_tag_id, account_id, timestamp ) ) def _RepositoryDeleteTagSibling( self, service_id, account_id, bad_service_tag_id, good_service_tag_id, timestamp ): @@ -1900,10 +1900,10 @@ class DB( HydrusDB.HydrusDB ): self._RepositoryRewardTagSiblingPetitioners( service_id, bad_service_tag_id, good_service_tag_id, 1 ) - self._c.execute( 'DELETE FROM ' + current_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ) - self._c.execute( 'DELETE FROM ' + petitioned_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ) + self._Execute( 'DELETE FROM ' + current_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ) + self._Execute( 'DELETE FROM ' + petitioned_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ) - self._c.execute( 'INSERT OR IGNORE INTO ' + deleted_tag_siblings_table_name + ' ( bad_service_tag_id, good_service_tag_id, account_id, sibling_timestamp ) VALUES ( ?, ?, ?, ? );', ( bad_service_tag_id, good_service_tag_id, account_id, timestamp ) ) + self._Execute( 'INSERT OR IGNORE INTO ' + deleted_tag_siblings_table_name + ' ( bad_service_tag_id, good_service_tag_id, account_id, sibling_timestamp ) VALUES ( ?, ?, ?, ? 
);', ( bad_service_tag_id, good_service_tag_id, account_id, timestamp ) )

    def _RepositoryDenyFilePetition( self, service_id, service_hash_ids ):

@@ -1912,7 +1912,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )

-        self._c.executemany( 'DELETE FROM ' + petitioned_files_table_name + ' WHERE service_hash_id = ?;', ( ( service_hash_id, ) for service_hash_id in service_hash_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + petitioned_files_table_name + ' WHERE service_hash_id = ?;', ( ( service_hash_id, ) for service_hash_id in service_hash_ids ) )

    def _RepositoryDenyMappingPetition( self, service_id, service_tag_id, service_hash_ids ):

@@ -1921,7 +1921,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )

-        self._c.executemany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( ( service_tag_id, service_hash_id ) for service_hash_id in service_hash_ids ) )
+        self._ExecuteMany( 'DELETE FROM ' + petitioned_mappings_table_name + ' WHERE service_tag_id = ? AND service_hash_id = ?;', ( ( service_tag_id, service_hash_id ) for service_hash_id in service_hash_ids ) )

    def _RepositoryDenyTagParentPend( self, service_id, child_master_tag_id, parent_master_tag_id ):

@@ -1930,7 +1930,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-        self._c.execute( 'DELETE FROM ' + pending_tag_parents_table_name + ' WHERE child_master_tag_id = ? AND parent_master_tag_id = ?;', ( child_master_tag_id, parent_master_tag_id ) )
+        self._Execute( 'DELETE FROM ' + pending_tag_parents_table_name + ' WHERE child_master_tag_id = ? AND parent_master_tag_id = ?;', ( child_master_tag_id, parent_master_tag_id ) )

    def _RepositoryDenyTagParentPetition( self, service_id, child_service_tag_id, parent_service_tag_id ):

@@ -1939,7 +1939,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-        self._c.execute( 'DELETE FROM ' + petitioned_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) )
+        self._Execute( 'DELETE FROM ' + petitioned_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) )

    def _RepositoryDenyTagSiblingPend( self, service_id, bad_master_tag_id, good_master_tag_id ):

@@ -1948,7 +1948,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-        self._c.execute( 'DELETE FROM ' + pending_tag_siblings_table_name + ' WHERE bad_master_tag_id = ? AND good_master_tag_id = ?;', ( bad_master_tag_id, good_master_tag_id ) )
+        self._Execute( 'DELETE FROM ' + pending_tag_siblings_table_name + ' WHERE bad_master_tag_id = ? AND good_master_tag_id = ?;', ( bad_master_tag_id, good_master_tag_id ) )

    def _RepositoryDenyTagSiblingPetition( self, service_id, bad_service_tag_id, good_service_tag_id ):

@@ -1957,7 +1957,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-        self._c.execute( 'DELETE FROM ' + petitioned_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) )
+        self._Execute( 'DELETE FROM ' + petitioned_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) )

    def _RepositoryDrop( self, service_id ):

@@ -1978,7 +1978,7 @@ class DB( HydrusDB.HydrusDB ):

        for table_name in table_names:

-            self._c.execute( 'DROP TABLE ' + table_name + ';' )
+            self._Execute( 'DROP TABLE ' + table_name + ';' )

@@ -2005,14 +2005,14 @@ class DB( HydrusDB.HydrusDB ):

        ( service_hash_ids_table_name, service_tag_ids_table_name ) = GenerateRepositoryMasterMapTableNames( service_id )

-        for ( service_hash_id, hash ) in self._c.execute( 'SELECT service_hash_id, hash FROM ' + service_hash_ids_table_name + ' NATURAL JOIN hashes WHERE hash_id_timestamp BETWEEN ? AND ?;', ( begin, end ) ):
+        for ( service_hash_id, hash ) in self._Execute( 'SELECT service_hash_id, hash FROM ' + service_hash_ids_table_name + ' NATURAL JOIN hashes WHERE hash_id_timestamp BETWEEN ? AND ?;', ( begin, end ) ):

            row = ( HC.DEFINITIONS_TYPE_HASHES, service_hash_id, hash )

            definitions_update_builder.AddRow( row )

-        for ( service_tag_id, tag ) in self._c.execute( 'SELECT service_tag_id, tag FROM ' + service_tag_ids_table_name + ' NATURAL JOIN tags WHERE tag_id_timestamp BETWEEN ? AND ?;', ( begin, end ) ):
+        for ( service_tag_id, tag ) in self._Execute( 'SELECT service_tag_id, tag FROM ' + service_tag_ids_table_name + ' NATURAL JOIN tags WHERE tag_id_timestamp BETWEEN ? AND ?;', ( begin, end ) ):

            row = ( HC.DEFINITIONS_TYPE_TAGS, service_tag_id, tag )

@@ -2029,14 +2029,14 @@ class DB( HydrusDB.HydrusDB ):

        table_join = self._RepositoryGetFilesInfoFilesTableJoin( service_id, HC.CONTENT_STATUS_CURRENT )

-        for ( service_hash_id, size, mime, timestamp, width, height, duration, num_frames, num_words ) in self._c.execute( 'SELECT service_hash_id, size, mime, file_timestamp, width, height, duration, num_frames, num_words FROM ' + table_join + ' WHERE file_timestamp BETWEEN ? AND ?;', ( begin, end ) ):
+        for ( service_hash_id, size, mime, timestamp, width, height, duration, num_frames, num_words ) in self._Execute( 'SELECT service_hash_id, size, mime, file_timestamp, width, height, duration, num_frames, num_words FROM ' + table_join + ' WHERE file_timestamp BETWEEN ? AND ?;', ( begin, end ) ):

            file_row = ( service_hash_id, size, mime, timestamp, width, height, duration, num_frames, num_words )

            content_update_builder.AddRow( ( HC.CONTENT_TYPE_FILES, HC.CONTENT_UPDATE_ADD, file_row ) )

-        service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._c.execute( 'SELECT service_hash_id FROM ' + deleted_files_table_name + ' WHERE file_timestamp BETWEEN ? AND ?;', ( begin, end ) ) ]
+        service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._Execute( 'SELECT service_hash_id FROM ' + deleted_files_table_name + ' WHERE file_timestamp BETWEEN ? AND ?;', ( begin, end ) ) ]

        for service_hash_id in service_hash_ids:

@@ -2047,7 +2047,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )

-        service_tag_ids_to_service_hash_ids = HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT service_tag_id, service_hash_id FROM ' + current_mappings_table_name + ' WHERE mapping_timestamp BETWEEN ? AND ?;', ( begin, end ) ) )
+        service_tag_ids_to_service_hash_ids = HydrusData.BuildKeyToListDict( self._Execute( 'SELECT service_tag_id, service_hash_id FROM ' + current_mappings_table_name + ' WHERE mapping_timestamp BETWEEN ? AND ?;', ( begin, end ) ) )

        for ( service_tag_id, service_hash_ids ) in list(service_tag_ids_to_service_hash_ids.items()):

@@ -2059,7 +2059,7 @@ class DB( HydrusDB.HydrusDB ):

-        service_tag_ids_to_service_hash_ids = HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT service_tag_id, service_hash_id FROM ' + deleted_mappings_table_name + ' WHERE mapping_timestamp BETWEEN ? AND ?;', ( begin, end ) ) )
+        service_tag_ids_to_service_hash_ids = HydrusData.BuildKeyToListDict( self._Execute( 'SELECT service_tag_id, service_hash_id FROM ' + deleted_mappings_table_name + ' WHERE mapping_timestamp BETWEEN ? AND ?;', ( begin, end ) ) )

        for ( service_tag_id, service_hash_ids ) in list(service_tag_ids_to_service_hash_ids.items()):

@@ -2075,14 +2075,14 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-        pairs = self._c.execute( 'SELECT child_service_tag_id, parent_service_tag_id FROM ' + current_tag_parents_table_name + ' WHERE parent_timestamp BETWEEN ? AND ?;', ( begin, end ) ).fetchall()
+        pairs = self._Execute( 'SELECT child_service_tag_id, parent_service_tag_id FROM ' + current_tag_parents_table_name + ' WHERE parent_timestamp BETWEEN ? AND ?;', ( begin, end ) ).fetchall()

        for pair in pairs:

            content_update_builder.AddRow( ( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_UPDATE_ADD, pair ) )

-        pairs = self._c.execute( 'SELECT child_service_tag_id, parent_service_tag_id FROM ' + deleted_tag_parents_table_name + ' WHERE parent_timestamp BETWEEN ? AND ?;', ( begin, end ) ).fetchall()
+        pairs = self._Execute( 'SELECT child_service_tag_id, parent_service_tag_id FROM ' + deleted_tag_parents_table_name + ' WHERE parent_timestamp BETWEEN ? AND ?;', ( begin, end ) ).fetchall()

        for pair in pairs:

@@ -2093,14 +2093,14 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-        pairs = self._c.execute( 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + current_tag_siblings_table_name + ' WHERE sibling_timestamp BETWEEN ? AND ?;', ( begin, end ) ).fetchall()
+        pairs = self._Execute( 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + current_tag_siblings_table_name + ' WHERE sibling_timestamp BETWEEN ? AND ?;', ( begin, end ) ).fetchall()

        for pair in pairs:

            content_update_builder.AddRow( ( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_UPDATE_ADD, pair ) )

-        pairs = self._c.execute( 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + deleted_tag_siblings_table_name + ' WHERE sibling_timestamp BETWEEN ? AND ?;', ( begin, end ) ).fetchall()
+        pairs = self._Execute( 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + deleted_tag_siblings_table_name + ' WHERE sibling_timestamp BETWEEN ? AND ?;', ( begin, end ) ).fetchall()

        for pair in pairs:

@@ -2124,7 +2124,7 @@ class DB( HydrusDB.HydrusDB ):

        table_join = 'files_info NATURAL JOIN ' + hash_id_map_table_name + ' NATURAL JOIN ' + current_files_table_name

-        ( num_files, num_files_bytes ) = self._c.execute( 'SELECT COUNT( * ), SUM( size ) FROM ' + table_join + ' WHERE account_id = ?;', ( account_id, ) ).fetchone()
+        ( num_files, num_files_bytes ) = self._Execute( 'SELECT COUNT( * ), SUM( size ) FROM ' + table_join + ' WHERE account_id = ?;', ( account_id, ) ).fetchone()

        if num_files_bytes is None:

@@ -2140,13 +2140,13 @@ class DB( HydrusDB.HydrusDB ):

        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )

-        num_mappings = len( self._c.execute( 'SELECT 1 FROM ' + current_mappings_table_name + ' WHERE account_id = ? LIMIT 5000;', ( account_id, ) ).fetchall() )
+        num_mappings = len( self._Execute( 'SELECT 1 FROM ' + current_mappings_table_name + ' WHERE account_id = ? LIMIT 5000;', ( account_id, ) ).fetchall() )

        account_info[ 'num_mappings' ] = num_mappings

        #

-        result = self._c.execute( 'SELECT score FROM account_scores WHERE service_id = ? AND account_id = ? AND score_type = ?;', ( service_id, account_id, HC.SCORE_PETITION ) ).fetchone()
+        result = self._Execute( 'SELECT score FROM account_scores WHERE service_id = ? AND account_id = ? AND score_type = ?;', ( service_id, account_id, HC.SCORE_PETITION ) ).fetchone()

        if result is None:

            petition_score = 0

        else:

            ( petition_score, ) = result

@@ -2160,7 +2160,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )

-        ( count, ) = self._c.execute( 'SELECT COUNT( * ) FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ?;', ( service_tag_id, ) ).fetchone()
+        ( count, ) = self._Execute( 'SELECT COUNT( * ) FROM ' + current_mappings_table_name + ' WHERE service_tag_id = ?;', ( service_tag_id, ) ).fetchone()

        return count

@@ -2193,7 +2193,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )

-        result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_files_table_name + ' LIMIT 100;' ).fetchall()
+        result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_files_table_name + ' LIMIT 100;' ).fetchall()

        if len( result ) == 0:

@@ -2210,7 +2210,7 @@ class DB( HydrusDB.HydrusDB ):

        reason = self._GetReason( reason_id )

-        service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._c.execute( 'SELECT service_hash_id FROM ' + petitioned_files_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ) ]
+        service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._Execute( 'SELECT service_hash_id FROM ' + petitioned_files_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ) ]

        master_hash_ids = self._RepositoryGetMasterHashIds( service_id, service_hash_ids )

@@ -2231,7 +2231,7 @@ class DB( HydrusDB.HydrusDB ):

        master_hash_id = self._GetMasterHashId( hash )

-        result = self._c.execute( 'SELECT ip, ip_timestamp FROM ' + ip_addresses_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT ip, ip_timestamp FROM ' + ip_addresses_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()

        if result is None:

@@ -2245,7 +2245,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )

-        result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_mappings_table_name + ' LIMIT 100;' ).fetchall()
+        result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_mappings_table_name + ' LIMIT 100;' ).fetchall()

        if len( result ) == 0:

@@ -2262,7 +2262,7 @@ class DB( HydrusDB.HydrusDB ):

        reason = self._GetReason( reason_id )

-        tag_ids_to_hash_ids = HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT service_tag_id, service_hash_id FROM ' + petitioned_mappings_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ) )
+        tag_ids_to_hash_ids = HydrusData.BuildKeyToListDict( self._Execute( 'SELECT service_tag_id, service_hash_id FROM ' + petitioned_mappings_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ) )

        contents = []

@@ -2387,7 +2387,7 @@ class DB( HydrusDB.HydrusDB ):

        ( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id )

-        result = self._c.execute( 'SELECT master_tag_id FROM ' + tag_id_map_table_name + ' WHERE service_tag_id = ?;', ( service_tag_id, ) ).fetchone()
+        result = self._Execute( 'SELECT master_tag_id FROM ' + tag_id_map_table_name + ' WHERE service_tag_id = ?;', ( service_tag_id, ) ).fetchone()

        if result is None:

@@ -2409,7 +2409,7 @@ class DB( HydrusDB.HydrusDB ):

        if account.HasPermission( HC.CONTENT_TYPE_FILES, HC.PERMISSION_ACTION_MODERATE ):

-            ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + petitioned_files_table_name + ' LIMIT 1000 );' ).fetchone()
+            ( num_petitions, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + petitioned_files_table_name + ' LIMIT 1000 );' ).fetchone()

            petition_count_info.append( ( HC.CONTENT_TYPE_FILES, HC.CONTENT_STATUS_PETITIONED, num_petitions ) )

@@ -2418,7 +2418,7 @@ class DB( HydrusDB.HydrusDB ):

        if account.HasPermission( HC.CONTENT_TYPE_MAPPINGS, HC.PERMISSION_ACTION_MODERATE ):

-            ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT service_tag_id, account_id, reason_id FROM ' + petitioned_mappings_table_name + ' LIMIT 1000 );' ).fetchone()
+            ( num_petitions, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT service_tag_id, account_id, reason_id FROM ' + petitioned_mappings_table_name + ' LIMIT 1000 );' ).fetchone()

            petition_count_info.append( ( HC.CONTENT_TYPE_MAPPINGS, HC.CONTENT_STATUS_PETITIONED, num_petitions ) )

@@ -2427,11 +2427,11 @@ class DB( HydrusDB.HydrusDB ):

        if account.HasPermission( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.PERMISSION_ACTION_MODERATE ):

-            ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_parents_table_name + ' LIMIT 1000 );' ).fetchone()
+            ( num_petitions, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_parents_table_name + ' LIMIT 1000 );' ).fetchone()

            petition_count_info.append( ( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_STATUS_PENDING, num_petitions ) )

-            ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_parents_table_name + ' LIMIT 1000 );' ).fetchone()
+            ( num_petitions, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_parents_table_name + ' LIMIT 1000 );' ).fetchone()

            petition_count_info.append( ( HC.CONTENT_TYPE_TAG_PARENTS, HC.CONTENT_STATUS_PETITIONED, num_petitions ) )

@@ -2440,11 +2440,11 @@ class DB( HydrusDB.HydrusDB ):

        if account.HasPermission( HC.CONTENT_TYPE_TAG_PARENTS, HC.PERMISSION_ACTION_MODERATE ):

-            ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_siblings_table_name + ' LIMIT 1000 );' ).fetchone()
+            ( num_petitions, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_siblings_table_name + ' LIMIT 1000 );' ).fetchone()

            petition_count_info.append( ( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_STATUS_PENDING, num_petitions ) )

-            ( num_petitions, ) = self._c.execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_siblings_table_name + ' LIMIT 1000 );' ).fetchone()
+            ( num_petitions, ) = self._Execute( 'SELECT COUNT( * ) FROM ( SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_siblings_table_name + ' LIMIT 1000 );' ).fetchone()

            petition_count_info.append( ( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_STATUS_PETITIONED, num_petitions ) )

@@ -2494,13 +2494,13 @@ class DB( HydrusDB.HydrusDB ):

        ( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id )

-        result = self._c.execute( 'SELECT service_hash_id FROM ' + hash_id_map_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT service_hash_id FROM ' + hash_id_map_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()

        if result is None:

-            self._c.execute( 'INSERT INTO ' + hash_id_map_table_name + ' ( master_hash_id, hash_id_timestamp ) VALUES ( ?, ? );', ( master_hash_id, timestamp ) )
+            self._Execute( 'INSERT INTO ' + hash_id_map_table_name + ' ( master_hash_id, hash_id_timestamp ) VALUES ( ?, ? );', ( master_hash_id, timestamp ) )

-            service_hash_id = self._c.lastrowid
+            service_hash_id = self._GetLastRowId()

            return service_hash_id

@@ -2521,7 +2521,7 @@ class DB( HydrusDB.HydrusDB ):

        for master_hash_id in master_hash_ids:

-            result = self._c.execute( 'SELECT service_hash_id FROM ' + hash_id_map_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
+            result = self._Execute( 'SELECT service_hash_id FROM ' + hash_id_map_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()

            if result is None:

@@ -2537,11 +2537,11 @@ class DB( HydrusDB.HydrusDB ):

        if len( master_hash_ids_not_in_table ) > 0:

-            self._c.executemany( 'INSERT INTO ' + hash_id_map_table_name + ' ( master_hash_id, hash_id_timestamp ) VALUES ( ?, ? );', ( ( master_hash_id, timestamp ) for master_hash_id in master_hash_ids_not_in_table ) )
+            self._ExecuteMany( 'INSERT INTO ' + hash_id_map_table_name + ' ( master_hash_id, hash_id_timestamp ) VALUES ( ?, ? );', ( ( master_hash_id, timestamp ) for master_hash_id in master_hash_ids_not_in_table ) )

            for master_hash_id in master_hash_ids_not_in_table:

-                ( service_hash_id, ) = self._c.execute( 'SELECT service_hash_id FROM ' + hash_id_map_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
+                ( service_hash_id, ) = self._Execute( 'SELECT service_hash_id FROM ' + hash_id_map_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()

                service_hash_ids.add( service_hash_id )

@@ -2554,13 +2554,13 @@ class DB( HydrusDB.HydrusDB ):

        ( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id )

-        result = self._c.execute( 'SELECT service_tag_id FROM ' + tag_id_map_table_name + ' WHERE master_tag_id = ?;', ( master_tag_id, ) ).fetchone()
+        result = self._Execute( 'SELECT service_tag_id FROM ' + tag_id_map_table_name + ' WHERE master_tag_id = ?;', ( master_tag_id, ) ).fetchone()

        if result is None:

-            self._c.execute( 'INSERT INTO ' + tag_id_map_table_name + ' ( master_tag_id, tag_id_timestamp ) VALUES ( ?, ? );', ( master_tag_id, timestamp ) )
+            self._Execute( 'INSERT INTO ' + tag_id_map_table_name + ' ( master_tag_id, tag_id_timestamp ) VALUES ( ?, ? );', ( master_tag_id, timestamp ) )

-            service_tag_id = self._c.lastrowid
+            service_tag_id = self._GetLastRowId()

            return service_tag_id

@@ -2576,7 +2576,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-        result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_parents_table_name + ' LIMIT 100;' ).fetchall()
+        result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_parents_table_name + ' LIMIT 100;' ).fetchall()

        if len( result ) == 0:

@@ -2593,7 +2593,7 @@ class DB( HydrusDB.HydrusDB ):

        reason = self._GetReason( reason_id )

-        pairs = self._c.execute( 'SELECT child_master_tag_id, parent_master_tag_id FROM ' + pending_tag_parents_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()
+        pairs = self._Execute( 'SELECT child_master_tag_id, parent_master_tag_id FROM ' + pending_tag_parents_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()

        contents = []

@@ -2629,7 +2629,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-        result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_parents_table_name + ' LIMIT 100;' ).fetchall()
+        result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_parents_table_name + ' LIMIT 100;' ).fetchall()

        if len( result ) == 0:

@@ -2646,7 +2646,7 @@ class DB( HydrusDB.HydrusDB ):

        reason = self._GetReason( reason_id )

-        pairs = self._c.execute( 'SELECT child_service_tag_id, parent_service_tag_id FROM ' + petitioned_tag_parents_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()
+        pairs = self._Execute( 'SELECT child_service_tag_id, parent_service_tag_id FROM ' + petitioned_tag_parents_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()

        contents = []

@@ -2685,7 +2685,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-        result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_siblings_table_name + ' LIMIT 100;' ).fetchall()
+        result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + pending_tag_siblings_table_name + ' LIMIT 100;' ).fetchall()

        if len( result ) == 0:

@@ -2702,7 +2702,7 @@ class DB( HydrusDB.HydrusDB ):

        reason = self._GetReason( reason_id )

-        pairs = self._c.execute( 'SELECT bad_master_tag_id, good_master_tag_id FROM ' + pending_tag_siblings_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()
+        pairs = self._Execute( 'SELECT bad_master_tag_id, good_master_tag_id FROM ' + pending_tag_siblings_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()

        contents = []

@@ -2738,7 +2738,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-        result = self._c.execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_siblings_table_name + ' LIMIT 100;' ).fetchall()
+        result = self._Execute( 'SELECT DISTINCT account_id, reason_id FROM ' + petitioned_tag_siblings_table_name + ' LIMIT 100;' ).fetchall()

        if len( result ) == 0:

@@ -2755,7 +2755,7 @@ class DB( HydrusDB.HydrusDB ):

        reason = self._GetReason( reason_id )

-        pairs = self._c.execute( 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + petitioned_tag_siblings_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()
+        pairs = self._Execute( 'SELECT bad_service_tag_id, good_service_tag_id FROM ' + petitioned_tag_siblings_table_name + ' WHERE account_id = ? AND reason_id = ?;', ( petitioner_account_id, reason_id ) ).fetchall()

        contents = []

@@ -2803,7 +2803,7 @@ class DB( HydrusDB.HydrusDB ):

        table_join = self._RepositoryGetFilesInfoFilesTableJoin( service_id, HC.CONTENT_STATUS_CURRENT )

-        result = self._c.execute( 'SELECT mime FROM ' + table_join + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()
+        result = self._Execute( 'SELECT mime FROM ' + table_join + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone()

        if result is None:

@@ -2831,8 +2831,8 @@ class DB( HydrusDB.HydrusDB ):

        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name, ip_addresses_table_name ) = GenerateRepositoryFilesTableNames( service_id )

-        self._c.execute( 'UPDATE {} SET account_id = ? WHERE file_timestamp BETWEEN ? AND ?;'.format( current_files_table_name ), ( null_account_id, begin, end ) )
-        self._c.execute( 'UPDATE {} SET account_id = ? WHERE file_timestamp BETWEEN ? AND ?;'.format( deleted_files_table_name ), ( null_account_id, begin, end ) )
+        self._Execute( 'UPDATE {} SET account_id = ? WHERE file_timestamp BETWEEN ? AND ?;'.format( current_files_table_name ), ( null_account_id, begin, end ) )
+        self._Execute( 'UPDATE {} SET account_id = ? WHERE file_timestamp BETWEEN ? AND ?;'.format( deleted_files_table_name ), ( null_account_id, begin, end ) )

    def _RepositoryNullifyHistoryMappings( self, service_id, begin, end ):

@@ -2841,8 +2841,8 @@ class DB( HydrusDB.HydrusDB ):

        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateRepositoryMappingsTableNames( service_id )

-        self._c.execute( 'UPDATE {} SET account_id = ? WHERE mapping_timestamp BETWEEN ? AND ?;'.format( current_mappings_table_name ), ( null_account_id, begin, end ) )
-        self._c.execute( 'UPDATE {} SET account_id = ? WHERE mapping_timestamp BETWEEN ? AND ?;'.format( deleted_mappings_table_name ), ( null_account_id, begin, end ) )
+        self._Execute( 'UPDATE {} SET account_id = ? WHERE mapping_timestamp BETWEEN ? AND ?;'.format( current_mappings_table_name ), ( null_account_id, begin, end ) )
+        self._Execute( 'UPDATE {} SET account_id = ? WHERE mapping_timestamp BETWEEN ? AND ?;'.format( deleted_mappings_table_name ), ( null_account_id, begin, end ) )

    def _RepositoryNullifyHistoryTagParents( self, service_id, begin, end ):

@@ -2851,8 +2851,8 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-        self._c.execute( 'UPDATE {} SET account_id = ? WHERE parent_timestamp BETWEEN ? AND ?;'.format( current_tag_parents_table_name ), ( null_account_id, begin, end ) )
-        self._c.execute( 'UPDATE {} SET account_id = ? WHERE parent_timestamp BETWEEN ? AND ?;'.format( deleted_tag_parents_table_name ), ( null_account_id, begin, end ) )
+        self._Execute( 'UPDATE {} SET account_id = ? WHERE parent_timestamp BETWEEN ? AND ?;'.format( current_tag_parents_table_name ), ( null_account_id, begin, end ) )
+        self._Execute( 'UPDATE {} SET account_id = ? WHERE parent_timestamp BETWEEN ? AND ?;'.format( deleted_tag_parents_table_name ), ( null_account_id, begin, end ) )

    def _RepositoryNullifyHistoryTagSiblings( self, service_id, begin, end ):

@@ -2861,8 +2861,8 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-        self._c.execute( 'UPDATE {} SET account_id = ? WHERE sibling_timestamp BETWEEN ? AND ?;'.format( current_tag_siblings_table_name ), ( null_account_id, begin, end ) )
-        self._c.execute( 'UPDATE {} SET account_id = ? WHERE sibling_timestamp BETWEEN ? AND ?;'.format( deleted_tag_siblings_table_name ), ( null_account_id, begin, end ) )
+        self._Execute( 'UPDATE {} SET account_id = ? WHERE sibling_timestamp BETWEEN ? AND ?;'.format( current_tag_siblings_table_name ), ( null_account_id, begin, end ) )
+        self._Execute( 'UPDATE {} SET account_id = ? WHERE sibling_timestamp BETWEEN ? AND ?;'.format( deleted_tag_siblings_table_name ), ( null_account_id, begin, end ) )

    def _RepositoryPendTagParent( self, service_id, account_id, child_master_tag_id, parent_master_tag_id, reason_id ):

@@ -2877,7 +2877,7 @@ class DB( HydrusDB.HydrusDB ):

        child_service_tag_id = self._RepositoryGetServiceTagId( service_id, child_master_tag_id, HydrusData.GetNow() )
        parent_service_tag_id = self._RepositoryGetServiceTagId( service_id, parent_master_tag_id, HydrusData.GetNow() )

-        result = self._c.execute( 'SELECT 1 FROM ' + current_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ).fetchone()
+        result = self._Execute( 'SELECT 1 FROM ' + current_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ).fetchone()

        if result is not None:

@@ -2885,7 +2885,7 @@ class DB( HydrusDB.HydrusDB ):

-        self._c.execute( 'REPLACE INTO ' + pending_tag_parents_table_name + ' ( child_master_tag_id, parent_master_tag_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', ( child_master_tag_id, parent_master_tag_id, account_id, reason_id ) )
+        self._Execute( 'REPLACE INTO ' + pending_tag_parents_table_name + ' ( child_master_tag_id, parent_master_tag_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', ( child_master_tag_id, parent_master_tag_id, account_id, reason_id ) )

    def _RepositoryPendTagSibling( self, service_id, account_id, bad_master_tag_id, good_master_tag_id, reason_id ):

@@ -2900,7 +2900,7 @@ class DB( HydrusDB.HydrusDB ):

        bad_service_tag_id = self._RepositoryGetServiceTagId( service_id, bad_master_tag_id, HydrusData.GetNow() )
        good_service_tag_id = self._RepositoryGetServiceTagId( service_id, good_master_tag_id, HydrusData.GetNow() )

-        result = self._c.execute( 'SELECT 1 FROM ' + current_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ).fetchone()
+        result = self._Execute( 'SELECT 1 FROM ' + current_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ).fetchone()

        if result is not None:

@@ -2908,7 +2908,7 @@ class DB( HydrusDB.HydrusDB ):

-        self._c.execute( 'REPLACE INTO ' + pending_tag_siblings_table_name + ' ( bad_master_tag_id, good_master_tag_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', ( bad_master_tag_id, good_master_tag_id, account_id, reason_id ) )
+        self._Execute( 'REPLACE INTO ' + pending_tag_siblings_table_name + ' ( bad_master_tag_id, good_master_tag_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', ( bad_master_tag_id, good_master_tag_id, account_id, reason_id ) )

    def _RepositoryPetitionFiles( self, service_id, account_id, service_hash_ids, reason_id ):

@@ -2919,7 +2919,7 @@ class DB( HydrusDB.HydrusDB ):

        valid_service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) ]

-        self._c.executemany( 'REPLACE INTO ' + petitioned_files_table_name + ' ( service_hash_id, account_id, reason_id ) VALUES ( ?, ?, ? );', ( ( service_hash_id, account_id, reason_id ) for service_hash_id in valid_service_hash_ids ) )
+        self._ExecuteMany( 'REPLACE INTO ' + petitioned_files_table_name + ' ( service_hash_id, account_id, reason_id ) VALUES ( ?, ?, ? );', ( ( service_hash_id, account_id, reason_id ) for service_hash_id in valid_service_hash_ids ) )

    def _RepositoryPetitionMappings( self, service_id, account_id, service_tag_id, service_hash_ids, reason_id ):

@@ -2930,35 +2930,35 @@ class DB( HydrusDB.HydrusDB ):

        valid_service_hash_ids = [ service_hash_id for ( service_hash_id, ) in self._ExecuteManySelectSingleParam( select_statement, service_hash_ids ) ]

-        self._c.executemany( 'REPLACE INTO ' + petitioned_mappings_table_name + ' ( service_tag_id, service_hash_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', [ ( service_tag_id, service_hash_id, account_id, reason_id ) for service_hash_id in valid_service_hash_ids ] )
+        self._ExecuteMany( 'REPLACE INTO ' + petitioned_mappings_table_name + ' ( service_tag_id, service_hash_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', [ ( service_tag_id, service_hash_id, account_id, reason_id ) for service_hash_id in valid_service_hash_ids ] )

    def _RepositoryPetitionTagParent( self, service_id, account_id, child_service_tag_id, parent_service_tag_id, reason_id ):

        ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-        result = self._c.execute( 'SELECT 1 FROM ' + current_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ).fetchone()
+        result = self._Execute( 'SELECT 1 FROM ' + current_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ).fetchone()

        if result is None:

            return

-        self._c.execute( 'REPLACE INTO ' + petitioned_tag_parents_table_name + ' ( child_service_tag_id, parent_service_tag_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', ( child_service_tag_id, parent_service_tag_id, account_id, reason_id ) )
+        self._Execute( 'REPLACE INTO ' + petitioned_tag_parents_table_name + ' ( child_service_tag_id, parent_service_tag_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', ( child_service_tag_id, parent_service_tag_id, account_id, reason_id ) )

    def _RepositoryPetitionTagSibling( self, service_id, account_id, bad_service_tag_id, good_service_tag_id, reason_id ):

        ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id )

-        result = self._c.execute( 'SELECT 1 FROM ' + current_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ).fetchone()
+        result = self._Execute( 'SELECT 1 FROM ' + current_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ).fetchone()

        if result is None:

            return

-        self._c.execute( 'REPLACE INTO ' + petitioned_tag_siblings_table_name + ' ( bad_service_tag_id, good_service_tag_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', ( bad_service_tag_id, good_service_tag_id, account_id, reason_id ) )
+        self._Execute( 'REPLACE INTO ' + petitioned_tag_siblings_table_name + ' ( bad_service_tag_id, good_service_tag_id, account_id, reason_id ) VALUES ( ?, ?, ?, ? );', ( bad_service_tag_id, good_service_tag_id, account_id, reason_id ) )

    def _RepositoryProcessAddFile( self, service, account, file_dict, timestamp ):

@@ -2988,7 +2988,7 @@ class DB( HydrusDB.HydrusDB ):

        table_join = self._RepositoryGetFilesInfoFilesTableJoin( service_id, HC.CONTENT_STATUS_CURRENT )

-        ( total_current_storage, ) = self._c.execute( 'SELECT SUM( size ) FROM ' + table_join + ';' ).fetchone()
+        ( total_current_storage, ) = self._Execute( 'SELECT SUM( size ) FROM ' + table_join + ';' ).fetchone()

        if total_current_storage is None:

@@ -2997,7 +2997,7 @@ class DB( HydrusDB.HydrusDB ):

        table_join = self._RepositoryGetFilesInfoFilesTableJoin( service_id, HC.CONTENT_STATUS_PENDING )

-        ( total_pending_storage, ) = self._c.execute( 'SELECT SUM( size ) FROM ' + table_join + ';' ).fetchone()
+        ( total_pending_storage, ) = self._Execute( 'SELECT SUM( size ) FROM ' + table_join + ';' ).fetchone()

        if total_pending_storage is None:

@@ -3313,7 +3313,7 @@ class DB( HydrusDB.HydrusDB ):

        ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id )

-        account_ids = [ account_id for ( account_id, ) in self._c.execute( 'SELECT account_id FROM ' + pending_tag_parents_table_name + ' WHERE child_master_tag_id = ? AND parent_master_tag_id = ?;', ( child_master_tag_id, parent_master_tag_id ) ) ]
+        account_ids = [ account_id for ( account_id, ) in self._Execute( 'SELECT account_id FROM ' + pending_tag_parents_table_name + ' WHERE child_master_tag_id = ?
AND parent_master_tag_id = ?;', ( child_master_tag_id, parent_master_tag_id ) ) ] scores = [ ( account_id, weighted_score ) for account_id in account_ids ] @@ -3330,7 +3330,7 @@ class DB( HydrusDB.HydrusDB ): ( current_tag_parents_table_name, deleted_tag_parents_table_name, pending_tag_parents_table_name, petitioned_tag_parents_table_name ) = GenerateRepositoryTagParentsTableNames( service_id ) - account_ids = [ account_id for ( account_id, ) in self._c.execute( 'SELECT account_id FROM ' + petitioned_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ) ] + account_ids = [ account_id for ( account_id, ) in self._Execute( 'SELECT account_id FROM ' + petitioned_tag_parents_table_name + ' WHERE child_service_tag_id = ? AND parent_service_tag_id = ?;', ( child_service_tag_id, parent_service_tag_id ) ) ] scores = [ ( account_id, weighted_score ) for account_id in account_ids ] @@ -3349,7 +3349,7 @@ class DB( HydrusDB.HydrusDB ): ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id ) - account_ids = [ account_id for ( account_id, ) in self._c.execute( 'SELECT account_id FROM ' + pending_tag_siblings_table_name + ' WHERE bad_master_tag_id = ? AND good_master_tag_id = ?;', ( bad_master_tag_id, good_master_tag_id ) ) ] + account_ids = [ account_id for ( account_id, ) in self._Execute( 'SELECT account_id FROM ' + pending_tag_siblings_table_name + ' WHERE bad_master_tag_id = ? 
AND good_master_tag_id = ?;', ( bad_master_tag_id, good_master_tag_id ) ) ] scores = [ ( account_id, weighted_score ) for account_id in account_ids ] @@ -3366,7 +3366,7 @@ class DB( HydrusDB.HydrusDB ): ( current_tag_siblings_table_name, deleted_tag_siblings_table_name, pending_tag_siblings_table_name, petitioned_tag_siblings_table_name ) = GenerateRepositoryTagSiblingsTableNames( service_id ) - account_ids = [ account_id for ( account_id, ) in self._c.execute( 'SELECT account_id FROM ' + petitioned_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ) ] + account_ids = [ account_id for ( account_id, ) in self._Execute( 'SELECT account_id FROM ' + petitioned_tag_siblings_table_name + ' WHERE bad_service_tag_id = ? AND good_service_tag_id = ?;', ( bad_service_tag_id, good_service_tag_id ) ) ] scores = [ ( account_id, weighted_score ) for account_id in account_ids ] @@ -3377,7 +3377,7 @@ class DB( HydrusDB.HydrusDB ): ( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id ) - result = self._c.execute( 'SELECT 1 FROM ' + hash_id_map_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM ' + hash_id_map_table_name + ' WHERE master_hash_id = ?;', ( master_hash_id, ) ).fetchone() if result is None: @@ -3393,7 +3393,7 @@ class DB( HydrusDB.HydrusDB ): ( hash_id_map_table_name, tag_id_map_table_name ) = GenerateRepositoryMasterMapTableNames( service_id ) - result = self._c.execute( 'SELECT 1 FROM ' + tag_id_map_table_name + ' WHERE master_tag_id = ?;', ( master_tag_id, ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM ' + tag_id_map_table_name + ' WHERE master_tag_id = ?;', ( master_tag_id, ) ).fetchone() if result is None: @@ -3463,9 +3463,9 @@ class DB( HydrusDB.HydrusDB ): def _RewardAccounts( self, service_id, score_type, scores ): - self._c.executemany( 'INSERT OR IGNORE INTO 
account_scores ( service_id, account_id, score_type, score ) VALUES ( ?, ?, ?, ? );', [ ( service_id, account_id, score_type, 0 ) for ( account_id, score ) in scores ] ) + self._ExecuteMany( 'INSERT OR IGNORE INTO account_scores ( service_id, account_id, score_type, score ) VALUES ( ?, ?, ?, ? );', [ ( service_id, account_id, score_type, 0 ) for ( account_id, score ) in scores ] ) - self._c.executemany( 'UPDATE account_scores SET score = score + ? WHERE service_id = ? AND account_id = ? and score_type = ?;', [ ( score, service_id, account_id, score_type ) for ( account_id, score ) in scores ] ) + self._ExecuteMany( 'UPDATE account_scores SET score = score + ? WHERE service_id = ? AND account_id = ? and score_type = ?;', [ ( score, service_id, account_id, score_type ) for ( account_id, score ) in scores ] ) def _SaveAccounts( self, service_id, accounts ): @@ -3476,7 +3476,7 @@ class DB( HydrusDB.HydrusDB ): dictionary_string = dictionary.DumpToString() - self._c.execute( 'UPDATE accounts SET dictionary_string = ? WHERE account_key = ?;', ( dictionary_string, sqlite3.Binary( account_key ) ) ) + self._Execute( 'UPDATE accounts SET dictionary_string = ? WHERE account_key = ?;', ( dictionary_string, sqlite3.Binary( account_key ) ) ) account.SetClean() @@ -3505,7 +3505,7 @@ class DB( HydrusDB.HydrusDB ): dictionary_string = dictionary.DumpToString() - self._c.execute( 'UPDATE services SET dictionary_string = ? WHERE service_key = ?;', ( dictionary_string, sqlite3.Binary( service_key ) ) ) + self._Execute( 'UPDATE services SET dictionary_string = ? 
WHERE service_key = ?;', ( dictionary_string, sqlite3.Binary( service_key ) ) ) service.SetClean() @@ -3517,13 +3517,13 @@ class DB( HydrusDB.HydrusDB ): if version == 433: - old_data = self._c.execute( 'SELECT account_type_id, service_id, account_type_key, title, dictionary_string FROM account_types;' ).fetchall() + old_data = self._Execute( 'SELECT account_type_id, service_id, account_type_key, title, dictionary_string FROM account_types;' ).fetchall() - self._c.execute( 'DROP TABLE account_types;' ) + self._Execute( 'DROP TABLE account_types;' ) from hydrus.core.networking import HydrusNetworkLegacy - self._c.execute( 'CREATE TABLE account_types ( account_type_id INTEGER PRIMARY KEY, service_id INTEGER, dump TEXT );' ) + self._Execute( 'CREATE TABLE account_types ( account_type_id INTEGER PRIMARY KEY, service_id INTEGER, dump TEXT );' ) for ( account_type_id, service_id, account_type_key, title, dictionary_string ) in old_data: @@ -3531,7 +3531,7 @@ class DB( HydrusDB.HydrusDB ): dump = account_type.DumpToString() - self._c.execute( 'INSERT INTO account_types ( account_type_id, service_id, dump ) VALUES ( ?, ?, ? );', ( account_type_id, service_id, dump ) ) + self._Execute( 'INSERT INTO account_types ( account_type_id, service_id, dump ) VALUES ( ?, ?, ? 
);', ( account_type_id, service_id, dump ) ) @@ -3565,7 +3565,7 @@ class DB( HydrusDB.HydrusDB ): HydrusData.Print( 'The server has updated to version ' + str( version + 1 ) ) - self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) ) + self._Execute( 'UPDATE version SET version = ?;', ( version + 1, ) ) def _Vacuum( self ): @@ -3581,7 +3581,7 @@ class DB( HydrusDB.HydrusDB ): try: - db_names = [ name for ( index, name, path ) in self._c.execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ] + db_names = [ name for ( index, name, path ) in self._Execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp', 'durable_temp' ) ] db_names = [ name for name in db_names if name in self._db_filenames ] @@ -3609,7 +3609,7 @@ class DB( HydrusDB.HydrusDB ): if len( db_names ) > 0: - self._CloseDBCursor() + self._CloseDBConnection() try: @@ -3643,7 +3643,7 @@ class DB( HydrusDB.HydrusDB ): finally: - self._InitDBCursor() + self._InitDBConnection() @@ -3657,11 +3657,11 @@ class DB( HydrusDB.HydrusDB ): service_id = self._GetServiceId( service_key ) - result = self._c.execute( 'SELECT 1 FROM accounts WHERE service_id = ? AND hashed_access_key = ?;', ( service_id, sqlite3.Binary( hashlib.sha256( access_key ).digest() ) ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM accounts WHERE service_id = ? AND hashed_access_key = ?;', ( service_id, sqlite3.Binary( hashlib.sha256( access_key ).digest() ) ) ).fetchone() if result is None: - result = self._c.execute( 'SELECT 1 FROM registration_keys WHERE service_id = ? AND access_key = ?;', ( service_id, sqlite3.Binary( access_key ) ) ).fetchone() + result = self._Execute( 'SELECT 1 FROM registration_keys WHERE service_id = ? 
AND access_key = ?;', ( service_id, sqlite3.Binary( access_key ) ) ).fetchone() if result is None: diff --git a/hydrus/test/TestClientAPI.py b/hydrus/test/TestClientAPI.py index cf7dc4bf..1f5f0e67 100644 --- a/hydrus/test/TestClientAPI.py +++ b/hydrus/test/TestClientAPI.py @@ -1761,9 +1761,7 @@ class TestClientAPI( unittest.TestCase ): def _test_search_files( self, connection, set_up_permissions ): - hash_ids = [ 1, 2, 3, 4, 5, 10 ] - - HG.test_controller.SetRead( 'file_query_ids', set( hash_ids ) ) + hash_ids = [ 1, 2, 3, 4, 5, 10, 15, 16, 17, 18, 19, 20, 21, 25, 100, 101, 150 ] # search files failed tag permission @@ -1775,6 +1773,10 @@ class TestClientAPI( unittest.TestCase ): # + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) + + HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) + tags = [] path = '/get_files/search_files?tags={}'.format( urllib.parse.quote( json.dumps( tags ) ) ) @@ -1789,6 +1791,10 @@ class TestClientAPI( unittest.TestCase ): # + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) + + HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) + tags = [ 'kino' ] path = '/get_files/search_files?tags={}'.format( urllib.parse.quote( json.dumps( tags ) ) ) @@ -1803,6 +1809,10 @@ class TestClientAPI( unittest.TestCase ): # search files + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) + + HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) + tags = [ 'kino', 'green' ] path = '/get_files/search_files?tags={}'.format( urllib.parse.quote( json.dumps( tags ) ) ) @@ -1819,10 +1829,140 @@ class TestClientAPI( unittest.TestCase ): d = json.loads( text ) - expected_answer = { 'file_ids' : hash_ids } + expected_answer = { 'file_ids' : list( sample_hash_ids ) } self.assertEqual( d, expected_answer ) + # sort + + # this just tests if it parses, we don't have a full test for read params yet + + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) + + HG.test_controller.SetRead( 
'file_query_ids', set( sample_hash_ids ) ) + + tags = [ 'kino', 'green' ] + + path = '/get_files/search_files?tags={}&file_sort_type={}'.format( urllib.parse.quote( json.dumps( tags ) ), CC.SORT_FILES_BY_FRAMERATE ) + + connection.request( 'GET', path, headers = headers ) + + response = connection.getresponse() + + data = response.read() + + text = str( data, 'utf-8' ) + + self.assertEqual( response.status, 200 ) + + # sort + + # this just tests if it parses, we don't have a full test for read params yet + + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) + + HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) + + tags = [ 'kino', 'green' ] + + path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}'.format( urllib.parse.quote( json.dumps( tags ) ), CC.SORT_FILES_BY_FRAMERATE, 'true' ) + + connection.request( 'GET', path, headers = headers ) + + response = connection.getresponse() + + data = response.read() + + text = str( data, 'utf-8' ) + + self.assertEqual( response.status, 200 ) + + # file domain + + # this just tests if it parses, we don't have a full test for read params yet + + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) + + HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) + + tags = [ 'kino', 'green' ] + + path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_name={}'.format( + urllib.parse.quote( json.dumps( tags ) ), + CC.SORT_FILES_BY_FRAMERATE, + 'true', + 'trash' + ) + + connection.request( 'GET', path, headers = headers ) + + response = connection.getresponse() + + data = response.read() + + text = str( data, 'utf-8' ) + + self.assertEqual( response.status, 200 ) + + # file and tag domain + + # this just tests if it parses, we don't have a full test for read params yet + + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) + + HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) + + tags = [ 'kino', 'green' ] + + path = 
'/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_key={}&tag_service_name={}'.format( + urllib.parse.quote( json.dumps( tags ) ), + CC.SORT_FILES_BY_FRAMERATE, + 'true', + CC.TRASH_SERVICE_KEY.hex(), + 'all%20known%20tags' + ) + + connection.request( 'GET', path, headers = headers ) + + response = connection.getresponse() + + data = response.read() + + text = str( data, 'utf-8' ) + + self.assertEqual( response.status, 200 ) + + # file and tag domain + + # this just tests if it parses, we don't have a full test for read params yet + + sample_hash_ids = set( random.sample( hash_ids, 3 ) ) + + HG.test_controller.SetRead( 'file_query_ids', set( sample_hash_ids ) ) + + tags = [ 'kino', 'green' ] + + path = '/get_files/search_files?tags={}&file_sort_type={}&file_sort_asc={}&file_service_key={}&tag_service_key={}'.format( + urllib.parse.quote( json.dumps( tags ) ), + CC.SORT_FILES_BY_FRAMERATE, + 'true', + CC.COMBINED_FILE_SERVICE_KEY.hex(), + CC.COMBINED_TAG_SERVICE_KEY.hex() + ) + + connection.request( 'GET', path, headers = headers ) + + response = connection.getresponse() + + data = response.read() + + text = str( data, 'utf-8' ) + + self.assertEqual( response.status, 400 ) + + + def _test_search_files_predicate_parsing( self, connection, set_up_permissions ): + # some file search param parsing class PretendRequest( object ): @@ -1927,6 +2067,9 @@ class TestClientAPI( unittest.TestCase ): self.assertEqual( set( predicates ), set( expected_predicates ) ) + + def _test_file_metadata( self, connection, set_up_permissions ): + # test file metadata api_permissions = set_up_permissions[ 'search_green_files' ] @@ -2192,8 +2335,12 @@ class TestClientAPI( unittest.TestCase ): self.assertEqual( d, expected_detailed_known_urls_metadata_result ) + + def _test_get_files( self, connection, set_up_permissions ): + # files and thumbs + file_id = 1 hash = 
b'\xadm5\x99\xa6\xc4\x89\xa5u\xeb\x19\xc0&\xfa\xce\x97\xa9\xcdey\xe7G(\xb0\xce\x94\xa6\x01\xd22\xf3\xc3' hash_hex = hash.hex() @@ -2532,6 +2679,9 @@ class TestClientAPI( unittest.TestCase ): self._test_manage_cookies( connection, set_up_permissions ) self._test_manage_pages( connection, set_up_permissions ) self._test_search_files( connection, set_up_permissions ) + self._test_search_files_predicate_parsing( connection, set_up_permissions ) + self._test_file_metadata( connection, set_up_permissions ) + self._test_get_files( connection, set_up_permissions ) self._test_permission_failures( connection, set_up_permissions ) self._test_cors_fails( connection ) diff --git a/hydrus/test/TestClientTags.py b/hydrus/test/TestClientTags.py index 5fd69875..430c41a6 100644 --- a/hydrus/test/TestClientTags.py +++ b/hydrus/test/TestClientTags.py @@ -1853,6 +1853,9 @@ class TestTagObjects( unittest.TestCase ): ( 'system:filetype is jpeg, png, apng', "system:filetype = image/jpg, image/png, apng" ), ( 'system:sha256 hash is in 3 hashes', "system:hash = abcdef01 abcdef02 abcdef03" ), ( 'system:md5 hash is in 3 hashes', "system:hash = abcdef01 abcdef, abcdef04 md5" ), + ( 'system:md5 hash is abcdef01', "system:hash = abcdef01 md5" ), + ( 'system:md5 hash is abcdef01', "system:Hash = Abcdef01 md5" ), + ( 'system:sha256 hash is abcdef0102', "system:hash = abcdef0102" ), ( 'system:modified time: since 7 years 1 month ago', "system:modified date < 7 years 45 days 70h" ), ( 'system:modified time: since 2011-06-04', "system:modified date > 2011-06-04" ), ( 'system:modified time: before 7 years 2 months ago', "system:date modified > 7 years 2 months" ), @@ -1886,12 +1889,12 @@ class TestTagObjects( unittest.TestCase ): ( 'system:media viewtime < 1 day 1 hour', "system:media viewtime < 1 days 1 hour 0 minutes" ), ( 'system:all viewtime > 1 hour 1 minute', "system:all viewtime > 1 hours 100 seconds" ), ( 'system:preview viewtime \u2248 2 days 7 hours', "system:preview viewtime ~= 1 day 30 
hours 100 minutes 90s" ), - ( 'system:has a url matching regex: reg.*ex', " system:has url matching regex reg.*ex " ), - ( 'system:does not have a url matching regex: test', "system:does not have a url matching regex test" ), - ( 'system:has url: https://test.test/', "system:has_url https://test.test/" ), - ( 'system:does not have url: test url here', " system:doesn't have url test url here " ), - ( 'system:has a url with domain: test.com', "system:has domain test.com" ), - ( 'system:does not have a url with domain: test.com', "system:doesn't have domain test.com" ), + ( 'system:has a url matching regex: index\\.php', " system:has url matching regex index\\.php" ), + ( 'system:does not have a url matching regex: index\\.php', "system:does not have a url matching regex index\\.php" ), + ( 'system:has url: https://safebooru.donmai.us/posts/4695284', "system:has_url https://safebooru.donmai.us/posts/4695284" ), + ( 'system:does not have url: https://safebooru.donmai.us/posts/4695284', " system:doesn't have url https://safebooru.donmai.us/posts/4695284 " ), + ( 'system:has a url with domain: safebooru.com', "system:has domain safebooru.com" ), + ( 'system:does not have a url with domain: safebooru.com', "system:doesn't have domain safebooru.com" ), ( 'system:has safebooru file page url', "system:has a url with class safebooru file page" ), ( 'system:does not have safebooru file page url', "system:doesn't have a url with url class safebooru file page " ), ( 'system:page less than 5', "system:tag as number page < 5" )
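The recurring change in the server DB hunks above is mechanical: every direct `self._c.execute(...)` / `self._c.executemany(...)` call is replaced with `self._Execute(...)` / `self._ExecuteMany(...)`, and `_CloseDBCursor` / `_InitDBCursor` become `_CloseDBConnection` / `_InitDBConnection`. The sketch below illustrates the pattern under assumptions: the method names mirror the diff, but the internals (an in-memory sqlite database, bare passthrough wrappers) are illustrative guesses, not hydrus's real implementation — a real wrapper of this kind typically centralises logging, profiling, or cursor reconnection in one place.

```python
import sqlite3

class DBBase:
    """Minimal sketch of the cursor-wrapper pattern this diff migrates to.

    Callers use _Execute / _ExecuteMany instead of touching self._c
    directly, so connection handling can change in one place. The names
    mirror the diff; the bodies here are illustrative, not hydrus's code.
    """

    def _InitDBConnection( self ):

        # in-memory db for illustration; the real class opens files on disk
        self._db = sqlite3.connect( ':memory:' )
        self._c = self._db.cursor()

    def _CloseDBConnection( self ):

        self._c.close()
        self._db.close()

    def _Execute( self, query, *args ):

        # thin passthrough; a real wrapper might also log or profile queries
        return self._c.execute( query, *args )

    def _ExecuteMany( self, query, args_iterator ):

        return self._c.executemany( query, args_iterator )


db = DBBase()

db._InitDBConnection()

db._Execute( 'CREATE TABLE account_scores ( account_id INTEGER, score INTEGER );' )

db._ExecuteMany( 'INSERT INTO account_scores ( account_id, score ) VALUES ( ?, ? );', [ ( 1, 5 ), ( 2, 7 ) ] )

( total, ) = db._Execute( 'SELECT SUM( score ) FROM account_scores;' ).fetchone()

db._CloseDBConnection()
```

Because the wrappers keep the `sqlite3` cursor signatures, the migration in the diff is a pure rename at every call site, which is why the hunks change only the method name and never the SQL or the bound parameters.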