changelog
version 401
- subscriptions:
- as subs can now load more flexibly, several previously hardcoded waits are eliminated:
- - the subscriptions manager now waits only three seconds after initial session load to boot (previously 15)
- - the subscriptions manager now wakes as soon as the subscriptions dialog is ok'd or cancelled
- - a timing calculation that could delay a sub's work by five to fifteen minutes if more queries would come due for sync in that window (previously used to batch work and reduce db read/write) is now eliminated--subs now start as soon as any query is due. if you were ever confused why a query that seemed due did not boot after a dialog ok or other wake-up event, this _should_ no longer happen
- re-added the import/export/duplicate buttons to manage subs. export and dupe may need to do db work for a couple of seconds and will show a yes/no confirmation on larger jobs
- the import button on manage subs accepts and converts the old 'legacy' subscription object format, including a copy/paste of the objects backed up to disk in the v400 update
- fixed an issue where creating a subscription query and then deleting it in the same manage subs dialog session would write surplus data to the db (which the next dialog launch would note and clear out)
- an unusual error with pre-run domain checking, exposed by the new subscription code and by e621 subs (whose gallery url format also recently changed), is now fixed
- .
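The scheduling change above can be sketched in a few lines. This is an illustrative reconstruction, not the actual hydrus code: `next_work_times` stands in for a subscription's list of per-query due timestamps.

```python
import time

def get_best_next_work_time(next_work_times):
    # old behaviour (removed): queries coming due within a five-minute window
    # of the earliest one were batched together, which could delay an
    # already-due query by several minutes
    # new behaviour: boot as soon as the earliest query is due
    if len(next_work_times) == 0:
        return None
    
    return min(next_work_times)

# three queries: one due now, two due a little later -- work starts now
now = time.time()
print(get_best_next_work_time([now, now + 50, now + 200]) == now)
```

The batching only ever saved a little db read/write, so dropping it trades a minor efficiency for predictable wake-ups.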
- issue tracker:
- the Github issue tracker (https://github.com/hydrusnetwork/hydrus/issues) is turned on again! it is now run by a team of volunteer users. the idea is to merge duplicate feature suggestions on a proper platform and put some discussion and prioritisation into idea development before it gets to my desk, so I can be more focused and productive and so 95% of feature suggestions are not simply banished to the shadow realm of the back of my todo
- this is mostly intended for wishlist and other suggestions, as the tsunami was getting too much for me to handle, but we'll see how it goes for bug reports as well. I'll still take any sort of report through my normal channels if you are uncomfortable with github, or if you wish for me to forward an item to the issue tracker anonymously
- the website, help documents, and hydrus help menu links have been updated regarding the issue tracker
- .
- the rest:
- improved how the database 'update default downloader objects' job works, ensuring that new defaults simply take the place of existing objects and do not break/reset existing url class to parser links
- tightened up how automatic url class to parser linking works, eliminating some surplus and potentially bad data related to api links. furthermore, whenever the links between url classes and parsers update, existing surplus data, which may creep in when api links change, is now cleaned from the data structure
- rolling out an updated e621 url class and parser to deal with their alternate gallery url format
- rolling out an updated derpibooru parser that will link to the new api class correctly
- thanks to a user's submission, rolling out updated versions of the new default nitter parsers that pull creator:username tags
- before every subprocess launch, and while waiting on all subprocess communication (e.g. to ffmpeg), the client now tests regularly for program shutdown. if a subscription is doing a file import job while the OS is shutting down, and that system shutdown would hang on a 'ffmpeg can't be launched now' dialog, the hydrus client should now notice and bomb out rather than waiting for that never-running ffmpeg. this may not fix all instances of this issue, and further feedback on the client not closing down cleanly with the OS is welcome
- when adding a new path to the 'migrate database' panel, any symbolic links will be converted to their canonical equivalents
- added some location checks and appropriate errors for when the database is doing file storage rebalancing
- fixed an issue uploading swfs, video, or audio to the server when it is launched from a frozen executable build
- misc code cleanup
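The symbolic link conversion in the 'migrate database' panel amounts to canonicalising the chosen path before comparing it to already-entered locations, so a symlink and its target cannot be registered as two 'different' storage locations. A minimal sketch under that assumption (the function and dict names here are illustrative, and the symlink demo assumes a POSIX system):

```python
import os
import tempfile

def add_path(path, locations_to_ideal_weights, starting_weight=1):
    # canonicalise first: resolves symlinks and normalises the path
    path = os.path.realpath(path)
    
    if path in locations_to_ideal_weights:
        raise ValueError('You already have that location entered!')
    
    locations_to_ideal_weights[path] = starting_weight

locations = {}

real_dir = tempfile.mkdtemp()
add_path(real_dir, locations)

# a symlink to the same directory resolves to the same canonical path
link = real_dir + '.link'
os.symlink(real_dir, link)

try:
    add_path(link, locations)
except ValueError:
    print('duplicate location rejected')
```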
version 400
- subscription data overhaul:

diff --git a/help/contact.html b/help/contact.html
index deb456ac..e5b1ea9a 100755
--- a/help/contact.html
+++ b/help/contact.html
@@ -7,19 +7,20 @@
 homepage
+github
+issue tracker
 8kun board (endchan bunker (.org))
 tumblr (rss)
 new downloads
 old downloads
-github
 discord
diff --git a/hydrus/client/ClientConstants.py b/hydrus/client/ClientConstants.py
index 46f08d1c..f8f8cfca 100644
--- a/hydrus/client/ClientConstants.py
+++ b/hydrus/client/ClientConstants.py
@@ -532,6 +532,7 @@ class GlobalPixmaps( object ):
         self.tumblr = QG.QPixmap( os.path.join( HC.STATIC_DIR, 'tumblr.png' ) )
         self.discord = QG.QPixmap( os.path.join( HC.STATIC_DIR, 'discord.png' ) )
         self.patreon = QG.QPixmap( os.path.join( HC.STATIC_DIR, 'patreon.png' ) )
+        self.github = QG.QPixmap( os.path.join( HC.STATIC_DIR, 'github.png' ) )
         
         self.first = QG.QPixmap( os.path.join( HC.STATIC_DIR, 'first.png' ) )
         self.previous = QG.QPixmap( os.path.join( HC.STATIC_DIR, 'previous.png' ) )
diff --git a/hydrus/client/ClientController.py b/hydrus/client/ClientController.py
index fcfb7c4e..b3945311 100644
--- a/hydrus/client/ClientController.py
+++ b/hydrus/client/ClientController.py
@@ -133,6 +133,9 @@ class App( QW.QApplication ):
         
         HG.client_controller.gui.SaveAndClose()
         
+        HG.view_shutdown = True
+        HG.model_shutdown = True
+        
 class Controller( HydrusController.HydrusController ):
diff --git a/hydrus/client/ClientDB.py b/hydrus/client/ClientDB.py
index 4dd6a987..60f2c49d 100644
--- a/hydrus/client/ClientDB.py
+++ b/hydrus/client/ClientDB.py
@@ -11678,6 +11678,11 @@ class DB( HydrusDB.HydrusDB ):
     def _RelocateClientFiles( self, prefix, source, dest ):
         
+        if not os.path.exists( dest ):
+            
+            raise Exception( 'Was commanded to move prefix "{}" from "{}" to "{}", but that destination does not exist!'.format( prefix, source, dest ) )
+            
+        
         full_source = os.path.join( source, prefix )
         full_dest = os.path.join( dest, prefix )
@@ -14774,6 +14779,39 @@ class DB( HydrusDB.HydrusDB ):
         
+        if version == 400:
+            
+            try:
+                
+                domain_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
+                
+                domain_manager.Initialise()
+                
+                #
+                
+                domain_manager.OverwriteDefaultURLClasses( [ 'e621 gallery page (alternate format)' ] )
+                
+                #
+                
+                domain_manager.OverwriteDefaultParsers( [ 'nitter tweet parser', 'nitter tweet parser (video from koto.reisen)', 'e621 gallery page parser', 'derpibooru gallery page api parser' ] )
+                
+                #
+                
+                domain_manager.TryToLinkURLClassesAndParsers()
+                
+                #
+                
+                self._SetJSONDump( domain_manager )
+                
+            except Exception as e:
+                
+                HydrusData.PrintException( e )
+                
+                message = 'Trying to update some downloaders failed! Please let hydrus dev know!'
+                
+                self.pub_initial_message( message )
+                
+            
+        
         self._controller.pub( 'splash_set_title_text', 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
diff --git a/hydrus/client/ClientManagers.py b/hydrus/client/ClientManagers.py
index 1e24cd0e..4c7cfe30 100644
--- a/hydrus/client/ClientManagers.py
+++ b/hydrus/client/ClientManagers.py
@@ -489,8 +489,8 @@ class ServicesManager( object ):
         self._controller = controller
         self._lock = threading.Lock()
         
-        self._keys_to_services: typing.Dict[ bytes, ClientServices.Service ] = {}
-        self._services_sorted: typing.List[ ClientServices.Service ] = []
+        self._keys_to_services = {}
+        self._services_sorted = []
         
         self.RefreshServices()
diff --git a/hydrus/client/ClientParsing.py b/hydrus/client/ClientParsing.py
index aeff0255..8aa3117b 100644
--- a/hydrus/client/ClientParsing.py
+++ b/hydrus/client/ClientParsing.py
@@ -3648,7 +3648,7 @@ class StringProcessor( HydrusSerialisable.SerialisableBase ):
         StringProcessingStep.__init__( self )
         
-        self._processing_steps: typing.List[ StringProcessingStep ] = []
+        self._processing_steps = []
         
    def _GetSerialisableInfo( self ):
diff --git a/hydrus/client/gui/ClientGUI.py b/hydrus/client/gui/ClientGUI.py
index 2bfd25cf..afd6e688 100644
--- a/hydrus/client/gui/ClientGUI.py
+++ b/hydrus/client/gui/ClientGUI.py
@@ -798,6 +798,8 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
         sbp_kwargs = HydrusData.GetSubprocessKWArgs( hide_terminal = False )
         
+        HydrusData.CheckProgramIsNotShuttingDown()
+        
         subprocess.Popen( cmd, **sbp_kwargs )
         
         time_waited = 0
@@ -2718,6 +2720,10 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
             
             HG.client_controller.subscriptions_manager.SetSubscriptions( subscriptions )
             
+        else:
+            
+            HG.client_controller.subscriptions_manager.Wake()
+            
         
@@ -4650,6 +4656,8 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
         links = QW.QMenu( menu )
         
         site = ClientGUIMenus.AppendMenuBitmapItem( links, 'site', 'Open hydrus\'s website, which is mostly a mirror of the local help.', CC.global_pixmaps().file_repository, ClientPaths.LaunchURLInWebBrowser, 'https://hydrusnetwork.github.io/hydrus/' )
+        site = ClientGUIMenus.AppendMenuBitmapItem( links, 'github repository', 'Open the hydrus github repository.', CC.global_pixmaps().github, ClientPaths.LaunchURLInWebBrowser, 'https://github.com/hydrusnetwork/hydrus' )
+        site = ClientGUIMenus.AppendMenuBitmapItem( links, 'issue tracker', 'Open the github issue tracker, which is run by users.', CC.global_pixmaps().github, ClientPaths.LaunchURLInWebBrowser, 'https://github.com/hydrusnetwork/hydrus/issues' )
         site = ClientGUIMenus.AppendMenuBitmapItem( links, '8kun board', 'Open hydrus dev\'s 8kun board, where he makes release posts and other status updates.', CC.global_pixmaps().eight_kun, ClientPaths.LaunchURLInWebBrowser, 'https://8kun.top/hydrus/index.html' )
         site = ClientGUIMenus.AppendMenuItem( links, 'Endchan board bunker', 'Open hydrus dev\'s Endchan board, the bunker for when 8kun is unavailable. Try .org if .net is unavailable.', ClientPaths.LaunchURLInWebBrowser, 'https://endchan.net/hydrus/index.html' )
         site = ClientGUIMenus.AppendMenuBitmapItem( links, 'twitter', 'Open hydrus dev\'s twitter, where he makes general progress updates and emergency notifications.', CC.global_pixmaps().twitter, ClientPaths.LaunchURLInWebBrowser, 'https://twitter.com/hydrusnetwork' )
diff --git a/hydrus/client/gui/ClientGUIScrolledPanelsEdit.py b/hydrus/client/gui/ClientGUIScrolledPanelsEdit.py
index e26443f6..3119885f 100644
--- a/hydrus/client/gui/ClientGUIScrolledPanelsEdit.py
+++ b/hydrus/client/gui/ClientGUIScrolledPanelsEdit.py
@@ -363,7 +363,7 @@ class EditDefaultTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         
         #
         
-        eligible_url_classes = [ url_class for url_class in url_classes if url_class.GetURLType() in ( HC.URL_TYPE_POST, HC.URL_TYPE_WATCHABLE ) and url_class.GetMatchKey() in self._url_class_keys_to_parser_keys ]
+        eligible_url_classes = [ url_class for url_class in url_classes if url_class.GetURLType() in ( HC.URL_TYPE_POST, HC.URL_TYPE_WATCHABLE ) and url_class.GetClassKey() in self._url_class_keys_to_parser_keys ]
         
         self._list_ctrl.AddDatas( eligible_url_classes )
@@ -388,7 +388,7 @@ class EditDefaultTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
     def _ConvertDataToListCtrlTuples( self, url_class ):
         
-        url_class_key = url_class.GetMatchKey()
+        url_class_key = url_class.GetClassKey()
         
         name = url_class.GetName()
         url_type = url_class.GetURLType()
@@ -422,7 +422,7 @@ class EditDefaultTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         for url_class in url_classes_to_clear:
             
-            url_class_key = url_class.GetMatchKey()
+            url_class_key = url_class.GetClassKey()
             
             if url_class_key in self._url_class_keys_to_tag_import_options:
@@ -442,7 +442,7 @@ class EditDefaultTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         url_class = selected[0]
         
-        url_class_key = url_class.GetMatchKey()
+        url_class_key = url_class.GetClassKey()
         
         if url_class_key in self._url_class_keys_to_tag_import_options:
@@ -472,7 +472,7 @@ class EditDefaultTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         if dlg.exec() == QW.QDialog.Accepted:
             
-            url_class_key = url_class.GetMatchKey()
+            url_class_key = url_class.GetClassKey()
             
             tag_import_options = panel.GetValue()
@@ -490,7 +490,7 @@ class EditDefaultTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
     def _GetDefaultTagImportOptions( self, url_class ):
         
-        url_class_key = url_class.GetMatchKey()
+        url_class_key = url_class.GetClassKey()
         
         if url_class_key in self._url_class_keys_to_tag_import_options:
@@ -525,7 +525,7 @@ class EditDefaultTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         url_class = selected[0]
         
-        url_class_key = url_class.GetMatchKey()
+        url_class_key = url_class.GetClassKey()
         
         if url_class_key in self._url_class_keys_to_tag_import_options:
@@ -560,7 +560,7 @@ class EditDefaultTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         for url_class in self._list_ctrl.GetData( only_selected = True ):
             
-            url_class_key = url_class.GetMatchKey()
+            url_class_key = url_class.GetClassKey()
             
             self._url_class_keys_to_tag_import_options[ url_class_key ] = tag_import_options.Duplicate()
@@ -929,7 +929,7 @@ class EditDownloaderDisplayPanel( ClientGUIScrolledPanels.EditPanel ):
         self._gug_keys_to_gugs = { gug.GetGUGKey() : gug for gug in self._gugs }
         
         self._url_classes = url_classes
-        self._url_class_keys_to_url_classes = { url_class.GetMatchKey() : url_class for url_class in self._url_classes }
+        self._url_class_keys_to_url_classes = { url_class.GetClassKey() : url_class for url_class in self._url_classes }
         
         self._network_engine = network_engine
@@ -3913,7 +3913,7 @@ class EditTagImportOptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         url_classes = domain_manager.GetURLClasses()
         
-        url_class_keys_to_url_classes = { url_class.GetMatchKey() : url_class for url_class in url_classes }
+        url_class_keys_to_url_classes = { url_class.GetClassKey() : url_class for url_class in url_classes }
         
         url_class_names_and_default_tag_import_options = sorted( ( ( url_class_keys_to_url_classes[ url_class_key ].GetName(), url_class_keys_to_default_tag_import_options[ url_class_key ] ) for url_class_key in list( url_class_keys_to_default_tag_import_options.keys() ) if url_class_key in url_class_keys_to_url_classes ) )
@@ -5117,7 +5117,7 @@ class EditURLClassPanel( ClientGUIScrolledPanels.EditPanel ):
     def _GetValue( self ):
         
-        url_class_key = self._original_url_class.GetMatchKey()
+        url_class_key = self._original_url_class.GetClassKey()
         
         name = self._name.text()
         url_type = self._url_type.GetValue()
         preferred_scheme = self._preferred_scheme.GetValue()
@@ -5495,7 +5495,7 @@ class EditURLClassesPanel( ClientGUIScrolledPanels.EditPanel ):
         HydrusSerialisable.SetNonDupeName( url_class, self._GetExistingNames() )
         
-        url_class.RegenerateMatchKey()
+        url_class.RegenerateClassKey()
         
         self._list_ctrl.AddDatas( ( url_class, ) )
@@ -5614,7 +5614,7 @@ class EditURLClassLinksPanel( ClientGUIScrolledPanels.EditPanel ):
         ClientGUIScrolledPanels.EditPanel.__init__( self, parent )
         
         self._url_classes = url_classes
-        self._url_class_keys_to_url_classes = { url_class.GetMatchKey() : url_class for url_class in self._url_classes }
+        self._url_class_keys_to_url_classes = { url_class.GetClassKey() : url_class for url_class in self._url_classes }
         
         self._parsers = parsers
         self._parser_keys_to_parsers = { parser.GetParserKey() : parser for parser in self._parsers }
@@ -5672,7 +5672,7 @@ class EditURLClassLinksPanel( ClientGUIScrolledPanels.EditPanel ):
                 continue
                 
-            url_class_key = url_class.GetMatchKey()
+            url_class_key = url_class.GetClassKey()
            
            if url_class_key in url_class_keys_to_parser_keys:
diff --git a/hydrus/client/gui/ClientGUIScrolledPanelsReview.py b/hydrus/client/gui/ClientGUIScrolledPanelsReview.py
index 39e6396d..fc806b33 100644
--- a/hydrus/client/gui/ClientGUIScrolledPanelsReview.py
+++ b/hydrus/client/gui/ClientGUIScrolledPanelsReview.py
@@ -170,6 +170,8 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
     def _AddPath( self, path, starting_weight = 1 ):
         
+        path = os.path.realpath( path )
+        
         if path in self._locations_to_ideal_weights:
             
             QW.QMessageBox.warning( self, 'Warning', 'You already have that location entered!' )
@@ -803,6 +805,8 @@ class MigrateDatabasePanel( ClientGUIScrolledPanels.ReviewPanel ):
             path = dlg.GetPath()
             
+            path = os.path.realpath( path )
+            
             if path in self._locations_to_ideal_weights:
                 
                 QW.QMessageBox.warning( self, 'Warning', 'That path already exists as a regular file location! Please choose another.' )
diff --git a/hydrus/client/gui/ClientGUIStringControls.py b/hydrus/client/gui/ClientGUIStringControls.py
index 92f634e7..eb73c04b 100644
--- a/hydrus/client/gui/ClientGUIStringControls.py
+++ b/hydrus/client/gui/ClientGUIStringControls.py
@@ -337,7 +337,7 @@ class StringToStringDictButton( ClientGUICommon.BetterButton ):
         ClientGUICommon.BetterButton.__init__( self, parent, label, self._Edit )
         
-        self._value: typing.Dict[ str, str ] = {}
+        self._value = {}
         
    def _Edit( self ):
diff --git a/hydrus/client/gui/ClientGUISubscriptions.py b/hydrus/client/gui/ClientGUISubscriptions.py
index d209edac..2492e0c6 100644
--- a/hydrus/client/gui/ClientGUISubscriptions.py
+++ b/hydrus/client/gui/ClientGUISubscriptions.py
@@ -1,4 +1,6 @@
 import os
+import threading
+import time
 import typing
 
 from qtpy import QtCore as QC
@@ -1197,8 +1199,7 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         subscriptions_panel.AddSeparator()
         
-        # disabled for now
-        #subscriptions_panel.AddImportExportButtons( ( ClientImportSubscriptions.Subscription, ), self._AddSubscription )
+        subscriptions_panel.AddImportExportButtons( ( ClientImportSubscriptionLegacy.SubscriptionLegacy, ClientImportSubscriptions.SubscriptionContainer ), self._AddSubscription, custom_get_callable = self._GetSelectedSubsAsExportableContainers )
         
         subscriptions_panel.NewButtonRow()
@@ -1248,7 +1249,62 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         self.widget().setLayout( vbox )
         
-    def _AddSubscription( self, subscription ):
+    def _AddSubscription( self, unknown_subscription ):
+        
+        if isinstance( unknown_subscription, ( ClientImportSubscriptionLegacy.SubscriptionLegacy, ClientImportSubscriptions.SubscriptionContainer ) ):
+            
+            if isinstance( unknown_subscription, ClientImportSubscriptionLegacy.SubscriptionLegacy ):
+                
+                ( subscription, query_log_containers ) = ClientImportSubscriptionLegacy.ConvertLegacySubscriptionToNew( unknown_subscription )
+                
+            elif isinstance( unknown_subscription, ClientImportSubscriptions.SubscriptionContainer ):
+                
+                subscription = unknown_subscription.subscription
+                query_log_containers = unknown_subscription.query_log_containers
+                
+            
+            old_names_to_query_log_containers = { query_log_container.GetName() : query_log_container for query_log_container in query_log_containers }
+            
+            there_were_missing_query_log_containers = False
+            
+            for query_header in subscription.GetQueryHeaders():
+                
+                old_query_log_container_name = query_header.GetQueryLogContainerName()
+                
+                new_query_log_container_name = ClientImportSubscriptionQuery.GenerateQueryLogContainerName()
+                
+                query_header.SetQueryLogContainerName( new_query_log_container_name )
+                
+                if old_query_log_container_name in old_names_to_query_log_containers:
+                    
+                    old_names_to_query_log_containers[ old_query_log_container_name ].SetName( new_query_log_container_name )
+                    
+                else:
+                    
+                    there_were_missing_query_log_containers = True
+                    
+                
+            
+            if there_were_missing_query_log_containers:
+                
+                message = 'When importing this subscription, "{}", there was missing log data! I will still let you add it, but some of its queries are incomplete. If you are ok with this, ok and then immediately re-open the manage subscriptions dialog to reinitialise the missing data back to zero (and clear any orphaned data that came with this). If you are not ok with this, cancel out now or cancel out of the whole manage subs dialog.'.format( subscription.GetName() )
+                
+                result = ClientGUIDialogsQuick.GetYesNo( self, message, title = 'missing query log data!', yes_label = 'import it anyway', no_label = 'back out now' )
+                
+                if result != QW.QDialog.Accepted:
+                    
+                    return
+                    
+                
+            
+            new_names_to_query_log_containers = { query_log_container.GetName() : query_log_container for query_log_container in query_log_containers }
+            
+            self._names_to_edited_query_log_containers.update( new_names_to_query_log_containers )
+            
+        elif isinstance( unknown_subscription, ClientImportSubscriptions.Subscription ):
+            
+            subscription = unknown_subscription
+            
         
         subscription.SetNonDupeName( self._GetExistingNames() )
@@ -1514,13 +1570,78 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         return names
         
-    def _GetExportObject( self ):
+    def _GetSelectedSubsAsExportableContainers( self ):
+        
+        subs_to_export = self._subscriptions.GetData( only_selected = True )
+        
+        required_query_log_headers = []
+        
+        for sub in subs_to_export:
+            
+            required_query_log_headers.extend( sub.GetQueryHeaders() )
+            
+        
+        missing_query_headers = [ query_header for query_header in required_query_log_headers if query_header.GetQueryLogContainerName() not in self._names_to_edited_query_log_containers ]
+        
+        if len( missing_query_headers ) > 0:
+            
+            if len( missing_query_headers ) > 25:
+                
+                message = 'Exporting or duplicating the current selection means reading query data for {} queries from the database. This may take just a couple of seconds, or, for hundreds of thousands of cached URLs, it could be a couple of minutes (and a whack of memory). Do not panic, it will get there in the end. Do you want to do the export?'.format( HydrusData.ToHumanInt( len( missing_query_headers ) ) )
+                
+                result = ClientGUIDialogsQuick.GetYesNo( self, message )
+                
+                if result != QW.QDialog.Accepted:
+                    
+                    return None
+                    
+                
+            
+            self.setEnabled( False )
+            
+            done = threading.Event()
+            
+            done_call = lambda: done.set()
+            
+            HG.client_controller.CallToThread( AsyncGetQueryLogContainers, self, missing_query_headers, self._CATCHQueryLogContainers, done_call )
+            
+            while True:
+                
+                if not QP.isValid( self ):
+                    
+                    return None
+                    
+                
+                if done.is_set():
+                    
+                    break
+                    
+                else:
+                    
+                    time.sleep( 0.25 )
+                    
+                
+                QW.QApplication.instance().processEvents()
+                
+            
+            self.setEnabled( True )
+            
         
         to_export = HydrusSerialisable.SerialisableList()
         
-        for subscription in self._subscriptions.GetData( only_selected = True ):
+        for sub in subs_to_export:
             
-            to_export.append( subscription )
+            query_log_container_names = [ query_header.GetQueryLogContainerName() for query_header in sub.GetQueryHeaders() ]
+            
+            query_log_containers = [ self._names_to_edited_query_log_containers[ query_log_container_name ] for query_log_container_name in query_log_container_names ]
+            
+            subscription_container = ClientImportSubscriptions.SubscriptionContainer()
+            
+            subscription_container.subscription = sub
+            subscription_container.query_log_containers = HydrusSerialisable.SerialisableList( query_log_containers )
+            
+            # duplicate important here to make sure we aren't linked with existing objects on a dupe call
+            to_export.append( subscription_container.Duplicate() )
             
         
         if len( to_export ) == 0:
@@ -1537,32 +1658,6 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         
-    def _ImportObject( self, obj ):
-        
-        if isinstance( obj, HydrusSerialisable.SerialisableList ):
-            
-            for sub_obj in obj:
-                
-                self._ImportObject( sub_obj )
-                
-            
-        else:
-            
-            if isinstance( obj, ClientImportSubscriptions.Subscription ):
-                
-                subscription = obj
-                
-                subscription.SetNonDupeName( self._GetExistingNames() )
-                
-                self._subscriptions.AddDatas( ( subscription, ) )
-                
-            else:
-                
-                QW.QMessageBox.warning( self, 'Warning', 'That was not a subscription--it was a: '+type(obj).__name__ )
-                
-            
-        
     def _STARTReset( self ):
         
         message = 'Resetting these subscriptions will delete all their remembered urls, meaning when they next run, they will try to download them all over again. This may be expensive in time and data. Only do it if you are willing to wait. Do you want to do it?'
@@ -1762,16 +1857,18 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
         subscriptions = self._subscriptions.GetData()
         
-        edited_query_log_containers = list( self._names_to_edited_query_log_containers.values() )
-        
-        new_query_log_container_names = set()
+        required_query_log_container_names = set()
         
         for subscription in subscriptions:
             
-            new_query_log_container_names.update( subscription.GetAllQueryLogContainerNames() )
+            required_query_log_container_names.update( subscription.GetAllQueryLogContainerNames() )
            
        
-        deletee_query_log_container_names = self._existing_query_log_container_names.difference( new_query_log_container_names )
+        edited_query_log_containers = list( self._names_to_edited_query_log_containers.values() )
+        
+        edited_query_log_containers = [ query_log_container for query_log_container in edited_query_log_containers if query_log_container.GetName() in required_query_log_container_names ]
+        
+        deletee_query_log_container_names = self._existing_query_log_container_names.difference( required_query_log_container_names )
         
         return ( subscriptions, edited_query_log_containers, deletee_query_log_container_names )
diff --git a/hydrus/client/gui/QtPorting.py b/hydrus/client/gui/QtPorting.py
index 8d8afde8..0bc6e178 100644
--- a/hydrus/client/gui/QtPorting.py
+++ b/hydrus/client/gui/QtPorting.py
@@ -1500,7 +1500,7 @@ class StatusBar( QW.QStatusBar ):
         QW.QStatusBar.__init__( self )
         
-        self._labels: typing.List[ QW.QLabel ] = []
+        self._labels = []
         
         for w in status_widths:
diff --git a/hydrus/client/importing/ClientImportFileSeeds.py b/hydrus/client/importing/ClientImportFileSeeds.py
index 076aaee2..1e45c96c 100644
--- a/hydrus/client/importing/ClientImportFileSeeds.py
+++ b/hydrus/client/importing/ClientImportFileSeeds.py
@@ -599,7 +599,14 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):
             post_url = self.file_seed_data
             
-            ( url_to_check, parser ) = HG.client_controller.network_engine.domain_manager.GetURLToFetchAndParser( post_url )
+            try:
+                
+                ( url_to_check, parser ) = HG.client_controller.network_engine.domain_manager.GetURLToFetchAndParser( post_url )
+                
+            except HydrusExceptions.URLClassException:
+                
+                url_to_check = post_url
+                
             
         else:
diff --git a/hydrus/client/importing/ClientImportGallerySeeds.py b/hydrus/client/importing/ClientImportGallerySeeds.py
index f33aa92e..d3b8542a 100644
--- a/hydrus/client/importing/ClientImportGallerySeeds.py
+++ b/hydrus/client/importing/ClientImportGallerySeeds.py
@@ -192,7 +192,14 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):
     def GetExampleNetworkJob( self, network_job_factory ):
         
-        ( url_to_check, parser ) = HG.client_controller.network_engine.domain_manager.GetURLToFetchAndParser( self.url )
+        try:
+            
+            ( url_to_check, parser ) = HG.client_controller.network_engine.domain_manager.GetURLToFetchAndParser( self.url )
+            
+        except HydrusExceptions.URLClassException:
+            
+            url_to_check = self.url
+            
         
         network_job = network_job_factory( 'GET', url_to_check )
diff --git a/hydrus/client/importing/ClientImportOptions.py b/hydrus/client/importing/ClientImportOptions.py
index d5335d55..509add66 100644
--- a/hydrus/client/importing/ClientImportOptions.py
+++ b/hydrus/client/importing/ClientImportOptions.py
@@ -609,8 +609,8 @@ class NoteImportOptions( HydrusSerialisable.SerialisableBase ):
         self._get_notes = False
         self._extend_existing_note_if_possible = True
         self._conflict_resolution = NOTE_IMPORT_CONFLICT_IGNORE
-        self._all_name_override: typing.Optional[ str ] = None
-        self._names_to_name_overrides: typing.Dict[ str, str ] = dict()
+        self._all_name_override = None
+        self._names_to_name_overrides = dict()
         
    def _GetSerialisableInfo( self ):
diff --git a/hydrus/client/importing/ClientImportSimpleURLs.py b/hydrus/client/importing/ClientImportSimpleURLs.py
index c50213f5..1f99ed99 100644
--- a/hydrus/client/importing/ClientImportSimpleURLs.py
+++ b/hydrus/client/importing/ClientImportSimpleURLs.py
@@ -994,7 +994,7 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
         with self._lock:
             
-            urls = [u for u in urls if len( u ) > 1] # > _1_ to take out the occasional whitespace
+            urls = [ u for u in urls if len( u ) > 1 ] # > _1_ to take out the occasional whitespace
             
             file_seeds = []
diff --git a/hydrus/client/importing/ClientImportSubscriptionLegacy.py b/hydrus/client/importing/ClientImportSubscriptionLegacy.py
index ef33be63..77bc86eb 100644
--- a/hydrus/client/importing/ClientImportSubscriptionLegacy.py
+++ b/hydrus/client/importing/ClientImportSubscriptionLegacy.py
@@ -482,7 +482,7 @@ class SubscriptionLegacy( HydrusSerialisable.SerialisableBaseNamed ):
         self._gug_key_and_name = gug_key_and_name
         
-        self._queries: typing.List[ SubscriptionQueryLegacy ] = []
+        self._queries = []
         
         new_options = HG.client_controller.new_options
@@ -1883,80 +1883,80 @@ class SubscriptionLegacy( HydrusSerialisable.SerialisableBaseNamed ):
 HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_LEGACY ] = SubscriptionLegacy
 
 def ConvertLegacySubscriptionToNew( legacy_subscription: SubscriptionLegacy ):
-    
-    (
-        name,
-        gug_key_and_name,
-        queries,
-        checker_options,
-        initial_file_limit,
-        periodic_file_limit,
-        paused,
-        file_import_options,
-        tag_import_options,
-        no_work_until,
-        no_work_until_reason
-    ) = legacy_subscription.ToTuple()
-    
-    subscription = ClientImportSubscriptions.Subscription( name )
-    
-    subscription.SetTuple(
-        gug_key_and_name,
-        checker_options,
-        initial_file_limit,
-        periodic_file_limit,
-        paused,
-        file_import_options,
-        tag_import_options,
-        no_work_until
-    )
-    
-    (
-        show_a_popup_while_working,
-        publish_files_to_popup_button,
-        publish_files_to_page,
-        publish_label_override,
-        merge_query_publish_events
-    ) = legacy_subscription.GetPresentationOptions()
-    
-    subscription.SetPresentationOptions(
-        show_a_popup_while_working,
-        publish_files_to_popup_button,
-        publish_files_to_page,
-        publish_label_override,
-        merge_query_publish_events
-    )
-    
-    query_headers = []
-    query_log_containers = []
-    
-    for query in queries:
-        
-        query_header = ClientImportSubscriptionQuery.SubscriptionQueryHeader()
-        
-        ( query_text, check_now, last_check_time, next_check_time, query_paused, status ) = query.ToTuple()
-        
-        query_header.SetQueryText( query_text )
-        query_header.SetDisplayName( query.GetDisplayName() )
-        query_header.SetCheckNow( check_now )
-        query_header.SetLastCheckTime( last_check_time )
-        query_header.SetNextCheckTime( next_check_time )
-        query_header.SetPaused( query_paused )
-        query_header.SetCheckerStatus( status )
-        query_header.SetTagImportOptions( query.GetTagImportOptions() )
-        
-        query_log_container = ClientImportSubscriptionQuery.SubscriptionQueryLogContainer( query_header.GetQueryLogContainerName() )
-        
-        query_log_container.SetGallerySeedLog( query.GetGallerySeedLog() )
-        query_log_container.SetFileSeedCache( query.GetFileSeedCache() )
-        
-        query_header.SyncToQueryLogContainer( checker_options, query_log_container )
-        
-        query_headers.append( query_header )
-        query_log_containers.append( query_log_container )
-        
-    
-    subscription.SetQueryHeaders( query_headers )
-    
-    return ( subscription, query_log_containers )
+    
+    (
+        name,
+        gug_key_and_name,
+        queries,
+        checker_options,
+        initial_file_limit,
+        periodic_file_limit,
+        paused,
+        file_import_options,
+        tag_import_options,
+        no_work_until,
+        no_work_until_reason
+    ) = legacy_subscription.ToTuple()
+    
+    subscription = ClientImportSubscriptions.Subscription( name )
+    
+    subscription.SetTuple(
+        gug_key_and_name,
+        checker_options,
+        initial_file_limit,
+        periodic_file_limit,
+        paused,
+        file_import_options,
+        tag_import_options,
+        no_work_until
+    )
+    
+    (
+        show_a_popup_while_working,
+        publish_files_to_popup_button,
+        publish_files_to_page,
+        publish_label_override,
+        merge_query_publish_events
+    ) = legacy_subscription.GetPresentationOptions()
+    
+    subscription.SetPresentationOptions(
+        show_a_popup_while_working,
+        publish_files_to_popup_button,
+        publish_files_to_page,
+        publish_label_override,
+        merge_query_publish_events
+    )
+    
+    query_headers = []
+    query_log_containers = []
+    
+    for query in queries:
+        
+        query_header = ClientImportSubscriptionQuery.SubscriptionQueryHeader()
+        
+        ( query_text, check_now, last_check_time, next_check_time, query_paused, status ) = query.ToTuple()
+        
+        query_header.SetQueryText( query_text )
+        query_header.SetDisplayName( query.GetDisplayName() )
+        query_header.SetCheckNow( check_now )
+        query_header.SetLastCheckTime( last_check_time )
+        query_header.SetNextCheckTime( next_check_time )
+        query_header.SetPaused( query_paused )
+        query_header.SetCheckerStatus( status )
+        query_header.SetTagImportOptions( query.GetTagImportOptions() )
+        
+        query_log_container = ClientImportSubscriptionQuery.SubscriptionQueryLogContainer( query_header.GetQueryLogContainerName() )
+        
+        query_log_container.SetGallerySeedLog( query.GetGallerySeedLog() )
+        query_log_container.SetFileSeedCache( query.GetFileSeedCache() )
+        
+        query_header.SyncToQueryLogContainer( checker_options, query_log_container )
+        
+        query_headers.append( query_header )
+        query_log_containers.append( query_log_container )
+        
+    
+    subscription.SetQueryHeaders( query_headers )
+    
+    return ( subscription, query_log_containers )
diff --git a/hydrus/client/importing/ClientImportSubscriptionQuery.py b/hydrus/client/importing/ClientImportSubscriptionQuery.py
index 850cb154..6027c1ab 100644
--- a/hydrus/client/importing/ClientImportSubscriptionQuery.py
+++ b/hydrus/client/importing/ClientImportSubscriptionQuery.py
@@ -14,7 +14,7 @@ from hydrus.client.networking import ClientNetworkingContexts
 from hydrus.client.networking import ClientNetworkingDomain
 from hydrus.client.networking import ClientNetworkingJobs
 
-def GenerateSubQueryName() -> str:
+def GenerateQueryLogContainerName() -> str:
     
     return HydrusData.GenerateKey().hex()
@@ -84,7 +84,7 @@ class SubscriptionQueryHeader( HydrusSerialisable.SerialisableBase ):
         HydrusSerialisable.SerialisableBase.__init__( self )
         
-        self._query_log_container_name = GenerateSubQueryName()
+        self._query_log_container_name = GenerateQueryLogContainerName()
         self._query_text = 'query'
         self._display_name = None
         self._check_now = False
@@ -694,6 +694,13 @@ class SubscriptionQueryHeader( HydrusSerialisable.SerialisableBase ):
         self._paused = paused
         
+    def SetQueryLogContainerName( self, query_log_container_name: str ):
+        
+        self._query_log_container_name = query_log_container_name
+        
+        self.SetQueryLogContainerStatus( LOG_CONTAINER_UNSYNCED )
+        
+    
     def SetQueryLogContainerStatus( self, log_container_status: int ):
         
         self._query_log_container_status = log_container_status
diff --git a/hydrus/client/importing/ClientImportSubscriptions.py b/hydrus/client/importing/ClientImportSubscriptions.py
index 17199b7b..e9e35e99 100644
--- a/hydrus/client/importing/ClientImportSubscriptions.py
+++ b/hydrus/client/importing/ClientImportSubscriptions.py
@@ -1057,22 +1057,7 @@ class Subscription( HydrusSerialisable.SerialisableBaseNamed ):
             return None
             
-        # if there are three queries due say fifty seconds after our first one runs, we should wait that little bit longer
-        LAUNCH_WINDOW = 5 * 60
-        
-        earliest_next_work_time = min( next_work_times )
-        
-        latest_nearby_next_work_time = max( ( work_time for work_time in next_work_times if work_time < earliest_next_work_time + LAUNCH_WINDOW ) )
-        
-        # but if we are expecting to launch it right now (e.g. check_now call), we won't wait
-        if HydrusData.TimeUntil( earliest_next_work_time ) < 60:
-            
-            best_next_work_time = earliest_next_work_time
-            
-        else:
-            
-            best_next_work_time = latest_nearby_next_work_time
-            
+        best_next_work_time = min( next_work_times )
         
         if not HydrusData.TimeHasPassed( self._no_work_until ):
@@ -1395,6 +1380,42 @@ class Subscription( HydrusSerialisable.SerialisableBaseNamed ):
 HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION ] = Subscription
 
+LOG_CONTAINER_SYNCED = 0
+LOG_CONTAINER_UNSYNCED = 1
+LOG_CONTAINER_MISSING = 2
+
+class SubscriptionContainer( HydrusSerialisable.SerialisableBase ):
+    
+    SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_CONTAINER
+    SERIALISABLE_NAME = 'Subscription with all data'
+    SERIALISABLE_VERSION = 1
+    
+    def __init__( self ):
+        
+        HydrusSerialisable.SerialisableBase.__init__( self )
+        
+        self.subscription = Subscription( 'default' )
+        self.query_log_containers = HydrusSerialisable.SerialisableList()
+        
+    
+    def _GetSerialisableInfo( self ):
+        
+        serialisable_subscription = self.subscription.GetSerialisableTuple()
+        serialisable_query_log_containers = self.query_log_containers.GetSerialisableTuple()
+        
+        return ( serialisable_subscription, serialisable_query_log_containers )
+        
+    
+    def _InitialiseFromSerialisableInfo( self, serialisable_info ):
+        
+        ( serialisable_subscription, serialisable_query_log_containers ) = serialisable_info
+        
+        self.subscription = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_subscription )
+        self.query_log_containers = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_query_log_containers )
+        
+    
+HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_CONTAINER ] = SubscriptionContainer
+
 class SubscriptionJob( object ):
     
     def __init__( self, controller, subscription ):
@@ -1485,7 +1506,7 @@ class SubscriptionsManager( object ):
        
        if len(
self._names_to_running_subscription_info ) > 0: - return 1 + return 0.5 else: @@ -1493,11 +1514,11 @@ class SubscriptionsManager( object ): if subscription is not None: - return 1 + return 0.5 else: - return 15 + return 5 @@ -1585,12 +1606,16 @@ class SubscriptionsManager( object ): else: - if just_finished_work: + p1 = HG.client_controller.options[ 'pause_subs_sync' ] + p2 = HG.client_controller.new_options.GetBoolean( 'pause_all_new_network_traffic' ) + + stopped_because_pause = p1 or p2 + + if just_finished_work and not stopped_because_pause: - # don't want to have a load/save cycle repeating over and over + # even with the new data format, we don't want to have a load/save cycle repeating _too_ much, just to stop any weird cascades # this sets min resolution of a single sub repeat cycle - # we'll clear it when we have data breakup done - BUFFER_TIME = 60 * 60 + BUFFER_TIME = 120 next_work_time = max( next_work_time, HydrusData.GetNow() + BUFFER_TIME ) @@ -1617,7 +1642,7 @@ class SubscriptionsManager( object ): try: - self._wake_event.wait( 15 ) + self._wake_event.wait( 3 ) while not ( HG.view_shutdown or self._shutdown ): @@ -1687,6 +1712,8 @@ class SubscriptionsManager( object ): self._UpdateSubscriptionInfo( subscription ) + self._wake_event.set() + def ShowSnapshot( self ): diff --git a/hydrus/client/networking/ClientNetworking.py b/hydrus/client/networking/ClientNetworking.py index 33dd5455..070f9d4b 100644 --- a/hydrus/client/networking/ClientNetworking.py +++ b/hydrus/client/networking/ClientNetworking.py @@ -204,7 +204,7 @@ class NetworkEngine( object ): return True - elif not job.BandwidthOK(): + elif not job.TryToStartBandwidth(): return True diff --git a/hydrus/client/networking/ClientNetworkingDomain.py b/hydrus/client/networking/ClientNetworkingDomain.py index f1ee85b4..6ea4be55 100644 --- a/hydrus/client/networking/ClientNetworkingDomain.py +++ b/hydrus/client/networking/ClientNetworkingDomain.py @@ -492,6 +492,22 @@ class NetworkDomainManager( 
HydrusSerialisable.SerialisableBase ): self._RecalcCache() + def _CleanURLClassKeysToParserKeys( self ): + + api_pairs = ConvertURLClassesIntoAPIPairs( self._url_classes ) + + # anything that goes to an api url will be parsed by that api's parser--it can't have its own + for ( a, b ) in api_pairs: + + unparseable_url_class_key = a.GetClassKey() + + if unparseable_url_class_key in self._url_class_keys_to_parser_keys: + + del self._url_class_keys_to_parser_keys[ unparseable_url_class_key ] + + + + def _GetDefaultTagImportOptionsForURL( self, url ): url_class = self._GetURLClass( url ) @@ -516,7 +532,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): return self._file_post_default_tag_import_options - url_class_key = url_class.GetMatchKey() + url_class_key = url_class.GetClassKey() if url_class_key in self._url_class_keys_to_default_tag_import_options: @@ -672,7 +688,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): raise HydrusExceptions.URLClassException( 'Could not find a parser for ' + url + '!' + os.linesep * 2 + str( e ) ) - url_class_key = parser_url_class.GetMatchKey() + url_class_key = parser_url_class.GetClassKey() if url_class_key in self._url_class_keys_to_parser_keys: @@ -815,7 +831,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): for url_class in url_classes: - url_class_key = url_class.GetMatchKey() + url_class_key = url_class.GetClassKey() name = url_class.GetName() @@ -1046,7 +1062,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): # absent irrelevant variables, do we have the exact same object already in? 
name = new_url_class.GetName() - match_key = new_url_class.GetMatchKey() + match_key = new_url_class.GetClassKey() example_url = new_url_class.GetExampleURL() dupe_url_classes = [ url_class.Duplicate() for url_class in self._url_classes ] @@ -1054,7 +1070,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): for dupe_url_class in dupe_url_classes: dupe_url_class.SetName( name ) - dupe_url_class.SetMatchKey( match_key ) + dupe_url_class.SetClassKey( match_key ) dupe_url_class.SetExampleURL( example_url ) if dupe_url_class.DumpToString() == new_url_class.DumpToString(): @@ -1095,7 +1111,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): for url_class in new_url_classes: - url_class.RegenerateMatchKey() + url_class.RegenerateClassKey() for parser in new_parsers: @@ -1169,6 +1185,8 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): self._url_class_keys_to_parser_keys.update( new_url_class_keys_to_parser_keys ) + self._CleanURLClassKeysToParserKeys() + # let's do a trytolink just in case there are loose ends due to some dupe being discarded earlier (e.g. url match is new, but parser was not). 
@@ -1220,7 +1238,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): else: - url_class_key = url_class.GetMatchKey() + url_class_key = url_class.GetClassKey() if url_class_key in self._url_class_keys_to_display: @@ -1700,9 +1718,20 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): default_gugs = ClientDefaults.GetDefaultGUGs() + existing_gug_names_to_keys = { gug.GetName() : gug.GetGUGKey() for gug in self._gugs } + for gug in default_gugs: - gug.RegenerateGUGKey() + gug_name = gug.GetName() + + if gug_name in existing_gug_names_to_keys: + + gug.SetGUGKey( existing_gug_names_to_keys[ gug_name ] ) + + else: + + gug.RegenerateGUGKey() + existing_gugs = list( self._gugs ) @@ -1722,9 +1751,20 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): default_parsers = ClientDefaults.GetDefaultParsers() + existing_parser_names_to_keys = { parser.GetName() : parser.GetParserKey() for parser in self._parsers } + for parser in default_parsers: - parser.RegenerateParserKey() + name = parser.GetName() + + if name in existing_parser_names_to_keys: + + parser.SetParserKey( existing_parser_names_to_keys[ name ] ) + + else: + + parser.RegenerateParserKey() + existing_parsers = list( self._parsers ) @@ -1744,9 +1784,25 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): default_url_classes = ClientDefaults.GetDefaultURLClasses() + existing_class_names_to_keys = { url_class.GetName() : url_class.GetClassKey() for url_class in self._url_classes } + for url_class in default_url_classes: - url_class.RegenerateMatchKey() + name = url_class.GetName() + + if name in existing_class_names_to_keys: + + url_class.SetClassKey( existing_class_names_to_keys[ name ] ) + + else: + + url_class.RegenerateClassKey() + + + + for url_class in default_url_classes: + + url_class.RegenerateClassKey() existing_url_classes = list( self._url_classes ) @@ -1762,7 +1818,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): 
with self._lock: - url_class_key = url_class.GetMatchKey() + url_class_key = url_class.GetClassKey() parser_key = parser.GetParserKey() self._url_class_keys_to_parser_keys[ url_class_key ] = parser_key @@ -1899,7 +1955,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): deletee_url_class_keys = set() - for ( url_class_key, parser_key ) in list(self._url_class_keys_to_parser_keys.items()): + for ( url_class_key, parser_key ) in self._url_class_keys_to_parser_keys.items(): if parser_key not in parser_keys: @@ -1926,8 +1982,8 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): # by default, we will show post urls - old_post_url_class_keys = { url_class.GetMatchKey() for url_class in self._url_classes if url_class.IsPostURL() } - post_url_class_keys = { url_class.GetMatchKey() for url_class in url_classes if url_class.IsPostURL() } + old_post_url_class_keys = { url_class.GetClassKey() for url_class in self._url_classes if url_class.IsPostURL() } + post_url_class_keys = { url_class.GetClassKey() for url_class in url_classes if url_class.IsPostURL() } added_post_url_class_keys = post_url_class_keys.difference( old_post_url_class_keys ) @@ -1945,7 +2001,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): # delete orphans - url_class_keys = { url_class.GetMatchKey() for url_class in url_classes } + url_class_keys = { url_class.GetClassKey() for url_class in url_classes } self._url_class_keys_to_display.intersection_update( url_class_keys ) @@ -1960,7 +2016,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): for ( url_class_original, url_class_api ) in url_class_api_pairs: - url_class_key = url_class_original.GetMatchKey() + url_class_key = url_class_original.GetClassKey() if url_class_key in self._url_class_keys_to_parser_keys: @@ -1982,6 +2038,8 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): self._url_class_keys_to_parser_keys.update( url_class_keys_to_parser_keys ) + 
self._CleanURLClassKeysToParserKeys() + self._SetDirty() @@ -2021,6 +2079,8 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): self._url_class_keys_to_parser_keys.update( new_url_class_keys_to_parser_keys ) + self._CleanURLClassKeysToParserKeys() + self._SetDirty() @@ -2096,11 +2156,16 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): for url_class in url_classes: + if url_class in api_pair_unparsable_url_classes: + + continue + + if url_class.Matches( example_url ): # we have a match. this is the 'correct' match for this example url, and we should not search any more, so we break below - url_class_key = url_class.GetMatchKey() + url_class_key = url_class.GetClassKey() parsable = url_class.IsParsable() linkable = url_class_key not in existing_url_class_keys_to_parser_keys and url_class_key not in new_url_class_keys_to_parser_keys @@ -2125,7 +2190,7 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ): continue - url_class_key = url_class.GetMatchKey() + url_class_key = url_class.GetClassKey() if url_class_key in existing_url_class_keys_to_parser_keys: @@ -2547,6 +2612,11 @@ class GalleryURLGenerator( HydrusSerialisable.SerialisableBaseNamed ): return ( self._url_template, self._replacement_phrase, self._search_terms_separator, self._example_search_text ) + def SetGUGKey( self, gug_key: bytes ): + + self._gallery_url_generator_key = gug_key + + def SetGUGKeyAndName( self, gug_key_and_name ): ( gug_key, name ) = gug_key_and_name @@ -2744,6 +2814,11 @@ class NestedGalleryURLGenerator( HydrusSerialisable.SerialisableBaseNamed ): self._gug_keys_and_names = good_gug_keys_and_names + def SetGUGKey( self, gug_key: bytes ): + + self._gallery_url_generator_key = gug_key + + def SetGUGKeyAndName( self, gug_key_and_name ): ( gug_key, name ) = gug_key_and_name @@ -3141,7 +3216,7 @@ class URLClass( HydrusSerialisable.SerialisableBaseNamed ): return ( self._gallery_index_type, self._gallery_index_identifier, 
self._gallery_index_delta ) - def GetMatchKey( self ): + def GetClassKey( self ): return self._url_class_key @@ -3359,7 +3434,7 @@ class URLClass( HydrusSerialisable.SerialisableBaseNamed ): return is_a_direct_file_page or is_a_single_file_post_page - def RegenerateMatchKey( self ): + def RegenerateClassKey( self ): self._url_class_key = HydrusData.GenerateKey() @@ -3369,7 +3444,7 @@ class URLClass( HydrusSerialisable.SerialisableBaseNamed ): self._example_url = example_url - def SetMatchKey( self, match_key ): + def SetClassKey( self, match_key ): self._url_class_key = match_key diff --git a/hydrus/client/networking/ClientNetworkingJobs.py b/hydrus/client/networking/ClientNetworkingJobs.py index c197f122..2ac727d1 100644 --- a/hydrus/client/networking/ClientNetworkingJobs.py +++ b/hydrus/client/networking/ClientNetworkingJobs.py @@ -689,68 +689,10 @@ class NetworkJob( object ): if self._ObeysBandwidth(): - result = self.engine.bandwidth_manager.TryToStartRequest( self._network_contexts ) - - if result: - - self._bandwidth_tracker.ReportRequestUsed() - - else: - - ( bandwidth_waiting_duration, bandwidth_network_context ) = self.engine.bandwidth_manager.GetWaitingEstimateAndContext( self._network_contexts ) - - will_override = self._bandwidth_manual_override_delayed_timestamp is not None - - override_coming_first = False - - if will_override: - - override_waiting_duration = self._bandwidth_manual_override_delayed_timestamp - HydrusData.GetNow() - - override_coming_first = override_waiting_duration < bandwidth_waiting_duration - - - just_now_threshold = 2 - - if override_coming_first: - - waiting_duration = override_waiting_duration - - waiting_str = 'overriding bandwidth ' + HydrusData.TimestampToPrettyTimeDelta( self._bandwidth_manual_override_delayed_timestamp, just_now_string = 'imminently', just_now_threshold = just_now_threshold ) - - else: - - waiting_duration = bandwidth_waiting_duration - - waiting_str = 'bandwidth free ' + 
HydrusData.TimestampToPrettyTimeDelta( HydrusData.GetNow() + waiting_duration, just_now_string = 'imminently', just_now_threshold = just_now_threshold ) - - - waiting_str += '\u2026 (' + bandwidth_network_context.ToHumanString() + ')' - - self._status_text = waiting_str - - if waiting_duration > 1200: - - self._Sleep( 30 ) - - elif waiting_duration > 120: - - self._Sleep( 10 ) - - elif waiting_duration > 10: - - self._Sleep( 1 ) - - - - return result + return self.engine.bandwidth_manager.CanDoWork( self._network_contexts ) else: - self._bandwidth_tracker.ReportRequestUsed() - - self.engine.bandwidth_manager.ReportRequestUsed( self._network_contexts ) - return True @@ -1404,6 +1346,79 @@ class NetworkJob( object ): return True + def TryToStartBandwidth( self ): + + with self._lock: + + if self._ObeysBandwidth(): + + result = self.engine.bandwidth_manager.TryToStartRequest( self._network_contexts ) + + if result: + + self._bandwidth_tracker.ReportRequestUsed() + + else: + + ( bandwidth_waiting_duration, bandwidth_network_context ) = self.engine.bandwidth_manager.GetWaitingEstimateAndContext( self._network_contexts ) + + will_override = self._bandwidth_manual_override_delayed_timestamp is not None + + override_coming_first = False + + if will_override: + + override_waiting_duration = self._bandwidth_manual_override_delayed_timestamp - HydrusData.GetNow() + + override_coming_first = override_waiting_duration < bandwidth_waiting_duration + + + just_now_threshold = 2 + + if override_coming_first: + + waiting_duration = override_waiting_duration + + waiting_str = 'overriding bandwidth ' + HydrusData.TimestampToPrettyTimeDelta( self._bandwidth_manual_override_delayed_timestamp, just_now_string = 'imminently', just_now_threshold = just_now_threshold ) + + else: + + waiting_duration = bandwidth_waiting_duration + + waiting_str = 'bandwidth free ' + HydrusData.TimestampToPrettyTimeDelta( HydrusData.GetNow() + waiting_duration, just_now_string = 'imminently', 
just_now_threshold = just_now_threshold ) + + + waiting_str += '\u2026 (' + bandwidth_network_context.ToHumanString() + ')' + + self._status_text = waiting_str + + if waiting_duration > 1200: + + self._Sleep( 30 ) + + elif waiting_duration > 120: + + self._Sleep( 10 ) + + elif waiting_duration > 10: + + self._Sleep( 1 ) + + + + return result + + else: + + self._bandwidth_tracker.ReportRequestUsed() + + self.engine.bandwidth_manager.ReportRequestUsed( self._network_contexts ) + + return True + + + + def WaitUntilDone( self ): while True: diff --git a/hydrus/core/HydrusAudioHandling.py b/hydrus/core/HydrusAudioHandling.py index c6950897..8e7d4a1a 100644 --- a/hydrus/core/HydrusAudioHandling.py +++ b/hydrus/core/HydrusAudioHandling.py @@ -68,6 +68,8 @@ def VideoHasAudio( path ): sbp_kwargs = HydrusData.GetSubprocessKWArgs() + HydrusData.CheckProgramIsNotShuttingDown() + try: process = subprocess.Popen( cmd, bufsize = 65536, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) diff --git a/hydrus/core/HydrusConstants.py b/hydrus/core/HydrusConstants.py index ad53a40c..8438feeb 100644 --- a/hydrus/core/HydrusConstants.py +++ b/hydrus/core/HydrusConstants.py @@ -73,7 +73,7 @@ options = {} # Misc NETWORK_VERSION = 18 -SOFTWARE_VERSION = 400 +SOFTWARE_VERSION = 401 CLIENT_API_VERSION = 12 SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 ) diff --git a/hydrus/core/HydrusData.py b/hydrus/core/HydrusData.py index 25034a08..1b7e6d4d 100644 --- a/hydrus/core/HydrusData.py +++ b/hydrus/core/HydrusData.py @@ -57,6 +57,13 @@ def CalculateScoreFromRating( count, rating ): return score +def CheckProgramIsNotShuttingDown(): + + if HG.model_shutdown: + + raise HydrusExceptions.ShutdownException( 'Application is shutting down!' 
) + + def CleanRunningFile( db_path, instance ): # just to be careful @@ -718,7 +725,7 @@ def GetSubprocessEnv(): remove_if_hydrus_base_dir = [ 'QT_PLUGIN_PATH', 'QML2_IMPORT_PATH', 'SSL_CERT_FILE' ] - hydrus_base_dir = HG.client_controller.GetDBDir() + hydrus_base_dir = HG.controller.GetDBDir() for key in remove_if_hydrus_base_dir: diff --git a/hydrus/core/HydrusFlashHandling.py b/hydrus/core/HydrusFlashHandling.py index b13828a6..8473315a 100644 --- a/hydrus/core/HydrusFlashHandling.py +++ b/hydrus/core/HydrusFlashHandling.py @@ -1,6 +1,7 @@ from hydrus.external import hexagonitswfheader from hydrus.core import HydrusConstants as HC from hydrus.core import HydrusData +from hydrus.core import HydrusThreading import os import subprocess import time @@ -49,6 +50,8 @@ def RenderPageToFile( path, temp_path, page_index ): sbp_kwargs = HydrusData.GetSubprocessKWArgs() + HydrusData.CheckProgramIsNotShuttingDown() + p = subprocess.Popen( cmd, **sbp_kwargs ) while p.poll() is None: @@ -63,5 +66,5 @@ def RenderPageToFile( path, temp_path, page_index ): time.sleep( 0.5 ) - p.communicate() + HydrusThreading.SubprocessCommunicate( p ) diff --git a/hydrus/core/HydrusNATPunch.py b/hydrus/core/HydrusNATPunch.py index 8472839d..2f03a8d2 100644 --- a/hydrus/core/HydrusNATPunch.py +++ b/hydrus/core/HydrusNATPunch.py @@ -2,6 +2,7 @@ from hydrus.core import HydrusConstants as HC from hydrus.core import HydrusData from hydrus.core import HydrusExceptions from hydrus.core import HydrusText +from hydrus.core import HydrusThreading import os import socket import subprocess @@ -35,11 +36,13 @@ def GetExternalIP(): sbp_kwargs = HydrusData.GetSubprocessKWArgs( text = True ) + HydrusData.CheckProgramIsNotShuttingDown() + p = subprocess.Popen( cmd, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) HydrusData.WaitForProcessToFinish( p, 30 ) - ( stdout, stderr ) = p.communicate() + ( stdout, stderr ) = HydrusThreading.SubprocessCommunicate( p ) if stderr 
is not None and len( stderr ) > 0: @@ -83,11 +86,13 @@ def AddUPnPMapping( internal_client, internal_port, external_port, protocol, des sbp_kwargs = HydrusData.GetSubprocessKWArgs( text = True ) + HydrusData.CheckProgramIsNotShuttingDown() + p = subprocess.Popen( cmd, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) HydrusData.WaitForProcessToFinish( p, 30 ) - ( stdout, stderr ) = p.communicate() + ( stdout, stderr ) = HydrusThreading.SubprocessCommunicate( p ) if 'x.x.x.x:' + str( external_port ) + ' TCP is redirected to internal ' + internal_client + ':' + str( internal_port ) in stdout: @@ -117,11 +122,13 @@ def GetUPnPMappings(): sbp_kwargs = HydrusData.GetSubprocessKWArgs( text = True ) + HydrusData.CheckProgramIsNotShuttingDown() + p = subprocess.Popen( cmd, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) HydrusData.WaitForProcessToFinish( p, 30 ) - ( stdout, stderr ) = p.communicate() + ( stdout, stderr ) = HydrusThreading.SubprocessCommunicate( p ) if stderr is not None and len( stderr ) > 0: @@ -213,11 +220,13 @@ def RemoveUPnPMapping( external_port, protocol ): sbp_kwargs = HydrusData.GetSubprocessKWArgs( text = True ) + HydrusData.CheckProgramIsNotShuttingDown() + p = subprocess.Popen( cmd, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) HydrusData.WaitForProcessToFinish( p, 30 ) - ( stdout, stderr ) = p.communicate() + ( stdout, stderr ) = HydrusThreading.SubprocessCommunicate( p ) if stderr is not None and len( stderr ) > 0: raise Exception( 'Problem while trying to remove UPnP mapping:' + os.linesep * 2 + stderr ) diff --git a/hydrus/core/HydrusPaths.py b/hydrus/core/HydrusPaths.py index 9b3256a3..684f0f73 100644 --- a/hydrus/core/HydrusPaths.py +++ b/hydrus/core/HydrusPaths.py @@ -469,9 +469,11 @@ def LaunchDirectory( path ): preexec_fn = getattr( os, 'setsid', None ) + HydrusData.CheckProgramIsNotShuttingDown() + process 
= subprocess.Popen( cmd, preexec_fn = preexec_fn, **sbp_kwargs ) - process.communicate() + HydrusThreading.SubprocessCommunicate( process ) @@ -524,9 +526,11 @@ def LaunchFile( path, launch_path = None ): sbp_kwargs = HydrusData.GetSubprocessKWArgs( hide_terminal = hide_terminal, text = True ) + HydrusData.CheckProgramIsNotShuttingDown() + process = subprocess.Popen( cmd, preexec_fn = preexec_fn, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) - ( stdout, stderr ) = process.communicate() + ( stdout, stderr ) = HydrusThreading.SubprocessCommunicate( process ) if HG.subprocess_report_mode: @@ -850,9 +854,11 @@ def OpenFileLocation( path ): sbp_kwargs = HydrusData.GetSubprocessKWArgs( hide_terminal = False ) + HydrusData.CheckProgramIsNotShuttingDown() + process = subprocess.Popen( cmd, **sbp_kwargs ) - process.communicate() + HydrusThreading.SubprocessCommunicate( process ) thread = threading.Thread( target = do_it ) diff --git a/hydrus/core/HydrusSerialisable.py b/hydrus/core/HydrusSerialisable.py index 56041758..944a67fe 100644 --- a/hydrus/core/HydrusSerialisable.py +++ b/hydrus/core/HydrusSerialisable.py @@ -105,6 +105,7 @@ SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LOG_CONTAINER = 86 SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_HEADER = 87 SERIALISABLE_TYPE_SUBSCRIPTION = 88 SERIALISABLE_TYPE_FILE_SEED_CACHE_STATUS = 89 +SERIALISABLE_TYPE_SUBSCRIPTION_CONTAINER = 90 SERIALISABLE_TYPES_TO_OBJECT_TYPES = {} diff --git a/hydrus/core/HydrusThreading.py b/hydrus/core/HydrusThreading.py index 3acb84b0..a71d4364 100644 --- a/hydrus/core/HydrusThreading.py +++ b/hydrus/core/HydrusThreading.py @@ -3,6 +3,7 @@ import collections from hydrus.core import HydrusExceptions import queue import random +import subprocess import threading import time import traceback @@ -69,6 +70,11 @@ def GetThreadInfo( thread = None ): def IsThreadShuttingDown(): + if HG.emergency_exit: + + return True + + me = threading.current_thread() if isinstance( me, DAEMON 
): @@ -96,6 +102,39 @@ def ShutdownThread( thread ): thread_info[ 'shutting_down' ] = True +def SubprocessCommunicate( process: subprocess.Popen ): + + def do_test(): + + if HG.model_shutdown: + + try: + + process.kill() + + except: + + pass + + + raise HydrusExceptions.ShutdownException( 'Application is shutting down!' ) + + + + do_test() + + while True: + + try: + + return process.communicate( timeout = 10 ) + + except subprocess.TimeoutExpired: + + do_test() + + + class DAEMON( threading.Thread ): def __init__( self, controller, name ): diff --git a/hydrus/core/HydrusVideoHandling.py b/hydrus/core/HydrusVideoHandling.py index 65707ceb..8a06b7e3 100644 --- a/hydrus/core/HydrusVideoHandling.py +++ b/hydrus/core/HydrusVideoHandling.py @@ -3,6 +3,7 @@ from hydrus.core import HydrusConstants as HC from hydrus.core import HydrusData from hydrus.core import HydrusExceptions from hydrus.core import HydrusText +from hydrus.core import HydrusThreading import numpy import os import re @@ -44,6 +45,8 @@ def GetFFMPEGVersion(): cmd = [ FFMPEG_PATH, '-version' ] + HydrusData.CheckProgramIsNotShuttingDown() + try: sbp_kwargs = HydrusData.GetSubprocessKWArgs( text = True ) @@ -61,7 +64,7 @@ def GetFFMPEGVersion(): return 'unable to execute ffmpeg at path "{}"'.format( FFMPEG_PATH ) - ( stdout, stderr ) = process.communicate() + ( stdout, stderr ) = HydrusThreading.SubprocessCommunicate( process ) del process @@ -135,6 +138,8 @@ def GetFFMPEGInfoLines( path, count_frames_manually = False, only_first_second = sbp_kwargs = HydrusData.GetSubprocessKWArgs() + HydrusData.CheckProgramIsNotShuttingDown() + try: process = subprocess.Popen( cmd, bufsize = 10**5, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs ) @@ -168,7 +173,7 @@ def GetFFMPEGInfoLines( path, count_frames_manually = False, only_first_second = raise FileNotFoundError( 'Cannot interact with video because FFMPEG not found--are you sure it is installed? 
Full error: ' + str( e ) )
-    ( stdout, stderr ) = process.communicate()
+    ( stdout, stderr ) = HydrusThreading.SubprocessCommunicate( process )
     data_bytes = stderr
@@ -793,6 +798,8 @@ class VideoRendererFFMPEG( object ):
         sbp_kwargs = HydrusData.GetSubprocessKWArgs()
         
+        HydrusData.CheckProgramIsNotShuttingDown()
+        
         try:
             
             self.process = subprocess.Popen( cmd, bufsize = self.bufsize, stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE, **sbp_kwargs )
diff --git a/hydrus/test/TestClientNetworking.py b/hydrus/test/TestClientNetworking.py
index ccf2d5b8..e31a12e0 100644
--- a/hydrus/test/TestClientNetworking.py
+++ b/hydrus/test/TestClientNetworking.py
@@ -523,25 +523,25 @@ class TestNetworkingJob( unittest.TestCase ):
         job = self._GetJob()
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         job.engine.bandwidth_manager.ReportDataUsed( [ DOMAIN_NETWORK_CONTEXT ], 50 )
         job.engine.bandwidth_manager.SetRules( DOMAIN_NETWORK_CONTEXT, RESTRICTIVE_DATA_RULES )
         
-        self.assertEqual( job.BandwidthOK(), False )
+        self.assertEqual( job.TryToStartBandwidth(), False )
         
         #
         
         job = self._GetJob( for_login = True )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         job.engine.bandwidth_manager.ReportDataUsed( [ DOMAIN_NETWORK_CONTEXT ], 50 )
         job.engine.bandwidth_manager.SetRules( DOMAIN_NETWORK_CONTEXT, RESTRICTIVE_DATA_RULES )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
     def test_bandwidth_ok( self ):
@@ -558,11 +558,11 @@ class TestNetworkingJob( unittest.TestCase ):
         job.engine.bandwidth_manager.ReportDataUsed( [ DOMAIN_NETWORK_CONTEXT ], 50 )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         job.engine.bandwidth_manager.SetRules( DOMAIN_NETWORK_CONTEXT, PERMISSIVE_DATA_RULES )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         #
@@ -570,11 +570,11 @@ class TestNetworkingJob( unittest.TestCase ):
         job.engine.bandwidth_manager.ReportDataUsed( [ DOMAIN_NETWORK_CONTEXT ], 50 )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         job.engine.bandwidth_manager.SetRules( DOMAIN_NETWORK_CONTEXT, PERMISSIVE_DATA_RULES )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
     def test_bandwidth_reported( self ):
@@ -585,7 +585,7 @@ class TestNetworkingJob( unittest.TestCase ):
         job = self._GetJob()
         
-        job.BandwidthOK()
+        job.TryToStartBandwidth()
         
         job.Start()
@@ -695,25 +695,25 @@ class TestNetworkingJobHydrus( unittest.TestCase ):
         job = self._GetJob()
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         job.engine.bandwidth_manager.ReportDataUsed( [ HYDRUS_NETWORK_CONTEXT ], 50 )
         job.engine.bandwidth_manager.SetRules( HYDRUS_NETWORK_CONTEXT, RESTRICTIVE_DATA_RULES )
         
-        self.assertEqual( job.BandwidthOK(), False )
+        self.assertEqual( job.TryToStartBandwidth(), False )
         
         #
         
         job = self._GetJob( for_login = True )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         job.engine.bandwidth_manager.ReportDataUsed( [ HYDRUS_NETWORK_CONTEXT ], 50 )
         job.engine.bandwidth_manager.SetRules( HYDRUS_NETWORK_CONTEXT, RESTRICTIVE_DATA_RULES )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
     def test_bandwidth_ok( self ):
@@ -730,11 +730,11 @@ class TestNetworkingJobHydrus( unittest.TestCase ):
         job.engine.bandwidth_manager.ReportDataUsed( [ HYDRUS_NETWORK_CONTEXT ], 50 )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         job.engine.bandwidth_manager.SetRules( HYDRUS_NETWORK_CONTEXT, PERMISSIVE_DATA_RULES )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         #
@@ -742,11 +742,11 @@ class TestNetworkingJobHydrus( unittest.TestCase ):
         job.engine.bandwidth_manager.ReportDataUsed( [ HYDRUS_NETWORK_CONTEXT ], 50 )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
         job.engine.bandwidth_manager.SetRules( HYDRUS_NETWORK_CONTEXT, PERMISSIVE_DATA_RULES )
         
-        self.assertEqual( job.BandwidthOK(), True )
+        self.assertEqual( job.TryToStartBandwidth(), True )
         
     def test_bandwidth_reported( self ):
diff --git a/hydrus/test/TestHydrusServer.py b/hydrus/test/TestHydrusServer.py
index a4d22e0f..4b5882ee 100644
--- a/hydrus/test/TestHydrusServer.py
+++ b/hydrus/test/TestHydrusServer.py
@@ -22,13 +22,13 @@ import twisted.internet.ssl
 from hydrus.core import HydrusData
 from hydrus.core import HydrusGlobals as HG
 
-with open( os.path.join( HC.STATIC_DIR, 'hydrus.png' ), 'rb' ) as f:
+with open( os.path.join( HC.STATIC_DIR, 'hydrus.png' ), 'rb' ) as f_g:
     
-    EXAMPLE_FILE = f.read()
+    EXAMPLE_FILE = f_g.read()
     
-with open( os.path.join( HC.STATIC_DIR, 'hydrus_small.png' ), 'rb' ) as f:
+with open( os.path.join( HC.STATIC_DIR, 'hydrus_small.png' ), 'rb' ) as f_g:
     
-    EXAMPLE_THUMBNAIL = f.read()
+    EXAMPLE_THUMBNAIL = f_g.read()
     
 class TestServer( unittest.TestCase ):
diff --git a/static/default/parsers/derpibooru gallery page api parser.png b/static/default/parsers/derpibooru gallery page api parser.png
index 172e6ebf..d91606f2 100644
Binary files a/static/default/parsers/derpibooru gallery page api parser.png and b/static/default/parsers/derpibooru gallery page api parser.png differ
diff --git a/static/default/parsers/e621 gallery page parser.png b/static/default/parsers/e621 gallery page parser.png
index 578b82db..43e4f19a 100644
Binary files a/static/default/parsers/e621 gallery page parser.png and b/static/default/parsers/e621 gallery page parser.png differ
diff --git a/static/default/parsers/nitter tweet parser (video from koto.reisen).png b/static/default/parsers/nitter tweet parser (video from koto.reisen).png
index 13eca21c..b6e36163 100644
Binary files a/static/default/parsers/nitter tweet parser (video from koto.reisen).png and b/static/default/parsers/nitter tweet parser (video from koto.reisen).png differ
diff --git a/static/default/parsers/nitter tweet parser.png b/static/default/parsers/nitter tweet parser.png
index 2fe92745..923a926e 100644
Binary files a/static/default/parsers/nitter tweet parser.png and b/static/default/parsers/nitter tweet parser.png differ
diff --git a/static/default/url_classes/e621 gallery page (alternate format).png b/static/default/url_classes/e621 gallery page (alternate format).png
new file mode 100644
index 00000000..b1e9b187
Binary files /dev/null and b/static/default/url_classes/e621 gallery page (alternate format).png differ
diff --git a/static/github.png b/static/github.png
new file mode 100644
index 00000000..ccfb47fd
Binary files /dev/null and b/static/github.png differ
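An aside on the first hunk above, which swaps a bare `process.communicate()` for `HydrusThreading.SubprocessCommunicate( process )`: I have not reproduced hydrus's actual implementation here, but the general pattern such a wrapper follows is to poll `communicate()` with a short timeout so the wait can be interrupted (e.g. on program shutdown) instead of blocking forever on a hung child. The function name, the `SHUTDOWN_EVENT` flag, and the poll interval below are illustrative assumptions, not hydrus's real API:

```python
import subprocess
import threading

# Hypothetical stand-in for a global "the program is shutting down" flag.
SHUTDOWN_EVENT = threading.Event()

def subprocess_communicate( process: subprocess.Popen, poll_seconds: float = 1.0 ):
    """Sketch of a shutdown-aware communicate(): wait in short slices
    rather than one unbounded block, checking the shutdown flag between
    slices. After TimeoutExpired, communicate() may safely be retried."""
    
    while True:
        
        try:
            
            return process.communicate( timeout = poll_seconds )
            
        except subprocess.TimeoutExpired:
            
            if SHUTDOWN_EVENT.is_set():
                
                # shutting down: kill the child and collect what it produced
                process.kill()
                
                return process.communicate()
```

The point of the pattern is that the calling thread wakes up every `poll_seconds` regardless of what the child is doing, so a wedged ffmpeg (or similar) can no longer pin a worker thread past program exit.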
-
+
contact and links
-Please send me all your bug reports, questions, ideas, and comments. It is always interesting to see how other people are using my software and what they generally think of it. Most of the changes every week are suggested by users.
-You can contact me by email, twitter, tumblr, discord, or the 8kun/Endchan boards--I do not mind which. I'm not active on github (I use it mostly as a mirror of my home dev environment) and do not check its messages or issues. I often like to spend a day or so to think before replying to non-urgent messages, but I do try to reply to everything.
-I am on the discord on Saturday afternoon, USA time, and Wednesday after I put the release out. If that is not a good time for you, feel free to leave me a DM and I will get to you when I can. There are also plenty of other hydrus users who idle who would be happy to help with any sort of support question.
+I welcome all your bug reports, questions, ideas, and comments. It is always interesting to see how other people are using my software and what they generally think of it. Most of the changes every week are suggested by users.
+You can contact me by email, twitter, tumblr, discord, or the 8kun/Endchan boards--I do not mind which. Please know that I have difficulty with social media, and while I try to reply to all messages, it sometimes takes me a while to catch up.
+The Github Issue Tracker was turned off for some time, as it did not fit my workflow and I could not keep up, but it is now running again, managed by a team of volunteer users. Please feel free to submit feature requests there if you are comfortable with Github. I am not socially active on Github, and it is mostly just a mirror of my home dev environment, where I work alone.
+I am on the discord on Saturday afternoon, USA time, if you would like to talk live, and briefly on Wednesday after I put the release out. If that is not a good time for you, feel free to leave me a DM and I will get to you when I can. There are also plenty of other hydrus users idling there who would be happy to help with any sort of support question.
I delete all tweets and resolved email conversations after three months. So, if you think you are waiting for a reply, or I said I was going to work on something you care about and seem to have forgotten, please do nudge me.
-If you have a problem with something on someone else's server, please, do not come to me, as I cannot help. If your ex-gf's nudes have leaked onto the internet or you just find something terribly offensive, I cannot help you at all.
Anyway: