Version 364

Hydrus Network Developer 2019-08-14 19:40:48 -05:00
parent 144c24e93c
commit 6041b27035
28 changed files with 2033 additions and 1117 deletions

View File

@ -11,6 +11,7 @@ The client can do quite a lot! Please check out the help inside the release or [
* [homepage](http://hydrusnetwork.github.io/hydrus/)
* [email](mailto:hydrus.admin@gmail.com)
* [8chan board](https://8ch.net/hydrus/index.html)
* [endchan bunker](https://endchan.net/hydrus/)
* [twitter](https://twitter.com/hydrusnetwork)
* [tumblr](http://hydrus.tumblr.com/)
* [discord](https://discord.gg/3H8UTpb)

View File

@ -121,6 +121,8 @@ except Exception as e:
sys.exit( 1 )
controller = None
with HydrusLogger.HydrusLogger( db_dir, 'client' ) as logger:
try:
@ -162,14 +164,10 @@ with HydrusLogger.HydrusLogger( db_dir, 'client' ) as logger:
HG.view_shutdown = True
HG.model_shutdown = True
try:
if controller is not None:
controller.pubimmediate( 'wake_daemons' )
except:
HydrusData.Print( traceback.format_exc() )
reactor.callFromThread( reactor.stop )

View File

@ -121,6 +121,8 @@ except Exception as e:
sys.exit( 1 )
controller = None
with HydrusLogger.HydrusLogger( db_dir, 'client' ) as logger:
try:
@ -162,14 +164,10 @@ with HydrusLogger.HydrusLogger( db_dir, 'client' ) as logger:
HG.view_shutdown = True
HG.model_shutdown = True
try:
if controller is not None:
controller.pubimmediate( 'wake_daemons' )
except:
HydrusData.Print( traceback.format_exc() )
if not HG.twisted_is_broke:

View File

@ -8,6 +8,43 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 364</h3></li>
<ul>
<li>repo processing makeover:</li>
<li>repository processing is no longer a monolithic atomic database job! it now loads update files at a 'higher' level and streams packets of work to the database without occupying it continuously! hence, repository processing no longer creates a 'modal' popup that blocks the client--you can keep browsing while it works, and it won't hang up the client! (a rough illustrative sketch of this streaming pattern follows this changelog excerpt)</li>
<li>this new system runs on some different timings. in this first version, it will have lower rows/s in some situations and higher in others. please send me feedback if your processing is running significantly slower than before and I will tweak how this new routine decides to work and take breaks</li>
<li>multiple repos can now sync at once, ha ha</li>
<li>shutdown repository processing now states the name of the service being processed and x/y update progress in the exit splash screen</li>
<li>the process that re-syncs all open thumbnails' tags after repository processing now works regardless of the number of thumbnails open and runs asynchronously, streaming in new tag managers in a way that will not block the main thread</li>
<li>'process now' button on review services is now available to all users and has a reworded warning text</li>
<li>the 1 hour limit on a repo processing job is now gone</li>
<li>pre-processing disk cache population is tentatively gone--let's see how it goes</li>
<li>the 10s db transaction time is raised to 30s. this speeds some things up, including the new repo processing, but if a crash occurs, hydrus may now lose up to 30s of changes before the crash</li>
<li>.</li>
<li>the rest:</li>
<li>users in advanced mode now have an 'OR' button on their search autocomplete input dropdown panels. this button opens a new panel that plugs into prkc's neat raw-text -> CNF parser, which allows you to enter raw-text searches such as '( blue eyes and blonde hair ) or ( green eyes and red hair )' into hydrus</li>
<li>fixed the silent audio track detection code, which was handling a data type incorrectly</li>
<li>improved the silent audio track detection code to handle another type of silence, thank you to the users who submitted examples--please send more false positives if you find them</li>
<li>fixed an issue where thumbnails that underwent a file metadata regeneration were not appearing to receive content updates (such as archive, or new tags/ratings) until a subsequent reload showed they had happened silently. this is a long-time bug, but the big whack of files added to the files maintenance system last week revealed it</li>
<li>the 'pause ui update cycles while main gui is minimised' change from last week now works on a per-frame basis. if the main gui is minimised, media viewers that are up will still run videos and so on, and vice versa</li>
<li>a few more ui events (e.g. statusbar & menubar updates) no longer occur while the client is minimised</li>
<li>duplicate processing pages will now only initialise and refresh their maintenance and dupe count numbers while they are the current page. this should speed up session load for heavy users and those with multiple duplicate pages open</li>
<li>gave the new autocomplete 'should broadcast the current text' tests another pass--it should now more reliably broadcast 'blue eyes' during the up-to-200ms window where the stub/full results for, say, 'blue ey' are still showing</li>
<li>fixed an accidental logical error that meant 'character:'-style autocomplete queries could kick off a full search and give some odd results, rather than just the 'character:*anything*' predicate. a similar check is added to the 'write' autocomplete</li>
<li>fixed an issue with autocomplete not clearing its list properly, defaulting back to the last cached results, when it wants to fetch system preds but cannot due to a busy db</li>
<li>fixed GET-argument gallery searches for search texts that include '&', '=', '/', or '?' (think 'panty_&_stocking_with_garterbelt')</li>
<li>removed the pixiv login script from the defaults--apparently they have added a captcha, so using Hydrus Companion with the Client API is now your best bet</li>
<li>the client's petition processing page will now prefer to fetch the same petition type as the last completed job, rather than always going for the top type with non-zero count</li>
<li>the client's petition processing page now has options to sort parent or sibling petitions by the left side or right--and it preserves check status!</li>
<li>the client's petition processing page now sorts tags by namespace first, then subtag</li>
<li>the client now starts, restarts, and stops port-hosted services using the same new technique as the server, increasing reliability and waiting more correctly for previous services to stop and so on</li>
<li>the client now explicitly commands its services to shut down on application close. a rare issue could sometimes leave the process alive because of a client api still hanging on to an old connection and having trouble with the shut-down db</li>
<li>the file maintenance manager will no longer spam to log during shutdown maintenance</li>
<li>sketched out first skeleton of the new unified global maintenance manager</li>
<li>improved some post-boot-error shutdown handling that was also producing small late errors on the server 'stop' command</li>
<li>added endchan bunker links to contact pages and github readme</li>
<li>updated to ffmpeg 4.2 on windows</li>
</ul>
<li><h3>version 363</h3></li>
<ul>
<li>has audio:</li>
@ -28,6 +65,7 @@
<li>added/updated unit tests for the above</li>
<li>updated help for the above</li>
<li>client api version is now 10</li>
<li>.</li>
<li>the rest:</li>
<li>system:hash and system:similar to now accept multiple hashes! so, if you have 100 md5s, you can now search for them all at once</li>
<li>the thumbnail right-click->file relationships->find similar files now works for multiple selections!</li>
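As mentioned in the version 364 repo processing item above, here is a rough, illustrative sketch of the 'stream packets of work to the database' pattern this release moves to. All names below are placeholders rather than the real hydrus API, and the real routine also widens its work/break budgets when the client is idle, as the ClientServices diff further down shows.

import time

def process_update_in_packets( rows, write_packet_to_db, work_time = 0.5, break_time = 0.5 ):
    
    # instead of one long monolithic transaction, feed the db short packets and rest between them
    rows = iter( rows )
    
    while True:
        
        packet = []
        started = time.perf_counter()
        
        for row in rows:
            
            packet.append( row )
            
            if time.perf_counter() - started > work_time:
                
                break
            
        
        if len( packet ) == 0:
            
            return # the update is exhausted
        
        write_packet_to_db( packet ) # one short db job per packet
        
        time.sleep( break_time ) # let the ui and other db jobs get a look in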

View File

@ -15,7 +15,7 @@
<p>Anyway:</p>
<ul>
<li><a href="https://hydrusnetwork.github.io/hydrus/">homepage</a></li>
<li><a href="https://8ch.net/hydrus/index.html">8chan board</a></li>
<li><a href="https://8ch.net/hydrus/index.html">8chan board</a> (<a href="https://endchan.net/hydrus/">endchan bunker</a>)</li>
<li><a href="http://hydrus.tumblr.com">tumblr</a> (<a href="http://hydrus.tumblr.com/rss">rss</a>)</li>
<li><a href="https://github.com/hydrusnetwork/hydrus/releases">new downloads</a></li>
<li><a href="https://www.mediafire.com/hydrus">old downloads</a></li>
@ -28,4 +28,4 @@
</ul>
</div>
</body>
</html>
</html>

View File

@ -955,17 +955,20 @@ class MediaResultCache( object ):
# repo sync or advanced content update occurred, so we need complete refresh
with self._lock:
def do_it( hash_ids ):
if len( self._hash_ids_to_media_results ) < 10000:
for group_of_hash_ids in HydrusData.SplitListIntoChunks( hash_ids, 256 ):
hash_ids = list( self._hash_ids_to_media_results.keys() )
if HydrusThreading.IsThreadShuttingDown():
return
for group_of_hash_ids in HydrusData.SplitListIntoChunks( hash_ids, 256 ):
hash_ids_to_tags_managers = HG.client_controller.Read( 'force_refresh_tags_managers', group_of_hash_ids )
with self._lock:
hash_ids_to_tags_managers = HG.client_controller.Read( 'force_refresh_tags_managers', group_of_hash_ids )
for ( hash_id, tags_manager ) in list(hash_ids_to_tags_managers.items()):
for ( hash_id, tags_manager ) in hash_ids_to_tags_managers.items():
if hash_id in self._hash_ids_to_media_results:
@ -974,9 +977,16 @@ class MediaResultCache( object ):
HG.client_controller.pub( 'notify_new_force_refresh_tags_gui' )
HG.client_controller.pub( 'notify_new_force_refresh_tags_gui' )
with self._lock:
hash_ids = list( self._hash_ids_to_media_results.keys() )
HG.client_controller.CallToThread( do_it, hash_ids )
def NewSiblings( self ):
@ -994,7 +1004,7 @@ class MediaResultCache( object ):
with self._lock:
for ( service_key, content_updates ) in list(service_keys_to_content_updates.items()):
for ( service_key, content_updates ) in service_keys_to_content_updates.items():
for content_update in content_updates:
@ -1016,7 +1026,7 @@ class MediaResultCache( object ):
with self._lock:
for ( service_key, service_updates ) in list(service_keys_to_service_updates.items()):
for ( service_key, service_updates ) in service_keys_to_service_updates.items():
for service_update in service_updates:
@ -1024,7 +1034,7 @@ class MediaResultCache( object ):
if action in ( HC.SERVICE_UPDATE_DELETE_PENDING, HC.SERVICE_UPDATE_RESET ):
for media_result in list(self._hash_ids_to_media_results.values()):
for media_result in self._hash_ids_to_media_results.values():
if action == HC.SERVICE_UPDATE_DELETE_PENDING:
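The force-refresh hunk at the top of this file's diff is the 'streams new tag managers' change from the changelog: snapshot the cached hash_ids while holding the lock, then read replacement tag managers in fixed-size chunks on a worker thread, re-taking the lock only briefly per chunk. A stripped-down sketch with placeholder names:

def refresh_cache_in_chunks( cache, lock, read_fresh_values, chunk_size = 256 ):
    
    with lock:
        
        keys = list( cache.keys() ) # snapshot while holding the lock
    
    for i in range( 0, len( keys ), chunk_size ):
        
        chunk = keys[ i : i + chunk_size ]
        
        fresh = read_fresh_values( chunk ) # the slow db read happens outside the lock
        
        with lock:
            
            for ( key, value ) in fresh.items():
                
                if key in cache: # the cache may have changed while we were reading
                    
                    cache[ key ] = value

The real code also checks IsThreadShuttingDown between chunks and runs the whole thing via CallToThread, so the main thread never waits on it.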

View File

@ -55,7 +55,7 @@ import traceback
if not HG.twisted_is_broke:
from twisted.internet import reactor, defer
from twisted.internet import threads, reactor, defer
class App( wx.App ):
@ -845,7 +845,6 @@ class Controller( HydrusController.HydrusController ):
self.CallBlockingToWX( self._splash, wx_code )
self.sub( self, 'ToClipboard', 'clipboard' )
self.sub( self, 'RestartClientServerService', 'restart_client_server_service' )
def InitView( self ):
@ -895,10 +894,9 @@ class Controller( HydrusController.HydrusController ):
HydrusController.HydrusController.InitView( self )
self._listening_services = {}
self._service_keys_to_connected_ports = {}
self.RestartClientServerService( CC.LOCAL_BOORU_SERVICE_KEY )
self.RestartClientServerService( CC.CLIENT_API_SERVICE_KEY )
self.RestartClientServerServices()
if not HG.no_daemons:
@ -1158,95 +1156,13 @@ class Controller( HydrusController.HydrusController ):
self._timestamps[ 'last_page_change' ] = HydrusData.GetNow()
def RestartClientServerService( self, service_key ):
def RestartClientServerServices( self ):
service = self.services_manager.GetService( service_key )
service_type = service.GetServiceType()
services = [ self.services_manager.GetService( service_key ) for service_key in ( CC.LOCAL_BOORU_SERVICE_KEY, CC.CLIENT_API_SERVICE_KEY ) ]
name = service.GetName()
services = [ service for service in services if service.GetPort() is not None ]
port = service.GetPort()
allow_non_local_connections = service.AllowsNonLocalConnections()
def TWISTEDRestartServer():
def StartServer( *args, **kwargs ):
try:
time.sleep( 1 )
if HydrusNetworking.LocalPortInUse( port ):
text = 'The client\'s {} could not start because something was already bound to port {}.'.format( name, port )
text += os.linesep * 2
text += 'This usually means another hydrus client is already running and occupying that port. It could be a previous instantiation of this client that has yet to completely shut itself down.'
text += os.linesep * 2
text += 'You can change the port this service tries to host on under services->manage services.'
HydrusData.ShowText( text )
return
from . import ClientLocalServer
if service_type == HC.LOCAL_BOORU:
twisted_server = ClientLocalServer.HydrusServiceBooru( service, allow_non_local_connections = allow_non_local_connections )
elif service_type == HC.CLIENT_API_SERVICE:
twisted_server = ClientLocalServer.HydrusServiceClientAPI( service, allow_non_local_connections = allow_non_local_connections )
listening_connection = reactor.listenTCP( port, twisted_server )
self._listening_services[ service_key ] = listening_connection
if not HydrusNetworking.LocalPortInUse( port ):
text = 'Tried to bind port ' + str( port ) + ' for the local booru, but it appeared to fail. It could be a firewall or permissions issue, or perhaps another program was quietly already using it.'
HydrusData.ShowText( text )
except Exception as e:
wx.CallAfter( HydrusData.ShowException, e )
if service_key in self._listening_services:
listening_connection = self._listening_services[ service_key ]
del self._listening_services[ service_key ]
deferred = defer.maybeDeferred( listening_connection.stopListening )
if port is not None:
deferred.addCallback( StartServer )
else:
if port is not None:
StartServer()
if HG.twisted_is_broke:
HydrusData.ShowText( 'Twisted failed to import, so could not start the {}! Please contact hydrus dev!'.format( name ) )
else:
reactor.callFromThread( TWISTEDRestartServer )
self.CallToThread( self.SetRunningTwistedServices, services )
def RestoreDatabase( self ):
@ -1367,6 +1283,100 @@ class Controller( HydrusController.HydrusController ):
def SetRunningTwistedServices( self, services ):
def TWISTEDDoIt():
def StartServices( *args, **kwargs ):
HydrusData.Print( 'starting services\u2026' )
for service in services:
service_key = service.GetServiceKey()
service_type = service.GetServiceType()
name = service.GetName()
port = service.GetPort()
allow_non_local_connections = service.AllowsNonLocalConnections()
if port is None:
continue
try:
from . import ClientLocalServer
if service_type == HC.LOCAL_BOORU:
http_factory = ClientLocalServer.HydrusServiceBooru( service, allow_non_local_connections = allow_non_local_connections )
elif service_type == HC.CLIENT_API_SERVICE:
http_factory = ClientLocalServer.HydrusServiceClientAPI( service, allow_non_local_connections = allow_non_local_connections )
self._service_keys_to_connected_ports[ service_key ] = reactor.listenTCP( port, http_factory )
if not HydrusNetworking.LocalPortInUse( port ):
HydrusData.ShowText( 'Tried to bind port {} for "{}" but it failed.'.format( port, name ) )
except Exception as e:
HydrusData.ShowText( 'Could not start "{}":'.format( name ) )
HydrusData.ShowException( e )
HydrusData.Print( 'services started' )
if len( self._service_keys_to_connected_ports ) > 0:
HydrusData.Print( 'stopping services\u2026' )
deferreds = []
for port in self._service_keys_to_connected_ports.values():
deferred = defer.maybeDeferred( port.stopListening )
deferreds.append( deferred )
self._service_keys_to_connected_ports = {}
deferred = defer.DeferredList( deferreds )
if len( services ) > 0:
deferred.addCallback( StartServices )
elif len( services ) > 0:
StartServices()
if HG.twisted_is_broke:
if True in ( service.GetPort() is not None for service in services ):
HydrusData.ShowText( 'Twisted failed to import, so could not start the local booru/client api! Please contact hydrus dev!' )
else:
threads.blockingCallFromThread( reactor, TWISTEDDoIt )
def SetServices( self, services ):
with HG.dirty_object_lock:
@ -1380,6 +1390,8 @@ class Controller( HydrusController.HydrusController ):
self.services_manager.RefreshServices()
self.RestartClientServerServices()
def ShutdownModel( self ):
@ -1411,6 +1423,8 @@ class Controller( HydrusController.HydrusController ):
self.SetRunningTwistedServices( [] )
HydrusController.HydrusController.ShutdownView( self )
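The SetRunningTwistedServices method above boils down to 'stop every old listener, and only start the new ones once all of those stops have resolved'. A minimal sketch of just that pattern; defer.maybeDeferred and defer.DeferredList are the real Twisted calls, everything else here is a placeholder:

from twisted.internet import defer

def restart_listeners( old_ports, start_new_listeners ):
    
    # stopListening may return a Deferred or a plain value; maybeDeferred normalises both
    stop_deferreds = [ defer.maybeDeferred( port.stopListening ) for port in old_ports ]
    
    # only fire the start callback once every old port has actually been released
    defer.DeferredList( stop_deferreds ).addCallback( lambda results: start_new_listeners() )

This is what the changelog means by 'waiting more correctly for previous services to stop': a restart no longer tries to bind a port the previous listener has not yet released.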

File diff suppressed because it is too large

View File

@ -1637,7 +1637,7 @@ class FilesMaintenanceManager( object ):
if i % 10 == 0:
self._controller.pub( 'splash_set_status_text', status_text )
self._controller.pub( 'splash_set_status_text', status_text, print_to_log = False )
job_key.SetVariable( 'popup_text_1', status_text )

View File

@ -291,6 +291,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
self.Bind( wx.EVT_CLOSE, self.EventClose )
self.Bind( wx.EVT_SET_FOCUS, self.EventFocus )
self.Bind( wx.EVT_TIMER, self.TIMEREventAnimationUpdate, id = ID_TIMER_ANIMATION_UPDATE )
self.Bind( wx.EVT_ICONIZE, self.EventIconize )
self.Bind( wx.EVT_MOVE, self.EventMove )
self._last_move_pub = 0.0
@ -3377,7 +3378,7 @@ class FrameGUI( ClientGUITopLevelWindows.FrameThatResizes ):
def _RefreshStatusBar( self ):
if not self._notebook or not self._statusbar:
if not self._notebook or not self._statusbar or self.IsIconized():
return
@ -4299,7 +4300,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
dlg.SetPanel( panel )
r = dlg.ShowModal()
dlg.ShowModal()
@ -4393,6 +4394,15 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
self._notebook.EventMenuFromScreenPosition( screen_position )
def EventIconize( self, event ):
if not event.IsIconized():
wx.CallAfter( self.RefreshMenu )
wx.CallAfter( self.RefreshStatusBar )
def EventMenuClose( self, event ):
menu = event.GetMenu()
@ -4449,11 +4459,6 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
def TIMEREventAnimationUpdate( self, event ):
if self.IsIconized():
return
try:
windows = list( self._animation_update_windows )
@ -4467,6 +4472,11 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
continue
if window.GetTopLevelParent().IsIconized():
continue
try:
if HG.ui_timer_profile_mode:
@ -5009,7 +5019,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
def RefreshMenu( self ):
if not self:
if not self or self.IsIconized():
return
@ -5231,11 +5241,6 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
def REPEATINGUIUpdate( self ):
if self.IsIconized():
return
for window in list( self._ui_update_windows ):
if not window:
@ -5245,6 +5250,11 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
continue
if window.GetTopLevelParent().IsIconized():
continue
try:
if HG.ui_timer_profile_mode:
@ -5550,7 +5560,7 @@ class FrameSplashStatus( object ):
self._NotifyUI()
def SetTitleText( self, text, print_to_log = True ):
def SetTitleText( self, text, clear_undertexts = True, print_to_log = True ):
if print_to_log:
@ -5560,8 +5570,12 @@ class FrameSplashStatus( object ):
with self._lock:
self._title_text = text
self._status_text = ''
self._status_subtext = ''
if clear_undertexts:
self._status_text = ''
self._status_subtext = ''
self._NotifyUI()
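The iconize changes in this file follow one pattern: bind EVT_ICONIZE, skip periodic ui work while a frame is minimised, and catch up with a refresh when it is restored. A minimal self-contained wx sketch; the class and method names here are illustrative, not the hydrus ones:

import wx

class PausableFrame( wx.Frame ):
    
    def __init__( self, parent ):
        
        wx.Frame.__init__( self, parent, title = 'example' )
        
        self.Bind( wx.EVT_ICONIZE, self._OnIconize )
        
    
    def _OnIconize( self, event ):
        
        # when the frame is restored, catch up on the refreshes that were skipped while minimised
        if not event.IsIconized():
            
            wx.CallAfter( self._CatchUpRefresh )
        
        event.Skip()
        
    
    def _CatchUpRefresh( self ):
        
        pass # rebuild menus, statusbar, tag lists, and so on
        
    
    def _PeriodicUpdate( self ):
        
        # periodic ui work bails early while this particular frame is minimised
        if self.IsIconized():
            
            return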

View File

@ -8,6 +8,8 @@ from . import ClientGUIMenus
from . import ClientGUIShortcuts
from . import ClientSearch
from . import ClientThreading
from . import ClientGUIScrolledPanelsEdit
from . import ClientGUITopLevelWindows
import collections
from . import HydrusConstants as HC
from . import HydrusData
@ -80,9 +82,9 @@ def InsertStaticPredicatesForRead( predicates, parsed_search_text, include_unusu
else:
( namespace, half_complete_subtag ) = HydrusTags.SplitTag( search_text )
( namespace, subtag ) = HydrusTags.SplitTag( search_text )
if namespace != '' and half_complete_subtag in ( '', '*' ):
if namespace != '' and subtag in ( '', '*' ):
predicates.insert( 0, ClientSearch.Predicate( HC.PREDICATE_TYPE_NAMESPACE, namespace, inclusive ) )
@ -116,7 +118,9 @@ def InsertStaticPredicatesForWrite( predicates, parsed_search_text, tag_service_
( raw_entry, search_text, cache_text, entry_predicate, sibling_predicate ) = parsed_search_text
if search_text in ( '', ':', '*' ):
( namespace, subtag ) = HydrusTags.SplitTag( search_text )
if search_text in ( '', ':', '*' ) or subtag == '':
pass
@ -156,9 +160,11 @@ def ReadFetch( win, job_key, results_callable, parsed_search_text, wx_media_call
input_just_changed = search_text_for_current_cache is not None
definitely_do_it = input_just_changed or not initial_matches_fetched
db_not_going_to_hang_if_we_hit_it = not HG.client_controller.DBCurrentlyDoingJob()
if input_just_changed or db_not_going_to_hang_if_we_hit_it or not initial_matches_fetched:
if definitely_do_it or db_not_going_to_hang_if_we_hit_it:
if file_service_key == CC.COMBINED_FILE_SERVICE_KEY:
@ -173,8 +179,16 @@ def ReadFetch( win, job_key, results_callable, parsed_search_text, wx_media_call
cached_results = HG.client_controller.Read( 'file_system_predicates', search_service_key )
matches = cached_results
matches = cached_results
elif ( search_text_for_current_cache is None or search_text_for_current_cache == '' ) and cached_results is not None: # if repeating query but busy, use same
matches = cached_results
else:
matches = []
else:
@ -187,7 +201,7 @@ def ReadFetch( win, job_key, results_callable, parsed_search_text, wx_media_call
siblings_manager = HG.client_controller.tag_siblings_manager
if False and half_complete_subtag == '':
if half_complete_subtag == '':
search_text_for_current_cache = None
@ -412,34 +426,45 @@ def WriteFetch( win, job_key, results_callable, parsed_search_text, file_service
else:
must_do_a_search = False
( namespace, subtag ) = HydrusTags.SplitTag( search_text )
small_exact_match_search = ShouldDoExactSearch( cache_text )
if small_exact_match_search:
if subtag == '':
predicates = HG.client_controller.Read( 'autocomplete_predicates', file_service_key = file_service_key, tag_service_key = tag_service_key, search_text = cache_text, exact_match = True, add_namespaceless = False, job_key = job_key, collapse_siblings = False )
search_text_for_current_cache = None
matches = [] # a query like 'namespace:'
else:
cache_valid = CacheCanBeUsedForInput( search_text_for_current_cache, cache_text )
must_do_a_search = False
if must_do_a_search or not cache_valid:
small_exact_match_search = ShouldDoExactSearch( cache_text )
if small_exact_match_search:
search_text_for_current_cache = cache_text
predicates = HG.client_controller.Read( 'autocomplete_predicates', file_service_key = file_service_key, tag_service_key = tag_service_key, search_text = cache_text, exact_match = True, add_namespaceless = False, job_key = job_key, collapse_siblings = False )
cached_results = HG.client_controller.Read( 'autocomplete_predicates', file_service_key = file_service_key, tag_service_key = tag_service_key, search_text = search_text, add_namespaceless = False, job_key = job_key, collapse_siblings = False )
else:
cache_valid = CacheCanBeUsedForInput( search_text_for_current_cache, cache_text )
if must_do_a_search or not cache_valid:
search_text_for_current_cache = cache_text
cached_results = HG.client_controller.Read( 'autocomplete_predicates', file_service_key = file_service_key, tag_service_key = tag_service_key, search_text = search_text, add_namespaceless = False, job_key = job_key, collapse_siblings = False )
predicates = cached_results
next_search_is_probably_fast = True
predicates = cached_results
matches = ClientSearch.FilterPredicatesBySearchText( tag_service_key, search_text, predicates )
next_search_is_probably_fast = True
matches = ClientSearch.SortPredicates( matches )
matches = ClientSearch.FilterPredicatesBySearchText( tag_service_key, search_text, predicates )
matches = ClientSearch.SortPredicates( matches )
matches = InsertStaticPredicatesForWrite( matches, parsed_search_text, tag_service_key, expand_parents )
@ -545,7 +570,7 @@ class AutoCompleteDropdown( wx.Panel ):
self.SetSizer( vbox )
self._last_fetched_search_text = ''
self._current_list_raw_entry = ''
self._next_search_is_probably_fast = False
self._search_text_for_current_cache = None
@ -1154,8 +1179,6 @@ class AutoCompleteDropdown( wx.Panel ):
self._CancelCurrentResultsFetchJob()
self._last_fetched_search_text = search_text
self._search_text_for_current_cache = search_text_for_cache
self._cached_results = cached_results
@ -1256,6 +1279,8 @@ class AutoCompleteDropdownTags( AutoCompleteDropdown ):
def _SetResultsToList( self, results ):
self._current_list_raw_entry = self._text_ctrl.GetValue()
self._search_results_list.SetPredicates( results )
@ -1367,6 +1392,14 @@ class AutoCompleteDropdownTagsRead( AutoCompleteDropdownTags ):
self._synchronised = ClientGUICommon.OnOffButton( self._dropdown_window, self._page_key, 'notify_search_immediately', on_label = 'searching immediately', off_label = 'waiting -- tag counts may be inaccurate', start_on = synchronised )
self._synchronised.SetToolTip( 'select whether to renew the search as soon as a new predicate is entered' )
self._or_advanced = ClientGUICommon.BetterButton( self._dropdown_window, 'OR', self._AdvancedORInput )
self._or_advanced.SetToolTip( 'Advanced OR Search input.' )
if not HG.client_controller.new_options.GetBoolean( 'advanced_mode' ):
self._or_advanced.Hide()
self._or_cancel = ClientGUICommon.BetterBitmapButton( self._dropdown_window, CC.GlobalBMPs.delete, self._CancelORConstruction )
self._or_cancel.SetToolTip( 'Cancel OR Predicate construction.' )
self._or_cancel.Hide()
@ -1385,6 +1418,7 @@ class AutoCompleteDropdownTagsRead( AutoCompleteDropdownTags ):
sync_button_hbox = wx.BoxSizer( wx.HORIZONTAL )
sync_button_hbox.Add( self._synchronised, CC.FLAGS_EXPAND_BOTH_WAYS )
sync_button_hbox.Add( self._or_advanced, CC.FLAGS_VCENTER )
sync_button_hbox.Add( self._or_cancel, CC.FLAGS_VCENTER )
sync_button_hbox.Add( self._or_rewind, CC.FLAGS_VCENTER )
@ -1408,6 +1442,29 @@ class AutoCompleteDropdownTagsRead( AutoCompleteDropdownTags ):
HG.client_controller.sub( self, 'IncludePending', 'notify_include_pending' )
def _AdvancedORInput( self ):
title = 'enter advanced OR predicates'
with ClientGUITopLevelWindows.DialogEdit( self, title ) as dlg:
panel = ClientGUIScrolledPanelsEdit.EditAdvancedORPredicates( dlg )
dlg.SetPanel( panel )
if dlg.ShowModal() == wx.ID_OK:
predicates = panel.GetValue()
shift_down = False
if len( predicates ) > 0:
self._BroadcastChoices( predicates, shift_down )
def _BroadcastChoices( self, predicates, shift_down ):
or_pred_in_broadcast = self._under_construction_or_predicate is not None and self._under_construction_or_predicate in predicates
@ -1639,16 +1696,15 @@ class AutoCompleteDropdownTagsRead( AutoCompleteDropdownTags ):
( raw_entry, inclusive, wildcard_text, search_text, explicit_wildcard, cache_text, entry_predicate ) = self._ParseSearchText()
looking_at_search_results = self._dropdown_notebook.GetCurrentPage() == self._search_results_list
something_to_broadcast = cache_text != ''
current_page = self._dropdown_notebook.GetCurrentPage()
# looking at empty or system results
nothing_to_select = current_page == self._search_results_list and ( self._last_fetched_search_text == '' or not self._search_results_list.HasValues() )
# the list has results, but they are out of sync with what we have currently entered
# when the user has quickly typed something in and the results are not yet in
results_desynced_with_text = raw_entry != self._current_list_raw_entry
p1 = something_to_broadcast and nothing_to_select
p1 = looking_at_search_results and something_to_broadcast and results_desynced_with_text
return p1
@ -1948,22 +2004,22 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):
( raw_entry, search_text, cache_text, entry_predicate, sibling_predicate ) = self._ParseSearchText()
current_page = self._dropdown_notebook.GetCurrentPage()
looking_at_search_results = self._dropdown_notebook.GetCurrentPage() == self._search_results_list
sitting_on_empty = raw_entry == ''
something_to_broadcast = not sitting_on_empty
nothing_to_select = isinstance( current_page, ClientGUIListBoxes.ListBox ) and not current_page.HasValues()
# the list has results, but they are out of sync with what we have currently entered
# when the user has quickly typed something in and the results are not yet in
results_desynced_with_text = raw_entry != self._current_list_raw_entry
p1 = something_to_broadcast and nothing_to_select
p1 = something_to_broadcast and results_desynced_with_text
# when the text ctrl is empty, we are looking at search results, and we want to push a None to the parent dialog
# when the text ctrl is empty and we want to push a None to the parent dialog
p2 = sitting_on_empty
p2 = sitting_on_empty and current_page == self._search_results_list
return p1 or p2
return looking_at_search_results and ( p1 or p2 )
def _StartResultsFetchJob( self, job_key ):

View File

@ -2707,6 +2707,11 @@ class ListBoxTagsStrings( ListBoxTags ):
def ForceTagRecalc( self ):
if self.GetTopLevelParent().IsIconized():
return
self._RecalcTags()
@ -3100,6 +3105,11 @@ class ListBoxTagsSelection( ListBoxTags ):
def ForceTagRecalc( self ):
if self.GetTopLevelParent().IsIconized():
return
self.SetTagsByMedia( self._last_media, force_reload = True )

View File

@ -33,6 +33,7 @@ from . import ClientTags
from . import ClientThreading
from . import HydrusData
from . import HydrusGlobals as HG
from . import HydrusTags
from . import HydrusThreading
import os
import time
@ -990,6 +991,9 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
new_options = self._controller.new_options
self._maintenance_numbers_dirty = True
self._dupe_count_numbers_dirty = True
self._currently_refreshing_maintenance_numbers = False
self._currently_refreshing_dupe_count_numbers = False
@ -1003,7 +1007,7 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
#
self._refresh_maintenance_status = ClientGUICommon.BetterStaticText( self._main_left_panel )
self._refresh_maintenance_button = ClientGUICommon.BetterBitmapButton( self._main_left_panel, CC.GlobalBMPs.refresh, self._RefreshMaintenanceStatus )
self._refresh_maintenance_button = ClientGUICommon.BetterBitmapButton( self._main_left_panel, CC.GlobalBMPs.refresh, self.RefreshMaintenanceNumbers )
menu_items = []
@ -1075,7 +1079,7 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
self._both_files_match = wx.CheckBox( self._filtering_panel )
self._num_potential_duplicates = ClientGUICommon.BetterStaticText( self._filtering_panel )
self._refresh_dupe_counts_button = ClientGUICommon.BetterBitmapButton( self._filtering_panel, CC.GlobalBMPs.refresh, self._RefreshDuplicateCounts )
self._refresh_dupe_counts_button = ClientGUICommon.BetterBitmapButton( self._filtering_panel, CC.GlobalBMPs.refresh, self.RefreshDuplicateNumbers )
self._launch_filter = ClientGUICommon.BetterButton( self._filtering_panel, 'launch the filter', self._LaunchFilter )
@ -1186,8 +1190,6 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
self._controller.sub( self, 'RefreshQuery', 'refresh_query' )
self._controller.sub( self, 'SearchImmediately', 'notify_search_immediately' )
HG.client_controller.pub( 'refresh_dupe_page_numbers' )
def _EditMergeOptions( self, duplicate_type ):
@ -1245,6 +1247,8 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
self._currently_refreshing_dupe_count_numbers = False
self._dupe_count_numbers_dirty = False
self._refresh_dupe_counts_button.Enable()
self._UpdatePotentialDuplicatesCount( potential_duplicates_count )
@ -1271,7 +1275,7 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
def _RefreshMaintenanceStatus( self ):
def _RefreshMaintenanceNumbers( self ):
def wx_code( similar_files_maintenance_status ):
@ -1282,6 +1286,8 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
self._currently_refreshing_maintenance_numbers = False
self._maintenance_numbers_dirty = False
self._refresh_maintenance_status.SetLabelText( '' )
self._refresh_maintenance_button.Enable()
@ -1322,7 +1328,7 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
self._controller.Write( 'delete_potential_duplicate_pairs' )
self._RefreshMaintenanceStatus()
self._maintenance_numbers_dirty = True
@ -1338,7 +1344,7 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
if self._ac_read.IsSynchronised():
self._RefreshDuplicateCounts()
self._dupe_count_numbers_dirty = True
@ -1365,7 +1371,7 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
if change_made:
self._RefreshDuplicateCounts()
self._dupe_count_numbers_dirty = True
self._ShowRandomPotentialDupes()
@ -1522,9 +1528,19 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
def RefreshAllNumbers( self ):
self._RefreshMaintenanceStatus()
self.RefreshDuplicateNumbers()
self._RefreshDuplicateCounts()
self.RefreshMaintenanceNumbers()
def RefreshDuplicateNumbers( self ):
self._dupe_count_numbers_dirty = True
def RefreshMaintenanceNumbers( self ):
self._maintenance_numbers_dirty = True
def RefreshQuery( self, page_key ):
@ -1535,6 +1551,19 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
def REPEATINGPageUpdate( self ):
if self._maintenance_numbers_dirty:
self._RefreshMaintenanceNumbers()
if self._dupe_count_numbers_dirty:
self._RefreshDuplicateCounts()
def SearchImmediately( self, page_key, value ):
if page_key == self._page_key:
@ -3655,6 +3684,8 @@ class ManagementPanelPetitions( ManagementPanel ):
self._num_petition_info = None
self._current_petition = None
self._last_petition_type_fetched = None
#
self._petitions_info_panel = ClientGUICommon.StaticBox( self, 'petitions info' )
@ -3712,6 +3743,12 @@ class ManagementPanelPetitions( ManagementPanel ):
flip_selected = ClientGUICommon.BetterButton( self._petition_panel, 'flip selected', self._FlipSelected )
check_none = ClientGUICommon.BetterButton( self._petition_panel, 'check none', self._CheckNone )
self._sort_by_left = ClientGUICommon.BetterButton( self._petition_panel, 'sort by left', self._SortBy, 'left' )
self._sort_by_right = ClientGUICommon.BetterButton( self._petition_panel, 'sort by right', self._SortBy, 'right' )
self._sort_by_left.Disable()
self._sort_by_right.Disable()
self._contents = wx.CheckListBox( self._petition_panel, style = wx.LB_EXTENDED )
self._contents.Bind( wx.EVT_LISTBOX_DCLICK, self.EventContentDoubleClick )
@ -3740,10 +3777,16 @@ class ManagementPanelPetitions( ManagementPanel ):
check_hbox.Add( flip_selected, CC.FLAGS_VCENTER_EXPAND_DEPTH_ONLY )
check_hbox.Add( check_none, CC.FLAGS_VCENTER_EXPAND_DEPTH_ONLY )
sort_hbox = wx.BoxSizer( wx.HORIZONTAL )
sort_hbox.Add( self._sort_by_left, CC.FLAGS_VCENTER_EXPAND_DEPTH_ONLY )
sort_hbox.Add( self._sort_by_right, CC.FLAGS_VCENTER_EXPAND_DEPTH_ONLY )
self._petition_panel.Add( ClientGUICommon.BetterStaticText( self._petition_panel, label = 'Double-click a petition to see its files, if it has them.' ), CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( self._action_text, CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( self._reason_text, CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( check_hbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
self._petition_panel.Add( sort_hbox, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
self._petition_panel.Add( self._contents, CC.FLAGS_EXPAND_BOTH_WAYS )
self._petition_panel.Add( self._process, CC.FLAGS_EXPAND_PERPENDICULAR )
self._petition_panel.Add( self._modify_petitioner, CC.FLAGS_EXPAND_PERPENDICULAR )
@ -3789,6 +3832,9 @@ class ManagementPanelPetitions( ManagementPanel ):
self._contents.Clear()
self._process.Disable()
self._sort_by_left.Disable()
self._sort_by_right.Disable()
if self._can_ban:
self._modify_petitioner.Disable()
@ -3807,42 +3853,22 @@ class ManagementPanelPetitions( ManagementPanel ):
self._reason_text.SetBackgroundColour( action_colour )
if self._last_petition_type_fetched[0] in ( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_TYPE_TAG_PARENTS ):
self._sort_by_left.Enable()
self._sort_by_right.Enable()
else:
self._sort_by_left.Disable()
self._sort_by_right.Disable()
contents = self._current_petition.GetContents()
def key( c ):
if c.GetContentType() in ( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_TYPE_TAG_PARENTS ):
( part_two, part_one ) = c.GetContentData()
elif c.GetContentType() == HC.CONTENT_TYPE_MAPPINGS:
( tag, hashes ) = c.GetContentData()
part_one = tag
part_two = None
else:
part_one = None
part_two = None
return ( -c.GetVirtualWeight(), part_one, part_two )
contents_and_checks = [ ( c, True ) for c in contents ]
contents.sort( key = key )
self._contents.Clear()
for content in contents:
content_string = self._contents.EscapeMnemonics( content.ToString() )
self._contents.Append( content_string, content )
self._contents.SetCheckedItems( list( range( self._contents.GetCount() ) ) )
self._SetContentsAndChecks( contents_and_checks, 'right' )
self._process.Enable()
@ -3857,8 +3883,6 @@ class ManagementPanelPetitions( ManagementPanel ):
def _DrawNumPetitions( self ):
new_petition_fetched = False
for ( content_type, status, count ) in self._num_petition_info:
petition_type = ( content_type, status )
@ -3873,13 +3897,6 @@ class ManagementPanelPetitions( ManagementPanel ):
button.Enable()
if self._current_petition is None and not new_petition_fetched:
self._FetchPetition( content_type, status )
new_petition_fetched = True
else:
button.Disable()
@ -3888,6 +3905,40 @@ class ManagementPanelPetitions( ManagementPanel ):
def _FetchBestPetition( self ):
top_petition_type_with_count = None
for ( content_type, status, count ) in self._num_petition_info:
if count == 0:
continue
petition_type = ( content_type, status )
if top_petition_type_with_count is None:
top_petition_type_with_count = petition_type
if self._last_petition_type_fetched is not None and self._last_petition_type_fetched == petition_type:
self._FetchPetition( content_type, status )
return
if top_petition_type_with_count is not None:
( content_type, status ) = top_petition_type_with_count
self._FetchPetition( content_type, status )
def _FetchNumPetitions( self ):
def do_it( service ):
@ -3903,6 +3954,11 @@ class ManagementPanelPetitions( ManagementPanel ):
self._DrawNumPetitions()
if self._current_petition is None:
self._FetchBestPetition()
def wx_reset():
@ -3935,6 +3991,8 @@ class ManagementPanelPetitions( ManagementPanel ):
def _FetchPetition( self, content_type, status ):
self._last_petition_type_fetched = ( content_type, status )
( st, button ) = self._petition_types_to_controls[ ( content_type, status ) ]
def wx_setpet( petition ):
@ -3997,6 +4055,77 @@ class ManagementPanelPetitions( ManagementPanel ):
def _GetContentsAndChecks( self ):
contents_and_checks = []
for i in range( self._contents.GetCount() ):
content = self._contents.GetClientData( i )
check = self._contents.IsChecked( i )
contents_and_checks.append( ( content, check ) )
return contents_and_checks
def _SetContentsAndChecks( self, contents_and_checks, sort_type ):
def key( c_and_s ):
( c, s ) = c_and_s
if c.GetContentType() in ( HC.CONTENT_TYPE_TAG_SIBLINGS, HC.CONTENT_TYPE_TAG_PARENTS ):
( left, right ) = c.GetContentData()
if sort_type == 'left':
( part_one, part_two ) = ( HydrusTags.SplitTag( left ), HydrusTags.SplitTag( right ) )
elif sort_type == 'right':
( part_one, part_two ) = ( HydrusTags.SplitTag( right ), HydrusTags.SplitTag( left ) )
elif c.GetContentType() == HC.CONTENT_TYPE_MAPPINGS:
( tag, hashes ) = c.GetContentData()
part_one = HydrusTags.SplitTag( tag )
part_two = None
else:
part_one = None
part_two = None
return ( -c.GetVirtualWeight(), part_one, part_two )
contents_and_checks.sort( key = key )
self._contents.Clear()
to_check = []
for ( i, ( content, check ) ) in enumerate( contents_and_checks ):
content_string = self._contents.EscapeMnemonics( content.ToString() )
self._contents.Append( content_string, content )
if check:
to_check.append( i )
self._contents.SetCheckedItems( to_check )
def _ShowHashes( self, hashes ):
file_service_key = self._management_controller.GetKey( 'file_service' )
@ -4015,6 +4144,13 @@ class ManagementPanelPetitions( ManagementPanel ):
self._page.SwapMediaPanel( panel )
def _SortBy( self, sort_type ):
contents_and_checks = self._GetContentsAndChecks()
self._SetContentsAndChecks( contents_and_checks, sort_type )
def EventContentDoubleClick( self, event ):
selections = self._contents.GetSelections()
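For reference, the SplitTag-based sort key above orders petition rows by namespace and then subtag on the chosen side, after the virtual-weight term. With some illustrative tags, the key tuples compare like this:

( '', 'blue eyes' )  <  ( 'character', 'samus aran' )  <  ( 'series', 'metroid' )

so unnamespaced tags (empty namespace) sort first, ties within a namespace fall back to the subtag, and -c.GetVirtualWeight() still floats heavier petitions to the top regardless of tag text.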

View File

@ -1125,7 +1125,6 @@ class ReviewServicePanel( wx.Panel ):
if not new_options.GetBoolean( 'advanced_mode' ):
self._sync_now_button.Hide()
self._export_updates_button.Hide()
self._reset_button.Hide()
@ -1298,11 +1297,9 @@ class ReviewServicePanel( wx.Panel ):
def _SyncNow( self ):
message = 'This will tell the database to process any outstanding update files.'
message = 'This will tell the database to process any possible outstanding update files right now.'
message += os.linesep * 2
message += 'This is a big task that usually runs during idle time. It locks the entire database and takes over the ui, stopping you from interacting with it. It is cancellable but may still take some time to return ui control to you.'
message += os.linesep * 2
message += 'If you are a new user, click \'no\'!'
message += 'You can still use the client while it runs, but it may make some things like autocomplete lookup a bit juddery.'
with ClientGUIDialogs.DialogYesNo( self, message ) as dlg:

View File

@ -26,6 +26,7 @@ from . import ClientNetworkingContexts
from . import ClientNetworkingDomain
from . import ClientParsing
from . import ClientPaths
from . import ClientSearch
from . import ClientTags
import collections
from . import HydrusConstants as HC
@ -36,6 +37,7 @@ from . import HydrusNetwork
from . import HydrusSerialisable
from . import HydrusTags
from . import HydrusText
from . import LogicExpressionQueryParser
import os
import wx
@ -156,6 +158,144 @@ class EditAccountTypePanel( ClientGUIScrolledPanels.EditPanel ):
return HydrusNetwork.AccountType.GenerateAccountTypeFromParameters( self._account_type_key, title, permissions, bandwidth_rules )
class EditAdvancedORPredicates( ClientGUIScrolledPanels.EditPanel ):
def __init__( self, parent, initial_string = None ):
ClientGUIScrolledPanels.EditPanel.__init__( self, parent )
self._input_text = wx.TextCtrl( self )
self._result_preview = wx.TextCtrl( self, style = wx.TE_MULTILINE )
self._result_preview.SetEditable( False )
size = ClientGUIFunctions.ConvertTextToPixels( self._result_preview, ( 64, 6 ) )
self._result_preview.SetInitialSize( size )
self._current_predicates = []
#
if initial_string is not None:
self._input_text.SetValue( initial_string )
#
rows = []
rows.append( ( 'Input: ', self._input_text ) )
rows.append( ( 'Result preview: ', self._result_preview ) )
gridbox = ClientGUICommon.WrapInGrid( self, rows )
vbox = wx.BoxSizer( wx.VERTICAL )
summary = 'Enter a complicated tag search here, such as \'( blue eyes and blonde hair ) or ( green eyes and red hair )\', and prkc\'s code will turn it into hydrus-compatible search predicates.'
summary += os.linesep * 2
summary += 'Accepted operators: not (!, -), and (&&), or (||), implies (=>), xor, xnor (iff, <=>), nand, nor.'
summary += os.linesep
summary += 'Parentheses work the usual way. \ can be used to escape characters (e.g. to search for tags including parentheses)'
st = ClientGUICommon.BetterStaticText( self, summary )
width = ClientGUIFunctions.ConvertTextToPixelWidth( st, 96 )
st.SetWrapWidth( width )
vbox.Add( st, CC.FLAGS_EXPAND_PERPENDICULAR )
vbox.Add( gridbox, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
self.SetSizer( vbox )
self._UpdateText()
self._input_text.Bind( wx.EVT_TEXT, self.EventUpdateText )
def _UpdateText( self ):
text = self._input_text.GetValue()
self._current_predicates = []
output = ''
colour = ( 0, 0, 0 )
if len( text ) > 0:
try:
# this makes a list of sets in CNF: each set is a clause of OR'd tags, and the clauses are ANDed together
result = LogicExpressionQueryParser.parse_logic_expression_query( text )
for s in result:
output += ' OR '.join( s )
output += os.linesep
row_preds = []
for tag_string in s:
if tag_string.startswith( '-' ):
inclusive = False
tag_string = tag_string[1:]
else:
inclusive = True
row_pred = ClientSearch.Predicate( HC.PREDICATE_TYPE_TAG, tag_string, inclusive )
row_preds.append( row_pred )
if len( row_preds ) == 1:
self._current_predicates.append( row_preds[0] )
else:
self._current_predicates.append( ClientSearch.Predicate( HC.PREDICATE_TYPE_OR_CONTAINER, row_preds ) )
colour = ( 0, 128, 0 )
except ValueError:
output = 'Could not parse!'
colour = ( 128, 0, 0 )
self._result_preview.SetValue( output )
self._result_preview.SetForegroundColour( colour )
def EventUpdateText( self, event ):
self._UpdateText()
def GetValue( self ):
self._UpdateText()
if len( self._current_predicates ) == 0:
raise HydrusExceptions.VetoException( 'Please enter a string that parses into a set of search rules.' )
return self._current_predicates
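To make the conversion above concrete, take the example string from the changelog. Conjunctive normal form distributes the OR over the two AND groups; the exact clause order and any simplification are up to LogicExpressionQueryParser, this is just the textbook expansion:

( blue eyes and blonde hair ) or ( green eyes and red hair )
    becomes
( blue eyes OR green eyes ) AND ( blue eyes OR red hair ) AND ( blonde hair OR green eyes ) AND ( blonde hair OR red hair )

Each AND clause then becomes one entry in _current_predicates: a plain tag predicate when the clause has a single tag, or a PREDICATE_TYPE_OR_CONTAINER wrapping the row_preds when it has several.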
class EditBandwidthRulesPanel( ClientGUIScrolledPanels.EditPanel ):
def __init__( self, parent, bandwidth_rules, summary = '' ):

View File

@ -0,0 +1,115 @@
from . import ClientConstants as CC
from . import ClientThreading
from . import HydrusConstants as HC
from . import HydrusData
from . import HydrusExceptions
from . import HydrusFileHandling
from . import HydrusGlobals as HG
from . import HydrusImageHandling
from . import HydrusNetworking
from . import HydrusPaths
from . import HydrusThreading
import os
import random
import threading
import time
class GlobalMaintenanceJobInterface( object ):
def GetName( self ):
raise NotImplementedError()
def GetSummary( self ):
raise NotImplementedError()
GLOBAL_MAINTENANCE_RUN_ON_SHUTDOWN = 0
GLOBAL_MAINTENANCE_RUN_ON_IDLE = 1
GLOBAL_MAINTENANCE_RUN_FOREGROUND = 2
# make serialisable too
class GlobalMaintenanceJobScheduler( object ):
def __init__( self, period = None, paused = None, times_can_run = None, max_running_time = None ):
if period is None:
period = 3600
if paused is None:
paused = False
if times_can_run is None:
times_can_run = [ GLOBAL_MAINTENANCE_RUN_ON_SHUTDOWN, GLOBAL_MAINTENANCE_RUN_ON_IDLE ]
if max_running_time is None:
max_running_time = ( 600, 86400 )
self._next_run_time = 0
self._no_work_until_time = 0
self._no_work_until_reason = ''
self._period = period
self._paused = paused
self._times_can_run = times_can_run
# convert max running time into a time-based bandwidth rule or whatever
# and have bw tracker and rules
def CanRun( self ):
if not HydrusData.TimeHasPassed( self._no_work_until_time ):
return False
# check shutdown, idle, foreground status
if not HydrusData.TimeHasPassed( self._next_run_time ):
return False
return True
def GetNextRunTime( self ):
return self._next_run_time
def Paused( self ):
return self._paused
def WorkCompleted( self ):
self._next_run_time = HydrusData.GetNow() + self._period
# make this serialisable. it'll save like domain manager
class GlobalMaintenanceManager( object ):
def __init__( self, controller ):
self._controller = controller
self._maintenance_lock = threading.Lock()
self._lock = threading.Lock()
# something like files maintenance manager. it'll also run itself, always checking on jobs, and will catch 'runnow' events too
# instead of storing in db, we'll store here in the object since small number of complicated jobs
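A minimal usage sketch for the scheduler skeleton above, assuming some periodic caller polls it; the unified manager that would actually do this is only stubbed out in this commit:

scheduler = GlobalMaintenanceJobScheduler( period = 3600 )

def maybe_run( job ):
    
    if scheduler.Paused() or not scheduler.CanRun():
        
        return
    
    job() # do the actual maintenance work
    
    scheduler.WorkCompleted() # pushes the next run 'period' seconds into the future

Note that CanRun currently only checks the two timestamps; the shutdown/idle/foreground gating is still a to-do in the skeleton.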

View File

@ -2138,7 +2138,14 @@ class MediaSingleton( Media ):
def RefreshFileInfo( self ):
self._media_result.RefreshFileInfo()
media_results = HG.client_controller.Read( 'media_results', ( self._media_result.GetHash(), ) )
if len( media_results ) > 0:
media_result = media_results[0]
self._media_result = media_result
class MediaResult( object ):
@ -2293,18 +2300,6 @@ class MediaResult( object ):
def RefreshFileInfo( self ):
media_results = HG.client_controller.Read( 'media_results', ( self._file_info_manager.hash, ) )
if len( media_results ) > 0:
media_result = media_results[0]
self._file_info_manager = media_result._file_info_manager
def ResetService( self, service_key ):
self._tags_manager.ResetService( service_key )

View File

@ -108,6 +108,15 @@ def ConvertQueryTextToDict( query_text ):
# we generally do not want quote characters, %20 stuff, in our urls. we would prefer properly formatted unicode
# so, let's replace all keys and values with unquoted versions
# -but-
# we only replace if it is a completely reversable operation!
# odd situations like '6+girls+skirt', which comes here encoded as '6%2Bgirls+skirt', shouldn't turn into '6+girls+skirt'
# so if there are a mix of encoded and non-encoded, we won't touch it here m8
# except these chars, which screw with GET arg syntax when unquoted
bad_chars = [ '&', '=', '/', '?' ]
query_dict = {}
pairs = query_text.split( '&' )
@ -120,23 +129,20 @@ def ConvertQueryTextToDict( query_text ):
if len( result ) == 2:
# so, let's replace all keys and values with unquoted versions
# -but-
# we only replace if it is a completely reversable operation!
# odd situations like '6+girls+skirt', which comes here encoded as '6%2Bgirls+skirt', shouldn't turn into '6+girls+skirt'
# so if there are a mix of encoded and non-encoded, we won't touch it here m8
( key, value ) = result
try:
unquoted_key = urllib.parse.unquote( key )
requoted_key = urllib.parse.quote( unquoted_key )
if requoted_key == key:
if True not in ( bad_char in unquoted_key for bad_char in bad_chars ):
key = unquoted_key
requoted_key = urllib.parse.quote( unquoted_key )
if requoted_key == key:
key = unquoted_key
except:
@ -148,11 +154,14 @@ def ConvertQueryTextToDict( query_text ):
unquoted_value = urllib.parse.unquote( value )
requoted_value = urllib.parse.quote( unquoted_value )
if requoted_value == value:
if True not in ( bad_char in unquoted_value for bad_char in bad_chars ):
value = unquoted_value
requoted_value = urllib.parse.quote( unquoted_value )
if requoted_value == value:
value = unquoted_value
except:
@ -2260,7 +2269,9 @@ class GalleryURLGenerator( HydrusSerialisable.SerialisableBaseNamed ):
# when the tags separator is '+' but the tags include '6+girls', we run into fun internet land
if self._search_terms_separator in search_term:
bad_chars = [ self._search_terms_separator, '&', '=', '/', '?' ]
if True in ( bad_char in search_term for bad_char in bad_chars ):
search_term = urllib.parse.quote( search_term, safe = '' )
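A standalone sketch of the 'only unquote when it is cleanly reversible and introduces no GET-breaking characters' rule from the ConvertQueryTextToDict hunks above; the final hunk applies the same bad_chars idea in the other direction, quoting search terms that contain them. The function name here is illustrative:

import urllib.parse

BAD_CHARS = [ '&', '=', '/', '?' ] # chars that break GET-argument syntax when they appear unquoted

def safe_unquote( value ):
    
    unquoted = urllib.parse.unquote( value )
    
    # never unquote into a character that would change the query's structure
    if True in ( bad_char in unquoted for bad_char in BAD_CHARS ):
        
        return value
    
    # only accept the unquoted form if re-quoting gives back exactly the original
    if urllib.parse.quote( unquoted ) == value:
        
        return unquoted
    
    return value

So 'panty_%26_stocking_with_garterbelt' stays encoded (it would unquote to a raw '&'), and a mixed case like '6%2Bgirls+skirt' stays untouched because its round trip is not clean.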

View File

@ -1,5 +1,6 @@
from . import ClientConstants as CC
from . import ClientDownloading
from . import ClientFiles
from . import ClientImporting
from . import ClientNetworkingContexts
from . import ClientNetworkingJobs
@ -1172,189 +1173,87 @@ class ServiceRepository( ServiceRestricted ):
self._paused = dictionary[ 'paused' ]
def _SyncProcessUpdates( self, maintenance_mode = HC.MAINTENANCE_IDLE, stop_time = None ):
def _LogFinalRowSpeed( self, precise_timestamp, total_rows, row_name ):
if total_rows == 0:
return
it_took = HydrusData.GetNowPrecise() - precise_timestamp
rows_s = HydrusData.ToHumanInt( int( total_rows / it_took ) )
summary = '{} processed {} {} at {} rows/s'.format( self._name, HydrusData.ToHumanInt( total_rows ), row_name, rows_s )
HydrusData.Print( summary )
def _ReportOngoingRowSpeed( self, job_key, rows_done, total_rows, precise_timestamp, rows_done_in_last_packet, row_name ):
it_took = HydrusData.GetNowPrecise() - precise_timestamp
rows_s = HydrusData.ToHumanInt( int( rows_done_in_last_packet / it_took ) )
popup_message = '{} {}: processing at {} rows/s'.format( row_name, HydrusData.ConvertValueRangeToPrettyString( rows_done, total_rows ), rows_s )
HG.client_controller.pub( 'splash_set_status_text', popup_message, print_to_log = False )
job_key.SetVariable( 'popup_text_2', popup_message )
def _SyncDownloadMetadata( self ):
with self._lock:
if not self._CanSyncProcess():
if not self._CanSyncDownload():
return
if HG.client_controller.ShouldStopThisWork( maintenance_mode, stop_time = stop_time ):
do_it = self._metadata.UpdateDue( from_client = True )
return
next_update_index = self._metadata.GetNextUpdateIndex()
service_key = self._service_key
name = self._name
try:
if do_it:
( did_some_work, did_everything ) = HG.client_controller.WriteSynchronous( 'process_repository', self._service_key, maintenance_mode = maintenance_mode, stop_time = stop_time )
if did_some_work:
try:
with self._lock:
self._SetDirty()
if self._service_type == HC.TAG_REPOSITORY:
HG.client_controller.pub( 'notify_new_force_refresh_tags_data' )
response = self.Request( HC.GET, 'metadata', { 'since' : next_update_index } )
metadata_slice = response[ 'metadata_slice' ]
except HydrusExceptions.CancelledException as e:
self._DelayFutureRequests( str( e ) )
return
except HydrusExceptions.NetworkException as e:
HydrusData.Print( 'Attempting to download metadata for ' + name + ' resulted in a network error:' )
HydrusData.Print( e )
return
if not did_everything:
time.sleep( 3 ) # stop spamming of repo sync daemon from bringing this up again too quick
except HydrusExceptions.ShutdownException:
return
except Exception as e:
HG.client_controller.WriteSynchronous( 'associate_repository_update_hashes', service_key, metadata_slice )
with self._lock:
message = 'Failed to process updates for the ' + self._name + ' repository! The error follows:'
HydrusData.ShowText( message )
HydrusData.ShowException( e )
self._paused = True
self._metadata.UpdateFromSlice( metadata_slice )
self._SetDirty()
HG.client_controller.pub( 'important_dirt_to_clean' )
def CanDoIdleShutdownWork( self ):
with self._lock:
if not self._CanSyncProcess():
return False
service_key = self._service_key
( download_value, processing_value, range ) = HG.client_controller.Read( 'repository_progress', service_key )
return processing_value < range
def GetNextUpdateDueString( self ):
with self._lock:
return self._metadata.GetNextUpdateDueString( from_client = True )
def GetUpdateHashes( self ):
with self._lock:
return self._metadata.GetUpdateHashes()
def GetUpdateInfo( self ):
with self._lock:
return self._metadata.GetUpdateInfo()
def IsPaused( self ):
with self._lock:
return self._paused
def PausePlay( self ):
with self._lock:
self._paused = not self._paused
self._SetDirty()
HG.client_controller.pub( 'important_dirt_to_clean' )
if not self._paused:
HG.client_controller.pub( 'notify_new_permissions' )
def Reset( self ):
with self._lock:
self._no_requests_reason = ''
self._no_requests_until = 0
self._account = HydrusNetwork.Account.GenerateUnknownAccount()
self._next_account_sync = 0
self._metadata = HydrusNetwork.Metadata()
self._SetDirty()
HG.client_controller.pub( 'important_dirt_to_clean' )
HG.client_controller.Write( 'reset_repository', self )
def Sync( self, maintenance_mode = HC.MAINTENANCE_IDLE, stop_time = None ):
with self._sync_lock: # to stop sync_now button clicks from stomping over the regular daemon and vice versa
try:
self.SyncDownloadMetadata()
self.SyncDownloadUpdates( stop_time )
self._SyncProcessUpdates( maintenance_mode = maintenance_mode, stop_time = stop_time )
self.SyncThumbnails( stop_time )
except HydrusExceptions.ShutdownException:
pass
except Exception as e:
self._DelayFutureRequests( str( e ) )
HydrusData.ShowText( 'The service "{}" encountered an error while trying to sync! The error was "{}". It will not do any work for a little while. If the fix is not obvious, please elevate this to hydrus dev.'.format( self._name, str( e ) ) )
HydrusData.ShowException( e )
finally:
if self.IsDirty():
HG.client_controller.pub( 'important_dirt_to_clean' )
def SyncDownloadUpdates( self, stop_time ):
def _SyncDownloadUpdates( self, stop_time ):
with self._lock:
@ -1525,56 +1424,467 @@ class ServiceRepository( ServiceRestricted ):
def SyncDownloadMetadata( self ):
def _SyncProcessUpdates( self, maintenance_mode = HC.MAINTENANCE_IDLE, stop_time = None ):
with self._lock:
if not self._CanSyncDownload():
if not self._CanSyncProcess():
return
do_it = self._metadata.UpdateDue( from_client = True )
if HG.client_controller.ShouldStopThisWork( maintenance_mode, stop_time = stop_time ):
next_update_index = self._metadata.GetNextUpdateIndex()
service_key = self._service_key
name = self._name
return
if do_it:
work_done = False
try:
job_key = ClientThreading.JobKey( cancellable = True, maintenance_mode = maintenance_mode, stop_time = stop_time )
title = '{} sync: processing updates'.format( self._name )
job_key.SetVariable( 'popup_title', title )
( this_is_first_definitions_work, definition_hashes, this_is_first_content_work, content_hashes ) = HG.client_controller.Read( 'repository_update_hashes_to_process', self._service_key )
if len( definition_hashes ) == 0 and len( content_hashes ) == 0:
return # no work to do
HydrusData.Print( title )
num_updates_done = 0
num_updates_to_do = len( definition_hashes ) + len( content_hashes )
HG.client_controller.pub( 'message', job_key )
HG.client_controller.pub( 'splash_set_title_text', title, print_to_log = False )
total_definition_rows_completed = 0
total_content_rows_completed = 0
did_definition_analyze = False
did_content_analyze = False
definition_start_time = HydrusData.GetNowPrecise()
try:
response = self.Request( HC.GET, 'metadata', { 'since' : next_update_index } )
for definition_hash in definition_hashes:
progress_string = HydrusData.ConvertValueRangeToPrettyString( num_updates_done + 1, num_updates_to_do )
splash_title = '{} sync: processing updates {}'.format( self._name, progress_string )
HG.client_controller.pub( 'splash_set_title_text', splash_title, clear_undertexts = False, print_to_log = False )
status = 'processing {}'.format( progress_string )
job_key.SetVariable( 'popup_text_1', status )
job_key.SetVariable( 'popup_gauge_1', ( num_updates_done, num_updates_to_do ) )
try:
update_path = HG.client_controller.client_files_manager.GetFilePath( definition_hash, HC.APPLICATION_HYDRUS_UPDATE_DEFINITIONS )
except HydrusExceptions.FileMissingException:
HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE )
raise Exception( 'An unusual error has occurred during repository processing: an update file was missing. Your repository should be paused, and all update files have been scheduled for a presence check. Please permit file maintenance to check them, or tell it to do so manually, before unpausing your repository.' )
with open( update_path, 'rb' ) as f:
update_network_bytes = f.read()
try:
definition_update = HydrusSerialisable.CreateFromNetworkBytes( update_network_bytes )
except:
HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA )
raise Exception( 'An unusual error has occurred during repository processing: an update file was invalid. Your repository should be paused, and all update files have been scheduled for an integrity check. Please permit file maintenance to check them, or tell it to do so manually, before unpausing your repository.' )
if not isinstance( definition_update, HydrusNetwork.DefinitionsUpdate ):
HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_METADATA )
raise Exception( 'An unusual error has occurred during repository processing: an update file has incorrect metadata. Your repository should be paused, and all update files have been scheduled for a metadata rescan. Please permit file maintenance to fix them, or tell it to do so manually, before unpausing your repository.' )
rows_in_this_update = definition_update.GetNumRows()
rows_done_in_this_update = 0
iterator_dict = {}
iterator_dict[ 'service_hash_ids_to_hashes' ] = iter( definition_update.GetHashIdsToHashes().items() )
iterator_dict[ 'service_tag_ids_to_tags' ] = iter( definition_update.GetTagIdsToTags().items() )
while len( iterator_dict ) > 0:
this_work_start_time = HydrusData.GetNowPrecise()
if HG.client_controller.CurrentlyVeryIdle():
work_time = 29.5
break_time = 0.5
elif HG.client_controller.CurrentlyIdle():
work_time = 9.0
break_time = 1.0
else:
work_time = 0.5
break_time = 0.5
num_rows_done = HG.client_controller.WriteSynchronous( 'process_repository_definitions', self._service_key, definition_hash, iterator_dict, job_key, work_time )
rows_done_in_this_update += num_rows_done
total_definition_rows_completed += num_rows_done
work_done = True
if this_is_first_definitions_work and total_definition_rows_completed > 10000 and not did_definition_analyze:
HG.client_controller.WriteSynchronous( 'analyze', maintenance_mode = maintenance_mode, stop_time = stop_time )
did_definition_analyze = True
if HG.client_controller.ShouldStopThisWork( maintenance_mode, stop_time = stop_time ) or job_key.IsCancelled():
return
if HydrusData.TimeHasPassedPrecise( this_work_start_time + work_time ):
time.sleep( break_time )
self._ReportOngoingRowSpeed( job_key, rows_done_in_this_update, rows_in_this_update, this_work_start_time, num_rows_done, 'definitions' )
num_updates_done += 1
metadata_slice = response[ 'metadata_slice' ]
if this_is_first_definitions_work and not did_definition_analyze:
HG.client_controller.WriteSynchronous( 'analyze', maintenance_mode = maintenance_mode, stop_time = stop_time )
did_definition_analyze = True
except HydrusExceptions.CancelledException as e:
finally:
self._DelayFutureRequests( str( e ) )
self._LogFinalRowSpeed( definition_start_time, total_definition_rows_completed, 'definitions' )
return
except HydrusExceptions.NetworkException as e:
HydrusData.Print( 'Attempting to download metadata for ' + name + ' resulted in a network error:' )
HydrusData.Print( e )
if HG.client_controller.ShouldStopThisWork( maintenance_mode, stop_time = stop_time ) or job_key.IsCancelled():
return
HG.client_controller.WriteSynchronous( 'associate_repository_update_hashes', service_key, metadata_slice )
content_start_time = HydrusData.GetNowPrecise()
try:
for content_hash in content_hashes:
progress_string = HydrusData.ConvertValueRangeToPrettyString( num_updates_done + 1, num_updates_to_do )
splash_title = '{} sync: processing updates {}'.format( self._name, progress_string )
HG.client_controller.pub( 'splash_set_title_text', splash_title, clear_undertexts = False, print_to_log = False )
status = 'processing {}'.format( progress_string )
job_key.SetVariable( 'popup_text_1', status )
job_key.SetVariable( 'popup_gauge_1', ( num_updates_done, num_updates_to_do ) )
try:
update_path = HG.client_controller.client_files_manager.GetFilePath( content_hash, HC.APPLICATION_HYDRUS_UPDATE_CONTENT )
except HydrusExceptions.FileMissingException:
HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_PRESENCE )
raise Exception( 'An unusual error has occurred during repository processing: an update file was missing. Your repository should be paused, and all update files have been scheduled for a presence check. Please permit file maintenance to check them, or tell it to do so manually, before unpausing your repository.' )
with open( update_path, 'rb' ) as f:
update_network_bytes = f.read()
try:
content_update = HydrusSerialisable.CreateFromNetworkBytes( update_network_bytes )
except:
HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_INTEGRITY_DATA )
raise Exception( 'An unusual error has occurred during repository processing: an update file was invalid. Your repository should be paused, and all update files have been scheduled for an integrity check. Please permit file maintenance to check them, or tell it to do so manually, before unpausing your repository.' )
if not isinstance( content_update, HydrusNetwork.ContentUpdate ):
HG.client_controller.WriteSynchronous( 'schedule_repository_update_file_maintenance', self._service_key, ClientFiles.REGENERATE_FILE_DATA_JOB_FILE_METADATA )
raise Exception( 'An unusual error has occurred during repository processing: an update file has incorrect metadata. Your repository should be paused, and all update files have been scheduled for a metadata rescan. Please permit file maintenance to fix them, or tell it to do so manually, before unpausing your repository.' )
rows_in_this_update = content_update.GetNumRows()
rows_done_in_this_update = 0
iterator_dict = {}
iterator_dict[ 'new_files' ] = iter( content_update.GetNewFiles() )
iterator_dict[ 'deleted_files' ] = iter( content_update.GetDeletedFiles() )
iterator_dict[ 'new_mappings' ] = iter( content_update.GetNewMappings() )
iterator_dict[ 'deleted_mappings' ] = iter( content_update.GetDeletedMappings() )
iterator_dict[ 'new_parents' ] = iter( content_update.GetNewTagParents() )
iterator_dict[ 'deleted_parents' ] = iter( content_update.GetDeletedTagParents() )
iterator_dict[ 'new_siblings' ] = iter( content_update.GetNewTagSiblings() )
iterator_dict[ 'deleted_siblings' ] = iter( content_update.GetDeletedTagSiblings() )
while len( iterator_dict ) > 0:
this_work_start_time = HydrusData.GetNowPrecise()
if HG.client_controller.CurrentlyVeryIdle():
work_time = 29.5
break_time = 0.5
elif HG.client_controller.CurrentlyIdle():
work_time = 9.0
break_time = 1.0
else:
work_time = 0.5
break_time = 0.2
num_rows_done = HG.client_controller.WriteSynchronous( 'process_repository_content', self._service_key, content_hash, iterator_dict, job_key, work_time )
rows_done_in_this_update += num_rows_done
total_content_rows_completed += num_rows_done
work_done = True
if this_is_first_content_work and total_content_rows_completed > 10000 and not did_content_analyze:
HG.client_controller.WriteSynchronous( 'analyze', maintenance_mode = maintenance_mode, stop_time = stop_time )
did_content_analyze = True
if HG.client_controller.ShouldStopThisWork( maintenance_mode, stop_time = stop_time ) or job_key.IsCancelled():
return
if HydrusData.TimeHasPassedPrecise( this_work_start_time + work_time ):
time.sleep( break_time )
self._ReportOngoingRowSpeed( job_key, rows_done_in_this_update, rows_in_this_update, this_work_start_time, num_rows_done, 'content rows' )
num_updates_done += 1
if this_is_first_content_work and not did_content_analyze:
HG.client_controller.WriteSynchronous( 'analyze', maintenance_mode = maintenance_mode, stop_time = stop_time )
did_content_analyze = True
finally:
self._LogFinalRowSpeed( content_start_time, total_content_rows_completed, 'content rows' )
except HydrusExceptions.ShutdownException:
return
except Exception as e:
with self._lock:
self._metadata.UpdateFromSlice( metadata_slice )
message = 'Failed to process updates for the ' + self._name + ' repository! The error follows:'
HydrusData.ShowText( message )
HydrusData.ShowException( e )
self._paused = True
self._SetDirty()
HG.client_controller.pub( 'important_dirt_to_clean' )
finally:
if work_done:
HG.client_controller.pub( 'notify_new_force_refresh_tags_data' )
self._SetDirty()
job_key.DeleteVariable( 'popup_text_1' )
job_key.DeleteVariable( 'popup_text_2' )
job_key.DeleteVariable( 'popup_gauge_1' )
job_key.Finish()
job_key.Delete( 3 )
time.sleep( 3 ) # stop daemon restarting instantly if it is being spammed to wake up, just add a break mate
def CanDoIdleShutdownWork( self ):
with self._lock:
if not self._CanSyncProcess():
return False
service_key = self._service_key
( download_value, processing_value, range ) = HG.client_controller.Read( 'repository_progress', service_key )
return processing_value < range
def GetNextUpdateDueString( self ):
with self._lock:
return self._metadata.GetNextUpdateDueString( from_client = True )
def GetUpdateHashes( self ):
with self._lock:
return self._metadata.GetUpdateHashes()
def GetUpdateInfo( self ):
with self._lock:
return self._metadata.GetUpdateInfo()
def IsPaused( self ):
with self._lock:
return self._paused
def PausePlay( self ):
with self._lock:
self._paused = not self._paused
self._SetDirty()
HG.client_controller.pub( 'important_dirt_to_clean' )
if not self._paused:
HG.client_controller.pub( 'notify_new_permissions' )
def Reset( self ):
with self._lock:
self._no_requests_reason = ''
self._no_requests_until = 0
self._account = HydrusNetwork.Account.GenerateUnknownAccount()
self._next_account_sync = 0
self._metadata = HydrusNetwork.Metadata()
self._SetDirty()
HG.client_controller.pub( 'important_dirt_to_clean' )
HG.client_controller.Write( 'reset_repository', self )
def Sync( self, maintenance_mode = HC.MAINTENANCE_IDLE, stop_time = None ):
with self._sync_lock: # to stop sync_now button clicks from stomping over the regular daemon and vice versa
try:
self._SyncDownloadMetadata()
self._SyncDownloadUpdates( stop_time )
self._SyncProcessUpdates( maintenance_mode = maintenance_mode, stop_time = stop_time )
self.SyncThumbnails( stop_time )
except HydrusExceptions.ShutdownException:
pass
except Exception as e:
self._DelayFutureRequests( str( e ) )
HydrusData.ShowText( 'The service "{}" encountered an error while trying to sync! The error was "{}". It will not do any work for a little while. If the fix is not obvious, please elevate this to hydrus dev.'.format( self._name, str( e ) ) )
HydrusData.ShowException( e )
finally:
if self.IsDirty():
HG.client_controller.pub( 'important_dirt_to_clean' )
def SyncProcessUpdates( self, maintenance_mode = HC.MAINTENANCE_IDLE, stop_time = None ):
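For reference, a minimal sketch of the time-budgeted streaming pattern _SyncProcessUpdates follows above: rows are pulled from per-update iterators in short packets, the work/break budget depends on the client's idle state, and the loop only rests if the full budget was used. The write_packet, currently_very_idle, currently_idle and should_stop callables are hypothetical stand-ins, not real hydrus calls.

import time

def streamed_process( iterator_dict, write_packet, currently_very_idle, currently_idle, should_stop ):
    # keep handing short packets of work to the writer until every iterator is exhausted
    while len( iterator_dict ) > 0:
        this_work_start_time = time.time()
        if currently_very_idle():
            ( work_time, break_time ) = ( 29.5, 0.5 )
        elif currently_idle():
            ( work_time, break_time ) = ( 9.0, 1.0 )
        else:
            ( work_time, break_time ) = ( 0.5, 0.5 )
        # the writer consumes rows until its time budget runs out and removes any iterator it exhausts
        num_rows_done = write_packet( iterator_dict, work_time )
        if should_stop():
            return
        # only rest if the full budget was actually used up; num_rows_done could feed a progress report
        if time.time() - this_work_start_time > work_time:
            time.sleep( break_time )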

View File

@ -85,6 +85,7 @@ def VideoHasAudio( path ):
# silent PCM data is just 00 bytes
# every now and then, you'll get a couple ffs for some reason, but this is not legit audio data
try:
@ -92,12 +93,10 @@ def VideoHasAudio( path ):
while len( chunk_of_pcm_data ) > 0:
for b in chunk_of_pcm_data:
# iterating over bytes gives you ints, recall
if True in ( b != 0 and b != 255 for b in chunk_of_pcm_data ):
if b != b'\x00':
return True
return True
chunk_of_pcm_data = process.stdout.read( 65536 )
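The hunk above replaces the per-byte check with a generator-expression scan of each PCM chunk. A small self-contained sketch of that scan, using an in-memory stand-in for the ffmpeg stdout pipe:

import io

def pcm_chunk_has_audio( chunk_of_pcm_data ):
    # silent unsigned 8-bit PCM is 0x00 bytes, with the occasional stray 0xff
    return any( b != 0 and b != 255 for b in chunk_of_pcm_data )

fake_stdout = io.BytesIO( b'\x00' * 65536 + b'\x80\x7f' )  # hypothetical stand-in for process.stdout

chunk = fake_stdout.read( 65536 )

while len( chunk ) > 0:
    if pcm_chunk_has_audio( chunk ):
        print( 'has audio' )
        break
    chunk = fake_stdout.read( 65536 )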

View File

@ -67,7 +67,7 @@ options = {}
# Misc
NETWORK_VERSION = 18
SOFTWARE_VERSION = 363
SOFTWARE_VERSION = 364
CLIENT_API_VERSION = 10
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

View File

@ -130,7 +130,7 @@ class HydrusDB( object ):
READ_WRITE_ACTIONS = []
UPDATE_WAIT = 2
TRANSACTION_COMMIT_TIME = 10
TRANSACTION_COMMIT_TIME = 30
def __init__( self, controller, db_dir, db_name ):

View File

@ -1151,6 +1151,34 @@ def SplitListIntoChunks( xs, n ):
yield xs[ i : i + n ]
def SplitMappingIteratorIntoChunks( xs, n ):
chunk_weight = 0
chunk = []
for ( tag_item, hash_items ) in xs:
for chunk_of_hash_items in SplitIteratorIntoChunks( hash_items, n ):
chunk.append( ( tag_item, chunk_of_hash_items ) )
chunk_weight += len( chunk_of_hash_items )
if chunk_weight > n:
yield chunk
chunk_weight = 0
chunk = []
if len( chunk ) > 0:
yield chunk
def SplitMappingListIntoChunks( xs, n ):
chunk_weight = 0
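A hypothetical usage sketch of the new SplitMappingIteratorIntoChunks helper added above. Chunk weight is counted in hashes rather than tags, so a single tag with very many hashes still gets cut into packets once the running weight passes n; split_iterator_into_chunks below is a local stand-in for the module's own iterator chunker.

def split_iterator_into_chunks( iterator, n ):
    # local stand-in for the module's iterator chunker
    chunk = []
    for item in iterator:
        chunk.append( item )
        if len( chunk ) == n:
            yield chunk
            chunk = []
    if len( chunk ) > 0:
        yield chunk

def split_mapping_iterator_into_chunks( xs, n ):
    # same shape as the helper in the hunk above
    chunk_weight = 0
    chunk = []
    for ( tag_item, hash_items ) in xs:
        for chunk_of_hash_items in split_iterator_into_chunks( hash_items, n ):
            chunk.append( ( tag_item, chunk_of_hash_items ) )
            chunk_weight += len( chunk_of_hash_items )
            if chunk_weight > n:
                yield chunk
                chunk_weight = 0
                chunk = []
    if len( chunk ) > 0:
        yield chunk

mappings = iter( [ ( 'blue eyes', list( range( 250 ) ) ), ( 'blonde hair', list( range( 120 ) ) ) ] )

for packet in split_mapping_iterator_into_chunks( mappings, 100 ):
    print( sum( len( hashes ) for ( tag, hashes ) in packet ) )  # prints 200, then 150, then 20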

View File

@ -277,7 +277,7 @@ def SplitTag( tag ):
if ':' in tag:
return tag.split( ':', 1 )
return tuple( tag.split( ':', 1 ) )
else:
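The one-line change above makes SplitTag hand back a tuple rather than a list for namespaced tags; a plausible motivation, not stated in the diff, is that tuples are hashable, so split results can sit in sets or act as dict keys. A tiny illustration:

namespace_and_subtag = tuple( 'character:samus aran'.split( ':', 1 ) )

print( namespace_and_subtag )  # ('character', 'samus aran')

seen = set()
seen.add( namespace_and_subtag )  # the old list form would raise TypeError: unhashable type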

View File

@ -0,0 +1,327 @@
#made by prkc for Hydrus Network
#Licensed under the same terms as Hydrus Network
"""
Accepted operators: not (!, -), and (&&), or (||), implies (=>), xnor (iff, <=>), nand, nor.
Parentheses work the usual way. \ can be used to escape characters (e.g. to search for tags including parentheses)
The usual precedence rules apply.
ValueErrors are thrown with a message on syntax/parser errors.
Some test inputs:
a or b
a OR b
a and b
not a
a implies b
a xor b
a nor b
a nand b
a xnor b
(a && b) and not (a xor !b)
blah blah blah and another_long_tag_241245!
a_and_b
test!
!test
aaaaa_\(bbb ccc \(\)\) and not x
(a || b) and c and d and e or f and x or not (y or k or z and (h or i or j or t and f))
"""
import re
#Generates tokens for the parser. Consumes the input string.
#As opposed to most lexers it doesn't split on spaces.
#In fact, it tries to avoid splitting when possible by only splitting on logical operators or parentheses.
#Lowercase input is assumed.
#Contains some special handling for:
# * escapes with the \ character (escaping any character is valid). 'a \or b' is parsed as a single tag 'a or b'.
# * to allow tags ending with ! and other special chars without escaping. '!a' is negation of 'a' but 'a!' is just a tag.
#Returns a token and the remaining (unconsumed) input
def next_token(src):
def check_tag_end(src):
if re.match(r"\s(and|or|implies|xor|nor|nand|xnor|iff)", src): return True
if re.match(r"&&|\|\||=>|<=>|\)|\(", src): return True
return False
src = src.strip()
if len(src) == 0: return ("end", None), ""
escape = False
if src[0] == '\\' and len(src) > 1:
escape = True
src = src[1:]
if not escape:
if src.startswith(("!","-")):
return ("not", None), src[1:]
if src.startswith("&&"):
return ("and", None), src[2:]
if src.startswith("||"):
return ("or", None), src[2:]
if src.startswith("=>"):
return ("implies", None), src[2:]
if src.startswith("<=>"):
return ("iff", None), src[3:]
if src.startswith("("):
return ("(", None), src[1:]
if src.startswith(")"):
return (")", None), src[1:]
m = re.match(r"(not|and|or|implies|xor|nor|nand|xnor|iff)[\s\(]", src)
if m:
kw = m.group(1)
return (kw if kw != "xnor" else "iff", None), src[len(kw):]
tag = ""
if escape:
tag += src[0]
src = src[1:]
while len(src) > 0 and not check_tag_end(src):
if len(src) > 1 and src[0] == '\\':
tag += src[1]
src = src[2:]
else:
tag += src[0]
src = src[1:]
tag = tag.strip()
if len(tag) == 0:
raise ValueError("Syntax error: empty search term")
return ("tag", tag), src
#Roughly following conventional precedence, or C/C++ for rarely used operators
precedence_table = { "not": 10, "and": 9, "or": 8, "nor": 7, "nand": 7, "xor": 6, "implies": 5, "iff": 4 }
def precedence(token):
if token[0] in precedence_table: return precedence_table[token[0]]
raise ValueError("Syntax error: '{}' is not an operator".format(token[0]))
#A simple class representing a node in a logical expression tree
class Node:
def __init__(self, op, children = []):
self.op = op
self.children = children[:]
def __str__(self): #pretty string form, for debug purposes
if self.op == "not":
return "not ({})".format(str(self.children[0]) if type(self.children[0]) != str else self.children[0])
else:
child_strs = ["("+(str(x) if type(x) != str else x)+")" for x in self.children]
final_str = ""
for child_s in child_strs[:-1]:
final_str += child_s
final_str += " "+self.op+" "
final_str += child_strs[-1]
return final_str
#Parse a string into a logical expression tree
#First uses the shunting-yard algorithm to parse into reverse polish notation (RPN),
#then builds the tree from that
def parse(src):
src = src.lower()
prev_tok_type = "start"
tok_type = "start"
rpn_result = []
operator_stack = []
#Parse into reverse polish notation using the shunting-yard algorithm
#Basic algorithm:
#https://en.wikipedia.org/wiki/Shunting-yard_algorithm
#Handling of unary operators:
#https://stackoverflow.com/questions/1593080/how-can-i-modify-my-shunting-yard-algorithm-so-it-accepts-unary-operators
#tl;dr - make unary operators right associative and higher precedence than any infix operator
#however it will also accept prefix operators as postfix - we check for that later
while True:
prev_tok_type = tok_type
token, src = next_token(src)
tok_type, tok_val = token
if tok_type == "end":
break
if tok_type == "tag":
rpn_result.append(token)
elif tok_type == "(":
operator_stack.append(token)
elif tok_type == ")":
while len(operator_stack) > 0 and operator_stack[-1][0] != "(":
rpn_result.append(operator_stack[-1])
del operator_stack[-1]
if len(operator_stack) > 0:
del operator_stack[-1]
else:
raise ValueError("Syntax error: mismatched parentheses")
else:
if tok_type == "not" and prev_tok_type in ["tag",")"]:
raise ValueError("Syntax error: invalid negation")
while len(operator_stack) > 0 and operator_stack[-1][0] != "(" and \
(precedence(operator_stack[-1]) > precedence(token) or (precedence(operator_stack[-1]) == precedence(token) and operator_stack[-1][0] != "not")):
rpn_result.append(operator_stack[-1])
del operator_stack[-1]
operator_stack.append(token)
while len(operator_stack) > 0:
if operator_stack[-1][0] in ["(", ")"]:
raise ValueError("Syntax error: mismatched parentheses")
rpn_result.append(operator_stack[-1])
del operator_stack[-1]
if len(rpn_result) == 0:
raise ValueError("Empty input!")
#Convert RPN into a tree
#The original shunting-yard algorithm doesn't check for wrong number of arguments so also check that here
rpn_result = list(reversed(rpn_result))
stack = []
while len(rpn_result) > 0:
if rpn_result[-1][0] == "tag":
stack.append(rpn_result[-1][1])
del rpn_result[-1]
else:
if rpn_result[-1][0] == "not":
if len(stack) == 0:
raise ValueError("Syntax error: wrong number of arguments")
op = Node("not", [stack[-1]])
del stack[-1]
stack.append(op)
else:
if len(stack) < 2:
raise ValueError("Syntax error: wrong number of arguments")
op = Node(rpn_result[-1][0], [stack[-2], stack[-1]])
del stack[-1]
del stack[-1]
stack.append(op)
del rpn_result[-1]
#The original shunting-yard algorithm also accepts prefix operators as postfix
#Check for that here
if len(stack) != 1:
raise ValueError("Parser error: unused values left in stack")
return stack[0]
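#For example, parse("a and not b or c") builds or(and(a, not(b)), c):
#"not" binds tightest, then "and", then "or"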
#Input is an expression tree
#Convert all logical operators to 'and', 'or' and 'not'
def convert_to_and_or_not(node):
def negate(node):
return Node("not", [convert_to_and_or_not(node)])
if not hasattr(node, 'op'): return node
if node.op == "implies": #convert to !a || b
return Node("or", [negate(node.children[0]), convert_to_and_or_not(node.children[1])])
elif node.op == "xor": #convert to (a && !b) || (!a && b)
return Node("or", [
Node("and", [convert_to_and_or_not(node.children[0]), negate(node.children[1])]),
Node("and", [negate(node.children[0]), convert_to_and_or_not(node.children[1])])
])
elif node.op == "nor": #convert to !(a || b)
return negate(Node("or", node.children))
elif node.op == "nand": #convert to !(a && b)
return negate(Node("and", node.children))
elif node.op == "iff": #convert to (a && b) || (!a && !b)
return Node("or", [
convert_to_and_or_not(Node("and", node.children)),
Node("and", [negate(node.children[0]), negate(node.children[1])])
])
else:
return Node(node.op, list(map(convert_to_and_or_not, node.children)))
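#For example, "a implies b" becomes or(not(a), b) and "a nand b" becomes not(and(a, b))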
#Move negation inwards (downwards in the expr. tree) by using De Morgan's law,
#until they directly apply to a term
#Also eliminates double negations
def move_not_inwards(node):
if hasattr(node, 'op'):
if node.op == "not" and hasattr(node.children[0], 'op'):
if node.children[0].op == "not": #eliminate double negation
return move_not_inwards(node.children[0].children[0])
elif node.children[0].op == "and": #apply De Morgan's law
return Node("or", [move_not_inwards(Node("not", [node.children[0].children[0]])), move_not_inwards(Node("not", [node.children[0].children[1]]))])
elif node.children[0].op == "or": #apply De Morgan's law
return Node("and", [move_not_inwards(Node("not", [node.children[0].children[0]])), move_not_inwards(Node("not", [node.children[0].children[1]]))])
else:
return Node(node.op, list(map(move_not_inwards, node.children)))
else:
return Node(node.op, list(map(move_not_inwards, node.children)))
return node
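#For example, not(and(a, b)) becomes or(not(a), not(b)), and not(not(a)) collapses back to a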
#Use the distributive law to swap 'and's and 'or's so we get CNF
#Basically pushes 'or's downwards in the expression tree
def distribute_and_over_or(node):
if hasattr(node, 'op'):
node.children = list(map(distribute_and_over_or, node.children))
if node.op == 'or' and hasattr(node.children[0], 'op') and node.children[0].op == 'and': #apply (A && B) || C -> (A || C) && (B || C)
a = node.children[0].children[0]
b = node.children[0].children[1]
c = node.children[1]
return Node("and", [distribute_and_over_or(Node("or", [a, c])), distribute_and_over_or(Node("or", [b, c]))])
elif node.op == 'or' and hasattr(node.children[1], 'op') and node.children[1].op == 'and': #apply C || (A && B) -> (A || C) && (B || C)
a = node.children[1].children[0]
b = node.children[1].children[1]
c = node.children[0]
return Node("and", [distribute_and_over_or(Node("or", [a, c])), distribute_and_over_or(Node("or", [b, c]))])
else:
return node
return node
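#For example, or(and(a, b), c) becomes and(or(a, c), or(b, c))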
#Flatten the tree so that 'and'/'or's don't have 'and'/'or's as direct children
#or(or(a,b),c) -> or(a,b,c)
#After this step, nodes can have more than two children
def flatten_tree(node):
if hasattr(node, 'op'):
node.children = list(map(flatten_tree, node.children))
if node.op == 'and':
new_children = []
for chld in node.children:
if hasattr(chld, 'op') and chld.op == 'and':
new_children += chld.children
else:
new_children.append(chld)
node.children = new_children
elif node.op == 'or':
new_children = []
for chld in node.children:
if hasattr(chld, 'op') and chld.op == 'or':
new_children += chld.children
else:
new_children.append(chld)
node.children = new_children
return node
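#For example, and(and(a, b), c) becomes and(a, b, c)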
#Convert the flattened tree to a list of sets of terms
#Do some basic simplification: removing tautological or repeating clauses
def convert_to_list_and_simplify(node):
res = []
if hasattr(node, 'op'):
if node.op == 'and':
for chld in node.children:
if type(chld) == str:
res.append(set([chld]))
elif chld.op == 'not':
res.append(set(["-"+chld.children[0]]))
else:
res.append(set(map(lambda x: "-"+x.children[0] if hasattr(x, "op") else x, chld.children)))
elif node.op == 'or':
res.append(set(map(lambda x: "-"+x.children[0] if hasattr(x, "op") else x, node.children)))
elif node.op == 'not':
res.append(set(["-"+node.children[0]]))
else:
res.append(set([node]))
filtered_res = []
last_found_always_true_clause = None
#Filter out tautologies
for clause in res:
always_true = False
for term in clause:
if "-"+term in clause:
always_true = True
last_found_always_true_clause = clause
break
if not always_true: filtered_res.append(clause)
#Remove repeating clauses
for i in range(len(filtered_res)):
for j in range(len(filtered_res)):
if i != j and filtered_res[i] == filtered_res[j]: filtered_res[i] = None
filtered_res = [x for x in filtered_res if x is not None]
#Do not return empty if all clauses are tautologies, return a single clause instead
if len(filtered_res) == 0 and last_found_always_true_clause:
filtered_res.append(last_found_always_true_clause)
return filtered_res
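#For example, and(or(a, not(b)), not(c)) becomes [{"a", "-b"}, {"-c"}]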
#This is the main function of this module that should be called from outside
def parse_logic_expression_query(input_str):
return convert_to_list_and_simplify(flatten_tree(distribute_and_over_or(move_not_inwards(convert_to_and_or_not(parse(input_str))))))
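A usage sketch for the module above: assuming it is importable as LogicExpressionQueryParser (the filename is not shown in this view), feeding it one of the example searches yields CNF as a list of sets, one set per OR-clause, with '-' marking negation.

from LogicExpressionQueryParser import parse_logic_expression_query  # module name assumed

clauses = parse_logic_expression_query( '( blue eyes and blonde hair ) or ( green eyes and red hair )' )

# up to clause ordering, this input gives:
# [{'blue eyes', 'green eyes'}, {'blue eyes', 'red hair'}, {'blonde hair', 'green eyes'}, {'blonde hair', 'red hair'}]
for clause in clauses:
    print( clause )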

View File

@ -367,7 +367,7 @@ class Controller( HydrusController.HydrusController ):
if not HydrusNetworking.LocalPortInUse( port ):
raise Exception( 'Tried to bind port ' + str( port ) + ' but it failed.' )
raise Exception( 'Tried to bind port {} for "{}" but it failed.'.format( port, service.GetName() ) )
except Exception as e:

View File

@ -117,6 +117,8 @@ except Exception as e:
print( 'Critical boot error occurred! Details written to crash.log!' )
controller = None
with HydrusLogger.HydrusLogger( db_dir, 'server' ) as logger:
try:
@ -156,14 +158,10 @@ with HydrusLogger.HydrusLogger( db_dir, 'server' ) as logger:
HG.view_shutdown = True
HG.model_shutdown = True
try:
if controller is not None:
controller.pubimmediate( 'wake_daemons' )
except:
HydrusData.Print( traceback.format_exc() )
reactor.callFromThread( reactor.stop )

Binary file not shown.
