Version 365
This commit is contained in:
parent
6041b27035
commit
84ac3552de
|
@ -6,6 +6,7 @@
|
|||
</head>
|
||||
<body>
|
||||
<div class="content">
|
||||
<p class="warning">The PTR is under bandwidth stress right now. More options to sync are likely to come in the coming months. To jump the queue, you can download a mirror of the PTR's files <a href="https://cuddlebear92.github.io/Quicksync/#two">here</a> and import them under the <i>services->import repository update files</i> menu entry.</p>
|
||||
<h3>first off</h3>
|
||||
<p>I have purposely not pre-baked this into the client's install. You have to put it in yourself. The client won't connect anywhere until you tell it to.</p>
|
||||
<h3>access keys</h3>
|
||||
|
@ -23,4 +24,4 @@
|
|||
<p>If you are an advanced user, you can manually export (<i>services->review services->export updates</i> if <i>advanced mode</i> is on) and import (<i>services->import repository update files</i>) the update files a repository gives your client to save you redownloading them elsewhere. If you want to set up a PTR client a bit quicker than usual (and you understand the processing routine and the delays it can experience if it encounters thousands of new update files at once), check the update archive <a href="https://cuddlebear92.github.io/Quicksync/">here</a>.</p>
|
||||
</div>
|
||||
</body>
|
||||
</html>
|
||||
|
|
|
@ -8,6 +8,51 @@
|
|||
<div class="content">
|
||||
<h3>changelog</h3>
|
||||
<ul>
|
||||
<li><h3>version 365</h3></li>
|
||||
<ul>
|
||||
<li>new repo processing:</li>
|
||||
<li>improved the new asynchronous repository processing system in several ways:</li>
|
||||
<li>- it now uses the time it is allotted more accurately. when it has 0.45s to work, it hits this mark more often, especially on slower machines</li>
|
||||
<li>- it is now politer to the ui if plenty of other work is going on--if the db is loading search results or you are viewing video, it should pause for milliseconds as needed</li>
|
||||
<li>- it can now work up to 90% of the time during a manual 'process now' run</li>
|
||||
<li>- when it is working faster than expected, it accelerates its load to operate more efficiently</li>
|
||||
<li>as a result, the new system should now have faster rows/s and lag out the ui less (a rough sketch of the chunk-throttling idea follows this version's list)</li>
|
||||
<li>.</li>
|
||||
<li>client api:</li>
|
||||
<li>improved how parameters are fetched and tested against expected type and given default values if appropriate, and updated all client api code to use this new system</li>
|
||||
<li>added /manage_pages/get_page_info, which gives simple or detailed info on a given page, found by page_key</li>
|
||||
<li>added page info response to hdd importers, simple downloaders, url downloaders, gallery downloaders, watcher downloaders--they say their pause status and file/gallery import info as appropriate</li>
|
||||
<li>added page info response to file import caches--they say their status and num_done/num_to_do, and in detailed mode report file import item info, which is url/path, created, modified, and source times, current status, and any note</li>
|
||||
<li>added page info response to gallery logs--they say their status and num_done/num_to_do, and in detailed mode report gallery log info, which is url, created and modified times, current status, and any note</li>
|
||||
<li>added page info response to thumbnail panes--they say their total num files, and in detailed mode list their ordered hashes</li>
|
||||
<li>started some help for this expansion, but it will need some feedback and more work to finish</li>
|
||||
<li>the client api now sorts /get_files/search_files results by import time, newest to oldest. this first hardcoded sort comes to help implement booru-like pagination, but will be expanded to support more types as I flesh out the ui side (as below)</li>
|
||||
<li>hydrus services, including the client, should now be able to handle larger request header+path total size (16KB->1MB). this helps some larger GET queries in the client api. let's see how this goes</li>
|
||||
<li>client api is now version 11</li>
|
||||
<li>.</li>
|
||||
<li>collect improvements:</li>
|
||||
<li>the collect data attached to pages is updated to its own object. the default value and existing page settings should update. all ui now handles the new clean object, rather than the old messy list</li>
|
||||
<li>the new collect object supports an option for whether to collect 'unmatched' thumbs or to leave them separate. this displays in the ui as a dropdown beside the collect-by checkboxlist</li>
|
||||
<li>to better distinguish between unmatched singles and matched collections with just one item, all one-item collections will now act as collections, with the little '1' collection icon in their corner (previously, they were split into singles). if this is annoying, I will add another option to control whether this occurs</li>
|
||||
<li>removed some old 'integrate media into existing collected structure' code that was complicated, never used, and now broken</li>
|
||||
<li>misc sort/collect refactoring</li>
|
||||
<li>deleted some old unused collection code</li>
|
||||
<li>.</li>
|
||||
<li>the rest:</li>
|
||||
<li>entering tags in the filename tagging panel, either for all or just selected, now pushes those tags to the 'recent tags' list in later manage tags dialogs</li>
|
||||
<li>added a framework to start sorting search results before the system:limit is applied--I will soon extend this to catch the current ui sort (say, 'largest files first') and cut the system:limit appropriately, rather than taking the current random sample</li>
|
||||
<li>added a faster table size check on the analyze maintenance call that will recognise fast-growing tables (e.g. initially empty/tiny repository processing tables that may have seen a ton of recent work) and schedule them better (this was previously potentially hanging certain maintenance checks/shutdown by several minutes when hitting a surprisingly giant table)</li>
|
||||
<li>reduced the analyze frequency for established tables</li>
|
||||
<li>the client will now explicitly count as 'idle' and 'very idle' during shutdown maintenance time, in case any shutdown job is considering that for how greedy it should be with work time</li>
|
||||
<li>fixed an issue where appending new media (thumbnails) to a page that already had that media but within a collection could create a duplicate media entry and invalidate some internal data links to the old media</li>
|
||||
<li>subscriptions will no longer print full traceback information when a network error causes a sync fail</li>
|
||||
<li>updated to yet another deviant art file page parser. title tags and embedded image links should be fixed again, post/source time is not fixed</li>
|
||||
<li>the current deviant art login script is confirmed to work for some users. my guess is certain people are getting cloudflare blocked or aren't being shown the new login page all the time yet; please send in any more info you discover</li>
|
||||
<li>the client will now recover from a missing options object by putting in a fresh one with default values, including a popup notifying you of the error and giving you a chance to bail out</li>
|
||||
<li>added a warning and link to the quicksync to the access_keys help page</li>
|
||||
<li>if the os commands the client to close due to a log off or system shut down, the client will kindly ask for a bit more time to do so if it is available</li>
|
||||
<li>updated the WTFPL license to v3</li>
|
||||
</ul>
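<p>For the curious, here is a rough, hypothetical sketch of the chunk-throttling idea described in the repo processing notes above. It is not the client's actual HydrusData.SplitIteratorIntoAutothrottledChunks code--just an illustration of starting with a small chunk, growing it while the consumer keeps finishing quickly, and shrinking it when a chunk runs long or the allotted time is up. The function name and the timing thresholds here are made up for the example.</p>
<pre>import itertools
import time

def split_into_autothrottled_chunks( iterable, initial_chunk_size, time_to_stop ):
    
    # hand out list chunks from the iterable, resizing them based on how long the consumer took with the last one
    
    chunk_size = initial_chunk_size
    iterator = iter( iterable )
    
    while True:
        
        if time.time() > time_to_stop:
            
            return # out of allotted time, stop handing out work
        
        chunk = list( itertools.islice( iterator, chunk_size ) )
        
        if len( chunk ) == 0:
            
            return # the source is exhausted
        
        work_started = time.time()
        
        yield chunk # the consumer processes the chunk while we are suspended here
        
        work_time = time.time() - work_started
        
        if work_time > 1.0:
            
            chunk_size = max( 1, chunk_size // 2 ) # that took a while, so back off
            
        elif 4 * work_time > 1.0:
            
            pass # roughly on target, keep the current size
            
        else:
            
            chunk_size *= 2 # that was quick, so take on more next time
        
    

# e.g. do 0.45s of work, inserting one chunk of rows per iteration
for chunk in split_into_autothrottled_chunks( range( 10000 ), 50, time.time() + 0.45 ):
    
    pass # process the rows in this chunk</pre>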
|
||||
<li><h3>version 364</h3></li>
|
||||
<ul>
|
||||
<li>repo processing makeover:</li>
|
||||
|
|
|
@ -71,6 +71,7 @@
|
|||
<h4>Managing Pages</h4>
|
||||
<ul>
|
||||
<li><a href="#manage_pages_get_pages">GET /manage_pages/get_pages</a></li>
|
||||
<li><a href="#manage_pages_get_page_info">GET /manage_pages/get_page_info</a></li>
|
||||
<li><a href="#manage_pages_focus_page">POST /manage_pages/focus_page</a></li>
|
||||
</ul>
|
||||
<h4>Searching and Fetching Files</h4>
|
||||
|
@ -674,6 +675,100 @@
|
|||
<p>The 'page_key' is a unique identifier for the page. It will stay the same for a particular page throughout the session, but new ones are generated on a client restart or other session reload.</p>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="apiborder" id="manage_pages_get_page_info">
|
||||
<h3><b>GET /manage_pages/get_page_info</b></h3>
|
||||
<p><i>Get information about a specific page.</i></p>
|
||||
<p class="warning">This is under construction. The current call dumps a ton of info for different downloader pages. Please experiment in IRL situations and give feedback for now! I will flesh out this help with more enumeration info and examples as this gets nailed down. POST commands to alter pages (adding, removing, highlighting), will come later.</p>
|
||||
<ul>
|
||||
<li><p>Restricted access: YES. Manage Pages permission needed.</p></li>
|
||||
<li><p>Required Headers: n/a</p></li>
|
||||
<li>
|
||||
<p>Arguments:</p>
|
||||
<ul>
|
||||
<li>page_key : (hexadecimal page_key as stated in <a href="#manage_pages_get_pages">/manage_pages/get_pages</a>)</li>
|
||||
<li>simple : true or false (optional, defaulting to true)</li>
|
||||
</ul>
|
||||
</li>
|
||||
<li>
|
||||
<p>Example request:</p>
|
||||
<ul>
|
||||
<li><p>/manage_pages/get_page_info?page_key=aebbf4b594e6986bddf1eeb0b5846a1e6bc4e07088e517aff166f1aeb1c3c9da&simple=true</p></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li><p>Response description: A JSON Object of the page's information. At present, this mostly means downloader information.</p></li>
|
||||
<li>
|
||||
<p>Example response with simple = true:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<pre>{
    "name" : "threads",
    "page_key" : "aebbf4b594e6986bddf1eeb0b5846a1e6bc4e07088e517aff166f1aeb1c3c9da",
    "page_type" : 3,
    "management" : {
        "multiple_watcher_import" : {
            "watcher_imports" : [
                {
                    "url" : "https://someimageboard.net/m/123456",
                    "watcher_key" : "cf8c3525c57a46b0e5c2625812964364a2e801f8c49841c216b8f8d7a4d06d85",
                    "created" : 1566164269,
                    "last_check_time" : 1566164272,
                    "next_check_time" : 1566174272,
                    "files_paused" : false,
                    "checking_paused" : false,
                    "checking_status" : 0,
                    "subject" : "gundam pictures",
                    "imports" : {
                        "status" : "4 successful (2 already in db)",
                        "simple_status" : "4",
                        "total_processed" : 4,
                        "total_to_process" : 4
                    },
                    "gallery_log" : {
                        "status" : "1 successful",
                        "simple_status" : "1",
                        "total_processed" : 1,
                        "total_to_process" : 1
                    }
                },
                {
                    "url" : "https://someimageboard.net/a/1234",
                    "watcher_key" : "6bc17555b76da5bde2dcceedc382cf7d23281aee6477c41b643cd144ec168510",
                    "created" : 1566063125,
                    "last_check_time" : 1566063133,
                    "next_check_time" : 1566104272,
                    "files_paused" : false,
                    "checking_paused" : true,
                    "checking_status" : 1,
                    "subject" : "anime pictures",
                    "imports" : {
                        "status" : "124 successful (22 already in db), 2 previously deleted",
                        "simple_status" : "124",
                        "total_processed" : 124,
                        "total_to_process" : 124
                    },
                    "gallery_log" : {
                        "status" : "3 successful",
                        "simple_status" : "3",
                        "total_processed" : 3,
                        "total_to_process" : 3
                    }
                }
            ],
            "highlight" : "cf8c3525c57a46b0e5c2625812964364a2e801f8c49841c216b8f8d7a4d06d85"
        }
    },
    "media" : {
        "num_files" : 4
    }
}</pre>
|
||||
</li>
|
||||
</ul>
|
||||
<p>As you can see, even the 'simple' mode can get very large. Imagine that response for a page watching 100 threads! Turning simple mode off will display every import item, gallery log entry, and all hashes in the media (thumbnail) panel.</p>
|
||||
<p>For this first version, the five importer pages--hdd import, simple downloader, url downloader, gallery page, and watcher page--all give rich info based on their specific variables. The first three only have one importer/gallery log combo, but the latter two of course can have multiple. The "imports" and "gallery_log" entries are all in the same data format.</p>
|
||||
</li>
|
||||
</ul>
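<p>For orientation, here is a rough Python sketch of calling this endpoint with just the standard library. It assumes the default Client API address of http://127.0.0.1:45869 and that your access key goes in the Hydrus-Client-API-Access-Key header as described in the access section of this help--the helper name and placeholder key are made up for the example, so adjust for your own setup.</p>
<pre>import json
import urllib.request

API_BASE = 'http://127.0.0.1:45869' # assumed default Client API address
ACCESS_KEY = 'put your 64-character hex access key here'

def get_page_info( page_key, simple = True ):
    
    # ask the client about one page, identified by the page_key from /manage_pages/get_pages
    
    url = '{}/manage_pages/get_page_info?page_key={}&simple={}'.format( API_BASE, page_key, str( simple ).lower() )
    
    request = urllib.request.Request( url, headers = { 'Hydrus-Client-API-Access-Key' : ACCESS_KEY } )
    
    with urllib.request.urlopen( request ) as response:
        
        return json.loads( response.read().decode( 'utf-8' ) )
    

page_info = get_page_info( 'aebbf4b594e6986bddf1eeb0b5846a1e6bc4e07088e517aff166f1aeb1c3c9da' )

print( json.dumps( page_info, indent = 4 ) )</pre>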
|
||||
</div>
|
||||
<div class="apiborder" id="manage_pages_focus_page">
|
||||
<h3><b>POST /manage_pages/focus_page</b></h3>
|
||||
<p><i>'Show' a page in the main GUI, making it the current page in view. If it is already the current page, no change is made.</i></p>
|
||||
|
@ -740,7 +835,7 @@
|
|||
</ul>
|
||||
</li>
|
||||
<p>File ids are internal and specific to an individual client. For a client, a file with hash H always has the same file id N, but two clients will have different ideas about which N goes with which H. They are a bit faster than hashes to retrieve and search with <i>en masse</i>, which is why they are exposed here.</p>
|
||||
<p>The search will be performed on the 'local files' file domain and 'all known tags' tag domain.</p>
|
||||
<p>The search will be performed on the 'local files' file domain and 'all known tags' tag domain. At the moment, the results are sorted by import time, newest to oldest (which is useful if you would like to paginate them before fetching metadata), but sort options will expand in future.</p>
|
||||
<p>Note that most clients will have an invisible system:limit of 10,000 files on all queries. I expect to add more system predicates to help searching for untagged files, but it is tricky to fetch all files under any circumstance. Large queries may take several seconds to respond.</p>
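<p>Since the results come back sorted newest-first, a simple pagination approach is to slice the returned file_ids and fetch metadata one slice at a time. The following is a rough sketch under the same assumptions as above (default address, access key in the Hydrus-Client-API-Access-Key header), using the /get_files/file_metadata call documented further down; the helper name, tag list, and page size are made up for illustration.</p>
<pre>import json
import urllib.parse
import urllib.request

API_BASE = 'http://127.0.0.1:45869' # assumed default Client API address
ACCESS_KEY = 'put your 64-character hex access key here'

def api_get( path, params ):
    
    # build a GET request with the access key header and return the decoded JSON response
    
    query = urllib.parse.urlencode( params )
    
    request = urllib.request.Request( '{}{}?{}'.format( API_BASE, path, query ), headers = { 'Hydrus-Client-API-Access-Key' : ACCESS_KEY } )
    
    with urllib.request.urlopen( request ) as response:
        
        return json.loads( response.read().decode( 'utf-8' ) )
    

# file_ids come back sorted by import time, newest first
search = api_get( '/get_files/search_files', { 'tags' : json.dumps( [ 'blue eyes', 'blonde hair' ] ) } )

file_ids = search[ 'file_ids' ]

PAGE_SIZE = 256 # an arbitrary 'page' size for this example

# fetch metadata one slice of ids at a time
for i in range( 0, len( file_ids ), PAGE_SIZE ):
    
    page_of_file_ids = file_ids[ i : i + PAGE_SIZE ]
    
    metadata = api_get( '/get_files/file_metadata', { 'file_ids' : json.dumps( page_of_file_ids ) } )
    
    for file_info in metadata[ 'metadata' ]:
        
        print( file_info[ 'file_id' ], file_info[ 'hash' ] )</pre>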
|
||||
</ul>
|
||||
</div>
|
||||
|
@ -807,6 +902,7 @@
|
|||
"width" : 1920,
|
||||
"height" : 1080,
|
||||
"duration" : 4040,
|
||||
"has_audio" : true,
|
||||
"num_frames" : 102,
|
||||
"num_words" : null,
|
||||
"known_urls" : [
|
||||
|
|
|
@ -57,27 +57,6 @@ Shift-LeftClick-Drag - Drag (in Filter)
|
|||
Ctrl + MouseWheel - Zoom
|
||||
Z - Zoom Full/Fit'''
|
||||
|
||||
COLLECT_BY_S = 0
|
||||
COLLECT_BY_SV = 1
|
||||
COLLECT_BY_SVC = 2
|
||||
NO_COLLECTIONS = 3
|
||||
|
||||
COLLECTION_CHOICES = [ 'collect by series', 'collect by series-volume', 'collect by series-volume-chapter', 'no collections' ]
|
||||
|
||||
collection_enum_lookup = {}
|
||||
|
||||
collection_enum_lookup[ 'collect by series' ] = COLLECT_BY_S
|
||||
collection_enum_lookup[ 'collect by series-volume' ] = COLLECT_BY_SV
|
||||
collection_enum_lookup[ 'collect by series-volume-chapter' ] = COLLECT_BY_SVC
|
||||
collection_enum_lookup[ 'no collections' ] = NO_COLLECTIONS
|
||||
|
||||
collection_string_lookup = {}
|
||||
|
||||
collection_string_lookup[ COLLECT_BY_S ] = 'collect by series'
|
||||
collection_string_lookup[ COLLECT_BY_SV ] = 'collect by series-volume'
|
||||
collection_string_lookup[ COLLECT_BY_SVC ] = 'collect by series-volume-chapter'
|
||||
collection_string_lookup[ NO_COLLECTIONS ] = 'no collections'
|
||||
|
||||
if HC.PLATFORM_OSX:
|
||||
|
||||
DELETE_KEYS = ( wx.WXK_BACK, wx.WXK_DELETE, wx.WXK_NUMPAD_DELETE )
|
||||
|
|
|
@ -73,6 +73,11 @@ class App( wx.App ):
|
|||
|
||||
HG.client_controller.gui.Exit()
|
||||
|
||||
if event.CanVeto(): # if any more time is offered, take it
|
||||
|
||||
event.Veto()
|
||||
|
||||
|
||||
|
||||
def EventQueryEndSession( self, event ):
|
||||
|
||||
|
@ -80,6 +85,11 @@ class App( wx.App ):
|
|||
|
||||
HG.client_controller.gui.Exit()
|
||||
|
||||
if event.CanVeto(): # if any more time is offered, take it
|
||||
|
||||
event.Veto()
|
||||
|
||||
|
||||
|
||||
class Controller( HydrusController.HydrusController ):
|
||||
|
||||
|
@ -382,6 +392,11 @@ class Controller( HydrusController.HydrusController ):
|
|||
|
||||
def CurrentlyIdle( self ):
|
||||
|
||||
if HG.program_is_shutting_down:
|
||||
|
||||
return True
|
||||
|
||||
|
||||
if HG.force_idle_mode:
|
||||
|
||||
self._idle_started = 0
|
||||
|
@ -444,6 +459,11 @@ class Controller( HydrusController.HydrusController ):
|
|||
|
||||
def CurrentlyVeryIdle( self ):
|
||||
|
||||
if HG.program_is_shutting_down:
|
||||
|
||||
return True
|
||||
|
||||
|
||||
if self._idle_started is not None and HydrusData.TimeHasPassed( self._idle_started + 3600 ):
|
||||
|
||||
return True
|
||||
|
@ -483,6 +503,8 @@ class Controller( HydrusController.HydrusController ):
|
|||
|
||||
def Exit( self ):
|
||||
|
||||
HG.program_is_shutting_down = True
|
||||
|
||||
if HG.emergency_exit:
|
||||
|
||||
HydrusData.DebugPrint( 'doing fast shutdown\u2026' )
|
||||
|
@ -1422,8 +1444,8 @@ class Controller( HydrusController.HydrusController ):
|
|||
|
||||
|
||||
|
||||
|
||||
self.SetRunningTwistedServices( [] )
|
||||
self.SetRunningTwistedServices( [] )
|
||||
|
||||
|
||||
HydrusController.HydrusController.ShutdownView( self )
|
||||
|
||||
|
|
|
@ -394,9 +394,9 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
|
||||
|
||||
def _AnalyzeStaleBigTables( self, maintenance_mode = HC.MAINTENANCE_FORCED, stop_time = None, force_reanalyze = False ):
|
||||
def _AnalyzeDueTables( self, maintenance_mode = HC.MAINTENANCE_FORCED, stop_time = None, force_reanalyze = False ):
|
||||
|
||||
names_to_analyze = self._GetBigTableNamesToAnalyze( force_reanalyze = force_reanalyze )
|
||||
names_to_analyze = self._GetTableNamesDueAnalysis( force_reanalyze = force_reanalyze )
|
||||
|
||||
if len( names_to_analyze ) > 0:
|
||||
|
||||
|
@ -4271,7 +4271,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
return predicates
|
||||
|
||||
|
||||
def _GetBigTableNamesToAnalyze( self, force_reanalyze = False ):
|
||||
def _GetTableNamesDueAnalysis( self, force_reanalyze = False ):
|
||||
|
||||
db_names = [ name for ( index, name, path ) in self._c.execute( 'PRAGMA database_list;' ) if name not in ( 'mem', 'temp' ) ]
|
||||
|
||||
|
@ -4297,9 +4297,9 @@ class DB( HydrusDB.HydrusDB ):
|
|||
big_table_minimum = 10000
|
||||
huge_table_minimum = 1000000
|
||||
|
||||
small_table_stale_time_delta = 86400
|
||||
big_table_stale_time_delta = 30 * 86400
|
||||
huge_table_stale_time_delta = 30 * 86400 * 6
|
||||
small_table_stale_time_delta = 3 * 86400
|
||||
big_table_stale_time_delta = 30 * 86400 * 3
|
||||
huge_table_stale_time_delta = 30 * 86400 * 18
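# i.e. roughly 3 days for small tables, 90 days for big tables, and 540 days for huge tables before they are due another analyze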
|
||||
|
||||
existing_names_to_info = { name : ( num_rows, timestamp ) for ( name, num_rows, timestamp ) in self._c.execute( 'SELECT name, num_rows, timestamp FROM analyze_timestamps;' ) }
|
||||
|
||||
|
@ -4311,15 +4311,37 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
( num_rows, timestamp ) = existing_names_to_info[ name ]
|
||||
|
||||
if num_rows > big_table_minimum:
|
||||
if num_rows < big_table_minimum:
|
||||
|
||||
if num_rows > huge_table_minimum:
|
||||
if HydrusData.TimeHasPassed( timestamp + small_table_stale_time_delta ):
|
||||
|
||||
due_time = timestamp + huge_table_stale_time_delta
|
||||
# how do we deal with a small table that just grew by 3 million rows?
|
||||
# we want this func to always be fast, so we'll just test small tables with fast rowcount test
|
||||
# if a table exceeds the next threshold or has doubled in size, we'll analyze it
|
||||
|
||||
test_count_value = min( big_table_minimum, num_rows * 2 )
|
||||
|
||||
if self._TableHasAtLeastRowCount( name, test_count_value ):
|
||||
|
||||
names_to_analyze.append( name )
|
||||
|
||||
else:
|
||||
|
||||
# and we don't want to bother the user with analyze notifications for tiny tables all the time, so if they aren't large, just do them now
|
||||
|
||||
self._AnalyzeTable( name )
|
||||
|
||||
|
||||
|
||||
else:
|
||||
|
||||
if num_rows < huge_table_minimum:
|
||||
|
||||
due_time = timestamp + big_table_stale_time_delta
|
||||
|
||||
else:
|
||||
|
||||
due_time = timestamp + big_table_stale_time_delta
|
||||
due_time = timestamp + huge_table_stale_time_delta
|
||||
|
||||
|
||||
if HydrusData.TimeHasPassed( due_time ):
|
||||
|
@ -4327,14 +4349,6 @@ class DB( HydrusDB.HydrusDB ):
|
|||
names_to_analyze.append( name )
|
||||
|
||||
|
||||
else:
|
||||
|
||||
# these usually take a couple of milliseconds, so just sneak them in here. no need to bother the user with a prompt
|
||||
if HydrusData.TimeHasPassed( timestamp + small_table_stale_time_delta ):
|
||||
|
||||
self._AnalyzeTable( name )
|
||||
|
||||
|
||||
|
||||
else:
|
||||
|
||||
|
@ -4964,7 +4978,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
return hash_ids
|
||||
|
||||
|
||||
def _GetHashIdsFromQuery( self, search_context, job_key = None, query_hash_ids = None, apply_implicit_limit = True ):
|
||||
def _GetHashIdsFromQuery( self, search_context, job_key = None, query_hash_ids = None, apply_implicit_limit = True, sort_by = None ):
|
||||
|
||||
if job_key is None:
|
||||
|
||||
|
@ -5771,17 +5785,53 @@ class DB( HydrusDB.HydrusDB ):
|
|||
return set()
|
||||
|
||||
|
||||
#
|
||||
|
||||
query_hash_ids = list( query_hash_ids )
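# if the caller supplied a sort we can do here, apply it to the whole result set now, before any system:limit trims it--so the limit keeps the top of the sort instead of a random sample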
|
||||
|
||||
if sort_by is not None:
|
||||
|
||||
( sort_metadata, sort_data ) = sort_by.sort_type
|
||||
sort_asc = sort_by.sort_asc
|
||||
|
||||
query = None
|
||||
|
||||
if sort_metadata == 'system':
|
||||
|
||||
if sort_data == CC.SORT_FILES_BY_IMPORT_TIME:
|
||||
|
||||
query = 'SELECT hash_id, timestamp FROM files_info NATURAL JOIN current_files WHERE hash_id IN {} AND service_id = {};'.format( '{}', self._local_file_service_id )
|
||||
|
||||
key = lambda row: row[1]
|
||||
|
||||
reverse = sort_asc == CC.SORT_DESC
|
||||
|
||||
|
||||
|
||||
if query is not None:
|
||||
|
||||
query_hash_ids_and_other_data = list( self._SelectFromList( query, query_hash_ids ) )
|
||||
|
||||
query_hash_ids_and_other_data.sort( key = key, reverse = reverse )
|
||||
|
||||
query_hash_ids = [ row[0] for row in query_hash_ids_and_other_data ]
|
||||
|
||||
|
||||
|
||||
#
|
||||
|
||||
limit = system_predicates.GetLimit( apply_implicit_limit = apply_implicit_limit )
|
||||
|
||||
if limit is not None and limit <= len( query_hash_ids ):
|
||||
|
||||
query_hash_ids = random.sample( query_hash_ids, limit )
|
||||
|
||||
else:
|
||||
|
||||
query_hash_ids = list( query_hash_ids )
|
||||
if sort_by is None:
|
||||
|
||||
query_hash_ids = random.sample( query_hash_ids, limit )
|
||||
|
||||
else:
|
||||
|
||||
query_hash_ids = query_hash_ids[:limit]
|
||||
|
||||
|
||||
|
||||
return query_hash_ids
|
||||
|
@ -6397,7 +6447,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
# analyze
|
||||
|
||||
names_to_analyze = self._GetBigTableNamesToAnalyze()
|
||||
names_to_analyze = self._GetTableNamesDueAnalysis()
|
||||
|
||||
if len( names_to_analyze ) > 0:
|
||||
|
||||
|
@ -10049,10 +10099,10 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
def _ProcessRepositoryContent( self, service_key, content_hash, content_iterator_dict, job_key, work_time ):
|
||||
|
||||
FILES_CHUNK_SIZE = 200
|
||||
MAPPINGS_CHUNK_SIZE = 500
|
||||
NEW_TAG_PARENTS_CHUNK_SIZE = 10
|
||||
PAIR_ROWS_CHUNK_SIZE = 1000
|
||||
FILES_INITIAL_CHUNK_SIZE = 20
|
||||
MAPPINGS_INITIAL_CHUNK_SIZE = 50
|
||||
NEW_TAG_PARENTS_INITIAL_CHUNK_SIZE = 1
|
||||
PAIR_ROWS_INITIAL_CHUNK_SIZE = 100
|
||||
|
||||
service_id = self._GetServiceId( service_key )
|
||||
|
||||
|
@ -10066,7 +10116,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = content_iterator_dict[ 'new_files' ]
|
||||
|
||||
for chunk in HydrusData.SplitIteratorIntoChunks( i, FILES_CHUNK_SIZE ):
|
||||
for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, FILES_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
|
||||
|
||||
files_info_rows = []
|
||||
files_rows = []
|
||||
|
@ -10101,7 +10151,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = content_iterator_dict[ 'deleted_files' ]
|
||||
|
||||
for chunk in HydrusData.SplitIteratorIntoChunks( i, FILES_CHUNK_SIZE ):
|
||||
for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, FILES_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
|
||||
|
||||
service_hash_ids = chunk
|
||||
|
||||
|
@ -10126,7 +10176,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = content_iterator_dict[ 'new_mappings' ]
|
||||
|
||||
for chunk in HydrusData.SplitMappingIteratorIntoChunks( i, MAPPINGS_CHUNK_SIZE ):
|
||||
for chunk in HydrusData.SplitMappingIteratorIntoAutothrottledChunks( i, MAPPINGS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
|
||||
|
||||
mappings_ids = []
|
||||
|
||||
|
@ -10161,7 +10211,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = content_iterator_dict[ 'deleted_mappings' ]
|
||||
|
||||
for chunk in HydrusData.SplitMappingIteratorIntoChunks( i, MAPPINGS_CHUNK_SIZE ):
|
||||
for chunk in HydrusData.SplitMappingIteratorIntoAutothrottledChunks( i, MAPPINGS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
|
||||
|
||||
deleted_mappings_ids = []
|
||||
|
||||
|
@ -10196,7 +10246,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = content_iterator_dict[ 'new_parents' ]
|
||||
|
||||
for chunk in HydrusData.SplitIteratorIntoChunks( i, NEW_TAG_PARENTS_CHUNK_SIZE ):
|
||||
for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, NEW_TAG_PARENTS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
|
||||
|
||||
parent_ids = []
|
||||
|
||||
|
@ -10227,7 +10277,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = content_iterator_dict[ 'deleted_parents' ]
|
||||
|
||||
for chunk in HydrusData.SplitIteratorIntoChunks( i, PAIR_ROWS_CHUNK_SIZE ):
|
||||
for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
|
||||
|
||||
parent_ids = []
|
||||
|
||||
|
@ -10260,7 +10310,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = content_iterator_dict[ 'new_siblings' ]
|
||||
|
||||
for chunk in HydrusData.SplitIteratorIntoChunks( i, PAIR_ROWS_CHUNK_SIZE ):
|
||||
for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
|
||||
|
||||
sibling_ids = []
|
||||
|
||||
|
@ -10293,7 +10343,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = content_iterator_dict[ 'deleted_siblings' ]
|
||||
|
||||
for chunk in HydrusData.SplitIteratorIntoChunks( i, PAIR_ROWS_CHUNK_SIZE ):
|
||||
for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, PAIR_ROWS_INITIAL_CHUNK_SIZE, precise_time_to_stop ):
|
||||
|
||||
sibling_ids = []
|
||||
|
||||
|
@ -10341,7 +10391,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = definition_iterator_dict[ 'service_hash_ids_to_hashes' ]
|
||||
|
||||
for chunk in HydrusData.SplitIteratorIntoChunks( i, 500 ):
|
||||
for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, 50, precise_time_to_stop ):
|
||||
|
||||
inserts = []
|
||||
|
||||
|
@ -10369,7 +10419,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
i = definition_iterator_dict[ 'service_tag_ids_to_tags' ]
|
||||
|
||||
for chunk in HydrusData.SplitIteratorIntoChunks( i, 500 ):
|
||||
for chunk in HydrusData.SplitIteratorIntoAutothrottledChunks( i, 50, precise_time_to_stop ):
|
||||
|
||||
inserts = []
|
||||
|
||||
|
@ -10769,6 +10819,27 @@ class DB( HydrusDB.HydrusDB ):
|
|||
self._RegenerateACCache()
|
||||
|
||||
|
||||
#
|
||||
|
||||
new_options = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_CLIENT_OPTIONS )
|
||||
|
||||
if new_options is None:
|
||||
|
||||
message = 'On boot, your main options object was missing!'
|
||||
message += os.linesep * 2
|
||||
message += 'If you wish, click ok on this message and the client will re-add fresh options with default values. But if you want to solve this problem otherwise, kill the hydrus process now.'
|
||||
message += os.linesep * 2
|
||||
message += 'If you do not already know what caused this, it was likely a hard drive fault--either due to a recent abrupt power cut or actual hardware failure. Check \'help my db is broke.txt\' in the install_dir/db directory as soon as you can.'
|
||||
|
||||
self._controller.CallBlockingToWX( self._controller, wx.MessageBox, message )
|
||||
|
||||
new_options = ClientOptions.ClientOptions()
|
||||
|
||||
new_options.SetSimpleDownloaderFormulae( ClientDefaults.GetDefaultSimpleDownloaderFormulae() )
|
||||
|
||||
self._SetJSONDump( new_options )
|
||||
|
||||
|
||||
|
||||
def _ReportOverupdatedDB( self, version ):
|
||||
|
||||
|
@ -11222,6 +11293,23 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
|
||||
|
||||
def _TableHasAtLeastRowCount( self, name, row_count ):
|
||||
|
||||
cursor = self._c.execute( 'SELECT 1 FROM {};'.format( name ) )
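# walk the cursor one row at a time and stop as soon as row_count rows have been seen--much cheaper than a full SELECT COUNT on a giant table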
|
||||
|
||||
for i in range( row_count ):
|
||||
|
||||
r = cursor.fetchone()
|
||||
|
||||
if r is None:
|
||||
|
||||
return False
|
||||
|
||||
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def _TagExists( self, tag ):
|
||||
|
||||
tag = HydrusTags.CleanTag( tag )
|
||||
|
@ -13009,6 +13097,67 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
|
||||
|
||||
if version == 364:
|
||||
|
||||
try:
|
||||
|
||||
( options, ) = self._c.execute( 'SELECT options FROM options;' ).fetchone()
|
||||
|
||||
new_options = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_CLIENT_OPTIONS )
|
||||
|
||||
default_collect = options[ 'default_collect' ]
|
||||
|
||||
if default_collect is None:
|
||||
|
||||
default_collect = []
|
||||
|
||||
|
||||
namespaces = [ n for ( t, n ) in default_collect if t == 'namespace' ]
|
||||
rating_service_keys = [ bytes.fromhex( r ) for ( t, r ) in default_collect if t == 'rating' ]
|
||||
|
||||
default_media_collect = ClientMedia.MediaCollect( namespaces = namespaces, rating_service_keys = rating_service_keys )
|
||||
|
||||
new_options.SetDefaultCollect( default_media_collect )
|
||||
|
||||
self._SetJSONDump( new_options )
|
||||
|
||||
except Exception as e:
|
||||
|
||||
HydrusData.PrintException( e )
|
||||
|
||||
message = 'Trying to update your default collection settings failed! Please check them in the options dialog.'
|
||||
|
||||
self.pub_initial_message( message )
|
||||
|
||||
|
||||
try:
|
||||
|
||||
domain_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
|
||||
|
||||
domain_manager.Initialise()
|
||||
|
||||
#
|
||||
|
||||
domain_manager.OverwriteDefaultParsers( ( 'deviant art file page parser', ) )
|
||||
|
||||
#
|
||||
|
||||
domain_manager.TryToLinkURLClassesAndParsers()
|
||||
|
||||
#
|
||||
|
||||
self._SetJSONDump( domain_manager )
|
||||
|
||||
except Exception as e:
|
||||
|
||||
HydrusData.PrintException( e )
|
||||
|
||||
message = 'Trying to update some url classes and parsers failed! Please let hydrus dev know!'
|
||||
|
||||
self.pub_initial_message( message )
|
||||
|
||||
|
||||
|
||||
self._controller.pub( 'splash_set_title_text', 'updated db to v' + str( version + 1 ) )
|
||||
|
||||
self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
|
||||
|
@ -13527,7 +13676,7 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
result = None
|
||||
|
||||
if action == 'analyze': self._AnalyzeStaleBigTables( *args, **kwargs )
|
||||
if action == 'analyze': self._AnalyzeDueTables( *args, **kwargs )
|
||||
elif action == 'associate_repository_update_hashes': self._AssociateRepositoryUpdateHashes( *args, **kwargs )
|
||||
elif action == 'backup': self._Backup( *args, **kwargs )
|
||||
elif action == 'clear_false_positive_relations': self._DuplicatesClearAllFalsePositiveRelationsFromHashes( *args, **kwargs )
|
||||
|
|
|
@ -13,7 +13,6 @@ def GetClientDefaultOptions():
|
|||
options = {}
|
||||
|
||||
options[ 'play_dumper_noises' ] = True
|
||||
options[ 'default_collect' ] = []
|
||||
options[ 'export_path' ] = None
|
||||
options[ 'hpos' ] = 400
|
||||
options[ 'vpos' ] = 700
|
||||
|
|
|
@ -4680,9 +4680,23 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
|
|||
return self._notebook.GetCurrentMediaPage()
|
||||
|
||||
|
||||
def GetCurrentSessionPageInfoDict( self ):
|
||||
def GetCurrentSessionPageAPIInfoDict( self ):
|
||||
|
||||
return self._notebook.GetPageInfoDict( is_selected = True )
|
||||
return self._notebook.GetSessionAPIInfoDict( is_selected = True )
|
||||
|
||||
|
||||
def GetPageAPIInfoDict( self, page_key, simple ):
|
||||
|
||||
page = self._notebook.GetPageFromPageKey( page_key )
|
||||
|
||||
if page is None:
|
||||
|
||||
return None
|
||||
|
||||
else:
|
||||
|
||||
return page.GetAPIInfoDict( simple )
|
||||
|
||||
|
||||
|
||||
def GetTotalPageCounts( self ):
|
||||
|
|
|
@ -3153,7 +3153,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
|
|||
|
||||
elif result == wx.ID_YES:
|
||||
|
||||
self._CommitProcessed()
|
||||
self._CommitProcessed( blocking = False )
|
||||
|
||||
|
||||
|
||||
|
@ -3165,7 +3165,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
|
|||
CanvasWithHovers._Close( self )
|
||||
|
||||
|
||||
def _CommitProcessed( self ):
|
||||
def _CommitProcessed( self, blocking = True ):
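# when blocking is False, the pair statuses go to the db as a fire-and-forget Write rather than a WriteSynchronous, so the ui does not wait on the commit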
|
||||
|
||||
pair_info = []
|
||||
|
||||
|
@ -3184,7 +3184,14 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
|
|||
|
||||
if len( pair_info ) > 0:
|
||||
|
||||
HG.client_controller.WriteSynchronous( 'duplicate_pair_status', pair_info )
|
||||
if blocking:
|
||||
|
||||
HG.client_controller.WriteSynchronous( 'duplicate_pair_status', pair_info )
|
||||
|
||||
else:
|
||||
|
||||
HG.client_controller.Write( 'duplicate_pair_status', pair_info )
|
||||
|
||||
|
||||
|
||||
self._processed_pairs = []
|
||||
|
@ -3550,7 +3557,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
|
|||
|
||||
if result == wx.ID_YES:
|
||||
|
||||
self._CommitProcessed()
|
||||
self._CommitProcessed( blocking = True )
|
||||
|
||||
else:
|
||||
|
||||
|
@ -3647,7 +3654,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
|
|||
|
||||
wx.MessageBox( 'At least one of the potential files in this pair was not in this client. Likely it was very recently deleted through a different process. Your decisions until now will be saved, and then the duplicate filter will close.' )
|
||||
|
||||
self._CommitProcessed()
|
||||
self._CommitProcessed( blocking = True )
|
||||
|
||||
self._Close()
|
||||
|
||||
|
|
|
@ -703,74 +703,106 @@ class BufferedWindowIcon( BufferedWindow ):
|
|||
self._dirty = False
|
||||
|
||||
|
||||
class CheckboxCollect( wx.ComboCtrl ):
|
||||
class CheckboxCollect( wx.Panel ):
|
||||
|
||||
def __init__( self, parent, management_controller = None ):
|
||||
|
||||
wx.ComboCtrl.__init__( self, parent, style = wx.CB_READONLY )
|
||||
wx.Panel.__init__( self, parent )
|
||||
|
||||
self._management_controller = management_controller
|
||||
|
||||
if self._management_controller is not None and self._management_controller.HasVariable( 'media_collect' ):
|
||||
|
||||
self._collect_by = self._management_controller.GetVariable( 'media_collect' )
|
||||
self._media_collect = self._management_controller.GetVariable( 'media_collect' )
|
||||
|
||||
else:
|
||||
|
||||
self._collect_by = HC.options[ 'default_collect' ]
|
||||
self._media_collect = HG.client_controller.new_options.GetDefaultCollect()
|
||||
|
||||
|
||||
if self._collect_by is None:
|
||||
|
||||
self._collect_by = []
|
||||
|
||||
self._collect_comboctrl = wx.ComboCtrl( self, style = wx.CB_READONLY )
|
||||
|
||||
popup = self._Popup( self._collect_by )
|
||||
self._collect_combopopup = self._Popup( self._media_collect, self )
|
||||
|
||||
#self.UseAltPopupWindow( True )
|
||||
#self._collect_comboctrl.UseAltPopupWindow( True )
|
||||
|
||||
self.SetPopupControl( popup )
|
||||
self._collect_comboctrl.SetPopupControl( self._collect_combopopup )
|
||||
|
||||
self.SetValue( 'no collections' ) # initialising to this because if there are no collections, no broadcast call goes through
|
||||
self._collect_unmatched = BetterChoice( self )
|
||||
|
||||
width = ClientGUIFunctions.ConvertTextToPixelWidth( self._collect_unmatched, 19 )
|
||||
|
||||
self._collect_unmatched.SetMinSize( ( width, -1 ) )
|
||||
|
||||
self._collect_unmatched.Append( 'collect unmatched', True )
|
||||
self._collect_unmatched.Append( 'leave unmatched', False )
|
||||
|
||||
#
|
||||
|
||||
self._collect_unmatched.SetValue( self._media_collect.collect_unmatched )
|
||||
|
||||
#
|
||||
|
||||
hbox = wx.BoxSizer( wx.HORIZONTAL )
|
||||
|
||||
hbox.Add( self._collect_comboctrl, CC.FLAGS_EXPAND_BOTH_WAYS )
|
||||
hbox.Add( self._collect_unmatched, CC.FLAGS_VCENTER )
|
||||
|
||||
self.SetSizer( hbox )
|
||||
|
||||
#
|
||||
|
||||
self._collect_comboctrl.SetValue( 'no collections' ) # initialising to this because if there are no collections, no broadcast call goes through
|
||||
|
||||
self._collect_unmatched.Bind( wx.EVT_CHOICE, self.EventChanged )
|
||||
|
||||
|
||||
def EventChanged( self, event ):
|
||||
|
||||
self.CollectValuesChanged()
|
||||
|
||||
|
||||
def GetValue( self ):
|
||||
|
||||
return self._collect_by
|
||||
return self._media_collect
|
||||
|
||||
|
||||
def SetCollectTypes( self, collect_by, description ):
|
||||
def CollectValuesChanged( self ):
|
||||
|
||||
collect_by = list( collect_by )
|
||||
( namespaces, rating_service_keys, description ) = self._collect_combopopup._control.GetValues()
|
||||
|
||||
self._collect_by = collect_by
|
||||
collect_unmatched = self._collect_unmatched.GetValue()
|
||||
|
||||
self.SetValue( description )
|
||||
self._media_collect = ClientMedia.MediaCollect( namespaces = namespaces, rating_service_keys = rating_service_keys, collect_unmatched = collect_unmatched )
|
||||
|
||||
self._collect_comboctrl.SetValue( description )
|
||||
|
||||
if self._management_controller is not None:
|
||||
|
||||
self._management_controller.SetVariable( 'media_collect', collect_by )
|
||||
self._management_controller.SetVariable( 'media_collect', self._media_collect )
|
||||
|
||||
page_key = self._management_controller.GetKey( 'page' )
|
||||
|
||||
HG.client_controller.pub( 'collect_media', page_key, self._collect_by )
|
||||
HG.client_controller.pub( 'collect_media', page_key, self._media_collect )
|
||||
|
||||
|
||||
|
||||
class _Popup( wx.ComboPopup ):
|
||||
|
||||
def __init__( self, collect_by ):
|
||||
def __init__( self, media_collect, parent_panel ):
|
||||
|
||||
wx.ComboPopup.__init__( self )
|
||||
|
||||
self._initial_collect_by = collect_by
|
||||
self._initial_media_collect = media_collect
|
||||
|
||||
self._parent_panel = parent_panel
|
||||
|
||||
self._control = None
|
||||
|
||||
|
||||
def Create( self, parent ):
|
||||
|
||||
self._control = self._Control( parent, self.GetComboCtrl(), self._initial_collect_by )
|
||||
self._control = self._Control( parent, self._parent_panel, self._initial_media_collect )
|
||||
|
||||
return True
|
||||
|
||||
|
@ -802,7 +834,7 @@ class CheckboxCollect( wx.ComboCtrl ):
|
|||
|
||||
class _Control( wx.CheckListBox ):
|
||||
|
||||
def __init__( self, parent, special_parent, collect_by ):
|
||||
def __init__( self, parent, parent_panel, media_collect ):
|
||||
|
||||
text_and_data_tuples = set()
|
||||
|
||||
|
@ -820,7 +852,7 @@ class CheckboxCollect( wx.ComboCtrl ):
|
|||
|
||||
for ratings_service in ratings_services:
|
||||
|
||||
text_and_data_tuples.append( ( ratings_service.GetName(), ( 'rating', ratings_service.GetServiceKey().hex() ) ) )
|
||||
text_and_data_tuples.append( ( ratings_service.GetName(), ( 'rating', ratings_service.GetServiceKey() ) ) )
|
||||
|
||||
|
||||
texts = [ text for ( text, data ) in text_and_data_tuples ] # we do this so it sizes its height properly on init
|
||||
|
@ -834,43 +866,51 @@ class CheckboxCollect( wx.ComboCtrl ):
|
|||
self.Append( text, data )
|
||||
|
||||
|
||||
self._special_parent = special_parent
|
||||
self._parent_panel = parent_panel
|
||||
|
||||
self.Bind( wx.EVT_CHECKLISTBOX, self.EventChanged )
|
||||
|
||||
self.Bind( wx.EVT_LEFT_DOWN, self.EventLeftDown )
|
||||
|
||||
wx.CallAfter( self.SetValue, collect_by )
|
||||
wx.CallAfter( self.SetValue, media_collect )
|
||||
|
||||
|
||||
def _BroadcastCollect( self ):
|
||||
|
||||
( collect_by, description ) = self._GetValues()
|
||||
|
||||
self._special_parent.SetCollectTypes( collect_by, description )
|
||||
self._parent_panel.CollectValuesChanged()
|
||||
|
||||
|
||||
def _GetValues( self ):
|
||||
def GetValues( self ):
|
||||
|
||||
collect_by = []
|
||||
namespaces = []
|
||||
rating_service_keys = []
|
||||
|
||||
for index in self.GetCheckedItems():
|
||||
|
||||
collect_by.append( self.GetClientData( index ) )
|
||||
( collect_type, collect_data ) = self.GetClientData( index )
|
||||
|
||||
if collect_type == 'namespace':
|
||||
|
||||
namespaces.append( collect_data )
|
||||
|
||||
elif collect_type == 'rating':
|
||||
|
||||
rating_service_keys.append( collect_data )
|
||||
|
||||
|
||||
|
||||
collect_by_strings = self.GetCheckedStrings()
|
||||
collect_strings = self.GetCheckedStrings()
|
||||
|
||||
if len( collect_by ) > 0:
|
||||
if len( collect_strings ) > 0:
|
||||
|
||||
description = 'collect by ' + '-'.join( collect_by_strings )
|
||||
description = 'collect by ' + '-'.join( collect_strings )
|
||||
|
||||
else:
|
||||
|
||||
description = 'no collections'
|
||||
|
||||
|
||||
return ( collect_by, description )
|
||||
return ( namespaces, rating_service_keys, description )
|
||||
|
||||
|
||||
# as inspired by http://trac.wxwidgets.org/attachment/ticket/14413/test_clb_workaround.py
|
||||
|
@ -897,29 +937,25 @@ class CheckboxCollect( wx.ComboCtrl ):
|
|||
|
||||
def GetDescription( self ):
|
||||
|
||||
( collect_by, description ) = self._GetValues()
|
||||
( namespaces, rating_service_keys, description ) = self.GetValues()
|
||||
|
||||
return description
|
||||
|
||||
|
||||
def SetValue( self, collect_by ):
|
||||
def SetValue( self, media_collect ):
|
||||
|
||||
try:
|
||||
|
||||
# an old possible value, now collapsed to []
|
||||
if collect_by is None:
|
||||
|
||||
collect_by = []
|
||||
|
||||
|
||||
# tuple for the set hashing
|
||||
desired_collect_by_rows = { tuple( item ) for item in collect_by }
|
||||
|
||||
indices_to_check = []
|
||||
|
||||
for index in range( self.GetCount() ):
|
||||
|
||||
if self.GetClientData( index ) in desired_collect_by_rows:
|
||||
( collect_type, collect_data ) = self.GetClientData( index )
|
||||
|
||||
p1 = collect_type == 'namespace' and collect_data in media_collect.namespaces
|
||||
p2 = collect_type == 'rating' and collect_data in media_collect.rating_service_keys
|
||||
|
||||
if p1 or p2:
|
||||
|
||||
indices_to_check.append( index )
|
||||
|
||||
|
@ -1175,7 +1211,7 @@ class ChoiceSort( wx.Panel ):
|
|||
|
||||
|
||||
|
||||
def ACollectHappened( self, page_key, collect_by ):
|
||||
def ACollectHappened( self, page_key, media_collect ):
|
||||
|
||||
if self._management_controller is not None:
|
||||
|
||||
|
|
|
@ -701,6 +701,8 @@ class FilenameTaggingOptionsPanel( wx.Panel ):
|
|||
|
||||
def EnterTags( self, tags ):
|
||||
|
||||
HG.client_controller.Write( 'push_recent_tags', self._service_key, tags )
|
||||
|
||||
tag_parents_manager = HG.client_controller.tag_parents_manager
|
||||
|
||||
parents = set()
|
||||
|
@ -723,6 +725,8 @@ class FilenameTaggingOptionsPanel( wx.Panel ):
|
|||
|
||||
def EnterTagsSingle( self, tags ):
|
||||
|
||||
HG.client_controller.Write( 'push_recent_tags', self._service_key, tags )
|
||||
|
||||
tag_parents_manager = HG.client_controller.tag_parents_manager
|
||||
|
||||
parents = set()
|
||||
|
|
|
@ -312,8 +312,10 @@ class EditLoginsPanel( ClientGUIScrolledPanels.EditPanel ):
|
|||
vbox = wx.BoxSizer( wx.VERTICAL )
|
||||
|
||||
warning = 'WARNING: Your credentials are stored in plaintext! For this and other reasons, I recommend you use throwaway accounts with hydrus!'
|
||||
warning += os.linesep * 2
|
||||
warning += 'If a login script does not work for you, or the site you want has a complicated captcha, check out the Hydrus Companion web browser add-on--it can copy login cookies to hydrus!'
|
||||
|
||||
warning_st = ClientGUICommon.BetterStaticText( self, warning )
|
||||
warning_st = ClientGUICommon.BetterStaticText( self, warning, style = wx.ALIGN_CENTER )
|
||||
|
||||
warning_st.SetForegroundColour( ( 128, 0, 0 ) )
|
||||
|
||||
|
|
|
@ -75,15 +75,7 @@ def CreateManagementController( page_name, management_type, file_service_key = N
|
|||
management_controller.SetType( management_type )
|
||||
management_controller.SetKey( 'file_service', file_service_key )
|
||||
management_controller.SetVariable( 'media_sort', new_options.GetDefaultSort() )
|
||||
|
||||
collect_by = HC.options[ 'default_collect' ]
|
||||
|
||||
if collect_by is None:
|
||||
|
||||
collect_by = []
|
||||
|
||||
|
||||
management_controller.SetVariable( 'media_collect', collect_by )
|
||||
management_controller.SetVariable( 'media_collect', new_options.GetDefaultCollect() )
|
||||
|
||||
return management_controller
|
||||
|
||||
|
@ -528,7 +520,7 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_MANAGEMENT_CONTROLLER
|
||||
SERIALISABLE_NAME = 'Client Page Management Controller'
|
||||
SERIALISABLE_VERSION = 9
|
||||
SERIALISABLE_VERSION = 10
|
||||
|
||||
def __init__( self, page_name = 'page' ):
|
||||
|
||||
|
@ -757,6 +749,80 @@ class ManagementController( HydrusSerialisable.SerialisableBase ):
|
|||
return ( 9, new_serialisable_info )
|
||||
|
||||
|
||||
if version == 9:
|
||||
|
||||
( page_name, management_type, serialisable_keys, serialisable_simples, serialisable_serialisables ) = old_serialisable_info
|
||||
|
||||
if 'media_collect' in serialisable_simples:
|
||||
|
||||
try:
|
||||
|
||||
old_collect = serialisable_simples[ 'media_collect' ]
|
||||
|
||||
if old_collect is None:
|
||||
|
||||
old_collect = []
|
||||
|
||||
|
||||
namespaces = [ n for ( t, n ) in old_collect if t == 'namespace' ]
|
||||
rating_service_keys = [ bytes.fromhex( r ) for ( t, r ) in old_collect if t == 'rating' ]
|
||||
|
||||
except:
|
||||
|
||||
namespaces = []
|
||||
rating_service_keys = []
|
||||
|
||||
|
||||
media_collect = ClientMedia.MediaCollect( namespaces = namespaces, rating_service_keys = rating_service_keys )
|
||||
|
||||
serialisable_serialisables[ 'media_collect' ] = media_collect.GetSerialisableTuple()
|
||||
|
||||
del serialisable_simples[ 'media_collect' ]
|
||||
|
||||
|
||||
new_serialisable_info = ( page_name, management_type, serialisable_keys, serialisable_simples, serialisable_serialisables )
|
||||
|
||||
return ( 10, new_serialisable_info )
|
||||
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
d = {}
|
||||
|
||||
if self._management_type == MANAGEMENT_TYPE_IMPORT_HDD:
|
||||
|
||||
hdd_import = self._serialisables[ 'hdd_import' ]
|
||||
|
||||
d[ 'hdd_import' ] = hdd_import.GetAPIInfoDict( simple )
|
||||
|
||||
elif self._management_type == MANAGEMENT_TYPE_IMPORT_SIMPLE_DOWNLOADER:
|
||||
|
||||
simple_downloader_import = self._serialisables[ 'simple_downloader_import' ]
|
||||
|
||||
d[ 'simple_downloader_import' ] = simple_downloader_import.GetAPIInfoDict( simple )
|
||||
|
||||
elif self._management_type == MANAGEMENT_TYPE_IMPORT_MULTIPLE_GALLERY:
|
||||
|
||||
multiple_gallery_import = self._serialisables[ 'multiple_gallery_import' ]
|
||||
|
||||
d[ 'multiple_gallery_import' ] = multiple_gallery_import.GetAPIInfoDict( simple )
|
||||
|
||||
elif self._management_type == MANAGEMENT_TYPE_IMPORT_MULTIPLE_WATCHER:
|
||||
|
||||
multiple_watcher_import = self._serialisables[ 'multiple_watcher_import' ]
|
||||
|
||||
d[ 'multiple_watcher_import' ] = multiple_watcher_import.GetAPIInfoDict( simple )
|
||||
|
||||
elif self._management_type == MANAGEMENT_TYPE_IMPORT_URLS:
|
||||
|
||||
urls_import = self._serialisables[ 'urls_import' ]
|
||||
|
||||
d[ 'urls_import' ] = urls_import.GetAPIInfoDict( simple )
|
||||
|
||||
|
||||
return d
|
||||
|
||||
|
||||
def GetKey( self, name ):
|
||||
|
||||
|
@ -885,26 +951,26 @@ class ManagementPanel( wx.lib.scrolledpanel.ScrolledPanel ):
|
|||
self._page = page
|
||||
self._page_key = self._management_controller.GetKey( 'page' )
|
||||
|
||||
self._sort_by = ClientGUICommon.ChoiceSort( self, management_controller = self._management_controller )
|
||||
self._media_sort = ClientGUICommon.ChoiceSort( self, management_controller = self._management_controller )
|
||||
|
||||
self._collect_by = ClientGUICommon.CheckboxCollect( self, management_controller = self._management_controller )
|
||||
self._media_collect = ClientGUICommon.CheckboxCollect( self, management_controller = self._management_controller )
|
||||
|
||||
|
||||
def GetCollectBy( self ):
|
||||
def GetMediaCollect( self ):
|
||||
|
||||
if self._collect_by.IsShown():
|
||||
if self._media_collect.IsShown():
|
||||
|
||||
return self._collect_by.GetValue()
|
||||
return self._media_collect.GetValue()
|
||||
|
||||
else:
|
||||
|
||||
return []
|
||||
return ClientMedia.MediaCollect()
|
||||
|
||||
|
||||
|
||||
def GetSortBy( self ):
|
||||
def GetMediaSort( self ):
|
||||
|
||||
return self._sort_by.GetSort()
|
||||
return self._media_sort.GetSort()
|
||||
|
||||
|
||||
def _MakeCurrentSelectionTagsBox( self, sizer ):
|
||||
|
@ -1110,8 +1176,8 @@ class ManagementPanelDuplicateFilter( ManagementPanel ):
|
|||
|
||||
#
|
||||
|
||||
self._sort_by.Hide()
|
||||
self._collect_by.Hide()
|
||||
self._media_sort.Hide()
|
||||
self._media_collect.Hide()
|
||||
|
||||
distance_hbox = wx.BoxSizer( wx.HORIZONTAL )
|
||||
|
||||
|
@ -1604,7 +1670,7 @@ class ManagementPanelImporter( ManagementPanel ):
|
|||
|
||||
if page_key == self._page_key:
|
||||
|
||||
self._sort_by.BroadcastSort()
|
||||
self._media_sort.BroadcastSort()
|
||||
|
||||
|
||||
|
||||
|
@ -1637,9 +1703,9 @@ class ManagementPanelImporterHDD( ManagementPanelImporter ):
|
|||
|
||||
vbox = ClientGUICommon.BetterBoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( self._sort_by, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._media_sort, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
self._collect_by.Hide()
|
||||
self._media_collect.Hide()
|
||||
|
||||
self._import_queue_panel.Add( self._current_action, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
self._import_queue_panel.Add( self._file_seed_cache_control, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
@ -1808,9 +1874,9 @@ class ManagementPanelImporterMultipleGallery( ManagementPanelImporter ):
|
|||
|
||||
vbox = ClientGUICommon.BetterBoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( self._sort_by, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._media_sort, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
self._collect_by.Hide()
|
||||
self._media_collect.Hide()
|
||||
|
||||
vbox.Add( self._gallery_downloader_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
|
||||
vbox.Add( self._highlighted_gallery_import_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
@ -2471,9 +2537,9 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
|
|||
|
||||
vbox = ClientGUICommon.BetterBoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( self._sort_by, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._media_sort, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
self._collect_by.Hide()
|
||||
self._media_collect.Hide()
|
||||
|
||||
vbox.Add( self._watchers_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
|
||||
vbox.Add( self._highlighted_watcher_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
@ -3206,9 +3272,9 @@ class ManagementPanelImporterSimpleDownloader( ManagementPanelImporter ):
|
|||
|
||||
vbox = ClientGUICommon.BetterBoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( self._sort_by, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._media_sort, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
self._collect_by.Hide()
|
||||
self._media_collect.Hide()
|
||||
|
||||
vbox.Add( self._simple_downloader_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
||||
|
@ -3574,9 +3640,9 @@ class ManagementPanelImporterURLs( ManagementPanelImporter ):
|
|||
|
||||
vbox = ClientGUICommon.BetterBoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( self._sort_by, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._media_sort, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
|
||||
self._collect_by.Hide()
|
||||
self._media_collect.Hide()
|
||||
|
||||
vbox.Add( self._url_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
||||
|
@ -3793,8 +3859,8 @@ class ManagementPanelPetitions( ManagementPanel ):
|
|||
|
||||
vbox = ClientGUICommon.BetterBoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( self._sort_by, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._collect_by, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
vbox.Add( self._media_sort, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._media_collect, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
||||
vbox.Add( self._petitions_info_panel, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
vbox.Add( self._petition_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
|
||||
|
@ -4137,9 +4203,9 @@ class ManagementPanelPetitions( ManagementPanel ):
|
|||
|
||||
panel = ClientGUIMedia.MediaPanelThumbnails( self._page, self._page_key, file_service_key, media_results )
|
||||
|
||||
panel.Collect( self._page_key, self._collect_by.GetValue() )
|
||||
panel.Collect( self._page_key, self._media_collect.GetValue() )
|
||||
|
||||
panel.Sort( self._page_key, self._sort_by.GetSort() )
|
||||
panel.Sort( self._page_key, self._media_sort.GetSort() )
|
||||
|
||||
self._page.SwapMediaPanel( panel )
|
||||
|
||||
|
@ -4381,8 +4447,8 @@ class ManagementPanelQuery( ManagementPanel ):
|
|||
|
||||
vbox = ClientGUICommon.BetterBoxSizer( wx.VERTICAL )
|
||||
|
||||
vbox.Add( self._sort_by, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._collect_by, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
vbox.Add( self._media_sort, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
|
||||
vbox.Add( self._media_collect, CC.FLAGS_EXPAND_PERPENDICULAR )
|
||||
|
||||
if self._search_enabled:
|
||||
|
||||
|
@ -4393,7 +4459,6 @@ class ManagementPanelQuery( ManagementPanel ):
|
|||
|
||||
self.SetSizer( vbox )
|
||||
|
||||
self._controller.sub( self, 'AddMediaResultsFromQuery', 'add_media_results_from_query' )
|
||||
self._controller.sub( self, 'SearchImmediately', 'notify_search_immediately' )
|
||||
self._controller.sub( self, 'RefreshQuery', 'refresh_query' )
|
||||
self._controller.sub( self, 'ChangeFileServicePubsub', 'change_file_service' )
|
||||
|
@ -4468,7 +4533,7 @@ class ManagementPanelQuery( ManagementPanel ):
|
|||
|
||||
else:
|
||||
|
||||
self._sort_by.BroadcastSort()
|
||||
self._media_sort.BroadcastSort()
|
||||
|
||||
|
||||
|
||||
|
@ -4512,14 +4577,6 @@ class ManagementPanelQuery( ManagementPanel ):
|
|||
|
||||
|
||||
|
||||
def AddMediaResultsFromQuery( self, query_job_key, media_results ):
|
||||
|
||||
if query_job_key == self._query_job_key:
|
||||
|
||||
self._controller.pub( 'add_media_results', self._page_key, media_results, append = False )
|
||||
|
||||
|
||||
|
||||
def ChangeFileServicePubsub( self, page_key, service_key ):
|
||||
|
||||
if page_key == self._page_key:
|
||||
|
@ -4598,9 +4655,9 @@ class ManagementPanelQuery( ManagementPanel ):
|
|||
|
||||
panel = ClientGUIMedia.MediaPanelThumbnails( self._page, self._page_key, file_service_key, media_results )
|
||||
|
||||
panel.Collect( self._page_key, self._collect_by.GetValue() )
|
||||
panel.Collect( self._page_key, self._media_collect.GetValue() )
|
||||
|
||||
panel.Sort( self._page_key, self._sort_by.GetSort() )
|
||||
panel.Sort( self._page_key, self._media_sort.GetSort() )
|
||||
|
||||
self._page.SwapMediaPanel( panel )
|
||||
|
||||
|
|
|
@ -2239,13 +2239,13 @@ class MediaPanel( ClientMedia.ListeningMediaList, wx.ScrolledCanvas ):
|
|||
|
||||
|
||||
|
||||
def AddMediaResults( self, page_key, media_results, append = True ):
|
||||
def AddMediaResults( self, page_key, media_results ):
|
||||
|
||||
if page_key == self._page_key:
|
||||
|
||||
HG.client_controller.pub( 'refresh_page_name', self._page_key )
|
||||
|
||||
return ClientMedia.ListeningMediaList.AddMediaResults( self, media_results, append = append )
|
||||
return ClientMedia.ListeningMediaList.AddMediaResults( self, media_results )
|
||||
|
||||
|
||||
|
||||
|
@ -2254,13 +2254,13 @@ class MediaPanel( ClientMedia.ListeningMediaList, wx.ScrolledCanvas ):
|
|||
self._page_key = 'dead media panel page key'
|
||||
|
||||
|
||||
def Collect( self, page_key, collect_by = None ):
|
||||
def Collect( self, page_key, media_collect = None ):
|
||||
|
||||
if page_key == self._page_key:
|
||||
|
||||
self._Select( 'none' )
|
||||
|
||||
ClientMedia.ListeningMediaList.Collect( self, collect_by )
|
||||
ClientMedia.ListeningMediaList.Collect( self, media_collect )
|
||||
|
||||
self._RecalculateVirtualSize()
|
||||
|
||||
|
@ -3333,11 +3333,11 @@ class MediaPanelThumbnails( MediaPanel ):
|
|||
self.Refresh()
|
||||
|
||||
|
||||
def AddMediaResults( self, page_key, media_results, append = True ):
|
||||
def AddMediaResults( self, page_key, media_results ):
|
||||
|
||||
if page_key == self._page_key:
|
||||
|
||||
thumbnails = MediaPanel.AddMediaResults( self, page_key, media_results, append = append )
|
||||
thumbnails = MediaPanel.AddMediaResults( self, page_key, media_results )
|
||||
|
||||
self._RecalculateVirtualSize()
|
||||
|
||||
|
|
|
@ -415,13 +415,13 @@ class Page( wx.SplitterWindow ):
|
|||
|
||||
self._media_panel.ClearPageKey()
|
||||
|
||||
collect_by = self._management_panel.GetCollectBy()
|
||||
media_collect = self._management_panel.GetMediaCollect()
|
||||
|
||||
if collect_by != []:
|
||||
if media_collect.DoesACollect():
|
||||
|
||||
new_panel.Collect( self._page_key, collect_by )
|
||||
new_panel.Collect( self._page_key, media_collect )
|
||||
|
||||
sort_by = self._management_panel.GetSortBy()
|
||||
sort_by = self._management_panel.GetMediaSort()
|
||||
|
||||
new_panel.Sort( self._page_key, sort_by )
|
||||
|
||||
|
@ -470,6 +470,25 @@ class Page( wx.SplitterWindow ):
|
|||
self._controller.pub( 'set_focus', self._page_key, None )
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'name' ] = self._management_controller.GetPageName()
|
||||
d[ 'page_key' ] = self._page_key.hex()
|
||||
d[ 'page_type' ] = self._management_controller.GetType()
|
||||
|
||||
management_info = self._management_controller.GetAPIInfoDict( simple )
|
||||
|
||||
d[ 'management' ] = management_info
|
||||
|
||||
media_info = self._media_panel.GetAPIInfoDict( simple )
|
||||
|
||||
d[ 'media' ] = media_info
|
||||
|
||||
return d
|
||||
|
||||
|
||||
def GetHashes( self ):
|
||||
|
||||
if self._initialised:
|
||||
|
@ -539,7 +558,7 @@ class Page( wx.SplitterWindow ):
|
|||
return { self._page_key }
|
||||
|
||||
|
||||
def GetPageInfoDict( self, is_selected = False ):
|
||||
def GetSessionAPIInfoDict( self, is_selected = False ):
|
||||
|
||||
root = {}
|
||||
|
||||
|
@ -1863,6 +1882,11 @@ class PagesNotebook( wx.Notebook ):
|
|||
self._controller.pub( 'notify_page_change' )
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
return {}
|
||||
|
||||
|
||||
def GetCurrentMediaPage( self ):
|
||||
|
||||
page = self.GetCurrentPage()
|
||||
|
@ -2001,6 +2025,11 @@ class PagesNotebook( wx.Notebook ):
|
|||
|
||||
def GetPageFromPageKey( self, page_key ):
|
||||
|
||||
if self._page_key == page_key:
|
||||
|
||||
return self
|
||||
|
||||
|
||||
for page in self._GetPages():
|
||||
|
||||
if page.GetPageKey() == page_key:
|
||||
|
@ -2037,7 +2066,7 @@ class PagesNotebook( wx.Notebook ):
|
|||
return page_keys
|
||||
|
||||
|
||||
def GetPageInfoDict( self, is_selected = True ):
|
||||
def GetSessionAPIInfoDict( self, is_selected = True ):
|
||||
|
||||
current_page = self.GetCurrentPage()
|
||||
|
||||
|
@ -2047,7 +2076,7 @@ class PagesNotebook( wx.Notebook ):
|
|||
|
||||
page_is_focused = is_selected and page == current_page
|
||||
|
||||
page_info_dict = page.GetPageInfoDict( is_selected = is_selected )
|
||||
page_info_dict = page.GetSessionAPIInfoDict( is_selected = is_selected )
|
||||
|
||||
my_pages_list.append( page_info_dict )
|
||||
|
||||
|
|
|
@ -3369,13 +3369,13 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
wx.Panel.__init__( self, parent )
|
||||
|
||||
self._default_sort = ClientGUICommon.ChoiceSort( self )
|
||||
self._default_media_sort = ClientGUICommon.ChoiceSort( self )
|
||||
|
||||
self._fallback_sort = ClientGUICommon.ChoiceSort( self )
|
||||
self._fallback_media_sort = ClientGUICommon.ChoiceSort( self )
|
||||
|
||||
self._save_page_sort_on_change = wx.CheckBox( self )
|
||||
|
||||
self._default_collect = ClientGUICommon.CheckboxCollect( self )
|
||||
self._default_media_collect = ClientGUICommon.CheckboxCollect( self )
|
||||
|
||||
self._sort_by = wx.ListBox( self )
|
||||
self._sort_by.Bind( wx.EVT_LEFT_DCLICK, self.EventRemoveSortBy )
|
||||
|
@ -3389,24 +3389,24 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
try:
|
||||
|
||||
self._default_sort.SetSort( self._new_options.GetDefaultSort() )
|
||||
self._default_media_sort.SetSort( self._new_options.GetDefaultSort() )
|
||||
|
||||
except:
|
||||
|
||||
media_sort = ClientMedia.MediaSort( ( 'system', CC.SORT_FILES_BY_FILESIZE ), CC.SORT_ASC )
|
||||
|
||||
self._default_sort.SetSort( media_sort )
|
||||
self._default_media_sort.SetSort( media_sort )
|
||||
|
||||
|
||||
try:
|
||||
|
||||
self._fallback_sort.SetSort( self._new_options.GetFallbackSort() )
|
||||
self._fallback_media_sort.SetSort( self._new_options.GetFallbackSort() )
|
||||
|
||||
except:
|
||||
|
||||
media_sort = ClientMedia.MediaSort( ( 'system', CC.SORT_FILES_BY_IMPORT_TIME ), CC.SORT_ASC )
|
||||
|
||||
self._fallback_sort.SetSort( media_sort )
|
||||
self._fallback_media_sort.SetSort( media_sort )
|
||||
|
||||
|
||||
for ( sort_by_type, sort_by ) in HC.options[ 'sort_by' ]:
|
||||
|
@ -3420,10 +3420,10 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
rows = []
|
||||
|
||||
rows.append( ( 'Default sort: ', self._default_sort ) )
|
||||
rows.append( ( 'Secondary sort (when primary gives two equal values): ', self._fallback_sort ) )
|
||||
rows.append( ( 'Default sort: ', self._default_media_sort ) )
|
||||
rows.append( ( 'Secondary sort (when primary gives two equal values): ', self._fallback_media_sort ) )
|
||||
rows.append( ( 'Update default sort every time a new sort is manually chosen: ', self._save_page_sort_on_change ) )
|
||||
rows.append( ( 'Default collect: ', self._default_collect ) )
|
||||
rows.append( ( 'Default collect: ', self._default_media_collect ) )
|
||||
|
||||
gridbox = ClientGUICommon.WrapInGrid( self, rows )
|
||||
|
||||
|
@ -3481,14 +3481,17 @@ class ManageOptionsPanel( ClientGUIScrolledPanels.ManagePanel ):
|
|||
|
||||
def UpdateOptions( self ):
|
||||
|
||||
self._new_options.SetDefaultSort( self._default_sort.GetSort() )
|
||||
self._new_options.SetFallbackSort( self._fallback_sort.GetSort() )
|
||||
self._new_options.SetDefaultSort( self._default_media_sort.GetSort() )
|
||||
self._new_options.SetFallbackSort( self._fallback_media_sort.GetSort() )
|
||||
self._new_options.SetBoolean( 'save_page_sort_on_change', self._save_page_sort_on_change.GetValue() )
|
||||
HC.options[ 'default_collect' ] = self._default_collect.GetValue()
|
||||
self._new_options.SetDefaultCollect( self._default_media_collect.GetValue() )
|
||||
|
||||
sort_by_choices = []
|
||||
|
||||
for sort_by in [ self._sort_by.GetClientData( i ) for i in range( self._sort_by.GetCount() ) ]: sort_by_choices.append( ( 'namespaces', sort_by ) )
|
||||
for sort_by in [ self._sort_by.GetClientData( i ) for i in range( self._sort_by.GetCount() ) ]:
|
||||
|
||||
sort_by_choices.append( ( 'namespaces', sort_by ) )
|
||||
|
||||
|
||||
HC.options[ 'sort_by' ] = sort_by_choices
|
||||
|
||||
|
|
|
@ -674,6 +674,20 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):
|
|||
pass
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'import_data' ] = self.file_seed_data
|
||||
d[ 'created' ] = self.created
|
||||
d[ 'modified' ] = self.modified
|
||||
d[ 'source_time' ] = self.source_time
|
||||
d[ 'status' ] = self.status
|
||||
d[ 'note' ] = self.note
|
||||
|
||||
return d
|
||||
|
||||
|
||||
def GetExampleNetworkJob( self, network_job_factory ):
|
||||
|
||||
if self.IsAPostURL():
|
||||
|
@ -1922,6 +1936,33 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
|
|||
self.NotifyFileSeedsUpdated( ( file_seed, ) )
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
if self._status_dirty:
|
||||
|
||||
self._GenerateStatus()
|
||||
|
||||
|
||||
( status, simple_status, ( total_processed, total ) ) = self._status_cache
|
||||
|
||||
d[ 'status' ] = status
|
||||
d[ 'simple_status' ] = simple_status
|
||||
d[ 'total_processed' ] = total_processed
|
||||
d[ 'total_to_process' ] = total
|
||||
|
||||
if not simple:
|
||||
|
||||
d[ 'import_items' ] = [ file_seed.GetAPIInfoDict( simple ) for file_seed in self._file_seeds ]
|
||||
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
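For orientation, a file import cache's GetAPIInfoDict comes out roughly like this (illustrative values only; the keys are the ones assigned above, and 'import_items' is only present when simple is False):

example_file_import_cache_info = {
    'status' : '3/10 done',        # illustrative status strings
    'simple_status' : '3/10',
    'total_processed' : 3,
    'total_to_process' : 10,
    'import_items' : [             # detailed mode only, one dict per file seed
        {
            'import_data' : 'https://example.com/post/123',  # url or path
            'created' : 1560000000,
            'modified' : 1560000000,
            'source_time' : 1559990000,
            'status' : 0,
            'note' : ''
        }
    ]
}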
def GetEarliestSourceTime( self ):
|
||||
|
||||
with self._lock:
|
||||
|
|
|
@ -432,6 +432,24 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'query_text' ] = self._query
|
||||
d[ 'source' ] = self._source_name
|
||||
d[ 'gallery_key' ] = self._gallery_import_key.hex()
|
||||
d[ 'files_paused' ] = self._files_paused
|
||||
d[ 'gallery_paused' ] = self._gallery_paused
|
||||
d[ 'imports' ] = self._file_seed_cache.GetAPIInfoDict( simple )
|
||||
d[ 'gallery_log' ] = self._gallery_seed_log.GetAPIInfoDict( simple )
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
def GetCreationTime( self ):
|
||||
|
||||
with self._lock:
|
||||
|
@ -1121,6 +1139,27 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'gallery_imports' ] = [ gallery_import.GetAPIInfoDict( simple ) for gallery_import in self._gallery_imports ]
|
||||
|
||||
if self._highlighted_gallery_import_key is None:
|
||||
|
||||
d[ 'highlight' ] = None
|
||||
|
||||
else:
|
||||
|
||||
d[ 'highlight' ] = self._highlighted_gallery_import_key.hex()
|
||||
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
def GetFileLimit( self ):
|
||||
|
||||
with self._lock:
|
||||
|
|
|
@ -164,6 +164,19 @@ class GallerySeed( HydrusSerialisable.SerialisableBase ):
|
|||
return gallery_seed
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'url' ] = self.url
|
||||
d[ 'created' ] = self.created
|
||||
d[ 'modified' ] = self.modified
|
||||
d[ 'status' ] = self.status
|
||||
d[ 'note' ] = self.note
|
||||
|
||||
return d
|
||||
|
||||
|
||||
def GetExampleNetworkJob( self, network_job_factory ):
|
||||
|
||||
( url_to_check, parser ) = HG.client_controller.network_engine.domain_manager.GetURLToFetchAndParser( self.url )
|
||||
|
@ -741,6 +754,32 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):
|
|||
return None
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
if self._status_dirty:
|
||||
|
||||
self._GenerateStatus()
|
||||
|
||||
|
||||
( status, ( total_processed, total ) ) = self._status_cache
|
||||
|
||||
d[ 'status' ] = status
|
||||
d[ 'total_processed' ] = total_processed
|
||||
d[ 'total_to_process' ] = total
|
||||
|
||||
if not simple:
|
||||
|
||||
d[ 'log_items' ] = [ gallery_seed.GetAPIInfoDict( simple ) for gallery_seed in self._gallery_seeds ]
|
||||
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
def GetGallerySeedLogKey( self ):
|
||||
|
||||
return self._gallery_seed_log_key
|
||||
|
|
|
@ -202,6 +202,20 @@ class HDDImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'imports' ] = self._file_seed_cache.GetAPIInfoDict( simple )
|
||||
|
||||
d[ 'files_paused' ] = self._paused
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
def GetFileImportOptions( self ):
|
||||
|
||||
with self._lock:
|
||||
|
|
|
@ -406,6 +406,24 @@ class SimpleDownloaderImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'imports' ] = self._file_seed_cache.GetAPIInfoDict( simple )
|
||||
|
||||
d[ 'gallery_log' ] = self._gallery_seed_log.GetAPIInfoDict( simple )
|
||||
|
||||
d[ 'files_paused' ] = self._files_paused
|
||||
|
||||
d[ 'gallery_paused' ] = self._queue_paused
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
def GetFileSeedCache( self ):
|
||||
|
||||
with self._lock:
|
||||
|
@ -855,6 +873,22 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'imports' ] = self._file_seed_cache.GetAPIInfoDict( simple )
|
||||
|
||||
d[ 'gallery_log' ] = self._gallery_seed_log.GetAPIInfoDict( simple )
|
||||
|
||||
d[ 'files_paused' ] = self._paused
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
def GetFileSeedCache( self ):
|
||||
|
||||
with self._lock:
|
||||
|
|
|
@ -1339,7 +1339,8 @@ class Subscription( HydrusSerialisable.SerialisableBaseNamed ):
|
|||
delay = HG.client_controller.new_options.GetInteger( 'subscription_network_error_delay' )
|
||||
|
||||
HydrusData.Print( 'The subscription ' + self._name + ' encountered an exception when trying to sync:' )
|
||||
HydrusData.PrintException( e )
|
||||
|
||||
HydrusData.Print( e )
|
||||
|
||||
job_key.SetVariable( 'popup_text_1', 'Encountered a network error, will retry again later' )
|
||||
|
||||
|
|
|
@ -254,6 +254,29 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
highlighted_watcher = self.GetHighlightedWatcher()
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'watcher_imports' ] = [ watcher_import.GetAPIInfoDict( simple ) for watcher_import in self._watchers ]
|
||||
|
||||
if highlighted_watcher is None:
|
||||
|
||||
d[ 'highlight' ] = None
|
||||
|
||||
else:
|
||||
|
||||
d[ 'highlight' ] = highlighted_watcher.GetWatcherKey().hex()
|
||||
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
def GetHighlightedWatcher( self ):
|
||||
|
||||
with self._lock:
|
||||
|
@ -925,6 +948,28 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'url' ] = self._url
|
||||
d[ 'watcher_key' ] = self._watcher_key.hex()
|
||||
d[ 'created' ] = self._creation_time
|
||||
d[ 'last_check_time' ] = self._last_check_time
|
||||
d[ 'next_check_time' ] = self._next_check_time
|
||||
d[ 'files_paused' ] = self._files_paused
|
||||
d[ 'checking_paused' ] = self._checking_paused
|
||||
d[ 'checking_status' ] = self._checking_status
|
||||
d[ 'subject' ] = self._subject
|
||||
d[ 'imports' ] = self._file_seed_cache.GetAPIInfoDict( simple )
|
||||
d[ 'gallery_log' ] = self._gallery_seed_log.GetAPIInfoDict( simple )
|
||||
|
||||
return d
|
||||
|
||||
|
||||
|
||||
def CanRetryFailed( self ):
|
||||
|
||||
with self._lock:
|
||||
|
|
|
@ -89,6 +89,7 @@ class HydrusServiceClientAPI( HydrusClientService ):
|
|||
|
||||
manage_pages.putChild( b'focus_page', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManagePagesFocusPage( self._service, self._client_requests_domain ) )
|
||||
manage_pages.putChild( b'get_pages', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManagePagesGetPages( self._service, self._client_requests_domain ) )
|
||||
manage_pages.putChild( b'get_page_info', ClientLocalServerResources.HydrusResourceClientAPIRestrictedManagePagesGetPageInfo( self._service, self._client_requests_domain ) )
|
||||
|
||||
return root
|
||||
|
||||
|
|
|
@ -2,6 +2,7 @@ import collections
|
|||
from . import ClientAPI
|
||||
from . import ClientConstants as CC
|
||||
from . import ClientImportFileSeeds
|
||||
from . import ClientMedia
|
||||
from . import ClientNetworkingContexts
|
||||
from . import ClientSearch
|
||||
from . import ClientTags
|
||||
|
@ -31,7 +32,7 @@ LOCAL_BOORU_JSON_BYTE_LIST_PARAMS = set()
|
|||
CLIENT_API_INT_PARAMS = { 'file_id' }
|
||||
CLIENT_API_BYTE_PARAMS = { 'hash', 'destination_page_key', 'page_key', 'Hydrus-Client-API-Access-Key', 'Hydrus-Client-API-Session-Key' }
|
||||
CLIENT_API_STRING_PARAMS = { 'name', 'url', 'domain' }
|
||||
CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers' }
|
||||
CLIENT_API_JSON_PARAMS = { 'basic_permissions', 'system_inbox', 'system_archive', 'tags', 'file_ids', 'only_return_identifiers', 'simple' }
|
||||
CLIENT_API_JSON_BYTE_LIST_PARAMS = { 'hashes' }
|
||||
|
||||
def ParseLocalBooruGETArgs( requests_args ):
|
||||
|
@ -284,7 +285,7 @@ class HydrusResourceBooruGallery( HydrusResourceBooru ):
|
|||
# in future, make this a standard frame with a search key that'll load xml or yaml AJAX stuff
|
||||
# with file info included, so the page can sort and whatever
|
||||
|
||||
share_key = request.parsed_request_args[ 'share_key' ]
|
||||
share_key = request.parsed_request_args.GetValue( 'share_key', bytes )
|
||||
|
||||
local_booru_manager = HG.client_controller.local_booru_manager
|
||||
|
||||
|
@ -365,8 +366,8 @@ class HydrusResourceBooruPage( HydrusResourceBooru ):
|
|||
|
||||
def _threadDoGETJob( self, request ):
|
||||
|
||||
share_key = request.parsed_request_args[ 'share_key' ]
|
||||
hash = request.parsed_request_args[ 'hash' ]
|
||||
share_key = request.parsed_request_args.GetValue( 'share_key', bytes )
|
||||
hash = request.parsed_request_args.GetValue( 'hash', bytes )
|
||||
|
||||
local_booru_manager = HG.client_controller.local_booru_manager
|
||||
|
||||
|
@ -454,8 +455,8 @@ class HydrusResourceBooruThumbnail( HydrusResourceBooru ):
|
|||
|
||||
def _threadDoGETJob( self, request ):
|
||||
|
||||
share_key = request.parsed_request_args[ 'share_key' ]
|
||||
hash = request.parsed_request_args[ 'hash' ]
|
||||
share_key = request.parsed_request_args.GetValue( 'share_key', bytes )
|
||||
hash = request.parsed_request_args.GetValue( 'hash', bytes )
|
||||
|
||||
local_booru_manager = HG.client_controller.local_booru_manager
|
||||
|
||||
|
@ -548,8 +549,9 @@ class HydrusResourceClientAPIPermissionsRequest( HydrusResourceClientAPI ):
|
|||
raise HydrusExceptions.InsufficientCredentialsException( 'The permission registration dialog is not open. Please open it under "review services" in the hydrus client.' )
|
||||
|
||||
|
||||
name = request.parsed_request_args[ 'name' ]
|
||||
basic_permissions = request.parsed_request_args[ 'basic_permissions' ]
|
||||
name = request.parsed_request_args.GetValue( 'name', str )
|
||||
|
||||
basic_permissions = request.parsed_request_args.GetValue( 'basic_permissions', list )
|
||||
|
||||
basic_permissions = [ int( value ) for value in basic_permissions ]
|
||||
|
||||
|
@ -673,7 +675,7 @@ class HydrusResourceClientAPIRestricted( HydrusResourceClientAPI ):
|
|||
|
||||
if name_of_key in request.parsed_request_args:
|
||||
|
||||
key = request.parsed_request_args[ name_of_key ]
|
||||
key = request.parsed_request_args.GetValue( name_of_key, bytes )
|
||||
|
||||
|
||||
|
||||
|
@ -762,7 +764,7 @@ class HydrusResourceClientAPIRestrictedAddFile( HydrusResourceClientAPIRestricte
|
|||
|
||||
if not hasattr( request, 'temp_file_info' ):
|
||||
|
||||
path = request.parsed_request_args[ 'path' ]
|
||||
path = request.parsed_request_args.GetValue( 'path', str )
|
||||
|
||||
if not os.path.exists( path ):
|
||||
|
||||
|
@ -821,14 +823,14 @@ class HydrusResourceClientAPIRestrictedAddTagsAddTags( HydrusResourceClientAPIRe
|
|||
|
||||
if 'hash' in request.parsed_request_args:
|
||||
|
||||
hash = request.parsed_request_args[ 'hash' ]
|
||||
hash = request.parsed_request_args.GetValue( 'hash', bytes )
|
||||
|
||||
hashes.add( hash )
|
||||
|
||||
|
||||
if 'hashes' in request.parsed_request_args:
|
||||
|
||||
more_hashes = request.parsed_request_args[ 'hashes' ]
|
||||
more_hashes = request.parsed_request_args.GetValue( 'hashes', list )
|
||||
|
||||
hashes.update( more_hashes )
|
||||
|
||||
|
@ -840,18 +842,13 @@ class HydrusResourceClientAPIRestrictedAddTagsAddTags( HydrusResourceClientAPIRe
|
|||
|
||||
#
|
||||
|
||||
add_siblings_and_parents = True
|
||||
|
||||
if 'add_siblings_and_parents' in request.parsed_request_args:
|
||||
|
||||
add_siblings_and_parents = request.parsed_request_args[ 'add_siblings_and_parents' ]
|
||||
|
||||
add_siblings_and_parents = request.parsed_request_args.GetValue( 'add_siblings_and_parents', bool, default_value = True )
|
||||
|
||||
service_keys_to_content_updates = collections.defaultdict( list )
|
||||
|
||||
if 'service_names_to_tags' in request.parsed_request_args:
|
||||
|
||||
service_names_to_tags = request.parsed_request_args[ 'service_names_to_tags' ]
|
||||
service_names_to_tags = request.parsed_request_args.GetValue( 'service_names_to_tags', dict )
|
||||
|
||||
for ( service_name, tags ) in service_names_to_tags.items():
|
||||
|
||||
|
@ -899,7 +896,7 @@ class HydrusResourceClientAPIRestrictedAddTagsAddTags( HydrusResourceClientAPIRe
|
|||
|
||||
if 'service_names_to_actions_to_tags' in request.parsed_request_args:
|
||||
|
||||
service_names_to_actions_to_tags = request.parsed_request_args[ 'service_names_to_actions_to_tags' ]
|
||||
service_names_to_actions_to_tags = request.parsed_request_args.GetValue( 'service_names_to_actions_to_tags', dict )
|
||||
|
||||
for ( service_name, actions_to_tags ) in service_names_to_actions_to_tags.items():
|
||||
|
||||
|
@ -1005,7 +1002,7 @@ class HydrusResourceClientAPIRestrictedAddTagsCleanTags( HydrusResourceClientAPI
|
|||
|
||||
def _threadDoGETJob( self, request ):
|
||||
|
||||
tags = request.parsed_request_args[ 'tags' ]
|
||||
tags = request.parsed_request_args.GetValue( 'tags', list )
|
||||
|
||||
tags = list( HydrusTags.CleanTags( tags ) )
|
||||
|
||||
|
@ -1037,24 +1034,14 @@ class HydrusResourceClientAPIRestrictedAddURLsAssociateURL( HydrusResourceClient
|
|||
|
||||
if 'url_to_add' in request.parsed_request_args:
|
||||
|
||||
url = request.parsed_request_args[ 'url_to_add' ]
|
||||
|
||||
if not isinstance( url, str ):
|
||||
|
||||
raise HydrusExceptions.BadRequestException( 'Did not understand the given URL "{}"!'.format( url ) )
|
||||
|
||||
url = request.parsed_request_args.GetValue( 'url_to_add', str )
|
||||
|
||||
urls_to_add.append( url )
|
||||
|
||||
|
||||
if 'urls_to_add' in request.parsed_request_args:
|
||||
|
||||
urls = request.parsed_request_args[ 'urls_to_add' ]
|
||||
|
||||
if not isinstance( urls, list ):
|
||||
|
||||
raise HydrusExceptions.BadRequestException( 'Did not understand the given URLs!' )
|
||||
|
||||
urls = request.parsed_request_args.GetValue( 'urls_to_add', list )
|
||||
|
||||
for url in urls:
|
||||
|
||||
|
@ -1071,24 +1058,14 @@ class HydrusResourceClientAPIRestrictedAddURLsAssociateURL( HydrusResourceClient
|
|||
|
||||
if 'url_to_delete' in request.parsed_request_args:
|
||||
|
||||
url = request.parsed_request_args[ 'url_to_delete' ]
|
||||
|
||||
if not isinstance( url, str ):
|
||||
|
||||
raise HydrusExceptions.BadRequestException( 'Did not understand the given URL "{}"!'.format( url ) )
|
||||
|
||||
url = request.parsed_request_args.GetValue( 'url_to_delete', str )
|
||||
|
||||
urls_to_delete.append( url )
|
||||
|
||||
|
||||
if 'urls_to_delete' in request.parsed_request_args:
|
||||
|
||||
urls = request.parsed_request_args[ 'urls_to_delete' ]
|
||||
|
||||
if not isinstance( urls, list ):
|
||||
|
||||
raise HydrusExceptions.BadRequestException( 'Did not understand the given URLs!' )
|
||||
|
||||
urls = request.parsed_request_args.GetValue( 'urls_to_delete', list )
|
||||
|
||||
for url in urls:
|
||||
|
||||
|
@ -1101,21 +1078,30 @@ class HydrusResourceClientAPIRestrictedAddURLsAssociateURL( HydrusResourceClient
|
|||
|
||||
|
||||
|
||||
if len( urls_to_add ) == 0 and len( urls_to_delete ) == 0:
|
||||
|
||||
raise HydrusExceptions.BadRequestException( 'Did not find any URLs to add or delete!' )
|
||||
|
||||
|
||||
applicable_hashes = []
|
||||
|
||||
if 'hash' in request.parsed_request_args:
|
||||
|
||||
applicable_hashes.append( request.parsed_request_args[ 'hash' ] )
|
||||
hash = request.parsed_request_args.GetValue( 'hash', bytes )
|
||||
|
||||
applicable_hashes.append( hash )
|
||||
|
||||
|
||||
if 'hashes' in request.parsed_request_args:
|
||||
|
||||
applicable_hashes.extend( request.parsed_request_args[ 'hashes' ] )
|
||||
hashes = request.parsed_request_args.GetValue( 'hashes', list )
|
||||
|
||||
applicable_hashes.extend( hashes )
|
||||
|
||||
|
||||
if len( urls_to_add ) == 0 and len( urls_to_delete ) == 0:
|
||||
if len( applicable_hashes ) == 0:
|
||||
|
||||
raise HydrusExceptions.BadRequestException( 'Did not find any URLs to add!' )
|
||||
raise HydrusExceptions.BadRequestException( 'Did not find any hashes to apply the urls to!' )
|
||||
|
||||
|
||||
service_keys_to_content_updates = collections.defaultdict( list )
|
||||
|
@ -1145,7 +1131,7 @@ class HydrusResourceClientAPIRestrictedAddURLsGetURLFiles( HydrusResourceClientA
|
|||
|
||||
def _threadDoGETJob( self, request ):
|
||||
|
||||
url = request.parsed_request_args[ 'url' ]
|
||||
url = request.parsed_request_args.GetValue( 'url', str )
|
||||
|
||||
if url == '':
|
||||
|
||||
|
@ -1189,7 +1175,7 @@ class HydrusResourceClientAPIRestrictedAddURLsGetURLInfo( HydrusResourceClientAP
|
|||
|
||||
def _threadDoGETJob( self, request ):
|
||||
|
||||
url = request.parsed_request_args[ 'url' ]
|
||||
url = request.parsed_request_args.GetValue( 'url', str )
|
||||
|
||||
if url == '':
|
||||
|
||||
|
@ -1220,7 +1206,7 @@ class HydrusResourceClientAPIRestrictedAddURLsImportURL( HydrusResourceClientAPI
|
|||
|
||||
def _threadDoPOSTJob( self, request ):
|
||||
|
||||
url = request.parsed_request_args[ 'url' ]
|
||||
url = request.parsed_request_args.GetValue( 'url', str )
|
||||
|
||||
if url == '':
|
||||
|
||||
|
@ -1235,7 +1221,7 @@ class HydrusResourceClientAPIRestrictedAddURLsImportURL( HydrusResourceClientAPI
|
|||
|
||||
request.client_api_permissions.CheckPermission( ClientAPI.CLIENT_API_PERMISSION_ADD_TAGS )
|
||||
|
||||
service_names_to_tags = request.parsed_request_args[ 'service_names_to_tags' ]
|
||||
service_names_to_tags = request.parsed_request_args.GetValue( 'service_names_to_tags', dict )
|
||||
|
||||
for ( service_name, tags ) in service_names_to_tags.items():
|
||||
|
||||
|
@ -1263,37 +1249,17 @@ class HydrusResourceClientAPIRestrictedAddURLsImportURL( HydrusResourceClientAPI
|
|||
|
||||
if 'destination_page_name' in request.parsed_request_args:
|
||||
|
||||
destination_page_name = request.parsed_request_args[ 'destination_page_name' ]
|
||||
|
||||
if not isinstance( destination_page_name, str ):
|
||||
|
||||
raise HydrusExceptions.BadRequestException( '"destination_page_name" did not seem to be a string!' )
|
||||
|
||||
destination_page_name = request.parsed_request_args.GetValue( 'destination_page_name', str )
|
||||
|
||||
|
||||
destination_page_key = None
|
||||
|
||||
if 'destination_page_key' in request.parsed_request_args:
|
||||
|
||||
destination_page_key = request.parsed_request_args[ 'destination_page_key' ]
|
||||
|
||||
if not isinstance( destination_page_key, bytes ):
|
||||
|
||||
raise HydrusExceptions.BadRequestException( '"destination_page_key" did not seem to be a hex string!' )
|
||||
|
||||
destination_page_key = request.parsed_request_args.GetValue( 'destination_page_key', bytes )
|
||||
|
||||
|
||||
show_destination_page = False
|
||||
|
||||
if 'show_destination_page' in request.parsed_request_args:
|
||||
|
||||
show_destination_page = request.parsed_request_args[ 'show_destination_page' ]
|
||||
|
||||
if not isinstance( show_destination_page, bool ):
|
||||
|
||||
raise HydrusExceptions.BadRequestException( '"show_destination_page" did not seem to be a boolean!' )
|
||||
|
||||
|
||||
show_destination_page = request.parsed_request_args.GetValue( 'show_destination_page', bool, default_value = False )
|
||||
|
||||
def do_it():
|
||||
|
||||
|
@ -1335,7 +1301,10 @@ class HydrusResourceClientAPIRestrictedGetFilesSearchFiles( HydrusResourceClient
|
|||
|
||||
file_search_context = ClientSearch.FileSearchContext( file_service_key = CC.LOCAL_FILE_SERVICE_KEY, tag_service_key = CC.COMBINED_TAG_SERVICE_KEY, predicates = predicates )
|
||||
|
||||
hash_ids = HG.client_controller.Read( 'file_query_ids', file_search_context )
|
||||
# newest first
|
||||
sort_by = ClientMedia.MediaSort( sort_type = ( 'system', CC.SORT_FILES_BY_IMPORT_TIME ), sort_asc = CC.SORT_DESC )
|
||||
|
||||
hash_ids = HG.client_controller.Read( 'file_query_ids', file_search_context, sort_by = sort_by )
|
||||
|
||||
request.client_api_permissions.SetLastSearchResults( hash_ids )
|
||||
|
||||
|
@ -1356,7 +1325,7 @@ class HydrusResourceClientAPIRestrictedGetFilesGetFile( HydrusResourceClientAPIR
|
|||
|
||||
if 'file_id' in request.parsed_request_args:
|
||||
|
||||
file_id = request.parsed_request_args[ 'file_id' ]
|
||||
file_id = request.parsed_request_args.GetValue( 'file_id', int )
|
||||
|
||||
request.client_api_permissions.CheckPermissionToSeeFiles( ( file_id, ) )
|
||||
|
||||
|
@ -1366,7 +1335,7 @@ class HydrusResourceClientAPIRestrictedGetFilesGetFile( HydrusResourceClientAPIR
|
|||
|
||||
request.client_api_permissions.CheckCanSeeAllFiles()
|
||||
|
||||
hash = request.parsed_request_args[ 'hash' ]
|
||||
hash = request.parsed_request_args.GetValue( 'hash', bytes )
|
||||
|
||||
( media_result, ) = HG.client_controller.Read( 'media_results', ( hash, ) )
|
||||
|
||||
|
@ -1401,18 +1370,13 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien
|
|||
|
||||
def _threadDoGETJob( self, request ):
|
||||
|
||||
only_return_identifiers = False
|
||||
|
||||
if 'only_return_identifiers' in request.parsed_request_args:
|
||||
|
||||
only_return_identifiers = request.parsed_request_args[ 'only_return_identifiers' ]
|
||||
|
||||
only_return_identifiers = request.parsed_request_args.GetValue( 'only_return_identifiers', bool, default_value = False )
|
||||
|
||||
try:
|
||||
|
||||
if 'file_ids' in request.parsed_request_args:
|
||||
|
||||
file_ids = request.parsed_request_args[ 'file_ids' ]
|
||||
file_ids = request.parsed_request_args.GetValue( 'file_ids', list )
|
||||
|
||||
request.client_api_permissions.CheckPermissionToSeeFiles( file_ids )
|
||||
|
||||
|
@ -1429,7 +1393,7 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien
|
|||
|
||||
request.client_api_permissions.CheckCanSeeAllFiles()
|
||||
|
||||
hashes = request.parsed_request_args[ 'hashes' ]
|
||||
hashes = request.parsed_request_args.GetValue( 'hashes', list )
|
||||
|
||||
if only_return_identifiers:
|
||||
|
||||
|
@ -1537,7 +1501,7 @@ class HydrusResourceClientAPIRestrictedGetFilesGetThumbnail( HydrusResourceClien
|
|||
|
||||
if 'file_id' in request.parsed_request_args:
|
||||
|
||||
file_id = request.parsed_request_args[ 'file_id' ]
|
||||
file_id = request.parsed_request_args.GetValue( 'file_id', int )
|
||||
|
||||
request.client_api_permissions.CheckPermissionToSeeFiles( ( file_id, ) )
|
||||
|
||||
|
@ -1547,7 +1511,7 @@ class HydrusResourceClientAPIRestrictedGetFilesGetThumbnail( HydrusResourceClien
|
|||
|
||||
request.client_api_permissions.CheckCanSeeAllFiles()
|
||||
|
||||
hash = request.parsed_request_args[ 'hash' ]
|
||||
hash = request.parsed_request_args.GetValue( 'hash', bytes )
|
||||
|
||||
( media_result, ) = HG.client_controller.Read( 'media_results', ( hash, ) )
|
||||
|
||||
|
@ -1586,12 +1550,7 @@ class HydrusResourceClientAPIRestrictedManageCookiesGetCookies( HydrusResourceCl
|
|||
|
||||
def _threadDoGETJob( self, request ):
|
||||
|
||||
if 'domain' not in request.parsed_request_args:
|
||||
|
||||
raise HydrusExceptions.BadRequestException( 'Please include a domain parameter!' )
|
||||
|
||||
|
||||
domain = request.parsed_request_args[ 'domain' ]
|
||||
domain = request.parsed_request_args.GetValue( 'domain', str )
|
||||
|
||||
if '.' not in domain:
|
||||
|
||||
|
@ -1630,7 +1589,7 @@ class HydrusResourceClientAPIRestrictedManageCookiesSetCookies( HydrusResourceCl
|
|||
|
||||
def _threadDoPOSTJob( self, request ):
|
||||
|
||||
cookie_rows = request.parsed_request_args[ 'cookies' ]
|
||||
cookie_rows = request.parsed_request_args.GetValue( 'cookies', list )
|
||||
|
||||
for cookie_row in cookie_rows:
|
||||
|
||||
|
@ -1701,17 +1660,7 @@ class HydrusResourceClientAPIRestrictedManagePagesFocusPage( HydrusResourceClien
|
|||
return HG.client_controller.gui.ShowPage( page_key )
|
||||
|
||||
|
||||
if 'page_key' not in request.parsed_request_args:
|
||||
|
||||
raise HydrusExceptions.BadRequestException( 'No "page_key" given!' )
|
||||
|
||||
|
||||
page_key = request.parsed_request_args[ 'page_key' ]
|
||||
|
||||
if not isinstance( page_key, bytes ):
|
||||
|
||||
raise HydrusExceptions.BadRequestException( '"page_key" did not seem to be a hex string!' )
|
||||
|
||||
page_key = request.parsed_request_args.GetValue( 'page_key', bytes )
|
||||
|
||||
try:
|
||||
|
||||
|
@ -1733,7 +1682,7 @@ class HydrusResourceClientAPIRestrictedManagePagesGetPages( HydrusResourceClient
|
|||
|
||||
def do_it():
|
||||
|
||||
return HG.client_controller.gui.GetCurrentSessionPageInfoDict()
|
||||
return HG.client_controller.gui.GetCurrentSessionPageAPIInfoDict()
|
||||
|
||||
|
||||
page_info_dict = HG.client_controller.CallBlockingToWX( HG.client_controller.gui, do_it )
|
||||
|
@ -1747,3 +1696,32 @@ class HydrusResourceClientAPIRestrictedManagePagesGetPages( HydrusResourceClient
|
|||
return response_context
|
||||
|
||||
|
||||
class HydrusResourceClientAPIRestrictedManagePagesGetPageInfo( HydrusResourceClientAPIRestrictedManagePages ):
    
    def _threadDoGETJob( self, request ):
        
        def do_it( page_key, simple ):
            
            return HG.client_controller.gui.GetPageAPIInfoDict( page_key, simple )
            
        
        page_key = request.parsed_request_args.GetValue( 'page_key', bytes )
        
        simple = request.parsed_request_args.GetValue( 'simple', bool, default_value = True )
        
        page_info_dict = HG.client_controller.CallBlockingToWX( HG.client_controller.gui, do_it, page_key, simple )
        
        if page_info_dict is None:
            
            raise HydrusExceptions.NotFoundException( 'Did not find a page for "{}"!'.format( page_key.hex() ) )
            
        
        body_dict = { 'page_info' : page_info_dict }
        
        body = json.dumps( body_dict )
        
        response_context = HydrusServerResources.ResponseContext( 200, mime = HC.APPLICATION_JSON, body = body )
        
        return response_context
        
    
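A rough usage sketch for the new endpoint (not part of the commit; 45869 is assumed to be the client's usual Client API default port, and the keys are placeholders):

import json
import urllib.request

access_key = '...'  # your 64-character hex Client API access key
page_key = '...'    # hex page key, e.g. taken from /manage_pages/get_pages

url = 'http://127.0.0.1:45869/manage_pages/get_page_info?page_key={}&simple=false'.format( page_key )

req = urllib.request.Request( url, headers = { 'Hydrus-Client-API-Access-Key' : access_key } )

with urllib.request.urlopen( req ) as response:
    
    page_info = json.loads( response.read() )[ 'page_info' ]
    

# 'name', 'page_key', 'page_type', 'management' and 'media' come from Page.GetAPIInfoDict above
print( page_info[ 'name' ], page_info[ 'page_type' ] )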
|
|
|
@ -881,6 +881,55 @@ class Media( object ):
|
|||
|
||||
def __ne__( self, other ): return self.__hash__() != other.__hash__()
|
||||
|
||||
class MediaCollect( HydrusSerialisable.SerialisableBase ):
    
    SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_MEDIA_COLLECT
    SERIALISABLE_NAME = 'Media Collect'
    SERIALISABLE_VERSION = 1
    
    def __init__( self, namespaces = None, rating_service_keys = None, collect_unmatched = None ):
        
        if namespaces is None:
            
            namespaces = []
            
        
        if rating_service_keys is None:
            
            rating_service_keys = []
            
        
        if collect_unmatched is None:
            
            collect_unmatched = True
            
        
        self.namespaces = namespaces
        self.rating_service_keys = rating_service_keys
        self.collect_unmatched = collect_unmatched
        
    
    def _GetSerialisableInfo( self ):
        
        serialisable_rating_service_keys = [ key.hex() for key in self.rating_service_keys ]
        
        return ( self.namespaces, serialisable_rating_service_keys, self.collect_unmatched )
        
    
    def _InitialiseFromSerialisableInfo( self, serialisable_info ):
        
        ( self.namespaces, serialisable_rating_service_keys, self.collect_unmatched ) = serialisable_info
        
        self.rating_service_keys = [ bytes.fromhex( serialisable_key ) for serialisable_key in serialisable_rating_service_keys ]
        
    
    def DoesACollect( self ):
        
        return len( self.namespaces ) > 0 or len( self.rating_service_keys ) > 0
        
    
HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_MEDIA_COLLECT ] = MediaCollect
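A small usage sketch for the new object (illustrative only; the rating service key below is a dummy value, and collect_unmatched controls whether files that match nothing are grouped together or left as singletons):

# collect by creator and series namespaces
media_collect = MediaCollect( namespaces = [ 'creator', 'series' ] )

media_collect.DoesACollect()   # True

# an empty collect does nothing
MediaCollect().DoesACollect()  # False

# ratings are referenced by service key; here unmatched files stay loose
by_rating = MediaCollect( rating_service_keys = [ bytes.fromhex( '00' * 32 ) ], collect_unmatched = False )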
class MediaList( object ):
|
||||
|
||||
def __init__( self, file_service_key, media_results ):
|
||||
|
@ -893,10 +942,7 @@ class MediaList( object ):
|
|||
self._hashes_to_collected_media = {}
|
||||
|
||||
self._media_sort = MediaSort( ( 'system', CC.SORT_FILES_BY_FILESIZE ), CC.SORT_ASC )
|
||||
self._collect_by = []
|
||||
|
||||
self._collect_map_singletons = {}
|
||||
self._collect_map_collected = {}
|
||||
self._media_collect = MediaCollect()
|
||||
|
||||
self._sorted_media = SortedList( [ self._GenerateMediaSingleton( media_result ) for media_result in media_results ] )
|
||||
|
||||
|
@ -911,12 +957,12 @@ class MediaList( object ):
|
|||
return len( self._singleton_media ) + sum( map( len, self._collected_media ) )
|
||||
|
||||
|
||||
def _CalculateCollectionKeysToMedias( self, collect_by, medias ):
|
||||
def _CalculateCollectionKeysToMedias( self, media_collect, medias ):
|
||||
|
||||
keys_to_medias = collections.defaultdict( list )
|
||||
|
||||
namespaces_to_collect_by = [ data for ( collect_by_type, data ) in collect_by if collect_by_type == 'namespace' ]
|
||||
ratings_to_collect_by = [ bytes.fromhex( data ) for ( collect_by_type, data ) in collect_by if collect_by_type == 'rating' ]
|
||||
namespaces_to_collect_by = list( media_collect.namespaces )
|
||||
ratings_to_collect_by = list( media_collect.rating_service_keys )
|
||||
|
||||
services_manager = HG.client_controller.services_manager
|
||||
|
||||
|
@ -928,7 +974,7 @@ class MediaList( object ):
|
|||
|
||||
else:
|
||||
|
||||
namespace_key = None
|
||||
namespace_key = frozenset()
|
||||
|
||||
|
||||
if len( ratings_to_collect_by ) > 0:
|
||||
|
@ -937,7 +983,7 @@ class MediaList( object ):
|
|||
|
||||
else:
|
||||
|
||||
rating_key = None
|
||||
rating_key = frozenset()
|
||||
|
||||
|
||||
keys_to_medias[ ( namespace_key, rating_key ) ].append( media )
|
||||
|
@ -1093,139 +1139,84 @@ class MediaList( object ):
|
|||
self._singleton_media.difference_update( singleton_media )
|
||||
self._collected_media.difference_update( collected_media )
|
||||
|
||||
keys_to_remove = [ key for ( key, media ) in self._collect_map_singletons if media in singleton_media ]
|
||||
|
||||
for key in keys_to_remove:
|
||||
|
||||
del self._collect_map_singletons[ key ]
|
||||
|
||||
|
||||
keys_to_remove = [ key for ( key, media ) in self._collect_map_collected if media in collected_media ]
|
||||
|
||||
for key in keys_to_remove:
|
||||
|
||||
del self._collect_map_collected[ key ]
|
||||
|
||||
|
||||
self._sorted_media.remove_items( singleton_media.union( collected_media ) )
|
||||
|
||||
self._RecalcHashes()
|
||||
|
||||
|
||||
def AddMedia( self, new_media, append = True ):
|
||||
def AddMedia( self, new_media ):
|
||||
|
||||
if append:
|
||||
new_media = FlattenMedia( new_media )
|
||||
|
||||
addable_media = []
|
||||
|
||||
for media in new_media:
|
||||
|
||||
for media in new_media:
|
||||
hash = media.GetHash()
|
||||
|
||||
if hash in self._hashes:
|
||||
|
||||
hash = media.GetHash()
|
||||
|
||||
self._hashes.add( hash )
|
||||
|
||||
self._hashes_to_singleton_media[ hash ] = media
|
||||
continue
|
||||
|
||||
|
||||
self._singleton_media.update( new_media )
|
||||
self._sorted_media.append_items( new_media )
|
||||
addable_media.append( media )
|
||||
|
||||
else:
|
||||
self._hashes.add( hash )
|
||||
|
||||
if self._collect_by is not None:
|
||||
|
||||
keys_to_medias = self._CalculateCollectionKeysToMedias( self._collect_by, new_media )
|
||||
|
||||
new_media = []
|
||||
|
||||
for ( key, medias ) in list(keys_to_medias.items()):
|
||||
|
||||
if key in self._collect_map_singletons:
|
||||
|
||||
singleton_media = self._collect_map_singletons[ key ]
|
||||
|
||||
self._sorted_media.remove_items( singleton_media )
|
||||
self._singleton_media.discard( singleton_media )
|
||||
del self._collect_map_singletons[ key ]
|
||||
|
||||
medias.append( singleton_media )
|
||||
|
||||
collected_media = self._GenerateMediaCollection( [ media.GetMediaResult() for media in medias ] )
|
||||
|
||||
collected_media.Sort( self._media_sort )
|
||||
|
||||
self._collected_media.add( collected_media )
|
||||
self._collect_map_collected[ key ] = collected_media
|
||||
|
||||
new_media.append( collected_media )
|
||||
|
||||
elif key in self._collect_map_collected:
|
||||
|
||||
collected_media = self._collect_map_collected[ key ]
|
||||
|
||||
self._sorted_media.remove_items( collected_media )
|
||||
|
||||
collected_media.AddMedia( medias )
|
||||
|
||||
collected_media.Sort( self._media_sort )
|
||||
|
||||
new_media.append( collected_media )
|
||||
|
||||
elif len( medias ) == 1:
|
||||
|
||||
( singleton_media, ) = medias
|
||||
|
||||
self._singleton_media.add( singleton_media )
|
||||
self._collect_map_singletons[ key ] = singleton_media
|
||||
|
||||
else:
|
||||
|
||||
collected_media = self._GenerateMediaCollection( [ media.GetMediaResult() for media in medias ] )
|
||||
|
||||
collected_media.Sort( self._media_sort )
|
||||
|
||||
self._collected_media.add( collected_media )
|
||||
self._collect_map_collected[ key ] = collected_media
|
||||
|
||||
new_media.append( collected_media )
|
||||
|
||||
|
||||
|
||||
|
||||
self._sorted_media.insert_items( new_media )
|
||||
|
||||
self._RecalcHashes()
|
||||
self._hashes_to_singleton_media[ hash ] = media
|
||||
|
||||
|
||||
self._singleton_media.update( addable_media )
|
||||
self._sorted_media.append_items( addable_media )
|
||||
|
||||
return new_media
|
||||
|
||||
|
||||
def Collect( self, collect_by = None ):
|
||||
def Collect( self, media_collect = None ):
|
||||
|
||||
if collect_by == None:
|
||||
if media_collect == None:
|
||||
|
||||
collect_by = self._collect_by
|
||||
media_collect = self._media_collect
|
||||
|
||||
|
||||
self._collect_by = collect_by
|
||||
self._media_collect = media_collect
|
||||
|
||||
flat_media = list( self._singleton_media )
|
||||
|
||||
for media in self._collected_media:
|
||||
|
||||
self._singleton_media.update( [ self._GenerateMediaSingleton( media_result ) for media_result in media.GenerateMediaResults() ] )
|
||||
flat_media.extend( [ self._GenerateMediaSingleton( media_result ) for media_result in media.GenerateMediaResults() ] )
|
||||
|
||||
|
||||
self._collected_media = set()
|
||||
|
||||
self._collect_map_singletons = {}
|
||||
self._collect_map_collected = {}
|
||||
|
||||
if len( collect_by ) > 0:
|
||||
if self._media_collect.DoesACollect():
|
||||
|
||||
keys_to_medias = self._CalculateCollectionKeysToMedias( collect_by, self._singleton_media )
|
||||
keys_to_medias = self._CalculateCollectionKeysToMedias( media_collect, flat_media )
|
||||
|
||||
self._collect_map_singletons = { key : medias[0] for ( key, medias ) in list(keys_to_medias.items()) if len( medias ) == 1 }
|
||||
self._collect_map_collected = { key : self._GenerateMediaCollection( [ media.GetMediaResult() for media in medias ] ) for ( key, medias ) in list(keys_to_medias.items()) if len( medias ) > 1 }
|
||||
# add an option here I think, to media_collect to say if collections with one item should be singletons or not
|
||||
|
||||
self._singleton_media = set( self._collect_map_singletons.values() )
|
||||
self._collected_media = set( self._collect_map_collected.values() )
|
||||
self._singleton_media = set()#{ medias[0] for ( key, medias ) in keys_to_medias.items() if len( medias ) == 1 }
|
||||
|
||||
if not self._media_collect.collect_unmatched:
|
||||
|
||||
unmatched_key = ( frozenset(), frozenset() )
|
||||
|
||||
if unmatched_key in keys_to_medias:
|
||||
|
||||
unmatched_medias = keys_to_medias[ unmatched_key ]
|
||||
|
||||
self._singleton_media.update( unmatched_medias )
|
||||
|
||||
del keys_to_medias[ unmatched_key ]
|
||||
|
||||
|
||||
|
||||
self._collected_media = { self._GenerateMediaCollection( [ media.GetMediaResult() for media in medias ] ) for ( key, medias ) in keys_to_medias.items() }# if len( medias ) > 1 }
|
||||
|
||||
else:
|
||||
|
||||
self._singleton_media = set( flat_media )
|
||||
|
||||
self._collected_media = set()
|
||||
|
||||
|
||||
self._sorted_media = SortedList( list( self._singleton_media ) + list( self._collected_media ) )
|
||||
|
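Put roughly, the rewritten Collect flattens any existing collections back to singletons and then regroups them according to the MediaCollect object; with collect_unmatched off, files that match no namespace or rating stay loose. A hedged illustration (media_list stands for any MediaList instance):

# group thumbnails by creator, leaving untagged files as singletons
media_list.Collect( MediaCollect( namespaces = [ 'creator' ], collect_unmatched = False ) )

# calling with no argument re-applies the previously set MediaCollect
media_list.Collect()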
@ -1334,6 +1325,22 @@ class MediaList( object ):
|
|||
return media_results
|
||||
|
||||
|
||||
def GetAPIInfoDict( self, simple ):
|
||||
|
||||
d = {}
|
||||
|
||||
d[ 'num_files' ] = self.GetNumFiles()
|
||||
|
||||
if not simple:
|
||||
|
||||
hashes = self.GetHashes( ordered = True )
|
||||
|
||||
d[ 'hashes' ] = [ hash.hex() for hash in hashes ]
|
||||
|
||||
|
||||
return d
|
||||
|
||||
|
||||
def GetFirst( self ):
|
||||
|
||||
return self._GetFirst()
|
||||
|
@ -1572,7 +1579,7 @@ class ListeningMediaList( MediaList ):
|
|||
HG.client_controller.sub( self, 'ProcessServiceUpdates', 'service_updates_gui' )
|
||||
|
||||
|
||||
def AddMediaResults( self, media_results, append = True ):
|
||||
def AddMediaResults( self, media_results ):
|
||||
|
||||
new_media = []
|
||||
|
||||
|
@ -1588,7 +1595,7 @@ class ListeningMediaList( MediaList ):
|
|||
new_media.append( self._GenerateMediaSingleton( media_result ) )
|
||||
|
||||
|
||||
self.AddMedia( new_media, append = append )
|
||||
self.AddMedia( new_media )
|
||||
|
||||
return new_media
|
||||
|
||||
|
@ -1674,9 +1681,9 @@ class MediaCollection( MediaList, Media ):
|
|||
self._file_viewing_stats_manager = FileViewingStatsManager( preview_views, preview_viewtime, media_views, media_viewtime )
|
||||
|
||||
|
||||
def AddMedia( self, new_media, append = True ):
|
||||
def AddMedia( self, new_media ):
|
||||
|
||||
MediaList.AddMedia( self, new_media, append = True )
|
||||
MediaList.AddMedia( self, new_media )
|
||||
|
||||
self._RecalcInternals()
|
||||
|
||||
|
|
|
@ -517,6 +517,8 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
self._dictionary[ 'default_sort' ] = ClientMedia.MediaSort( ( 'system', CC.SORT_FILES_BY_FILESIZE ), CC.SORT_ASC )
|
||||
self._dictionary[ 'fallback_sort' ] = ClientMedia.MediaSort( ( 'system', CC.SORT_FILES_BY_IMPORT_TIME ), CC.SORT_ASC )
|
||||
|
||||
self._dictionary[ 'default_collect' ] = ClientMedia.MediaCollect()
|
||||
|
||||
|
||||
def _InitialiseFromSerialisableInfo( self, serialisable_info ):
|
||||
|
||||
|
@ -668,6 +670,14 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetDefaultCollect( self ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
return self._dictionary[ 'default_collect' ]
|
||||
|
||||
|
||||
|
||||
def GetColour( self, colour_type, colourset = None ):
|
||||
|
||||
with self._lock:
|
||||
|
@ -691,14 +701,6 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetDefaultWatcherCheckerOptions( self ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
return self._dictionary[ 'misc' ][ 'default_thread_watcher_options' ]
|
||||
|
||||
|
||||
|
||||
def GetDefaultSort( self ):
|
||||
|
||||
with self._lock:
|
||||
|
@ -715,6 +717,14 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def GetDefaultWatcherCheckerOptions( self ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
return self._dictionary[ 'misc' ][ 'default_thread_watcher_options' ]
|
||||
|
||||
|
||||
|
||||
def GetDuplicateActionOptions( self, duplicate_type ):
|
||||
|
||||
with self._lock:
|
||||
|
@ -940,6 +950,14 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def SetDefaultCollect( self, media_collect ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._dictionary[ 'default_collect' ] = media_collect
|
||||
|
||||
|
||||
|
||||
def SetColour( self, colour_type, colourset, colour ):
|
||||
|
||||
with self._lock:
|
||||
|
@ -957,14 +975,6 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def SetDefaultWatcherCheckerOptions( self, checker_options ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._dictionary[ 'misc' ][ 'default_thread_watcher_options' ] = checker_options
|
||||
|
||||
|
||||
|
||||
def SetDefaultSort( self, media_sort ):
|
||||
|
||||
with self._lock:
|
||||
|
@ -981,6 +991,14 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
|
||||
|
||||
def SetDefaultWatcherCheckerOptions( self, checker_options ):
|
||||
|
||||
with self._lock:
|
||||
|
||||
self._dictionary[ 'misc' ][ 'default_thread_watcher_options' ] = checker_options
|
||||
|
||||
|
||||
|
||||
def SetDuplicateActionOptions( self, duplicate_type, duplicate_action_options ):
|
||||
|
||||
with self._lock:
|
||||
|
|
|
@ -1540,13 +1540,13 @@ class ServiceRepository( ServiceRestricted ):
|
|||
|
||||
elif HG.client_controller.CurrentlyIdle():
|
||||
|
||||
work_time = 9.0
|
||||
break_time = 1.0
|
||||
work_time = 9.5
|
||||
break_time = 0.5
|
||||
|
||||
else:
|
||||
|
||||
work_time = 0.5
|
||||
break_time = 0.5
|
||||
work_time = 0.45
|
||||
break_time = 0.05
|
||||
|
||||
|
||||
num_rows_done = HG.client_controller.WriteSynchronous( 'process_repository_definitions', self._service_key, definition_hash, iterator_dict, job_key, work_time )
|
||||
|
@ -1556,7 +1556,7 @@ class ServiceRepository( ServiceRestricted ):
|
|||
|
||||
work_done = True
|
||||
|
||||
if this_is_first_definitions_work and total_definition_rows_completed > 10000 and not did_definition_analyze:
|
||||
if this_is_first_definitions_work and total_definition_rows_completed > 1000 and not did_definition_analyze:
|
||||
|
||||
HG.client_controller.WriteSynchronous( 'analyze', maintenance_mode = maintenance_mode, stop_time = stop_time )
|
||||
|
||||
|
@ -1572,6 +1572,8 @@ class ServiceRepository( ServiceRestricted ):
|
|||
|
||||
time.sleep( break_time )
|
||||
|
||||
HG.client_controller.WaitUntilViewFree()
|
||||
|
||||
|
||||
self._ReportOngoingRowSpeed( job_key, rows_done_in_this_update, rows_in_this_update, this_work_start_time, num_rows_done, 'definitions' )
|
||||
|
||||
|
@ -1672,13 +1674,13 @@ class ServiceRepository( ServiceRestricted ):
|
|||
|
||||
elif HG.client_controller.CurrentlyIdle():
|
||||
|
||||
work_time = 9.0
|
||||
break_time = 1.0
|
||||
work_time = 9.5
|
||||
break_time = 0.5
|
||||
|
||||
else:
|
||||
|
||||
work_time = 0.5
|
||||
break_time = 0.2
|
||||
work_time = 0.45
|
||||
break_time = 0.05
|
||||
|
||||
|
||||
num_rows_done = HG.client_controller.WriteSynchronous( 'process_repository_content', self._service_key, content_hash, iterator_dict, job_key, work_time )
|
||||
|
@ -1688,7 +1690,7 @@ class ServiceRepository( ServiceRestricted ):
|
|||
|
||||
work_done = True
|
||||
|
||||
if this_is_first_content_work and total_content_rows_completed > 10000 and not did_content_analyze:
|
||||
if this_is_first_content_work and total_content_rows_completed > 1000 and not did_content_analyze:
|
||||
|
||||
HG.client_controller.WriteSynchronous( 'analyze', maintenance_mode = maintenance_mode, stop_time = stop_time )
|
||||
|
||||
|
@ -1704,6 +1706,8 @@ class ServiceRepository( ServiceRestricted ):
|
|||
|
||||
time.sleep( break_time )
|
||||
|
||||
HG.client_controller.WaitUntilViewFree()
|
||||
|
||||
|
||||
self._ReportOngoingRowSpeed( job_key, rows_done_in_this_update, rows_in_this_update, this_work_start_time, num_rows_done, 'content rows' )
|
||||
|
||||
|
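The timing tweak above boils down to a work/rest loop; a simplified sketch of the shape (not the actual method, and more_rows_to_do/do_work are hypothetical stand-ins):

while more_rows_to_do:
    
    if HG.client_controller.CurrentlyIdle():
        
        ( work_time, break_time ) = ( 9.5, 0.5 )
        
    else:
        
        ( work_time, break_time ) = ( 0.45, 0.05 )
        
    
    do_work( work_time )  # stand-in for the WriteSynchronous processing call
    
    time.sleep( break_time )
    
    HG.client_controller.WaitUntilViewFree()  # yield further if the ui is busy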
|
|
@ -67,8 +67,8 @@ options = {}
|
|||
# Misc
|
||||
|
||||
NETWORK_VERSION = 18
|
||||
SOFTWARE_VERSION = 364
|
||||
CLIENT_API_VERSION = 10
|
||||
SOFTWARE_VERSION = 365
|
||||
CLIENT_API_VERSION = 11
|
||||
|
||||
SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
|
||||
|
||||
|
|
|
@ -1059,6 +1059,22 @@ def Profile( summary, code, g, l, min_duration_ms = 20 ):
|
|||
|
||||
HG.controller.PrintProfile( summary, details )
|
||||
|
||||
def PullNFromIterator( iterator, n ):
    
    chunk = []
    
    for item in iterator:
        
        chunk.append( item )
        
        if len( chunk ) == n:
            
            return chunk
            
        
    
    return chunk
    
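A quick sketch of what the helper does (illustrative values):

numbers = iter( range( 7 ) )

PullNFromIterator( numbers, 3 )  # [ 0, 1, 2 ]
PullNFromIterator( numbers, 3 )  # [ 3, 4, 5 ]
PullNFromIterator( numbers, 3 )  # [ 6 ], a short final chunk
PullNFromIterator( numbers, 3 )  # [] once the iterator is exhausted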
def RandomPop( population ):
|
||||
|
||||
random_index = random.randint( 0, len( population ) - 1 )
|
||||
|
@ -1138,7 +1154,41 @@ def SplitIteratorIntoChunks( iterator, n ):
|
|||
|
||||
yield chunk
|
||||
|
||||
|
||||
|
||||
def SplitIteratorIntoAutothrottledChunks( iterator, starting_n, precise_time_to_stop ):
|
||||
|
||||
n = starting_n
|
||||
|
||||
chunk = PullNFromIterator( iterator, n )
|
||||
|
||||
while len( chunk ) > 0:
|
||||
|
||||
time_work_started = GetNowPrecise()
|
||||
|
||||
yield chunk
|
||||
|
||||
work_time = GetNowPrecise() - time_work_started
|
||||
|
||||
items_per_second = n / work_time
|
||||
|
||||
time_remaining = precise_time_to_stop - GetNowPrecise()
|
||||
|
||||
if TimeHasPassedPrecise( precise_time_to_stop ):
|
||||
|
||||
n = 1
|
||||
|
||||
else:
|
||||
|
||||
expected_items_in_remaining_time = max( 1, int( time_remaining * items_per_second ) )
|
||||
|
||||
quad_speed = n * 4
|
||||
|
||||
n = min( quad_speed, expected_items_in_remaining_time )
|
||||
|
||||
|
||||
chunk = PullNFromIterator( iterator, n )
|
||||
|
||||
|
||||
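A hedged sketch of how the autothrottled splitter is driven with a time budget (process_rows is a hypothetical stand-in; GetNowPrecise and TimeHasPassedPrecise are the module's existing time helpers used above):

rows = iter( range( 100000 ) )

precise_time_to_stop = GetNowPrecise() + 2.0  # a two-second budget

for chunk in SplitIteratorIntoAutothrottledChunks( rows, 100, precise_time_to_stop ):
    
    process_rows( chunk )  # the generator times this and sizes the next chunk accordingly
    
    if TimeHasPassedPrecise( precise_time_to_stop ):
        
        break
        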
def SplitListIntoChunks( xs, n ):
|
||||
|
||||
if isinstance( xs, set ):
|
||||
|
@ -1151,6 +1201,63 @@ def SplitListIntoChunks( xs, n ):
|
|||
yield xs[ i : i + n ]
|
||||
|
||||
|
||||
def SplitMappingIteratorIntoAutothrottledChunks( iterator, starting_n, precise_time_to_stop ):
|
||||
|
||||
n = starting_n
|
||||
|
||||
chunk_weight = 0
|
||||
chunk = []
|
||||
|
||||
for ( tag_item, hash_items ) in iterator:
|
||||
|
||||
hash_item_iterator = iter( hash_items )
|
||||
|
||||
chunk_of_hash_items = PullNFromIterator( hash_item_iterator, max( 1, n - chunk_weight ) )
|
||||
|
||||
while len( chunk_of_hash_items ) > 0:
|
||||
|
||||
chunk.append( ( tag_item, chunk_of_hash_items ) )
|
||||
|
||||
chunk_weight += len( chunk_of_hash_items )
|
||||
|
||||
if chunk_weight >= n:
|
||||
|
||||
time_work_started = GetNowPrecise()
|
||||
|
||||
yield chunk
|
||||
|
||||
chunk_weight = 0
|
||||
chunk = []
|
||||
|
||||
work_time = GetNowPrecise() - time_work_started
|
||||
|
||||
items_per_second = n / work_time
|
||||
|
||||
time_remaining = precise_time_to_stop - GetNowPrecise()
|
||||
|
||||
if TimeHasPassedPrecise( precise_time_to_stop ):
|
||||
|
||||
n = 1
|
||||
|
||||
else:
|
||||
|
||||
expected_items_in_remaining_time = max( 1, int( time_remaining * items_per_second ) )
|
||||
|
||||
quad_speed = n * 4
|
||||
|
||||
n = min( quad_speed, expected_items_in_remaining_time )
|
||||
|
||||
|
||||
|
||||
chunk_of_hash_items = PullNFromIterator( hash_item_iterator, max( 1, n - chunk_weight ) )
|
||||
|
||||
|
||||
|
||||
if len( chunk ) > 0:
|
||||
|
||||
yield chunk
|
||||
|
||||
|
||||
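The mapping variant applies the same throttling to ( tag, hashes ) pairs, counting hashes as chunk weight and splitting a long hash list across chunks as needed; a tiny illustration with dummy data (process_mappings is a hypothetical stand-in):

mappings = iter( [ ( 'tag a', [ 'hash 1', 'hash 2', 'hash 3' ] ), ( 'tag b', [ 'hash 4' ] ) ] )

for chunk in SplitMappingIteratorIntoAutothrottledChunks( mappings, 2, GetNowPrecise() + 1.0 ):
    
    # the first chunk, with starting weight 2, is [ ( 'tag a', [ 'hash 1', 'hash 2' ] ) ]
    process_mappings( chunk )
    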
def SplitMappingIteratorIntoChunks( xs, n ):
|
||||
|
||||
chunk_weight = 0
|
||||
|
|
|
@ -42,6 +42,7 @@ currently_uploading_pending = False
|
|||
shutting_down_due_to_already_running = False
|
||||
|
||||
do_idle_shutdown_work = False
|
||||
program_is_shutting_down = False
|
||||
shutdown_complete = False
|
||||
restart = False
|
||||
emergency_exit = False
|
||||
|
|
|
@ -176,6 +176,50 @@ class ParsedRequestArguments( dict ):
|
|||
raise HydrusExceptions.BadRequestException( 'It looks like the parameter "{}" was missing!'.format( key ) )
|
||||
|
||||
|
||||
def GetValue( self, key, expected_type, default_value = None ):
    
    if key in self:
        
        value = self[ key ]
        
        if not isinstance( value, expected_type ):
            
            error_text_lookup = {}
            
            error_text_lookup[ int ] = 'integer'
            error_text_lookup[ str ] = 'string'
            error_text_lookup[ bytes ] = 'hex-encoded bytestring'
            error_text_lookup[ bool ] = 'boolean'
            error_text_lookup[ list ] = 'list'
            error_text_lookup[ dict ] = 'object/dict'
            
            if expected_type in error_text_lookup:
                
                type_error_text = error_text_lookup[ expected_type ]
                
            else:
                
                type_error_text = 'unknown!'
                
            
            raise HydrusExceptions.BadRequestException( 'The parameter "{}" was not the expected type: {}!'.format( key, type_error_text ) )
            
        
        return value
        
    else:
        
        if default_value is None:
            
            raise HydrusExceptions.BadRequestException( 'The required parameter "{}" was missing!'.format( key ) )
            
        else:
            
            return default_value
            
        
    
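A short sketch of the three behaviours (constructing the args object directly here is purely illustrative):

args = ParsedRequestArguments( { 'url' : 'https://example.com/x', 'simple' : True } )

args.GetValue( 'url', str )                              # 'https://example.com/x'
args.GetValue( 'simple', bool, default_value = True )    # True
args.GetValue( 'page_key', bytes, default_value = b'' )  # b'', the default, since the key is absent
args.GetValue( 'hash', bytes )                           # raises BadRequestException, required parameter missing
args.GetValue( 'url', bytes )                            # raises BadRequestException, wrong type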
class BandwidthRules( HydrusSerialisable.SerialisableBase ):
|
||||
|
||||
SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_BANDWIDTH_RULES
|
||||
|
|
|
@ -92,6 +92,7 @@ SERIALISABLE_TYPE_LOGIN_STEP = 74
|
|||
SERIALISABLE_TYPE_CLIENT_API_MANAGER = 75
|
||||
SERIALISABLE_TYPE_CLIENT_API_PERMISSIONS = 76
|
||||
SERIALISABLE_TYPE_SERVICE_KEYS_TO_TAGS = 77
|
||||
SERIALISABLE_TYPE_MEDIA_COLLECT = 78
|
||||
|
||||
SERIALISABLE_TYPES_TO_OBJECT_TYPES = {}
|
||||
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
from . import HydrusConstants as HC
|
||||
from . import HydrusServerResources
|
||||
import traceback
|
||||
from twisted.web.http import _GenericHTTPChannelProtocol, HTTPChannel
|
||||
from twisted.web.server import Request, Site
|
||||
from twisted.web.resource import Resource
|
||||
from . import HydrusData
|
||||
|
@ -48,6 +49,11 @@ class HydrusRequestLogging( HydrusRequest ):
|
|||
HydrusData.Print( message )
|
||||
|
||||
|
||||
|
||||
class FatHTTPChannel( HTTPChannel ):
|
||||
|
||||
totalHeadersSize = 1048576 # :^)
|
||||
|
||||
class HydrusService( Site ):
|
||||
|
||||
def __init__( self, service ):
|
||||
|
@ -58,6 +64,8 @@ class HydrusService( Site ):
|
|||
|
||||
Site.__init__( self, root )
|
||||
|
||||
self.protocol = self._ProtocolFactory
|
||||
|
||||
if service.LogsRequests():
|
||||
|
||||
self.requestFactory = HydrusRequestLogging
|
||||
|
@ -79,3 +87,7 @@ class HydrusService( Site ):
|
|||
return root
|
||||
|
||||
|
||||
def _ProtocolFactory( self ):
|
||||
|
||||
return _GenericHTTPChannelProtocol( FatHTTPChannel() )
|
||||
|
||||
|
|
|
@ -469,7 +469,7 @@ class Controller( object ):
|
|||
return False
|
||||
|
||||
|
||||
def GetCurrentSessionPageInfoDict( self ):
|
||||
def GetCurrentSessionPageAPIInfoDict( self ):
|
||||
|
||||
return {
|
||||
"name" : "top pages notebook",
|
||||
|
@ -528,6 +528,11 @@ class Controller( object ):
|
|||
return self._managers[ manager_type ]
|
||||
|
||||
|
||||
def GetPageAPIInfoDict( self, page_key, simple ):
|
||||
|
||||
return {}
|
||||
|
||||
|
||||
def GetWrite( self, name ):
|
||||
|
||||
write = self._writes[ name ]
|
||||
|
|
license.txt (37 lines changed)
@ -1,13 +1,24 @@
|
|||
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
|
||||
Version 2, December 2004
|
||||
|
||||
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
|
||||
|
||||
Everyone is permitted to copy and distribute verbatim or modified
|
||||
copies of this license document, and changing it is allowed as long
|
||||
as the name is changed.
|
||||
|
||||
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
|
||||
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
|
||||
|
||||
0. You just DO WHAT THE FUCK YOU WANT TO.
|
||||
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
|
||||
Version 3, May 2010
|
||||
|
||||
Copyright (C) 2010 by Kris Craig
|
||||
Olympia, WA USA
|
||||
|
||||
Everyone is permitted to copy and distribute verbatim or modified
|
||||
copies of this license document, and changing it is allowed as long
|
||||
as the name is changed.
|
||||
|
||||
This license applies to any copyrightable work with which it is
|
||||
packaged and/or distributed, except works that are already covered by
|
||||
another license. Any other license that applies to the same work
|
||||
shall take precedence over this one.
|
||||
|
||||
To the extent permitted by applicable law, the works covered by this
|
||||
license are provided "as is" and do not come with any warranty except
|
||||
where otherwise explicitly stated.
|
||||
|
||||
|
||||
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
|
||||
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION, AND MODIFICATION
|
||||
|
||||
0. You just DO WHAT THE FUCK YOU WANT TO.
|
Binary file not shown (image changed: 3.3 KiB before, 3.2 KiB after).