Version 447

This commit is contained in:
parent 646303640b
commit e95ddf7fb6

@@ -8,6 +8,48 @@
<div class="content">
<h3 id="changelog"><a href="#changelog">changelog</a></h3>
<ul>
<li><h3 id="version_447"><a href="#version_447">version 447</a></h3></li>
<ul>
<li>misc:</li>
<li>fixed drag and dropping multiple newline separated urls onto the client when those urls come from a generic text source</li>
<li>pages now cache their 'ordered' file id list. this speeds up several little jobs, but most importantly should reduce session save time for sessions with tens of thousands of files</li>
<li>common file resolutions such as 1920x1080 are now replaced in labels with '1080p' strings as already used in the duplicate system. also added 'vertical' variants of 720p, 1080p, and 4k (see the sketch after this list)</li>
<li>when a page preview viewer gets a call to clear its current media when it is not currently the page in view, it now recognises that properly. this was happening (a 'sticky' preview) on drag and drops that navigated and terminated on other pages</li>
<li>the various 'retry ignored' commands on downloaders now give an interstitial dialog where you can choose to retry 'all', '404s', or 'blacklisted' files only</li>
<li>manage tag siblings/parents now disables its import button until its data is loaded. imports that were clicked through before loading were being forgotten due to tangled logic, so for now I'll just disable the button!</li>
<li>reduced some more spiky database I/O overhead from the UI's perspective (now savepoints are performed after a result is returned, just like I recently did with transaction commit)</li>
<li>duplicate potentials search will now update the y in its x/y progress display if many files have been imported since the search was started and x becomes larger than y (due to y secretly growing)</li>
<li>fixed the default 'gelbooru md5' file lookup script. if you have a lookup script with this name, it will be updated to my new default automatically. I don't really like fixing this old system, but I am not sure when I will fit in my big rewrite that will merge it with the normal downloader system, so this is a quick fix for the meantime</li>
<li>if you are one of the users who had weird unfixable 404 update file problems with the PTR, please try unpausing and doing a metadata resync one more time this week. fingers crossed, this is fixed. please let me know how you get on too, fixed or not, and also if you have had 'malformed' database problems in the past</li>
<li>.</li>
<li>multi column lists:</li>
<li>improved the precision of longer text pixel_width->text and text->pixel_width calculations, which are particularly used in the multi-column list state saving system. another multi-column size calculation bug, where lists could grow by 1 character's width on >~60 character width columns on every dialog reopen, is now fixed</li>
<li>multi-column lists should now calculate last column width more precisely and accurately regardless of vertical scrollbar presence or recent show/hide</li>
<li>the snapping system that locks last column size to 5-character multiples can now snap up or down, increasing error tolerance</li>
<li>I added a hack to stop the bug some people had of multi-column lists suddenly growing wide, up to screen width, in a resize loop. I think it works, but as I cannot reproduce this error, please let me know how you get on. resizing the options->external programs panel seems to initiate it reliably for those users affected</li>
<li>.</li>
<li>profile mode:</li>
<li>all debug profile modes (callto, db, server, menu, pubsub, and ui) are now merged into one mode under help->debug</li>
<li>this new mode no longer spams popups, and it only prints 'slow' jobs to the profile log</li>
<li>it also makes a new profile log every time it is turned on, using mode start timestamp rather than client boot timestamp, and when profile mode is turned off, there is a popup summary of how many fast and slow jobs passed through during the log time</li>
<li>touched up profile code, timing thresholds, summary statements, and the help</li>
<li>.</li>
<li>special update rule this week:</li>
<li>due to the big file storage rework this week, there's some bit rot in older update routines. 447 cannot update databases older than 411, and it _may_ have trouble updating before 436. if this applies to you, the client will error out or warn you before continuing. I'd like to know what happens to you if you are v411-435 so I can refine these messages</li>
<li>.</li>
<li>boring database refactoring:</li>
<li>the primary current, deleted, pending, and petitioned files tables are now split according to service, much as I have done with mapping tables in the past. this saves a little space and accelerates many file calculations on large clients. if you have a client database script or patch that inspects 'current_files' or 'deleted_files', you'll now be looking at current_files_x etc., where x is the service_id, and they obviously no longer have a service_id column</li>
<li>a new file storage database module manages these tables, and also some misc file deletion metadata</li>
<li>refactored all raw file storage updates, filters, and searches to the new module</li>
<li>the mappings and new file storage database modules are now responsible for various 'num files/mappings' metadata calculations</li>
<li>most file operations on smaller domains, typically trash or repository update files, will be significantly faster (since the much larger 'my files' table data isn't fattening the relevant indices, and worst case query planning is so much better)</li>
<li>cleaned up a ton of file domain filtering code as a result of all this</li>
<li>physical file deletion is now much faster when the client has many pending file uploads to a file repository or IPFS service</li>
<li>complicated duplicate file operations of many sorts should be a _little_ faster now, particularly on large clients</li>
<li>searching files with 'file import time' sort should be a little faster in many situations</li>
<li>tag repositories no longer bother going down to the database level to see if they have any thumbnails to sync with</li>
<li>everyone also gets a local file id cache regen this week; it may take a few seconds on update</li>
</ul>
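As a quick sketch of the resolution-label item above: the feature amounts to a lookup from exact (width, height) pairs to a friendly string, falling back to the usual 'WxH' form. The label table here is assumed from the changelog wording, not copied from the client:

# illustrative sketch only; the label table is assumed, not the client's actual constant
NICE_RESOLUTIONS = {
    ( 1280, 720 ): '720p',
    ( 1920, 1080 ): '1080p',
    ( 3840, 2160 ): '4k',
    ( 720, 1280 ): 'vertical 720p',
    ( 1080, 1920 ): 'vertical 1080p',
    ( 2160, 3840 ): 'vertical 4k'
}

def resolution_to_label( width, height ):
    
    # fall back to the plain 'WxH' string for resolutions without a nice name
    return NICE_RESOLUTIONS.get( ( width, height ), '{}x{}'.format( width, height ) )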
<li><h3 id="version_446"><a href="#version_446">version 446</a></h3></li>
<ul>
<li>misc:</li>
@@ -8,6 +8,7 @@

<div class="content">
<h3 id="intro"><a href="#intro">hydrus is cpu and hdd hungry</a></h3>
<p>The hydrus client manages a lot of complicated data and gives you a lot of power over it. To add millions of files and tags to its database, and then to perform difficult searches over that information, it needs to use a lot of CPU time and hard drive time--sometimes in small laggy blips, and occasionally in big 100% CPU chunks. I don't put training wheels or limiters on the software either, so if you search for 300,000 files, the client will try to fetch that many.</p>
<p>Furthermore, I am just one unprofessional guy dealing with a lot of legacy code from when I was even worse at programming. I am always working to reduce lag and other inconveniences, and improve UI feedback when many things are going on, but there is still a lot for me to do.</p>
<p>In general, the client works best on snappy computers with low-latency hard drives where it does not have to constantly compete with other CPU- or HDD-heavy programs. Running hydrus on your games computer is no problem at all, but if you leave the client on all the time, then make sure under the options it is set not to do idle work while your CPU is busy, so your games can run freely. Similarly, if you run two clients on the same computer, you should have them set to work at different times, because if they both try to process 500,000 tags at once on the same hard drive, they will each slow to a crawl.</p>
<p>If you run on an HDD, keeping it defragged is very important, and good practice for all your programs anyway. Make sure you know what this is and that you do it.</p>
<h3 id="maintenance_and_processing"><a href="#maintenance_and_processing">maintenance and processing</a></h3>
@@ -16,22 +17,20 @@

<p>If you have an old, slower computer though, or if your hard drive is high latency, make sure these options are set for whatever is best for your situation. Turning off idle time completely is often helpful, as some older computers are slow to even recognise--mid task--that you want to use the client again, or take too long to abandon a big task half way through. If you set your client to only do work on shutdown, then you can control exactly when that happens.</p>
<h3 id="reducing_lag"><a href="#reducing_lag">reducing search and general gui lag</a></h3>
<p>Searching for tags via the autocomplete dropdown and searching for files in general can sometimes take a very long time. It depends on many things. In general, the more predicates (tags and system:something) you have active for a search, and the more specific they are, the faster it will be.</p>
<p>You can also look at <i>file->options->speed and memory</i>, again especially if you have a slow computer. Increasing the autocomplete thresholds is very often helpful. You can even force autocompletes to only fetch results when you manually ask for them.</p>
<p>You can also look at <i>file->options->speed and memory</i>. Increasing the autocomplete thresholds under <i>tags->manage tag display and search</i> is also often helpful. You can even force autocompletes to only fetch results when you manually ask for them.</p>
<p>Having lots of thumbnails open or downloads running can slow many things down. Check the 'pages' menu to see your current session weight. If it is about 50,000, or you have individual pages with more than 10,000 files or download URLs, try cutting down a bit.</p>
<h3 id="profiles"><a href="#profiles">finally - profiles</a></h3>
<p>Lots of my code remains unoptimised for certain situations. My development environment only has a few thousand images and a few million tags. As I write code, I am usually more concerned with getting it to work at all rather than getting it to work fast for every possible scenario. So, if something is running slow for you, but your computer is otherwise working fine, let me know and I can almost always speed it up.</p>
<p>Let me know:</p>
<p>Programming is all about re-editing your first, second, and third drafts of an idea. You are always going back to old code and adding new features or making it work better. If something is running slow for you, I can almost always speed it up or at least improve the way it schedules that chunk of work.</p>
<p>However, figuring out exactly why something is running slow or holding up the UI is tricky and often gives an unexpected result. I can guess what might be running inefficiently from reports, but what I really need to be sure is a <i>profile</i>, which drills down into every function of a job, counting how many times they are called and timing how long they take. A profile for a single call looks like <a href="profile_example.txt">this</a>.</p>
<p>So, please let me know:</p>
<ul>
<li>The general steps to reproduce the problem (e.g. "Running system:numtags>4 is ridiculously slow on its own on 'all known tags'.")</li>
<li>Your operating system and its version (e.g. "Windows 8.1")</li>
<li>Your computer's general power (e.g. "A couple of years old. It runs most stuff ok.")</li>
<li>Your client's approximate overall size (e.g. "500k files, and it syncs to the PTR.")</li>
<li>The type of hard drive you are running hydrus from. (e.g. "A 2TB 7200rpm drive that is 20% full. I regularly defrag it.")</li>
<li>Any <i>profiles</i> you have collected.</li>
</ul>
<p>A <i>profile</i> is a large block of debug text that lets me know which parts of my code are running slow for you. A profile for a single call looks like <a href="profile_example.txt">this</a>.</p>
<p>It is very helpful to me to have a profile. You can generate one by going <i>help->debug->xxx profile mode</i>, which tells the client to generate profile information for every subsequent xxx request. This can be spammy, so don't leave it on for a very long time (you can turn it off by hitting the help menu entry again).</p>
<p>For most problems, you probably want <i>db profile mode</i>.</p>
<p>Turn on a profile mode, do the thing that runs slow for you (importing a file, fetching some tags, whatever), and then check your database folder (most likely <i>install_dir/db</i>) for a new 'client profile - DATE.log' file. This file will be filled with several sets of tables with timing information. Please send that whole file to me, or if it is too large, cut what seems important. It should not contain any personal information, but feel free to look through it.</p>
<p>You can generate a profile by hitting <i>help->debug->profile mode</i>, which tells the client to generate profile information for almost all of its behind-the-scenes jobs. This can be spammy, so don't leave it on for a very long time (you can turn it off by hitting the help menu entry again).</p>
<p>Turn on profile mode, do the thing that runs slow for you (importing a file, fetching some tags, whatever), and then check your database folder (most likely <i>install_dir/db</i>) for a new 'client profile - DATE.log' file. This file will be filled with several sets of tables with timing information. Please send that whole file to me, or if it is too large, cut what seems important. It should not contain any personal information, but feel free to look through it.</p>
<p>There are several ways to <a href="contact.html">contact me</a>.</p>
</div>
</body>
@@ -375,7 +375,7 @@ def ShowTextClient( text ):

    HG.client_controller.pub( 'message', job_key )
    

def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'just now', just_now_threshold = 3, show_seconds = True, no_prefix = False ):
def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'just now', just_now_threshold = 3, history_suffix = ' ago', show_seconds = True, no_prefix = False ):
    
    if HG.client_controller.new_options.GetBoolean( 'always_show_iso_time' ):

@@ -383,7 +383,7 @@ def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'just now', just_no

    else:
        
        return HydrusData.TimestampToPrettyTimeDelta( timestamp, just_now_string = just_now_string, just_now_threshold = just_now_threshold, show_seconds = show_seconds, no_prefix = no_prefix )
        return HydrusData.TimestampToPrettyTimeDelta( timestamp, just_now_string = just_now_string, just_now_threshold = just_now_threshold, history_suffix = history_suffix, show_seconds = show_seconds, no_prefix = no_prefix )
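A hypothetical call showing what the new history_suffix parameter does (output strings illustrative, not captured from the client):

# TimestampToPrettyTimeDelta( HydrusData.GetNow() - 86400 )                       -> '1 day ago'
# TimestampToPrettyTimeDelta( HydrusData.GetNow() - 86400, history_suffix = '' )  -> '1 day'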
class Booru( HydrusData.HydrusYAMLBase ):

@@ -275,7 +275,7 @@ def GetDefaultScriptRows():
script_info.append( ( 32, 'iqdb danbooru', 2, HydrusData.GetNow(), '''["https://danbooru.iqdb.org/", 1, 0, [55, 1, [[], "some hash bytes"]], "file", {}, [[29, 1, ["link to danbooru", [27, 6, [[26, 1, [[62, 2, [0, "td", {"class": "image"}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "href", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], [[30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-1"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "creator"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-3"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "series"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-4"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "character"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "tag-list"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {"class": "tag-type-0"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {"class": "search-tag"}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, ""]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [2, "Rating:*", null, null, "Rating: Safe"]], [55, 1, [[[0, 8]], "Rating: Safe"]]]], 0, false, "rating"]], [30, 4, ["", 7, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, true, [51, 1, [2, "Source:*", null, null, "Source:"]]]], [62, 2, [0, "a", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "href", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [8, 0]]]]]], [30, 4, ["no iqdb match found", 8, [27, 6, [[26, 1, [[62, 2, [0, "th", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [false, [51, 1, [2, "Best match", null, null, "Best match"]]]]]]]''' ) )
script_info.append( ( 32, 'danbooru md5', 2, HydrusData.GetNow(), '''["https://danbooru.donmai.us/", 0, 1, [55, 1, [[[4, "hex"]], "some hash bytes"]], "md5", {"page": "post", "s": "list"}, [[30, 4, ["we got sent back to main gallery page -- title test", 8, [27, 6, [[26, 1, [[62, 2, [0, "head", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "title", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [true, [51, 1, [2, "Image List", null, null, "Image List"]]]]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-0"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, ""]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-3"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "series"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-1"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "creator"]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-4"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, "character"]], [30, 4, ["we got sent back to main gallery page -- page links exist", 8, [27, 6, [[26, 1, [[62, 2, [0, "div", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 0, "class", [51, 1, [3, "", null, null, "example string"]], [55, 1, [[], "parsed information"]]]], 0, false, [true, [51, 1, [2, "pagination", null, null, "pagination"]]]]], [30, 4, ["", 0, [27, 6, [[26, 1, [[62, 2, [0, "section", {"id": "post-information"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "li", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "href", [51, 1, [2, "Rating:*", null, null, "Rating: Safe"]], [55, 1, [[[0, 8]], "Rating: Safe"]]]], 0, false, "rating"]]]]''' ) )
script_info.append( ( 32, 'gelbooru md5', 1, HydrusData.GetNow(), '''["https://gelbooru.com/index.php", 0, 1, 1, "md5", {"s": "list", "page": "post"}, [[30, 1, ["we got sent back to main gallery page -- title test", 8, [27, 1, [[["head", {}, 0], ["title", {}, 0]], null]], [true, true, "Image List"]]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-general"}, null], ["a", {}, 1]], null]], ""]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-copyright"}, null], ["a", {}, 1]], null]], "series"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-artist"}, null], ["a", {}, 1]], null]], "creator"]], [30, 1, ["", 0, [27, 1, [[["li", {"class": "tag-type-character"}, null], ["a", {}, 1]], null]], "character"]], [30, 1, ["we got sent back to main gallery page -- page links exist", 8, [27, 1, [[["div", {}, null]], "class"]], [true, true, "pagination"]]]]]''' ) )
script_info.append( ( 32, 'gelbooru md5', 2, HydrusData.GetNow(), '''["http://gelbooru.com/index.php", 0, 1, [55, 1, [[[4, "hex"]], "some hash bytes"]], "md5", {"s": "list", "page": "post"}, [[30, 6, ["we got sent back to main gallery page -- title test", 8, [27, 7, [[26, 1, [[62, 2, [0, "head", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "title", {}, 0, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], [true, [51, 1, [2, "Image List", null, null, "Image List"]]]]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-general"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], ""]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-copyright"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "series"]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-artist"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "creator"]], [30, 6, ["", 0, [27, 7, [[26, 1, [[62, 2, [0, "li", {"class": "tag-type-character"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, 1, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 1, "", [84, 1, [26, 1, []]]]], "character"]], [30, 6, ["we got sent back to main gallery page -- page links exist", 8, [27, 7, [[26, 1, [[62, 2, [0, "div", {"id": "paginator"}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]], [62, 2, [0, "a", {}, null, null, false, [51, 1, [3, "", null, null, "example string"]]]]]], 2, "class", [84, 1, [26, 1, []]]]], [true, [51, 1, [3, "", null, null, "pagination"]]]]]]]''' ) )

return script_info
@@ -174,6 +174,25 @@ class DuplicatesManager( object ):

            num_searched_estimate += num_done
            
            if num_searched_estimate > total_num_files:
                
                similar_files_maintenance_status = HG.client_controller.Read( 'similar_files_maintenance_status' )
                
                if similar_files_maintenance_status is None:
                    
                    break
                    
                
                with self._lock:
                    
                    self._similar_files_maintenance_status = similar_files_maintenance_status
                    
                    searched_distances_to_count = self._similar_files_maintenance_status
                    
                    total_num_files = max( num_searched_estimate, sum( searched_distances_to_count.values() ) )
                    
                
            
            text = 'searching: {}'.format( HydrusData.ConvertValueRangeToPrettyString( num_searched_estimate, total_num_files ) )
            
            job_key.SetVariable( 'popup_text_1', text )
            job_key.SetVariable( 'popup_gauge_1', ( num_searched_estimate, total_num_files ) )
@@ -183,7 +202,7 @@ class DuplicatesManager( object ):

                break
                
            
            time.sleep( time_it_took ) # ideally 0.5s, but potentially longer
            time.sleep( min( 5, time_it_took ) ) # ideally 0.5s, but potentially longer
            
        
        job_key.Delete()
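The sleep change above is a small throttle fix: the loop rests roughly as long as the last chunk of work took, but now never more than five seconds, so one pathologically slow chunk cannot stall the whole maintenance loop. A minimal sketch of the pattern, with the work callable assumed:

import time

def throttled_loop( do_chunk_of_work ):
    
    while True:
        
        started = time.monotonic()
        
        work_remains = do_chunk_of_work()
        
        if not work_remains:
            
            break
            
        
        time_it_took = time.monotonic() - started
        
        # rest about as long as the work took, but cap it so the loop never stalls for long
        time.sleep( min( 5, time_it_took ) )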
@@ -2422,6 +2422,11 @@ class ServiceRepository( ServiceRestricted ):

        with self._lock:
            
            if self._service_type != HC.FILE_REPOSITORY:
                
                return
                
            
            if not self._CanSyncDownload():
                
                return
File diff suppressed because it is too large
@@ -0,0 +1,665 @@

import collections
import sqlite3
import typing

from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusData
from hydrus.core import HydrusDB
from hydrus.core import HydrusDBModule

from hydrus.client.db import ClientDBMaster
from hydrus.client.db import ClientDBServices

def GenerateFilesTableNames( service_id: int ) -> typing.Tuple[ str, str, str, str ]:
    
    suffix = str( service_id )
    
    current_files_table_name = 'current_files_{}'.format( suffix )
    
    deleted_files_table_name = 'deleted_files_{}'.format( suffix )
    
    pending_files_table_name = 'pending_files_{}'.format( suffix )
    
    petitioned_files_table_name = 'petitioned_files_{}'.format( suffix )
    
    return ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name )
    

def GenerateFilesTableName( service_id: int, status: int ) -> str:
    
    ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
    
    if status == HC.CONTENT_STATUS_CURRENT:
        
        return current_files_table_name
        
    elif status == HC.CONTENT_STATUS_DELETED:
        
        return deleted_files_table_name
        
    elif status == HC.CONTENT_STATUS_PENDING:
        
        return pending_files_table_name
        
    else:
        
        return petitioned_files_table_name
        
    
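# editor's note: a quick illustration of the naming scheme the helpers above produce,
# for a hypothetical service_id of 5:
#
#   GenerateFilesTableNames( 5 )  ->  ( 'current_files_5', 'deleted_files_5', 'pending_files_5', 'petitioned_files_5' )
#   GenerateFilesTableName( 5, HC.CONTENT_STATUS_CURRENT )  ->  'current_files_5'
#
# every query in this module formats one of these names into its SQL, rather than
# filtering one shared table on a service_id column.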
class ClientDBFilesStorage( HydrusDBModule.HydrusDBModule ):
    
    def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_texts: ClientDBMaster.ClientDBMasterTexts ):
        
        self.modules_services = modules_services
        self.modules_texts = modules_texts
        
        HydrusDBModule.HydrusDBModule.__init__( self, 'client files storage', cursor )
        
    
    def _GetInitialIndexGenerationTuples( self ):
        
        index_generation_tuples = []
        
        return index_generation_tuples
        
    
    def AddFiles( self, service_id, insert_rows ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
        self._c.executemany( 'INSERT OR IGNORE INTO {} VALUES ( ?, ? );'.format( current_files_table_name ), ( ( hash_id, timestamp ) for ( hash_id, timestamp ) in insert_rows ) )
        
        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for ( hash_id, timestamp ) in insert_rows ) )
        
        pending_changed = HydrusDB.GetRowCount( self._c ) > 0
        
        return pending_changed
        
    
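# editor's note: AddFiles records the new current rows and clears any matching pending
# rows in one go; the returned pending_changed flag tells the caller whether pending
# counts need refreshing. HydrusDB.GetRowCount appears to read the cursor's row count
# for the most recent statement, which is why it is checked immediately after the
# pending DELETE.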
    def ClearDeleteRecord( self, service_id, hash_ids ):
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
        
        num_deleted = HydrusDB.GetRowCount( self._c )
        
        return num_deleted
        
    
    def ClearLocalDeleteRecord( self, hash_ids = None ):
        
        # we delete from everywhere, but not for files currently in the trash
        
        service_ids_to_nums_cleared = {}
        
        local_non_trash_service_ids = self.modules_services.GetServiceIds( ( HC.COMBINED_LOCAL_FILE, HC.LOCAL_FILE_DOMAIN ) )
        
        if hash_ids is None:
            
            trash_current_files_table_name = GenerateFilesTableName( self.modules_services.trash_service_id, HC.CONTENT_STATUS_CURRENT )
            
            for service_id in local_non_trash_service_ids:
                
                deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
                
                self._c.execute( 'DELETE FROM {} WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( deleted_files_table_name, trash_current_files_table_name ) )
                
                num_cleared = HydrusDB.GetRowCount( self._c )
                
                service_ids_to_nums_cleared[ service_id ] = num_cleared
                
            
            self._c.execute( 'DELETE FROM local_file_deletion_reasons WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( trash_current_files_table_name ) )
            
        else:
            
            trashed_hash_ids = self.FilterCurrentHashIds( self.modules_services.trash_service_id, hash_ids )
            
            ok_to_clear_hash_ids = set( hash_ids ).difference( trashed_hash_ids )
            
            if len( ok_to_clear_hash_ids ) > 0:
                
                for service_id in local_non_trash_service_ids:
                    
                    deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
                    
                    self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
                    
                    num_cleared = HydrusDB.GetRowCount( self._c )
                    
                    service_ids_to_nums_cleared[ service_id ] = num_cleared
                    
                
                self._c.executemany( 'DELETE FROM local_file_deletion_reasons WHERE hash_id = ?;', ( ( hash_id, ) for hash_id in ok_to_clear_hash_ids ) )
                
            
        
        return service_ids_to_nums_cleared
        
    
    def CreateInitialTables( self ):
        
        self._c.execute( 'CREATE TABLE local_file_deletion_reasons ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );' )
        
    
    def FilterAllCurrentHashIds( self, hash_ids, just_these_service_ids = None ):
        
        if just_these_service_ids is None:
            
            service_ids = self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES )
            
        else:
            
            service_ids = just_these_service_ids
            
        
        current_hash_ids = set()
        
        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            for service_id in service_ids:
                
                current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
                
                hash_id_iterator = self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
                
                current_hash_ids.update( hash_id_iterator )
                
            
        
        return current_hash_ids
        
    
    def FilterAllPendingHashIds( self, hash_ids, just_these_service_ids = None ):
        
        if just_these_service_ids is None:
            
            service_ids = self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES )
            
        else:
            
            service_ids = just_these_service_ids
            
        
        pending_hash_ids = set()
        
        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            for service_id in service_ids:
                
                pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
                
                hash_id_iterator = self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
                
                pending_hash_ids.update( hash_id_iterator )
                
            
        
        return pending_hash_ids
        
    
    def FilterCurrentHashIds( self, service_id, hash_ids ):
        
        if service_id == self.modules_services.combined_file_service_id:
            
            return set( hash_ids )
            
        
        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
            
            current_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
            
        
        return current_hash_ids
        
    
    def FilterPendingHashIds( self, service_id, hash_ids ):
        
        if service_id == self.modules_services.combined_file_service_id:
            
            return set( hash_ids )
            
        
        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
            
            pending_hash_ids = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ) )
            
        
        return pending_hash_ids
        
    
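# editor's note: the four filters above share one pattern: spool the candidate hash_ids
# into a temporary integer table, then CROSS JOIN it against the per-service table. the
# CROSS JOIN makes sqlite iterate the temp table and probe the service table's hash_id
# primary key, which stays fast however large the candidate list gets. a minimal sketch
# of the idea, with the per-service table name assumed:
#
#   with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_table_name:
#       
#       matches = self._STS( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN current_files_5 USING ( hash_id );'.format( temp_table_name ) ) )
#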
    def DeletePending( self, service_id: int ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
        self._c.execute( 'DELETE FROM {};'.format( pending_files_table_name ) )
        self._c.execute( 'DELETE FROM {};'.format( petitioned_files_table_name ) )
        
    
    def DropFilesTables( self, service_id: int ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( current_files_table_name ) )
        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( deleted_files_table_name ) )
        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( pending_files_table_name ) )
        self._c.execute( 'DROP TABLE IF EXISTS {};'.format( petitioned_files_table_name ) )
        
    
    def GenerateFilesTables( self, service_id: int ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER );'.format( current_files_table_name ) )
        self._CreateIndex( current_files_table_name, [ 'timestamp' ] )
        
        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, timestamp INTEGER, original_timestamp INTEGER );'.format( deleted_files_table_name ) )
        self._CreateIndex( deleted_files_table_name, [ 'timestamp' ] )
        self._CreateIndex( deleted_files_table_name, [ 'original_timestamp' ] )
        
        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY );'.format( pending_files_table_name ) )
        
        self._c.execute( 'CREATE TABLE IF NOT EXISTS {} ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );'.format( petitioned_files_table_name ) )
        self._CreateIndex( petitioned_files_table_name, [ 'reason_id' ] )
        
    
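# editor's note: per-service schema summary, as created just above:
#
#   current_files_x     ( hash_id PK, timestamp )                      + index on timestamp
#   deleted_files_x     ( hash_id PK, timestamp, original_timestamp )  + indices on both timestamps
#   pending_files_x     ( hash_id PK )
#   petitioned_files_x  ( hash_id PK, reason_id )                      + index on reason_id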
    def GetAPendingHashId( self, service_id ):
        
        pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
        
        result = self._c.execute( 'SELECT hash_id FROM {};'.format( pending_files_table_name ) ).fetchone()
        
        if result is None:
            
            return None
            
        else:
            
            ( hash_id, ) = result
            
            return hash_id
            
        
    
    def GetAPetitionedHashId( self, service_id ):
        
        petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
        
        result = self._c.execute( 'SELECT hash_id FROM {};'.format( petitioned_files_table_name ) ).fetchone()
        
        if result is None:
            
            return None
            
        else:
            
            ( hash_id, ) = result
            
            return hash_id
            
        
    
    def GetCurrentFilesCount( self, service_id, only_viewable = False ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
        if only_viewable:
            
            # hashes to mimes
            result = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN files_info USING ( hash_id ) WHERE mime IN {};'.format( current_files_table_name, HydrusData.SplayListForDB( HC.SEARCHABLE_MIMES ) ) ).fetchone()
            
        else:
            
            result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( current_files_table_name ) ).fetchone()
            
        
        ( count, ) = result
        
        return count
        
    
    def GetCurrentFilesInboxCount( self, service_id ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
        result = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN file_inbox USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
        
        ( count, ) = result
        
        return count
        
    
    def GetCurrentHashIdsList( self, service_id ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
        hash_ids = self._STL( self._c.execute( 'SELECT hash_id FROM {};'.format( current_files_table_name ) ) )
        
        return hash_ids
        
    
    def GetCurrentFilesTotalSize( self, service_id ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
        # hashes to size
        result = self._c.execute( 'SELECT SUM( size ) FROM {} CROSS JOIN files_info USING ( hash_id );'.format( current_files_table_name ) ).fetchone()
        
        ( count, ) = result
        
        return count
        
    
    def GetCurrentHashIdsToTimestamps( self, service_id, hash_ids ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            rows = dict( self._c.execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) )
            
        
        return rows
        
    
    def GetCurrentTableJoinPhrase( self, service_id, table_name ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
        return '{} CROSS JOIN {} USING ( hash_id )'.format( table_name, current_files_table_name )
        
    
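# editor's note: a hypothetical example -- for service_id 5 and table_name 'files_info',
# GetCurrentTableJoinPhrase returns:
#
#   'files_info CROSS JOIN current_files_5 USING ( hash_id )'
#
# callers drop this straight into a FROM clause to restrict a query to that service's
# current files.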
    def GetCurrentTimestamp( self, service_id: int, hash_id: int ):
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        
        result = self._c.execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( hash_id, ) ).fetchone()
        
        if result is None:
            
            return None
            
        else:
            
            ( timestamp, ) = result
            
            return timestamp
            
        
    
    def GetDeletedFilesCount( self, service_id: int ) -> int:
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
        result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_files_table_name ) ).fetchone()
        
        ( count, ) = result
        
        return count
        
    
    def GetDeletionStatus( self, service_id, hash_id ):
        
        # can have a value here and just be in trash, so we fetch it whatever the end result
        result = self._c.execute( 'SELECT reason_id FROM local_file_deletion_reasons WHERE hash_id = ?;', ( hash_id, ) ).fetchone()
        
        if result is None:
            
            file_deletion_reason = 'Unknown deletion reason.'
            
        else:
            
            ( reason_id, ) = result
            
            file_deletion_reason = self.modules_texts.GetText( reason_id )
            
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
        is_deleted = False
        timestamp = None
        
        result = self._c.execute( 'SELECT timestamp FROM {} WHERE hash_id = ?;'.format( deleted_files_table_name ), ( hash_id, ) ).fetchone()
        
        if result is not None:
            
            is_deleted = True
            
            ( timestamp, ) = result
            
        
        return ( is_deleted, timestamp, file_deletion_reason )
        
    
    def GetExpectedTableNames( self ) -> typing.Collection[ str ]:
        
        expected_table_names = [
            'local_file_deletion_reasons',
        ]
        
        return expected_table_names
        
    
    def GetHashIdsToCurrentServiceIds( self, temp_hash_ids_table_name ):
        
        hash_ids_to_current_file_service_ids = collections.defaultdict( list )
        
        for service_id in self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES ):
            
            current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
            
            for hash_id in self._STI( self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ) ):
                
                hash_ids_to_current_file_service_ids[ hash_id ].append( service_id )
                
            
        
        return hash_ids_to_current_file_service_ids
        
    
    def GetHashIdsToServiceInfoDicts( self, temp_hash_ids_table_name ):
        
        hash_ids_to_current_file_service_ids_and_timestamps = collections.defaultdict( list )
        hash_ids_to_deleted_file_service_ids_and_timestamps = collections.defaultdict( list )
        hash_ids_to_pending_file_service_ids = collections.defaultdict( list )
        hash_ids_to_petitioned_file_service_ids = collections.defaultdict( list )
        
        for service_id in self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES ):
            
            ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
            
            for ( hash_id, timestamp ) in self._c.execute( 'SELECT hash_id, timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ):
                
                hash_ids_to_current_file_service_ids_and_timestamps[ hash_id ].append( ( service_id, timestamp ) )
                
            
            for ( hash_id, timestamp, original_timestamp ) in self._c.execute( 'SELECT hash_id, timestamp, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ):
                
                hash_ids_to_deleted_file_service_ids_and_timestamps[ hash_id ].append( ( service_id, timestamp, original_timestamp ) )
                
            
            for ( hash_id, ) in self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, pending_files_table_name ) ):
                
                hash_ids_to_pending_file_service_ids[ hash_id ].append( service_id )
                
            
            for ( hash_id, ) in self._c.execute( 'SELECT hash_id FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, petitioned_files_table_name ) ):
                
                hash_ids_to_petitioned_file_service_ids[ hash_id ].append( service_id )
                
            
        
        return (
            hash_ids_to_current_file_service_ids_and_timestamps,
            hash_ids_to_deleted_file_service_ids_and_timestamps,
            hash_ids_to_pending_file_service_ids,
            hash_ids_to_petitioned_file_service_ids
        )
        
    
    def GetNumLocal( self, service_id: int ) -> int:
        
        current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
        combined_local_current_files_table_name = GenerateFilesTableName( self.modules_services.combined_local_file_service_id, HC.CONTENT_STATUS_CURRENT )
        
        ( num_local, ) = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( current_files_table_name, combined_local_current_files_table_name ) ).fetchone()
        
        return num_local
        
    
    def GetPendingFilesCount( self, service_id: int ) -> int:
        
        pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
        
        result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( pending_files_table_name ) ).fetchone()
        
        ( count, ) = result
        
        return count
        
    
    def GetPetitionedFilesCount( self, service_id: int ) -> int:
        
        petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
        
        result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_files_table_name ) ).fetchone()
        
        ( count, ) = result
        
        return count
        
    
    def GetServiceIdCounts( self, hash_ids ) -> typing.Dict[ int, int ]:
        
        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            service_ids_to_counts = {}
            
            for service_id in self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES ):
                
                current_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_CURRENT )
                
                # temp hashes to files
                ( count, ) = self._c.execute( 'SELECT COUNT( * ) FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, current_files_table_name ) ).fetchone()
                
                service_ids_to_counts[ service_id ] = count
                
            
        
        return service_ids_to_counts
        
    
    def GetSomePetitionedRows( self, service_id: int ):
        
        petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
        
        petitioned_rows = list( HydrusData.BuildKeyToListDict( self._c.execute( 'SELECT reason_id, hash_id FROM {} ORDER BY reason_id LIMIT 100;'.format( petitioned_files_table_name ) ) ).items() )
        
        return petitioned_rows
        
    
    def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
        
        if content_type == HC.CONTENT_TYPE_HASH:
            
            tables_and_columns = []
            
            for service_id in self.modules_services.GetServiceIds( HC.SPECIFIC_FILE_SERVICES ):
                
                ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
                
                tables_and_columns.extend( [
                    ( current_files_table_name, 'hash_id' ),
                    ( deleted_files_table_name, 'hash_id' ),
                    ( pending_files_table_name, 'hash_id' ),
                    ( petitioned_files_table_name, 'hash_id' )
                ] )
                
            
            return tables_and_columns
            
        
        return []
        
    
    def GetUndeleteRows( self, service_id, hash_ids ):
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
        with HydrusDB.TemporaryIntegerTable( self._c, hash_ids, 'hash_id' ) as temp_hash_ids_table_name:
            
            rows = self._c.execute( 'SELECT hash_id, original_timestamp FROM {} CROSS JOIN {} USING ( hash_id );'.format( temp_hash_ids_table_name, deleted_files_table_name ) ).fetchall()
            
        
        return rows
        
    
    def PendFiles( self, service_id, hash_ids ):
        
        pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
        
        self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id ) VALUES ( ? );'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
        
    
    def PetitionFiles( self, service_id, reason_id, hash_ids ):
        
        petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
        
        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
        
        self._c.executemany( 'INSERT OR IGNORE INTO {} ( hash_id, reason_id ) VALUES ( ?, ? );'.format( petitioned_files_table_name ), ( ( hash_id, reason_id ) for hash_id in hash_ids ) )
        
    
    def RecordDeleteFiles( self, service_id, insert_rows ):
        
        deleted_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_DELETED )
        
        now = HydrusData.GetNow()
        
        self._c.executemany(
            'INSERT OR IGNORE INTO {} ( hash_id, timestamp, original_timestamp ) VALUES ( ?, ?, ? );'.format( deleted_files_table_name ),
            ( ( hash_id, now, original_timestamp ) for ( hash_id, original_timestamp ) in insert_rows )
        )
        
        num_new_deleted_files = HydrusDB.GetRowCount( self._c )
        
        return num_new_deleted_files
        
    
    def RescindPendFiles( self, service_id, hash_ids ):
        
        pending_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PENDING )
        
        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( pending_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
        
    
    def RescindPetitionFiles( self, service_id, hash_ids ):
        
        petitioned_files_table_name = GenerateFilesTableName( service_id, HC.CONTENT_STATUS_PETITIONED )
        
        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
        
    
    def RemoveFiles( self, service_id, hash_ids ):
        
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = GenerateFilesTableNames( service_id )
        
        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( current_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
        
        self._c.executemany( 'DELETE FROM {} WHERE hash_id = ?;'.format( petitioned_files_table_name ), ( ( hash_id, ) for hash_id in hash_ids ) )
        
        pending_changed = HydrusDB.GetRowCount( self._c ) > 0
        
        return pending_changed
        
    
    def SetFileDeletionReason( self, hash_ids, reason ):
        
        reason_id = self.modules_texts.GetTextId( reason )
        
        self._c.executemany( 'REPLACE INTO local_file_deletion_reasons ( hash_id, reason_id ) VALUES ( ?, ? );', ( ( hash_id, reason_id ) for hash_id in hash_ids ) )
        
    
@@ -75,22 +75,48 @@ class ClientDBMappingsStorage( HydrusDBModule.HydrusDBModule ):

        self._CreateIndex( petitioned_mappings_table_name, [ 'hash_id', 'tag_id' ], unique = True )
        
    
    def GetPendingCount( self, service_id ):
    def GetCurrentFilesCount( self, service_id: int ) -> int:
        
        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
        
        result = self._c.execute( 'SELECT COUNT( DISTINCT hash_id ) FROM {};'.format( current_mappings_table_name ) ).fetchone()
        
        ( count, ) = result
        
        return count
        
    
    def GetDeletedMappingsCount( self, service_id: int ) -> int:
        
        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
        
        result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( deleted_mappings_table_name ) ).fetchone()
        
        ( count, ) = result
        
        return count
        
    
    def GetPendingMappingsCount( self, service_id: int ) -> int:
        
        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
        
        result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( pending_mappings_table_name ) ).fetchone()
        
        if result is None:
            
            return 0
            
        else:
            
            ( count, ) = result
            
            return count
            
        
        ( count, ) = result
        
        return count
        
    
    def GetPetitionedMappingsCount( self, service_id: int ) -> int:
        
        ( current_mappings_table_name, deleted_mappings_table_name, pending_mappings_table_name, petitioned_mappings_table_name ) = GenerateMappingsTableNames( service_id )
        
        result = self._c.execute( 'SELECT COUNT( * ) FROM {};'.format( petitioned_mappings_table_name ) ).fetchone()
        
        ( count, ) = result
        
        return count
        
    
    def GetTablesAndColumnsThatUseDefinitions( self, content_type: int ) -> typing.List[ typing.Tuple[ str, str ] ]:
@@ -10,10 +10,15 @@ from hydrus.core import HydrusDBModule

from hydrus.core import HydrusGlobals as HG

from hydrus.client import ClientThreading
from hydrus.client.db import ClientDBFilesStorage
from hydrus.client.db import ClientDBServices

class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):
    
    def __init__( self, cursor: sqlite3.Cursor ):
    def __init__( self, cursor: sqlite3.Cursor, modules_services: ClientDBServices.ClientDBMasterServices, modules_files_storage: ClientDBFilesStorage.ClientDBFilesStorage ):
        
        self.modules_services = modules_services
        self.modules_files_storage = modules_files_storage
        
        HydrusDBModule.HydrusDBModule.__init__( self, 'client similar files', cursor )
        

@@ -548,7 +553,9 @@ class ClientDBSimilarFiles( HydrusDBModule.HydrusDBModule ):

        job_key.SetVariable( 'popup_text_1', 'purging search info of orphans' )
        
        self._c.execute( 'DELETE FROM shape_perceptual_hash_map WHERE hash_id NOT IN ( SELECT hash_id FROM current_files );' )
        ( current_files_table_name, deleted_files_table_name, pending_files_table_name, petitioned_files_table_name ) = ClientDBFilesStorage.GenerateFilesTableNames( self.modules_services.combined_local_file_service_id )
        
        self._c.execute( 'DELETE FROM shape_perceptual_hash_map WHERE hash_id NOT IN ( SELECT hash_id FROM {} );'.format( current_files_table_name ) )
        
        job_key.SetVariable( 'popup_text_1', 'gathering all leaves' )
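The hunk above is the storage migration in miniature: the old query named the monolithic current_files table directly, while the new code asks the file storage module for the per-service table name and formats it in. Any external script that touched current_files needs the same treatment, e.g. (service_id 5 assumed for illustration):

# old, monolithic table:   SELECT hash_id FROM current_files WHERE service_id = 5;
# new, per-service table:  SELECT hash_id FROM current_files_5;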
@@ -4759,18 +4759,10 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
            
            HG.cache_report_mode = not HG.cache_report_mode
            
-         elif name == 'callto_profile_mode':
-             
-             HG.callto_profile_mode = not HG.callto_profile_mode
-             
        elif name == 'db_report_mode':
            
            HG.db_report_mode = not HG.db_report_mode
            
-         elif name == 'db_profile_mode':
-             
-             HG.db_profile_mode = not HG.db_profile_mode
-             
        elif name == 'db_ui_hang_relief_mode':
            
            HG.db_ui_hang_relief_mode = not HG.db_ui_hang_relief_mode
            
@@ -4795,10 +4787,6 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
            
            HG.media_load_report_mode = not HG.media_load_report_mode
            
-         elif name == 'menu_profile_mode':
-             
-             HG.menu_profile_mode = not HG.menu_profile_mode
-             
        elif name == 'network_report_mode':
            
            HG.network_report_mode = not HG.network_report_mode
@@ -4807,18 +4795,40 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
            
            HG.phash_generation_report_mode = not HG.phash_generation_report_mode
            
+         elif name == 'profile_mode':
+             
+             if not HG.profile_mode:
+                 
+                 now = HydrusData.GetNow()
+                 
+                 with HG.profile_counter_lock:
+                     
+                     HG.profile_start_time = now
+                     HG.profile_slow_count = 0
+                     HG.profile_fast_count = 0
+                     
+                 
+                 HG.profile_mode = True
+                 
+                 HydrusData.ShowText( 'Profile mode on!' )
+                 
+             else:
+                 
+                 HG.profile_mode = False
+                 
+                 with HG.profile_counter_lock:
+                     
+                     ( slow, fast ) = ( HG.profile_slow_count, HG.profile_fast_count )
+                     
+                 
+                 HydrusData.ShowText( 'Profiling done: {} slow jobs, {} fast jobs'.format( HydrusData.ToHumanInt( slow ), HydrusData.ToHumanInt( fast ) ) )
+                 
        elif name == 'pubsub_report_mode':
            
            HG.pubsub_report_mode = not HG.pubsub_report_mode
            
-         elif name == 'pubsub_profile_mode':
-             
-             HG.pubsub_profile_mode = not HG.pubsub_profile_mode
-             
-         elif name == 'server_profile_mode':
-             
-             HG.server_profile_mode = not HG.server_profile_mode
-             
        elif name == 'shortcut_report_mode':
            
            HG.shortcut_report_mode = not HG.shortcut_report_mode
@@ -4835,19 +4845,6 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
            
            HG.thumbnail_debug_mode = not HG.thumbnail_debug_mode
            
-         elif name == 'ui_timer_profile_mode':
-             
-             HG.ui_timer_profile_mode = not HG.ui_timer_profile_mode
-             
-             if HG.ui_timer_profile_mode:
-                 
-                 HydrusData.ShowText( 'ui timer profile mode activated' )
-                 
-             else:
-                 
-                 HydrusData.ShowText( 'ui timer profile mode deactivated' )
-                 
        elif name == 'force_idle_mode':
            
            HG.force_idle_mode = not HG.force_idle_mode
@@ -5339,11 +5336,11 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
        
        try:
            
-             if HG.ui_timer_profile_mode:
+             if HG.profile_mode:
                
                summary = 'Profiling animation timer: ' + repr( window )
                
-                 HydrusData.Profile( summary, 'window.TIMERAnimationUpdate()', globals(), locals(), min_duration_ms = 3, show_summary = True )
+                 HydrusData.Profile( summary, 'window.TIMERAnimationUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )
                
            else:
                
@@ -5943,23 +5940,18 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
        
        ClientGUIMenus.AppendMenu( debug, debug_modes, 'debug modes' )
        
-         profile_modes = QW.QMenu( debug )
+         profiling = QW.QMenu( debug )
        
-         profile_mode_message = 'If something is running slow, you can turn on one of these modes to have hydrus gather information on how long each part takes to run. You probably want \'db profile mode\'.'
+         profile_mode_message = 'If something is running slow, you can turn on profile mode to have hydrus gather information on how long many jobs take to run.'
        profile_mode_message += os.linesep * 2
-         profile_mode_message += 'Turn the mode on, do the slow thing for a bit, and then turn it off. In your database directory will be a new profile log, which is really helpful for hydrus dev to figure out what in your case is running slow.'
+         profile_mode_message += 'Turn the mode on, do the slow thing for a bit, and then turn it off. In your database directory will be a new profile log, which is really helpful for hydrus dev to figure out what is running slow for you and how to fix it.'
        profile_mode_message += os.linesep * 2
        profile_mode_message += 'More information is available in the help, under \'reducing program lag\'.'
        
-         ClientGUIMenus.AppendMenuItem( profile_modes, 'what is this?', 'Show profile info.', QW.QMessageBox.information, self, 'Profile modes', profile_mode_message )
-         ClientGUIMenus.AppendMenuCheckItem( profile_modes, 'callto profile mode', 'Run detailed \'profiles\' on most threaded jobs and dump this information to the log (this is very useful for hydrus dev to have, if something is running slow for you in UI!).', HG.callto_profile_mode, self._SwitchBoolean, 'callto_profile_mode' )
-         ClientGUIMenus.AppendMenuCheckItem( profile_modes, 'client api profile mode', 'Run detailed \'profiles\' on every client api query and dump this information to the log (this is very useful for hydrus dev to have, if something is running slow for you!).', HG.server_profile_mode, self._SwitchBoolean, 'server_profile_mode' )
-         ClientGUIMenus.AppendMenuCheckItem( profile_modes, 'db profile mode', 'Run detailed \'profiles\' on every database query and dump this information to the log (this is very useful for hydrus dev to have, if something is running slow for you in the DB!).', HG.db_profile_mode, self._SwitchBoolean, 'db_profile_mode' )
-         ClientGUIMenus.AppendMenuCheckItem( profile_modes, 'menu profile mode', 'Run detailed \'profiles\' on menu actions.', HG.menu_profile_mode, self._SwitchBoolean, 'menu_profile_mode' )
-         ClientGUIMenus.AppendMenuCheckItem( profile_modes, 'pubsub profile mode', 'Run detailed \'profiles\' on every internal publisher/subscriber message and dump this information to the log. This can hammer your log with dozens of large dumps every second. Don\'t run it unless you know you need to.', HG.pubsub_profile_mode, self._SwitchBoolean, 'pubsub_profile_mode' )
-         ClientGUIMenus.AppendMenuCheckItem( profile_modes, 'ui timer profile mode', 'Run detailed \'profiles\' on every ui timer update. This will likely spam you!', HG.ui_timer_profile_mode, self._SwitchBoolean, 'ui_timer_profile_mode' )
+         ClientGUIMenus.AppendMenuItem( profiling, 'what is this?', 'Show profile info.', QW.QMessageBox.information, self, 'Profile modes', profile_mode_message )
+         ClientGUIMenus.AppendMenuCheckItem( profiling, 'profile mode', 'Run detailed \'profiles\'.', HG.profile_mode, self._SwitchBoolean, 'profile_mode' )
        
-         ClientGUIMenus.AppendMenu( debug, profile_modes, 'profile modes' )
+         ClientGUIMenus.AppendMenu( debug, profiling, 'profiling' )
        
        report_modes = QW.QMenu( debug )
@@ -7184,11 +7176,11 @@ Try to keep this below 10 million!'''
        
        if page is not None:
            
-             if HG.ui_timer_profile_mode:
+             if HG.profile_mode:
                
                summary = 'Profiling page timer: ' + repr( page )
                
-                 HydrusData.Profile( summary, 'page.REPEATINGPageUpdate()', globals(), locals(), min_duration_ms = 3, show_summary = True )
+                 HydrusData.Profile( summary, 'page.REPEATINGPageUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )
                
            else:
                
@@ -7229,11 +7221,11 @@ Try to keep this below 10 million!'''
        
        try:
            
-             if HG.ui_timer_profile_mode:
+             if HG.profile_mode:
                
                summary = 'Profiling ui update timer: ' + repr( window )
                
-                 HydrusData.Profile( summary, 'window.TIMERUIUpdate()', globals(), locals(), min_duration_ms = 3, show_summary = True )
+                 HydrusData.Profile( summary, 'window.TIMERUIUpdate()', globals(), locals(), min_duration_ms = HG.ui_timer_profile_min_job_time_ms )
                
            else:
@@ -7,6 +7,7 @@ from qtpy import QtWidgets as QW
from hydrus.core import HydrusConstants as HC
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusPaths
+ from hydrus.core import HydrusText

from hydrus.client import ClientExporting
from hydrus.client.gui import ClientGUIFunctions
@@ -250,47 +251,65 @@ class FileDropTarget( QC.QObject ):
            
            result = QC.Qt.MoveAction
            '''
-         elif urls_dnd and self._filenames_callable is not None:
+         elif urls_dnd or text_dnd:
            
            paths = []
            urls = []
            
-             for url in mime_data.urls():
-                 
-                 if url.isLocalFile():
-                     
-                     paths.append( os.path.normpath( url.toLocalFile() ) )
-                     
-                 else:
-                     
-                     urls.append( url.url() )
-                     
+             if urls_dnd:
+                 
+                 dnd_items = mime_data.urls()
+                 
+                 for dnd_item in dnd_items:
+                     
+                     if dnd_item.isLocalFile():
+                         
+                         paths.append( os.path.normpath( dnd_item.toLocalFile() ) )
+                         
+                     else:
+                         
+                         urls.append( dnd_item.url() )
+                         
+                 
+             else:
+                 
+                 text = mime_data.text()
+                 
+                 text_lines = HydrusText.DeserialiseNewlinedTexts( text )
+                 
+                 for text_line in text_lines:
+                     
+                     if text_line.startswith( 'http' ):
+                         
+                         urls.append( text_line )
+                         
+                     # ignore 'paths'
+                 
            
-             if len( paths ) > 0:
-                 
-                 QP.CallAfter( self._filenames_callable, paths ) # callafter to terminate dnd event now
-                 
+             if self._filenames_callable is not None:
+                 
+                 if len( paths ) > 0:
+                     
+                     QP.CallAfter( self._filenames_callable, paths ) # callafter to terminate dnd event now
+                     
+                 
            
-             if len( urls ) > 0:
-                 
-                 for url in urls:
-                     
-                     QP.CallAfter( self._url_callable, url ) # callafter to terminate dnd event now
-                     
+             if self._url_callable is not None:
+                 
+                 if len( urls ) > 0:
+                     
+                     for url in urls:
+                         
+                         QP.CallAfter( self._url_callable, url ) # callafter to terminate dnd event now
+                         
+                 
            
            result = QC.Qt.IgnoreAction
            
-         elif text_dnd and self._url_callable is not None:
-             
-             text = mime_data.text()
-             
-             QP.CallAfter( self._url_callable, text ) # callafter to terminate dnd event now
-             
-             result = QC.Qt.CopyAction
-             
        else:
            
            result = QC.Qt.IgnoreAction
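
The new text branch above splits a generic text drop into lines and keeps only the ones that look like urls, which is what makes multi-url drops from plain text sources work. A minimal sketch of that filtering, assuming DeserialiseNewlinedTexts simply splits on newlines and drops empty lines:

text = 'https://example.com/a\n\nsome stray text\nhttps://example.com/b'

text_lines = [ line.strip() for line in text.splitlines() if line.strip() != '' ]

urls = [ line for line in text_lines if line.startswith( 'http' ) ]

# urls == [ 'https://example.com/a', 'https://example.com/b' ]
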
@@ -28,6 +28,16 @@ from hydrus.client.gui.widgets import ClientGUICommon
from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing.options import FileImportOptions

+ def GetRetryIgnoredParam( window ):
+     
+     choice_tuples = [
+         ( 'retry all', None, 'retry all' ),
+         ( 'retry 404s', '^404', 'retry all 404s' ),
+         ( 'retry blacklisted', 'blacklisted!$', 'retry all blacklisted' )
+     ]
+     
+     return ClientGUIDialogsQuick.SelectFromListButtons( window, 'select what to retry', choice_tuples )
+ 
class EditFileSeedCachePanel( ClientGUIScrolledPanels.EditPanel ):
    
    def __init__( self, parent, controller, file_seed_cache ):
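
Each choice tuple is ( label, regex-or-None, description ); the regex, if any, is later matched against each ignored file seed's note. A minimal sketch of how the two default patterns behave, with hypothetical notes standing in for what a downloader records:

import re

notes = [ '404', '404 - not found', 'veto: blacklisted!', 'could not parse' ]

print( [ n for n in notes if re.search( '^404', n ) ] )          # notes beginning '404'
print( [ n for n in notes if re.search( 'blacklisted!$', n ) ] ) # notes ending 'blacklisted!'
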
@@ -523,16 +533,18 @@ class FileSeedCacheButton( ClientGUICommon.BetterButton ):
    
    def _RetryIgnored( self ):
        
-         message = 'Are you sure you want to retry all the files that were ignored/vetoed?'
+         try:
+             
+             ignored_regex = GetRetryIgnoredParam( self )
+             
+         except HydrusExceptions.CancelledException:
+             
+             return
+             
        
-         result = ClientGUIDialogsQuick.GetYesNo( self, message )
+         file_seed_cache = self._file_seed_cache_get_callable()
        
-         if result == QW.QDialog.Accepted:
-             
-             file_seed_cache = self._file_seed_cache_get_callable()
-             
-             file_seed_cache.RetryIgnored()
-             
+         file_seed_cache.RetryIgnored( ignored_regex = ignored_regex )
        
    
    def _ShowFileSeedCacheFrame( self ):
@@ -1,4 +1,4 @@
- import collections
+ import typing

from qtpy import QtCore as QC
from qtpy import QtWidgets as QW

@@ -22,9 +22,6 @@ def ClientToScreen( win: QW.QWidget, pos: QC.QPoint ) -> QC.QPoint:
    
    return QC.QPoint( 50, 50 )
    

- MAGIC_TEXT_PADDING = 1.1
- 
def ColourIsBright( colour: QG.QColor ):
    
    it_is_bright = colour.valueF() > 0.75
@@ -37,29 +34,50 @@ def ColourIsGreyish( colour: QG.QColor ):
    
    return it_is_greyish
    
- def ConvertPixelsToTextWidth( window, pixels, round_down = False ):
-     
-     twenty_chars_in_pixels = int( window.fontMetrics().boundingRect( 20 * 'x' ).width() * MAGIC_TEXT_PADDING )
-     
-     one_char_in_pixels = twenty_chars_in_pixels / 20
+ # OK, so we now have a fixed block for width, which we sometimes want to calculate in both directions.
+ # by normalising our 'one character' width, the inverse calculation uses the same coefficient and we aren't losing so much in rounding
+ NUM_CHARS_FOR_WIDTH_CALCULATIONS = 32
+ MAGIC_TEXT_PADDING = 1.1
+ 
+ def GetOneCharacterPixelHeight( window ) -> float:
+     
+     return window.fontMetrics().height() * MAGIC_TEXT_PADDING
+     
+ 
+ def GetOneCharacterPixelWidth( window ) -> float:
+     
+     char_block_width = window.fontMetrics().boundingRect( NUM_CHARS_FOR_WIDTH_CALCULATIONS * 'x' ).width() * MAGIC_TEXT_PADDING
+     
+     one_char_width = char_block_width / NUM_CHARS_FOR_WIDTH_CALCULATIONS
+     
+     return one_char_width
+     
+ 
+ def ConvertPixelsToTextWidth( window, pixels, round_down = False ) -> int:
+     
+     one_char_width = GetOneCharacterPixelWidth( window )
    
    if round_down:
        
-         return int( pixels // one_char_in_pixels )
+         return int( pixels // one_char_width )
        
    else:
        
-         return round( pixels / one_char_in_pixels )
+         return round( pixels / one_char_width )
        
    

- def ConvertTextToPixels( window, char_dimensions ):
+ def ConvertTextToPixels( window, char_dimensions ) -> typing.Tuple[ int, int ]:
    
    ( char_cols, char_rows ) = char_dimensions
    
-     return ( int( window.fontMetrics().boundingRect( char_cols * 'x' ).width() * MAGIC_TEXT_PADDING ), int( char_rows * window.fontMetrics().height() * MAGIC_TEXT_PADDING ) )
+     one_char_width = GetOneCharacterPixelWidth( window )
+     one_char_height = GetOneCharacterPixelHeight( window )
+     
+     return ( round( char_cols * one_char_width ), round( char_rows * one_char_height ) )
    

- def ConvertTextToPixelWidth( window, char_cols ):
-     
-     return int( window.fontMetrics().boundingRect( char_cols * 'x' ).width() * MAGIC_TEXT_PADDING )
+ def ConvertTextToPixelWidth( window, char_cols ) -> int:
+     
+     one_char_width = GetOneCharacterPixelWidth( window )
+     
+     return round( char_cols * one_char_width )
    

def DialogIsOpen():
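
The point of normalising on a single shared coefficient is that chars-to-pixels and pixels-to-chars now invert each other, so a saved column width no longer creeps by a character on every dialog reopen. A minimal sketch of the round trip, with a hypothetical one-character width standing in for the real font metrics:

ONE_CHAR_WIDTH = 7.3 # hypothetical normalised width in pixels

def chars_to_pixels( chars ):
    
    return round( chars * ONE_CHAR_WIDTH )

def pixels_to_chars( pixels ):
    
    return round( pixels / ONE_CHAR_WIDTH )

assert pixels_to_chars( chars_to_pixels( 64 ) ) == 64 # no drift across a save/load cycle
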
@@ -163,11 +163,11 @@ def GetEventCallable( callable, *args, **kwargs ):
    
    def event_callable( checked_state ):
        
-         if HG.menu_profile_mode:
+         if HG.profile_mode:
            
            summary = 'Profiling menu: ' + repr( callable )
            
-             HydrusData.Profile( summary, 'callable( *args, **kwargs )', globals(), locals(), min_duration_ms = 3, show_summary = True )
+             HydrusData.Profile( summary, 'callable( *args, **kwargs )', globals(), locals(), min_duration_ms = HG.menu_profile_min_job_time_ms )
            
        else:
@@ -998,16 +998,25 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
    
    def _STARTRetryIgnored( self ):
        
+         try:
+             
+             ignored_regex = ClientGUIFileSeedCache.GetRetryIgnoredParam( self )
+             
+         except HydrusExceptions.CancelledException:
+             
+             return
+             
        
        selected_query_headers = self._query_headers.GetData( only_selected = True )
        
        query_headers = [ query_header for query_header in selected_query_headers if query_header.CanRetryIgnored() ]
        
-         call = HydrusData.Call( self._RetryIgnored, query_headers )
+         call = HydrusData.Call( self._RetryIgnored, query_headers, ignored_regex )
        
        self._DoAsyncGetQueryLogContainers( query_headers, call )
        
    
-     def _RetryIgnored( self, query_headers: typing.Collection[ ClientImportSubscriptionQuery.SubscriptionQueryHeader ] ):
+     def _RetryIgnored( self, query_headers: typing.Collection[ ClientImportSubscriptionQuery.SubscriptionQueryHeader ], ignored_regex: typing.Optional[ str ] ):
        
        for query_header in query_headers:
            
@@ -1020,7 +1029,7 @@ class EditSubscriptionPanel( ClientGUIScrolledPanels.EditPanel ):
            
            query_log_container = self._names_to_edited_query_log_containers[ query_log_container_name ]
            
-             query_log_container.GetFileSeedCache().RetryIgnored()
+             query_log_container.GetFileSeedCache().RetryIgnored( ignored_regex = ignored_regex )
            
            query_header.UpdateFileStatus( query_log_container )
            
@@ -1889,6 +1898,15 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
    
    def _STARTRetryIgnored( self ):
        
+         try:
+             
+             ignored_regex = ClientGUIFileSeedCache.GetRetryIgnoredParam( self )
+             
+         except HydrusExceptions.CancelledException:
+             
+             return
+             
        
        query_headers = []
        
        subscriptions = self._subscriptions.GetData( only_selected = True )
        
@@ -1900,12 +1918,12 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
        
        query_headers = [ query_header for query_header in query_headers if query_header.CanRetryIgnored() ]
        
-         call = HydrusData.Call( self._RetryIgnored, query_headers )
+         call = HydrusData.Call( self._RetryIgnored, query_headers, ignored_regex )
        
        self._DoAsyncGetQueryLogContainers( query_headers, call )
        
    
-     def _RetryIgnored( self, query_headers: typing.Iterable[ ClientImportSubscriptionQuery.SubscriptionQueryHeader ] ):
+     def _RetryIgnored( self, query_headers: typing.Iterable[ ClientImportSubscriptionQuery.SubscriptionQueryHeader ], ignored_regex: typing.Optional[ str ] ):
        
        for query_header in query_headers:
            
@@ -1918,7 +1936,7 @@ class EditSubscriptionsPanel( ClientGUIScrolledPanels.EditPanel ):
            
            query_log_container = self._names_to_edited_query_log_containers[ query_log_container_name ]
            
-             query_log_container.GetFileSeedCache().RetryIgnored()
+             query_log_container.GetFileSeedCache().RetryIgnored( ignored_regex = ignored_regex )
            
            query_header.UpdateFileStatus( query_log_container )
@@ -2895,11 +2895,11 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
        
        self._show_all = QW.QCheckBox( self )
        
-         listctrl_panel = ClientGUIListCtrl.BetterListCtrlPanel( self )
+         self._listctrl_panel = ClientGUIListCtrl.BetterListCtrlPanel( self )
        
-         self._tag_parents = ClientGUIListCtrl.BetterListCtrl( listctrl_panel, CGLC.COLUMN_LIST_TAG_PARENTS.ID, 8, self._ConvertPairToListCtrlTuples, delete_key_callback = self._ListCtrlActivated, activation_callback = self._ListCtrlActivated )
+         self._tag_parents = ClientGUIListCtrl.BetterListCtrl( self._listctrl_panel, CGLC.COLUMN_LIST_TAG_PARENTS.ID, 8, self._ConvertPairToListCtrlTuples, delete_key_callback = self._ListCtrlActivated, activation_callback = self._ListCtrlActivated )
        
-         listctrl_panel.SetListCtrl( self._tag_parents )
+         self._listctrl_panel.SetListCtrl( self._tag_parents )
        
        self._tag_parents.Sort()
        
@@ -2910,14 +2910,16 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
        menu_items.append( ( 'normal', 'from .txt file', 'Load parents from a .txt file.', HydrusData.Call( self._ImportFromTXT, False ) ) )
        menu_items.append( ( 'normal', 'from .txt file (only add pairs--no deletions)', 'Load parents from a .txt file.', HydrusData.Call( self._ImportFromTXT, True ) ) )
        
-         listctrl_panel.AddMenuButton( 'import', menu_items )
+         self._listctrl_panel.AddMenuButton( 'import', menu_items )
        
        menu_items = []
        
        menu_items.append( ( 'normal', 'to clipboard', 'Save selected parents to your clipboard.', self._ExportToClipboard ) )
        menu_items.append( ( 'normal', 'to .txt file', 'Save selected parents to a .txt file.', self._ExportToTXT ) )
        
-         listctrl_panel.AddMenuButton( 'export', menu_items, enabled_only_on_selection = True )
+         self._listctrl_panel.AddMenuButton( 'export', menu_items, enabled_only_on_selection = True )
+         
+         self._listctrl_panel.setEnabled( False )
        
        self._children = ClientGUIListBoxes.ListBoxTagsStringsAddRemove( self, self._service_key, ClientTags.TAG_DISPLAY_ACTUAL )
        self._parents = ClientGUIListBoxes.ListBoxTagsStringsAddRemove( self, self._service_key, ClientTags.TAG_DISPLAY_ACTUAL )
        
@@ -2972,7 +2974,7 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
        QP.AddToLayout( vbox, self._sync_status_st, CC.FLAGS_EXPAND_PERPENDICULAR )
        QP.AddToLayout( vbox, self._count_st, CC.FLAGS_EXPAND_PERPENDICULAR )
        QP.AddToLayout( vbox, ClientGUICommon.WrapInText(self._show_all,self,'show all pairs'), CC.FLAGS_EXPAND_PERPENDICULAR )
-         QP.AddToLayout( vbox, listctrl_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
+         QP.AddToLayout( vbox, self._listctrl_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
        QP.AddToLayout( vbox, self._add, CC.FLAGS_ON_RIGHT )
        QP.AddToLayout( vbox, tags_box, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
        QP.AddToLayout( vbox, input_box, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
        
@@ -3314,8 +3316,6 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
            
            QW.QMessageBox.information( self, 'Information', 'Uneven number of tags in clipboard!' )
            
            return
            
        
        pairs = []
        
@@ -3704,6 +3704,7 @@ class ManageTagParents( ClientGUIScrolledPanels.ManagePanel ):
        
        self._count_st.setText( 'Starting with '+HydrusData.ToHumanInt(len(original_statuses_to_pairs[HC.CONTENT_STATUS_CURRENT]))+' pairs.' )
        
+         self._listctrl_panel.setEnabled( True )
        self._child_input.setEnabled( True )
        self._parent_input.setEnabled( True )
@@ -3858,11 +3859,11 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
        
        self._show_all = QW.QCheckBox( self )
        
-         listctrl_panel = ClientGUIListCtrl.BetterListCtrlPanel( self )
+         self._listctrl_panel = ClientGUIListCtrl.BetterListCtrlPanel( self )
        
-         self._tag_siblings = ClientGUIListCtrl.BetterListCtrl( listctrl_panel, CGLC.COLUMN_LIST_TAG_SIBLINGS.ID, 8, self._ConvertPairToListCtrlTuples, delete_key_callback = self._ListCtrlActivated, activation_callback = self._ListCtrlActivated )
+         self._tag_siblings = ClientGUIListCtrl.BetterListCtrl( self._listctrl_panel, CGLC.COLUMN_LIST_TAG_SIBLINGS.ID, 8, self._ConvertPairToListCtrlTuples, delete_key_callback = self._ListCtrlActivated, activation_callback = self._ListCtrlActivated )
        
-         listctrl_panel.SetListCtrl( self._tag_siblings )
+         self._listctrl_panel.SetListCtrl( self._tag_siblings )
        
        self._tag_siblings.Sort()
        
@@ -3873,14 +3874,16 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
        menu_items.append( ( 'normal', 'from .txt file', 'Load siblings from a .txt file.', HydrusData.Call( self._ImportFromTXT, False ) ) )
        menu_items.append( ( 'normal', 'from .txt file (only add pairs--no deletions)', 'Load siblings from a .txt file.', HydrusData.Call( self._ImportFromTXT, True ) ) )
        
-         listctrl_panel.AddMenuButton( 'import', menu_items )
+         self._listctrl_panel.AddMenuButton( 'import', menu_items )
        
        menu_items = []
        
        menu_items.append( ( 'normal', 'to clipboard', 'Save selected siblings to your clipboard.', self._ExportToClipboard ) )
        menu_items.append( ( 'normal', 'to .txt file', 'Save selected siblings to a .txt file.', self._ExportToTXT ) )
        
-         listctrl_panel.AddMenuButton( 'export', menu_items, enabled_only_on_selection = True )
+         self._listctrl_panel.AddMenuButton( 'export', menu_items, enabled_only_on_selection = True )
+         
+         self._listctrl_panel.setEnabled( False )
        
        self._old_siblings = ClientGUIListBoxes.ListBoxTagsStringsAddRemove( self, self._service_key, ClientTags.TAG_DISPLAY_ACTUAL )
        self._new_sibling = ClientGUICommon.BetterStaticText( self )
        
@@ -3934,7 +3937,7 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
        QP.AddToLayout( vbox, self._sync_status_st, CC.FLAGS_EXPAND_PERPENDICULAR )
        QP.AddToLayout( vbox, self._count_st, CC.FLAGS_EXPAND_PERPENDICULAR )
        QP.AddToLayout( vbox, ClientGUICommon.WrapInText(self._show_all,self,'show all pairs'), CC.FLAGS_EXPAND_PERPENDICULAR )
-         QP.AddToLayout( vbox, listctrl_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
+         QP.AddToLayout( vbox, self._listctrl_panel, CC.FLAGS_EXPAND_BOTH_WAYS )
        QP.AddToLayout( vbox, self._add, CC.FLAGS_ON_RIGHT )
        QP.AddToLayout( vbox, text_box, CC.FLAGS_EXPAND_SIZER_BOTH_WAYS )
        QP.AddToLayout( vbox, input_box, CC.FLAGS_EXPAND_SIZER_PERPENDICULAR )
        
@@ -4733,6 +4736,8 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
        
        self._count_st.setText( 'Starting with '+HydrusData.ToHumanInt(len(original_statuses_to_pairs[HC.CONTENT_STATUS_CURRENT]))+' pairs.' )
        
+         self._listctrl_panel.setEnabled( True )
+         
        self._old_input.setEnabled( True )
        self._new_input.setEnabled( True )
@@ -1246,11 +1246,11 @@ class CallAfterEventCatcher( QC.QObject ):
        
        if event.type() == CallAfterEventType and isinstance( event, CallAfterEvent ):
            
-             if HG.callto_profile_mode:
+             if HG.profile_mode:
                
                summary = 'Profiling CallAfter Event: {}'.format( event._fn )
                
-                 HydrusData.Profile( summary, 'event.Execute()', globals(), locals(), min_duration_ms = 3, show_summary = True )
+                 HydrusData.Profile( summary, 'event.Execute()', globals(), locals(), min_duration_ms = HG.callto_profile_min_job_time_ms )
                
            else:
@@ -1819,12 +1819,21 @@ class CanvasPanel( Canvas ):
        self.ShowMenu()
        
    
+     def ClearMedia( self ):
+         
+         self._hidden_page_current_media = None
+         
+         Canvas.ClearMedia( self )
+         
+     
    def PageHidden( self ):
        
-         self._hidden_page_current_media = self._current_media
+         hidden_page_current_media = self._current_media
        
        self.ClearMedia()
        
+         self._hidden_page_current_media = hidden_page_current_media
+         
    
    def PageShown( self ):
@@ -242,6 +242,8 @@ class BetterListCtrl( QW.QTreeWidget ):
        
        LAST_COLUMN_SNAP_DISTANCE_CHARS = 5
        
+         total_fixed_columns_width = 0
+         
        for visual_index in range( num_columns ):
            
            logical_index = header.logicalIndex( visual_index )
            
@@ -252,18 +254,22 @@ class BetterListCtrl( QW.QTreeWidget ):
            
            if visual_index == last_column_index:
                
-                 if self.verticalScrollBar().isVisible():
-                     
-                     width_pixels += max( 0, min( self.verticalScrollBar().width(), 20 ) )
-                     
+                 # testing if scrollbar is visible is unreliable, since we don't know if it is laid out correct yet (we could be doing that now!)
+                 # so let's just hack it
+                 
+                 width_pixels = self.width() - ( self.frameWidth() * 2 ) - total_fixed_columns_width
+                 
+             else:
+                 
+                 total_fixed_columns_width += width_pixels
+                 
            
            width_chars = ClientGUIFunctions.ConvertPixelsToTextWidth( main_tlw, width_pixels )
            
            if visual_index == last_column_index:
                
-                 # here's the snap magic
-                 width_chars = round( width_chars // LAST_COLUMN_SNAP_DISTANCE_CHARS ) * LAST_COLUMN_SNAP_DISTANCE_CHARS
+                 # here's the snap magic. final width_chars is always a multiple of 5
+                 width_chars = round( width_chars / LAST_COLUMN_SNAP_DISTANCE_CHARS ) * LAST_COLUMN_SNAP_DISTANCE_CHARS
                
            
            columns.append( ( column_type, width_chars, shown ) )
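
The one-character change from `//` to `/` is what lets the snap go in both directions: floor division always rounds the last column down to the previous multiple of five characters, while true division plus round picks the nearest multiple, halving the worst-case error. A minimal sketch:

SNAP = 5

def snap_down( width_chars ):
    
    return round( width_chars // SNAP ) * SNAP # old behaviour: 63 -> 60, error of 3

def snap_nearest( width_chars ):
    
    return round( width_chars / SNAP ) * SNAP # new behaviour: 63 -> 65, error of 2
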
@@ -751,6 +757,8 @@ class BetterListCtrl( QW.QTreeWidget ):
        
        width = 0
        
+         width += self.frameWidth() * 2
+         
        # all but last column
        
        for i in range( self.columnCount() - 1 ):
            
@@ -772,6 +780,10 @@ class BetterListCtrl( QW.QTreeWidget ):
            
            width += self.columnWidth( self.columnCount() - 1 )
            
+             # this is a hack to stop the thing suddenly growing to screen width in a weird resize loop
+             # I couldn't reproduce this error, so I assume it is a QSS or whatever font/style/scrollbar on some systems that caused inaccurate columnWidth result
+             width = min( width, self.width() )
+             
        else:
            
            last_column_chars = self._original_column_list_status.GetColumnWidth( last_column_type )
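
Clamping the computed total against the widget's current width breaks the suspected feedback loop: an overestimated column sum can no longer enlarge the widget, which would in turn enlarge the next estimate. A minimal sketch of the idea:

def clamp_sizehint_width( computed_width, current_widget_width ):
    
    # an inaccurate (too large) column sum is capped, so it cannot feed a grow/re-measure loop
    return min( computed_width, current_widget_width )
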
@@ -783,8 +795,6 @@ class BetterListCtrl( QW.QTreeWidget ):
        
        #
        
-         width += self.frameWidth() * 2
-         
        if self._forced_height_num_chars is None:
            
            num_rows = self._initial_height_num_chars
@@ -2136,9 +2136,18 @@ class ManagementPanelImporterMultipleGallery( ManagementPanelImporter ):
    
    def _RetryIgnored( self ):
        
+         try:
+             
+             ignored_regex = ClientGUIFileSeedCache.GetRetryIgnoredParam( self )
+             
+         except HydrusExceptions.CancelledException:
+             
+             return
+             
        
        for gallery_import in self._gallery_importers_listctrl.GetData( only_selected = True ):
            
-             gallery_import.RetryIgnored()
+             gallery_import.RetryIgnored( ignored_regex = ignored_regex )
            
        
    
@@ -2967,9 +2976,18 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
    
    def _RetryIgnored( self ):
        
+         try:
+             
+             ignored_regex = ClientGUIFileSeedCache.GetRetryIgnoredParam( self )
+             
+         except HydrusExceptions.CancelledException:
+             
+             return
+             
        
        for watcher in self._watchers_listctrl.GetData( only_selected = True ):
            
-             watcher.RetryIgnored()
+             watcher.RetryIgnored( ignored_regex = ignored_regex )
@@ -811,20 +811,6 @@ class Page( QW.QSplitter ):
        
    
-     def ShowHideSplit( self ):
-         
-         if QP.SplitterVisibleCount( self ) > 1:
-             
-             QP.Unsplit( self, self._search_preview_split )
-             
-             self._media_panel.SetFocusedMedia( None )
-             
-         else:
-             
-             self.SetSplitterPositions()
-             
-         
-     
    def SetMediaFocus( self ):
        
        self._media_panel.setFocus( QC.Qt.OtherFocusReason )
        
@@ -873,6 +859,20 @@ class Page( QW.QSplitter ):
        
    
+     def ShowHideSplit( self ):
+         
+         if QP.SplitterVisibleCount( self ) > 1:
+             
+             QP.Unsplit( self, self._search_preview_split )
+             
+             self._media_panel.SetFocusedMedia( None )
+             
+         else:
+             
+             self.SetSplitterPositions()
+             
+         
+     
    def _StartInitialMediaResultsLoad( self ):
        
        def qt_code_status( status ):
@@ -2,6 +2,7 @@ import collections
import itertools
import os
import random
+ import re
import threading
import time
import traceback

@@ -91,11 +92,14 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):
    
    def _CheckTagsVeto( self, tags, tag_import_options: TagImportOptions.TagImportOptions ):
        
-         tags_to_siblings = HG.client_controller.Read( 'tag_siblings_lookup', CC.COMBINED_TAG_SERVICE_KEY, tags )
-         
-         all_chain_tags = set( itertools.chain.from_iterable( tags_to_siblings.values() ) )
-         
-         tag_import_options.CheckTagsVeto( tags, all_chain_tags )
+         if len( tags ) > 0:
+             
+             tags_to_siblings = HG.client_controller.Read( 'tag_siblings_lookup', CC.COMBINED_TAG_SERVICE_KEY, tags )
+             
+             all_chain_tags = set( itertools.chain.from_iterable( tags_to_siblings.values() ) )
+             
+             tag_import_options.CheckTagsVeto( tags, all_chain_tags )
+             
        
    
    def _GetSerialisableInfo( self ):
@@ -2330,7 +2334,7 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
        self.NotifyFileSeedsUpdated( failed_file_seeds )
        
    
-     def RetryIgnored( self ):
+     def RetryIgnored( self, ignored_regex = None ):
        
        with self._lock:
            
@@ -2338,6 +2342,14 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):
            
            for file_seed in ignored_file_seeds:
                
+                 if ignored_regex is not None:
+                     
+                     if re.search( ignored_regex, file_seed.note ) is None:
+                         
+                         continue
+                         
+                     
+                 
                file_seed.SetStatus( CC.STATUS_UNKNOWN )
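
So a None regex keeps the old behaviour of retrying every ignored seed, while a pattern narrows the retry to seeds whose note matches. A minimal standalone sketch of the filter, with hypothetical seed notes in place of real FileSeed objects:

import re

def seeds_to_retry( notes, ignored_regex = None ):
    
    if ignored_regex is None:
        
        return list( notes ) # retry everything, the pre-447 behaviour
        
    
    return [ note for note in notes if re.search( ignored_regex, note ) is not None ]

seeds_to_retry( [ '404', 'veto: blacklisted!' ] )         # both notes
seeds_to_retry( [ '404', 'veto: blacklisted!' ], '^404' ) # just the 404
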
@@ -734,11 +734,11 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):
        
    
-     def RetryIgnored( self ):
+     def RetryIgnored( self, ignored_regex = None ):
        
        with self._lock:
            
-             self._file_seed_cache.RetryIgnored()
+             self._file_seed_cache.RetryIgnored( ignored_regex = ignored_regex )
            
        
    
@@ -1349,11 +1349,11 @@ class Subscription( HydrusSerialisable.SerialisableBaseNamed ):
        
    
-     def RetryIgnored( self ):
+     def RetryIgnored( self, ignored_regex = None ):
        
        for query_header in self._query_headers:
            
-             query_header.RetryIgnored()
+             query_header.RetryIgnored( ignored_regex = ignored_regex )
            
        
    
@@ -1443,11 +1443,11 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):
        
    
-     def RetryIgnored( self ):
+     def RetryIgnored( self, ignored_regex = None ):
        
        with self._lock:
            
-             self._file_seed_cache.RetryIgnored()
+             self._file_seed_cache.RetryIgnored( ignored_regex = ignored_regex )
@@ -1,4 +1,5 @@
import collections
+ import itertools
import random
import typing

@@ -74,25 +75,6 @@ def GetDuplicateComparisonScore( shown_media, comparison_media ):
    
    return total_score
    
- NICE_RESOLUTIONS = {}
- 
- NICE_RESOLUTIONS[ ( 640, 480 ) ] = '480p'
- NICE_RESOLUTIONS[ ( 1280, 720 ) ] = '720p'
- NICE_RESOLUTIONS[ ( 1920, 1080 ) ] = '1080p'
- NICE_RESOLUTIONS[ ( 3840, 2060 ) ] = '4k'
- 
- NICE_RATIOS = {}
- 
- NICE_RATIOS[ 1 ] = '1:1'
- NICE_RATIOS[ 4 / 3 ] = '4:3'
- NICE_RATIOS[ 5 / 4 ] = '5:4'
- NICE_RATIOS[ 16 / 9 ] = '16:9'
- NICE_RATIOS[ 21 / 9 ] = '21:9'
- NICE_RATIOS[ 47 / 20 ] = '2.35:1'
- NICE_RATIOS[ 9 / 16 ] = '9:16'
- NICE_RATIOS[ 2 / 3 ] = '2:3'
- NICE_RATIOS[ 4 / 5 ] = '4:5'
- 
def GetDuplicateComparisonStatements( shown_media, comparison_media ):
    
    new_options = HG.client_controller.new_options
@@ -252,9 +234,9 @@ def GetDuplicateComparisonStatements( shown_media, comparison_media ):
        score = -duplicate_comparison_score_higher_resolution
        
    
-     if s_res in NICE_RESOLUTIONS:
+     if s_res in HC.NICE_RESOLUTIONS:
        
-         s_string = NICE_RESOLUTIONS[ s_res ]
+         s_string = HC.NICE_RESOLUTIONS[ s_res ]
        
    else:
        
@@ -266,9 +248,9 @@ def GetDuplicateComparisonStatements( shown_media, comparison_media ):
        
    
-     if c_res in NICE_RESOLUTIONS:
+     if c_res in HC.NICE_RESOLUTIONS:
        
-         c_string = NICE_RESOLUTIONS[ c_res ]
+         c_string = HC.NICE_RESOLUTIONS[ c_res ]
        
    else:
        
@@ -289,14 +271,14 @@ def GetDuplicateComparisonStatements( shown_media, comparison_media ):
    s_ratio = s_w / s_h
    c_ratio = c_w / c_h
    
-     s_nice = s_ratio in NICE_RATIOS
-     c_nice = c_ratio in NICE_RATIOS
+     s_nice = s_ratio in HC.NICE_RATIOS
+     c_nice = c_ratio in HC.NICE_RATIOS
    
    if s_nice or c_nice:
        
        if s_nice:
            
-             s_string = NICE_RATIOS[ s_ratio ]
+             s_string = HC.NICE_RATIOS[ s_ratio ]
            
        else:
            
@@ -305,7 +287,7 @@ def GetDuplicateComparisonStatements( shown_media, comparison_media ):
    
    if c_nice:
        
-         c_string = NICE_RATIOS[ c_ratio ]
+         c_string = HC.NICE_RATIOS[ c_ratio ]
        
    else:
        
@@ -413,7 +395,7 @@ def GetDuplicateComparisonStatements( shown_media, comparison_media ):
    score = 0
    
    
-     statement = '{} {} {}'.format( ClientData.TimestampToPrettyTimeDelta( s_ts ), operator, ClientData.TimestampToPrettyTimeDelta( c_ts ) )
+     statement = '{}, {} {}'.format( ClientData.TimestampToPrettyTimeDelta( s_ts, history_suffix = ' old' ), operator, ClientData.TimestampToPrettyTimeDelta( c_ts, history_suffix = ' old' ) )
    
    statements_and_scores[ 'time_imported' ] = ( statement, score )
@@ -814,6 +796,7 @@ class MediaList( object ):
        self._file_service_key = file_service_key
        
        self._hashes = set()
+         self._hashes_ordered = []
        
        self._hashes_to_singleton_media = {}
        self._hashes_to_collected_media = {}
        
@@ -985,29 +968,34 @@ class MediaList( object ):
    def _RecalcHashes( self ):
        
        self._hashes = set()
+         self._hashes_ordered = []
        
        self._hashes_to_singleton_media = {}
        self._hashes_to_collected_media = {}
        
-         for media in self._collected_media:
-             
-             hashes = media.GetHashes()
-             
-             self._hashes.update( hashes )
-             
-             for hash in hashes:
-                 
-                 self._hashes_to_collected_media[ hash ] = media
-                 
-             
-         
-         for media in self._singleton_media:
-             
-             hash = media.GetHash()
-             
-             self._hashes.add( hash )
-             
-             self._hashes_to_singleton_media[ hash ] = media
-             
+         for m in self._sorted_media:
+             
+             if isinstance( m, MediaCollection ):
+                 
+                 hashes = m.GetHashes( ordered = True )
+                 
+                 self._hashes.update( hashes )
+                 self._hashes_ordered.extend( hashes )
+                 
+                 for hash in hashes:
+                     
+                     self._hashes_to_collected_media[ hash ] = m
+                     
+                 
+             else:
+                 
+                 hash = m.GetHash()
+                 
+                 self._hashes.add( hash )
+                 self._hashes_ordered.append( hash )
+                 
+                 self._hashes_to_singleton_media[ hash ] = m
@@ -1068,6 +1056,7 @@ class MediaList( object ):
            addable_media.append( media )
            
            self._hashes.add( hash )
+             self._hashes_ordered.append( hash )
            
            self._hashes_to_singleton_media[ hash ] = media
            
@@ -1490,9 +1479,16 @@ class MediaList( object ):
    
    def GetHashes( self, has_location = None, discriminant = None, not_uploaded_to = None, ordered = False ):
        
-         if has_location is None and discriminant is None and not_uploaded_to is None and not ordered:
+         if has_location is None and discriminant is None and not_uploaded_to is None:
            
-             return self._hashes
+             if ordered:
+                 
+                 return self._hashes_ordered
+                 
+             else:
+                 
+                 return self._hashes
+                 
            
        else:
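
Keeping a parallel ordered list next to the membership set is what makes the cached 'ordered' read cheap: the set answers containment, the list preserves order, and both are maintained together at insertion time. A minimal sketch of the invariant:

hashes = set()
hashes_ordered = []

def add_hash( h ):
    
    if h not in hashes:
        
        hashes.add( h )            # O(1) membership for HasAnyOfTheseHashes-style checks
        hashes_ordered.append( h ) # free ordered view, no per-request sort
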
@@ -1563,7 +1559,7 @@ class MediaList( object ):
        return self._sorted_media
        
    
-     def HasAnyOfTheseHashes( self, hashes ):
+     def HasAnyOfTheseHashes( self, hashes: set ):
        
        return not hashes.isdisjoint( self._hashes )
        
@@ -2431,7 +2427,10 @@ class MediaSingleton( Media ):
        
        info_string = HydrusData.ToHumanBytes( size ) + ' ' + HC.mime_string_lookup[ mime ]
        
-         if width is not None and height is not None: info_string += ' (' + HydrusData.ToHumanInt( width ) + 'x' + HydrusData.ToHumanInt( height ) + ')'
+         if width is not None and height is not None:
+             
+             info_string += ' ({})'.format( HydrusData.ConvertResolutionToPrettyString( ( width, height ) ) )
+             
        
        if duration is not None:
@@ -81,7 +81,7 @@ options = {}
# Misc

NETWORK_VERSION = 20
- SOFTWARE_VERSION = 446
+ SOFTWARE_VERSION = 447
CLIENT_API_VERSION = 17

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )

@@ -286,6 +286,28 @@ MAINTENANCE_SHUTDOWN = 1
MAINTENANCE_FORCED = 2
MAINTENANCE_ACTIVE = 3

+ NICE_RESOLUTIONS = {}
+ 
+ NICE_RESOLUTIONS[ ( 640, 480 ) ] = '480p'
+ NICE_RESOLUTIONS[ ( 1280, 720 ) ] = '720p'
+ NICE_RESOLUTIONS[ ( 1920, 1080 ) ] = '1080p'
+ NICE_RESOLUTIONS[ ( 3840, 2160 ) ] = '4k'
+ NICE_RESOLUTIONS[ ( 720, 1280 ) ] = 'vertical 720p'
+ NICE_RESOLUTIONS[ ( 1080, 1920 ) ] = 'vertical 1080p'
+ NICE_RESOLUTIONS[ ( 2160, 3840 ) ] = 'vertical 4k'
+ 
+ NICE_RATIOS = {}
+ 
+ NICE_RATIOS[ 1 ] = '1:1'
+ NICE_RATIOS[ 4 / 3 ] = '4:3'
+ NICE_RATIOS[ 5 / 4 ] = '5:4'
+ NICE_RATIOS[ 16 / 9 ] = '16:9'
+ NICE_RATIOS[ 21 / 9 ] = '21:9'
+ NICE_RATIOS[ 47 / 20 ] = '2.35:1'
+ NICE_RATIOS[ 9 / 16 ] = '9:16'
+ NICE_RATIOS[ 2 / 3 ] = '2:3'
+ NICE_RATIOS[ 4 / 5 ] = '4:5'
+ 
GET_DATA = 0
POST_DATA = 1
POST_PETITIONS = 2
@@ -407,6 +429,7 @@ REAL_TAG_SERVICES = ( LOCAL_TAG, TAG_REPOSITORY )
ADDREMOVABLE_SERVICES = ( LOCAL_TAG, LOCAL_RATING_LIKE, LOCAL_RATING_NUMERICAL, FILE_REPOSITORY, TAG_REPOSITORY, SERVER_ADMIN, IPFS )
MUST_HAVE_AT_LEAST_ONE_SERVICES = ( LOCAL_TAG, )
NONEDITABLE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_FILE, COMBINED_TAG, COMBINED_LOCAL_FILE )
SPECIFIC_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, FILE_REPOSITORY, IPFS )
AUTOCOMPLETE_CACHE_SPECIFIC_FILE_SERVICES = ( LOCAL_FILE_DOMAIN, LOCAL_FILE_TRASH_DOMAIN, COMBINED_LOCAL_FILE, FILE_REPOSITORY )
+ TAG_CACHE_SPECIFIC_FILE_SERVICES = ( COMBINED_LOCAL_FILE, FILE_REPOSITORY )
ALL_SERVICES = REMOTE_SERVICES + LOCAL_SERVICES + ( COMBINED_FILE, COMBINED_TAG )
@@ -628,9 +628,9 @@ class HydrusController( object ):
    
    def PrintProfile( self, summary, profile_text ):
        
-         boot_pretty_timestamp = time.strftime( '%Y-%m-%d %H-%M-%S', time.localtime( self.GetBootTime() ) )
+         pretty_timestamp = time.strftime( '%Y-%m-%d %H-%M-%S', time.localtime( HG.profile_start_time ) )
        
-         profile_log_filename = self._name + ' profile - ' + boot_pretty_timestamp + '.log'
+         profile_log_filename = '{} profile - {}.log'.format( self._name, pretty_timestamp )
        
        profile_log_path = os.path.join( self.db_dir, profile_log_filename )
@@ -731,13 +731,13 @@ class HydrusDB( object ):
            result = self._Write( action, *args, **kwargs )
            
        
-         self._cursor_transaction_wrapper.Save()
-         
        if job.IsSynchronous():
            
            job.PutResult( result )
            
        
+         self._cursor_transaction_wrapper.Save()
+         
        if self._cursor_transaction_wrapper.TimeToCommit():
            
            self._current_status = 'db committing'
            
@@ -997,16 +997,16 @@ class HydrusDB( object ):
        
        if HG.db_report_mode:
            
-             summary = 'Running ' + job.ToString()
+             summary = 'Running db job: ' + job.ToString()
            
            HydrusData.ShowText( summary )
            
        
-         if HG.db_profile_mode:
+         if HG.profile_mode:
            
-             summary = 'Profiling ' + job.ToString()
+             summary = 'Profiling db job: ' + job.ToString()
            
-             HydrusData.Profile( summary, 'self._ProcessJob( job )', globals(), locals(), show_summary = True )
+             HydrusData.Profile( summary, 'self._ProcessJob( job )', globals(), locals(), min_duration_ms = HG.db_profile_min_job_time_ms )
            
        else:
@@ -244,9 +244,14 @@ def ConvertPrettyStringsToUglyNamespaces( pretty_strings ):
    
def ConvertResolutionToPrettyString( resolution ):
    
+     if resolution in HC.NICE_RESOLUTIONS:
+         
+         return HC.NICE_RESOLUTIONS[ resolution ]
+         
    
    ( width, height ) = resolution
    
-     return ToHumanInt( width ) + 'x' + ToHumanInt( height )
+     return '{}x{}'.format( ToHumanInt( width ), ToHumanInt( height ) )
    

def ConvertStatusToPrefix( status ):
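
With the lookup in front, known resolutions short-circuit to their label and everything else falls through to the generic WxH string. A minimal sketch with a hypothetical cut-down table:

NICE_RESOLUTIONS = { ( 1920, 1080 ): '1080p', ( 1080, 1920 ): 'vertical 1080p' }

def pretty_resolution( resolution ):
    
    if resolution in NICE_RESOLUTIONS:
        
        return NICE_RESOLUTIONS[ resolution ]
        
    
    ( width, height ) = resolution
    
    return '{}x{}'.format( width, height )

pretty_resolution( ( 1920, 1080 ) ) # '1080p'
pretty_resolution( ( 800, 600 ) )   # '800x600'
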
@@ -429,7 +434,7 @@ def ConvertTimestampToPrettyTime( timestamp, in_utc = False, include_24h_time =
    return 'unparseable time {}'.format( timestamp )
    

- def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'now', just_now_threshold = 3, show_seconds = True, no_prefix = False ):
+ def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'now', just_now_threshold = 3, history_suffix = ' ago', show_seconds = True, no_prefix = False ):
    
    if timestamp is None:
        
@@ -454,7 +459,7 @@ def TimestampToPrettyTimeDelta( timestamp, just_now_string = 'now', just_now_thr
    
    if TimeHasPassed( timestamp ):
        
-         return time_delta_string + ' ago'
+         return '{}{}'.format( time_delta_string, history_suffix )
        
    else:
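
The new history_suffix keyword keeps ' ago' as the default while letting callers such as the duplicate comparison statements render '3 days old' instead. A minimal sketch of the behaviour:

def pretty_history( time_delta_string, history_suffix = ' ago' ):
    
    return '{}{}'.format( time_delta_string, history_suffix )

pretty_history( '3 days' )                          # '3 days ago'
pretty_history( '3 days', history_suffix = ' old' ) # '3 days old'
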
@@ -1210,20 +1215,31 @@ def Profile( summary, code, g, l, min_duration_ms = 20, show_summary = False ):
        
        details = output.read()
        
+         with HG.profile_counter_lock:
+             
+             HG.profile_slow_count += 1
+             
        
        if show_summary:
            
            ShowText( summary )
            
        
        HG.controller.PrintProfile( summary, details )
        
    else:
        
-         summary += ' - It took ' + TimeDeltaToPrettyTimeDelta( time_took ) + '.'
+         with HG.profile_counter_lock:
+             
+             HG.profile_fast_count += 1
+             
        
+         details = ''
+         
        if show_summary:
            
            HG.controller.PrintProfile( summary, details )
            
        
-         HG.controller.PrintProfile( summary, details )
-         
    

def PullNFromIterator( iterator, n ):
    
    chunk = []
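
The slow/fast tallies are plain module globals bumped from several threads, hence the dedicated lock around every increment and around the final read in the toggle handler. A minimal sketch of the pattern:

import threading

profile_counter_lock = threading.Lock()
profile_slow_count = 0
profile_fast_count = 0

def record_job( was_slow ):
    
    global profile_slow_count, profile_fast_count
    
    with profile_counter_lock:
        
        if was_slow:
            
            profile_slow_count += 1
            
        else:
            
            profile_fast_count += 1
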
@ -1,3 +1,4 @@
|
|||
import collections
|
||||
import threading
|
||||
|
||||
controller = None
|
||||
|
@@ -26,12 +27,23 @@ db_synchronous = 2
import_folders_running = False
export_folders_running = False

+ profile_mode = False
+ 
+ db_profile_min_job_time_ms = 16
+ callto_profile_min_job_time_ms = 5
+ server_profile_min_job_time_ms = 5
+ menu_profile_min_job_time_ms = 16
+ pubsub_profile_min_job_time_ms = 5
+ ui_timer_profile_min_job_time_ms = 5
+ 
+ profile_start_time = 0
+ profile_slow_count = 0
+ profile_fast_count = 0
+ profile_counter_lock = threading.Lock()
+ 
db_ui_hang_relief_mode = False
callto_report_mode = False
- server_profile_mode = False
db_report_mode = False
- db_profile_mode = False
- callto_profile_mode = False
file_report_mode = False
media_load_report_mode = False
gui_report_mode = False

@@ -42,11 +54,8 @@ subscription_report_mode = False
hover_window_report_mode = False
file_import_report_mode = False
phash_generation_report_mode = False
- menu_profile_mode = False
network_report_mode = False
pubsub_report_mode = False
- pubsub_profile_mode = False
- ui_timer_profile_mode = False
daemon_report_mode = False
force_idle_mode = False
no_page_limit_mode = False
@@ -113,19 +113,15 @@ class HydrusPubSub( object ):
            HydrusData.ShowText( ( topic, args, kwargs, callable_tuples ) )
            
        
-         if HG.pubsub_profile_mode and not_a_report:
+         if HG.profile_mode and not_a_report:
            
-             summary = 'Profiling ' + HydrusData.ToHumanInt( len( callable_tuples ) ) + ' x ' + topic
-             
-             HydrusData.ShowText( summary )
-             
-             per_summary = 'Profiling ' + topic
+             summary = 'Profiling pubsub: {}'.format( topic )
            
            for ( obj, callable ) in callable_tuples:
                
                try:
                    
-                     HydrusData.Profile( per_summary, 'callable( *args, **kwargs )', globals(), locals() )
+                     HydrusData.Profile( summary, 'callable( *args, **kwargs )', globals(), locals(), min_duration_ms = HG.pubsub_profile_min_job_time_ms )
                    
                except HydrusExceptions.ShutdownException:
@@ -390,11 +390,11 @@ class THREADCallToThread( DAEMON ):
        
        self._callable = ( callable, args, kwargs )
        
-         if HG.callto_profile_mode:
+         if HG.profile_mode:
            
            summary = 'Profiling CallTo Job: {}'.format( callable )
            
-             HydrusData.Profile( summary, 'callable( *args, **kwargs )', globals(), locals(), min_duration_ms = 3, show_summary = True )
+             HydrusData.Profile( summary, 'callable( *args, **kwargs )', globals(), locals(), min_duration_ms = HG.callto_profile_min_job_time_ms )
            
        else:
@@ -301,7 +301,7 @@ class HydrusResource( Resource ):
            return request
            
        
-         if HG.server_profile_mode:
+         if HG.profile_mode:
            
            d = deferToThread( self._profileJob, self._threadDoGETJob, request )
            
@@ -324,7 +324,7 @@ class HydrusResource( Resource ):
            return request
            
        
-         if HG.server_profile_mode:
+         if HG.profile_mode:
            
            d = deferToThread( self._profileJob, self._threadDoOPTIONSJob, request )
            
@@ -347,7 +347,7 @@ class HydrusResource( Resource ):
            return request
            
        
-         if HG.server_profile_mode:
+         if HG.profile_mode:
            
            d = deferToThread( self._profileJob, self._threadDoPOSTJob, request )
@@ -590,7 +590,7 @@ class HydrusResource( Resource ):
    
    def _profileJob( self, call, request: HydrusServerRequest.HydrusRequest ):
        
-         HydrusData.Profile( 'client api {}'.format( request.path ), 'request.result_lmao = call( request )', globals(), locals(), min_duration_ms = 3, show_summary = True )
+         HydrusData.Profile( 'Profiling client api: {}'.format( request.path ), 'request.result_lmao = call( request )', globals(), locals(), min_duration_ms = HG.server_profile_min_job_time_ms )
        
        return request.result_lmao