                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7. This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy. This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged. This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source. This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge. You need not require recipients to copy the
    Corresponding Source along with the object code. If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source. Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.
|
||||
|
||||
Notwithstanding any other provision of this License, you have
|
||||
permission to link or combine any covered work with a work licensed
|
||||
under version 3 of the GNU Affero General Public License into a single
|
||||
combined work, and to convey the resulting work. The terms of this
|
||||
License will continue to apply to the part which is the covered work,
|
||||
but the special requirements of the GNU Affero General Public License,
|
||||
section 13, concerning interaction through a network will apply to the
|
||||
combination as such.
|
||||
|
||||
14. Revised Versions of this License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions of
|
||||
the GNU General Public License from time to time. Such new versions will
|
||||
be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
<program> Copyright (C) <year> <name of author>
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<https://www.gnu.org/licenses/>.
|
||||
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<https://www.gnu.org/licenses/why-not-lgpl.html>.
|
|
@ -0,0 +1,26 @@
MiniUPnP Project
Copyright (c) 2005-2019, Thomas BERNARD
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
@ -8,6 +8,77 @@
<div class="content">
<h3>changelog</h3>
<ul>
<li><h3>version 400</h3></li>
<ul>
<li>subscription data overhaul:</li>
<li>the formerly monolithic subscription object is finally broken up into smaller pieces, reducing work and load lag and total db read/write for all actions</li>
<li>subscriptions work the same as before, no user input is required. they just work better now™</li>
<li>depending on the size and number of your subscriptions, the db update may take a minute or two this week. a backup of your old subscription objects will be created in your db directory, under a new 'legacy_subscriptions_backup' subdirectory</li>
<li>the manage subscriptions dialog should now open within a second (assuming subs are not currently running). it should save just as fast, only with a little lag if you decide to make significant changes or go into many queries' logs, which are now fetched on demand inside the dialog</li>
<li>when subscriptions run, they similarly only have to load the query they are currently working on. boot lag is now almost nothing, and total drive read/write data for a typical sub run is massively reduced</li>
<li>the 'total files in a sub' limits no longer apply. you can have a sub with a thousand queries and half a million urls if you like</li>
<li>basic subscription data is now held in memory at all times, opening up future fast access such as client api and general UI editing of subs. more work will happen here in coming weeks</li>
<li>if due to hard drive fault or other unusual situations some subscription file/gallery log data is missing from the db, a running sub will note this, pause the sub, and provide a popup error for the user. the manage subscription dialog will correct it on launch by resetting the affected queries with new empty data</li>
<li>similarly, if you launch the manage subs dialog and there is orphaned file/gallery log data in the db, this will be noticed, with the surplus data then backed up to the database directory and deleted from the database proper</li>
<li>subscription queries can now handle domain and bandwidth tests for downloaders that host files/posts on a different domain to the gallery search step</li>
<li>if subs are running when manage subs is booted, long delays while waiting for them to pause are less likely</li>
<li>some subscription 'should run?' tests are improved for odd situations such as subs that have no queries or all DEAD queries</li>
<li>improved some error handling in merge/separate code</li>
<li>the 'show/copy quality info' buttons now work off the main thread, disabling the sub edit dialog while they work</li>
<li>updated a little of the subs help</li>
<li>.</li>
<li>boring actual code changes for subs:</li>
<li>wrote a query log container object to store bulky file and gallery log info</li>
<li>wrote a query header object to store options and cache log summary info</li>
<li>wrote a file cache status object to summarise important info so check timings and similar can be decided upon without needing to load a log</li>
<li>the new cache is now used across the program for all file import summary presentation</li>
<li>wrote a new subscription object to hold the new query headers and load logs as needed</li>
<li>updated subscription management to deal with the new subscription objects. it now also keeps them in memory all the time</li>
<li>wrote a fail-safe update from the old subscription objects to the new, which also saves a backup to disk, just in case of unforeseen problems in the near future</li>
<li>updated the subscription ui code to deal with all the new objects</li>
<li>updated the subscription ui to deal with asynchronous log fetching as needed</li>
<li>cleaned up some file import status code</li>
<li>moved old subscription code to a new legacy file</li>
<li>refactored subscription ui code to a new file</li>
<li>refactored and improved sub sync code</li>
<li>misc subscription cleanup</li>
<li>misc subscription ui cleanup</li>
<li>added type hints to multiple subscription locations</li>
<li>improved how missing serialisable object errors are handled at the db level</li>
<li>.</li>
<li>client api:</li>
<li>the client api now delivers 'is_inbox', 'is_local', 'is_trashed' for 'GET /get_files/file_metadata'</li>
<li>the client api's Access-Control-Allow-Headers CORS header is now '*', allowing all</li>
<li>client api version is now 12</li>
<li>.</li>
<li>downloaders:</li>
<li>twitter retired their old api on the 1st of June, and there is unfortunately no good hydrus solution for the new one. however thanks to a user's efforts, a nice new parser for nitter, a twitter wrapper, is added in today's update. please play with it--it has three downloaders, one for a user's media, one for retweets, and one for both together--and adjust your twitter subscriptions to use the new downloader as needed. the twitter downloader is no longer included for new hydrus users</li>
<li>thanks to a user's submission, fixed the md5 hash fetching for default danbooru parsers</li>
<li>derpibooru gallery searching _should_ be fixed to use their current api</li>
<li>.</li>
<li>the rest:</li>
<li>when the client exits or gets a 'modal' maintenance popup window, all currently playing media windows will now pause</li>
<li>regrettably, due to some content merging issues that are too complicated to improve at the moment, the dupe filter will no longer show the files of processed pairs in the duplicate filter more than once per batch. you won't get a series of AB, AC, AD any more. this will return in future</li>
<li>the weird bug where double-clicking the topmost recent tags suggestion would actually remove the top two items _should_ be fixed. general selection-setting on this column should also be improved</li>
<li>middle-clicking on a parent tag in a 'write' autocomplete dropdown no longer launches a page with that invalid parent 'label' tag included--it just does the base tag. the same is true of label tags (such as 'loading...') and namespace tags</li>
<li>when changing 'expand parents on autocomplete' in the cog button on manage tags, the respective autocomplete now changes whether it displays parents</li>
<li>this is slightly complicated: a tag 'write' context (like manage tags) now presents its autocomplete tags (filtering, siblings, parents) according to the tag service of the parent panel, not the current tag service of the autocomplete. so, if you are on 'my tags' panel and switch to 'all known tags' for the a/c, you will no longer get 'all known tags' siblings and parents and so on presented if 'my tags' is not set to take them. this was sometimes causing confusion when a list showed a parent but the underlying panel did not add it on tag entry</li>
<li>to reduce blacklist confusion, when you launch the edit blacklist dialog from an edit tag import options panel, now only the 'blacklist' tab shows, the summary text is blacklist-specific, and the top intro message is improved. a separate 'whitelist' filter will be added in the near future to allow downloading of files only if they have certain tags</li>
<li>'hard-replace siblings and parents' in _manage tags_ should now correctly remove bad siblings when they are currently pending</li>
<li>network->downloaders->manage downloader and url display now has a checkbox to make the media viewer top-right hover show unmatched urls</li>
<li>the '... elide page tab names' option now applies instantly on options dialog ok to all pages</li>
<li>added 'copy_bmp_or_file_if_not_bmpable' shortcut command to media set. it tries copy_bmp first, then copy_file if not a static image</li>
<li>fixed some edit tag filter layout to stop long intro messages making it super wide</li>
<li>fixed an issue where tag filters could accept non-whitespace-stripped entries and entries with uppercase characters</li>
<li>fixed a display typo where the 'clear orphan files' maintenance job, when set to delete orphans, was accidentally reporting (total number of thumbnails)/(number of files to delete) text in the file delete step instead of the correct (num_done/num_to_do)</li>
<li>clarified the 'reset repository' commands in review services</li>
<li>when launching an external program, the child process's environment's PATH is reset to what it was at hydrus boot (removing hydrus base dir)</li>
<li>when launching an external program from the frozen build, if some Qt/SSL specific PATH variables have been set to hydrus subdirectories by pyinstaller or otherwise, they are now removed. (this hopefully fixes issues launching some Qt programs as external file launchers)</li>
<li>added a separate requirements.txt for python 3.8, which can't handle PySide2 5.13.0</li>
<li>updated help->about to deal better with missing mpv</li>
<li>updated windows mpv to 2020-05-31 build, api version is now 1.108</li>
<li>updated windows sqlite to 3.32.2</li>
</ul>
<li><h3>version 399</h3></li>
<ul>
<li>improvements:</li>
@ -893,6 +893,9 @@
"has_audio" : false,
|
||||
"num_frames" : null,
|
||||
"num_words" : null,
|
||||
"is_inbox" : true,
|
||||
"is_local" : true,
|
||||
"is_trashed" : false,
|
||||
"known_urls" : [],
|
||||
"service_names_to_statuses_to_tags" : {}
|
||||
},
|
||||
|
@ -908,6 +911,9 @@
"has_audio" : true,
|
||||
"num_frames" : 102,
|
||||
"num_words" : null,
|
||||
"is_inbox" : false,
|
||||
"is_local" : true,
|
||||
"is_trashed" : false,
|
||||
"known_urls" : [
|
||||
"https://gelbooru.com/index.php?page=post&s=view&id=4841557",
|
||||
"https://img2.gelbooru.com//images/80/c8/80c8646b4a49395fb36c805f316c49a9.jpg",
|
||||
|
|
|
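The new booleans are plain JSON fields, so consuming them is simple. A minimal Python sketch, assuming a response shaped like the /get_files/file_metadata examples above (the two-entry metadata list here is a made-up fragment for illustration):

```python
import json

# a response fragment shaped like the examples above; the entries are
# placeholders, not real files
response_text = """
{
  "metadata": [
    {"is_inbox": true,  "is_local": true, "is_trashed": false},
    {"is_inbox": false, "is_local": true, "is_trashed": false}
  ]
}
"""

metadata = json.loads(response_text)["metadata"]

# e.g. keep only files still in the inbox and not trashed
inbox_files = [m for m in metadata if m["is_inbox"] and not m["is_trashed"]]
print(len(inbox_files))  # -> 1
```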
@ -17,7 +17,7 @@
<p>Let's go to the preparation page first:</p>
<p><img src="dupe_preparation.png" /></p>
<p>The 'similar shape' algorithm works on <i>distance</i>. Two files with 0 distance are likely exact matches, such as resizes of the same file or lower/higher quality jpegs, whereas those with distance 4 tend to be hairstyle or costume changes. You will be starting on distance 0 and not expect to ever go above 4 or 8 or so. Going too high increases the danger of being overwhelmed by false positives.</p>
<p class="apiborder">If you are interested, the current version of this system uses a 64-bit <a href="https://jenssegers.com/61/perceptual-image-hashes">phash</a> to represent the image shape and a <a href="https://en.wikipedia.org/wiki/VP-tree">VPTree</a> to search different files' phashes' relative <a href="https://en.wikipedia.org/wiki/Hamming_distance">hamming distance</a>. I expect to extend it in future with multiple phash generation (flips, rotations, and 'interesting' image crops and video frames) and most-common colour comparisons.</p>
<p class="apiborder">If you are interested, the current version of this system uses a 64-bit <a href="https://jenssegers.com/perceptual-image-hashes">phash</a> to represent the image shape and a <a href="https://en.wikipedia.org/wiki/VP-tree">VPTree</a> to search different files' phashes' relative <a href="https://en.wikipedia.org/wiki/Hamming_distance">hamming distance</a>. I expect to extend it in future with multiple phash generation (flips, rotations, and 'interesting' image crops and video frames) and most-common colour comparisons.</p>
<p>Searching for duplicates is fairly fast per file, but with a large client with hundreds of thousands of files, the total CPU time adds up. You can do a little manual searching if you like, but once you are all settled here, I recommend you hit the cog icon on the preparation page and let hydrus do this page's catch-up search work in your regular maintenance time. It'll swiftly catch up and keep you up to date without you even thinking about it.</p>
<p>Start searching on the 'exact match' search distance of 0. It is generally easier and more valuable to get exact duplicates out of the way first.</p>
<p>Once you have some files searched, you should see a potential pair count appear in the 'filtering' page.</p>
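To make the distance idea concrete, here is a minimal Python sketch of comparing two 64-bit phashes by hamming distance. The hash values are made up for illustration, and this is not hydrus's actual search code, which uses a VPTree rather than pairwise comparison:

```python
def hamming_distance(phash_a: int, phash_b: int) -> int:
    # XOR leaves a 1 bit wherever the two hashes differ; count those bits
    return bin(phash_a ^ phash_b).count("1")

# identical hashes -> distance 0, the 'exact match' search distance
print(hamming_distance(0xF0F0F0F0F0F0F0F0, 0xF0F0F0F0F0F0F0F0))  # -> 0

# two differing bits -> distance 2, a close but not exact match
print(hamming_distance(0xF0F0F0F0F0F0F0F0, 0xF0F0F0F0F0F0F0F3))  # -> 2
```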
@ -10,16 +10,17 @@
<p>Let's say you found an artist you like. You downloaded everything of theirs from some site, but one or two pieces of new work are posted every week. You'd like to keep up with the new stuff, but you don't want to manually make a new download job every week for every single artist you like.</p>
<h3 id="intro">what are subs?</h3>
<p>Subscriptions are a way of telling the client to regularly and quietly repeat a gallery search. You set up a number of saved queries, and the client will 'sync' with the latest files in the gallery and download anything new, just as if you were running the download yourself.</p>
<p>Subscriptions only work for booru-like galleries that put the newest files first, and they only keep up with new content--once they have done their first sync, which usually gets the most recent hundred files or so, they will never reach further into the past. Getting older files, as you will see later, is a job best done with a normal download page.</p>
<p>Here's the dialog, which is under <i>network->downloaders->manage subscriptions</i>:</p>
<p><img src="subscriptions_edit_subscriptions.png" /></p>
<p>This is a very simple example--there is only one subscription, for safebooru. It has two 'queries' (i.e. searches to keep up with).</p>
<p>It is important to note that while subscriptions can have multiple queries (even hundreds!), they <i>generally</i> only work on one site. Expect to create one subscription for safebooru, one for artstation, one for paheal, and so on for every site you care about. Advanced users may be able to think of ways to get subscriptions to work on multiple sites at once, but I recommend against this as it throws off some of the internal check timing calculations.</p>
<p>It is important to note that while subscriptions can have multiple queries (even hundreds!), they <i>generally</i> only work on one site. Expect to create one subscription for safebooru, one for artstation, one for paheal, and so on for every site you care about. Advanced users may be able to think of ways to get around this, but I recommend against it as it throws off some of the internal check timing calculations.</p>
<p>Before we trip over the advanced buttons here, let's zoom in on the actual subscription:</p>
<p><a href="subscriptions_edit_subscription.png"><img src="subscriptions_edit_subscription.png" width="773px" height="553px" /></a></p>
<p>This is a big and powerful panel! I recommend you open the screenshot up in a new browser tab, or in the actual client, so you can refer to it.</p>
<p>Despite all the controls, the basic idea is simple: Up top, I have selected the 'safebooru tag search' download source, and then I have added two artists--"hong_soon-jae" and "houtengeki". These two queries have their own panels for reviewing what URLs they have worked on and further customising their behaviour, but all they <i>really</i> are is little bits of search text. When the subscription runs, it will put the given search text into the given download source just as if you were running the regular downloader.</p>
<p><b>For the most part, all you need to do to set up a good subscription is give it a name, select the download source, and use the 'paste queries' button to paste what you want to search. Subscriptions have great default options for almost all query types, so you don't have to go any deeper than that to get started.</b></p>
<h3><b class="warning">Do not change the 'file limits' options until you know <i>exactly</i> what they do and have a good reason to alter them!</b></h3>
<h3><b class="warning">Do not change the max number of new files options until you know <i>exactly</i> what they do and have a good reason to alter them!</b></h3>
<h3 id="description">how do subscriptions work?</h3>
<p>Once you hit ok on the main subscription dialog, the subscription system should immediately come alive. If any queries are due for a 'check', they will perform their search and look for new files (i.e. URLs it has not seen before). Once that is finished, the file download queue will be worked through as normal. Typically, the sub will make a popup like this while it works:</p>
<p><img src="subscriptions_popup.png" /></p>
@ -29,16 +30,16 @@
<p><img src="subscriptions_thumbnails.png" /></p>
|
||||
<p>This can often be a nice surprise!</p>
|
||||
<h3 id="good_subs">what makes a good subscription?</h3>
|
||||
<p>The same rules as for downloaders apply: <b>start slow, be hesitant, and plan for the long-term.</b> Artist queries make great subscriptions as they don't update reliably but not too often and have very stable quality. Pick the artists you like most, see where their stuff is posted, and set up your subs like that.</p>
|
||||
<p>The same rules as for downloaders apply: <b>start slow, be hesitant, and plan for the long-term.</b> Artist queries make great subscriptions as they update reliably but not too often and have very stable quality. Pick the artists you like most, see where their stuff is posted, and set up your subs like that.</p>
|
||||
<p>Series and character subscriptions are sometimes valuable, but they can be difficult to keep up with and have highly variable quality. It is not uncommon for users to only keep 15% of what a character sub produces. I do not recommend them for anything but your waifu.</p>
|
||||
<p>Attribute subscriptions like 'blue_eyes' or 'smile' make for terrible subs as the quality is all over the place and you will be inundated by way too much content. The only exceptions are for specific, low-count searches that really matter to you, like 'contrapposto' or 'gothic trap thighhighs'.</p>
|
||||
<p>Attribute subscriptions like 'blue_eyes' or 'smile' make for terrible subs as the quality is all over the place and you will be inundated by too much content. The only exceptions are for specific, low-count searches that really matter to you, like 'contrapposto' or 'gothic trap thighhighs'.</p>
|
||||
<p>If you end up subscribing to eight hundred things and get ten thousand new files a week, you made a mistake. Subscriptions are for <i>keeping up</i> with things you like. If you let them overwhelm you, you'll resent them.</p>
|
||||
<p>It is a good idea to run a 'full' download for a search before you set up a subscription. As well as making sure you have the exact right query text and that you have everything ever posted (beyond the 100 files deep a sub will typically look), it saves the bulk of the work (and waiting on bandwidth) for the manual downloader, where it belongs. When a new subscription picks up off a freshly completed download queue, its initial subscription sync only takes thirty seconds since its initial URLs are those that were already processed by the manual downloader. I recommend you stack artist searches up in the manual downloader using 'no limit' file limit, and when they are all finished, select them in the list and <i>right-click->copy queries</i>, which will put the search texts in your clipboard, newline-separated. This list can be pasted into the subscription dialog in one go with the 'paste queries' button again!</p>
|
||||
<p class="warning">The entire subscription system assumes the source is a typical 'newest first' booru-style search. If you dick around with some order_by:rating/random metatag, it won't work.</p>
|
||||
<h3 id="checking">how often do subscriptions check?</h3>
|
||||
<p>Hydrus subscriptions use the same variable-rate checking system as its thread watchers, just on a larger timescale. If you subscribe to a busy feed, it might check for new files once a day, but if you enter an artist who rarely posts, it might only check once every month. You don't have to do anything. The fine details of this are governed by the 'checker options' button. <b>This is one of the things you should not mess with as you start out.</b></p>
|
||||
<p>If a query goes too 'slow' (typically, this means no new files for 180 days), it will be marked DEAD in the same way a thread will, and it will not be checked again. You will get a little popup when this happens. This is all editable as you get a better feel for the system--if you wish, it is completely possible to set up a sub that never dies and only checks once a year.</p>
|
||||
<p>I do not recommend setting up a sub that needs to check more than once a day. The system tends only to wake up a few times per day anyway, and any search that is producing that many files is probably a bad fit for a subscription. <b>Subscriptions are for lightweight searches that are updated every now and then.</b></p>
|
||||
<p>I do not recommend setting up a sub that needs to check more than once a day. Any search that is producing that many files is probably a bad fit for a subscription. <b>Subscriptions are for lightweight searches that are updated every now and then.</b></p>
|
||||
<hr />
|
||||
<p><i>(you might like to come back to this point once you have tried subs for a week or so and want to refine your workflow)</i></p>
|
||||
<hr />
|
||||
|
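The variable-rate idea can be sketched in a few lines of Python. This is an illustration of the concept, not hydrus's actual checker options code; the target of roughly five new files per check and the one-day/thirty-day clamps are assumptions chosen for the example:

```python
from datetime import timedelta

def next_check_period(files_found: int, days_covered: float,
                      files_per_check: int = 5,
                      min_period: timedelta = timedelta(days=1),
                      max_period: timedelta = timedelta(days=30)) -> timedelta:
    # a query that produced nothing waits the maximum period
    if files_found == 0:
        return max_period
    # aim for roughly files_per_check new files each time we check,
    # clamped to sensible bounds
    ideal = timedelta(days=(days_covered / files_found) * files_per_check)
    return max(min_period, min(max_period, ideal))

# a busy feed (100 files over 10 days) would ideally be checked twice a
# day, but is clamped to the one-day minimum
print(next_check_period(100, 10))  # -> 1 day, 0:00:00

# a very slow artist (3 files over 150 days) is clamped to the 30-day max
print(next_check_period(3, 150))  # -> 30 days, 0:00:00
```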
@ -46,7 +47,7 @@
<p>One the edit subscription panel, the 'presentation' options let you publish files to a page. The page will have the subscription's name, just like the button makes, but it cuts out the middle-man and 'locks it in' more than the button, which will be forgotten if you restart the client. <b>Also, if a page with that name already exists, the new files will be appended to it, just like a normal import page!</b> I strongly recommend moving to this once you have several subs going. Make a 'page of pages' called 'subs' and put all your subscription landing pages in there, and then you can check it whenever is convenient.</p>
|
||||
<p>If you discover your subscription workflow tends to be the same for each sub, you can also customise the publication 'label' used. If multiple subs all publish to the 'nsfw subs' label, they will all end up on the same 'nsfw subs' popup button or landing page. Sending multiple subscriptions' import streams into just one or two locations like this can be great.</p>
|
||||
<p>You can also hide the main working popup. I don't recommend this unless you are really having a problem with it, since it is useful to have that 'active' feedback if something goes wrong.</p>
|
||||
<p>Note that subscription file import options will, by default, only present 'new' files. Anything already in the db will still be recorded in the internal import cache and used to calculate next check times and so on, but it won't clutter your import stream. This is different to the default for all the other importers, but when you are ready to enter the ranks of the patricians, you will know to edit your 'loud' default file import options under <i>options->importing</i> to behave this way as well. Efficient workflows only care about new files.</p>
|
||||
<p>Note that subscription file import options will, by default, only present 'new' files. Anything already in the db will still be recorded in the internal import cache and used to calculate next check times and so on, but it won't clutter your import stream. This is different to the default for all the other importers, but when you are ready to enter the ranks of the Patricians, you will know to edit your 'loud' default file import options under <i>options->importing</i> to behave this way as well. Efficient workflows only care about new files.</p>
|
||||
<h3 id="syncing_explanation">how exactly does the sync work?</h3>
|
||||
<p>Figuring out when a repeating search has 'caught up' can be a tricky problem to solve. It sounds simple, but unusual situations like 'a file got tagged late, so it inserted deeper than it ideally should in the gallery search' or 'the website changed its URL format completely, help' can cause problems. Subscriptions are automatic systems, so they tend to be a bit more careful and paranoid about problems, lest they burn 10GB on 10,000 unexpected diaperfur images.</p>
<p>The initial sync is simple. It does a regular search, stopping when it reaches the 'initial file limit' or the last file in the gallery, whichever comes first. The default initial file limit is 100, which is a great number for almost all situations.</p>
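The initial-sync rule above can be sketched in a few lines. This is an illustrative reconstruction, not the actual hydrus code, and the function and parameter names are made up:

```python
def initial_sync(gallery_urls, initial_file_limit=100):
    # Walk the gallery from newest to oldest, stopping at the 'initial file
    # limit' or at the end of the gallery, whichever comes first.
    found = []
    for url in gallery_urls:
        if len(found) >= initial_file_limit:
            break
        found.append(url)
    return found
```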
@ -54,8 +55,8 @@

<p>If the sub keeps finding apparently new URLs on a regular sync, it will stop upon hitting its 'periodic file limit', which is also usually 100. This is a safety stopgap, and usually happens when the site's URL format itself has changed, which may or may not require attention from you to figure out. If a user just went nuts and uploaded 500 new files to that tag in one day, you'll have a 'gap' in your sub sync, which you'll want to fill in with a manual download. If a sub hits its periodic file limit and thinks something like this happened, it will give you a popup explaining the situation.</p>

<p>Please note that subscriptions only keep up with new content. They cannot search backwards in time in order to 'fill out' a search, nor can they fill in gaps. <span class="warning">Do not change the file limits or check times to try to make this happen.</span> If you want to ensure complete sync with all existing content for a particular search, please use the manual downloader.</p>
<p>In practice, most subs only need to check the first page of a gallery, since only the first two or three URLs will be new.</p>
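The regular-check behaviour described above (stop when a URL you already have turns up, or bail out at the periodic file limit and flag it) might look something like this. Again, a hypothetical sketch with invented names, not hydrus's real implementation:

```python
def periodic_sync(gallery_urls, seen_urls, periodic_file_limit=100):
    # Walk the gallery newest-first, collecting unseen URLs.
    new_urls = []
    hit_limit = False
    for url in gallery_urls:
        if url in seen_urls:
            break  # caught up: we have reached content we already know about
        new_urls.append(url)
        if len(new_urls) >= periodic_file_limit:
            # possible URL-format change or a huge upload spike: stop and
            # warn the user rather than downloading without bound
            hit_limit = True
            break
    return new_urls, hit_limit
```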
<h3 id="merging_and_separating">I put character queries in my artist sub, and now things are all mixed up</h3>

<p>On the main subscription dialog, there are 'merge' and 'separate' buttons. These are powerful, but they will walk you through the process of pulling queries out of a sub and merging them back into a different one. Only subs that use the same download source can be merged. Give them a go, and if it all goes wrong, just hit the cancel button on the dialog.</p>

</div>

</body>
|
|
|
@ -1012,7 +1012,9 @@ class Controller( HydrusController.HydrusController ):
|
|||
|
||||
self.pub( 'splash_set_title_text', 'booting gui\u2026' )
|
||||
|
||||
self.subscriptions_manager = ClientImportSubscriptions.SubscriptionsManager( self )
|
||||
subscriptions = HG.client_controller.Read( 'serialisable_named', HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION )
|
||||
|
||||
self.subscriptions_manager = ClientImportSubscriptions.SubscriptionsManager( self, subscriptions )
|
||||
|
||||
def qt_code_gui():
|
||||
|
||||
|
|
|
@ -1512,7 +1512,6 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
def _CleanUpCaches( self ):
|
||||
|
||||
self._subscriptions_cache = {}
|
||||
self._service_cache = {}
|
||||
|
||||
|
||||
|
@ -7185,13 +7184,20 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
if timestamp is None:
|
||||
|
||||
( version, dump, object_timestamp ) = self._c.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? ORDER BY timestamp DESC;', ( dump_type, dump_name ) ).fetchone()
|
||||
result = self._c.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? ORDER BY timestamp DESC;', ( dump_type, dump_name ) ).fetchone()
|
||||
|
||||
else:
|
||||
|
||||
( version, dump, object_timestamp ) = self._c.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) ).fetchone()
|
||||
result = self._c.execute( 'SELECT version, dump, timestamp FROM json_dumps_named WHERE dump_type = ? AND dump_name = ? AND timestamp = ?;', ( dump_type, dump_name, timestamp ) ).fetchone()
|
||||
|
||||
|
||||
if result is None:
|
||||
|
||||
raise HydrusExceptions.DataMissing( 'Could not find the object of type "{}" and name "{}" and timestamp "{}".'.format( dump_type, dump_name, str( timestamp ) ) )
|
||||
|
||||
|
||||
( version, dump, object_timestamp ) = result
|
||||
|
||||
try:
|
||||
|
||||
if isinstance( dump, bytes ):
|
||||
|
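The hunk above swaps direct tuple-unpacking of `fetchone()` for a None check. sqlite3's `fetchone()` returns None when no row matches, so the old code died with an opaque TypeError instead of a useful 'data missing' error. A minimal standalone illustration (the schema here is trimmed and the error type simplified, not hydrus's actual ones):

```python
import sqlite3

def get_dump(conn, dump_type, dump_name):
    result = conn.execute(
        'SELECT version, dump, timestamp FROM json_dumps_named '
        'WHERE dump_type = ? AND dump_name = ? ORDER BY timestamp DESC;',
        (dump_type, dump_name),
    ).fetchone()

    if result is None:
        # fetchone() returned no row: raise a clear, catchable error instead
        # of letting '( a, b, c ) = None' blow up with a TypeError
        raise LookupError('Could not find dump type {} named "{}".'.format(dump_type, dump_name))

    (version, dump, timestamp) = result

    return (version, dump, timestamp)
```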
@ -9012,7 +9018,6 @@ class DB( HydrusDB.HydrusDB ):
|
|||
self._combined_file_service_id = self._GetServiceId( CC.COMBINED_FILE_SERVICE_KEY )
|
||||
self._combined_tag_service_id = self._GetServiceId( CC.COMBINED_TAG_SERVICE_KEY )
|
||||
|
||||
self._subscriptions_cache = {}
|
||||
self._service_cache = {}
|
||||
|
||||
self._weakref_media_result_cache = ClientMediaResultCache.MediaResultCache()
|
||||
|
@ -11727,7 +11732,6 @@ class DB( HydrusDB.HydrusDB ):
|
|||
self._combined_file_service_id = self._GetServiceId( CC.COMBINED_FILE_SERVICE_KEY )
|
||||
self._combined_tag_service_id = self._GetServiceId( CC.COMBINED_TAG_SERVICE_KEY )
|
||||
|
||||
self._subscriptions_cache = {}
|
||||
self._service_cache = {}
|
||||
|
||||
self._weakref_media_result_cache = ClientMediaResultCache.MediaResultCache()
|
||||
|
@ -12723,227 +12727,6 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
self._controller.pub( 'splash_set_status_text', 'updating db to v' + str( version + 1 ) )
|
||||
|
||||
if version == 341:
|
||||
|
||||
try:
|
||||
|
||||
domain_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
|
||||
|
||||
domain_manager.Initialise()
|
||||
|
||||
#
|
||||
|
||||
domain_manager.OverwriteDefaultParsers( [ 'gelbooru 0.2.5 file page parser' ] )
|
||||
|
||||
#
|
||||
|
||||
domain_manager.TryToLinkURLClassesAndParsers()
|
||||
|
||||
#
|
||||
|
||||
self._SetJSONDump( domain_manager )
|
||||
|
||||
except Exception as e:
|
||||
|
||||
HydrusData.PrintException( e )
|
||||
|
||||
message = 'Trying to update some url classes and parsers failed! Please let hydrus dev know!'
|
||||
|
||||
self.pub_initial_message( message )
|
||||
|
||||
|
||||
|
||||
if version == 344:
|
||||
|
||||
message = 'The client now only uses one thumbnail per file (previously it needed two). Your \'resized\' thumbnails will now be deleted. This is a significant step that could take some time to complete. It will also significantly impact your next backup run.'
|
||||
message += os.linesep * 2
|
||||
message += 'In order to keep your recycle bin sane, the thumbnails will be permanently deleted. Therefore, this operation cannot be undone. If you are not ready to do this yet (for instance if you do not have a recent backup), kill the hydrus process in Task Manager now.'
|
||||
message += os.linesep * 2
|
||||
message += 'BTW: If you previously put your resized thumbnails on an SSD but not your \'full-size\' ones, you should check the \'migrate database\' dialog once the client boots so you can move the remaining thumbnail directories to fast storage.'
|
||||
|
||||
BlockingSafeShowMessage( message )
|
||||
|
||||
new_options = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_CLIENT_OPTIONS )
|
||||
|
||||
self._c.execute( 'CREATE TABLE IF NOT EXISTS ideal_client_files_locations ( location TEXT, weight INTEGER );' )
|
||||
self._c.execute( 'CREATE TABLE IF NOT EXISTS ideal_thumbnail_override_location ( location TEXT );' )
|
||||
|
||||
for ( location, weight ) in new_options._dictionary[ 'client_files_locations_ideal_weights' ]:
|
||||
|
||||
self._c.execute( 'INSERT INTO ideal_client_files_locations ( location, weight ) VALUES ( ?, ? );', ( location, weight ) )
|
||||
|
||||
|
||||
thumbnail_override_location = new_options._dictionary[ 'client_files_locations_full_size_thumbnail_override' ]
|
||||
|
||||
if thumbnail_override_location is not None:
|
||||
|
||||
self._c.execute( 'INSERT INTO ideal_thumbnail_override_location ( location ) VALUES ( ? );', ( thumbnail_override_location, ) )
|
||||
|
||||
|
||||
self._SetJSONDump( new_options )
|
||||
|
||||
#
|
||||
|
||||
error_occurred = False
|
||||
|
||||
for ( i, prefix ) in enumerate( HydrusData.IterateHexPrefixes() ):
|
||||
|
||||
self._controller.pub( 'splash_set_status_subtext', 'deleting resized thumbnails {}'.format( HydrusData.ConvertValueRangeToPrettyString( i + 1, 256 ) ) )
|
||||
|
||||
resized_prefix = 'r' + prefix
|
||||
|
||||
try:
|
||||
|
||||
( location, ) = self._c.execute( 'SELECT location FROM client_files_locations WHERE prefix = ?;', ( resized_prefix, ) ).fetchone()
|
||||
|
||||
except:
|
||||
|
||||
continue
|
||||
|
||||
|
||||
full_path = os.path.join( HydrusPaths.ConvertPortablePathToAbsPath( location ), resized_prefix )
|
||||
|
||||
if os.path.exists( full_path ):
|
||||
|
||||
try:
|
||||
|
||||
HydrusPaths.DeletePath( full_path )
|
||||
|
||||
except Exception as e:
|
||||
|
||||
HydrusData.PrintException( e )
|
||||
|
||||
if not error_occurred:
|
||||
|
||||
error_occurred = True
|
||||
|
||||
message = 'There was a problem deleting one or more of your old \'rxx\' resized thumbnail directories, perhaps because of some old read-only files. There is no big harm here, since the old directories are no longer needed, but you will want to delete them yourself. Additional error information has been written to the log. Please contact hydrus dev if you need help.'
|
||||
|
||||
self.pub_initial_message( message )
|
||||
|
||||
|
||||
|
||||
|
||||
self._c.execute( 'DELETE FROM client_files_locations WHERE prefix = ?;', ( resized_prefix, ) )
|
||||
|
||||
|
||||
|
||||
if version == 345:
|
||||
|
||||
# I screwed up the permissions setting on 344 update so that certain non-windows users got de-execution-permissioned rxx folders, which then made them non-traversable and -deletable
|
||||
# so, let's give it another spin, albeit with less information since we have to guess potential location from remaining locations
|
||||
|
||||
if not HC.PLATFORM_WINDOWS:
|
||||
|
||||
locations_where_r_folders_were_found = set()
|
||||
|
||||
locations = self._STL( self._c.execute( 'SELECT DISTINCT location FROM client_files_locations;' ) )
|
||||
|
||||
possible_resized_paths = []
|
||||
|
||||
error_occurred = False
|
||||
|
||||
for prefix in HydrusData.IterateHexPrefixes():
|
||||
|
||||
resized_prefix = 'r' + prefix
|
||||
|
||||
for location in locations:
|
||||
|
||||
full_path = os.path.join( HydrusPaths.ConvertPortablePathToAbsPath( location ), resized_prefix )
|
||||
|
||||
if os.path.exists( full_path ):
|
||||
|
||||
possible_resized_paths.append( full_path )
|
||||
|
||||
locations_where_r_folders_were_found.add( location )
|
||||
|
||||
|
||||
|
||||
|
||||
num_possible_resized_paths = len( possible_resized_paths )
|
||||
|
||||
if num_possible_resized_paths > 0:
|
||||
|
||||
message = 'It appears that the update code from last week\'s release, 345, did not successfully delete all your old (and now unneeded) resized thumbnail directories.'
|
||||
message += os.linesep * 2
|
||||
message += 'I have found {} spare \'rxx\' directories (this number should be less than or equal to 256) in these current locations:'.format( num_possible_resized_paths )
|
||||
message += os.linesep * 2
|
||||
message += os.linesep.join( [ HydrusPaths.ConvertPortablePathToAbsPath( location ) for location in locations_where_r_folders_were_found ] )
|
||||
message += os.linesep * 2
|
||||
message += 'I will now attempt to delete these directories again, this time with fixed permissions. If you are not ready to do this, kill the hydrus process now.'
|
||||
|
||||
BlockingSafeShowMessage( message )
|
||||
|
||||
for ( i, full_path ) in enumerate( possible_resized_paths ):
|
||||
|
||||
self._controller.pub( 'splash_set_status_subtext', 'deleting resized thumbnails 2: electric boogaloo {}'.format( HydrusData.ConvertValueRangeToPrettyString( i + 1, num_possible_resized_paths ) ) )
|
||||
|
||||
try:
|
||||
|
||||
stat_result = os.stat( full_path )
|
||||
|
||||
current_bits = stat_result.st_mode
|
||||
|
||||
if not stat.S_IXUSR & current_bits:
|
||||
|
||||
os.chmod( full_path, current_bits | stat.S_IXUSR )
|
||||
|
||||
|
||||
HydrusPaths.DeletePath( full_path )
|
||||
|
||||
except Exception as e:
|
||||
|
||||
HydrusData.PrintException( e )
|
||||
|
||||
if not error_occurred:
|
||||
|
||||
error_occurred = True
|
||||
|
||||
message = 'The second attempt to delete old resized directories also failed. Error information has been written to the log. Please consult hydrus dev if you cannot figure this out on your own.'
|
||||
|
||||
self.pub_initial_message( message )
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
if version == 346:
|
||||
|
||||
self._c.execute( 'CREATE TABLE IF NOT EXISTS local_file_deletion_reasons ( hash_id INTEGER PRIMARY KEY, reason_id INTEGER );' )
|
||||
|
||||
|
||||
if version == 347:
|
||||
|
||||
try:
|
||||
|
||||
domain_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
|
||||
|
||||
domain_manager.Initialise()
|
||||
|
||||
#
|
||||
|
||||
domain_manager.OverwriteDefaultURLClasses( [ 'yiff.party file attachment long' ] )
|
||||
|
||||
#
|
||||
|
||||
domain_manager.TryToLinkURLClassesAndParsers()
|
||||
|
||||
#
|
||||
|
||||
self._SetJSONDump( domain_manager )
|
||||
|
||||
except Exception as e:
|
||||
|
||||
HydrusData.PrintException( e )
|
||||
|
||||
message = 'Trying to update some url classes and parsers failed! Please let hydrus dev know!'
|
||||
|
||||
self.pub_initial_message( message )
|
||||
|
||||
|
||||
|
||||
if version == 349:
|
||||
|
||||
try:
|
||||
|
@ -14893,6 +14676,105 @@ class DB( HydrusDB.HydrusDB ):
|
|||
|
||||
|
||||
|
||||
if version == 399:
|
||||
|
||||
try:
|
||||
|
||||
legacy_subscription_names = self._GetJSONDumpNames( HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_LEGACY )
|
||||
|
||||
if len( legacy_subscription_names ) > 0:
|
||||
|
||||
try:
|
||||
|
||||
HydrusPaths.CheckHasSpaceForDBTransaction( self._db_dir, 500 * 1024 * 1024 )
|
||||
|
||||
except:
|
||||
|
||||
message = 'The big subscription update for v400 will now start. However, this update is heavy and will also try to make a backup of your old subs, and it looks like your system drive or hydrus drive are a bit short on space right now. If your drives truly are tight right now, please free up some space before continuing. If you have thousands of subs with hundreds of thousands of URLs, you will need a few GB.'
|
||||
|
||||
BlockingSafeShowMessage( message )
|
||||
|
||||
|
||||
from hydrus.client.importing import ClientImportSubscriptionLegacy
|
||||
|
||||
sub_dir = os.path.join( self._db_dir, 'legacy_subscriptions_backup' )
|
||||
|
||||
HydrusPaths.MakeSureDirectoryExists( sub_dir )
|
||||
|
||||
for ( i, legacy_subscription_name ) in enumerate( legacy_subscription_names ):
|
||||
|
||||
self._controller.pub( 'splash_set_status_subtext', 'updating subscriptions: {}'.format( HydrusData.ConvertValueRangeToPrettyString( i + 1, len( legacy_subscription_names ) ) ) )
|
||||
|
||||
legacy_subscription = self._GetJSONDumpNamed( HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_LEGACY, legacy_subscription_name )
|
||||
|
||||
backup_path = os.path.join( sub_dir, 'sub_{}.json'.format( i ) )
|
||||
|
||||
with open( backup_path, 'w', encoding = 'utf-8' ) as f:
|
||||
|
||||
f.write( legacy_subscription.DumpToString() )
|
||||
|
||||
|
||||
( subscription, query_log_containers ) = ClientImportSubscriptionLegacy.ConvertLegacySubscriptionToNew( legacy_subscription )
|
||||
|
||||
self._SetJSONDump( subscription )
|
||||
|
||||
for query_log_container in query_log_containers:
|
||||
|
||||
self._SetJSONDump( query_log_container )
|
||||
|
||||
|
||||
self._DeleteJSONDumpNamed( HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_LEGACY, legacy_subscription_name )
|
||||
|
||||
|
||||
|
||||
except Exception as e:
|
||||
|
||||
message = 'Damn, the big subscription update for v400 did not work for you! No changes have been saved, your database is still on v399. You will get an error next, please send it to hydrus dev and go back to using v399 for now!'
|
||||
|
||||
BlockingSafeShowMessage( message )
|
||||
|
||||
raise
|
||||
|
||||
|
||||
#
|
||||
|
||||
try:
|
||||
|
||||
domain_manager = self._GetJSONDump( HydrusSerialisable.SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER )
|
||||
|
||||
domain_manager.Initialise()
|
||||
|
||||
#
|
||||
|
||||
domain_manager.OverwriteDefaultURLClasses( [ 'twitter tweet', 'nitter tweet media', 'nitter tweet', 'nitter timeline', 'nitter media timeline', 'derpibooru gallery page', 'derpibooru gallery page api' ] )
|
||||
|
||||
#
|
||||
|
||||
domain_manager.OverwriteDefaultGUGs( [ 'nitter media lookup', 'nitter retweets lookup', 'nitter media and retweets lookup', 'derpibooru tag search', 'derpibooru tag search - no filter' ] )
|
||||
|
||||
#
|
||||
|
||||
domain_manager.OverwriteDefaultParsers( [ 'nitter media parser', 'nitter retweet parser', 'nitter tweet parser', 'nitter tweet parser (video from koto.reisen)', 'danbooru file page parser', 'danbooru file page parser - get webm ugoira', 'derpibooru gallery page api parser' ] )
|
||||
|
||||
#
|
||||
|
||||
domain_manager.TryToLinkURLClassesAndParsers()
|
||||
|
||||
#
|
||||
|
||||
self._SetJSONDump( domain_manager )
|
||||
|
||||
except Exception as e:
|
||||
|
||||
HydrusData.PrintException( e )
|
||||
|
||||
message = 'Trying to update some downloaders failed! Please let hydrus dev know!'
|
||||
|
||||
self.pub_initial_message( message )
|
||||
|
||||
|
||||
|
||||
|
||||
self._controller.pub( 'splash_set_title_text', 'updated db to v{}'.format( HydrusData.ToHumanInt( version + 1 ) ) )
|
||||
|
||||
self._c.execute( 'UPDATE version SET version = ?;', ( version + 1, ) )
|
||||
|
|
|
@ -535,7 +535,7 @@ class ClientFilesManager( object ):
|
|||
|
||||
dir = os.path.join( location, prefix )
|
||||
|
||||
filenames = os.listdir( dir )
|
||||
filenames = list( os.listdir( dir ) )
|
||||
|
||||
for filename in filenames:
|
||||
|
||||
|
@ -553,7 +553,7 @@ class ClientFilesManager( object ):
|
|||
|
||||
dir = os.path.join( location, prefix )
|
||||
|
||||
filenames = os.listdir( dir )
|
||||
filenames = list( os.listdir( dir ) )
|
||||
|
||||
for filename in filenames:
|
||||
|
||||
|
@ -893,7 +893,7 @@ class ClientFilesManager( object ):
|
|||
|
||||
time.sleep( 5 )
|
||||
|
||||
for path in orphan_paths:
|
||||
for ( i, path ) in enumerate( orphan_paths ):
|
||||
|
||||
( i_paused, should_quit ) = job_key.WaitIfNeeded()
|
||||
|
||||
|
|
|
@ -1479,7 +1479,13 @@ class HydrusResourceClientAPIRestrictedGetFilesFileMetadata( HydrusResourceClien
|
|||
metadata_row[ 'num_words' ] = file_info_manager.num_words
|
||||
metadata_row[ 'has_audio' ] = file_info_manager.has_audio
|
||||
|
||||
known_urls = sorted( media_result.GetLocationsManager().GetURLs() )
|
||||
locations_manager = media_result.GetLocationsManager()
|
||||
|
||||
metadata_row[ 'is_inbox' ] = locations_manager.inbox
|
||||
metadata_row[ 'is_local' ] = locations_manager.IsLocal()
|
||||
metadata_row[ 'is_trashed' ] = locations_manager.IsTrashed()
|
||||
|
||||
known_urls = sorted( locations_manager.GetURLs() )
|
||||
|
||||
metadata_row[ 'known_urls' ] = known_urls
|
||||
|
||||
|
|
|
@ -127,6 +127,8 @@ class ClientOptions( HydrusSerialisable.SerialisableBase ):
|
|||
|
||||
self._dictionary[ 'booleans' ][ 'disable_cv_for_gifs' ] = False
|
||||
|
||||
self._dictionary[ 'booleans' ][ 'show_unmatched_urls_in_media_viewer' ] = False
|
||||
|
||||
self._dictionary[ 'booleans' ][ 'set_search_focus_on_page_change' ] = False
|
||||
|
||||
self._dictionary[ 'booleans' ][ 'allow_remove_on_manage_tags_input' ] = True
|
||||
|
|
|
@ -66,6 +66,7 @@ from hydrus.client.gui import ClientGUIScrolledPanelsReview
|
|||
from hydrus.client.gui import ClientGUIShortcuts
|
||||
from hydrus.client.gui import ClientGUIShortcutControls
|
||||
from hydrus.client.gui import ClientGUIStyle
|
||||
from hydrus.client.gui import ClientGUISubscriptions
|
||||
from hydrus.client.gui import ClientGUISystemTray
|
||||
from hydrus.client.gui import ClientGUITags
|
||||
from hydrus.client.gui import ClientGUITopLevelWindows
|
||||
|
@ -532,16 +533,23 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
|
|||
library_versions.append( ( 'OpenCV', cv2.__version__ ) )
|
||||
library_versions.append( ( 'Pillow', PIL.__version__ ) )
|
||||
|
||||
if ClientGUIMPV.MPV_IS_AVAILABLE:
|
||||
if HC.RUNNING_FROM_FROZEN_BUILD and HC.PLATFORM_MACOS:
|
||||
|
||||
library_versions.append( ( 'mpv api version: ', ClientGUIMPV.GetClientAPIVersionString() ) )
|
||||
library_versions.append( ( 'mpv: ', 'is not currently available on macOS' ) )
|
||||
|
||||
else:
|
||||
|
||||
HydrusData.ShowText( 'MPV failed to import because:' )
|
||||
HydrusData.ShowText( ClientGUIMPV.mpv_failed_reason )
|
||||
|
||||
library_versions.append( ( 'mpv', 'not available' ) )
|
||||
if ClientGUIMPV.MPV_IS_AVAILABLE:
|
||||
|
||||
library_versions.append( ( 'mpv api version: ', ClientGUIMPV.GetClientAPIVersionString() ) )
|
||||
|
||||
else:
|
||||
|
||||
HydrusData.ShowText( 'If this information helps, MPV failed to import because:' )
|
||||
HydrusData.ShowText( ClientGUIMPV.mpv_failed_reason )
|
||||
|
||||
library_versions.append( ( 'mpv', 'not available' ) )
|
||||
|
||||
|
||||
|
||||
library_versions.append( ( 'FFMPEG', HydrusVideoHandling.GetFFMPEGVersion() ) )
|
||||
|
@ -1698,15 +1706,10 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
|
|||
page.PageHidden()
|
||||
|
||||
|
||||
from hydrus.client.gui import ClientGUICanvasFrame
|
||||
HG.client_controller.pub( 'pause_all_media' )
|
||||
|
||||
for tlw in visible_tlws:
|
||||
|
||||
if isinstance( tlw, ClientGUICanvasFrame.CanvasFrame ):
|
||||
|
||||
tlw.PauseMedia()
|
||||
|
||||
|
||||
tlw.hide()
|
||||
|
||||
self._system_tray_hidden_tlws.append( ( tlw.isMaximized(), tlw ) )
|
||||
|
@ -2162,17 +2165,21 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
|
|||
|
||||
url_class_keys_to_display = domain_manager.GetURLClassKeysToDisplay()
|
||||
|
||||
panel = ClientGUIScrolledPanelsEdit.EditDownloaderDisplayPanel( dlg, self._controller.network_engine, gugs, gug_keys_to_display, url_classes, url_class_keys_to_display )
|
||||
show_unmatched_urls_in_media_viewer = HG.client_controller.new_options.GetBoolean( 'show_unmatched_urls_in_media_viewer' )
|
||||
|
||||
panel = ClientGUIScrolledPanelsEdit.EditDownloaderDisplayPanel( dlg, self._controller.network_engine, gugs, gug_keys_to_display, url_classes, url_class_keys_to_display, show_unmatched_urls_in_media_viewer )
|
||||
|
||||
dlg.SetPanel( panel )
|
||||
|
||||
if dlg.exec() == QW.QDialog.Accepted:
|
||||
|
||||
( gug_keys_to_display, url_class_keys_to_display ) = panel.GetValue()
|
||||
( gug_keys_to_display, url_class_keys_to_display, show_unmatched_urls_in_media_viewer ) = panel.GetValue()
|
||||
|
||||
domain_manager.SetGUGKeysToDisplay( gug_keys_to_display )
|
||||
domain_manager.SetURLClassKeysToDisplay( url_class_keys_to_display )
|
||||
|
||||
HG.client_controller.new_options.SetBoolean( 'show_unmatched_urls_in_media_viewer', show_unmatched_urls_in_media_viewer )
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -2606,24 +2613,110 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
|
|||
|
||||
def _ManageSubscriptions( self ):
|
||||
|
||||
def qt_do_it( subscriptions, original_pause_status ):
|
||||
def qt_do_it( subscriptions, missing_query_log_container_names, surplus_query_log_container_names, original_pause_status ):
|
||||
|
||||
if len( missing_query_log_container_names ) > 0:
|
||||
|
||||
text = '{} subscription queries had missing database data! This is a serious error!'.format( HydrusData.ToHumanInt( len( missing_query_log_container_names ) ) )
|
||||
text += os.linesep * 2
|
||||
text += 'If you continue, the client will now create and save empty file/gallery logs for those queries, essentially resetting them, but if you know you need to exit and fix your database in a different way, cancel out now.'
|
||||
text += os.linesep * 2
|
||||
text += 'If you do not know why this happened, you may have had a hard drive fault. Please consult "install_dir/db/help my db is broke.txt", and you may want to contact hydrus dev.'
|
||||
|
||||
result = ClientGUIDialogsQuick.GetYesNo( self, text, title = 'Missing Query Logs!', yes_label = 'continue', no_label = 'back out' )
|
||||
|
||||
if result == QW.QDialog.Accepted:
|
||||
|
||||
from hydrus.client.importing import ClientImportSubscriptionQuery
|
||||
|
||||
for missing_query_log_container_name in missing_query_log_container_names:
|
||||
|
||||
query_log_container = ClientImportSubscriptionQuery.SubscriptionQueryLogContainer( missing_query_log_container_name )
|
||||
|
||||
HG.client_controller.WriteSynchronous( 'serialisable', query_log_container )
|
||||
|
||||
|
||||
for subscription in subscriptions:
|
||||
|
||||
for query_header in subscription.GetQueryHeaders():
|
||||
|
||||
if query_header.GetQueryLogContainerName() in missing_query_log_container_names:
|
||||
|
||||
query_header.Reset( query_log_container )
|
||||
|
||||
|
||||
|
||||
|
||||
HG.client_controller.subscriptions_manager.SetSubscriptions( subscriptions ) # save the reset
|
||||
|
||||
else:
|
||||
|
||||
return
|
||||
|
||||
|
||||
|
||||
if len( surplus_query_log_container_names ) > 0:
|
||||
|
||||
text = 'When loading subscription data, the client discovered surplus orphaned subscription data for {} queries! This data is harmless and no longer used. The situation is however unusual, and probably due to an unusual deletion routine or a bug.'.format( HydrusData.ToHumanInt( len( surplus_query_log_container_names ) ) )
|
||||
text += os.linesep * 2
|
||||
text += 'If you continue, this surplus data will be backed up to your database directory and then safely deleted from the database itself, but if you recently did manual database editing and know you need to exit and fix your database in a different way, cancel out now.'
|
||||
text += os.linesep * 2
|
||||
text += 'If you do not know why this happened, hydrus dev would be interested in being told about it and the surrounding circumstances.'
|
||||
|
||||
result = ClientGUIDialogsQuick.GetYesNo( self, text, title = 'Orphan Query Logs!', yes_label = 'continue', no_label = 'back out' )
|
||||
|
||||
if result == QW.QDialog.Accepted:
|
||||
|
||||
sub_dir = os.path.join( self._controller.GetDBDir(), 'orphaned_query_log_containers' )
|
||||
|
||||
HydrusPaths.MakeSureDirectoryExists( sub_dir )
|
||||
|
||||
for surplus_query_log_container_name in surplus_query_log_container_names:
|
||||
|
||||
surplus_query_log_container = HG.client_controller.Read( 'serialisable_named', HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LOG_CONTAINER, surplus_query_log_container_name )
|
||||
|
||||
backup_path = os.path.join( sub_dir, 'qlc_{}.json'.format( surplus_query_log_container_name ) )
|
||||
|
||||
with open( backup_path, 'w', encoding = 'utf-8' ) as f:
|
||||
|
||||
f.write( surplus_query_log_container.DumpToString() )
|
||||
|
||||
|
||||
HG.client_controller.WriteSynchronous( 'delete_serialisable_named', HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LOG_CONTAINER, surplus_query_log_container_name )
|
||||
|
||||
|
||||
else:
|
||||
|
||||
return
|
||||
|
||||
|
||||
|
||||
title = 'manage subscriptions'
|
||||
frame_key = 'manage_subscriptions_dialog'
|
||||
|
||||
with ClientGUITopLevelWindowsPanels.DialogEdit( self, title, frame_key ) as dlg:
|
||||
|
||||
panel = ClientGUIScrolledPanelsEdit.EditSubscriptionsPanel( dlg, subscriptions, original_pause_status )
|
||||
panel = ClientGUISubscriptions.EditSubscriptionsPanel( dlg, subscriptions, original_pause_status )
|
||||
|
||||
dlg.SetPanel( panel )
|
||||
|
||||
if dlg.exec() == QW.QDialog.Accepted:
|
||||
|
||||
subscriptions = panel.GetValue()
|
||||
( subscriptions, edited_query_log_containers, deletee_query_log_container_names ) = panel.GetValue()
|
||||
|
||||
for edited_query_log_container in edited_query_log_containers:
|
||||
|
||||
HG.client_controller.Write( 'serialisable', edited_query_log_container )
|
||||
|
||||
|
||||
HG.client_controller.Write( 'serialisables_overwrite', [ HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION ], subscriptions )
|
||||
|
||||
HG.client_controller.subscriptions_manager.NewSubscriptions( subscriptions )
|
||||
for deletee_query_log_container_name in deletee_query_log_container_names:
|
||||
|
||||
HG.client_controller.Write( 'delete_serialisable_named', HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LOG_CONTAINER, deletee_query_log_container_name )
|
||||
|
||||
|
||||
HG.client_controller.subscriptions_manager.SetSubscriptions( subscriptions )
|
||||
|
||||
|
||||
|
||||
|
@ -2664,40 +2757,24 @@ class FrameGUI( ClientGUITopLevelWindows.MainFrameThatResizes ):
|
|||
|
||||
|
||||
|
||||
job_key = ClientThreading.JobKey( cancellable = True )
|
||||
subscriptions = HG.client_controller.subscriptions_manager.GetSubscriptions()
|
||||
|
||||
job_key.SetVariable( 'popup_title', 'loading subscriptions' )
|
||||
expected_query_log_container_names = set()
|
||||
|
||||
controller.CallLater( 1.0, controller.pub, 'message', job_key )
|
||||
|
||||
subscription_names = HG.client_controller.Read( 'serialisable_names', HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION )
|
||||
|
||||
num_to_do = len( subscription_names )
|
||||
|
||||
subscriptions = []
|
||||
|
||||
for ( i, name ) in enumerate( subscription_names ):
|
||||
for subscription in subscriptions:
|
||||
|
||||
if job_key.IsCancelled():
|
||||
|
||||
job_key.Delete()
|
||||
|
||||
return
|
||||
|
||||
|
||||
job_key.SetVariable( 'popup_text_1', HydrusData.ConvertValueRangeToPrettyString( i + 1, num_to_do ) + ': ' + name )
|
||||
job_key.SetVariable( 'popup_gauge_1', ( i + 1, num_to_do ) )
|
||||
|
||||
subscription = HG.client_controller.Read( 'serialisable_named', HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION, name )
|
||||
|
||||
subscriptions.append( subscription )
|
||||
expected_query_log_container_names.update( subscription.GetAllQueryLogContainerNames() )
|
||||
|
||||
|
||||
job_key.Delete()
|
||||
actual_query_log_container_names = set( HG.client_controller.Read( 'serialisable_names', HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LOG_CONTAINER ) )
|
||||
|
||||
missing_query_log_container_names = expected_query_log_container_names.difference( actual_query_log_container_names )
|
||||
|
||||
surplus_query_log_container_names = actual_query_log_container_names.difference( expected_query_log_container_names )
|
||||
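The reconciliation in this hunk reduces to two set differences between the query log container names the subscriptions expect and the names actually stored in the database. A tiny standalone sketch (the function name and arguments are illustrative):

```python
def reconcile(expected_names, actual_names):
    # Subscriptions declare which containers they expect; the database
    # reports which actually exist. Two set differences classify mismatches.
    expected = set(expected_names)
    actual = set(actual_names)
    missing = expected - actual   # declared by a sub but absent from the db: must be recreated
    surplus = actual - expected   # present in the db but owned by no sub: back up and delete
    return missing, surplus
```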
|
||||
try:
|
||||
|
||||
controller.CallBlockingToQt( self, qt_do_it, subscriptions, original_pause_status )
|
||||
controller.CallBlockingToQt( self, qt_do_it, subscriptions, missing_query_log_container_names, surplus_query_log_container_names, original_pause_status )
|
||||
|
||||
except HydrusExceptions.QtDeadWindowException:
|
||||
|
||||
|
@ -3860,6 +3937,8 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
|
|||
|
||||
else:
|
||||
|
||||
HG.client_controller.pub( 'pause_all_media' )
|
||||
|
||||
title = job_key.GetIfHasVariable( 'popup_title' )
|
||||
|
||||
if title is None:
|
||||
|
@ -4372,7 +4451,7 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
|
|||
|
||||
if self._controller.new_options.GetBoolean( 'advanced_mode' ):
|
||||
|
||||
ClientGUIMenus.AppendMenuItem( submenu, 'nudge subscriptions awake', 'Tell the subs daemon to wake up, just in case any subs are due.', self._controller.subscriptions_manager.ClearCacheAndWake )
|
||||
ClientGUIMenus.AppendMenuItem( submenu, 'nudge subscriptions awake', 'Tell the subs daemon to wake up, just in case any subs are due.', self._controller.subscriptions_manager.Wake )
|
||||
|
||||
|
||||
ClientGUIMenus.AppendSeparator( submenu )
|
||||
|
@ -5995,6 +6074,8 @@ The password is cleartext here but obscured in the entry dialog. Enter a blank p
|
|||
self._controller.CreateSplash( 'hydrus client exiting' )
|
||||
|
||||
|
||||
HG.client_controller.pub( 'pause_all_media' )
|
||||
|
||||
try:
|
||||
|
||||
if QP.isValid( self._message_manager ):
|
||||
|
|
|
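The reconciliation in the hunk above is plain set arithmetic: containers the subscriptions expect but the database lacks are "missing", containers the database holds but no subscription references are "surplus". A minimal standalone sketch of the same logic (the function and names are illustrative, not hydrus's):

```python
def reconcile( expected_names, actual_names ):
    
    # names the subscriptions reference
    expected = set( expected_names )
    
    # names actually present in the db
    actual = set( actual_names )
    
    missing = expected.difference( actual ) # containers we need but do not have
    surplus = actual.difference( expected ) # orphaned containers we could clear out
    
    return ( missing, surplus )
    

( missing, surplus ) = reconcile( [ 'q1', 'q2' ], [ 'q2', 'q3' ] )
```

Here `missing` comes out as `{'q1'}` and `surplus` as `{'q3'}`, which is exactly the pair the patched code hands on to the Qt-side handler.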
@@ -361,7 +361,7 @@ def ShouldDoExactSearch( entry_text ):
  return 0 < len( test_text ) <= autocomplete_exact_match_threshold

- def WriteFetch( win, job_key, results_callable, parsed_autocomplete_text: ClientSearch.ParsedAutocompleteText, file_service_key: bytes, tag_service_key: bytes, expand_parents: bool, results_cache: ClientSearch.PredicateResultsCache ):
+ def WriteFetch( win, job_key, results_callable, parsed_autocomplete_text: ClientSearch.ParsedAutocompleteText, file_service_key: bytes, tag_service_key: bytes, expand_parents: bool, display_tag_service_key, results_cache: ClientSearch.PredicateResultsCache ):

      tag_search_context = ClientSearch.TagSearchContext( service_key = tag_service_key )

@@ -387,7 +387,7 @@ def WriteFetch( win, job_key, results_callable, parsed_autocomplete_text: Client
  results_cache = ClientSearch.PredicateResultsCacheTag( predicates, strict_search_text, True )

- matches = results_cache.FilterPredicates( tag_service_key, strict_search_text )
+ matches = results_cache.FilterPredicates( display_tag_service_key, strict_search_text )

  else:

@@ -402,7 +402,7 @@ def WriteFetch( win, job_key, results_callable, parsed_autocomplete_text: Client
  if cache_valid:

-     matches = results_cache.FilterPredicates( tag_service_key, autocomplete_search_text )
+     matches = results_cache.FilterPredicates( display_tag_service_key, autocomplete_search_text )

  else:

@@ -412,13 +412,13 @@ def WriteFetch( win, job_key, results_callable, parsed_autocomplete_text: Client
  if is_explicit_wildcard:

-     matches = ClientSearch.FilterPredicatesBySearchText( tag_service_key, autocomplete_search_text, predicates )
+     matches = ClientSearch.FilterPredicatesBySearchText( display_tag_service_key, autocomplete_search_text, predicates )

  else:

      results_cache = ClientSearch.PredicateResultsCacheTag( predicates, strict_search_text, False )

-     matches = results_cache.FilterPredicates( tag_service_key, autocomplete_search_text )
+     matches = results_cache.FilterPredicates( display_tag_service_key, autocomplete_search_text )

@@ -426,11 +426,11 @@ def WriteFetch( win, job_key, results_callable, parsed_autocomplete_text: Client
  matches = ClientSearch.SortPredicates( matches )

- InsertTagPredicates( matches, tag_service_key, parsed_autocomplete_text )
+ InsertTagPredicates( matches, display_tag_service_key, parsed_autocomplete_text )

  if expand_parents:

-     matches = HG.client_controller.tag_parents_manager.ExpandPredicates( tag_service_key, matches )
+     matches = HG.client_controller.tag_parents_manager.ExpandPredicates( display_tag_service_key, matches )

  HG.client_controller.CallLaterQtSafe( win, 0.0, results_callable, job_key, parsed_autocomplete_text, results_cache, matches )

@@ -2195,6 +2195,8 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):
  def __init__( self, parent, chosen_tag_callable, expand_parents, file_service_key, tag_service_key, null_entry_callable = None, tag_service_key_changed_callable = None, show_paste_button = False ):

+     self._display_tag_service_key = tag_service_key
+
      self._chosen_tag_callable = chosen_tag_callable
      self._expand_parents = expand_parents
      self._null_entry_callable = null_entry_callable

@@ -2278,7 +2280,7 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):
  def _InitFavouritesList( self ):

-     favs_list = ListBoxTagsACWrite( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, self._float_mode, height_num_chars = self._list_height_num_chars )
+     favs_list = ListBoxTagsACWrite( self._dropdown_notebook, self.BroadcastChoices, self._display_tag_service_key, self._float_mode, height_num_chars = self._list_height_num_chars )

      return favs_list

@@ -2287,7 +2289,7 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):
  self._list_height_num_chars = 8

- return ListBoxTagsACWrite( self._dropdown_notebook, self.BroadcastChoices, self._tag_service_key, self._float_mode, height_num_chars = self._list_height_num_chars )
+ return ListBoxTagsACWrite( self._dropdown_notebook, self.BroadcastChoices, self._display_tag_service_key, self._float_mode, height_num_chars = self._list_height_num_chars )

  def _Paste( self ):

@@ -2326,7 +2328,6 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):
  def _ShouldTakeResponsibilityForEnter( self ):

      parsed_autocomplete_text = self._GetParsedAutocompleteText()

@@ -2355,18 +2356,18 @@ class AutoCompleteDropdownTagsWrite( AutoCompleteDropdownTags ):
  stub_predicates = []

- InsertTagPredicates( stub_predicates, self._tag_service_key, parsed_autocomplete_text )
+ InsertTagPredicates( stub_predicates, self._display_tag_service_key, parsed_autocomplete_text )

  if self._expand_parents:

-     stub_predicates = HG.client_controller.tag_parents_manager.ExpandPredicates( self._tag_service_key, stub_predicates )
+     stub_predicates = HG.client_controller.tag_parents_manager.ExpandPredicates( self._display_tag_service_key, stub_predicates )

  AppendLoadingPredicate( stub_predicates )

  HG.client_controller.CallLaterQtSafe( self, 0.2, self.SetStubPredicates, job_key, stub_predicates, parsed_autocomplete_text )

- HG.client_controller.CallToThread( WriteFetch, self, job_key, self.SetFetchedResults, parsed_autocomplete_text, self._file_service_key, self._tag_service_key, self._expand_parents, self._results_cache )
+ HG.client_controller.CallToThread( WriteFetch, self, job_key, self.SetFetchedResults, parsed_autocomplete_text, self._file_service_key, self._tag_service_key, self._expand_parents, self._display_tag_service_key, self._results_cache )

  def _TakeResponsibilityForEnter( self, shift_down ):
@@ -442,18 +442,20 @@ class Canvas( QW.QWidget ):
  def _CopyBMPToClipboard( self ):

+     copied = False
+
      if self._current_media is not None:

          if self._current_media.GetMime() in HC.IMAGES:

              HG.client_controller.pub( 'clipboard', 'bmp', self._current_media )

-         else:
-
-             QW.QMessageBox.critical( self, 'Error', 'Sorry, cannot take bmps of anything but static images right now!' )
+             copied = True

+     return copied

  def _CopyHashToClipboard( self, hash_type ):

@@ -1459,6 +1461,15 @@ class Canvas( QW.QWidget ):
  self._CopyBMPToClipboard()

+ elif action == 'copy_bmp_or_file_if_not_bmpable':
+
+     copied = self._CopyBMPToClipboard()
+
+     if not copied:
+
+         self._CopyFileToClipboard()
+
  elif action == 'copy_file':

      self._CopyFileToClipboard()

@@ -2476,6 +2487,13 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
  self._processed_pairs = []
  self._hashes_due_to_be_deleted_in_this_batch = set()

+ # ok we started excluding pairs if they had been deleted, now I am extending it to any files that have been processed.
+ # main thing is if you have AB, AC, that's neat and a bunch of people want it, but current processing system doesn't do B->A->C merge if it happens in a single batch
+ # I need to store dupe merge options rather than content updates apply them in db transaction or do the retroactive sync or similar to get this done properly
+ # so regrettably I turn it off for now
+
+ self._hashes_processed_in_this_batch = set()
+
  file_service_key = self._file_search_context.GetFileServiceKey()

  self._media_list = ClientMedia.ListeningMediaList( file_service_key, [] )

@@ -2523,6 +2541,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
  self._processed_pairs = []
  self._hashes_due_to_be_deleted_in_this_batch = set()
+ self._hashes_processed_in_this_batch = set()

  def _CurrentMediaIsBetter( self, delete_second = True ):

@@ -2787,6 +2806,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
  self._hashes_due_to_be_deleted_in_this_batch.difference_update( hash_pair )
+ self._hashes_processed_in_this_batch.difference_update( hash_pair )

  self._ShowNewPair()

@@ -2833,6 +2853,9 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
  was_auto_skipped = False

+ self._hashes_processed_in_this_batch.update( first_media.GetHashes() )
+ self._hashes_processed_in_this_batch.update( second_media.GetHashes() )
+
  if delete_first or delete_second or delete_both:

      if delete_first or delete_both:

@@ -2935,6 +2958,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
  self._hashes_due_to_be_deleted_in_this_batch.difference_update( hash_pair )
+ self._hashes_processed_in_this_batch.difference_update( hash_pair )

@@ -2943,6 +2967,7 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
  if len( self._unprocessed_pairs ) == 0:

      self._hashes_due_to_be_deleted_in_this_batch = set()
+     self._hashes_processed_in_this_batch = set()
      self._processed_pairs = [] # just in case someone 'skip'ed everything in the last batch, so this never got cleared above

      self.ClearMedia()

@@ -2961,6 +2986,11 @@ class CanvasFilterDuplicates( CanvasWithHovers ):
  ( first_hash, second_hash ) = pair

+ if first_hash in self._hashes_processed_in_this_batch or second_hash in self._hashes_processed_in_this_batch:
+
+     return False
+
  if first_hash in self._hashes_due_to_be_deleted_in_this_batch or second_hash in self._hashes_due_to_be_deleted_in_this_batch:

      return False
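The duplicate filter hunks above thread two sets through the batch lifecycle: hashes already processed this batch and hashes queued for deletion, both consulted before a pair is shown and both rewound when a pair is undone or the batch ends. A hypothetical, self-contained sketch of that bookkeeping (the real code keeps these sets on the canvas itself and works with media objects, not bare strings):

```python
class DuplicateBatch:
    
    def __init__( self ):
        
        self._hashes_processed_in_this_batch = set()
        self._hashes_due_to_be_deleted_in_this_batch = set()
        
    
    def CanShowPair( self, first_hash, second_hash ):
        
        # skip pairs touching a file we already merged this batch
        if first_hash in self._hashes_processed_in_this_batch or second_hash in self._hashes_processed_in_this_batch:
            
            return False
            
        
        # skip pairs touching a file that is going to be deleted
        if first_hash in self._hashes_due_to_be_deleted_in_this_batch or second_hash in self._hashes_due_to_be_deleted_in_this_batch:
            
            return False
            
        
        return True
        
    
    def CommitPair( self, first_hash, second_hash, deletee_hashes ):
        
        self._hashes_processed_in_this_batch.update( ( first_hash, second_hash ) )
        self._hashes_due_to_be_deleted_in_this_batch.update( deletee_hashes )
        
    
    def UndoPair( self, first_hash, second_hash ):
        
        # mirror of the difference_update calls in the hunks above
        self._hashes_processed_in_this_batch.difference_update( ( first_hash, second_hash ) )
        self._hashes_due_to_be_deleted_in_this_batch.difference_update( ( first_hash, second_hash ) )
```

As the in-diff comment notes, excluding already-processed files sacrifices within-batch B->A->C merges in exchange for not showing stale pairs.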
@@ -917,6 +917,8 @@ class MediaContainer( QW.QWidget ):
  self.hide()

+ HG.client_controller.sub( self, 'Pause', 'pause_all_media' )
+
  def _DestroyOrHideThisMediaWindow( self, media_window ):
@@ -342,9 +342,9 @@ class EditFileSeedCachePanel( ClientGUIScrolledPanels.EditPanel ):
  def _UpdateText( self ):

-     ( status, simple_status, ( total_processed, total ) ) = self._file_seed_cache.GetStatus()
+     file_seed_cache_status = self._file_seed_cache.GetStatus()

-     self._text.setText( status )
+     self._text.setText( file_seed_cache_status.GetStatusText() )

  def GetValue( self ):

@@ -510,7 +510,7 @@ class FileSeedCacheButton( ClientGUICommon.BetterBitmapButton ):
  file_seed_cache = self._file_seed_cache_get_callable()

- file_seed_cache.RetryFailures()
+ file_seed_cache.RetryFailed()

@@ -758,9 +758,11 @@ class FileSeedCacheStatusControl( QW.QFrame ):
  else:

-     ( import_summary, simple_status, ( num_done, num_to_do ) ) = self._file_seed_cache.GetStatus()
+     file_seed_cache_status = self._file_seed_cache.GetStatus()

-     self._import_summary_st.setText( import_summary )
+     ( num_done, num_to_do ) = file_seed_cache_status.GetValueRange()
+
+     self._import_summary_st.setText( file_seed_cache_status.GetStatusText() )

      if num_to_do == 0:

@@ -797,7 +799,9 @@ class FileSeedCacheStatusControl( QW.QFrame ):
  if self._file_seed_cache is not None:

-     ( import_summary, simple_status, ( num_done, num_to_do ) ) = self._file_seed_cache.GetStatus()
+     file_seed_cache_status = self._file_seed_cache.GetStatus()
+
+     ( num_done, num_to_do ) = file_seed_cache_status.GetValueRange()

      ( old_num_done, old_num_to_do ) = self._progress_gauge.GetValueRange()
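These hunks (and the management-panel ones further down) migrate callers from the old `GetStatus()` three-tuple of `( status, simple_status, ( done, total ) )` to a small status object with accessor methods. A hypothetical sketch of what such an object might look like; hydrus's real `FileSeedCacheStatus` carries more state than this:

```python
class FileSeedCacheStatus:
    
    def __init__( self, status_text, simple_status_text, value_range ):
        
        self._status_text = status_text
        self._simple_status_text = simple_status_text
        self._value_range = value_range # ( num_done, num_total )
        
    
    def GetStatusText( self, simple = False ):
        
        # one accessor replaces the two positional tuple slots
        return self._simple_status_text if simple else self._status_text
        
    
    def GetValueRange( self ):
        
        return self._value_range
        

status = FileSeedCacheStatus( '3/10 done - working', '3/10', ( 3, 10 ) )
```

The win is that callers which only need the progress gauge, like the second hunk above, no longer have to unpack and discard the text fields, and new fields can be added without breaking every unpacking site.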
@@ -466,7 +466,7 @@ class GallerySeedLogButton( ClientGUICommon.BetterBitmapButton ):
  gallery_seed_log = self._gallery_seed_log_get_callable()

- gallery_seed_log.RetryFailures()
+ gallery_seed_log.RetryFailed()
@@ -1635,10 +1635,17 @@ class ListBox( QW.QScrollArea ):
  if len( self._ordered_terms ) > 0:

-     self._selected_terms = set()
+     if len( self._selected_terms ) == 1 and self._IsSelected( 0 ):
+
+         return
+
+     self._DeselectAll()

      self._Hit( False, False, 0 )

      self.widget().update()

  def SetMinimumHeightNumChars( self, minimum_height_num_chars ):

@@ -1821,7 +1828,10 @@ class ListBoxTags( ListBox ):
  if isinstance( term, ClientSearch.Predicate ):

-     predicates.append( term )
+     if term.GetType() not in ( ClientSearch.PREDICATE_TYPE_LABEL, ClientSearch.PREDICATE_TYPE_NAMESPACE, ClientSearch.PREDICATE_TYPE_PARENT ):
+
+         predicates.append( term )
+
  else:
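The first ListBox hunk adds an early-out: selecting the first item becomes a no-op when it is already the sole selection, so the list is not needlessly deselected and redrawn. A toy model of the same guard (hypothetical class; the real widget tracks term objects and repaints, not a counter):

```python
class SelectionModel:
    
    def __init__( self, num_items ):
        
        self._num_items = num_items
        self._selected = set()
        self.rebuild_count = 0 # stands in for the deselect/hit/repaint work
        
    
    def SelectFirst( self ):
        
        if self._num_items == 0:
            
            return
            
        
        if self._selected == { 0 }:
            
            return # index 0 is already exactly the selection: nothing to do
            
        
        # deselect all, then hit index 0
        self._selected = { 0 }
        self.rebuild_count += 1
```

The guard matters when the caller fires on every refresh, as the RecentTagsPanel change below does.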
@@ -1856,11 +1856,13 @@ class ManagementPanelImporterMultipleGallery( ManagementPanelImporter ):
  pretty_status = status

- ( file_seed_cache_status, file_seed_cache_simple_status, ( num_done, num_total ) ) = gallery_import.GetFileSeedCache().GetStatus()
+ file_seed_cache_status = gallery_import.GetFileSeedCache().GetStatus()
+
+ ( num_done, num_total ) = file_seed_cache_status.GetValueRange()

  progress = ( num_total, num_done )

- pretty_progress = file_seed_cache_simple_status
+ pretty_progress = file_seed_cache_status.GetStatusText( simple = True )

  added = gallery_import.GetCreationTime()

@@ -2232,10 +2234,12 @@ class ManagementPanelImporterMultipleGallery( ManagementPanelImporter ):
  else:

-     ( status, simple_status, ( value, range ) ) = self._multiple_gallery_import.GetTotalStatus()
+     file_seed_cache_status = self._multiple_gallery_import.GetTotalStatus()

-     text_top = HydrusData.ToHumanInt( num_gallery_imports ) + ' queries - ' + HydrusData.ConvertValueRangeToPrettyString( value, range )
-     text_bottom = status
+     ( num_done, num_total ) = file_seed_cache_status.GetValueRange()
+
+     text_top = '{} queries - {}'.format( HydrusData.ToHumanInt( num_gallery_imports ), HydrusData.ConvertValueRangeToPrettyString( num_done, num_total ) )
+     text_bottom = file_seed_cache_status.GetStatusText()

  self._gallery_importers_status_st_top.setText( text_top )

@@ -2561,11 +2565,13 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
  pretty_checking_paused = ''

- ( status, simple_status, ( num_done, num_total ) ) = watcher.GetFileSeedCache().GetStatus()
+ file_seed_cache_status = watcher.GetFileSeedCache().GetStatus()
+
+ ( num_done, num_total ) = file_seed_cache_status.GetValueRange()

  progress = ( num_total, num_done )

- pretty_progress = simple_status
+ pretty_progress = file_seed_cache_status.GetStatusText( simple = True )

  added = watcher.GetCreationTime()

@@ -2953,10 +2959,12 @@ class ManagementPanelImporterMultipleWatcher( ManagementPanelImporter ):
  num_dead_text = HydrusData.ToHumanInt( num_dead ) + ' DEAD - '

- ( status, simple_status, ( value, range ) ) = self._multiple_watcher_import.GetTotalStatus()
+ file_seed_cache_status = self._multiple_watcher_import.GetTotalStatus()

- text_top = HydrusData.ToHumanInt( num_watchers ) + ' watchers - ' + num_dead_text + HydrusData.ConvertValueRangeToPrettyString( value, range )
- text_bottom = status
+ ( num_done, num_total ) = file_seed_cache_status.GetValueRange()
+
+ text_top = '{} watchers - {}'.format( HydrusData.ToHumanInt( num_watchers ), HydrusData.ConvertValueRangeToPrettyString( num_done, num_total ) )
+ text_bottom = file_seed_cache_status.GetStatusText()

  self._watchers_status_st_top.setText( text_top )
@@ -943,11 +943,6 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
  QP.TabWidgetWithDnD.__init__( self, parent )

- if HG.client_controller.new_options.GetBoolean( 'elide_page_tab_names' ):
-
-     self.tabBar().setElideMode( QC.Qt.ElideMiddle )
-
  self._parent_notebook = parent

  # this is disabled for now because it seems borked in Qt

@@ -973,6 +968,7 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
  self._controller.sub( self, 'RefreshPageName', 'refresh_page_name' )
  self._controller.sub( self, 'NotifyPageUnclosed', 'notify_page_unclosed' )
+ self._controller.sub( self, '_UpdatePageTabEliding', 'notify_new_options' )

  self._widget_event_filter = QP.WidgetEventFilter( self )
  self._widget_event_filter.EVT_LEFT_DCLICK( self.EventLeftDoubleClick )

@@ -987,6 +983,8 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
  self._previous_page_index = -1

+ self._UpdatePageTabEliding()
+
  def _RefreshPageNamesAfterDnD( self, page_widget, source_widget ):

@@ -1003,6 +1001,18 @@ class PagesNotebook( QP.TabWidgetWithDnD ):
+ def _UpdatePageTabEliding( self ):
+
+     if HG.client_controller.new_options.GetBoolean( 'elide_page_tab_names' ):
+
+         self.tabBar().setElideMode( QC.Qt.ElideMiddle )
+
+     else:
+
+         self.tabBar().setElideMode( QC.Qt.ElideNone )
+
  def _UpdatePreviousPageIndex( self ):

      self._previous_page_index = self.currentIndex()
@@ -1141,10 +1141,10 @@ class ReviewServicePanel( QW.QWidget ):
  reset_menu_items = []

- reset_menu_items.append( ( 'normal', 'reprocess definitions', 'Reprocess all definitions.', self._ReprocessDefinitions ) )
- reset_menu_items.append( ( 'normal', 'reprocess content', 'Reprocess all content.', self._ReprocessContent ) )
+ reset_menu_items.append( ( 'normal', 'fill in definition gaps', 'Reprocess all definitions.', self._ReprocessDefinitions ) )
+ reset_menu_items.append( ( 'normal', 'fill in content gaps', 'Reprocess all content.', self._ReprocessContent ) )
  reset_menu_items.append( ( 'separator', None, None, None ) )
- reset_menu_items.append( ( 'normal', 'complete wipe and reset', 'Reset entire repository.', self._Reset ) )
+ reset_menu_items.append( ( 'normal', 'wipe database data and reprocess from update files', 'Reset entire repository.', self._Reset ) )

  self._reset_button = ClientGUICommon.MenuButton( self, 'reset processing', reset_menu_items )

@@ -1364,7 +1364,7 @@ class ReviewServicePanel( QW.QWidget ):
  name = self._service.GetName()

- message = 'This will delete all the processed information for ' + name + ' from the database.' + os.linesep * 2 + 'Once the service is reset, you will have to reprocess everything that has been downloaded over again. The client will naturally do this in its idle time as before, just starting over from the beginning.' + os.linesep * 2 + 'If you do not understand what this does, click no!'
+ message = 'This will delete all the processed information for ' + name + ' from the database.' + os.linesep * 2 + 'Once the service is reset, you will have to reprocess everything from your downloaded update files. The client will naturally do this in its idle time as before, just starting over from the beginning.' + os.linesep * 2 + 'If you do not understand what this does, click no!'

  result = ClientGUIDialogsQuick.GetYesNo( self, message )
@@ -146,6 +146,8 @@ class MediaPanel( ClientMedia.ListeningMediaList, QW.QScrollArea ):
  def _CopyBMPToClipboard( self ):

+     copied = False
+
-     if self._focused_media is not None:
+     if self._HasFocusSingleton():

@@ -156,13 +158,13 @@ class MediaPanel( ClientMedia.ListeningMediaList, QW.QScrollArea ):
  HG.client_controller.pub( 'clipboard', 'bmp', media )

- else:
-
-     QW.QMessageBox.critical( self, 'Error', 'Sorry, cannot take bmps of anything but static images right now!' )
+ copied = True

+ return copied

  def _CopyFilesToClipboard( self ):

@@ -1852,6 +1854,15 @@ class MediaPanel( ClientMedia.ListeningMediaList, QW.QScrollArea ):
  self._CopyBMPToClipboard()

+ elif action == 'copy_bmp_or_file_if_not_bmpable':
+
+     copied = self._CopyBMPToClipboard()
+
+     if not copied:
+
+         self._CopyFilesToClipboard()
+
  elif action == 'copy_file':

      self._CopyFilesToClipboard()
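Both `_CopyBMPToClipboard` rewrites follow the same pattern: the bitmap copy now reports whether it succeeded, which lets the new `copy_bmp_or_file_if_not_bmpable` action fall back to copying the file itself. A toy sketch of that pattern, with stand-in functions and illustrative mime strings in place of the Qt clipboard plumbing:

```python
IMAGE_MIMES = { 'image/png', 'image/jpeg' } # illustrative subset of HC.IMAGES

def copy_bmp_to_clipboard( mime ):
    
    copied = False
    
    if mime in IMAGE_MIMES:
        
        # the real code pubs the media to the clipboard here
        copied = True
        
    
    return copied
    

def copy_bmp_or_file( mime ):
    
    copied = copy_bmp_to_clipboard( mime )
    
    if not copied:
        
        return 'copied file' # fallback for videos, pdfs, etc.
        
    
    return 'copied bmp'
```

Returning a success flag instead of raising an error dialog is what makes the compound action composable: the plain `copy_bmp` shortcut can ignore the flag, while the compound one branches on it.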
@@ -214,7 +214,7 @@ shortcut_names_to_descriptions[ 'preview_media_window' ] = 'Actions for any vide
  SHORTCUTS_RESERVED_NAMES = [ 'global', 'archive_delete_filter', 'duplicate_filter', 'media', 'main_gui', 'media_viewer_browser', 'media_viewer', 'media_viewer_media_window', 'preview_media_window' ]

  SHORTCUTS_GLOBAL_ACTIONS = [ 'global_audio_mute', 'global_audio_unmute', 'global_audio_mute_flip', 'exit_application', 'exit_application_force_maintenance', 'restart_application', 'hide_to_system_tray' ]
- SHORTCUTS_MEDIA_ACTIONS = [ 'manage_file_tags', 'manage_file_ratings', 'manage_file_urls', 'manage_file_notes', 'archive_file', 'inbox_file', 'delete_file', 'undelete_file', 'export_files', 'export_files_quick_auto_export', 'remove_file_from_view', 'open_file_in_external_program', 'open_selection_in_new_page', 'launch_the_archive_delete_filter', 'copy_bmp', 'copy_file', 'copy_path', 'copy_sha256_hash', 'get_similar_to_exact', 'get_similar_to_very_similar', 'get_similar_to_similar', 'get_similar_to_speculative', 'duplicate_media_set_alternate', 'duplicate_media_set_alternate_collections', 'duplicate_media_set_custom', 'duplicate_media_set_focused_better', 'duplicate_media_set_focused_king', 'duplicate_media_set_same_quality', 'open_known_url' ]
+ SHORTCUTS_MEDIA_ACTIONS = [ 'manage_file_tags', 'manage_file_ratings', 'manage_file_urls', 'manage_file_notes', 'archive_file', 'inbox_file', 'delete_file', 'undelete_file', 'export_files', 'export_files_quick_auto_export', 'remove_file_from_view', 'open_file_in_external_program', 'open_selection_in_new_page', 'launch_the_archive_delete_filter', 'copy_bmp', 'copy_bmp_or_file_if_not_bmpable', 'copy_file', 'copy_path', 'copy_sha256_hash', 'get_similar_to_exact', 'get_similar_to_very_similar', 'get_similar_to_similar', 'get_similar_to_speculative', 'duplicate_media_set_alternate', 'duplicate_media_set_alternate_collections', 'duplicate_media_set_custom', 'duplicate_media_set_focused_better', 'duplicate_media_set_focused_king', 'duplicate_media_set_same_quality', 'open_known_url' ]
  SHORTCUTS_MEDIA_VIEWER_ACTIONS = [ 'pause_media', 'pause_play_media', 'move_animation_to_previous_frame', 'move_animation_to_next_frame', 'switch_between_fullscreen_borderless_and_regular_framed_window', 'pan_up', 'pan_down', 'pan_left', 'pan_right', 'pan_top_edge', 'pan_bottom_edge', 'pan_left_edge', 'pan_right_edge', 'pan_vertical_center', 'pan_horizontal_center', 'zoom_in', 'zoom_out', 'switch_between_100_percent_and_canvas_zoom', 'flip_darkmode', 'close_media_viewer' ]
  SHORTCUTS_MEDIA_VIEWER_BROWSER_ACTIONS = [ 'view_next', 'view_first', 'view_last', 'view_previous', 'pause_play_slideshow', 'show_menu', 'close_media_viewer' ]
  SHORTCUTS_MAIN_GUI_ACTIONS = [ 'refresh', 'refresh_all_pages', 'refresh_page_of_pages_pages', 'new_page', 'new_page_of_pages', 'new_duplicate_filter_page', 'new_gallery_downloader_page', 'new_url_downloader_page', 'new_simple_downloader_page', 'new_watcher_downloader_page', 'synchronised_wait_switch', 'set_media_focus', 'show_hide_splitters', 'set_search_focus', 'unclose_page', 'close_page', 'redo', 'undo', 'flip_darkmode', 'check_all_import_folders', 'flip_debug_force_idle_mode_do_not_set_this', 'show_and_focus_manage_tags_favourite_tags', 'show_and_focus_manage_tags_related_tags', 'show_and_focus_manage_tags_file_lookup_script_tags', 'show_and_focus_manage_tags_recent_tags', 'focus_media_viewer' ]
@@ -231,6 +231,11 @@ class RecentTagsPanel( QW.QWidget ):
  self._UpdateTagDisplay()

+ if len( self._recent_tags.GetTags() ) > 0:
+
+     self._recent_tags.SelectTopItem()
+
  recent_tags = HG.client_controller.Read( 'recent_tags', service_key )

@@ -242,17 +247,10 @@ class RecentTagsPanel( QW.QWidget ):
  def _UpdateTagDisplay( self ):

-     had_selection_before = len( self._recent_tags.GetSelectedTags() ) > 0
-
      tags = FilterSuggestedTagsForMedia( self._last_fetched_tags, self._media, self._service_key )

      self._recent_tags.SetTags( tags )

-     if had_selection_before and len( tags ) > 0:
-
-         self._recent_tags.SelectTopItem()
-
  def EventClear( self ):
@@ -344,11 +344,11 @@ class EditTagFilterPanel( ClientGUIScrolledPanels.EditPanel ):
  TEST_RESULT_DEFAULT = 'Enter a tag here to test if it passes the current filter:'
  TEST_RESULT_BLACKLIST_DEFAULT = 'Enter a tag here to test if it passes the current filter in a tag import options blacklist (siblings tested, unnamespaced rules match namespaced tags):'

- def __init__( self, parent, tag_filter, prefer_blacklist = False, namespaces = None, message = None ):
+ def __init__( self, parent, tag_filter, only_show_blacklist = False, namespaces = None, message = None ):

      ClientGUIScrolledPanels.EditPanel.__init__( self, parent )

-     self._prefer_blacklist = prefer_blacklist
+     self._only_show_blacklist = only_show_blacklist
      self._namespaces = namespaces

      self._wildcard_replacements = {}

@@ -384,19 +384,19 @@ class EditTagFilterPanel( ClientGUIScrolledPanels.EditPanel ):
  #

- if self._prefer_blacklist:
+ if self._only_show_blacklist:

+     self._whitelist_panel.setVisible( False )
      self._notebook.addTab( self._blacklist_panel, 'blacklist' )
-     self._notebook.addTab( self._whitelist_panel, 'whitelist' )
+     self._advanced_panel.setVisible( False )

  else:

      self._notebook.addTab( self._whitelist_panel, 'whitelist' )
      self._notebook.addTab( self._blacklist_panel, 'blacklist' )
-     self._notebook.addTab( self._advanced_panel, 'advanced' )

+ self._notebook.addTab( self._advanced_panel, 'advanced' )

  #

  self._redundant_st = ClientGUICommon.BetterStaticText( self, '', ellipsize_end = True )

@@ -423,7 +423,11 @@ class EditTagFilterPanel( ClientGUIScrolledPanels.EditPanel ):
  if message is not None:

-     QP.AddToLayout( vbox, ClientGUICommon.BetterStaticText(self,message), CC.FLAGS_EXPAND_PERPENDICULAR )
+     st = ClientGUICommon.BetterStaticText( self, message )
+
+     st.setWordWrap( True )
+
+     QP.AddToLayout( vbox, st, CC.FLAGS_EXPAND_PERPENDICULAR )

  hbox = QP.HBoxLayout()

@@ -592,6 +596,8 @@ class EditTagFilterPanel( ClientGUIScrolledPanels.EditPanel ):
  def _CleanTagSliceInput( self, tag_slice ):

      tag_slice = tag_slice.lower().strip()

+     while '**' in tag_slice:
+
+         tag_slice = tag_slice.replace( '**', '*' )

@@ -1246,9 +1252,16 @@ class EditTagFilterPanel( ClientGUIScrolledPanels.EditPanel ):
  tag_filter = self.GetValue()

- pretty_tag_filter = tag_filter.ToPermittedString()
+ if self._only_show_blacklist:
+
+     pretty_tag_filter = tag_filter.ToBlacklistString()
+
+ else:
+
+     pretty_tag_filter = 'currently keeping: {}'.format( tag_filter.ToPermittedString() )

- self._current_filter_st.setText( 'currently keeping: '+pretty_tag_filter )
+ self._current_filter_st.setText( pretty_tag_filter )

  self._UpdateTest()

@@ -1388,11 +1401,9 @@ class EditTagFilterPanel( ClientGUIScrolledPanels.EditPanel ):
  selection_tests = []

- if self._prefer_blacklist:
+ if self._only_show_blacklist:

      selection_tests.append( ( blacklist_possible, self._blacklist_panel ) )
-     selection_tests.append( ( whitelist_possible, self._whitelist_panel ) )
-     selection_tests.append( ( True, self._advanced_panel ) )

  else:

@@ -1738,7 +1749,7 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
  menu_items = []

- check_manager = ClientGUICommon.CheckboxManagerOptions( 'add_parents_on_manage_tags' )
+ check_manager = ClientGUICommon.CheckboxManagerCalls( self._FlipExpandParents, lambda: self._new_options.GetBoolean( 'add_parents_on_manage_tags' ) )

  menu_items.append( ( 'check', 'auto-add entered tags\' parents on add/pend action', 'If checked, adding any tag that has parents will also add those parents.', check_manager ) )

@@ -2259,6 +2270,7 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
  addee_action = HC.CONTENT_UPDATE_ADD
  removee_action = HC.CONTENT_UPDATE_DELETE
+ other_removee_action = HC.CONTENT_UPDATE_RESCIND_PEND
  reason = None

  content_updates = []

@@ -2266,6 +2278,7 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
  for ( tag, hashes ) in removee_tags_to_hashes.items():

      content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, removee_action, ( tag, hashes ), reason = reason ) )
+     content_updates.append( HydrusData.ContentUpdate( HC.CONTENT_TYPE_MAPPINGS, other_removee_action, ( tag, hashes ), reason = reason ) )

  for ( tag, hashes ) in addee_tags_to_hashes.items():

@@ -2302,6 +2315,15 @@ class ManageTagsPanel( ClientGUIScrolledPanels.ManagePanel ):
  self._tags_box.SetTagsByMedia( self._media )

+ def _FlipExpandParents( self ):
+
+     value = not self._new_options.GetBoolean( 'add_parents_on_manage_tags' )
+
+     self._new_options.SetBoolean( 'add_parents_on_manage_tags', value )
+
+     self._add_tag_box.SetExpandParents( value )
+
  def _FlipShowDeleted( self ):

      self._show_deleted = not self._show_deleted

@@ -4258,13 +4280,13 @@ class ManageTagSiblings( ClientGUIScrolledPanels.ManagePanel ):
  class TagFilterButton( ClientGUICommon.BetterButton ):

-     def __init__( self, parent, message, tag_filter, is_blacklist = False, label_prefix = None ):
+     def __init__( self, parent, message, tag_filter, only_show_blacklist = False, label_prefix = None ):

          ClientGUICommon.BetterButton.__init__( self, parent, 'tag filter', self._EditTagFilter )

          self._message = message
          self._tag_filter = tag_filter
-         self._is_blacklist = is_blacklist
+         self._only_show_blacklist = only_show_blacklist
          self._label_prefix = label_prefix

          self._UpdateLabel()

@@ -4272,11 +4294,20 @@ class TagFilterButton( ClientGUICommon.BetterButton ):
  def _EditTagFilter( self ):

-     with ClientGUITopLevelWindowsPanels.DialogEdit( self, 'edit tag filter' ) as dlg:
+     if self._only_show_blacklist:
+
+         title = 'edit blacklist'
+
+     else:
+
+         title = 'edit tag filter'
+
+     with ClientGUITopLevelWindowsPanels.DialogEdit( self, title ) as dlg:

          namespaces = HG.client_controller.network_engine.domain_manager.GetParserNamespaces()

-         panel = EditTagFilterPanel( dlg, self._tag_filter, prefer_blacklist = self._is_blacklist, namespaces = namespaces, message = self._message )
+         panel = EditTagFilterPanel( dlg, self._tag_filter, only_show_blacklist = self._only_show_blacklist, namespaces = namespaces, message = self._message )
|
||||
|
||||
dlg.SetPanel( panel )
|
||||
|
||||
|
@ -4291,7 +4322,7 @@ class TagFilterButton( ClientGUICommon.BetterButton ):
|
|||
|
||||
def _UpdateLabel( self ):
|
||||
|
||||
if self._is_blacklist:
|
||||
if self._only_show_blacklist:
|
||||
|
||||
tt = self._tag_filter.ToBlacklistString()
|
||||
|
||||
|
|
|
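The manage-tags hunks above wire a `CheckboxManagerCalls` to a `_FlipExpandParents` helper that negates a stored boolean, writes it back, and pushes it into the tag box. A minimal sketch of that flip pattern, using a hypothetical `SimpleOptions` stand-in for hydrus's `_new_options` object (the option name is taken from the diff; everything else here is illustrative):

```python
class SimpleOptions:
    # hypothetical stand-in for the client's boolean options store
    def __init__(self):
        self._booleans = {'add_parents_on_manage_tags': True}

    def GetBoolean(self, name):
        return self._booleans[name]

    def SetBoolean(self, name, value):
        self._booleans[name] = value

def flip_boolean(options, name):
    # read, negate, write back, and return the new value so callers
    # (e.g. the add-tag box) can be updated in one step
    value = not options.GetBoolean(name)
    options.SetBoolean(name, value)
    return value

options = SimpleOptions()
assert flip_boolean(options, 'add_parents_on_manage_tags') is False
assert flip_boolean(options, 'add_parents_on_manage_tags') is True
```

The lambda in the diff (`lambda: self._new_options.GetBoolean( ... )`) plays the "read current value" role, so the checkbox always reflects the stored option rather than a cached copy.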
@@ -1,5 +1,6 @@

import collections
import os
import random
import threading
import time
import traceback

@@ -1443,6 +1444,219 @@ class FileSeed( HydrusSerialisable.SerialisableBase ):

HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_FILE_SEED ] = FileSeed

class FileSeedCacheStatus( HydrusSerialisable.SerialisableBase ):

SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_FILE_SEED_CACHE_STATUS
SERIALISABLE_NAME = 'Import File Status Cache Status'
SERIALISABLE_VERSION = 1

def __init__( self ):

self._generation_time = HydrusData.GetNow()
self._statuses_to_counts = collections.Counter()
self._latest_added_time = 0

def _GetSerialisableInfo( self ):

serialisable_statuses_to_counts = list( self._statuses_to_counts.items() )

return ( self._generation_time, serialisable_statuses_to_counts, self._latest_added_time )

def _InitialiseFromSerialisableInfo( self, serialisable_info ):

( self._generation_time, serialisable_statuses_to_counts, self._latest_added_time ) = serialisable_info

self._statuses_to_counts = collections.Counter()

self._statuses_to_counts.update( dict( serialisable_statuses_to_counts ) )

def GetFileSeedCount( self, status: typing.Optional[ int ] = None ) -> int:

if status is None:

return sum( self._statuses_to_counts.values() )

else:

return self._statuses_to_counts[ status ]

def GetGenerationTime( self ) -> int:

return self._generation_time

def GetLatestAddedTime( self ) -> int:

return self._latest_added_time

def GetStatusText( self, simple = False ) -> str:

num_successful_and_new = self._statuses_to_counts[ CC.STATUS_SUCCESSFUL_AND_NEW ]
num_successful_but_redundant = self._statuses_to_counts[ CC.STATUS_SUCCESSFUL_BUT_REDUNDANT ]
num_ignored = self._statuses_to_counts[ CC.STATUS_VETOED ]
num_deleted = self._statuses_to_counts[ CC.STATUS_DELETED ]
num_failed = self._statuses_to_counts[ CC.STATUS_ERROR ]
num_skipped = self._statuses_to_counts[ CC.STATUS_SKIPPED ]
num_unknown = self._statuses_to_counts[ CC.STATUS_UNKNOWN ]

if simple:

total = sum( self._statuses_to_counts.values() )

total_processed = total - num_unknown

#

status_text = ''

if total > 0:

if num_unknown > 0:

status_text += HydrusData.ConvertValueRangeToPrettyString( total_processed, total )

else:

status_text += HydrusData.ToHumanInt( total_processed )

show_new_on_file_seed_short_summary = HG.client_controller.new_options.GetBoolean( 'show_new_on_file_seed_short_summary' )

if show_new_on_file_seed_short_summary and num_successful_and_new:

status_text += ' - {}N'.format( HydrusData.ToHumanInt( num_successful_and_new ) )

simple_status_strings = []

if num_ignored > 0:

simple_status_strings.append( '{}Ig'.format( HydrusData.ToHumanInt( num_ignored ) ) )

show_deleted_on_file_seed_short_summary = HG.client_controller.new_options.GetBoolean( 'show_deleted_on_file_seed_short_summary' )

if show_deleted_on_file_seed_short_summary and num_deleted > 0:

simple_status_strings.append( '{}D'.format( HydrusData.ToHumanInt( num_deleted ) ) )

if num_failed > 0:

simple_status_strings.append( '{}F'.format( HydrusData.ToHumanInt( num_failed ) ) )

if num_skipped > 0:

simple_status_strings.append( '{}S'.format( HydrusData.ToHumanInt( num_skipped ) ) )

if len( simple_status_strings ) > 0:

status_text += ' - {}'.format( ''.join( simple_status_strings ) )

else:

status_strings = []

num_successful = num_successful_and_new + num_successful_but_redundant

if num_successful > 0:

s = '{} successful'.format( HydrusData.ToHumanInt( num_successful ) )

if num_successful_and_new > 0:

if num_successful_but_redundant > 0:

s += ' ({} already in db)'.format( HydrusData.ToHumanInt( num_successful_but_redundant ) )

else:

s += ' (all already in db)'

status_strings.append( s )

if num_ignored > 0:

status_strings.append( '{} ignored'.format( HydrusData.ToHumanInt( num_ignored ) ) )

if num_deleted > 0:

status_strings.append( '{} previously deleted'.format( HydrusData.ToHumanInt( num_deleted ) ) )

if num_failed > 0:

status_strings.append( '{} failed'.format( HydrusData.ToHumanInt( num_failed ) ) )

if num_skipped > 0:

status_strings.append( '{} skipped'.format( HydrusData.ToHumanInt( num_skipped ) ) )

status_text = ', '.join( status_strings )

return status_text

def GetStatusesToCounts( self ) -> typing.Mapping[ int, int ]:

return self._statuses_to_counts

def GetValueRange( self ) -> typing.Tuple[ int, int ]:

total = sum( self._statuses_to_counts.values() )

num_unknown = self._statuses_to_counts[ CC.STATUS_UNKNOWN ]

total_processed = total - num_unknown

return ( total_processed, total )

def HasWorkToDo( self ):

( num_done, num_total ) = self.GetValueRange()

return num_done < num_total

def Merge( self, file_seed_cache_status: "FileSeedCacheStatus" ):

self._latest_added_time = max( self._latest_added_time, file_seed_cache_status.GetLatestAddedTime() )
self._statuses_to_counts.update( file_seed_cache_status.GetStatusesToCounts() )

def SetStatusesToCounts( self, statuses_to_counts: typing.Mapping[ int, int ] ):

self._statuses_to_counts = collections.Counter()

self._statuses_to_counts.update( statuses_to_counts )

def SetLatestAddedTime( self, latest_added_time: int ):

self._latest_added_time = latest_added_time

HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_FILE_SEED_CACHE_STATUS ] = FileSeedCacheStatus

class FileSeedCache( HydrusSerialisable.SerialisableBase ):

SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_FILE_SEED_CACHE

@@ -1461,8 +1675,7 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

self._file_seed_cache_key = HydrusData.GenerateKey()

self._status_cache = None
self._status_cache_generation_time = 0
self._status_cache = FileSeedCacheStatus()

self._status_dirty = True

@@ -1476,8 +1689,12 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

def _GenerateStatus( self ):

self._status_cache = GenerateStatusesToCountsStatus( self._GetStatusesToCounts() )
self._status_cache_generation_time = HydrusData.GetNow()
fscs = FileSeedCacheStatus()

fscs.SetLatestAddedTime( self._GetLatestAddedTime() )
fscs.SetStatusesToCounts( self._GetStatusesToCounts() )

self._status_cache = fscs

self._status_dirty = False

@@ -1494,6 +1711,33 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

def _GetLatestAddedTime( self ):

if len( self._file_seeds ) == 0:

latest_timestamp = 0

else:

latest_timestamp = max( ( file_seed.created for file_seed in self._file_seeds ) )

return latest_timestamp

def _GetNextFileSeed( self, status: int ) -> typing.Optional[ FileSeed ]:

for file_seed in self._file_seeds:

if file_seed.status == status:

return file_seed

return None

def _GetSerialisableInfo( self ):

return self._file_seeds.GetSerialisableTuple()

@@ -1912,12 +2156,13 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

self._GenerateStatus()

( status, simple_status, ( total_processed, total ) ) = self._status_cache
d[ 'status' ] = self._status_cache.GetStatusText()
d[ 'simple_status' ] = self._status_cache.GetStatusText( simple = True )

d[ 'status' ] = status
d[ 'simple_status' ] = status
d[ 'total_processed' ] = total_processed
d[ 'total_to_process' ] = total
( num_done, num_total ) = self._status_cache.GetValueRange()

d[ 'total_processed' ] = num_done
d[ 'total_to_process' ] = num_total

if not simple:

@@ -1943,6 +2188,35 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

return earliest_timestamp

def GetExampleFileSeed( self ):

with self._lock:

if len( self._file_seeds ) == 0:

return None

else:

example_seed = self._GetNextFileSeed( CC.STATUS_UNKNOWN )

if example_seed is None:

example_seed = random.choice( self._file_seeds[-10:] )

if example_seed.file_seed_type == FILE_SEED_TYPE_HDD:

return None

else:

return example_seed

def GetFileSeedCacheKey( self ):

return self._file_seed_cache_key

@@ -1999,21 +2273,6 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

return hashes

def GetLatestAddedTime( self ):

with self._lock:

if len( self._file_seeds ) == 0:

return 0

latest_timestamp = max( ( file_seed.created for file_seed in self._file_seeds ) )

return latest_timestamp

def GetLatestSourceTime( self ):

with self._lock:

@@ -2029,21 +2288,13 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

return latest_timestamp

def GetNextFileSeed( self, status ):
def GetNextFileSeed( self, status: int ):

with self._lock:

for file_seed in self._file_seeds:

if file_seed.status == status:

return file_seed

return self._GetNextFileSeed( status )

return None

def GetNumNewFilesSince( self, since: int ):

@@ -2113,27 +2364,6 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

def GetStatusGenerationTime( self ):

with self._lock:

if self._status_dirty:

return HydrusData.GetNow()

return self._status_cache_generation_time

def GetStatusesToCounts( self ):

with self._lock:

return self._GetStatusesToCounts()

def GetValueRange( self ):

with self._lock:

@@ -2143,9 +2373,7 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

self._GenerateStatus()

( status, simple_status, ( total_processed, total ) ) = self._status_cache

return ( total_processed, total )
return self._status_cache.GetValueRange()

@@ -2242,7 +2470,7 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

self.RemoveFileSeeds( file_seeds_to_delete )

def RetryFailures( self ):
def RetryFailed( self ):

with self._lock:

@@ -2281,143 +2509,20 @@ class FileSeedCache( HydrusSerialisable.SerialisableBase ):

self._GenerateStatus()

( status, simple_status, ( total_processed, total ) ) = self._status_cache

return total_processed < total
return self._status_cache.HasWorkToDo()

HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_FILE_SEED_CACHE ] = FileSeedCache

def GenerateFileSeedCacheStatus( file_seed_cache: FileSeedCache ):

statuses_to_counts = file_seed_cache.GetStatusesToCounts()

return GenerateStatusesToCountsStatus( statuses_to_counts )

def GenerateFileSeedCachesStatus( file_seed_caches: typing.Iterable[ FileSeedCache ] ):

statuses_to_counts = collections.Counter()
fscs = FileSeedCacheStatus()

for file_seed_cache in file_seed_caches:

statuses_to_counts.update( file_seed_cache.GetStatusesToCounts() )
fscs.Merge( file_seed_cache.GetStatus() )

return GenerateStatusesToCountsStatus( statuses_to_counts )

def GenerateStatusesToCountsStatus( statuses_to_counts: collections.Counter ):

num_successful_and_new = statuses_to_counts[ CC.STATUS_SUCCESSFUL_AND_NEW ]
num_successful_but_redundant = statuses_to_counts[ CC.STATUS_SUCCESSFUL_BUT_REDUNDANT ]
num_ignored = statuses_to_counts[ CC.STATUS_VETOED ]
num_deleted = statuses_to_counts[ CC.STATUS_DELETED ]
num_failed = statuses_to_counts[ CC.STATUS_ERROR ]
num_skipped = statuses_to_counts[ CC.STATUS_SKIPPED ]
num_unknown = statuses_to_counts[ CC.STATUS_UNKNOWN ]

status_strings = []

num_successful = num_successful_and_new + num_successful_but_redundant

if num_successful > 0:

s = HydrusData.ToHumanInt( num_successful ) + ' successful'

if num_successful_and_new > 0:

if num_successful_but_redundant > 0:

s += ' (' + HydrusData.ToHumanInt( num_successful_but_redundant ) + ' already in db)'

else:

s += ' (all already in db)'

status_strings.append( s )

if num_ignored > 0:

status_strings.append( HydrusData.ToHumanInt( num_ignored ) + ' ignored' )

if num_deleted > 0:

status_strings.append( HydrusData.ToHumanInt( num_deleted ) + ' previously deleted' )

if num_failed > 0:

status_strings.append( HydrusData.ToHumanInt( num_failed ) + ' failed' )

if num_skipped > 0:

status_strings.append( HydrusData.ToHumanInt( num_skipped ) + ' skipped' )

status = ', '.join( status_strings )

#

total = sum( statuses_to_counts.values() )

total_processed = total - num_unknown

#

simple_status = ''

if total > 0:

if num_unknown > 0:

simple_status += HydrusData.ConvertValueRangeToPrettyString( total_processed, total )

else:

simple_status += HydrusData.ToHumanInt( total_processed )

show_new_on_file_seed_short_summary = HG.client_controller.new_options.GetBoolean( 'show_new_on_file_seed_short_summary' )

if show_new_on_file_seed_short_summary and num_successful_and_new:

simple_status += ' - ' + HydrusData.ToHumanInt( num_successful_and_new ) + 'N'

simple_status_strings = []

if num_ignored > 0:

simple_status_strings.append( HydrusData.ToHumanInt( num_ignored ) + 'Ig' )

show_deleted_on_file_seed_short_summary = HG.client_controller.new_options.GetBoolean( 'show_deleted_on_file_seed_short_summary' )

if show_deleted_on_file_seed_short_summary and num_deleted > 0:

simple_status_strings.append( HydrusData.ToHumanInt( num_deleted ) + 'D' )

if num_failed > 0:

simple_status_strings.append( HydrusData.ToHumanInt( num_failed ) + 'F' )

if num_skipped > 0:

simple_status_strings.append( HydrusData.ToHumanInt( num_skipped ) + 'S' )

if len( simple_status_strings ) > 0:

simple_status += ' - ' + ''.join( simple_status_strings )

return ( status, simple_status, ( total_processed, total ) )
return fscs
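The new `FileSeedCacheStatus` above keeps a `collections.Counter` of status codes and derives a done/total value range and a merge operation from it. A minimal self-contained sketch of those Counter semantics (the `StatusSketch` class and the two status constants are illustrative stand-ins, not hydrus APIs):

```python
import collections

# hypothetical stand-ins for the CC.STATUS_* constants used in the diff
STATUS_UNKNOWN = 0
STATUS_SUCCESSFUL_AND_NEW = 1

class StatusSketch:
    # mirrors the shape of FileSeedCacheStatus: a Counter of status -> count
    def __init__(self):
        self.statuses_to_counts = collections.Counter()
        self.latest_added_time = 0

    def get_value_range(self):
        # "processed" is everything that is no longer in the unknown state
        total = sum(self.statuses_to_counts.values())
        total_processed = total - self.statuses_to_counts[STATUS_UNKNOWN]
        return (total_processed, total)

    def has_work_to_do(self):
        (num_done, num_total) = self.get_value_range()
        return num_done < num_total

    def merge(self, other):
        # Counter.update adds counts rather than overwriting them,
        # which is what makes merging per-cache statuses cheap
        self.latest_added_time = max(self.latest_added_time, other.latest_added_time)
        self.statuses_to_counts.update(other.statuses_to_counts)

a = StatusSketch()
a.statuses_to_counts.update({STATUS_UNKNOWN: 2, STATUS_SUCCESSFUL_AND_NEW: 3})
b = StatusSketch()
b.statuses_to_counts.update({STATUS_SUCCESSFUL_AND_NEW: 1})
a.merge(b)
assert a.get_value_range() == (4, 6)
assert a.has_work_to_do()
```

This additive merge is why `GenerateFileSeedCachesStatus` can fold many caches into one status object with a simple loop of `Merge` calls.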
@@ -715,7 +715,7 @@ class GalleryImport( HydrusSerialisable.SerialisableBase ):

with self._lock:

self._file_seed_cache.RetryFailures()
self._file_seed_cache.RetryFailed()

@@ -935,8 +935,7 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):

self._gallery_import_keys_to_gallery_imports = {}

self._status_dirty = True
self._status_cache = None
self._status_cache_generation_time = 0
self._status_cache = ClientImportFileSeeds.FileSeedCacheStatus()

self._last_time_imports_changed = HydrusData.GetNowPrecise()

@@ -1024,7 +1023,6 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):

self._status_cache = ClientImportFileSeeds.GenerateFileSeedCachesStatus( file_seed_caches )

self._status_dirty = False
self._status_cache_generation_time = HydrusData.GetNow()

def _RemoveGalleryImport( self, gallery_import_key ):

@@ -1285,7 +1283,7 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):

def GetTotalStatus( self ):
def GetTotalStatus( self ) -> ClientImportFileSeeds.FileSeedCacheStatus:

with self._lock:

@@ -1499,7 +1497,7 @@ class MultipleGalleryImport( HydrusSerialisable.SerialisableBase ):

file_seed_cache = gallery_import.GetFileSeedCache()

if file_seed_cache.GetStatusGenerationTime() > self._status_cache_generation_time: # has there been an update?
if file_seed_cache.GetStatus().GetGenerationTime() > self._status_cache.GetGenerationTime(): # has there been an update?

self._SetDirty()
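`MultipleGalleryImport` above uses the same dirty-flag pattern as `FileSeedCache`: the status object is cached and only regenerated when `_status_dirty` is set, e.g. after a child cache reports a newer generation time. A minimal sketch of that pattern (the `CachedStatus` class and its `_generations` counter are illustrative only):

```python
class CachedStatus:
    # caches an expensive-to-build status and rebuilds it lazily on demand
    def __init__(self):
        self._status_cache = None
        self._status_dirty = True
        self._generations = 0  # instrumentation for this sketch only

    def _generate_status(self):
        # stands in for GenerateFileSeedCachesStatus / _GenerateStatus
        self._generations += 1
        self._status_cache = 'status #{}'.format(self._generations)
        self._status_dirty = False

    def set_dirty(self):
        self._status_dirty = True

    def get_status(self):
        if self._status_dirty:
            self._generate_status()
        return self._status_cache

c = CachedStatus()
assert c.get_status() == 'status #1'
assert c.get_status() == 'status #1'  # served from cache, not regenerated
c.set_dirty()
assert c.get_status() == 'status #2'
```

Comparing generation times, as the last hunk does, is what lets the parent notice a child cache changed without recomputing anything until the status is actually requested.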
@@ -1,9 +1,11 @@

import collections
import itertools
import os
import random
import threading
import time
import traceback
import typing

from hydrus.client import ClientConstants as CC
from hydrus.client import ClientParsing

@@ -557,7 +559,6 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):

self._gallery_seed_log_key = HydrusData.GenerateKey()

self._status_cache = None
self._status_cache_generation_time = 0

self._status_dirty = True

@@ -574,11 +575,23 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):

statuses_to_counts = self._GetStatusesToCounts()

self._status_cache = GenerateGallerySeedLogStatus( statuses_to_counts )
self._status_cache_generation_time = HydrusData.GetNow()

self._status_dirty = False

def _GetNextGallerySeed( self, status: int ) -> typing.Optional[ GallerySeed ]:

for gallery_seed in self._gallery_seeds:

if gallery_seed.status == status:

return gallery_seed

return None

def _GetStatusesToCounts( self ):

statuses_to_counts = collections.Counter()

@@ -775,21 +788,27 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):

self.NotifyGallerySeedsUpdated( ( gallery_seed, ) )

def GetNextGallerySeed( self, status ):
def GetExampleGallerySeed( self ):

with self._lock:

for gallery_seed in self._gallery_seeds:
if len( self._gallery_seeds ) == 0:

if gallery_seed.status == status:
return None

else:

example_seed = self._GetNextGallerySeed( CC.STATUS_UNKNOWN )

if example_seed is None:

return gallery_seed
example_seed = random.choice( self._gallery_seeds[-10:] )

return example_seed

return None

def GetAPIInfoDict( self, simple ):

@@ -863,6 +882,14 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):

def GetNextGallerySeed( self, status ):

with self._lock:

return self._GetNextGallerySeed( status )

def GetStatus( self ):

with self._lock:

@@ -876,19 +903,6 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):

def GetStatusGenerationTime( self ):

with self._lock:

if self._status_dirty:

return HydrusData.GetNow()

return self._status_cache_generation_time

def GetStatusesToCounts( self ):

with self._lock:

@@ -985,7 +999,7 @@ class GallerySeedLog( HydrusSerialisable.SerialisableBase ):

self.NotifyGallerySeedsUpdated( new_gallery_seeds )

def RetryFailures( self ):
def RetryFailed( self ):

with self._lock:

@@ -942,14 +942,6 @@ class URLsImport( HydrusSerialisable.SerialisableBase ):

def GetStatus( self ):

with self._lock:

return ( self._file_seed_cache.GetStatus(), self._paused )

def GetValueRange( self ):

with self._lock:
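Both `FileSeedCacheStatus` above and the seed logs serialise their `Counter` by flattening it to a list of `(status, count)` pairs in `_GetSerialisableInfo` and rebuilding it in `_InitialiseFromSerialisableInfo`. A sketch of that round-trip in isolation (the two helper functions are named for this example only):

```python
import collections

# A Counter keyed by int status codes is not directly JSON-friendly,
# so the serialisable form is a plain list of (status, count) pairs.
def counter_to_serialisable(counter):
    return list(counter.items())

def counter_from_serialisable(pairs):
    # rebuild via dict() then update(), matching the pattern in the diff
    counter = collections.Counter()
    counter.update(dict(pairs))
    return counter

original = collections.Counter({1: 5, 2: 3})
restored = counter_from_serialisable(counter_to_serialisable(original))
assert restored == original
```

Going through `dict()` also deduplicates any repeated status keys in the stored pairs, keeping the last count for each, which is the behaviour the loading code inherits.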
@@ -1,3 +1,5 @@

import typing

from hydrus.core import HydrusData
from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusSerialisable

@@ -6,178 +8,44 @@ from hydrus.client.importing import ClientImporting

from hydrus.client.importing import ClientImportFileSeeds
from hydrus.client.importing import ClientImportGallerySeeds
from hydrus.client.importing import ClientImportOptions
from hydrus.client.networking import ClientNetworking
from hydrus.client.networking import ClientNetworkingBandwidth
from hydrus.client.networking import ClientNetworkingContexts
from hydrus.client.networking import ClientNetworkingDomain
from hydrus.client.networking import ClientNetworkingJobs

class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):
def GenerateSubQueryName() -> str:

SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY
SERIALISABLE_NAME = 'Subscription Query'
SERIALISABLE_VERSION = 3
return HydrusData.GenerateKey().hex()

def __init__( self, query = 'query text' ):
class SubscriptionQueryLogContainer( HydrusSerialisable.SerialisableBaseNamed ):

SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LOG_CONTAINER
SERIALISABLE_NAME = 'Subscription Query Container'
SERIALISABLE_VERSION = 1

def __init__( self, name ):

HydrusSerialisable.SerialisableBase.__init__( self )
HydrusSerialisable.SerialisableBaseNamed.__init__( self, name )

self._query = query
self._display_name = None
self._check_now = False
self._last_check_time = 0
self._next_check_time = 0
self._paused = False
self._status = ClientImporting.CHECKER_STATUS_OK
self._gallery_seed_log = ClientImportGallerySeeds.GallerySeedLog()
self._file_seed_cache = ClientImportFileSeeds.FileSeedCache()
self._tag_import_options = ClientImportOptions.TagImportOptions()

def _GetExampleNetworkContexts( self, subscription_name ):

file_seed = self._file_seed_cache.GetNextFileSeed( CC.STATUS_UNKNOWN )

subscription_key = self.GetNetworkJobSubscriptionKey( subscription_name )

if file_seed is None:

return [ ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_SUBSCRIPTION, subscription_key ), ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT ]

url = file_seed.file_seed_data

try: # if the url is borked for some reason

example_nj = ClientNetworkingJobs.NetworkJobSubscription( subscription_key, 'GET', url )
example_network_contexts = example_nj.GetNetworkContexts()

except:

return [ ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_SUBSCRIPTION, subscription_key ), ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT ]

return example_network_contexts

def _GetSerialisableInfo( self ):

serialisable_gallery_seed_log = self._gallery_seed_log.GetSerialisableTuple()
serialisable_file_seed_cache = self._file_seed_cache.GetSerialisableTuple()
serialisable_tag_import_options = self._tag_import_options.GetSerialisableTuple()

return ( self._query, self._display_name, self._check_now, self._last_check_time, self._next_check_time, self._paused, self._status, serialisable_gallery_seed_log, serialisable_file_seed_cache, serialisable_tag_import_options )
return ( serialisable_gallery_seed_log, serialisable_file_seed_cache )

def _InitialiseFromSerialisableInfo( self, serialisable_info ):

( self._query, self._display_name, self._check_now, self._last_check_time, self._next_check_time, self._paused, self._status, serialisable_gallery_seed_log, serialisable_file_seed_cache, serialisable_tag_import_options ) = serialisable_info
( serialisable_gallery_seed_log, serialisable_file_seed_cache ) = serialisable_info

self._gallery_seed_log = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_gallery_seed_log )
self._file_seed_cache = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_file_seed_cache )
self._tag_import_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_tag_import_options )

def _UpdateSerialisableInfo( self, version, old_serialisable_info ):

if version == 1:

( query, check_now, last_check_time, next_check_time, paused, status, serialisable_file_seed_cache ) = old_serialisable_info

gallery_seed_log = ClientImportGallerySeeds.GallerySeedLog()

serialisable_gallery_seed_log = gallery_seed_log.GetSerialisableTuple()

new_serialisable_info = ( query, check_now, last_check_time, next_check_time, paused, status, serialisable_gallery_seed_log, serialisable_file_seed_cache )

return ( 2, new_serialisable_info )

if version == 2:

( query, check_now, last_check_time, next_check_time, paused, status, serialisable_gallery_seed_log, serialisable_file_seed_cache ) = old_serialisable_info

display_name = None
tag_import_options = ClientImportOptions.TagImportOptions()

serialisable_tag_import_options = tag_import_options.GetSerialisableTuple()

new_serialisable_info = ( query, display_name, check_now, last_check_time, next_check_time, paused, status, serialisable_gallery_seed_log, serialisable_file_seed_cache, serialisable_tag_import_options )

return ( 3, new_serialisable_info )

def BandwidthOK( self, subscription_name ):

example_network_contexts = self._GetExampleNetworkContexts( subscription_name )

threshold = 90

bandwidth_ok = HG.client_controller.network_engine.bandwidth_manager.CanDoWork( example_network_contexts, threshold = threshold )

if HG.subscription_report_mode:

HydrusData.ShowText( 'Query "' + self.GetHumanName() + '" bandwidth/domain test. Bandwidth ok: {}'.format( bandwidth_ok ) )

return bandwidth_ok

def CanCheckNow( self ):

return not self._check_now

def CanRetryFailed( self ):

return self._file_seed_cache.GetFileSeedCount( CC.STATUS_ERROR ) > 0

def CanRetryIgnored( self ):

return self._file_seed_cache.GetFileSeedCount( CC.STATUS_VETOED ) > 0

def CheckNow( self ):

self._check_now = True
self._paused = False

self._next_check_time = 0
self._status = ClientImporting.CHECKER_STATUS_OK

def DomainOK( self ):

file_seed = self._file_seed_cache.GetNextFileSeed( CC.STATUS_UNKNOWN )

if file_seed is None:

return True

url = file_seed.file_seed_data

domain_ok = HG.client_controller.network_engine.domain_manager.DomainOK( url )

if HG.subscription_report_mode:

HydrusData.ShowText( 'Query "' + self.GetHumanName() + '" domain test. Domain ok: {}'.format( domain_ok ) )

return domain_ok

def GetBandwidthWaitingEstimate( self, subscription_name ):

example_network_contexts = self._GetExampleNetworkContexts( subscription_name )

( estimate, bandwidth_network_context ) = HG.client_controller.network_engine.bandwidth_manager.GetWaitingEstimateAndContext( example_network_contexts )

return estimate

def GetDisplayName( self ):

return self._display_name

def GetFileSeedCache( self ):

@@ -190,11 +58,147 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

return self._gallery_seed_log

def GetHumanName( self ):
def SetFileSeedCache( self, file_seed_cache: ClientImportFileSeeds.FileSeedCache ):

self._file_seed_cache = file_seed_cache

def SetGallerySeedLog( self, gallery_seed_log: ClientImportGallerySeeds.GallerySeedLog ):

self._gallery_seed_log = gallery_seed_log

HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LOG_CONTAINER ] = SubscriptionQueryLogContainer

LOG_CONTAINER_SYNCED = 0
LOG_CONTAINER_UNSYNCED = 1
LOG_CONTAINER_MISSING = 2

class SubscriptionQueryHeader( HydrusSerialisable.SerialisableBase ):

SERIALISABLE_TYPE = HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_HEADER
SERIALISABLE_NAME = 'Subscription Query Summary'
SERIALISABLE_VERSION = 1

def __init__( self ):

HydrusSerialisable.SerialisableBase.__init__( self )

self._query_log_container_name = GenerateSubQueryName()
self._query_text = 'query'
self._display_name = None
self._check_now = False
self._last_check_time = 0
self._next_check_time = 0
self._paused = False
self._checker_status = ClientImporting.CHECKER_STATUS_OK
|
||||
self._query_log_container_status = LOG_CONTAINER_UNSYNCED
|
||||
self._file_seed_cache_status = ClientImportFileSeeds.FileSeedCacheStatus()
|
||||
self._tag_import_options = ClientImportOptions.TagImportOptions()
|
||||
self._raw_file_velocity = ( 0, 1 )
|
||||
self._pretty_file_velocity = 'unknown'
|
||||
self._example_file_seed = None
|
||||
self._example_gallery_seed = None
|
||||
|
||||
# a status cache, so we know number complete, unknown, ignored, etc...
|
||||
# prob should have a new serialisable object for this mate
|
||||
|
||||
|
||||
    def _DomainOK( self, domain_manager: ClientNetworkingDomain.NetworkDomainManager, example_url: typing.Optional[ str ] ):
        
        if example_url is None:
            
            domain_ok = True
            
        else:
            
            domain_ok = domain_manager.DomainOK( example_url )
            
        
        if HG.subscription_report_mode:
            
            HydrusData.ShowText( 'Query "{}" domain test. Domain ok: {}'.format( self._GetHumanName(), domain_ok ) )
            
        
        return domain_ok
    
    def _GenerateNetworkJobFactory( self, subscription_name: str ):
        
        subscription_key = self._GenerateNetworkJobSubscriptionKey( subscription_name )
        
        def network_job_factory( *args, **kwargs ):
            
            network_job = ClientNetworkingJobs.NetworkJobSubscription( subscription_key, *args, **kwargs )
            
            network_job.OverrideBandwidth( 30 )
            
            return network_job
            
        
        return network_job_factory
    
    def _GenerateNetworkJobSubscriptionKey( self, subscription_name: str ):
        
        return '{}: {}'.format( subscription_name, self._GetHumanName() )
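The `_GenerateNetworkJobFactory` method above uses a closure to stamp every network job it builds with the subscription's key, so downstream bandwidth accounting can attribute all traffic to the right subscription. A minimal, self-contained sketch of the pattern (the `FakeJob` class and its field names are illustrative stand-ins, not hydrus's real `NetworkJobSubscription`):

```python
# Sketch of a closure-based job factory: the factory captures the subscription
# key once, and every job it later builds carries that key plus a bandwidth
# override, mirroring the structure of _GenerateNetworkJobFactory above.

class FakeJob:
    
    def __init__( self, subscription_key, method, url ):
        
        self.subscription_key = subscription_key
        self.method = method
        self.url = url
        self.bandwidth_override_ms = None
        
    
    def OverrideBandwidth( self, delay_ms ):
        
        self.bandwidth_override_ms = delay_ms
        
    

def generate_job_factory( subscription_name, query_name ):
    
    # same "subscription: query" key shape as _GenerateNetworkJobSubscriptionKey
    subscription_key = '{}: {}'.format( subscription_name, query_name )
    
    def job_factory( *args, **kwargs ):
        
        job = FakeJob( subscription_key, *args, **kwargs )
        
        job.OverrideBandwidth( 30 )
        
        return job
        
    
    return job_factory

factory = generate_job_factory( 'my sub', 'blue_eyes' )
job = factory( 'GET', 'https://example.com/gallery' )
```

The caller only ever sees a zero-argument-configuration factory; the subscription identity travels inside the closure.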
    def _GetExampleFileURL( self ):
        
        if self._example_file_seed is None or self._example_file_seed.file_seed_type == ClientImportFileSeeds.FILE_SEED_TYPE_HDD:
            
            example_url = None
            
        else:
            
            example_url = self._example_file_seed.file_seed_data
            
        
        return example_url
    
    def _GetExampleGalleryURL( self ):
        
        if self._example_gallery_seed is None:
            
            example_url = None
            
        else:
            
            example_url = self._example_gallery_seed.url
            
        
        return example_url
    
    def _GetExampleNetworkContexts( self, example_url: typing.Optional[ str ], subscription_name: str ):
        
        subscription_key = self._GenerateNetworkJobSubscriptionKey( subscription_name )
        
        if example_url is None:
            
            return [ ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_SUBSCRIPTION, subscription_key ), ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT ]
            
        
        try: # if the url is borked for some reason
            
            example_nj = ClientNetworkingJobs.NetworkJobSubscription( subscription_key, 'GET', example_url )
            example_network_contexts = example_nj.GetNetworkContexts()
            
        except:
            
            return [ ClientNetworkingContexts.NetworkContext( CC.NETWORK_CONTEXT_SUBSCRIPTION, subscription_key ), ClientNetworkingContexts.GLOBAL_NETWORK_CONTEXT ]
            
        
        return example_network_contexts
    
    def _GetHumanName( self ) -> str:
        
        if self._display_name is None:
            
-            return self._query
+            return self._query_text
            
        else:
@@ -202,14 +206,250 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

-    def GetLastChecked( self ):
+    def _GetSerialisableInfo( self ):
        
        serialisable_file_seed_cache_status = self._file_seed_cache_status.GetSerialisableTuple()
        serialisable_tag_import_options = self._tag_import_options.GetSerialisableTuple()
        
        serialisable_example_file_seed = HydrusSerialisable.GetNoneableSerialisableTuple( self._example_file_seed )
        serialisable_example_gallery_seed = HydrusSerialisable.GetNoneableSerialisableTuple( self._example_gallery_seed )
        
        return (
            self._query_log_container_name,
            self._query_text,
            self._display_name,
            self._check_now,
            self._last_check_time,
            self._next_check_time,
            self._paused,
            self._checker_status,
            self._query_log_container_status,
            serialisable_file_seed_cache_status,
            serialisable_tag_import_options,
            self._raw_file_velocity,
            self._pretty_file_velocity,
            serialisable_example_file_seed,
            serialisable_example_gallery_seed
        )
    
    def _InitialiseFromSerialisableInfo( self, serialisable_info ):
        
        (
            self._query_log_container_name,
            self._query_text,
            self._display_name,
            self._check_now,
            self._last_check_time,
            self._next_check_time,
            self._paused,
            self._checker_status,
            self._query_log_container_status,
            serialisable_file_seed_cache_status,
            serialisable_tag_import_options,
            self._raw_file_velocity,
            self._pretty_file_velocity,
            serialisable_example_file_seed,
            serialisable_example_gallery_seed
        ) = serialisable_info
        
        self._file_seed_cache_status = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_file_seed_cache_status )
        self._tag_import_options = HydrusSerialisable.CreateFromSerialisableTuple( serialisable_tag_import_options )
        
        self._example_file_seed = HydrusSerialisable.CreateFromNoneableSerialisableTuple( serialisable_example_file_seed )
        self._example_gallery_seed = HydrusSerialisable.CreateFromNoneableSerialisableTuple( serialisable_example_gallery_seed )
    def CanCheckNow( self ):
        
        return not self._check_now
    
    def CanRetryFailed( self ):
        
        return self._file_seed_cache_status.GetFileSeedCount( CC.STATUS_ERROR ) > 0
    
    def CanRetryIgnored( self ):
        
        return self._file_seed_cache_status.GetFileSeedCount( CC.STATUS_VETOED ) > 0
    
    def CheckNow( self ):
        
        self._check_now = True
        self._paused = False
        
        self._next_check_time = 0
        self._checker_status = ClientImporting.CHECKER_STATUS_OK
    
    def FileBandwidthOK( self, bandwidth_manager: ClientNetworkingBandwidth.NetworkBandwidthManager, subscription_name: str ):
        
        example_url = self._GetExampleFileURL()
        
        example_network_contexts = self._GetExampleNetworkContexts( example_url, subscription_name )
        
        threshold = 90
        
        bandwidth_ok = bandwidth_manager.CanDoWork( example_network_contexts, threshold = threshold )
        
        if HG.subscription_report_mode:
            
            HydrusData.ShowText( 'Query "' + self._GetHumanName() + '" bandwidth/domain test. Bandwidth ok: {}'.format( bandwidth_ok ) )
            
        
        return bandwidth_ok
    
    def FileDomainOK( self, domain_manager: ClientNetworkingDomain.NetworkDomainManager ):
        
        example_url = self._GetExampleFileURL()
        
        return self._DomainOK( domain_manager, example_url )
    
    def FileLoginOK( self, network_engine: ClientNetworking.NetworkEngine, subscription_name: str ) -> typing.Tuple[ bool, str ]:
        
        reason = 'login looks good!'
        
        if self._example_file_seed is None:
            
            result = True
            
        else:
            
            nj = self._example_file_seed.GetExampleNetworkJob( self._GenerateNetworkJobFactory( subscription_name ) )
            
            nj.engine = network_engine
            
            if nj.NeedsLogin():
                
                try:
                    
                    nj.CheckCanLogin()
                    
                    result = True
                    
                except Exception as e:
                    
                    result = False
                    reason = str( e )
                    
                
            else:
                
                result = True
                
            
        
        if HG.subscription_report_mode:
            
            HydrusData.ShowText( 'Query "{}" pre-work file login test. Login ok: {}. {}'.format( self._GetHumanName(), str( result ), reason ) )
            
        
        return ( result, reason )
    def GalleryDomainOK( self, domain_manager: ClientNetworkingDomain.NetworkDomainManager ):
        
        example_url = self._GetExampleGalleryURL()
        
        return self._DomainOK( domain_manager, example_url )
    
    def GalleryLoginOK( self, network_engine: ClientNetworking.NetworkEngine, subscription_name: str ) -> typing.Tuple[ bool, str ]:
        
        reason = 'login looks good!'
        
        if self._example_gallery_seed is None:
            
            result = True
            
        else:
            
            nj = self._example_gallery_seed.GetExampleNetworkJob( self._GenerateNetworkJobFactory( subscription_name ) )
            
            nj.engine = network_engine
            
            if nj.NeedsLogin():
                
                try:
                    
                    nj.CheckCanLogin()
                    
                    result = True
                    
                except Exception as e:
                    
                    result = False
                    reason = str( e )
                    
                
            else:
                
                result = True
                
            
        
        if HG.subscription_report_mode:
            
            HydrusData.ShowText( 'Query "{}" pre-work sync login test. Login ok: {}. {}'.format( self._GetHumanName(), str( result ), reason ) )
            
        
        return ( result, reason )
    
    def GenerateNetworkJobFactory( self, subscription_name: str ):
        
        return self._GenerateNetworkJobFactory( subscription_name )
    
    def GetBandwidthWaitingEstimate( self, bandwidth_manager: ClientNetworkingBandwidth.NetworkBandwidthManager, subscription_name: str ):
        
        example_url = self._GetExampleFileURL()
        
        example_network_contexts = self._GetExampleNetworkContexts( example_url, subscription_name )
        
        ( estimate, bandwidth_network_context ) = bandwidth_manager.GetWaitingEstimateAndContext( example_network_contexts )
        
        return estimate
    
    def GetCheckerStatus( self ):
        
        return self._checker_status
    
    def GetDisplayName( self ):
        
        return self._display_name
    
    def GetHumanName( self ):
        
        return self._GetHumanName()
    
    def GetFileSeedCacheStatus( self ):
        
        return self._file_seed_cache_status
    
    def GetFileVelocityInfo( self ):
        
        return ( self._raw_file_velocity, self._pretty_file_velocity )
    
    def GetLastCheckTime( self ):
        
        return self._last_check_time
    
    def GetLatestAddedTime( self ):
        
-        return self._file_seed_cache.GetLatestAddedTime()
+        return self._file_seed_cache_status.GetLatestAddedTime()
    
    def GetNextCheckStatusString( self ):
@@ -218,7 +458,7 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

            return 'checking on dialog ok'
            
-        elif self._status == ClientImporting.CHECKER_STATUS_DEAD:
+        elif self._checker_status == ClientImporting.CHECKER_STATUS_DEAD:
            
            return 'dead, so not checking'

@@ -242,20 +482,32 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

-    def GetNextWorkTime( self, subscription_name ):
+    def GetNextCheckTime( self ):
        
-        if self.IsPaused():
+        return self._next_check_time
    
+    def GetNextWorkTime( self, bandwidth_manager: ClientNetworkingBandwidth.NetworkBandwidthManager, subscription_name: str ):
        
+        if not self.IsExpectingToWorkInFuture():
            
            return None
            
        
        work_times = set()
        
+        if self._query_log_container_status == LOG_CONTAINER_UNSYNCED:
            
+            work_times.add( 0 )
            
        
+        work_times.add( self._next_check_time )
        
        if self.HasFileWorkToDo():
            
            try:
                
-                file_bandwidth_estimate = self.GetBandwidthWaitingEstimate( subscription_name )
+                file_bandwidth_estimate = self.GetBandwidthWaitingEstimate( bandwidth_manager, subscription_name )
                
            except:

@@ -276,11 +528,6 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

-        if not self.IsDead():
            
-            work_times.add( self._next_check_time )
            
        
        if len( work_times ) == 0:
            
            return None
@@ -289,19 +536,19 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

        return min( work_times )
    
-    def GetNumURLsAndFailed( self ):
+    def GetQueryLogContainerName( self ):
        
-        return ( self._file_seed_cache.GetFileSeedCount( CC.STATUS_UNKNOWN ), len( self._file_seed_cache ), self._file_seed_cache.GetFileSeedCount( CC.STATUS_ERROR ) )
+        return self._query_log_container_name
    
-    def GetNetworkJobSubscriptionKey( self, subscription_name ):
+    def GetQueryLogContainerStatus( self ):
        
-        return subscription_name + ': ' + self.GetHumanName()
+        return self._query_log_container_status
    
    def GetQueryText( self ):
        
-        return self._query
+        return self._query_text
    
    def GetTagImportOptions( self ):

@@ -311,19 +558,34 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

    def HasFileWorkToDo( self ):
        
-        file_seed = self._file_seed_cache.GetNextFileSeed( CC.STATUS_UNKNOWN )
+        result = self._file_seed_cache_status.HasWorkToDo()
        
        if HG.subscription_report_mode:
            
-            HydrusData.ShowText( 'Query "' + self._query + '" HasFileWorkToDo test. Next import is ' + repr( file_seed ) + '.' )
+            HydrusData.ShowText( 'Query "{}" HasFileWorkToDo test. Result is {}.'.format( self._query_text, result ) )
            
        
-        return file_seed is not None
+        return result
    
+    def IsCheckingNow( self ):
        
+        return self._check_now
    
    def IsDead( self ):
        
-        return self._status == ClientImporting.CHECKER_STATUS_DEAD
+        return self._checker_status == ClientImporting.CHECKER_STATUS_DEAD
    
+    def IsExpectingToWorkInFuture( self ):
        
+        if self.IsPaused() or self.IsDead() or not self.IsLogContainerOK():
            
+            return False
            
        
+        return True
    
    def IsInitialSync( self ):
@@ -331,6 +593,11 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

        return self._last_check_time == 0
    
+    def IsLogContainerOK( self ):
        
+        return self._query_log_container_status != LOG_CONTAINER_MISSING
    
    def IsPaused( self ):
        
        return self._paused

@@ -340,10 +607,10 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

        if HG.subscription_report_mode:
            
-            HydrusData.ShowText( 'Query "' + self._query + '" IsSyncDue test. Paused/dead status is {}/{}, check time due is {}, and check_now is {}.'.format( self._paused, self.IsDead(), HydrusData.TimeHasPassed( self._next_check_time ), self._check_now ) )
+            HydrusData.ShowText( 'Query "' + self._query_text + '" IsSyncDue test. Paused/dead/container status is {}/{}/{}, check time due is {}, and check_now is {}.'.format( self._paused, self.IsDead(), self.IsLogContainerOK(), HydrusData.TimeHasPassed( self._next_check_time ), self._check_now ) )
            
        
-        if self._paused or self.IsDead():
+        if not self.IsExpectingToWorkInFuture():
            
            return False

@@ -356,7 +623,7 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

        self._paused = not self._paused
    
-    def RegisterSyncComplete( self, checker_options: ClientImportOptions.CheckerOptions ):
+    def RegisterSyncComplete( self, checker_options: ClientImportOptions.CheckerOptions, query_log_container: SubscriptionQueryLogContainer ):
        
        self._last_check_time = HydrusData.GetNow()
@@ -366,77 +633,109 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

        compact_before_this_time = self._last_check_time - death_period
        
-        if self._gallery_seed_log.CanCompact( compact_before_this_time ):
+        gallery_seed_log = query_log_container.GetGallerySeedLog()
        
+        if gallery_seed_log.CanCompact( compact_before_this_time ):
            
-            self._gallery_seed_log.Compact( compact_before_this_time )
+            gallery_seed_log.Compact( compact_before_this_time )
            
        
-        if self._file_seed_cache.CanCompact( compact_before_this_time ):
+        file_seed_cache = query_log_container.GetFileSeedCache()
        
+        if file_seed_cache.CanCompact( compact_before_this_time ):
            
-            self._file_seed_cache.Compact( compact_before_this_time )
+            file_seed_cache.Compact( compact_before_this_time )
            
        
+        self.SyncToQueryLogContainer( checker_options, query_log_container )
    
-    def Reset( self ):
+    def Reset( self, query_log_container: SubscriptionQueryLogContainer ):
        
        self._last_check_time = 0
        self._next_check_time = 0
-        self._status = ClientImporting.CHECKER_STATUS_OK
+        self._checker_status = ClientImporting.CHECKER_STATUS_OK
        self._paused = False
        
-        self._file_seed_cache = ClientImportFileSeeds.FileSeedCache()
+        file_seed_cache = ClientImportFileSeeds.FileSeedCache()
        
+        query_log_container.SetFileSeedCache( file_seed_cache )
        
+        self.UpdateFileStatus( query_log_container )
    
-    def RetryFailures( self ):
        
-        self._file_seed_cache.RetryFailures()
    
-    def RetryIgnored( self ):
        
-        self._file_seed_cache.RetryIgnored()
    
-    def SetCheckNow( self, check_now ):
+    def SetCheckNow( self, check_now: bool ):
        
        self._check_now = check_now
    
+    def SetCheckerStatus( self, checker_status: int ):
        
+        self._checker_status = checker_status
    
    def SetDisplayName( self, display_name ):
        
        self._display_name = display_name
    
-    def SetPaused( self, paused ):
+    def SetLastCheckTime( self, last_check_time: int ):
        
+        self._last_check_time = last_check_time
    
+    def SetNextCheckTime( self, next_check_time: int ):
        
+        self._next_check_time = next_check_time
    
+    def SetPaused( self, paused: bool ):
        
        self._paused = paused
    
-    def SetQueryAndSeeds( self, query, file_seed_cache, gallery_seed_log ):
+    def SetQueryLogContainerStatus( self, log_container_status: int ):
        
-        self._query = query
-        self._file_seed_cache = file_seed_cache
-        self._gallery_seed_log = gallery_seed_log
+        self._query_log_container_status = log_container_status
        
+        if self._query_log_container_status == LOG_CONTAINER_UNSYNCED:
            
+            self._raw_file_velocity = ( 0, 1 )
+            self._pretty_file_velocity = 'unknown'
            
        
    
-    def SetTagImportOptions( self, tag_import_options ):
+    def SetQueryText( self, query_text: str ):
        
+        self._query_text = query_text
    
+    def SetTagImportOptions( self, tag_import_options: ClientImportOptions.TagImportOptions ):
        
        self._tag_import_options = tag_import_options
    
-    def UpdateNextCheckTime( self, checker_options: ClientImportOptions.CheckerOptions ):
+    def SyncToQueryLogContainer( self, checker_options: ClientImportOptions.CheckerOptions, query_log_container: SubscriptionQueryLogContainer ):
        
+        gallery_seed_log = query_log_container.GetGallerySeedLog()
        
+        self._example_gallery_seed = gallery_seed_log.GetExampleGallerySeed()
        
+        self.UpdateFileStatus( query_log_container )
        
+        file_seed_cache = query_log_container.GetFileSeedCache()
        
        if self._check_now:
            
            self._next_check_time = 0
            
-            self._status = ClientImporting.CHECKER_STATUS_OK
+            self._checker_status = ClientImporting.CHECKER_STATUS_OK
            
        else:
            
-            if checker_options.IsDead( self._file_seed_cache, self._last_check_time ):
+            if checker_options.IsDead( file_seed_cache, self._last_check_time ):
                
-                self._status = ClientImporting.CHECKER_STATUS_DEAD
+                self._checker_status = ClientImporting.CHECKER_STATUS_DEAD
                
                if not self.HasFileWorkToDo():
@@ -446,13 +745,38 @@ class SubscriptionQuery( HydrusSerialisable.SerialisableBase ):

        last_next_check_time = self._next_check_time
        
-        self._next_check_time = checker_options.GetNextCheckTime( self._file_seed_cache, self._last_check_time, last_next_check_time )
+        self._next_check_time = checker_options.GetNextCheckTime( file_seed_cache, self._last_check_time, last_next_check_time )
        
    
-    def ToTuple( self ):
+        self._raw_file_velocity = checker_options.GetRawCurrentVelocity( file_seed_cache, self._last_check_time )
+        self._pretty_file_velocity = checker_options.GetPrettyCurrentVelocity( file_seed_cache, self._last_check_time, no_prefix = True )
        
-        return ( self._query, self._check_now, self._last_check_time, self._next_check_time, self._paused, self._status )
+        self._query_log_container_status = LOG_CONTAINER_SYNCED
    
-HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY ] = SubscriptionQuery
+    def UpdateFileStatus( self, query_log_container: SubscriptionQueryLogContainer ):
        
+        file_seed_cache = query_log_container.GetFileSeedCache()
        
+        self._file_seed_cache_status = file_seed_cache.GetStatus()
+        self._example_file_seed = file_seed_cache.GetExampleFileSeed()
    
+    def WantsToResyncWithLogContainer( self ):
        
+        return self._query_log_container_status == LOG_CONTAINER_UNSYNCED
    
+HydrusSerialisable.SERIALISABLE_TYPES_TO_OBJECT_TYPES[ HydrusSerialisable.SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_HEADER ] = SubscriptionQueryHeader

+def GenerateQueryHeadersStatus( query_headers: typing.Iterable[ SubscriptionQueryHeader ] ):
    
+    fscs = ClientImportFileSeeds.FileSeedCacheStatus()
    
+    for query_header in query_headers:
        
+        fscs.Merge( query_header.GetFileSeedCacheStatus() )
        
    
+    return fscs
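The new `GenerateQueryHeadersStatus` helper folds every query's cached file status into one subscription-level summary by repeated `Merge` calls. A simplified, self-contained sketch of that fold (this `SimpleStatus` is an illustrative stand-in; the real `FileSeedCacheStatus` tracks more than raw per-state counts):

```python
import collections

# Stand-in for FileSeedCacheStatus: a counter of file states per query.
class SimpleStatus:
    
    def __init__( self, counts = None ):
        
        self.counts = collections.Counter( counts or {} )
        
    
    def Merge( self, other ):
        
        # combine per-state counts, mirroring FileSeedCacheStatus.Merge
        self.counts.update( other.counts )
        
    

def generate_headers_status( statuses ):
    
    # fold all per-query statuses into one subscription-wide summary
    total = SimpleStatus()
    
    for status in statuses:
        
        total.Merge( status )
        
    
    return total

total = generate_headers_status( [ SimpleStatus( { 'done' : 3, 'unknown' : 1 } ), SimpleStatus( { 'done' : 2 } ) ] )
```

Because each query header already carries its cached status, the summary can be built without loading any heavyweight query log container from the database.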
@@ -45,8 +45,7 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):

        self._watchers_repeating_job = None
        
        self._status_dirty = True
-        self._status_cache = None
-        self._status_cache_generation_time = 0
+        self._status_cache = ClientImportFileSeeds.FileSeedCacheStatus()
        
        #

@@ -134,7 +133,6 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):

        self._status_cache = ClientImportFileSeeds.GenerateFileSeedCachesStatus( file_seed_caches )
        
        self._status_dirty = False
-        self._status_cache_generation_time = HydrusData.GetNow()
    
    def _RemoveWatcher( self, watcher_key ):

@@ -334,7 +332,7 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):

-    def GetTotalStatus( self ):
+    def GetTotalStatus( self ) -> ClientImportFileSeeds.FileSeedCacheStatus:
        
        with self._lock:

@@ -492,7 +490,7 @@ class MultipleWatcherImport( HydrusSerialisable.SerialisableBase ):

        file_seed_cache = watcher.GetFileSeedCache()
        
-        if file_seed_cache.GetStatusGenerationTime() > self._status_cache_generation_time: # has there been an update?
+        if file_seed_cache.GetStatus().GetGenerationTime() > self._status_cache.GetGenerationTime(): # has there been an update?
            
            self._SetDirty()

@@ -1334,7 +1332,7 @@ class WatcherImport( HydrusSerialisable.SerialisableBase ):

        with self._lock:
            
-            self._file_seed_cache.RetryFailures()
+            self._file_seed_cache.RetryFailed()
@@ -1184,7 +1184,10 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ):

    def ConvertURLsToMediaViewerTuples( self, urls ):
        
+        show_unmatched_urls_in_media_viewer = HG.client_controller.new_options.GetBoolean( 'show_unmatched_urls_in_media_viewer' )
        
        url_tuples = []
+        unmatched_url_tuples = []
        
        with self._lock:

@@ -1199,7 +1202,23 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ):

                    continue
                    
                
-                if url_class is not None:
+                if url_class is None:
                    
+                    if show_unmatched_urls_in_media_viewer:
                        
+                        try:
                            
+                            domain = ConvertURLIntoDomain( url )
                            
+                        except HydrusExceptions.URLClassException:
                            
+                            continue
                            
                        
+                        unmatched_url_tuples.append( ( domain, url ) )
                        
                    
+                else:
                    
                    url_class_key = url_class.GetMatchKey()

@@ -1220,6 +1239,10 @@ class NetworkDomainManager( HydrusSerialisable.SerialisableBase ):

        url_tuples.sort()
        
+        unmatched_url_tuples.sort()
        
+        url_tuples.extend( unmatched_url_tuples )
        
        return url_tuples
@@ -73,8 +73,8 @@ options = {}

# Misc

NETWORK_VERSION = 18
-SOFTWARE_VERSION = 399
-CLIENT_API_VERSION = 11
+SOFTWARE_VERSION = 400
+CLIENT_API_VERSION = 12

SERVER_THUMBNAIL_DIMENSIONS = ( 200, 200 )
@@ -21,6 +21,8 @@ from hydrus.core import HydrusExceptions

from hydrus.core import HydrusGlobals as HG
from hydrus.core import HydrusText

+ORIGINAL_PATH = None

def default_dict_list(): return collections.defaultdict( list )

def default_dict_set(): return collections.defaultdict( set )

@@ -681,18 +683,23 @@ def GetSubprocessEnv():

        ShowText( 'Your unmodified env is: {}'.format( env ) )
        
    
+    env = os.environ.copy()
    
+    if ORIGINAL_PATH is not None:
        
+        env[ 'PATH' ] = ORIGINAL_PATH
        
    
    if HC.RUNNING_FROM_FROZEN_BUILD:
        
        # let's make a proper env for subprocess that doesn't have pyinstaller woo woo in it
        
-        env = os.environ.copy()
        
        changes_made = False
        
-        swaperoo_strings = [ 'LD_LIBRARY_PATH', 'XDG_DATA_DIRS' ]
+        orig_swaperoo_strings = [ 'LD_LIBRARY_PATH', 'XDG_DATA_DIRS' ]
+        ok_to_remove_absent_orig = [ 'LD_LIBRARY_PATH' ]
        
-        for key in swaperoo_strings:
+        for key in orig_swaperoo_strings:
            
            orig_key = '{}_ORIG'.format( key )

@@ -710,6 +717,19 @@ def GetSubprocessEnv():

+        remove_if_hydrus_base_dir = [ 'QT_PLUGIN_PATH', 'QML2_IMPORT_PATH', 'SSL_CERT_FILE' ]
+        hydrus_base_dir = HG.client_controller.GetDBDir()
        
+        for key in remove_if_hydrus_base_dir:
            
+            if key in env and env[ key ].startswith( hydrus_base_dir ):
                
+                del env[ key ]
                
+                changes_made = True
                
            
        
    
    if ( HC.PLATFORM_LINUX or HC.PLATFORM_MACOS ):
        
        if 'PATH' in env:

@@ -24,6 +24,8 @@ def AddBaseDirToEnvPath():

    if 'PATH' in os.environ:
        
+        HydrusData.ORIGINAL_PATH = os.environ[ 'PATH' ]
        
        os.environ[ 'PATH' ] = HC.BASE_DIR + os.pathsep + os.environ[ 'PATH' ]
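The two hunks above cooperate: the launcher records the untouched `PATH` in `ORIGINAL_PATH` before prepending the application's own base dir, and `GetSubprocessEnv` later hands child processes the original value back, so children do not inherit the frozen build's private libraries. A minimal sketch of that save-then-restore pattern in isolation (the `/opt/fake_app` path is made up for illustration):

```python
import os

ORIGINAL_PATH = None

def add_base_dir_to_env_path( base_dir ):
    
    # remember the untouched PATH, then prepend our own dir for this process
    global ORIGINAL_PATH
    
    if 'PATH' in os.environ:
        
        ORIGINAL_PATH = os.environ[ 'PATH' ]
        
        os.environ[ 'PATH' ] = base_dir + os.pathsep + os.environ[ 'PATH' ]
        
    

def get_subprocess_env():
    
    # children get the pristine PATH back, not our modified one
    env = os.environ.copy()
    
    if ORIGINAL_PATH is not None:
        
        env[ 'PATH' ] = ORIGINAL_PATH
        
    
    return env

add_base_dir_to_env_path( '/opt/fake_app' )
child_env = get_subprocess_env()
```

The returned `env` dict would then be passed as the `env=` argument of `subprocess.Popen`, leaving the parent's modified `PATH` untouched.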
@ -18,7 +18,7 @@ except: # ImportError wasn't enough here as Linux went up the shoot with a __ver
|
|||
SERIALISABLE_TYPE_BASE = 0
|
||||
SERIALISABLE_TYPE_BASE_NAMED = 1
|
||||
SERIALISABLE_TYPE_SHORTCUT_SET = 2
|
||||
SERIALISABLE_TYPE_SUBSCRIPTION = 3
|
||||
SERIALISABLE_TYPE_SUBSCRIPTION_LEGACY = 3
|
 SERIALISABLE_TYPE_PERIODIC = 4
 SERIALISABLE_TYPE_GALLERY_IDENTIFIER = 5
 SERIALISABLE_TYPE_TAG_IMPORT_OPTIONS = 6

@@ -69,7 +69,7 @@ SERIALISABLE_TYPE_URL_CLASS = 50
 SERIALISABLE_TYPE_STRING_MATCH = 51
 SERIALISABLE_TYPE_CHECKER_OPTIONS = 52
 SERIALISABLE_TYPE_NETWORK_DOMAIN_MANAGER = 53
-SERIALISABLE_TYPE_SUBSCRIPTION_QUERY = 54
+SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LEGACY = 54
 SERIALISABLE_TYPE_STRING_CONVERTER = 55
 SERIALISABLE_TYPE_FILENAME_TAGGING_OPTIONS = 56
 SERIALISABLE_TYPE_FILE_SEED = 57

@@ -101,6 +101,10 @@ SERIALISABLE_TYPE_NOTE_IMPORT_OPTIONS = 82
 SERIALISABLE_TYPE_STRING_SPLITTER = 83
 SERIALISABLE_TYPE_STRING_PROCESSOR = 84
 SERIALISABLE_TYPE_TAG_AUTOCOMPLETE_OPTIONS = 85
+SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_LOG_CONTAINER = 86
+SERIALISABLE_TYPE_SUBSCRIPTION_QUERY_HEADER = 87
+SERIALISABLE_TYPE_SUBSCRIPTION = 88
+SERIALISABLE_TYPE_FILE_SEED_CACHE_STATUS = 89

 SERIALISABLE_TYPES_TO_OBJECT_TYPES = {}

@@ -126,6 +130,17 @@ def CreateFromNetworkBytes( network_string ):

     return CreateFromString( obj_string )

+def CreateFromNoneableSerialisableTuple( obj_tuple_or_none ):
+
+    if obj_tuple_or_none is None:
+
+        return None
+
+    else:
+
+        return CreateFromSerialisableTuple( obj_tuple_or_none )
+
+
 def CreateFromString( obj_string ):

     obj_tuple = json.loads( obj_string )

@@ -151,6 +166,17 @@ def CreateFromSerialisableTuple( obj_tuple ):

     return obj

+def GetNoneableSerialisableTuple( obj_or_none ):
+
+    if obj_or_none is None:
+
+        return None
+
+    else:
+
+        return obj_or_none.GetSerialisableTuple()
+
+
 def SetNonDupeName( obj, disallowed_names ):

     non_dupe_name = HydrusData.GetNonDupeName( obj.GetName(), disallowed_names )

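The two helpers added in this file are symmetric: one wraps serialisation so that `None` passes straight through, the other does the same for deserialisation. A minimal standalone sketch of the pattern follows; the `Serialisable` class and its tuple format here are illustrative stand-ins, not the real hydrus serialisation machinery.

```python
class Serialisable:
    """Illustrative stand-in for a hydrus serialisable object."""

    def __init__( self, name ):

        self._name = name

    def GetSerialisableTuple( self ):

        # the real format carries ( type, version, data ); simplified here
        return ( 'serialisable', 1, self._name )


def CreateFromSerialisableTuple( obj_tuple ):

    ( obj_type, version, name ) = obj_tuple

    return Serialisable( name )


# the noneable helpers: pass None straight through, otherwise delegate
def CreateFromNoneableSerialisableTuple( obj_tuple_or_none ):

    if obj_tuple_or_none is None:

        return None

    else:

        return CreateFromSerialisableTuple( obj_tuple_or_none )


def GetNoneableSerialisableTuple( obj_or_none ):

    if obj_or_none is None:

        return None

    else:

        return obj_or_none.GetSerialisableTuple()


obj = Serialisable( 'test' )

round_tripped = CreateFromNoneableSerialisableTuple( GetNoneableSerialisableTuple( obj ) )
```

Because both helpers map `None` to `None`, an optional member can be dumped and loaded with one call instead of an `if` at every use site.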
@@ -739,7 +739,7 @@ class HydrusResource( Resource ):

         if self._service.SupportsCORS():

-            request.setHeader( 'Access-Control-Allow-Headers', 'Hydrus-Client-API-Access-Key' )
+            request.setHeader( 'Access-Control-Allow-Headers', '*' )
             request.setHeader( 'Access-Control-Allow-Origin', '*' )
             request.setHeader( 'Access-Control-Allow-Methods', allowed_methods_string )
|
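This change relaxes `Access-Control-Allow-Headers` from one named header to the `*` wildcard, so a browser may send any request header on cross-origin Client API calls (note the wildcard only covers non-credentialed requests under the CORS spec). A plain-Python sketch of assembling the same preflight response headers; `build_cors_preflight_headers` is an illustrative helper, not a hydrus function.

```python
def build_cors_preflight_headers( allowed_methods ):

    # mirrors the headers set above: any request header, any origin,
    # and only the methods the endpoint actually supports
    return {
        'Access-Control-Allow-Headers': '*',
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': ', '.join( allowed_methods ),
    }


headers = build_cors_preflight_headers( [ 'GET', 'OPTIONS' ] )
```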
@@ -442,7 +442,7 @@ class TestClientAPI( unittest.TestCase ):

         self.assertEqual( response.status, 200 )

         self.assertEqual( response.getheader( 'Access-Control-Allow-Methods' ), 'GET' )
-        self.assertEqual( response.getheader( 'Access-Control-Allow-Headers' ), 'Hydrus-Client-API-Access-Key' )
+        self.assertEqual( response.getheader( 'Access-Control-Allow-Headers' ), '*' )
         self.assertEqual( response.getheader( 'Access-Control-Allow-Origin' ), '*' )

@@ -1587,7 +1587,7 @@ class TestClientAPI( unittest.TestCase ):

         tags_manager = ClientMediaManagers.TagsManager( service_keys_to_statuses_to_tags )

-        locations_manager = ClientMediaManagers.LocationsManager( set(), set(), set(), set(), urls = urls )
+        locations_manager = ClientMediaManagers.LocationsManager( set(), set(), set(), set(), inbox = False, urls = urls )
         ratings_manager = ClientMediaManagers.RatingsManager( {} )
         notes_manager = ClientMediaManagers.NotesManager( {} )
         file_viewing_stats_manager = ClientMediaManagers.FileViewingStatsManager( 0, 0, 0, 0 )

@@ -1621,6 +1621,10 @@ class TestClientAPI( unittest.TestCase ):

         metadata_row[ 'num_frames' ] = file_info_manager.num_frames
         metadata_row[ 'num_words' ] = file_info_manager.num_words

+        metadata_row[ 'is_inbox' ] = False
+        metadata_row[ 'is_local' ] = False
+        metadata_row[ 'is_trashed' ] = False
+
         metadata_row[ 'known_urls' ] = list( sorted_urls )

         tags_manager = media_result.GetTagsManager()

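The test now pins down three new per-file status booleans in the Client API metadata row alongside the existing dimension and count fields. A small sketch of assembling such a row; `build_metadata_row` and its parameters are illustrative, the field names follow the test.

```python
def build_metadata_row( file_info, inbox = False, local = False, trashed = False ):

    # dimension/count fields come straight from the file info mapping;
    # the three status booleans ride alongside them
    row = dict( file_info )

    row[ 'is_inbox' ] = inbox
    row[ 'is_local' ] = local
    row[ 'is_trashed' ] = trashed

    return row


row = build_metadata_row( { 'num_frames': 0, 'num_words': 0 } )
```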
@@ -1,5 +1,6 @@

 from hydrus.client.gui import ClientGUIDialogs
 from hydrus.client.gui import ClientGUIScrolledPanelsEdit
+from hydrus.client.gui import ClientGUISubscriptions
 from hydrus.client.gui import ClientGUITopLevelWindowsPanels
 import unittest
 from hydrus.core import HydrusGlobals as HG

@@ -61,7 +62,7 @@ class TestDBDialogs( unittest.TestCase ):

         with ClientGUITopLevelWindowsPanels.DialogEdit( None, title ) as dlg:

-            panel = ClientGUIScrolledPanelsEdit.EditSubscriptionsPanel( dlg, [] )
+            panel = ClientGUISubscriptions.EditSubscriptionsPanel( dlg, [] )

             dlg.SetPanel( panel )

@@ -469,7 +469,7 @@ class TestSerialisables( unittest.TestCase ):

         self.assertEqual( obj.GetName(), dupe_obj.GetName() )

         self.assertEqual( obj._gug_key_and_name, dupe_obj._gug_key_and_name )
-        self.assertEqual( len( obj._queries ), len( dupe_obj._queries ) )
+        self.assertEqual( len( obj._query_headers ), len( dupe_obj._query_headers ) )
         self.assertEqual( obj._initial_file_limit, dupe_obj._initial_file_limit )
         self.assertEqual( obj._periodic_file_limit, dupe_obj._periodic_file_limit )
         self.assertEqual( obj._paused, dupe_obj._paused )

@@ -485,7 +485,17 @@ class TestSerialisables( unittest.TestCase ):

         self._dump_and_load_and_test( sub, test )

         gug_key_and_name = ( HydrusData.GenerateKey(), 'muh test gug' )
-        queries = [ ClientImportSubscriptionQuery.SubscriptionQuery( 'test query' ), ClientImportSubscriptionQuery.SubscriptionQuery( 'test query 2' ) ]
+
+        query_headers = []
+
+        q = ClientImportSubscriptionQuery.SubscriptionQueryHeader()
+        q.SetQueryText( 'test query' )
+        query_headers.append( q )
+
+        q = ClientImportSubscriptionQuery.SubscriptionQueryHeader()
+        q.SetQueryText( 'test query 2' )
+        query_headers.append( q )
+
         checker_options = ClientImportOptions.CheckerOptions()
         initial_file_limit = 100
         periodic_file_limit = 50

@@ -501,11 +511,11 @@ class TestSerialisables( unittest.TestCase ):

         sub.SetTuple( gug_key_and_name, checker_options, initial_file_limit, periodic_file_limit, paused, file_import_options, tag_import_options, no_work_until )

-        sub.SetQueries( queries )
+        sub.SetQueryHeaders( query_headers )

         self.assertEqual( sub.GetGUGKeyAndName(), gug_key_and_name )
         self.assertEqual( sub.GetTagImportOptions(), tag_import_options )
-        self.assertEqual( sub.GetQueries(), queries )
+        self.assertEqual( sub.GetQueryHeaders(), query_headers )

         self.assertEqual( sub._paused, False )
         sub.PauseResume()
|
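The test changes above follow the subscription rework: a subscription no longer holds full `SubscriptionQuery` objects, only lightweight `SubscriptionQueryHeader`s, with the bulky per-query logs moved into separate containers. A hedged sketch of that split; the dataclasses and `extract_query_headers` are illustrative stand-ins under that assumption, not the hydrus classes.

```python
from dataclasses import dataclass, field

@dataclass
class LegacyQuery:

    # stand-in for the old all-in-one query: text plus a potentially
    # large file/gallery log
    query_text: str
    log_entries: list = field( default_factory = list )

@dataclass
class QueryHeader:

    # stand-in for the new header: just the light metadata
    query_text: str


def extract_query_headers( legacy_queries ):

    # the subscription keeps only headers; logs live elsewhere and are
    # loaded on demand
    return [ QueryHeader( q.query_text ) for q in legacy_queries ]


headers = extract_query_headers( [ LegacyQuery( 'test query' ), LegacyQuery( 'test query 2' ) ] )
```

Keeping only headers in the subscription object keeps serialising the subscription list cheap even when individual query logs grow large.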
@@ -0,0 +1,23 @@
+beautifulsoup4>=4.0.0
+chardet>=3.0.4
+cloudscraper>=1.2.33
+html5lib>=1.0.1
+lxml>=4.5.0
+lz4>=3.0.0
+nose>=1.3.0
+numpy>=1.16.0
+opencv-python-headless>=4.0.0
+Pillow>=6.0.0
+psutil>=5.0.0
+pylzma>=0.5.0
+pyOpenSSL>=19.1.0
+PySide2==5.15.0
+PySocks>=1.7.0
+python-mpv>=0.4.5
+PyYAML>=5.0.0
+QtPy>=1.9.0
+requests>=2.23.0
+Send2Trash>=1.5.0
+service-identity>=18.1.0
+six>=1.14.0
+Twisted>=20.3.0