The blocksize was hardcoded to 8192, preventing efficient uploads when
using a file-like body. Add a blocksize argument to __init__() so users
can configure the blocksize to fit their needs.
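For illustration, a minimal sketch of the new argument in use (host,
port, and file name are hypothetical):

import http.client

# A larger blocksize reduces the per-chunk overhead of send() when the
# request body is a file-like object; the default is still 8192.
conn = http.client.HTTPConnection('localhost', 8000, blocksize=512 * 1024)
with open('payload.bin', 'rb') as body:  # hypothetical payload file
    conn.request('PUT', '/upload', body)
print(conn.getresponse().status)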
I tested this by uploading data from /dev/zero to a web server that
drops the received data, to measure the overhead of HTTPConnection.send()
with a file-like object.
Here is an example 10 GiB upload with the default blocksize (8192):
$ time ~/src/cpython/release/python upload-httplib.py 10 https://localhost:8000/
Uploaded 10.00g in 17.53 seconds (584.00m/s)
real 0m17.574s
user 0m8.887s
sys 0m5.971s
Same with a 512 KiB blocksize:
$ time ~/src/cpython/release/python upload-httplib.py 10 https://localhost:8000/
Uploaded 10.00g in 6.60 seconds (1551.15m/s)
real 0m6.641s
user 0m3.426s
sys 0m2.162s
In real-world usage the difference will be smaller, depending on the
local and remote storage and the network.
See https://github.com/nirs/http-bench for more info.
All Blake2 params have to be encoded in little-endian byte order. For
the two multi-byte integer params, leaf_length and node_offset, that
means that assigning a native-endian integer to them appears to work on
little-endian platforms, but gives the wrong result on big-endian. The
current libb2 API doesn't make that very clear, and @sneves is working on
new API functions in the GH issue linked below. In the meantime, we can work
around the problem by explicitly assigning little-endian values to the
parameter block.
See https://github.com/BLAKE2/libb2/issues/12.
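To illustrate the general idea (not the libb2 code itself), packing
these fields with explicit little-endian format codes yields the same
bytes on every platform:

import struct

# blake2b parameter block: leaf_length is a 32-bit field and
# node_offset a 64-bit field; both must be stored little-endian.
leaf_length = 4096
node_offset = 1
print(struct.pack('<I', leaf_length))  # b'\x00\x10\x00\x00' everywhere
print(struct.pack('<Q', node_offset))  # b'\x01' + 7 zero bytes everywhere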
* bpo-31310: multiprocessing's semaphore tracker should be launched again if crashed
* Avoid mucking with process state in test.
Add a warning if the semaphore tracker process died, as semaphores may then be leaked.
* Add NEWS entry
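For context on the entry above, the tracker records named semaphores
such as the one below so they can be cleaned up (a minimal sketch):

import multiprocessing as mp

if __name__ == '__main__':
    # Creating a semaphore registers it with the semaphore tracker;
    # that tracker process is what is now restarted if it has died.
    sem = mp.Semaphore(2)
    with sem:
        print('acquired')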
* bpo-31308: If multiprocessing's forkserver dies, launch it again when necessary.
* Fix test on Windows
* Add NEWS entry
* Adopt a different approach: ignore SIGINT and SIGTERM, as in semaphore tracker.
* Fix comment
* Make sure the test doesn't muck with process state
* Also test previously-started processes
* Update 2017-08-30-17-59-36.bpo-31308.KbexyC.rst
* Avoid masking SIGTERM in forkserver. It's not necessary and causes a race condition in test_many_processes.
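For context, the forkserver is opted into as below; it is this server
process that is now relaunched when necessary (a minimal sketch):

import multiprocessing as mp

def square(x):
    return x * x

if __name__ == '__main__':
    # Workers are forked from a single long-lived server process;
    # that server is what is restarted if it dies (bpo-31308).
    mp.set_start_method('forkserver')
    with mp.Pool(2) as pool:
        print(pool.map(square, range(4)))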
When a single .c file contains several functions and/or methods with
the same name, a safety _METHODDEF #define statement is generated
for only one of them.
This fixes the bug by using the full name of the function, rather than
just the short name, to avoid duplicates.
* bpo-28643: Record profile-opt build progress with stamp files
The profile-opt makefile target is expensive to build. Since the
makefile does not contain complete dependency information for this
target, much extra work may be redone if the build is interrupted and
restarted. Even running "make" a second time will result in a huge
amount of redundant work.
As a minimal fix (rather than removing recursive "make" and adding a
proper dependency graph), split the profile-opt target into parts:
- ensure tree is clean (profile-clean-stamp)
- build with profile generation enabled (profile-gen-stamp)
- run task to generate profile information (profile-run-stamp)
- build optimized Python using above information (profile-opt)
We use "stamp" files to record completion of the steps. Running
"make clean" will not remove the profile-run-stamp file.
Other minor changes:
- remove the "build_all_use_profile" target. I don't expect callers
of the makefile to use this target, so removing it should be safe.
- remove execution of "profile-removal" at the end of "profile-opt". I
don't see any reason not to keep the profile information, given
the cost of generating it. Removing the "profile-run-stamp" file
will force it to be regenerated.
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
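For example, the new functions return integer nanoseconds (a minimal
illustration):

import time

# The *_ns() variants return integer nanoseconds, avoiding the
# precision loss of float seconds for large timestamps.
print(time.time_ns())          # e.g. 1509108032123456789
print(time.monotonic_ns())
print(time.perf_counter_ns())
print(time.process_time_ns())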
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimespec() and pytime_fromtimeval() now return
_PyTime_MAX or _PyTime_MIN on overflow, rather than invoking
undefined behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
Modify the code to use ncurses is_pad() instead of checking the WINDOW
_flags field. If the platform does not provide is_pad(), the existing
approach of checking the field is used instead.
Note: This change does not drop support for platforms that do not
have both the WINDOW _flags field and is_pad().
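At the Python level the behaviour is unchanged; pads keep working as
before (a minimal sketch):

import curses

def main(stdscr):
    # Refreshing a pad takes explicit coordinates; internally the module
    # now distinguishes pads via is_pad() where ncurses provides it.
    pad = curses.newpad(100, 100)
    pad.addstr(0, 0, 'hello from a pad')
    pad.refresh(0, 0, 0, 0, 5, 20)
    stdscr.getkey()

curses.wrapper(main)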
Editor and output windows only see an empty last prompt line.
This simplifies the code and fixes a minor bug when a newline is inserted.
sys.ps1, if present, is read on Shell start-up, but is not set or changed.
The startup refactoring means command-line settings are now applied
after settings are read from the environment.
This updates the way command-line settings are applied to account for
that, ensures more settings are read from the environment first in
_PyInitializeCore, and adds a simple test case covering the flags that
are easy to check.
Fix the pthread+semaphore implementation of
PyThread_acquire_lock_timed() when called with timeout > 0 and
intr_flag=0: recompute the timeout if sem_timedwait() is interrupted
by a signal (EINTR).
See also PEP 475.
The pthread implementation of PyThread_acquire_lock() now fails with
a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done
in the Windows implementation.
The check prevents any risk of overflow in PyThread_acquire_lock().
Also add the PY_DWORD_MAX constant.
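From Python code, the affected path is a timed acquire; a minimal
sketch of the behaviour the fix preserves:

import threading

lock = threading.Lock()
lock.acquire()
threading.Timer(0.5, lock.release).start()

# This timed acquire goes through PyThread_acquire_lock_timed(); with
# the fix, a signal arriving during the wait no longer distorts the
# remaining timeout -- it is recomputed and the wait resumes (PEP 475).
print(lock.acquire(timeout=2.0))  # True, once the timer releases it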
Calendar.itermonthdates() will now consistently raise an exception when a date falls outside of the 0001-01-01 through 9999-12-31 range. To support applications that cannot tolerate such exceptions, the new methods itermonthdays3() and itermonthdays4() are added. The new methods return tuples and are not restricted by the range supported by datetime.date.
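A minimal illustration of the new methods:

from calendar import Calendar

cal = Calendar()  # firstweekday=0, weeks start on Monday

# itermonthdays3() yields (year, month, day) tuples, including the
# leading/trailing days that pad the month to complete weeks.
print(next(iter(cal.itermonthdays3(2017, 10))))  # (2017, 9, 25)

# itermonthdays4() adds the day of the week.
print(next(iter(cal.itermonthdays4(2017, 10))))  # (2017, 9, 25, 0)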
Thanks @serhiy-storchaka for suggesting the itermonthdays4() method and for the review.