be possible to provoke unbounded recursion now, but leaving that to someone
else to provoke and repair.
Bugfix candidate -- although this is getting harder to backstitch, and the
cases it's protecting against are mondo contrived.
code, less memory. Tests have uncovered no drawbacks. Christian and
Vladimir are the other two people who have burned many brain cells on the
dict code in recent years, and they like the approach too, so I'm checking
it in without further ado.
in release builds. Suggested by Martin v. Loewis.
I'm half tempted to macroize PyErr_Occurred too, as the whole thing could
collapse to just
_PyThreadState_Current->curexc_type
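For the record, a macroized version along the lines suggested would be about
this (an illustration of the idea only, not part of this checkin; it leans on
the same assumption the suggestion does, namely that _PyThreadState_Current is
valid whenever errors are checked):

    #define PyErr_Occurred() (_PyThreadState_Current->curexc_type)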
random inputs: if you ran the test 100 times, you could expect it to
report a bogus failure. So loosened its expectations.
Also changed the way failing tests are printed, so that when run under
regrtest.py we get enough info to reproduce the failure.
exactly once. But the test code can't know that, as the number of times
__cmp__ is called depends on internal details of the dict implementation.
This is especially nasty because the __hash__ method returns the address
of the class object, so the hash codes seen by the dict can vary across
runs, causing the dict to use a different probe order across runs. I
just happened to see this test fail about 1 run in 7 today, but only
under a release build and when passing -O to Python. So, changed the test
to be predictable across runs.
PySequence_Size(), not PyObject_Size(): the latter considers the mapping
methods as well as the sequence methods, which is not needed here. Either
should be equally fast in this case, but PySequence_Size() offers a better
conceptual match.
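To make the distinction concrete (illustration only; `v' and the length
variables are made up for this note):

    /* For a sequence object both calls return the same length, but the
     * first consults only the sequence slot while the second would also
     * fall back to the mapping slot. */
    int n = PySequence_Size(v);    /* sequence protocol only */
    int m = PyObject_Size(v);      /* sequence, then mapping */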
absolute or relative.
remove(), rename() descriptions: Give more information about the cross-
platform behavior of these functions, so single-platform developers
can be aware of the potential issues when writing portable code.
This closes SF patch #426598.
In the default branch, keep three ifs that are used if level == 0, the
most common case. Note that the first if here is a slight optimization
for the 'O' format.
Second part of SF patch 426072.
Note that lots of code was re-indented.
Replace two-step of convertsimple() and convertsimple1() with
convertsimple() and helper converterr(), which is called to format
error messages when convertsimple() fails. The old code did all the
real work in convertsimple1(), but deferred error message formatting
to convertsimple(). The result was paying the price of a second
function call on every call just to format error messages in the
failure cases.
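The shape of the new arrangement, roughly (a sketch of the pattern, not the
checked-in code; the body and message text here are simplified):

    static char *
    converterr(char *expected, PyObject *arg, char *msgbuf)
    {
            /* Reached only after convertsimple() has already failed, so
             * the formatting cost is paid on the error path alone. */
            sprintf(msgbuf, "must be %.50s, not %.50s", expected,
                    arg == Py_None ? "None" : arg->ob_type->tp_name);
            return msgbuf;
    }

and a typical call site inside convertsimple() is just

    if (!PyInt_Check(arg))
            return converterr("integer", arg, msgbuf);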
Factor out of the buffer-handling code in convertsimple() and package
it as convertbuffer().
Add two macros to ease readability of Unicode conversions,
UNICODE_DEFAULT_ENCODING() and CONV_UNICODE, an error string.
The convertsimple() routine had awful indentation problems, primarily
because there were two tabs between the case line and the body of the
case statements. This patch reformats the entire function to have a
single tab between case line and case body, which makes the code
easier to read (and consistent with ceval). The introduction of
converterr() exacerbated the problem and prompted this fix.
Also, eliminate non-standard whitespace after opening paren and before
closing paren in a few if statements.
(This checkin is part of SF patch 426072.)
name of the test, only write the output file if it already exists (and
tell the user to consider removing it). This avoids the generation of
unnecessary turds.
Keyword arguments passed to builtin functions that don't take them are
ignored.
>>> {}.clear(x=2)
>>>
instead of
>>> {}.clear(x=2)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: clear() takes no keyword arguments
Added information on PyIter_Check(), PyIter_Next(),
PyObject_Unicode(), PyString_AsDecodedObject(),
PyString_AsEncodedObject(), and PyThreadState_GetDict().
state *which* other function the current one is like, even if the
descriptions are adjacent.
Revise the _PyTuple_Resize() description to reflect the removal of the
third parameter.
updatecache(): When using imputil, sys.path may contain things other than
strings. Ignore such things instead of blowing up.
Hard to say whether this is a bugfix or a feature ...
instead of multiplication to generate the probe sequence. The idea is
recorded in Python-Dev for Dec 2000, but that version is prone to rare
infinite loops.
The value is in getting *all* the bits of the hash code to participate;
and, e.g., this speeds up querying every key in a dict with keys
[i << 16 for i in range(20000)] by a factor of 500. Should be equally
valuable in any bad case where the high-order hash bits were getting
ignored.
Also wrote up some of the motivations behind Python's ever-more-subtle
hash table strategy.
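In outline, the probe recurrence now works like the sketch below (illustrative
only; the real loops live in dictobject.c, where the choice of constants is
documented at length):

    #define PERTURB_SHIFT 5

    /* One step of the probe sequence: `i' is the previous slot index,
     * `perturb' starts out as the full hash code, and `mask' is the table
     * size minus one (table sizes are powers of 2). */
    static unsigned long
    next_slot(unsigned long i, unsigned long *perturb, unsigned long mask)
    {
            i = (i << 2) + i + *perturb + 1;   /* i = 5*i + perturb + 1 */
            *perturb >>= PERTURB_SHIFT;        /* fold in higher hash bits */
            return i & mask;
    }

Because perturb eventually shifts down to 0, the recurrence degenerates to
i = 5*i + 1 (mod 2**k), which visits every slot in the table, so the probe
loop is guaranteed to terminate.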
resizing.
Accurate timings are impossible on my Win98SE box, but this is obviously
faster even on this box for reasonable list.append() cases. I give
credit for this not to the resizing strategy but to getting rid of integer
multiplication and division (in favor of shifting) when computing the
rounded-up size.
For unreasonable list.append() cases, Win98SE now displays linear behavior
for one-at-a-time appends up to a list with about 35 million elements. Then
it dies with a MemoryError, due to fatally fragmented *address space*
(there's plenty of VM available, but by this point Win9X has broken user
space into many distinct heaps, none of which has enough contiguous space
left to resize the list, and for whatever reason Win9x isn't coalescing
the dead heaps). Before the patch it got a MemoryError for the same
reason, but once the list reached about 2 million elements.
Haven't yet tried on Win2K but have high hopes extreme list.append()
will be much better behaved now (NT & Win2K didn't fragment address space,
but suffered obvious quadratic-time behavior before as lists got large).
For other systems I'm relying on common sense: replacing integer * and /
by << and >> can't plausibly hurt, the number of function calls hasn't
changed, and the total operation count for reasonably small lists is about
the same (while the operations are cheaper now).
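For reference, the rounded-up size can be computed with shifts alone along
these lines (patterned after listobject.c's roundupsize(); treat it as a
sketch rather than a quote of the checkin):

    static int
    roundupsize(int n)
    {
            unsigned int nbits = 0;
            unsigned int n2 = (unsigned int)n >> 5;

            /* Overallocate to a multiple of 8 for n < 256, of 64 for
             * n < 2048, of 512 for n < 16384, and so on -- no integer
             * multiply or divide needed. */
            do {
                    n2 >>= 3;
                    nbits += 3;
            } while (n2);
            return ((n >> nbits) + 1) << nbits;
    }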
in the table of mapping object operations. Re-numbered the list of
notes to reflect moving the "Added in version 2.2." note into that list
instead of inserting it into the last column of the table.