and are lists, and then just the string elements (if any)).
There are good and bad reasons for this. The good reason is to support
dir() "like before" on objects of extension types that haven't migrated
to the class introspection API yet. The bad reason is that Python's own
method objects are such a type, and this is the quickest way to get their
im_self etc attrs to "show up" via dir(). It looks much messier to move
them to the new scheme, as their current getattr implementation presents
a view of their attrs that's a union of their own attrs plus their
im_func's attrs. In particular, methodobject.__dict__ actually returns
methodobject.im_func.__dict__, and if that's important to preserve it
doesn't seem to fit the class introspection model at all.
- use PyModule_Check() instead of PyObject_TypeCheck(), now that we can.
- don't assert that the __dict__ gotten out of a module is always
a dictionary; check its type, and raise an exception if it's not.
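For illustration, the kind of check meant here (the helper name and
error text are assumptions, not the actual code):

    /* Validate a module's __dict__ instead of asserting about it. */
    static PyObject *
    module_getdict_checked(PyObject *m)     /* hypothetical helper */
    {
        PyObject *d = PyModule_GetDict(m);
        if (d != NULL && !PyDict_Check(d)) {
            PyErr_SetString(PyExc_TypeError,
                            "module __dict__ is not a dictionary");
            return NULL;
        }
        return d;
    }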
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message (a
sketch of this gating follows the notes below).
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it into *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
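As referenced above, a sketch of how a classic-division slot might
gate the warning on the new flag (the use of PyErr_Warn() and the
message text are assumptions, not the actual code):

    /* e.g. in intobject.c's classic division slot: */
    if (Py_DivisionWarningFlag &&
        PyErr_Warn(PyExc_DeprecationWarning, "classic int division") < 0)
        return NULL;    /* the warning became an exception (-Werror) */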
- Do not compile unicodeobject, unicodectype, and unicodedata if Unicode is disabled
- check for Py_USING_UNICODE in all places that use Unicode functions
- disable Unicode literals and the Unicode-related builtin functions
- add the types.StringTypes list
- remove Unicode literals from most tests.
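For example, the Py_USING_UNICODE guard pattern from the second
bullet, sketched (the function around it is hypothetical):

    /* Guard a Unicode-dependent path so the file still compiles
       when Unicode is disabled. */
    static int
    object_length_example(PyObject *v)
    {
    #ifdef Py_USING_UNICODE
        if (PyUnicode_Check(v))
            return (int)PyUnicode_GetSize(v);   /* Unicode-only path */
    #endif
        return (int)PyObject_Length(v);
    }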
types -- currently Type, List, None and NotImplemented. To be called
from Py_Initialize() instead of accumulating calls there.
Also rename type(None) to NoneType and type(NotImplemented) to
NotImplementedType -- naming the type identical to the object was
confusing.
returns that. (This fix is also by MvL; checking it in because I want
to make more changes here. I'm still not 100% satisfied -- see
comments attached to the patch.)
- Add an explicit call to PyType_Ready(&PyList_Type) to pythonrun.c
(just for the heck of it, really -- we should either explicitly
ready all types, or none).
And remove all the extern decls in the middle of .c files.
Apparently, it was excluded from the header file because it is
intended for internal use by the interpreter. It's still intended for
internal use and documented as such in the header file.
case of objects with equal types which support tp_compare. Give
type objects a tp_compare function.
Also add c<0 tests before a few PyErr_Occurred tests.
NEEDS DOC CHANGES
A few more AttributeErrors turned into TypeErrors, but in test_contains
this time.
The full story for instance objects is pretty much unexplainable, because
instance_contains() tries its own flavor of iteration-based containment
testing first, and PySequence_Contains doesn't get a chance at it unless
instance_contains() blows up. A consequence is that
some_complex_number in some_instance
dies with a TypeError unless some_instance.__class__ defines __iter__ but
does not define __getitem__.
the code necessary to accomplish this is simpler and faster if confined to
the object implementations, so we only do this there.
This causes no behavioral changes beyond a (very slight) speedup.
object's type didn't define tp_print, there were still cases where the
full "print uses str() which falls back to repr()" semantics weren't
honored. This resulted in
>>> print None
<None object at 0x80bd674>
>>> print type(u'')
<type object at 0x80c0a80>
Fixed this by always using the appropriate PyObject_Repr() or
PyObject_Str() call, rather than trying to emulate what they would do.
Also simplified PyObject_Str() to always fall back on PyObject_Repr()
when tp_str is not defined (rather than making an extra check for
instances with a __str__ method). And got rid of the special case for
strings.
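The simplified fallback, sketched (close to the description above,
but not necessarily the exact code):

    #include "Python.h"

    PyObject *
    PyObject_Str(PyObject *v)
    {
        if (v == NULL)
            return PyString_FromString("<NULL>");
        if (v->ob_type->tp_str == NULL)
            return PyObject_Repr(v);    /* always fall back on repr() */
        return (*v->ob_type->tp_str)(v);
    }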
Fix a very old flaw in PyObject_Print(). Amazing! When an object
type defines tp_str but not tp_repr, 'print x' to a real file
object would not call the tp_str slot but rather print a default style
representation: <foo object at 0x....>. This even though 'print x' to
a file-like-object would correctly call the tp_str slot.
and the test for errors, so that an error in the default compare
doesn't go undetected. This fixes SF Bug #132933 (submitted by
effbot) -- list.sort doesn't detect comparison errors.
PyObject_Dump(): New function that is useful when debugging Python's C
runtime. In something like gdb it can be a pain to get some useful
information out of PyObject*'s. This function prints the str() of the
object to stderr, along with the object's refcount and hex address.
PyGC_Dump(): Similar to PyObject_Dump() but knows how to cast from the
garbage collector prefix back to the PyObject* structure.
[See Misc/gdbinit for some useful gdb hooks]
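Roughly what the dump helper does, per the description above (a
sketch, not the exact code):

    #include "Python.h"
    #include <stdio.h>

    /* Print str(op) plus its refcount and address to stderr. */
    void
    PyObject_Dump(PyObject *op)
    {
        fprintf(stderr, "object  : ");
        (void)PyObject_Print(op, stderr, Py_PRINT_RAW);   /* str() form */
        fprintf(stderr, "\nrefcount: %ld\naddress : %p\n",
                (long)op->ob_refcnt, (void *)op);
    }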
none_dealloc(): Rather than SEGV if we accidentally decref None out of
existence, we assign None's and NotImplemented's destructor slot to
this function, which just calls abort().
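A sketch of that destructor (the signature is assumed; the call to
abort() is per the message):

    #include "Python.h"
    #include <stdlib.h>

    /* Installed as tp_dealloc for None and NotImplemented: reaching
       it means a stray Py_DECREF drove a static object's refcount
       to zero. */
    static void
    none_dealloc(PyObject *ignore)
    {
        (void)ignore;
        abort();
    }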
Barry, that comment belongs in the code, not in the checkin msg.
The code *used* to do this correctly (as you well know, since you
& I went thru considerable pain to fix this the first time).
However, because the *reason* for the convolution wasn't recorded
in the code as a comment, somebody threw it all away the first
time it got reworked.
c-code-isn't-often-self-explanatory-ly y'rs - tim
default_3way_compare(): Stick the checkin message from 2.110 in a
comment.
to integer types (i.e. Py_uintptr_t, our spelling of C9X's uintptr_t).
ANSI specifies that pointer compares other than == and != to
non-related structures are undefined. This quiets an Insure
portability warning.
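The pattern, sketched for default_3way_compare() (v and w being the
two objects being compared):

    /* Relational compares on unrelated raw pointers are undefined;
       go through Py_uintptr_t instead. */
    Py_uintptr_t uv = (Py_uintptr_t)v;
    Py_uintptr_t uw = (Py_uintptr_t)w;
    int c = (uv < uw) ? -1 : (uv > uw) ? 1 : 0;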
I found where rich comparison of unequal recursive objects gave
unintuitive results. In a discussion with Tim, where we discovered
that our intuition on when a<=b should be true was failing, we decided
to outlaw ordering comparisons on recursive objects. (Once we have
fixed our intuition and designed a matching algorithm that's practical
and reasonable to implement, we can allow such orderings again.)
- Refactored the recursive-object comparison framework; more is now
done in the support routines so less needs to be done in the calling
routines (even at the expense of slowing it down a bit -- this
should normally never be invoked, it's mostly just there to avoid
blowing up the interpreter).
- Changed the framework so that the comparison operator used is also
stored. (The dictionary now stores triples (v, w, op) instead of
pairs (v, w).)
- Changed the nesting limit to a more reasonable small 20; this only
slows down comparisons of very deeply nested objects (unlikely to
occur in practice), while speeding up comparisons of recursive
objects (previously, this would first waste time and space on 500
nested comparisons before it would start detecting recursion).
- Changed rich comparisons for recursive objects to raise a ValueError
exception when recursion is detected for ordering operators (<, <=,
>, >=).
Unrelated change:
- Moved PyObject_Unicode() to just under PyObject_Str(), where it
belongs. MAL's patch must've inserted it in a random spot between two
functions in the file -- between two helpers for rich comparison...
- Use the compare nesting level and in-progress dictionary properly in
PyObject_RichCompare().
- Change the in-progress code to use static variables instead of
globals (both the nesting level and the key for the thread dict were
globals but have no reason to be globals; the key can even be a
function-static variable in get_inprogress_dict()).
- Rewrote try_rich_to_3way_compare() to benefit from the similarity of
the three cases, making it table-driven.
- In try_rich_to_3way_compare(), test for EQ before LT and GT. This
turns out essential when comparing recursive UserList instances;
with the old code, these would recurse into rich comparison three
times for each nesting level up to NESTING_LIMIT/2, making the total
number of calls on the order of 3**(NESTING_LIMIT/2)!
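A sketch of the table-driven shape (helper name, table layout and
error conventions are assumptions; note EQ is deliberately first):

    static int
    rich_to_3way(PyObject *v, PyObject *w)  /* hypothetical helper */
    {
        static const int ops[3] = {Py_EQ, Py_LT, Py_GT};
        static const int outcomes[3] = {0, -1, 1};
        int i;
        for (i = 0; i < 3; i++) {
            PyObject *res = PyObject_RichCompare(v, w, ops[i]);
            int istrue;
            if (res == NULL)
                return -2;              /* error marker */
            istrue = PyObject_IsTrue(res);
            Py_DECREF(res);
            if (istrue < 0)
                return -2;
            if (istrue)
                return outcomes[i];
        }
        return 2;                       /* rich comparison unsupported */
    }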
NOTE: I'm not 100% comfortable with this. It works for the standard
test suite (which compares a few trivial recursive data structures
only), but I'm not sure that the in-progress dictionary is used
properly by the rich comparison code. Jeremy suggested that maybe the
operation should be included in the dict. Currently I presume that
objects in the dict are equal unless proven otherwise, and I set the
outcome for the rich comparison accordingly: true for operators EQ,
LE, GE, and false for the other three. But Jeremy seems to think that
there may be counter-examples where this doesn't do the right thing.
except that it always returns Unicode objects.
A new C API PyObject_Unicode() is also provided.
This closes patch #101664.
Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum.
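Usage is analogous to PyObject_Str() (a sketch):

    PyObject *u = PyObject_Unicode(obj);    /* obj: any Python object */
    if (u == NULL)
        return NULL;            /* conversion failed, exception set */
    /* ... u is guaranteed to be a Unicode object ... */
    Py_DECREF(u);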
PyObject_RichCompare() and PyObject_RichCompareBool().
XXX Note: the code that checks for deeply nested rich comparisons is
bogus -- it assumes the two objects are always identical, rather than
using the same logic as PyObject_Compare(). I'll fix that later.
Add definitions of INT_MAX and LONG_MAX to pyport.h.
Remove includes of limits.h and conditional definitions of INT_MAX
and LONG_MAX elsewhere.
This closes SourceForge patch #101659 and bug #115323.
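The kind of guarded fallbacks pyport.h can now provide (the values
are the conventional 32-bit ones; treat the exact form as a sketch):

    /* In pyport.h, after including <limits.h> where available: */
    #ifndef INT_MAX
    #define INT_MAX 2147483647
    #endif
    #ifndef LONG_MAX
    #define LONG_MAX 0X7FFFFFFFL
    #endif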
objects for the attribute name. Unicode objects are converted to
a string using the default encoding before trying the lookup.
Note that previously it was allowed to pass arbitrary objects as
attribute name in case the tp_getattro/setattro slots were defined.
This patch fixes this by applying an explicit string check first:
all uses of these slots expect string objects and do not check
for the type, which can result in a core dump. The tp_getattro/setattro
slots are still useful as an optimization for lookups using interned
string objects though.
This patch fixes bug #113829.
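Sketched, the check the patch applies before using the slots (error
text and conversion details are assumptions):

    if (!PyString_Check(name)) {
        if (PyUnicode_Check(name)) {
            /* convert to a string using the default encoding; a real
               implementation must DECREF the converted name later */
            name = PyUnicode_AsEncodedString(name, NULL, NULL);
            if (name == NULL)
                return NULL;
        }
        else {
            PyErr_SetString(PyExc_TypeError,
                            "attribute name must be string");
            return NULL;
        }
    }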
types (i.e. Py_uintptr_t, our spelling of C9X's uintptr_t). ANSI
specifies that pointer compares other than == and != to non-related
structures are undefined. This quiets an Insure portability warning.
This was a misleading bug -- the true "bug" was that hash(x) gave an error
return when x is an infinity. Fixed that. Added new Py_IS_INFINITY macro to
pyport.h. Rearranged code to reduce growing duplication in hashing of float and
complex numbers, pushing Trent's earlier stab at that to a logical conclusion.
Fixed exceedingly rare bug where hashing of floats could return -1 even if there
wasn't an error (didn't waste time trying to construct a test case, it was simply
obvious from the code that it *could* happen). Improved complex hash so that
hash(complex(x, y)) doesn't systematically equal hash(complex(y, x)) anymore.
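A portable fallback of the sort the macro can use when the platform
offers nothing better (an infinity is unchanged when halved; treat
the exact form as an assumption):

    #ifndef Py_IS_INFINITY
    #define Py_IS_INFINITY(X) ((X) && (X)*0.5 == (X))
    #endif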
comments, docstrings or error messages. I fixed two minor things in
test_winreg.py ("didn't" -> "Didn't" and "Didnt" -> "Didn't").
There is a minor style issue involved: Guido seems to have preferred English
spelling (behaviour, honour) in a couple of places. This patch changes that to
American, which is the more prominent style in the source. I prefer English
myself, so if English is preferred, I'd be happy to supply a patch myself ;)
The common technique for printing out a pointer has been to cast to a long
and use the "%lx" printf modifier. This is incorrect on Win64 where casting
to a long truncates the pointer. The "%p" formatter should be used instead.
The problem as stated by Tim:
> Unfortunately, the C committee refused to define what %p conversion "looks
> like" -- they explicitly allowed it to be implementation-defined. Older
> versions of Microsoft C even stuck a colon in the middle of the address (in
> the days of segment+offset addressing)!
The result is that the hex value of a pointer will maybe/maybe not have a 0x
prepended to it.
Notes on the patch:
There are two main classes of changes:
- in the various repr() functions that print out pointers
- debugging printf's in the various thread_*.h files (these are why the
patch is large)
Closes SourceForge patch #100505.
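The before/after pattern described above, sketched for a repr
function (buf and self are stand-in names):

    /* Before: casting to long truncates the pointer on Win64. */
    sprintf(buf, "<foo object at %lx>", (long)self);
    /* After: portable, though %p's exact rendering varies by platform. */
    sprintf(buf, "<foo object at %p>", (void *)self);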
errors in some of the hash algorithms. For example, in float_hash and
complex_hash a certain part of the value is not included in the hash
calculation. See Tim's, Guido's, and my discussion of this on
python-dev in May under the title "fix float_hash and complex_hash for
64-bit *nix".
(2) The hash algorithms that use pointers (e.g. func_hash, code_hash)
are universally not correct on Win64 (they assume that sizeof(long) ==
sizeof(void*)).
As well, this patch significantly cleans up the hash code. It adds the
two functions _Py_HashDouble and _PyHash_VoidPtr that the various
hashing routines are changed to use.
These help maintain the hash function invariant: (a==b) =>
(hash(a)==hash(b)). I have added Lib/test/test_hash.py and
Lib/test/output/test_hash to test this for some cases.
For more comments, read the patches@python.org archives.
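To illustrate the invariant, a sketch along the lines of
_Py_HashDouble (the body is an assumption, not the actual
implementation): a double that compares equal to a long must hash
like that long.

    #include <math.h>
    #include <limits.h>

    /* Sketch only: preserve (a==b) => (hash(a)==hash(b)) across
       int/float. The non-integral branch is a simplified stand-in;
       the real code also avoids returning -1 (the error value). */
    static long
    hash_double_sketch(double v)
    {
        double intpart, fractpart;
        fractpart = modf(v, &intpart);
        if (fractpart == 0.0 &&
            intpart >= (double)LONG_MIN && intpart <= (double)LONG_MAX)
            return (long)intpart;   /* hashes like the equal integer */
        else {
            int e;
            double m = frexp(v, &e);                /* v == m * 2**e */
            return ((long)(m * 1073741824.0)) ^ e;  /* fold mantissa, exp */
        }
    }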
For documentation read the comments in mymalloc.h and objimpl.h.
(This is not exactly what Vladimir posted to the patches list; I've
made a few changes, and Vladimir sent me a fix in private email for a
problem that only occurs in debug mode. I'm also holding back on his
change to main.c, which seems unnecessary to me.)
Improvements:
- no longer needs any extra memory
- has no relationship to tstate
- works in debug mode
- can easily be modified for free threading (hi Greg:)
Side effects:
Trashcan does change the order of object destruction.
Preventing that would be quite an immense effort, as
my attempts have shown. This version always works
the same, with or without debug mode. The slightly
changed destruction order should therefore be no problem.
Algorithm:
While the old idea of delaying the destruction of some
objects at a certain recursion level was kept, we now
no longer allocate an object to hold these objects.
The delayed objects are instead chained together
via their ob_type field. The type is encoded via
ob_refcnt. When it comes to the destruction of the
chain of waiting objects, the topmost object is popped
off the chain and revived with type and refcount 1,
then it gets a normal Py_DECREF.
I am confident that this solution is near optimum
for minimizing side effects and code bloat.
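The chaining trick, reconstructed from the description above (field
widths and casts are assumptions; this is a sketch, not the
historical code):

    static PyObject *delete_later = NULL;

    static void
    trash_deposit(PyObject *op)
    {
        op->ob_refcnt = (long)op->ob_type;          /* encode real type */
        op->ob_type = (PyTypeObject *)delete_later; /* chain via ob_type */
        delete_later = op;
    }

    static void
    trash_destroy_chain(void)
    {
        while (delete_later != NULL) {
            PyObject *op = delete_later;
            delete_later = (PyObject *)op->ob_type;
            /* revive with its real type and refcount 1 ... */
            op->ob_type = (PyTypeObject *)op->ob_refcnt;
            op->ob_refcnt = 1;
            /* ... then a normal Py_DECREF runs the real destructor */
            Py_DECREF(op);
        }
    }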
Added wrapping macros to dictobject.c, listobject.c, tupleobject.c,
frameobject.c, traceback.c that safely prevent core dumps
on stack overflow. Macros and functions in object.c, object.h.
The method is an "elevator destructor" that turns cascading
deletes into tail recursive behavior when some limit is hit.
before calling it. This check was there when the objects were of the
same type *before* coercion, but not if they initially differed but
became the same *after* coercion.
Added PyNumber_CoerceEx(), which is like PyNumber_Coerce() except
that when the coercion can't be done and no
other exceptions happen, it returns 1 instead of raising an
exception.
Use this function in PyObject_Compare() to avoid raising an exception
simply because two objects with numeric behavior can't be coerced to a
common type; instead, proceed with the non-numeric default comparison.
Note that this is a somewhat questionable practice -- comparisons for
numeric objects shouldn't default to random behavior like this, but it
is required for backward compatibility. (Case in point, it broke
comparison of kjDict objects to integers in Aaron Watters' kjbuckets
extension.) A correct fix (for python 2.0) should involve a different
definition of comparison altogether.
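How PyObject_Compare() might use it, sketched (a and b being the
objects under comparison; the details are assumptions):

    PyObject *v = a, *w = b;
    int err = PyNumber_CoerceEx(&v, &w);  /* -1 error, 0 coerced, 1 can't */
    if (err < 0)
        return -1;              /* a real error; exception already set */
    if (err == 0) {
        /* coerced to a common numeric type: compare numerically
           (a real implementation checks tp_compare != NULL first) */
        int cmp = (*v->ob_type->tp_compare)(v, w);
        Py_DECREF(v);
        Py_DECREF(w);
        return cmp;
    }
    /* err == 1: no common type -- fall through to the default
       (non-numeric) comparison instead of raising. */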
In _Py_PrintReferences(), no longer suppress once-referenced string.
Add Py_Malloc and friends and PyMem_Malloc and friends (malloc
wrappers for third parties).
getcounts() returns a list of counts of allocations and
deallocations for all different object types.
getobjects(n [, type ]) returns a list of recently allocated
and not-yet-freed objects of the given type (all
objects if no type given). Only the n most recent
(all if n==0) objects are returned.
getcounts is only available if compiled with -DCOUNT_ALLOCS,
getobjects is only available if compiled with -DTRACE_REFS. Note that
everything must be compiled with these options!
* object.[ch], bltinmodule.c, fileobject.c: changed str() to call
strobject() which calls an object's __str__ method if it has one.
strobject() is also called by writeobject() when PRINT_RAW is passed.
* ceval.c: rationalize code for PRINT_ITEM (no change in function!)
* funcobject.c, codeobject.c: added compare and hash functionality.
Functions with identical code objects and the same global dictionary are
equal. Code objects are equal when their code, constants list and names
list are identical (i.e. the filename and code name don't count).
(hash doesn't work yet since the constants are in a list and lists can't
be hashed -- suppose this should really be done with a tuple now that we have
resizetuple!)
shared. The default is to save references to the integers in
the range -1..99. The lower limit can be set by defining
NSMALLNEGINTS (absolute value of smallest integer to be saved)
and NSMALLPOSINTS (1 more than the largest integer to be
saved).
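Sketch of the sharing (the bounds are the documented defaults; the
placement and code shape are assumptions):

    #ifndef NSMALLNEGINTS
    #define NSMALLNEGINTS 1     /* |smallest shared int|: -1 */
    #endif
    #ifndef NSMALLPOSINTS
    #define NSMALLPOSINTS 100   /* 1 more than largest shared int: 99 */
    #endif
    static PyIntObject *small_ints[NSMALLNEGINTS + NSMALLPOSINTS];

    /* In PyInt_FromLong (hypothetical placement): */
    if (-NSMALLNEGINTS <= ival && ival < NSMALLPOSINTS &&
        small_ints[ival + NSMALLNEGINTS] != NULL) {
        PyObject *v = (PyObject *)small_ints[ival + NSMALLNEGINTS];
        Py_INCREF(v);
        return v;
    }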
tupleobject.c: Save a reference to the empty tuple to be returned
whenever a tuple of size 0 is requested. Tuples of size 1
up to, but not including, MAXSAVESIZE (default 20) are put in
free lists when deallocated. When MAXSAVESIZE equals 1, only
share references to the empty tuple, when MAXSAVESIZE equals
0, don't include the code at all and revert to the old
behavior.
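And a sketch of the size-keyed free lists (MAXSAVESIZE default 20;
linking the freed tuples through ob_item[0] is an assumption):

    #ifndef MAXSAVESIZE
    #define MAXSAVESIZE 20
    #endif
    static PyTupleObject *free_tuples[MAXSAVESIZE]; /* [0]: empty tuple */

    /* On deallocation of a tuple op of this size: */
    if (0 < size && size < MAXSAVESIZE) {
        op->ob_item[0] = (PyObject *)free_tuples[size];  /* link */
        free_tuples[size] = op;
        return;                 /* kept for reuse, not freed */
    }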
object.c: Print some more statistics when COUNT_ALLOCS is defined.
image objects, and lots of new methods.
* Added counting of allocations and deallocations of builtin types if
COUNT_ALLOCS is defined. Had to move calls to NEWREF down in some
files.
* Bug fix in sorting lists.
objects of its derived classes; allow anything that has an attribute
named "__privileged__" access to anything.
* object.[ch]: added hasattr() -- test whether getattr() will succeed.
before it.
* ceval.c, object.c: moved testbool() to object.c (now extern visible)
* stringobject.c: fix bugs in and rationalize string resize in formatstring()
* tokenizer.[ch]: fix non-working code for lines longer than BUFSIZ
* Stubs for faster implementation of local variables (not yet finished)
* Added function name to code object. Print it for code and function
objects. THIS MAKES THE .PYC FILE FORMAT INCOMPATIBLE (the version
number has changed accordingly)
* Print address of self for built-in methods
* New internal functions getattro and setattro (getattr/setattr with
string object arg)
* Replaced "dictobject" with more powerful "mappingobject"
* New per-type function tp_hash to implement arbitrary object hashing,
and hashobject() to interface to it
* Added built-in functions hash(v) and hasattr(v, 'name')
* classobject: made some functions static that accidentally weren't;
added __hash__ special instance method to implement hash()
* Added proper comparison for built-in methods and functions
* flmodule.c: added some missing functions; changed readonly flags of
some data members based upon FORMS documentation.
* listobject.c: fixed int/long arg lint bug (bites PC compilers).
* several: removed redundant print methods (repr is good enough).
* posixmodule.c: added (still experimental) process group functions.
calls the repr function. When the refcount is bad, don't print
the object at all (printing it would risk a crash).
Changes to checking and printing of references: the consistency
check is somewhat faster; don't print strings referenced once
(most occur in function's name lists).