Python property lookup with custom __setattr__ and __slots__

I have a class that uses __slots__ and makes them nearly immutable by overriding __setattr__ to always raise an error:

class A:
    __slots__ = ['a', 'b', '_x']

    def __init__(self, a, b):
        object.__setattr__(self, 'a', a)
        object.__setattr__(self, 'b', b)

    def __setattr__(self, attr, value):
        raise AttributeError('Immutable!')

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        object.__setattr__(self, '_x', value)

Here, the "private" attribute _x is a placeholder for a complex operation that interacts with some custom hardware.

Since x is a property, I expect to be able to do something like

 inst = A(1, 2)
 inst.x = 3

Instead, I see my AttributeError with the message Immutable!.

There are a number of obvious workarounds here, such as removing the custom __setattr__ (which I do not want to do) or rewriting it as

def __setattr__(self, attr, value):
    if attr != 'x':
        raise AttributeError('Immutable!')
    super().__setattr__(attr, value)

This seems like an awkward method that has the potential to balloon out of proportion if I start adding more properties like that.

The real issue is that I do not understand why there is no conflict between __slots__ and the property, but there is one between __setattr__ and the property. What is happening with the lookup order, and is there another, more elegant workaround to this problem?

The real issue is that I do not understand why there is no conflict between __slots__ and the property, but there is one between __setattr__ and the property.

Both __slots__ and property implement attribute lookup by providing a descriptor for the corresponding attribute(s). The presence of __slots__ prevents arbitrary instance attribute creation not by doing anything to __setattr__, but by preventing creation of a __dict__. property and other descriptors don't rely on an instance __dict__, so they're unaffected.

However, __setattr__ handles all attribute assignment, meaning that descriptor invocation is __setattr__'s responsibility. If your __setattr__ doesn't handle descriptors, descriptors won't be handled, and property setters won't be invoked.
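A minimal sketch (class and names are illustrative, not the original code) makes this concrete: the default object.__setattr__ is what invokes a data descriptor's __set__, so a custom __setattr__ that never delegates also silences every property setter:

```python
class Demo:
    __slots__ = ['_x']

    def __setattr__(self, name, value):
        # Unconditionally blocks ALL assignment, including property setters.
        raise AttributeError('Immutable!')

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        object.__setattr__(self, '_x', value)

d = Demo()
try:
    d.x = 3                    # routed through Demo.__setattr__, which raises
except AttributeError as e:
    print(e)                   # Immutable!

object.__setattr__(d, 'x', 3)  # default machinery finds the property and calls its setter
print(d.x)                     # 3
```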

is there another, more elegant workaround to this problem?

You could explicitly allow only properties:

class A:
    def __setattr__(self, name, value):
        if not isinstance(getattr(type(self), name, None), property):
            raise AttributeError("Can't assign to attribute " + name)
        super().__setattr__(name, value)
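Fleshed out against the class from the question (a sketch; the property is written in decorator form), this looks like:

```python
class A:
    __slots__ = ['a', 'b', '_x']

    def __init__(self, a, b):
        object.__setattr__(self, 'a', a)
        object.__setattr__(self, 'b', b)

    def __setattr__(self, name, value):
        # Allow assignment only when the attribute is a property on the class.
        if not isinstance(getattr(type(self), name, None), property):
            raise AttributeError("Can't assign to attribute " + name)
        super().__setattr__(name, value)

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        object.__setattr__(self, '_x', value)

inst = A(1, 2)
inst.x = 3          # goes through the property setter
print(inst.x)       # 3
try:
    inst.a = 5      # 'a' is a slot, not a property
except AttributeError as e:
    print(e)        # Can't assign to attribute a
```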

or you could explicitly reject assignment to slots, and delegate other attribute assignment to super().__setattr__:

class A:
    def __setattr__(self, name, value):
        if isinstance(getattr(type(self), name, None), _SlotDescriptorType):
            raise AttributeError("Can't assign to slot " + name)
        super().__setattr__(name, value)

# Seems to be the same as types.MemberDescriptorType,
# but the docs don't guarantee it.
_SlotDescriptorType = type(A.a)
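A fuller sketch of this variant, with the question's class filled in. Note that the property setter writes via object.__setattr__, since _x is itself a slot and would otherwise be rejected by the check:

```python
class A:
    __slots__ = ['a', 'b', '_x']

    def __init__(self, a, b):
        object.__setattr__(self, 'a', a)
        object.__setattr__(self, 'b', b)

    def __setattr__(self, name, value):
        # Reject assignment to slot descriptors; delegate everything else.
        if isinstance(getattr(type(self), name, None), _SlotDescriptorType):
            raise AttributeError("Can't assign to slot " + name)
        super().__setattr__(name, value)

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        object.__setattr__(self, '_x', value)

_SlotDescriptorType = type(A.a)

inst = A(1, 2)
inst.x = 3          # property, not a slot descriptor, so it is allowed
print(inst.x)       # 3
try:
    inst.b = 7      # 'b' is a slot
except AttributeError as e:
    print(e)        # Can't assign to slot b
```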

You are using the same name for the getter, the setter, and the stored attribute. When setting up a property, you must store the underlying data under a different name; the convention is to prefix it with an underscore.

class test(object):
    def __init__(self, value):
        self._x = value

    @property
    def x(self):
        return self._x
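Note that with only a getter defined, the property is read-only; a quick sketch (the class is repeated here so the snippet stands alone):

```python
class test(object):
    def __init__(self, value):
        self._x = value

    @property          # without a matching @x.setter the property is read-only
    def x(self):
        return self._x

t = test(5)
print(t.x)             # 5, served by the getter from t._x
try:
    t.x = 7            # no setter defined, so assignment fails
except AttributeError:
    print('read-only')
```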

This is more of a general Python question.

Python is a very dynamic language. You can define classes ahead of time, or Python allows you to create classes completely dynamically at run-time. Consider the following example of a simple vector class. You can define the class ahead of time like:

class myvector(object):
    x = 0
    y = 0

Or you can create the class dynamically by doing:

fields = {'x':0, 'y':0}
myvector = type('myvector', (object,), fields)

The main difference between the two methods is that with the first you know the class attributes ahead of time, whereas with the second you can build the fields dictionary programmatically, and can therefore create classes completely dynamically.

So when you know the attributes of the class ahead of time, you can set attributes using the usual object notation:

instance.attribute = value

Keep in mind that this is equivalent to:

instance.__setattr__("attribute", value)

However, there are scenarios where you don't know ahead of time which attributes you will need to manipulate. This is where the __setattr__ hook comes in. Calling __setattr__ directly is not recommended practice, though; instead, the recommendation is to use Python's built-in function setattr, which internally calls the __setattr__ method:

setattr(instance, attribute, value)

Using this approach you can set attributes you don't know ahead of time, or even loop over a dict and set values from it:

values = {
    'title': 'this is edit title',
}
for k, v in values.items():
    setattr(ticket, k, v)
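Runnable with a stand-in Ticket class (the name is illustrative; the ticket object above is assumed to accept arbitrary attributes):

```python
class Ticket:
    """Stand-in for whatever object is being updated."""
    pass

ticket = Ticket()
values = {
    'title': 'this is edit title',
    'priority': 2,
}
# setattr lets the attribute names come from data rather than source code.
for k, v in values.items():
    setattr(ticket, k, v)

print(ticket.title)     # this is edit title
print(ticket.priority)  # 2
```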

Not sure why the regular notation did not work for you; it probably has nothing to do with the method you used to set attributes.

There are two concepts involved here: enumerations, and attribute-style access to object members that can be initialised inline. For the latter you'll need some kind of custom class, but since you want something very straightforward, a namedtuple is sufficient. So, combining namedtuple and Enum, this could be a solution:

from enum import Enum
from collections import namedtuple

Color = namedtuple('Color', ['value', 'displaystring'])

class Colors(Enum):

    @property
    def displaystring(self):
        return self.value.displaystring

    yellow = Color(1, 'yellow')
    green = Color(2, 'green')

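Members can then be read attribute-style; a sketch (the classes are repeated so the snippet stands alone):

```python
from enum import Enum
from collections import namedtuple

Color = namedtuple('Color', ['value', 'displaystring'])

class Colors(Enum):

    @property
    def displaystring(self):
        # self.value is the Color namedtuple assigned to this member.
        return self.value.displaystring

    yellow = Color(1, 'yellow')
    green = Color(2, 'green')

print(Colors.yellow.displaystring)   # yellow
print(Colors.green.value.value)      # 2, the numeric field of the namedtuple
```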

So I figured out a way to actually do this, but it involves (1) hacking the pybind11 code itself and (2) introducing some size inefficiencies into the bound Python types. From my point of view the size issues are fairly immaterial: yes, it would be better to have everything perfectly sized, but I'll take some extra bytes of memory for ease of use. Given this inefficiency, though, I'm not submitting this as a PR to the pybind11 project. While I think the trade-off is worth it, I doubt that making this the default for most people would be desired. It would be possible, I guess, to hide this functionality behind a #define in C++, but that seems like it would be super messy long-term. There is probably a better long-term answer involving a degree of template meta-programming (parameterizing class_ on the Python container type) that I'm just not up to.

I'm providing my changes here as diffs against the master branch in Git at the time of writing (hash a54eab92d265337996b8e4b4149d9176c2d428a6).

The basic approach was:

  1. Modify pybind11 to allow the specification of an exception base class for a class_ instance.
  2. Modify pybind11's internal container to have the extra fields needed for a Python exception type.
  3. Write a small amount of custom binding code to handle setting the error correctly in Python.

For the first part, I added a new attribute to type_record to specify whether a class is an exception, and added the associated process_attribute call for parsing it.

diff --git a/src/pybind11/include/pybind11/attr.h b/src/pybind11/include/pybind11/attr.h
index 58390239..b5535558 100644
--- a/src/pybind11/include/pybind11/attr.h
+++ b/src/pybind11/include/pybind11/attr.h
@@ -73,6 +73,9 @@ struct module_local { const bool value; constexpr module_local(bool v = true) :
 /// Annotation to mark enums as an arithmetic type
 struct arithmetic { };
 
+// Annotation that marks a class as needing an exception base type.
+struct is_except {};
+
 /** \rst
     A call policy which places one or more guard variables (``Ts...``) around the function call.

@@ -211,7 +214,8 @@ struct function_record {
 struct type_record {
     PYBIND11_NOINLINE type_record()
         : multiple_inheritance(false), dynamic_attr(false), buffer_protocol(false),
-          default_holder(true), module_local(false), is_final(false) { }
+          default_holder(true), module_local(false), is_final(false),
+          is_except(false) { }
 
     /// Handle to the parent scope
     handle scope;
@@ -267,6 +271,9 @@ struct type_record {
     /// Is the class inheritable from Python classes?
     bool is_final : 1;
 
+    // Does the class need an exception base type?
+    bool is_except : 1;
+
     PYBIND11_NOINLINE void add_base(const std::type_info &base, void *(*caster)(void *)) {
         auto base_info = detail::get_type_info(base, false);
         if (!base_info) {
@@ -451,6 +458,11 @@ struct process_attribute<is_final> : process_attribute_default<is_final> {
     static void init(const is_final &, type_record *r) { r->is_final = true; }
 };
 
+template <>
+struct process_attribute<is_except> : process_attribute_default<is_except> {
+    static void init(const is_except &, type_record *r) { r->is_except = true; }
+};

I modified the internals.h file to add a separate base class for exception types. I also added an extra bool argument to make_object_base_type.

diff --git a/src/pybind11/include/pybind11/detail/internals.h b/src/pybind11/include/pybind11/detail/internals.h
index 6224dfb2..d84df4f5 100644
--- a/src/pybind11/include/pybind11/detail/internals.h
+++ b/src/pybind11/include/pybind11/detail/internals.h
@@ -16,7 +16,7 @@ NAMESPACE_BEGIN(detail)
 // Forward declarations
 inline PyTypeObject *make_static_property_type();
 inline PyTypeObject *make_default_metaclass();
-inline PyObject *make_object_base_type(PyTypeObject *metaclass);
+inline PyObject *make_object_base_type(PyTypeObject *metaclass, bool is_except);
 
 // The old Python Thread Local Storage (TLS) API is deprecated in Python 3.7 in favor of the new
 // Thread Specific Storage (TSS) API.
@@ -107,6 +107,7 @@ struct internals {
     PyTypeObject *static_property_type;
     PyTypeObject *default_metaclass;
     PyObject *instance_base;
+    PyObject *exception_base;
 #if defined(WITH_THREAD)
     PyInterpreterState *istate = nullptr;
@@ -292,7 +293,8 @@ PYBIND11_NOINLINE inline internals &get_internals() {
         internals_ptr->static_property_type = make_static_property_type();
         internals_ptr->default_metaclass = make_default_metaclass();
-        internals_ptr->instance_base = make_object_base_type(internals_ptr->default_metaclass);
+        internals_ptr->instance_base = make_object_base_type(internals_ptr->default_metaclass, false);
+        internals_ptr->exception_base = make_object_base_type(internals_ptr->default_metaclass, true);

And then in class.h I added the necessary code to generate the exception base type. The first caveat is here. Since PyExc_Exception is a garbage-collected type, I had to scope the assert call that checked the GC flag on the type. I have not currently seen any bad behavior from this change, but this is definitely voiding the warranty right here. I would highly, highly recommend always passing the py::dynamic_attr() flag to any classes you are using py::is_except on, since that turns on all the necessary bells and whistles to handle GC correctly (I think). A better solution might be to turn all those things on in make_object_base_type without having to invoke py::dynamic_attr.

diff --git a/src/pybind11/include/pybind11/detail/class.h b/src/pybind11/include/pybind11/detail/class.h
index a05edeb4..bbb9e772 100644
--- a/src/pybind11/include/pybind11/detail/class.h
+++ b/src/pybind11/include/pybind11/detail/class.h
@@ -368,7 +368,7 @@ extern "C" inline void pybind11_object_dealloc(PyObject *self) {
 /** Create the type which can be used as a common base for all classes.  This is
     needed in order to satisfy Python's requirements for multiple inheritance.
     Return value: New reference. */
-inline PyObject *make_object_base_type(PyTypeObject *metaclass) {
+inline PyObject *make_object_base_type(PyTypeObject *metaclass, bool is_except=false) {
     constexpr auto *name = "pybind11_object";
     auto name_obj = reinterpret_steal<object>(PYBIND11_FROM_STRING(name));
 
@@ -387,7 +387,12 @@ inline PyObject *make_object_base_type(PyTypeObject *metaclass) {
 
     auto type = &heap_type->ht_type;
     type->tp_name = name;
-    type->tp_base = type_incref(&PyBaseObject_Type);
+    if (is_except) {
+      type->tp_base = type_incref(reinterpret_cast<PyTypeObject*>(PyExc_Exception));
+    }
+    else {
+      type->tp_base = type_incref(&PyBaseObject_Type);
+    }
     type->tp_basicsize = static_cast<ssize_t>(sizeof(instance));
     type->tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HEAPTYPE;
 
@@ -404,7 +409,9 @@ inline PyObject *make_object_base_type(PyTypeObject *metaclass) {
     setattr((PyObject *) type, "__module__", str("pybind11_builtins"));
     PYBIND11_SET_OLDPY_QUALNAME(type, name_obj);
 
-    assert(!PyType_HasFeature(type, Py_TPFLAGS_HAVE_GC));
+    if (!is_except) {
+      assert(!PyType_HasFeature(type, Py_TPFLAGS_HAVE_GC));
+    }
     return (PyObject *) heap_type;
 
@@ -565,7 +572,8 @@ inline PyObject* make_new_python_type(const type_record &rec) {
 
     auto &internals = get_internals();
     auto bases = tuple(rec.bases);
-    auto base = (bases.size() == 0) ? internals.instance_base
+    auto base = (bases.size() == 0) ? (rec.is_except ? internals.exception_base
+                                                     : internals.instance_base)

And then the final change, which is the inefficiency part. In Python everything is a PyObject, but that is really only two fields (set up with the PyObject_HEAD macro), and the actual object struct may have a lot of extra fields. Having a very precise layout is important because Python sometimes uses offsetof to seek into these structs. From the Python 2.7 source code (Include/pyerrors.h) you can see the struct that is used for base exceptions:

typedef struct {
    PyObject_HEAD
    PyObject *dict;
    PyObject *args;
    PyObject *message;
} PyBaseExceptionObject;

Any pybind11 type that extends PyExc_Exception has to have an instance struct with the same initial layout. And in pybind11 currently, the instance struct just has PyObject_HEAD. That means if you don't change the instance struct, this will all compile, but when Python seeks into this object it will do so on the assumption that those extra fields exist, seek right off the end of viable memory, and you'll get all sorts of fun segfaults. So this change adds those extra fields to every class_ in pybind11. It does not seem to break normal classes to have these extra fields, and it definitely seems to make exceptions work correctly. If we broke the warranty before, we just tore it up and lit it on fire.

diff --git a/src/pybind11/include/pybind11/detail/common.h b/src/pybind11/include/pybind11/detail/common.h
index dd626793..b32e0c70 100644
--- a/src/pybind11/include/pybind11/detail/common.h
+++ b/src/pybind11/include/pybind11/detail/common.h
@@ -392,6 +392,10 @@ struct nonsimple_values_and_holders {
 /// The 'instance' type which needs to be standard layout (need to be able to use 'offsetof')
 struct instance {
+    // Necessary to support exceptions.
+    PyObject *dict;
+    PyObject *args;
+    PyObject *message;
     /// Storage for pointers and holder; see simple_layout, below, for a description

However, once these changes are all made, here is what you can do. Bind in the class:

 auto pybaddata = py::class_< ::example::data::baddata>(module, "baddata", py::is_except(), py::dynamic_attr())
    .def(py::init< std::string, std::string >())
    .def("__str__", &::example::data::baddata::tostring)
    .def("getstack", &::example::data::baddata::getstack)
    .def_property("message", &::example::data::baddata::getmsg, &::example::data::baddata::setmsg)
    .def("getmsg", &::example::data::baddata::getmsg);

and take a function in C++ that throws the exception:

void raiseme() {
  throw ::example::data::baddata("this is an error", "");
}

and bind that in too

module.def("raiseme", &raiseme, "a function throws");

Add an exception translator to put the entire Python type into the exception:

    py::register_exception_translator([](std::exception_ptr p) {
      try {
        if (p) {
          std::rethrow_exception(p);
        }
      } catch (const ::example::data::baddata &e) {
        auto err = py::cast(e);
        auto errtype = err.get_type().ptr();
        PyErr_SetObject(errtype, err.ptr());
      }
    });

And then you get all the things you could want!

>>> import example
>>> example.raiseme()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
example.baddata: baddata(msg=this is an error, stack=)

You can, of course, also instantiate and raise the exception from Python as well:

>>> import example
>>> raise example.baddata("this is my error", "no stack")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
example.baddata: baddata(msg=this is my error, stack=no stack)

Tags: Python Python 3.X Properties