The `PyTuple_Check(obj)` function cannot fail and the caller is not expected to check for errors. Expected usage:

```c
if (PyTuple_CheckExact(v)) {
    ...
}
```
Here, "cannot fail" does not mean that invalid input is handled: if you pass a NULL pointer, the code does crash.
I'm fine with strongly suggesting to check for errors in the general case. I just ask for exceptions in specific cases.
Issue #5 requires all new functions to force the caller to always check for errors. IMO we need exceptions to that rule.
For example, PR python/cpython#112096 proposes adding a `Py_hash_t PyHash_Pointer(const void *ptr)` function which cannot fail. It would be annoying to have to check for a hypothetical error when the current implementation cannot fail, and it's unlikely that the function will change to report errors.
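A minimal usage sketch, assuming the signature proposed in that PR (the final name or signature may differ):

```c
#include <Python.h>

/* Assumes the Py_hash_t PyHash_Pointer(const void *ptr) signature
 * proposed in python/cpython#112096; the final API may differ. */
static Py_hash_t
hash_of_pointer(PyObject *obj)
{
    /* No error check: hashing a pointer cannot fail. */
    return PyHash_Pointer((const void *)obj);
}
```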
What matters here is providing a convenient API, more than correctness. Obviously, if we enforced checking the result, it would be easier to change the API later. IMO it's just not worth it here.
For example, a function is unlikely to fail right now and in the future if (see the sketch below):

- It does not allocate memory.
- Its argument types are primitive C types such as `uint64_t` or `double`.
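For illustration, a hypothetical helper that meets both criteria; the name and purpose are made up, the point is that there is simply no failure mode to report:

```c
#include <stdint.h>

/* Hypothetical helper: no memory allocation and only primitive C
 * argument types, so there is nothing that can fail at runtime. */
static inline uint64_t
rotate_left_u64(uint64_t x, unsigned int n)
{
    n %= 64;
    return (x << n) | (x >> ((64 - n) % 64));
}
```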
Some functions are also designed so that they cannot fail. For example, `PyUnicode_EqualToUTF8()` cannot fail. But if the first argument is not a Unicode object, the function does crash. It's a design choice. Example of usage:
```c
if (PyUnicode_EqualToUTF8(key, kwlist[i])) {
    match = 1;
    break;
}
```
Having to check for errors on such a basic operation as "compare two strings" sounds really annoying. The C `strcmp()` function cannot fail either, it's the same situation. Obviously, if you pass a NULL pointer or strings which are not NUL-terminated, `strcmp()` does crash. That's the trade-off for a convenient API.
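To make the analogy concrete, the same no-error-check pattern with `strcmp()` (the helper name is illustrative):

```c
#include <string.h>

/* Callers never check strcmp() for an error; they only use its result.
 * Passing NULL or a non-NUL-terminated buffer is simply a bug. */
static int
is_named(const char *name, const char *expected)
{
    return strcmp(name, expected) == 0;
}
```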
By the way, I think that it's fine in some cases to log exceptions with `sys.unraisablehook`. For instance, an error in a callback that a function doesn't call directly, where the error cannot be reported to the function since an API abstracts the callback. For example, a weakref can have a callback which fails. When a Python object is finalized, errors in weakref callbacks cannot be reported to the finalizer: the finalizer doesn't know about these callbacks nor how to handle their errors.
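A hedged sketch of that pattern, using the long-standing `PyErr_WriteUnraisable()` (which feeds `sys.unraisablehook`); the helper name is made up:

```c
#include <Python.h>

/* A callback run during finalization cannot propagate an exception to
 * its caller, so the error is logged via sys.unraisablehook instead. */
static void
run_finalizer_callback(PyObject *callback, PyObject *arg)
{
    PyObject *res = PyObject_CallOneArg(callback, arg);
    if (res == NULL) {
        PyErr_WriteUnraisable(callback);  /* log, don't propagate */
        return;
    }
    Py_DECREF(res);
}
```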
Examples of functions which cannot fail (a short usage sketch follows the list):

- `int _Py_popcount32(uint32_t x)`
- `Py_hash_t PyHash_Pointer(const void *ptr)`
- `void Py_SET_REFCNT(PyObject *ob, Py_ssize_t refcnt)`
- `void PyErr_SetObject(PyObject *, PyObject *)`
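A minimal usage sketch for the two public `void` functions above; there is simply no return value to check:

```c
#include <Python.h>

/* Neither call reports an error: both functions return void. */
static void
demo_no_error_check(PyObject *obj, PyObject *value)
{
    Py_SET_REFCNT(obj, Py_REFCNT(obj));        /* plain field write */
    PyErr_SetObject(PyExc_ValueError, value);  /* just sets the exception */
}
```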
Counter-examples of functions which cannot report errors even though they can fail (a sketch of the dict replacement follows the list):

- `void Py_SetRecursionLimit(int)`: `sys.setrecursionlimit()` adds an additional check which cannot be implemented in the C function.
- `void PyFrame_FastToLocals(PyFrameObject *)`: `int PyFrame_FastToLocalsWithError(PyFrameObject *f)` had to be added later.
- `PyObject* PyDict_GetItem(PyObject *op, PyObject *key)`: silently ignores errors :-( `PyDict_GetItemRef()` and other functions were added to report errors to the caller.
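A hedged sketch of the newer pattern with `PyDict_GetItemRef()` (Python 3.13+), which does let the caller see errors:

```c
#include <Python.h>

/* PyDict_GetItem() swallows errors; PyDict_GetItemRef() reports them:
 * it returns 1 if found, 0 if missing, -1 on error (exception set). */
static int
lookup(PyObject *dict, PyObject *key, PyObject **value)
{
    int rc = PyDict_GetItemRef(dict, key, value);
    if (rc < 0) {
        return -1;  /* e.g. unhashable key: the caller sees the error */
    }
    /* rc == 1: *value holds a strong reference; rc == 0: *value is NULL */
    return rc;
}
```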
Corner cases:

- Destructors such as `PyTypeObject.tp_dealloc` functions: if they raise an exception, they must log it with `PyErr_FormatUnraisable()` which calls `sys.unraisablehook`.
- `void PyOS_AfterFork_Child(void)`: it's hard to report errors around a `fork()` call.
- `void PyObject_ClearWeakRefs(PyObject *)`
- `void Py_ReprLeave(PyObject *)`, but `int Py_ReprEnter(PyObject *)` can fail (see the sketch below).
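A hedged sketch of the `Py_ReprEnter()` / `Py_ReprLeave()` pair in a `tp_repr` implementation: only the enter side needs an error check.

```c
#include <Python.h>

/* Py_ReprEnter() can fail (it may allocate); Py_ReprLeave() cannot. */
static PyObject *
container_repr(PyObject *self)
{
    int rc = Py_ReprEnter(self);
    if (rc != 0) {
        /* rc > 0: recursive repr, return a placeholder;
           rc < 0: error, exception already set. */
        return rc > 0 ? PyUnicode_FromString("...") : NULL;
    }
    PyObject *result = PyUnicode_FromString("<container>");
    Py_ReprLeave(self);  /* no error check: it cannot fail */
    return result;
}
```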