Using Python ctypes Array and c_int as an argument in C++ function binding with pybind11
23:54 27 Apr 2026

I have data from a device that is captured in Python directly as a ctypes.Array of ctypes.c_uint16. I'm using PyTorch to create a new Tensor after some post-processing that must happen after data capture. I'm trying to implement that post-processing in C/C++ to speed up a number of iterative tasks that are slow in Python, and then create bindings with pybind11.

The function signature in C++ looks like this:

torch::Tensor foo(uint16_t* a, int b, int c, int d, uint64_t e, uint64_t* f);

with a pybind module call like:

// Create python bindings
namespace py = pybind11;

PYBIND11_MODULE(foo_lib, m){
    m.def("foo", &foo,
        py::arg("a"),
        py::arg("b"),
        py::arg("c"),
        py::arg("d"),
        py::arg("e"),
        py::arg("f")
    );
}

Here a is a C-style array of uint16_t; b, c, and d are integers carrying info such as the array length; and f is a C-style array of uint64_t containing some metadata needed during processing. I already have C functions (implemented, tested, and working) that handle some cleanup of the data, so I would prefer to keep these data types in my C++ function when calling them.
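For reference, the Python-side arrays can be mocked up like this (the sizes here are invented; the real arrays come from the capture API):

```python
from ctypes import c_uint16, c_uint64

# Hypothetical sizes: the real arrays come from the device capture API.
a = (c_uint16 * 1000)(*range(1000))  # input samples
f = (c_uint64 * 10)(*range(10))      # metadata words

# ctypes arrays know their own length and element type:
assert len(a) == 1000
assert a[5] == 5
assert f._type_ is c_uint64
```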

I built a binding for this function foo with pybind11 in a module called foo_lib, and I can import it into Python, but when passing in the following types:

# test.py #

import torch
import foo_lib
from ctypes import Array, c_int, c_uint16, c_uint64

a: Array[c_uint16] = ... # The array of input data
b: c_int = c_int(0)
c: c_int = c_int(13)
d: c_int = c_int(16)
e: c_uint64 = c_uint64(1000)
f: Array[c_uint64] = ... # An array of some required metadata

# Process the output
output: torch.Tensor = foo_lib.foo(a, b, c, d, e, f)

I get this output:

Traceback (most recent call last):
  File "~/test.py", line 14, in <module>
    output: torch.Tensor = foo_lib.foo(a, b, c, d, e, f)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: foo(): incompatible function arguments. The following argument types are supported:
    1. (a: int, b: int, c: int, d: int, e: int, f: int) -> torch.Tensor

Invoked with: <__main__.c_ushort_Array_1000 object at 0x7ad66ff620d0>, c_int(0), c_int(13), c_int(16), c_ulong(1000), <__main__.c_ulong_Array_10 object at 0x7ad66ff62250>

It looks like pybind11 mapped every argument in the signature, including the pointer arguments, to a plain Python int, and it then refuses the ctypes objects.
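The scalar half of the mismatch can be reproduced without pybind11 at all: a ctypes scalar is a wrapper object, not a Python int, so an integer-expecting caster rejects it, even though the underlying value is one attribute access away:

```python
from ctypes import c_int, c_uint64

b = c_int(13)
e = c_uint64(1000)

# The wrappers are not Python ints, so integer-typed parameters reject them:
assert not isinstance(b, int)

# The plain Python int lives in .value:
assert b.value == 13 and isinstance(b.value, int)
assert e.value == 1000
```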

Is there a way to get pybind11 to map the C/C++ types in the signature to their corresponding Python ctypes classes? Converting the data on the Python side into another format such as a numpy.ndarray would at least partially defeat the purpose of making these C++ bindings to speed up the data processing, so I'm wondering whether pybind11 can be configured to accept the types as they are. I would write the binding in pure C and load it with ctypes.cdll.LoadLibrary if I could, but I need access to the torch::Tensor object, so I am stuck with C++.
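For context, this is the pure-C route I would take if the return type were not a torch::Tensor (the library name and signature declaration are hypothetical, shown without actually loading a shared object):

```python
import ctypes
from ctypes import POINTER, c_int, c_uint16, c_uint64

# Hypothetical: how a plain-C build of foo would be declared and called.
# lib = ctypes.cdll.LoadLibrary("./libfoo.so")
# lib.foo.argtypes = [POINTER(c_uint16), c_int, c_int, c_int,
#                     c_uint64, POINTER(c_uint64)]
# lib.foo(a, b, c, d, e, f)  # ctypes arrays decay to pointers automatically

# A ctypes Array already behaves like the pointer a C callee expects;
# its base address is what the function would receive:
a = (c_uint16 * 1000)()
assert isinstance(ctypes.addressof(a), int)
```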

I'm using Python 3.12.3 on Ubuntu 24.04.4, with torch 2.7.1 and pybind11 2.11.2.

arrays ctypes torch pybind11 python-bindings