
"No module named 'tensorrt.tensorrt'" for generated wheel file #2288


Closed

koiking213 opened this issue Sep 1, 2022 · 9 comments
Labels: triaged (Issue has been triaged by maintainers)

Comments

@koiking213

Description

I generated a Python 3.8 wheel file following this instruction and installed it with pip3 install tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl, but importing tensorrt fails:

root@7c6e57b97e8f:/# python
Python 3.8.0 (default, Dec  9 2021, 17:53:27)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorrt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/tensorrt/__init__.py", line 36, in <module>
    from .tensorrt import *
ModuleNotFoundError: No module named 'tensorrt.tensorrt'

The wheel file seems to be invalid.
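
A quick sanity check is to list the wheel's contents; a wheel is just a zip archive, and a valid build should include a compiled binding such as tensorrt/tensorrt.so (that exact filename is an assumption, inferred from the failing from .tensorrt import *):

# List the files packaged in the wheel; if no compiled .so for the
# bindings shows up, the build never produced one.
python3 -m zipfile -l tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl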

Environment

TensorRT Version:

  • preinstalled on Jetson: 8.0.1
  • this repository: release/8.0

NVIDIA GPU: Jetson AGX Xavier
NVIDIA Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.2.1.32-1
Operating System: JetPack 4.6 (rev3)
Python Version (if applicable): 3.8.0
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):

  • wheel build: baremetal
  • wheel installation: container (nvcr.io/nvidia/l4t-base:r32.6.1 with Python 3.8)

Relevant Files

Steps To Reproduce

  • prepare external dependencies ($EXT_PATH)
mkdir -p $EXT_PATH && cd $EXT_PATH
git clone https://github.com/pybind/pybind11.git
wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz
tar xvf Python-3.8.0.tgz
mkdir -p python3.8/include
cp -r Python-3.8.0/Include/* python3.8/include/
apt-get download libpython3.8-dev
ar x libpython3.8-dev_3.8.0-3ubuntu1~18.04.2_arm64.deb
tar xvf data.tar.xz
cp usr/include/aarch64-linux-gnu/python3.8/pyconfig.h python3.8/include/
  • build wheel
git clone -b release/8.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT/python/
PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=8 TARGET=aarch64 ./build.sh
  • installation
    inside docker container based on nvcr.io/nvidia/l4t-base:r32.6.1:
    pip3 install tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl
@zerollzeng
Collaborator

zerollzeng commented Sep 1, 2022

Did you build the whl file successfully? Try pip3 install ./tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl

@zerollzeng zerollzeng self-assigned this Sep 1, 2022
@zerollzeng zerollzeng added the triaged Issue has been triaged by maintainers label Sep 1, 2022
@koiking213
Author

pip3 install finishes without error:

root@8e1587cb151c:/# pip3 install ./tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl
Processing /tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl
Installing collected packages: tensorrt
Successfully installed tensorrt-8.0.1.6
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

I think the whl file is not valid, as the file size is too small.

# ls -lh tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl
-rw-rw-r-- 1 1000 1000 3.8K Sep  1 02:59 tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl

Here is the build log of the whl file (note the "make: *** No targets specified and no makefile found.  Stop." near the end, which suggests the bindings were never actually compiled):

$ TRT_OSSPATH=/mnt/extra/tensorrt-build/TensorRT PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=8 TARGET=aarch64 ./build.sh
/mnt/extra/tensorrt-build/TensorRT/python/build /mnt/extra/tensorrt-build/TensorRT/python
ForwardDeclarations.h utils.h
Building for TensorRT version: 8.0.1, library version: 8
-- Targeting TRT Platform: x86_64
-- CUDA version set to 11.3.1
-- cuDNN version set to 8.2
-- Protobuf version set to 3.0.0
-- Setting up another Protobuf build for cross compilation targeting aarch64-Linux
-- Using libprotobuf /mnt/extra/tensorrt-build/TensorRT/python/third_party.protobuf_aarch64/lib/libprotobuf.a
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found /usr/lib/aarch64-linux-gnu/libnvinfer.so
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found /usr/lib/aarch64-linux-gnu/libnvparsers.so
-- ==========================================================================================
CMake Warning at CMakeLists.txt:159 (message):
  Detected CUDA version is < 11.0.  SM80 not supported.


-- GPU_ARCHS is not defined. Generating CUDA code for default SMs: 35;53;61;70;75
-- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
-- /mnt/extra/tensorrt-build/TensorRT/python/parsers/caffe
Generated: /mnt/extra/tensorrt-build/TensorRT/python/parsers/onnx/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto
Generated: /mnt/extra/tensorrt-build/TensorRT/python/parsers/onnx/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto
Generated: /mnt/extra/tensorrt-build/TensorRT/python/parsers/onnx/third_party/onnx/onnx/onnx-data_onnx2trt_onnx.proto
--
-- ******** Summary ********
--   CMake version         : 3.22.3
--   CMake command         : /opt/cmake-3.22.3-linux-aarch64/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/g++
--   C++ compiler version  : 7.5.0
--   CXX flags             : -Wno-deprecated-declarations  -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : _PROTOBUF_INSTALL_DIR=/mnt/extra/tensorrt-build/TensorRT/python/third_party.protobuf;ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     :
--   CMAKE_INSTALL_PREFIX  : /mnt/extra/tensorrt-build/TensorRT/python/..
--   CMAKE_MODULE_PATH     :
--
--   ONNX version          : 1.8.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--   ONNXIFI_ENABLE_EXT    : OFF
--
--   Protobuf compiler     :
--   Protobuf includes     :
--   Protobuf libraries    :
--   BUILD_ONNX_PYTHON     : OFF
-- Found TensorRT headers at /mnt/extra/tensorrt-build/TensorRT/include
-- Find TensorRT libs at /usr/lib/aarch64-linux-gnu/libnvinfer.so;/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
-- Adding new sample: sample_algorithm_selector
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_char_rnn
--     - Parsers Used: uff;caffe;onnx
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_dynamic_reshape
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_fasterRCNN
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: samples
-- Adding new sample: sample_googlenet
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_int8
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: samples
-- Adding new sample: sample_int8_api
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_mlp
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_mnist
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_mnist_api
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_nmt
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_onnx_mnist
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_reformat_free_io
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_ssd
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: samples
-- Adding new sample: sample_uff_fasterRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: samples
-- Adding new sample: sample_uff_maskRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: samples
-- Adding new sample: sample_uff_mnist
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_uff_plugin_v2_ext
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Adding new sample: sample_uff_ssd
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: samples
-- Adding new sample: sample_onnx_mnist_coord_conv_ac
--     - Parsers Used: onnx
--     - InferPlugin Used: ON
--     - Licensing: samples
-- Adding new sample: trtexec
--     - Parsers Used: caffe;uff;onnx
--     - InferPlugin Used: OFF
--     - Licensing: samples
-- Configuring done
CMake Warning (dev) in plugin/CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "nvinfer_plugin".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in plugin/CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "nvinfer_plugin_static".
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Generating done
-- Build files have been written to: /mnt/extra/tensorrt-build/TensorRT/python
make: *** No targets specified and no makefile found.  Stop.
Generating python 3.8 bindings for TensorRT 8.0.1.6
/mnt/extra/tensorrt-build/TensorRT/python/packaging /mnt/extra/tensorrt-build/TensorRT/python/build /mnt/extra/tensorrt-build/TensorRT/python
/mnt/extra/tensorrt-build/TensorRT/python/build /mnt/extra/tensorrt-build/TensorRT/python
/mnt/extra/tensorrt-build/TensorRT/python

@zerollzeng
Collaborator

What does pip3 list show? Yes, the file size seems wrong. @kevinch-nv any idea?

@koiking213
Author

tensorrt does show up in pip3 list:

# pip3 list | grep tensorrt
tensorrt   8.0.1.6
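
pip3 list only confirms that the package metadata was installed; listing the installed files shows whether the compiled binding actually made it in:

# Show every file installed by the tensorrt package; a healthy install
# should include a compiled extension next to __init__.py.
pip3 show -f tensorrt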

@jfehre

jfehre commented Sep 14, 2022

I had the same error with the same setup. Unfortunately, I couldn't find out exactly why it isn't working.
However, I fixed it by using the pybind11 v2.9 branch instead of the master or stable branch of pybind11.

Edit: pybind/pybind11#4117 is the corresponding bug in pybind11.
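
For anyone following the reproduction steps above, that amounts to pinning the pybind11 checkout before re-running build.sh (a sketch assuming the $EXT_PATH layout from the original steps):

# Switch the pybind11 clone to the v2.9 release branch, then rebuild.
cd $EXT_PATH/pybind11
git checkout v2.9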

@koiking213
Author

It also worked for me, thanks!

@0xAl3xH

0xAl3xH commented May 17, 2023

I'm getting similar results here. I think it's weird that both the OP and I are getting this error:
make: *** No targets specified and no makefile found. Stop.
After digging into the build script, it seems like make -j12 is not being executed in the right directory?
From build.sh:

mkdir -p ${WHEEL_OUTPUT_DIR}
pushd ${WHEEL_OUTPUT_DIR}

# Generate tensorrt.so
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DTARGET=${TARGET} \
         -DPYTHON_MAJOR_VERSION=${PYTHON_MAJOR_VERSION} \
         -DPYTHON_MINOR_VERSION=${PYTHON_MINOR_VERSION} \
         -DEXT_PATH=${EXT_PATH} \
         -DCUDA_INCLUDE_DIRS=${CUDA_ROOT}/include \
         -DTENSORRT_ROOT=${ROOT_PATH} \
         -DTENSORRT_MODULE=${TENSORRT_MODULE} \
         -DTENSORRT_LIBPATH=${TRT_LIBPATH}
make -j12

cmake .. generates a makefile in the directory above, so when make -j12 is called it doesn't actually see any makefiles. I confirmed that manually invoking cd .. and make triggers a build.

How are you guys able to build the wheel?

Edit: It appears that at some point my CMakeCache.txt incorrectly pointed the source directory at the repository root, so the build was always using the CMakeLists.txt at the root. Deleting CMakeCache.txt and re-running build.sh fixed the issue.
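
In other words, something like this (a sketch assuming the default build/ output directory that build.sh creates):

# Remove the stale CMake cache so cmake re-detects the correct source
# directory, then re-run the wheel build from a clean state.
rm -f TensorRT/python/build/CMakeCache.txt
cd TensorRT/python
PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=8 TARGET=aarch64 ./build.sh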

@sinisantino

Hello, I saw this was closed as completed. I am attempting a similar build, but on a Jetson Nano, and I am getting the same error message. I followed the same steps you did, except I am installing locally instead of using the container. This is the step where I am confused:

installation
inside docker container based on nvcr.io/nvidia/l4t-base:r32.6.1:
pip3 install tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl

Is there a guide on how to install this inside of the docker container? Thank you.

@GrumpyChubbyCat

GrumpyChubbyCat commented Apr 9, 2025

Hello, I saw this was closed as completed. I am attempting a similar build, but on a Jetson Nano, and I am getting the same error message. I followed the same steps you did, except I am installing locally instead of using the container. This is the step where I am confused:

installation inside docker container based on nvcr.io/nvidia/l4t-base:r32.6.1: pip3 install tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl

Is there a guide on how to install this inside of the docker container? Thank you.

I encountered a lot of errors and problems when compiling the bindings. In the end, the following steps helped me:

  1. You really need to switch pybind11 to v2.9.
  2. After cloning, you need to initialize the submodules, otherwise the necessary headers will be missing:
git submodule update --init --recursive
  3. I used CMake 4.0, so in CMakeLists.txt I changed this line:
cmake_minimum_required (VERSION 4.0 FATAL_ERROR)
  4. When compiling, I specified the full paths to the Python and pybind11 headers:
sudo PY_INCLUDE=/usr/include/python3.8 PYBIND11_DIR=/home/user/external/pybind11/include TRT_OSSPATH=/home/user/external/TensorRT/ TARGET_ARCHITECTURE=aarch64 bash build.sh
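
As for the quoted installation step, it boils down to mounting the directory that holds the wheel into an l4t-base container and installing it there. A minimal sketch (assuming the wheel sits in the current directory and the image already has Python 3.8 and pip3 set up):

# Start the L4T base container with the wheel directory mounted.
docker run -it --runtime nvidia -v "$(pwd)":/wheels nvcr.io/nvidia/l4t-base:r32.6.1 bash
# Then, inside the container:
pip3 install /wheels/tensorrt-8.0.1.6-cp38-none-linux_aarch64.whl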
