Bug Description
Running the example code from the README inside the Docker container does not work.
To Reproduce
Steps to reproduce the behavior:
- Build the image: docker build -f docker/Dockerfile -t torch_tensorrt:latest .
- Start a container: docker run --gpus all -it --rm -v /home/Desktop/tensorrt:/workspace/tensorrt torch_tensorrt:latest
- Run the example code from the README:
import torch
import torchvision
import torch_tensorrt
# Get a model
model = torchvision.models.alexnet(pretrained=True).eval().cuda()
# Create some example data
data = torch.randn((1, 3, 224, 224)).to("cuda")
# Trace the module with example data
traced_model = torch.jit.trace(model, [data])
# Compile module
compiled_trt_model = torch_tensorrt.compile(traced_model, {
"inputs": [torch_tensorrt.Input(data.shape)],
"enabled_precisions": {torch.float, torch.half}, # Run with FP16
})
results = compiled_trt_model(data.half())
Behavior
The example should compile and run, but instead it fails with:
Traceback (most recent call last):
File "tensorrt.py", line 15, in <module>
compiled_trt_model = torch_tensorrt.compile(traced_model, {
File "/opt/conda/lib/python3.8/site-packages/torch_tensorrt/_compile.py", line 88, in compile
target_ir = _module_ir(module, ir)
File "/opt/conda/lib/python3.8/site-packages/torch_tensorrt/_compile.py", line 49, in _module_ir
raise ValueError("Unknown ir was requested")
ValueError: Unknown ir was requested
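This may not be Docker-related at all: judging from the traceback, the second positional parameter of torch_tensorrt.compile appears to be ir, so the settings dict in the README snippet gets bound to ir instead of being unpacked as keyword arguments. A minimal pure-Python sketch of that pitfall (the compile function below is a hypothetical stand-in mimicking the assumed signature, not the real API):

```python
# Hypothetical stand-in for torch_tensorrt.compile with an assumed
# signature of (module, ir="default", **kwargs). A dict passed as the
# second positional argument lands in `ir` and fails validation.
def compile(module, ir="default", **kwargs):
    if not isinstance(ir, str):
        # Mirrors the "Unknown ir was requested" error in the traceback.
        raise ValueError("Unknown ir was requested")
    return {"module": module, "ir": ir, "settings": kwargs}


# Passing the settings dict positionally reproduces the error:
try:
    compile("traced_model", {"inputs": ["spec"]})
except ValueError as e:
    print(e)  # Unknown ir was requested

# Passing the same settings as keyword arguments works:
result = compile("traced_model", inputs=["spec"])
print(result["ir"])  # default
```

If this is indeed the cause, calling the real API with keyword arguments (inputs=..., enabled_precisions=...) rather than a positional dict should avoid the error.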