Description
When I try to compile a model, I get the following error:
DEBUG: Unable to get schema for Node %b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15) (NodeConverterRegistry.Convertable)
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/conversion/conversion.cpp:392] Expected schema to be true but got false
Unable to get schema for Node %b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15) (conversion.VerifyCoverterSupportForBlock)
The related part of the graph definition is:
%15 : int[] = aten::size(%images.1) # <string>:7:9
%b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15)
The input shape is (1, 1, 3, 672, 672).
The detailed log is here:
listunpack.txt
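For reference, unpacking images.size() in a forward reproduces exactly this pattern. A minimal sketch (a hypothetical function with placeholder names, not my actual model) that shows the same aten::size + prim::ListUnpack nodes when scripted:

#include <torch/script.h>
#include <torch/jit.h>

int main() {
  // Hypothetical minimal TorchScript function: unpacking images.size()
  // lowers to aten::size followed by prim::ListUnpack.
  auto cu = torch::jit::compile(R"JIT(
def forward(images):
    b, nframe, c, h, w = images.size()
    return images.view(b * nframe, c, h, w)
)JIT");
  // Print the scripted graph; it contains the same two nodes as in the log above.
  cu->get_function(c10::QualifiedName("forward")).graph()->dump();
  return 0;
}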
GDB backtrace:
#0 0x00007fff63987438 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007fff6398903a in __GI_abort () at abort.c:89
#2 0x00007ffff7a8ddde in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x00007ffff7a99896 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007ffff7a99901 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff7a99b55 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x000000000047b116 in trtorch::core::conversion::GetUnsupportedOpsInBlock[abi:cxx11](torch::jit::Block const*) (b=0x5d4b9d50) at core/conversion/conversion.cpp:390
#7 0x000000000047b3a7 in trtorch::core::conversion::VerifyConverterSupportForBlock (b=0x5d4b9d50) at core/conversion/conversion.cpp:406
#8 0x000000000045d784 in trtorch::core::CheckMethodOperatorSupport (mod=..., method_name="forward") at core/compiler.cpp:136
#9 0x000000000045ac55 in trtorch::CheckMethodOperatorSupport (module=..., method_name="forward") at cpp/api/src/trtorch.cpp:14
#10 0x000000000042178d in main (argc=5, argv=0x7fffffffdf68) at cpp/trtorchc/main.cpp:371
In the official PyTorch source code, I find this:
%16 : Tensor[] = aten::chunk(%gates, %7, %8)
%ingate.1 : Tensor, %forgetgate.1 : Tensor, %cellgate.1 : Tensor, %outgate.1 : Tensor = prim::ListUnpack(%16)
Does this mean that aten::size is an operator rather than an evaluator?
In trtorch's aten.cpp, we have:
.evaluator({c10::Symbol::fromQualString("aten::size"),
[](const torch::jit::Node* n, kwargs& args) -> c10::optional<torch::jit::IValue> {
LOG_WARNING("There may be undefined behavior using dynamic shape and aten::size");
auto tensor_var = args.at(n->input(0));
if (n->inputs().size() == 1) {
if (tensor_var.isITensor()) {
auto tensor = tensor_var.ITensor();
return util::toVec(tensor->getDimensions());
} else {
auto tensor = tensor_var.unwrapToTensor();
return tensor.sizes();
}
} else {
auto dim = args.at(n->input(1)).unwrapToInt();
if (tensor_var.isITensor()) {
auto tensor = tensor_var.ITensor();
return util::toVec(tensor->getDimensions())[dim];
} else {
auto tensor = tensor_var.unwrapToTensor();
return tensor.sizes()[dim];
}
}
},
EvalOptions().validSchemas(
{"aten::size(Tensor self) -> (int[])", "aten::size.int(Tensor self, int dim) -> (int)"})})
.evaluator({c10::Symbol::fromQualString("aten::__getitem__"),
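So the aten::size evaluator resolves the size list at conversion time, but there does not seem to be anything registered for prim::ListUnpack itself. If I understand the evaluator mechanism correctly, something along these lines might cover it; this is only a rough sketch following the registration pattern above (the Var::IValue() call and the tuple-return convention for multi-output nodes are my assumptions, not verified against the current API):

.evaluator({c10::Symbol::fromQualString("prim::ListUnpack"),
            [](const torch::jit::Node* n, kwargs& args) -> c10::optional<torch::jit::IValue> {
              // Assumes the input list (e.g. the result of the aten::size evaluator)
              // is a statically known IValue at conversion time, not an ITensor.
              auto elems = args.at(n->input(0)).IValue()->toListRef().vec();
              // One element per node output, returned as a tuple (assuming the
              // conversion loop can map a tuple result onto multiple outputs).
              return c10::ivalue::Tuple::create(std::move(elems));
            },
            EvalOptions()})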
In another graph, compilation hits the same issue:
%46 : Tensor[] = aten::split(%45, %6, %7) # /opt/tiger/conda/lib/python3.7/site-packages/torch/tensor.py:375:0
%47 : Tensor, %48 : Tensor = prim::ListUnpack(%46)
DEBUG: Unable to get schema for Node %47 : Tensor, %48 : Tensor = prim::ListUnpack(%46) (NodeConverterRegistry.Convertable)
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/conversion/conversion.cpp:392] Expected schema to be true but got false
Unable to get schema for Node %47 : Tensor, %48 : Tensor = prim::ListUnpack(%46) (conversion.VerifyCoverterSupportForBlock)