SSD MobileNetV2 MnasFPN gets converted successfully but inference fails at Conv2D op #1356

@codethief

Description

Describe the bug
SSD MobileNetV2 MnasFPN gets converted successfully by tf2onnx 1.8.2 but fails to execute using ONNX Runtime 1.6.0.

System information

  • OS Platform and Distribution: Ubuntu 20.04
  • Tensorflow Version: 1.15.3
  • Python version: 3.6.12

To Reproduce
I trained a model based upon ssd_mobilenet_v2_mnasfpn_coco from the TFv1 Model Zoo.

Afterwards I converted it using tf2onnx:

python3 -m tf2onnx.convert --saved-model path/to/saved/model --output model.onnx --opset 11

which seemed to work perfectly. However, running inference with this model then resulted in:

>           return self._sess.run(output_names, input_feed, run_options)
E           onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Conv node. Name:'WeightSharedConvolutionalBoxPredictor_1/PredictionTower/conv2d_0/separable_conv2d/depthwise:0_nchwc' Status Message: Input channels C is not equal to kernel channels * group. C: 48 kernel channels: 1 group: 1

/home/user/opt/miniconda3/envs/my-conda-env/lib/python3.6/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:124: Fail
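For reference, this is roughly how I invoke the model (a minimal sketch; the input name and the uint8 NHWC shape are assumptions based on the usual TF1 Object Detection API signature, not verified against this exact model):

```python
import os
import numpy as np

def run_inference(model_path):
    """Run the converted model on a dummy image.

    Assumes the exported graph takes a single uint8 NHWC image batch,
    as TF1 Object Detection API models conventionally do.
    """
    import onnxruntime as ort  # onnxruntime 1.6.0 in my case
    sess = ort.InferenceSession(model_path)
    input_name = sess.get_inputs()[0].name
    dummy = np.zeros((1, 320, 320, 3), dtype=np.uint8)
    # This is the call that raises the Conv2D error quoted above.
    return sess.run(None, {input_name: dummy})

if os.path.exists("model.onnx"):
    outputs = run_inference("model.onnx")
```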

Expected behavior
tf2onnx should either have raised an error during conversion or produced a working model.

Additional context
The issue looks similar to #1100 but I'm not entirely sure. #824 also seems related but the suggestions there didn't help me as I'm not using use_explicit_padding: true in my pipeline.config in the first place.
