Failed optimization when converting SSD Mobilenet V2 Coco #954

@turowicz

Description

Describe the bug
I have an SSD MobileNet V2 COCO model from the TF model zoo that I re-exported using TF 1.15.3 on GPU, and I am now trying to convert it to ONNX. I get the same error with the original network from the TF model zoo, without re-exporting it with the latest TF 1.x.

Urgency
This is blocking a production workflow.

System information

  • Nvidia Jetson Nano JP44
  • Ubuntu 18.04
  • Python 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0] on linux
  • tensorflow=1.15.2, onnx=1.7.0, tf2onnx=1.6.0/4a3c49

To Reproduce
I run the following command:

python3 -m tf2onnx.convert --saved-model /workspace/ssd_mobilenet_v2_coco_2018_03_29/export_1/saved_model/ --output /workspace/ssd-im.tf15.onnx --opset 11 --fold_const
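
For reference, the produced file can be sanity-checked after conversion with a short script like the sketch below (assuming the onnx and onnxruntime packages are available; the path matches the --output argument above):

# Sketch: verify that the converted model is structurally valid and loadable.
import onnx
import onnxruntime as ort

model_path = "/workspace/ssd-im.tf15.onnx"

model = onnx.load(model_path)
onnx.checker.check_model(model)          # raises if the graph is structurally invalid

sess = ort.InferenceSession(model_path)  # fails if any node cannot be mapped to a kernel
print([i.name for i in sess.get_inputs()])
print([o.name for o in sess.get_outputs()])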

Expected behavior
I expect the network to convert successfully and an .onnx file to be created without issues.

Screenshots
N/A

Additional context
Full log below:

root@15891574462c:/workspace# python3 -m tf2onnx.convert --saved-model /workspace/ssd_mobilenet_v2_coco_2018_03_29/export_1/saved_model/ --output /workspace/ssd-im.tf15.onnx --opset 11 --fold_const
2020-06-04 12:07:47.778799: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2020-06-04 12:07:54,009 - WARNING - From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2020-06-04 12:07:54.014732: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-06-04 12:07:54.031229: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:54.031367: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2020-06-04 12:07:54.031428: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-06-04 12:07:54.035127: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-06-04 12:07:54.038234: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-06-04 12:07:54.038998: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-06-04 12:07:54.043749: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-06-04 12:07:54.047692: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-06-04 12:07:54.048185: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-06-04 12:07:54.048403: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:54.048652: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:54.048768: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-06-04 12:07:54.070342: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-06-04 12:07:54.071357: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2d24ae80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-04 12:07:54.071428: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-06-04 12:07:54.141120: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:54.141518: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2d3b78d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-04 12:07:54.141572: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2020-06-04 12:07:54.142154: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:54.142288: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2020-06-04 12:07:54.142402: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-06-04 12:07:54.142482: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-06-04 12:07:54.142533: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-06-04 12:07:54.142567: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-06-04 12:07:54.142615: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-06-04 12:07:54.142654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-06-04 12:07:54.142689: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-06-04 12:07:54.142935: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:54.143154: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:54.143245: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-06-04 12:07:54.143344: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-06-04 12:07:56.479318: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-04 12:07:56.479395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2020-06-04 12:07:56.479424: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2020-06-04 12:07:56.479887: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:56.480179: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:07:56.480343: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 683 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-06-04 12:09:00.281640: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:09:00.281823: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2020-06-04 12:09:00.281907: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-06-04 12:09:00.281979: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-06-04 12:09:00.282030: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-06-04 12:09:00.282075: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-06-04 12:09:00.282117: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-06-04 12:09:00.282163: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-06-04 12:09:00.282207: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-06-04 12:09:00.282375: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:09:00.282580: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:09:00.282667: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-06-04 12:09:00.282730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-04 12:09:00.282761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2020-06-04 12:09:00.282784: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2020-06-04 12:09:00.282946: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:09:00.283149: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-06-04 12:09:00.283265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 683 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-06-04 12:09:00,283 - INFO - Using tensorflow=1.15.2, onnx=1.7.0, tf2onnx=1.6.0/4a3c49
2020-06-04 12:09:00,284 - INFO - Using opset <onnx, 11>
2020-06-04 12:10:18,072 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat
2020-06-04 12:10:18,136 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_1
2020-06-04 12:10:18,202 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_2
2020-06-04 12:10:18,268 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_3
2020-06-04 12:10:18,333 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_4
2020-06-04 12:10:18,398 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_5
2020-06-04 12:10:18,463 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_6
2020-06-04 12:10:18,528 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_7
2020-06-04 12:10:18,593 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_8
2020-06-04 12:10:18,659 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_9
2020-06-04 12:10:18,724 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_10
2020-06-04 12:10:18,788 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_11
2020-06-04 12:10:18,854 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_12
2020-06-04 12:10:18,920 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_13
2020-06-04 12:10:18,986 - INFO - folding node type=Sqrt, name=MultipleGridAnchorGenerator/Sqrt
2020-06-04 12:10:19,051 - INFO - folding node type=Mul, name=MultipleGridAnchorGenerator/mul_15
2020-06-04 12:10:19,116 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range
2020-06-04 12:10:19,182 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_14
2020-06-04 12:10:19,247 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_1
2020-06-04 12:10:19,312 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_15
2020-06-04 12:10:19,378 - INFO - folding node type=Sqrt, name=MultipleGridAnchorGenerator/Sqrt_1
2020-06-04 12:10:19,443 - INFO - folding node type=Mul, name=MultipleGridAnchorGenerator/mul_23
2020-06-04 12:10:19,509 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_2
2020-06-04 12:10:19,574 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_16
2020-06-04 12:10:19,640 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_3
2020-06-04 12:10:19,705 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_17
2020-06-04 12:10:19,771 - INFO - folding node type=Sqrt, name=MultipleGridAnchorGenerator/Sqrt_2
2020-06-04 12:10:19,836 - INFO - folding node type=Mul, name=MultipleGridAnchorGenerator/mul_31
2020-06-04 12:10:19,902 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_4
2020-06-04 12:10:19,968 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_18
2020-06-04 12:10:20,033 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_5
2020-06-04 12:10:20,099 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_19
2020-06-04 12:10:20,165 - INFO - folding node type=Sqrt, name=MultipleGridAnchorGenerator/Sqrt_3
2020-06-04 12:10:20,230 - INFO - folding node type=Mul, name=MultipleGridAnchorGenerator/mul_39
2020-06-04 12:10:20,295 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_6
2020-06-04 12:10:20,360 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_20
2020-06-04 12:10:20,425 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_7
2020-06-04 12:10:20,490 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_21
2020-06-04 12:10:20,556 - INFO - folding node type=Sqrt, name=MultipleGridAnchorGenerator/Sqrt_4
2020-06-04 12:10:20,621 - INFO - folding node type=Mul, name=MultipleGridAnchorGenerator/mul_47
2020-06-04 12:10:20,687 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_8
2020-06-04 12:10:20,752 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_22
2020-06-04 12:10:20,817 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_9
2020-06-04 12:10:20,881 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_23
2020-06-04 12:10:20,948 - INFO - folding node type=Sqrt, name=MultipleGridAnchorGenerator/Sqrt_5
2020-06-04 12:10:21,013 - INFO - folding node type=Mul, name=MultipleGridAnchorGenerator/mul_55
2020-06-04 12:10:21,079 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_10
2020-06-04 12:10:21,144 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_24
2020-06-04 12:10:21,209 - INFO - folding node type=Range, name=MultipleGridAnchorGenerator/range_11
2020-06-04 12:10:21,274 - INFO - folding node type=Cast, name=MultipleGridAnchorGenerator/ToFloat_25
2020-06-04 12:10:21,343 - INFO - folding node type=Cast, name=Postprocessor/ToFloat_1
2020-06-04 12:10:21,408 - INFO - folding node type=Cast, name=Postprocessor/ToFloat_2
2020-06-04 12:13:02,580 - INFO - Optimizing ONNX model
2020-06-04 12:13:07,559 - WARNING - Failed to apply optimize_transpose
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/__init__.py", line 50, in optimize_graph
    current = copy.deepcopy(graph)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 159, in deepcopy
    copier = getattr(x, "__deepcopy__", None)
ReferenceError: weakly-referenced object no longer exists
2020-06-04 12:13:12,292 - WARNING - Failed to apply fold_constants
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/__init__.py", line 50, in optimize_graph
    current = copy.deepcopy(graph)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 159, in deepcopy
    copier = getattr(x, "__deepcopy__", None)
ReferenceError: weakly-referenced object no longer exists
2020-06-04 12:13:15,257 - WARNING - Failed to apply loop_optimizer
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/__init__.py", line 50, in optimize_graph
    current = copy.deepcopy(graph)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 159, in deepcopy
    copier = getattr(x, "__deepcopy__", None)
ReferenceError: weakly-referenced object no longer exists
2020-06-04 12:13:21,546 - WARNING - Failed to apply merge_duplication
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/__init__.py", line 50, in optimize_graph
    current = copy.deepcopy(graph)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 159, in deepcopy
    copier = getattr(x, "__deepcopy__", None)
ReferenceError: weakly-referenced object no longer exists
2020-06-04 12:13:24,506 - WARNING - Failed to apply remove_identity
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/__init__.py", line 50, in optimize_graph
    current = copy.deepcopy(graph)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 159, in deepcopy
    copier = getattr(x, "__deepcopy__", None)
ReferenceError: weakly-referenced object no longer exists
2020-06-04 12:13:28,998 - WARNING - Failed to apply remove_back_to_back
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/__init__.py", line 50, in optimize_graph
    current = copy.deepcopy(graph)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 159, in deepcopy
    copier = getattr(x, "__deepcopy__", None)
ReferenceError: weakly-referenced object no longer exists
2020-06-04 12:13:30,180 - INFO - After optimization: no change
2020-06-04 12:13:42,188 - INFO - 
2020-06-04 12:13:42,188 - INFO - Successfully converted TensorFlow model /workspace/ssd_mobilenet_v2_coco_2018_03_29/export_1/saved_model/ to ONNX
2020-06-04 12:13:44,192 - INFO - ONNX model is saved at /workspace/ssd-im.tf15.onnx
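
The repeated ReferenceError in the optimizer passes points at copy.deepcopy hitting a weak reference whose target has already been collected: attribute access on a dead weakref proxy raises exactly this error, which is why the traceback ends at the getattr(x, "__deepcopy__", None) line in copy.py. A minimal, tf2onnx-independent illustration of that Python behaviour (a sketch with a made-up Node class and holder dict, not tf2onnx code):

import copy
import weakref

class Node:                                  # hypothetical stand-in for a graph object
    pass

target = Node()
holder = {"parent": weakref.proxy(target)}   # a proxy does not keep its referent alive
del target                                   # referent is collected immediately in CPython

# deepcopy walks into the dead proxy; the __deepcopy__ attribute lookup inside
# copy.py then raises:
# ReferenceError: weakly-referenced object no longer exists
copy.deepcopy(holder)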
