
Unable to convert TF-Hub Model to ONNX: You must feed a value for placeholder tensor #1260

@oborchers

Description

Describe the bug
When converting a previously downloaded TF-Hub model with the tf2onnx converter, I get an error message telling me to feed a value for a placeholder tensor.

Urgency
None

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.05
  • ONNX Runtime installed from (source or binary): Source
  • ONNX Runtime version: 1.5.2
  • Python version: 3.8.5
  • Visual Studio version (if applicable): NA
  • GCC/Compiler version (if compiling from source): gcc 7.5.0 / cmake version 3.19.1
  • CUDA/cuDNN version: 11.1 / cudnn8_8.0.4.30
  • TensorRT: cuda11.1-trt7.2.1.6
  • GPU model and memory: Nvidia V100 SXM2 (Driver 455.32.0)

To Reproduce
Download model from https://tfhub.dev/google/universal-sentence-encoder-large/5

python -m tf2onnx.convert --saved-model /data/nlp/tfhub_models/universal-sentence-encoder_v5 --output use_v5.onnx --opset 12
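Before pointing tf2onnx at the directory, it can help to sanity-check that the path is actually a SavedModel root (it should contain saved_model.pb and a variables/ subdirectory); a wrong path produces unrelated loader errors. A minimal stdlib-only sketch — the path is the one from the repro command above and is an assumption about the local setup:

```python
from pathlib import Path

def looks_like_saved_model(model_dir: str) -> bool:
    """Heuristic check that a directory is a TF SavedModel root:
    it must contain saved_model.pb (or saved_model.pbtxt) and a
    variables/ subdirectory."""
    root = Path(model_dir)
    has_graph = (root / "saved_model.pb").is_file() or (root / "saved_model.pbtxt").is_file()
    has_vars = (root / "variables").is_dir()
    return has_graph and has_vars

# Path taken from the repro command; adjust to your environment.
print(looks_like_saved_model("/data/nlp/tfhub_models/universal-sentence-encoder_v5"))
```

This only checks the directory layout, not that the model itself loads; in this report the layout was correct and the failure happens later, inside tf_loader.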

Expected behavior
Export of the TF-Hub model succeeds.

Error Output

2021-01-07 15:44:45.197535: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
  constant_folding: Graph size after: 4191 nodes (-11701), 5093 edges (-20107), time = 6368.6748ms.
  function_optimizer: function_optimizer did nothing. time = 15.89ms.
  constant_folding: Graph size after: 4191 nodes (0), 5093 edges (0), time = 1064.54797ms.
  function_optimizer: function_optimizer did nothing. time = 13.262ms.

Traceback (most recent call last):
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tf2onnx/convert.py", line 190, in <module>
    main()
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tf2onnx/convert.py", line 136, in main
    graph_def, inputs, outputs, initialized_tables = tf_loader.from_saved_model(
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tf2onnx/tf_loader.py", line 433, in from_saved_model
    _from_saved_model_v2(model_path, input_names, output_names, tag, signatures, concrete_function, large_model)
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tf2onnx/tf_loader.py", line 404, in _from_saved_model_v2
    _get_hash_table_info_from_trackable(imported, table_names, key_dtypes, value_dtypes,
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tf2onnx/tf_loader.py", line 279, in _get_hash_table_info_from_trackable
    if isinstance(r, TfRestoredResourceType) and hasattr(r, '_create_resource') and hasattr(r, 'resource_handle'):
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tensorflow/python/training/tracking/tracking.py", line 250, in resource_handle
    self._resource_handle = self._create_resource()
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 787, in __call__
    result = self._call(*args, **kwds)
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 854, in _call
    return self._concrete_stateful_fn._call_flat(
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1947, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 556, in call
    outputs = execute.execute(
  File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError:  You must feed a value for placeholder tensor 'PartitionedCall/unused_resource' with dtype resource
	 [[{{node PartitionedCall/unused_resource}}]] [Op:__inference_restored_function_body_136475]

Function call stack:
restored_function_body

Additional information
tf2onnx is installed from git:

pip install git+https://github.com/onnx/tensorflow-onnx
