Description
Describe the bug
The VERSION_NUMBER has been set to 1.9.0 since January. Shouldn't it be set to 1.8.5?
I noticed this when installing from master in a (successful) attempt to work around a conversion error I was getting with tf2onnx version 1.8.4. I ran pip install git+https://github.com/onnx/tensorflow-onnx and noticed that it installed as 1.9.0, which has not yet been released.
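To make the mismatch concrete: pip takes the version straight from the repository's VERSION_NUMBER file, so a master install reports whatever that file contains. A quick numeric comparison (plain Python, with the version strings hard-coded from this report rather than read from the package) shows master already claims to be ahead of the latest tagged release:

```python
# Compare dotted version strings numerically.
# The values below are hard-coded from this report, not read from tf2onnx.
def parse_version(v: str) -> tuple:
    """Split a string like '1.9.0' into (1, 9, 0) for numeric comparison."""
    return tuple(int(part) for part in v.split("."))

master_version = "1.9.0"   # what pip reported after installing from master
latest_release = "1.8.5"   # latest tagged release at the time of writing

# master's VERSION_NUMBER is ahead of the newest actual release
print(parse_version(master_version) > parse_version(latest_release))  # True
```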
On a related note, here is the LSTM conversion error I was getting with the tf2onnx 1.8.4 Python API:
Traceback (most recent call last):
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/framework/importer.py", line 496, in _import_graph_def_internal
results = c_api.TF_GraphImportGraphDefWithResults(
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node model_1/lstm/AssignVariableOp was passed float from model_1/lstm/lstm_cell/ones_like_1/ReadVariableOp/resource:0 incompatible with expected resource.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "mpi_learn.py", line 133, in <module>
mpi_train(conf, shot_list_train, shot_list_validate, loader,
File "/Users/felker/Desktop/plasma-python/plasma/models/mpi_runner.py", line 970, in mpi_train
specific_builder.save_model_weights(train_model, int(round(e)))
File "/Users/felker/Desktop/plasma-python/plasma/models/builder.py", line 411, in save_model_weights
model_proto, external_tensor_storage = tf2onnx.convert.from_keras(
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tf2onnx/convert.py", line 327, in from_keras
frozen_graph = tf_loader.from_function(concrete_func, input_names, output_names, large_model=large_model)
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tf2onnx/tf_loader.py", line 145, in from_function
frozen_func = convert_variables_to_constants_v2(func, lower_control_flow=False, aggressive_inlining=True)
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1076, in convert_variables_to_constants_v2
return _construct_concrete_function(func, output_graph_def,
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1001, in _construct_concrete_function
new_func = wrap_function.function_from_graph_def(output_graph_def,
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/eager/wrap_function.py", line 650, in function_from_graph_def
wrapped_import = wrap_function(_imports_graph_def, [])
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/eager/wrap_function.py", line 621, in wrap_function
func_graph.func_graph_from_py_func(
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/eager/wrap_function.py", line 87, in __call__
return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs)
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/eager/wrap_function.py", line 93, in wrapped
return fn(*args, **kwargs)
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/eager/wrap_function.py", line 648, in _imports_graph_def
importer.import_graph_def(graph_def, name="")
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 538, in new_func
return func(*args, **kwargs)
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/framework/importer.py", line 400, in import_graph_def
return _import_graph_def_internal(
File "/usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tensorflow/python/framework/importer.py", line 501, in _import_graph_def_internal
raise ValueError(str(e))
ValueError: Input 0 of node model_1/lstm/AssignVariableOp was passed float from model_1/lstm/lstm_cell/ones_like_1/ReadVariableOp/resource:0 incompatible with expected resource.
Seems related to the discussion in #1152. Was this fixed in #1481, @TomWildenhain-Microsoft?
The conversion works with both tf2onnx 1.8.5 and current master:
TF freezing failed. Attempting to fix freezing errors.
Removed AssignVariableOp model/lstm_1/AssignVariableOp_1
Removed AssignVariableOp model/lstm_1/AssignVariableOp
Removed AssignVariableOp model/lstm/AssignVariableOp_1
Removed AssignVariableOp model/lstm/AssignVariableOp
WARNING:tensorflow:From /usr/local/Caskroom/miniconda/base/envs/frnn-tf2/lib/python3.8/site-packages/tf2onnx/tf_loader.py:627: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
I am trying to fully understand what is happening here: is this related to how the conversion removes the statefulness from my LSTM model?
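For intuition about the "Removed AssignVariableOp" log lines above: the fallback freezing path apparently drops the AssignVariableOp nodes (the ops a stateful LSTM uses to write its hidden state back to resource variables) from the frozen graph. The following toy illustration (plain Python over a list of dicts, NOT tf2onnx's actual implementation) mimics that pruning step on the node names from the log:

```python
# Toy illustration (not tf2onnx's real code): prune AssignVariableOp nodes
# from a graph modeled as a list of {"name", "op"} dicts, mirroring the
# "Removed AssignVariableOp ..." lines in the log above.
nodes = [
    {"name": "model/lstm_1/AssignVariableOp_1", "op": "AssignVariableOp"},
    {"name": "model/lstm_1/AssignVariableOp",   "op": "AssignVariableOp"},
    {"name": "model/lstm/lstm_cell/MatMul",     "op": "MatMul"},
    {"name": "model/lstm/AssignVariableOp_1",   "op": "AssignVariableOp"},
    {"name": "model/lstm/AssignVariableOp",     "op": "AssignVariableOp"},
]

# Keep everything except the state-assignment ops.
kept = [n for n in nodes if n["op"] != "AssignVariableOp"]
removed = [n["name"] for n in nodes if n["op"] == "AssignVariableOp"]

for name in removed:
    print("Removed AssignVariableOp", name)
```

Under this reading, the exported ONNX graph would no longer carry the state write-back between invocations, which is consistent with the conversion succeeding only after those ops are stripped.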
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
- TensorFlow Version: 2.4.1
- Python version: 3.8