Merged
11 changes: 3 additions & 8 deletions README.md
@@ -2,7 +2,7 @@

| Build Type | OS | Python | Tensorflow | Onnx opset | Status |
| --- | --- | --- | --- | --- | --- |
-| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.3 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
+| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.4 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.3 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |

## Supported Versions
@@ -11,20 +11,15 @@

tensorflow-onnx will use the ONNX version installed on your system and installs the latest ONNX version if none is found.

-We support ONNX opset-6 to opset-12. By default we use opset-8 for the resulting ONNX graph since most runtimes will support opset-8.
+We support ONNX opset-6 to opset-12. By default we use opset-9 for the resulting ONNX graph since most runtimes will support opset-9.
Support for future opsets is added as they are released.

-If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 11```.
+If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 12```.
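As a usage sketch, the flag is passed to the converter's command line as below; the SavedModel and output paths are placeholders, not part of this change:

```shell
# Convert a TensorFlow SavedModel to ONNX, pinning the target opset to 12.
# ./saved_model and model.onnx are placeholder paths.
python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx --opset 12
```

If `--opset` is omitted, the converter falls back to the default opset (`PREFERRED_OPSET`, raised to 9 by this change).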

### TensorFlow

We support all ```tf-1.x graphs```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 and up```. tf2onnx-1.5.4 was the last version that was tested all the way back to tf-1.4.

-There is now ```support for tf-2.x```.
-With the exception of LSTM unit tests, all unit tests are enabled and passing.
-Unit tests that we still need to fix are marked with ```@skip_tf2```.
-GRU/LSTM's are converting but not runnable due to type/shape inference issues at runtime (working on that one).
-All unit tests are running in eager mode. After execution we take the python function, make it a graph and convert it to ONNX.
When running under tf-2.x tf2onnx will use the tensorflow V2 controlflow.

You can install tf2onnx on top of tf-1.x or tf-2.x.
2 changes: 1 addition & 1 deletion ci_build/azure_pipelines/unit_test.yml
@@ -6,7 +6,7 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
python_versions: ['3.8']
-    tf_versions: ['2.3.0']
+    tf_versions: ['2.4.0']
onnx_opsets: ['']
job:
steps:
2 changes: 1 addition & 1 deletion tf2onnx/constants.py
@@ -16,7 +16,7 @@
CONTRIB_OPS_DOMAIN = "ai.onnx.contrib"

# Default opset version for onnx domain
-PREFERRED_OPSET = 8
+PREFERRED_OPSET = 9

# Default opset for custom ops
TENSORFLOW_OPSET = helper.make_opsetid("ai.onnx.converters.tensorflow", 1)
9 changes: 9 additions & 0 deletions tf2onnx/onnx_opset/tensor.py
@@ -1054,6 +1054,10 @@ def version_1(cls, ctx, node, **kwargs):
name=new_topk_name, attr={"k": k},
shapes=shapes, dtypes=[dtypes[0], onnx_pb.TensorProto.INT64])

+        if dtypes[0] != onnx_pb.TensorProto.FLOAT:
+            # opset-1 only supports float dtypes
+            ctx.insert_new_node_on_output("Cast", new_topk_node.input[0], to=onnx_pb.TensorProto.FLOAT)
+            ctx.insert_new_node_on_output("Cast", new_topk_node.output[0], to=dtypes[0])
new_cast_name = utils.make_name(topk_node_name)
ctx.make_node("Cast", [new_topk_node.output[1]], outputs=[topk_output2],
name=new_cast_name, attr={"to": onnx_pb.TensorProto.INT32},
@@ -1068,6 +1072,11 @@ def any_version_after10(cls, opset, ctx, node, **kwargs):
cast = ctx.make_node("Cast", [k_0d], attr={"to": onnx_pb.TensorProto.INT64})
k_1d = GraphBuilder(ctx).make_unsqueeze({'data': cast.output[0], "axes": [0]}, return_node=True)
ctx.replace_input(node, k_0d, k_1d.output[0], 1)
+        # cast X if needed
+        if dtypes[0] != onnx_pb.TensorProto.FLOAT:
+            # opset-10 supports types other than float but onnxruntime does not
+            ctx.insert_new_node_on_output("Cast", node.input[0], to=onnx_pb.TensorProto.FLOAT)
+            ctx.insert_new_node_on_output("Cast", node.output[0], to=dtypes[0])
# cast the index output to int32
cast_out = ctx.insert_new_node_on_output("Cast", node.output[1], name=utils.make_name(node.name), to=dtypes[1])
ctx.set_dtype(cast_out.output[0], dtypes[1])
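Both hunks above apply the same cast-wrapping pattern: when the TopK data input is not float, cast it up for the kernel and cast the values output back afterwards, leaving the int64 indices output alone. A minimal, self-contained sketch of that rewrite — using toy node dicts rather than tf2onnx's real graph/ctx API, which is an assumption for illustration only — looks like this:

```python
def wrap_topk_with_casts(nodes, dtype):
    """Return a node list with Cast nodes inserted around every TopK
    whose data input has a non-float dtype."""
    if dtype == "float32":
        return list(nodes)  # float inputs need no rewrite
    out = []
    for node in nodes:
        if node["op"] != "TopK":
            out.append(node)
            continue
        data = node["inputs"][0]
        casted = data + "_f32"
        values, indices = node["outputs"]
        # cast the data input up to float for the TopK kernel
        out.append({"op": "Cast", "inputs": [data], "outputs": [casted], "to": "float32"})
        # TopK now consumes the casted tensor and produces float values
        out.append({"op": "TopK", "inputs": [casted] + node["inputs"][1:],
                    "outputs": [values + "_f32", indices]})
        # cast the values output back to the original dtype; indices stay int64
        out.append({"op": "Cast", "inputs": [values + "_f32"], "outputs": [values], "to": dtype})
    return out

nodes = [{"op": "TopK", "inputs": ["x", "k"], "outputs": ["vals", "idx"]}]
rewritten = wrap_topk_with_casts(nodes, "int64")
print([n["op"] for n in rewritten])  # ['Cast', 'TopK', 'Cast']
```

The real converter does the same rewiring through `ctx.insert_new_node_on_output`, which splices a node onto an existing tensor and updates downstream consumers in place.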