| Build Type | OS | Python | Tensorflow | Onnx opset | Status |
| --- | --- | --- | --- | --- | --- |
| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.4 | 7-12 | [Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.3 | 7-12 | [Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) |
## Supported Versions
### ONNX
tensorflow-onnx will use the ONNX version installed on your system, or install the latest ONNX version if none is found.

We support ONNX opset-6 to opset-12. By default we use opset-9 for the resulting ONNX graph since most runtimes will support opset-9. Support for future opsets is added as they are released.

If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 12```.

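The opset rules above can be sketched as a small helper. This is a hypothetical illustration of the stated policy (supported range 6-12, default 9), not tf2onnx's actual resolution code:

```python
# Hypothetical helper mirroring the opset rules above; tf2onnx's real
# resolution logic lives inside the converter, this is only an illustration.
SUPPORTED_OPSETS = range(6, 13)  # opset-6 through opset-12
DEFAULT_OPSET = 9                # default used for the resulting graph

def resolve_opset(requested=None):
    """Return the opset to emit, falling back to the default when unset."""
    if requested is None:
        return DEFAULT_OPSET
    if requested not in SUPPORTED_OPSETS:
        raise ValueError(f"opset {requested} is outside the supported range 6-12")
    return requested
```

Calling `resolve_opset(12)` corresponds to passing ```--opset 12``` on the command line.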
### TensorFlow
We support all ```tf-1.x graphs```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 and up```. tf2onnx-1.5.4 was the last version that was tested all the way back to tf-1.4.

When running under tf-2.x, tf2onnx will use the TensorFlow V2 control flow.

You can install tf2onnx on top of tf-1.x or tf-2.x.
    [--outputs GRAPH_OUTPUTS]
    [--inputs-as-nchw inputs_provided_as_nchw]
    [--opset OPSET]
    [--tag TAG]
    [--signature_def SIGNATURE_DEF]
    [--concrete_function CONCRETE_FUNCTION]
    [--target TARGET]
    [--custom-ops list-of-custom-ops]
    [--fold_const]
    [--large_model]
    [--continue_on_error]
    [--verbose]
    [--output_frozen_graph]
```
### Parameters
By default we use opset 9 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 5``` would create an ONNX graph that uses only ops available in opset 5. Because older opsets have in most cases fewer ops, some models might not convert on an older opset.

#### --tag

Only valid with parameter `--saved_model`. Specifies the tag in the saved_model to be used. Typical value is 'serve'.

#### --signature_def

Only valid with parameter `--saved_model`. Specifies which signature to use within the specified `--tag` value. Typical value is 'serving_default'.

#### --concrete_function

(This is experimental, valid only for TF2.x models.)

Only valid with parameter `--saved_model`. If a model contains a list of concrete functions, under the function name `__call__` (as can be viewed using the command `saved_model_cli show --all`), this parameter is a 0-based integer specifying which function in that list should be converted. This parameter takes priority over `--signature_def`, which will be ignored.
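The precedence described above can be sketched in plain Python. The container names below are stand-ins for illustration, not tf2onnx internals:

```python
# Illustrative sketch of the precedence described above: --concrete_function
# (a 0-based index into the model's concrete functions) takes priority over
# --signature_def, which is then ignored.
def pick_function(concrete_functions, signatures,
                  concrete_index=None, signature_def=None):
    if concrete_index is not None:
        # The index wins; any --signature_def value is ignored.
        return concrete_functions[concrete_index]
    # Otherwise fall back to the requested signature, with the
    # typical 'serving_default' as the default choice.
    return signatures[signature_def or "serving_default"]
```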

#### --large_model

(This is experimental, valid only for TF2.x models.)

Only valid with parameter `--saved_model`. When set, creates a zip file containing the ONNX protobuf model and large tensor values stored externally. This allows converting models that exceed the 2 GB protobuf limit.
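The archive layout described above can be pictured with the standard `zipfile` module. The member names below are assumptions for the sketch, not the exact names tf2onnx writes:

```python
import io
import zipfile

# Build an in-memory zip shaped like the --large_model output described
# above: the ONNX protobuf plus externally stored tensor data.
# Member names here are hypothetical, for illustration only.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model.onnx", b"\x08\x07")              # protobuf graph (dummy bytes)
    zf.writestr("weights/tensor_0.bin", b"\x00" * 16)   # external tensor payload

# Reopen the archive and list its members.
with zipfile.ZipFile(buf) as zf:
    members = zf.namelist()
```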

#### --output_frozen_graph

Saves the frozen TensorFlow graph to a file.

#### --custom-ops

If a model contains ops not recognized by ONNX Runtime, you can tag these ops with a custom op domain so that the runtime can still open the model. The format is a comma-separated map of tf op names to domains, in the format `OpName:domain`. If only an op name is provided (no colon), the default domain of `ai.onnx.converters.tensorflow` will be used.

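The `OpName:domain` format above can be parsed in a few lines of Python. This is a sketch of the flag's syntax, not tf2onnx's actual parser:

```python
# Default domain applied when an op name has no explicit domain,
# as described for --custom-ops above.
DEFAULT_DOMAIN = "ai.onnx.converters.tensorflow"

def parse_custom_ops(spec):
    """Parse a --custom-ops value such as 'Print:my.domain,Unique' into a
    {op_name: domain} map, applying the default domain when no colon is
    present."""
    mapping = {}
    for item in spec.split(","):
        name, sep, domain = item.strip().partition(":")
        mapping[name] = domain if sep else DEFAULT_DOMAIN
    return mapping
```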

#### --target

Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.

#### --fold_const

Deprecated. Constant folding is always enabled.

### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs