
Commit 7f9dd7d

hwangdeyu and fatcat-z committed
add load_op_libraries in docs
Signed-off-by: hwangdeyu <[email protected]>
Co-authored-by: fatcat-z <[email protected]>
1 parent 872a96b commit 7f9dd7d

2 files changed: +7 −2 lines


README.md (5 additions, 0 deletions)

@@ -140,6 +140,7 @@ python -m tf2onnx.convert
     [--concrete_function CONCRETE_FUNCTION]
     [--target TARGET]
     [--custom-ops list-of-custom-ops]
+    [--load_op_libraries tf_custom_ops_library]
     [--fold_const]
     [--large_model]
     [--continue_on_error]

@@ -226,6 +227,10 @@ runtime can still open the model. The format is a comma-separated map of tf op n…
 OpName:domain. If only an op name is provided (no colon), the default domain of `ai.onnx.converters.tensorflow`
 will be used.
 
+#### --load_op_libraries
+
+Some ops are not included in the standard TensorFlow library. You can build a custom op library for them by following the TensorFlow [Create an op](https://www.tensorflow.org/guide/create_op) guide, then use this parameter to pass a comma-separated list of TF op library paths to load; each library file usually ends with `.so`.
+
 #### --target
 
 Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.

tf2onnx/convert.py (2 additions, 2 deletions)

@@ -69,9 +69,9 @@ def get_args():
     parser.add_argument("--use-graph-names", help="(saved model only) skip renaming io using signature names",
                         action="store_true")
     parser.add_argument("--opset", type=int, default=None, help="opset version to use for onnx domain")
-    parser.add_argument("--dequantize", help="Remove quantization from model. Only supported for tflite currently.",
+    parser.add_argument("--dequantize", help="remove quantization from model. Only supported for tflite currently.",
                         action="store_true")
-    parser.add_argument("--custom-ops", help="Comma-separated map of custom ops to domains in format OpName:domain. "
+    parser.add_argument("--custom-ops", help="comma-separated map of custom ops to domains in format OpName:domain. "
                                              "Domain 'ai.onnx.converters.tensorflow' is used by default.")
     parser.add_argument("--extra_opset", default=None,
                         help="extra opset with format like domain:version, e.g. com.microsoft:1")
