
Commit ab6584c

Remove --fold_const parameter (#1861)
* remove fold_const param
  Signed-off-by: hwangdeyu <[email protected]>
* remove input_signature in from_keras_tf1()
  Signed-off-by: hwangdeyu <[email protected]>
1 parent 55d001a commit ab6584c

12 files changed (+25 −33 lines)

README.md

Lines changed: 0 additions & 4 deletions
@@ -140,7 +140,6 @@ python -m tf2onnx.convert
     [--concrete_function CONCRETE_FUNCTION]
     [--target TARGET]
     [--custom-ops list-of-custom-ops]
-    [--fold_const]
     [--large_model]
     [--continue_on_error]
     [--verbose]
@@ -230,9 +229,6 @@ will be used.
 
 Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
 
-#### --fold_const
-
-Deprecated.
 
 ### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs
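Since ```--fold_const``` no longer exists, a current conversion command simply omits it. Below is a minimal sketch of driving the CLI from Python; the frozen-graph path, tensor names, and opset are placeholders, not values taken from this commit.

```python
# Minimal sketch; file names, tensor names, and opset are illustrative placeholders.
import subprocess

cmd = [
    "python", "-m", "tf2onnx.convert",
    "--input", "frozen_model.pb",   # placeholder frozen TensorFlow graph
    "--inputs", "input:0",
    "--outputs", "output:0",
    "--opset", "13",
    "--output", "model.onnx",
    # "--fold_const" is gone after this commit; constant folding is always enabled.
]
subprocess.run(cmd, check=True)
```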

Troubleshooting.md

Lines changed: 1 addition & 1 deletion
@@ -33,6 +33,6 @@ The reason for this is that there is a dynamic input of a tensorflow op but the
 
 An example of this is the [ONNX Slice operator before opset-10](https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Slice-1) - the start and end of the slice are static attributes that need to be known at graph creation. In tensorflow the [strided slice op](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) allows dynamic inputs. tf2onnx will try to find the real value of begin and end of the slice and can find them in most cases. But if those are real dynamic values calculate at runtime it will result in the message ```get tensor value: ... must be Const```.
 
-You can pass the options ```--fold_const``` in the tf2onnx command line that allows tf2onnx to apply more aggressive constant folding which will increase chances to find a constant.
+You can pass the options ```--fold_const```(removed after tf2onnx-1.9.3) in the tf2onnx command line that allows tf2onnx to apply more aggressive constant folding which will increase chances to find a constant.
 
 If this doesn't work the model is most likely not to be able to convert to ONNX. We used to see this a lot of issue with the ONNX Slice op and in opset-10 was updated for exactly this reason.
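To make the constraint concrete, here is a minimal sketch (not part of this commit) contrasting a slice whose bounds are compile-time constants with one whose bound only arrives at runtime; the shapes, the opset, and the use of tf2onnx's Python ```from_function``` API are assumptions for illustration.

```python
# Minimal sketch, assuming TF 2.x and a tf2onnx release that provides convert.from_function.
import tensorflow as tf
from tf2onnx import convert

@tf.function
def static_slice(x):
    # begin/end are Python constants, so the converter can turn them into static ONNX attributes.
    return tf.strided_slice(x, begin=[0], end=[2])

@tf.function
def dynamic_slice(x, n):
    # end depends on a runtime tensor; before opset-10 this is the pattern that produces
    # "get tensor value: ... must be Const".
    return tf.strided_slice(x, begin=[0], end=[n])

spec = (tf.TensorSpec([4], tf.float32, name="x"),)
model_proto, _ = convert.from_function(static_slice, input_signature=spec, opset=13)
print(len(model_proto.graph.node), "nodes in the converted graph")
```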

examples/rnn_tips.md

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@ For other advanced RNN cells, it is supposed to good to convert as well, but the
 Use following commands to have a quick trial on your model:
 
 ```
-python -m tf2onnx.convert --input frozen_rnn_model.pb --inputs input1:0,input2:0 --outputs output1:0,output2:0 --fold_const --opset 8 --output target.onnx --continue_on_error
+python -m tf2onnx.convert --input frozen_rnn_model.pb --inputs input1:0,input2:0 --outputs output1:0,output2:0 --opset 8 --output target.onnx --continue_on_error
 ```
 
 ## Limitation
@@ -36,7 +36,7 @@ Use [onnxruntime](https://github.com/Microsoft/onnxruntime) or [caffe2](https://
 There is a simpler way to run your models and test its correctness (compared with TensorFlow run) using following command.
 
 ```
-python tests\run_pretrained_models.py --backend onnxruntime --config rnn.yaml --tests model_name --fold_const --onnx-file ".\tmp" --opset 8
+python tests\run_pretrained_models.py --backend onnxruntime --config rnn.yaml --tests model_name --onnx-file ".\tmp" --opset 8
 ```
 
 The content of rnn.yaml looks as below. For inputs, an explicit numpy expression or a shape can be used. If a shape is specified, the value will be randomly generated.
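As a quick follow-up to the conversion command above, a minimal sketch of checking the resulting model with onnxruntime is shown below; ```target.onnx``` comes from the command, while the input shapes and dtypes are placeholders that must match the real model.

```python
# Minimal sketch; input shapes/dtypes are placeholders and must match the actual model.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("target.onnx")
feed = {
    "input1:0": np.random.rand(1, 16).astype(np.float32),  # placeholder shape
    "input2:0": np.random.rand(1, 16).astype(np.float32),  # placeholder shape
}
outputs = sess.run(["output1:0", "output2:0"], feed)
print([o.shape for o in outputs])
```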

tests/backend_test_base.py

Lines changed: 5 additions & 5 deletions
@@ -133,7 +133,7 @@ def assert_results_equal(self, expected, actual, rtol, atol, mtol=None,
             if check_shape:
                 self.assertEqual(expected_val.shape, actual_val.shape)
 
-    def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeholders, large_model, constant_fold):
+    def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeholders, large_model):
         np.random.seed(1)  # Make it reproducible.
         clean_feed_dict = {utils.node_name(k): v for k, v in feed_dict.items()}
         if is_tf2() and not as_session:
@@ -195,7 +195,7 @@ def freeze_and_run_tf(self, func, feed_dict, outputs, as_session, premade_placeh
             tf_reset_default_graph()
             with tf_session() as sess:
                 tf.import_graph_def(graph_def, name='')
-            graph_def = tf_optimize(list(feed_dict.keys()), outputs, graph_def, fold_constant=constant_fold)
+            graph_def = tf_optimize(list(feed_dict.keys()), outputs, graph_def)
 
         return result, graph_def, initialized_tables
 
@@ -331,8 +331,8 @@ def get_dtype(info):
             self.assertEqual(get_dtype(info), graph.get_dtype(info.name))
 
     def run_test_case(self, func, feed_dict, input_names_with_port, output_names_with_port,
-                      rtol=1e-07, atol=1e-5, mtol=None, convert_var_to_const=True, constant_fold=True,
-                      check_value=True, check_shape=True, check_dtype=True, process_args=None, onnx_feed_dict=None,
+                      rtol=1e-07, atol=1e-5, mtol=None, convert_var_to_const=True, check_value=True,
+                      check_shape=True, check_dtype=True, process_args=None, onnx_feed_dict=None,
                       graph_validator=None, as_session=False, large_model=False, premade_placeholders=False,
                       use_custom_ops=False, optimize=True):
         """
@@ -361,7 +361,7 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit
 
         expected, graph_def, initialized_tables = \
             self.freeze_and_run_tf(func, feed_dict, output_names_with_port, as_session,
-                                   premade_placeholders, large_model, constant_fold)
+                                   premade_placeholders, large_model)
 
         graph_def_path = os.path.join(self.test_data_directory, self._testMethodName + "_after_tf_optimize.pb")
         utils.save_protobuf(graph_def_path, graph_def)
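With the fold_constant plumbing gone, ```tf_optimize``` is now always called with just the input names, output names, and the graph, as the hunks above show. A minimal sketch of the three-argument call follows; the toy graph is an assumption, built only so the call has something to optimize.

```python
# Minimal sketch, assuming TF 2.x; the tiny graph exists only to demonstrate the call.
import tensorflow as tf
from tf2onnx import tf_loader

with tf.Graph().as_default() as g:
    x = tf.compat.v1.placeholder(tf.float32, [2], name="input")
    tf.identity(x + tf.constant([1.0, 2.0]), name="output")
    graph_def = g.as_graph_def()

# fold_constant is gone; constant folding is always applied by the optimizer.
graph_def = tf_loader.tf_optimize(["input:0"], ["output:0"], graph_def)
print(len(graph_def.node), "nodes after tf_optimize")
```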

tests/test_backend.py

Lines changed: 2 additions & 3 deletions
@@ -174,7 +174,6 @@ def get_maxpoolwithargmax_getdata():
 class BackendTests(Tf2OnnxBackendTestBase):
     def _run_test_case(self, func, output_names_with_port, feed_dict, **kwargs):
         kwargs["convert_var_to_const"] = False
-        kwargs["constant_fold"] = False
         return self.run_test_case(func, feed_dict, [], output_names_with_port, **kwargs)
 
     def _test_expand_dims_known_rank(self, idx):
@@ -709,7 +708,7 @@ def func(x):
         feed_dict = {"input_1:0": x_val}
         input_names_with_port = ["input_1:0"]
         output_names_with_port = ["output:0"]
-        self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port, constant_fold=False,
+        self.run_test_case(func, feed_dict, input_names_with_port, output_names_with_port,
                            graph_validator=lambda g: (check_op_count(g, "RandomUniform", 0) and
                                                       check_op_count(g, "RandomUniformLike", 0)))
 
@@ -5229,7 +5228,7 @@ def func(query_holder):
             lookup_results = hash_table.lookup(query_holder)
             ret = tf.add(lookup_results, 0, name=_TFOUTPUT)
             return ret
-        self._run_test_case(func, [_OUTPUT], {_INPUT: query}, constant_fold=False, as_session=True)
+        self._run_test_case(func, [_OUTPUT], {_INPUT: query}, as_session=True)
         os.remove(filnm)
 
     @check_opset_min_version(8, "CategoryMapper")

tests/test_const_fold.py

Lines changed: 0 additions & 1 deletion
@@ -16,7 +16,6 @@
 class ConstantFoldingTests(Tf2OnnxBackendTestBase):
     def _run_test_case(self, func, output_names_with_port, feed_dict, **kwargs):
         kwargs["convert_var_to_const"] = False
-        kwargs["constant_fold"] = False
         return self.run_test_case(func, feed_dict, [], output_names_with_port, **kwargs)
 
     def test_concat(self):

tests/test_string_ops.py

Lines changed: 1 addition & 1 deletion
@@ -167,7 +167,7 @@ def func(text):
             return tokens_, begin_, end_, rows_
         # Fails due to Attempting to capture an EagerTensor without building a function.
         self._run_test_case(func, [_OUTPUT, _OUTPUT1, _OUTPUT2, _OUTPUT3],
-                            {_INPUT: text_val}, constant_fold=False, as_session=True)
+                            {_INPUT: text_val}, as_session=True)
 
 
 if __name__ == "__main__":

tests/test_tf_shape_inference.py

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ def _run_test_case(self, input_names_with_port, output_names_with_port):
             tf.import_graph_def(graph_def, name='')
 
         # optimize graph
-        graph_def = tf_optimize(input_names_with_port, output_names_with_port, sess.graph_def, True)
+        graph_def = tf_optimize(input_names_with_port, output_names_with_port, sess.graph_def)
 
         with tf_session() as sess:
             if self.config.is_debug_mode:

tf2onnx/convert.py

Lines changed: 7 additions & 8 deletions
@@ -83,8 +83,7 @@ def get_args():
     parser.add_argument("--verbose", "-v", help="verbose output, option is additive", action="count")
     parser.add_argument("--debug", help="debug mode", action="store_true")
     parser.add_argument("--output_frozen_graph", help="output frozen tf graph to file")
-    parser.add_argument("--fold_const", help="Deprecated. Constant folding is always enabled.",
-                        action="store_true")
+
     # experimental
     parser.add_argument("--inputs-as-nchw", help="transpose inputs as from nhwc to nchw")
     args = parser.parse_args()
@@ -353,9 +352,9 @@ def _is_legacy_keras_model(model):
     return False
 
 
-def _from_keras_tf1(model, input_signature=None, opset=None, custom_ops=None, custom_op_handlers=None,
-                    custom_rewriter=None, inputs_as_nchw=None, extra_opset=None, shape_override=None,
-                    target=None, large_model=False, output_path=None):
+def _from_keras_tf1(model, opset=None, custom_ops=None, custom_op_handlers=None, custom_rewriter=None,
+                    inputs_as_nchw=None, extra_opset=None, shape_override=None, target=None,
+                    large_model=False, output_path=None):
     """from_keras for tf 1.15"""
     input_names = [t.name for t in model.inputs]
     output_names = [t.name for t in model.outputs]
@@ -375,7 +374,7 @@ def _from_keras_tf1(model, input_signature=None, opset=None, custom_ops=None, cu
         frozen_graph, initialized_tables = tf_loader.freeze_session(sess, input_names, output_names, get_tables=True)
         with tf.Graph().as_default():
             tf.import_graph_def(frozen_graph, name="")
-        frozen_graph = tf_loader.tf_optimize(input_names, output_names, frozen_graph, False)
+        frozen_graph = tf_loader.tf_optimize(input_names, output_names, frozen_graph)
         model_proto, external_tensor_storage = _convert_common(
             frozen_graph,
             name=model.name,
@@ -423,8 +422,8 @@ def from_keras(model, input_signature=None, opset=None, custom_ops=None, custom_
         An ONNX model_proto and an external_tensor_storage dict.
     """
     if LooseVersion(tf.__version__) < "2.0":
-        return _from_keras_tf1(model, input_signature, opset, custom_ops, custom_op_handlers, custom_rewriter,
-                               inputs_as_nchw, extra_opset, shape_override, target, large_model, output_path)
+        return _from_keras_tf1(model, opset, custom_ops, custom_op_handlers, custom_rewriter, inputs_as_nchw,
+                               extra_opset, shape_override, target, large_model, output_path)
 
     old_out_names = _rename_duplicate_keras_model_names(model)
     from tensorflow.python.keras.saving import saving_utils as _saving_utils  # pylint: disable=import-outside-toplevel
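For callers of the public API nothing changes: ```from_keras``` keeps its ```input_signature``` parameter, and only the internal TF1 helper stops taking it. A minimal usage sketch follows; the Keras model, opset, and output path are illustrative assumptions, not values from this commit.

```python
# Minimal sketch of the public from_keras API; the model and opset are illustrative.
import tensorflow as tf
from tf2onnx import convert

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,), name="dense")])
spec = (tf.TensorSpec((None, 8), tf.float32, name="input"),)

# input_signature is still accepted here; it is simply no longer forwarded to _from_keras_tf1.
model_proto, external_tensor_storage = convert.from_keras(
    model, input_signature=spec, opset=13, output_path="model.onnx")
print([o.name for o in model_proto.graph.output])
```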

tf2onnx/rewriter/random_uniform.py

Lines changed: 0 additions & 1 deletion
@@ -39,7 +39,6 @@ def rewrite_random_uniform(g, ops):
     return ops
 
 
-# rewriter function when fold_const is enabled
 def rewrite_random_uniform_fold_const(g, ops):
     pattern = \
         OpTypePattern('Add', name='output', inputs=[
