
feat: For QDQ models, disable constant folding and ensure the conv weights are transposed without using an explicit op #1839


Closed
wants to merge 15 commits

Conversation

@peri044 (Contributor) commented Feb 8, 2022

For QDQ models, disable constant folding and ensure the conv weights are transposed without using an explicit op

@peri044 changed the title from "For QDQ models, disable constant folding and ensure the conv weights are transposed without using an explicit op" to "feat: For QDQ models, disable constant folding and ensure the conv weights are transposed without using an explicit op" on Feb 8, 2022
Signed-off-by: Dheeraj Peri <[email protected]>
@srihari-humbarwadi left a comment

LGTM

]
# This flag disables folding QDQ nodes around constants in the network (eg: around conv/FC weights)
rewrite_options.experimental_disable_folding_quantization_emulation = True
@fatcat-z (Collaborator) left a comment

This does not exist in TensorFlow 1.x, which we also need to support.

Please add a validation (is_tf2()) here to make sure this will only be called when the TF version is 2.x, so the CI can pass.

@peri044 (Contributor, Author) replied

Thank you @fatcat-z. Added it now.
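
For context, a minimal sketch of the guard being discussed, assuming the surrounding code matches tf2onnx's tf_optimize_grappler(); is_tf2() comes from tf2onnx.tf_loader, and the rewriter field only exists in newer TF 2.x protos:

# Sketch only, not the exact PR diff.
from tensorflow.core.protobuf import config_pb2
from tf2onnx.tf_loader import is_tf2

config = config_pb2.ConfigProto()
rewrite_options = config.graph_options.rewrite_options
if is_tf2():
    # Disable folding of QDQ nodes around constants (e.g. conv/FC weights);
    # this field is absent from the TF 1.x RewriterConfig proto.
    rewrite_options.experimental_disable_folding_quantization_emulation = True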

val = np.transpose(val, permutation)
weights_node.set_tensor_value(val)
need_transpose = False
#Change the quantization axis for Q and DQ node accordingly
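
For context, a self-contained numpy illustration (an assumed example, not the PR's exact code) of why the quantization axis must be remapped after the transpose: per-channel scales attach to one tensor dimension, and np.transpose moves that dimension:

import numpy as np

weights = np.zeros((3, 3, 8, 16))   # HWCN conv weights, 16 output channels
permutation = (3, 2, 0, 1)          # HWCN -> NCHW, the layout ONNX Conv expects
axis = 3                            # per-channel quantization over N

weights = np.transpose(weights, permutation)
axis = permutation.index(axis)      # the N dimension now sits at position 0
assert weights.shape[axis] == 16    # scales still line up with the channels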
@fatcat-z (Collaborator) left a comment

Please update it to "# Change ..." for the format.

@peri044 (Contributor, Author) replied

@fatcat-z Done. There are some tflite errors which are unrelated to my changes. Are those supposed to pass?

@fatcat-z (Collaborator) replied

> @fatcat-z Done. There are some tflite errors which are unrelated to my changes. Are those supposed to pass?

Those tflite errors are not related to your changes; they are caused by the latest tensorflow version, so you can ignore them.

A Contributor left a comment

Hm, the tensorflow version did not change, so this must be something else that changed.
While the tflite failures are unrelated to this PR, somebody still needs to fix them, since CI won't let us merge this to master if it breaks the build.

Signed-off-by: Dheeraj Peri <[email protected]>
@lgtm-com (bot) commented Feb 10, 2022

This pull request introduces 1 alert and fixes 3 when merging acb77f4 into 0da3c78 - view on LGTM.com

new alerts:

  • 1 for Syntax error

fixed alerts:

  • 2 for Except block handles 'BaseException'
  • 1 for Variable defined multiple times

@@ -684,8 +684,12 @@ def tf_optimize_grappler(input_names, output_names, graph_def, fold_constant=None
     # depends on so for now don't turn this on, fold_constant is always enabled now.
     rewrite_options.optimizers[:] = [
         # 'pruning', 'constfold', 'arithmetic', 'dependency', 'function',
-        'constfold', 'function'
+        'function', 'dependency'
@fatcat-z (Collaborator) commented Feb 10, 2022

Why did you remove the constfold optimizer for all scenarios here? I believe we only want to disable it for QDQ models, right?

Probably we should bring it back and add one more check below, besides the version check.
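
A rough sketch of that suggestion, using a hypothetical is_qdq_model flag (the real tf2onnx code path differs): keep constfold for ordinary graphs and drop it only for QDQ graphs.

def grappler_optimizers(is_qdq_model):
    # Constant folding would fold away the QDQ nodes around conv/FC weights,
    # so skip it only when the graph is quantized.
    if is_qdq_model:
        return ['function', 'dependency']
    return ['constfold', 'function']   # the pre-PR default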

peri044 and others added 4 commits on February 10, 2022 02:04:

  • 'optimizers' variable is in 'optimizer.optimize_graph' in '_convert_common'
  • Fix concat issue by enabling access to the 'optimizers' variable
@CLAassistant (bot) commented
CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ peri044
❌ Gwena - Workstation


Gwena - Workstation seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@fatcat-z (Collaborator) commented:

@peri044

A PR (#1843) has been merged into the master branch to resolve some CI issues unrelated to your code. Please sync this change and check the results again.

@gcunhase commented:

@fatcat-z

It seems the last issue is with my username (it's Gwena-Workstation instead of gcunhase), so the sign-off is not working. How do you recommend we fix this? The DCO details say to follow the steps there only if I'm the only author of the commits in this branch, which is not true. TIA.

@fatcat-z (Collaborator) replied:

> @fatcat-z
>
> It seems the last issue is with my username (it's Gwena-Workstation instead of gcunhase), so the sign-off is not working. How do you recommend we fix this? The DCO details say to follow the steps there only if I'm the only author of the commits in this branch, which is not true. TIA.

We don't have much experience with such a case yet. Is it possible for you to sync with @peri044 and create a new PR, containing all the latest changes of this one and submitted by @peri044, so that your username won't cause trouble anymore?

@peri044 (Contributor, Author) commented Feb 19, 2022

@fatcat-z Closing this in favor of #1856. Hopefully everything passes now and it's ready to merge. Thanks!!

@peri044 closed this Feb 19, 2022
Labels: None yet
Projects: None yet
7 participants