Update Resize (opset 11) layer to support scales option when dims are… #2137


Merged (3 commits, Mar 18, 2023)
43 changes: 30 additions & 13 deletions tf2onnx/onnx_opset/nn.py
@@ -1391,20 +1391,37 @@ def version_11(cls, ctx, node, **kwargs):
         else:
             mode = "nearest"
         roi = ctx.make_const(utils.make_name("roi"), np.array([]).astype(np.float32))
-        const_zero = ctx.make_const(utils.make_name("const_zero"), np.array([0]).astype(np.int64))
-        const_two = ctx.make_const(utils.make_name("const_two"), np.array([2]).astype(np.int64))
-        const_empty_float = ctx.make_const(utils.make_name("const_empty_float"), np.array([]).astype(np.float32))
         input_nchw = ctx.make_node("Transpose", [node.input[0]], {"perm": constants.NHWC_TO_NCHW})
-        shape_input = ctx.make_node("Shape", [input_nchw.output[0]])
-        sliced_shape = ctx.make_node("Slice", [shape_input.output[0], const_zero.output[0], const_two.output[0]])
-        size_int64 = ctx.make_node("Cast", [node.input[1]], attr={"to": onnx_pb.TensorProto.INT64})
-        concat_shape = ctx.make_node("Concat", [sliced_shape.output[0], size_int64.output[0]], {'axis': 0})
-        resize_inputs = [
-            input_nchw.output[0],
-            roi.output[0],
-            const_empty_float.output[0],
-            concat_shape.output[0]
-        ]
+        shape = ctx.get_shape(node.input[0])
+        if shape and shape[2] != -1 and shape[1] != -1 and node.inputs[1].is_const():
Collaborator:
The shape comes from the input of Resize. Is it possible the rank is less than 3 so shape[2] doesn't exist?
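To make the concern concrete, here is a tiny sketch (with hypothetical values) of how a rank guard would short-circuit before the risky index:

```python
# Hypothetical illustration of the reviewer's concern: if the shape
# returned for the Resize input had rank < 3, shape[2] would raise
# IndexError. A len() guard short-circuits before that access.
shape = [1, 3]  # assumed rank-2 input shape
safe = bool(shape and len(shape) == 4 and shape[2] != -1)
print(safe)  # False: len(shape) == 4 fails, so shape[2] is never evaluated
```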

Contributor Author:
Hi,
This line is a copy of a condition from the version 10 handler... but you are right, if that happens there will be an issue.
From my understanding, shape is (B, C, H, W), so you mean the graph input could have a different number of dimensions. A condition should then check that shape has 4 dims before entering this block, for example:
if shape and len(shape) == 4 and shape[2] != -1 and shape[1] != -1 and node.inputs[1].is_const()
or by "playing" with negative array indexes:
if shape and shape[-2] != -1 and shape[-1] != -1 and node.inputs[1].is_const()

Thanks again for your review,

Collaborator:

Sorry, my mistake: such a check is not necessary, since that shape is already required by these 3 ops.

Contributor Author:

Do you mean this is OK for you? I do not know exactly how the process works. Do you expect an action from me, or will the merge be done by you once the tests have passed?

+            target_shape = node.inputs[1].get_tensor_value()
+            n, h, w, c = shape
+            nh, nw = target_shape
+            if "sizes" in node.attr:
+                sizes_val = np.array([1.0, 1.0, nh, nw]).astype(np.int64)
+                resize_params = ctx.make_const(utils.make_name("sizes"), sizes_val, raw=False)
+            else:  # scales
+                scale_val = np.array([1.0, 1.0, float(nh) / h, float(nw) / w]).astype(np.float32)
+                resize_params = ctx.make_const(utils.make_name("scales"), scale_val, raw=False)
+            resize_inputs = [
+                input_nchw.output[0],
+                roi.output[0],
+                resize_params.output[0]
+            ]
+        else:
+            const_zero = ctx.make_const(utils.make_name("const_zero"), np.array([0]).astype(np.int64))
+            const_two = ctx.make_const(utils.make_name("const_two"), np.array([2]).astype(np.int64))
+            const_empty_float = ctx.make_const(utils.make_name("const_empty_float"), np.array([]).astype(np.float32))
+            shape_input = ctx.make_node("Shape", [input_nchw.output[0]])
+            sliced_shape = ctx.make_node("Slice", [shape_input.output[0], const_zero.output[0], const_two.output[0]])
+            size_int64 = ctx.make_node("Cast", [node.input[1]], attr={"to": onnx_pb.TensorProto.INT64})
+            concat_shape = ctx.make_node("Concat", [sliced_shape.output[0], size_int64.output[0]], {'axis': 0})
+            resize_inputs = [
+                input_nchw.output[0],
+                roi.output[0],
+                const_empty_float.output[0],
+                concat_shape.output[0]
+            ]
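For context on the branch above: when the input shape is fully static and the target size is a constant, the converter can emit per-axis Resize scales in NCHW order at conversion time. A minimal numpy sketch of that arithmetic (the helper name is hypothetical):

```python
import numpy as np

def nchw_resize_scales(shape_nhwc, target_hw):
    # Mirrors the scale_val computed in the diff above: batch and channel
    # axes stay at 1.0, spatial scales are target / source for H and W.
    n, h, w, c = shape_nhwc
    nh, nw = target_hw
    return np.array([1.0, 1.0, float(nh) / h, float(nw) / w], dtype=np.float32)

# H halved (scale 0.5), W doubled (scale 2.0)
scales = nchw_resize_scales((1, 224, 224, 3), (112, 448))
```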
         transformation_mode = "asymmetric"
         nearest_mode = "floor"
         if "align_corners" in node.attr and node.attr["align_corners"].i:
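Per the ONNX Resize-11 specification, these attributes select how output coordinates map back to input coordinates. A quick sketch of the asymmetric default used here and the align_corners variant the attribute presumably switches to (formulas from the spec, function names hypothetical):

```python
def asymmetric(x_resized, scale):
    # coordinate_transformation_mode = "asymmetric":
    # x_original = x_resized / scale
    return x_resized / scale

def align_corners(x_resized, length_resized, length_original):
    # coordinate_transformation_mode = "align_corners":
    # x_original = x_resized * (length_original - 1) / (length_resized - 1)
    if length_resized == 1:
        return 0.0
    return x_resized * (length_original - 1) / (length_resized - 1)

print(asymmetric(3, 2.0))      # 1.5
print(align_corners(3, 4, 7))  # 6.0
```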