
[TorchToLinalg] Support lowering AtenReplicationPad3d to linalg #4233

Open · wants to merge 4 commits into base: main

Conversation

vinitdeodhar (Contributor)

Add support for AtenReplicationPad3d in the torch dialect and lower it to the linalg backend.
AtenReplicationPad3d is lowered using a sequence of tensor.extract_slice and tensor.concat operations, consistent with the existing lowerings of AtenReplicationPad1d and AtenReplicationPad2d for the linalg backend.
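For context, a minimal sketch of the slice-and-concat pattern for padding the low side of one dimension, written against MLIR's C++ builder API. The helper name `replicatePadLow` and its exact shape are illustrative assumptions, not the code in this PR:

```cpp
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Illustrative helper (not the PR's actual code): pad the low side of
// `input` along `dim` by `padWidth` elements by replicating the boundary
// plane, using the same tensor.extract_slice + tensor.concat pattern as
// the existing AtenReplicationPad1d/2d lowerings.
static Value replicatePadLow(OpBuilder &b, Location loc, Value input,
                             int64_t dim, int64_t padWidth) {
  auto type = cast<RankedTensorType>(input.getType());
  int64_t rank = type.getRank();

  // Extract the boundary plane: full extent in every dimension except
  // `dim`, where the slice size is 1.
  SmallVector<OpFoldResult> offsets(rank, b.getIndexAttr(0));
  SmallVector<OpFoldResult> strides(rank, b.getIndexAttr(1));
  SmallVector<OpFoldResult> sizes = tensor::getMixedSizes(b, loc, input);
  sizes[dim] = b.getIndexAttr(1);
  Value slice =
      b.create<tensor::ExtractSliceOp>(loc, input, offsets, sizes, strides);

  // Replicate the plane `padWidth` times, then prepend it to the input.
  SmallVector<Value> pieces(padWidth, slice);
  pieces.push_back(input);
  return b.create<tensor::ConcatOp>(loc, /*dim=*/dim, pieces);
}
```

A full 3d lowering would apply this pattern on both the low and high sides of each of the three padded dimensions, slicing at the far boundary for the high side.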

@sahas3 requested review from zjgarvey and sahas3 on June 17, 2025 at 14:11.
@vinitdeodhar (Contributor, Author)

Could you please review this, @zjgarvey?

@vinitdeodhar (Contributor, Author)

@zjgarvey, @zahidwx, can you please review?

@vinitdeodhar changed the title from "Support lowering AtenReplicationPad3d to linalg" to "[TorchToLinalg] Support lowering AtenReplicationPad3d to linalg" on Jul 10, 2025.
@vinitdeodhar (Contributor, Author)

Hi @zjgarvey, can you please review this?

@zjgarvey (Collaborator)

Hey, @vinitdeodhar. Would you mind addressing the CI failure first? Let me know if you have trouble debugging and I can help you out.

@vinitdeodhar (Contributor, Author)

> Hey, @vinitdeodhar. Would you mind addressing the CI failure first? Let me know if you have trouble debugging and I can help you out.

Hi @zjgarvey, the CI failures do not seem to be related to this change; they affect other PRs submitted around the same time too. I don't have access rights to rerun the job and try again. Here is the error thrown:
torch._dynamo.exc.InternalTorchDynamoError: TimeoutError: Timeout

@zjgarvey (Collaborator)

> Hi @zjgarvey, the CI failures do not seem to be related to this change; they affect other PRs submitted around the same time too. I don't have access rights to rerun the job and try again. Here is the error thrown: torch._dynamo.exc.InternalTorchDynamoError: TimeoutError: Timeout

Can you sync the branch with main so we can re-run?

@vinitdeodhar (Contributor, Author)

> > Hi @zjgarvey, the CI failures do not seem to be related to this change; they affect other PRs submitted around the same time too. I don't have access rights to rerun the job and try again. Here is the error thrown: torch._dynamo.exc.InternalTorchDynamoError: TimeoutError: Timeout
>
> Can you sync the branch with main so we can re-run?

Thanks! I synced the branch and that resolved the failures.

@zjgarvey (Collaborator) left a comment:


Thanks for the reminder ping. This mostly looks good to me, just some nit comments.

Comment on lines +434 to +437
// CHECK-DAG: %[[INT0:.*]] = torch.constant.int 0
// CHECK-DAG: %[[INT1:.*]] = torch.constant.int 1
// CHECK-DAG: %[[INT3:.*]] = torch.constant.int 3
// CHECK: %[[PAD_LIST:.*]] = torch.prim.ListConstruct %[[INT0]], %[[INT1]], %[[INT3]], %[[INT1]], %[[INT0]], %[[INT3]] : (!torch.int, !torch.int, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.list<int>
@zjgarvey (Collaborator):

I don't think you should check these (they just end up getting DCE'd anyway).

Comment on lines +467 to +468
SmallVector<Value> slices(tileWidth, slice);
return rewriter.create<tensor::ConcatOp>(loc, dimension, slices);
@zjgarvey (Collaborator):

I'm sure it folds later in the compiler, but it would be simple enough to add a check for tileWidth == 1 so we don't generate redundant concats (e.g. as seen in the lit test).
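A minimal sketch of that guard, reusing the names from the quoted snippet (assumed surrounding context, not the final code):

```cpp
// If the tile is only needed once, the boundary slice itself is the
// result; skip emitting a single-operand tensor.concat.
if (tileWidth == 1)
  return slice;
SmallVector<Value> slices(tileWidth, slice);
return rewriter.create<tensor::ConcatOp>(loc, dimension, slices);
```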
