Enable Pytorch Attention for AMD gfx1150 (Strix Point)#12973
comfyanonymous merged 1 commit into Comfy-Org:master from
Conversation
Walkthrough
The change adds gfx1150 to the AMD GPU architecture list in the attention code path, alongside the existing gfx1151 support. This expands which AMD GPUs can use PyTorch attention when aotriton is available. The modification is minimal (one line added, one removed) and introduces no other behavioral or control-flow changes.
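The kind of gate described above can be sketched as follows. This is an illustrative reconstruction, not ComfyUI's actual code: the list name, function name, and arch strings other than gfx1150/gfx1151 are assumptions.

```python
# Hypothetical sketch of an arch-based gate for enabling PyTorch attention
# on AMD GPUs when aotriton is available; names are illustrative only.
AOTRITON_SUPPORTED_ARCHS = ["gfx1151"]

def supports_pytorch_attention(arch_name: str) -> bool:
    # An arch qualifies if it matches any entry in the supported list.
    return any(arch_name.startswith(a) for a in AOTRITON_SUPPORTED_ARCHS)

# Before the change, gfx1150 falls through to the default attention path.
assert not supports_pytorch_attention("gfx1150")

# The PR's one-line addition:
AOTRITON_SUPPORTED_ARCHS.append("gfx1150")
assert supports_pytorch_attention("gfx1150")
```

The point of a list-based check like this is that enabling another architecture is a one-line diff, which matches the walkthrough's "one line added and one removed" description.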
I've been using --enable-pytorch-cross-attention on gfx1150 for a few months now (after ROCm 7.1.1 came out for Windows with AOTriton). For SDXL it's 15-20% faster than the default attention for me. Seems OK to enable by default?
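For reference, the workaround the commenter describes is passing the flag explicitly at launch. The flag name comes from the comment itself; `main.py` as the entry point is an assumption about the typical ComfyUI invocation. After this PR, gfx1150 with aotriton should select PyTorch attention without the flag.

```shell
# Manually force PyTorch cross-attention (pre-merge workaround on gfx1150).
python main.py --enable-pytorch-cross-attention
```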