
Commit 4cf69f0

lwxFormers Bot authored and committed
Version 0.0.30
ghstack-source-id: b4660792cfd868da3d31a685bc7a43d3679b77c9
Pull Request resolved: fairinternal/xformers#1354
__original_commit__ = fairinternal/xformers@80dc26c
1 parent a5ac44d commit 4cf69f0

File tree

1 file changed: +8 -1 lines changed


CHANGELOG.md

Lines changed: 8 additions & 1 deletion
```diff
@@ -4,11 +4,18 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
-## [0.0.30] - 2025-??-??
+## [0.0.30] - 2025-04-28
 Pre-built binary wheels are available for PyTorch 2.7.0. Following PyTorch, we build wheels for CUDA 11.8, 12.6, and 12.8 only (we no longer build for CUDA 12.4).
 xFormers now requires PyTorch >= 2.7
 ### Added
 - [fMHA] Added support for local attention on the Flash3 backend (H100)
+- [fMHA] Added a new paged gappy attention bias
+### Improved
+- [fMHA] The FlashAttention3 backend now ships with more head dimensions to support MLA, and with a FLOPs formula in order to be compatible with PyTorch's partitioner-based automatic activation checkpointing
+- The fused operators for sequence parallelism were migrated to PyTorch's SymmetricMemory
+- The profiler prepends the traces' filenames with the rank of the process when doing distributed training
+### Removed
+- Removed documentation for legacy unmaintained components
 
 ## [0.0.29.post2] - 2025-01-31
 Pre-built binary wheels are available for PyTorch 2.6.0. Following PyTorch, we build wheels for CUDA 11.8, 12.4, and 12.6 only (we no longer build for CUDA 12.1).
```
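For context on the local-attention item in this release, here is a minimal, hypothetical usage sketch that is not part of this commit. It assumes the public `xformers.ops.memory_efficient_attention` entry point and the existing `LocalAttentionFromBottomRightMask` bias class; which kernel actually serves the call (e.g. the Flash3 backend on H100) is selected automatically based on hardware, dtype, and head dimension.

```python
# Hypothetical sketch (not from this commit): memory-efficient attention with a
# sliding-window (local attention) bias. Backend selection, e.g. Flash3 on H100,
# happens automatically based on the GPU, dtype, and head dimension.
import torch
import xformers.ops as xops
from xformers.ops.fmha.attn_bias import LocalAttentionFromBottomRightMask

B, M, H, K = 2, 1024, 8, 128  # batch, sequence length, heads, head dim
q = torch.randn(B, M, H, K, device="cuda", dtype=torch.bfloat16)
k = torch.randn(B, M, H, K, device="cuda", dtype=torch.bfloat16)
v = torch.randn(B, M, H, K, device="cuda", dtype=torch.bfloat16)

# Each query attends to at most 256 keys to its left and none to its right.
bias = LocalAttentionFromBottomRightMask(window_left=256, window_right=0)

out = xops.memory_efficient_attention(q, k, v, attn_bias=bias)
print(out.shape)  # torch.Size([2, 1024, 8, 128])
```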
