
Commit 464c7ff

kvaneesh authored and torvalds committed
mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.
When scanning for movable pages, filter out hugetlb pages if hugepage migration is not supported.  Without this we hit an infinite loop in __offline_pages() where we do

	pfn = scan_movable_pages(start_pfn, end_pfn);
	if (pfn) { /* We have movable pages */
		ret = do_migrate_range(pfn, end_pfn);
		goto repeat;
	}

Fix this by checking hugepage_migration_supported() both in has_unmovable_pages(), which is the primary backoff mechanism for page offlining, and, for consistency, also in scan_movable_pages(), because it makes no sense to return the pfn of a non-migratable huge page.

This issue was revealed by, but not caused by, 72b39cf ("mm, memory_hotplug: do not fail offlining too early").

Link: http://lkml.kernel.org/r/[email protected]
Fixes: 72b39cf ("mm, memory_hotplug: do not fail offlining too early")
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Reported-by: Haren Myneni <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 04b8e94 commit 464c7ff

2 files changed, +6 −1 lines changed

mm/memory_hotplug.c

Lines changed: 2 additions & 1 deletion

@@ -1333,7 +1333,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 		if (__PageMovable(page))
 			return pfn;
 		if (PageHuge(page)) {
-			if (page_huge_active(page))
+			if (hugepage_migration_supported(page_hstate(page)) &&
+			    page_huge_active(page))
 				return pfn;
 			else
 				pfn = round_up(pfn + 1,

mm/page_alloc.c

Lines changed: 4 additions & 0 deletions

@@ -7708,6 +7708,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
+
+			if (!hugepage_migration_supported(page_hstate(page)))
+				goto unmovable;
+
 			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
 			continue;
 		}
