From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Aneesh Kumar K.V",
	Haren Myneni, Michal Hocko, Naoya Horiguchi, Andrew Morton,
	Linus Torvalds
Subject: [PATCH 4.18 007/158] mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.
Date: Tue, 18 Sep 2018 00:40:37 +0200
Message-Id: <20180917211710.921267739@linuxfoundation.org>
In-Reply-To: <20180917211710.383360696@linuxfoundation.org>
References: <20180917211710.383360696@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 

4.18-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Aneesh Kumar K.V

commit 464c7ffbcb164b2e5cebfa406b7fc6cdb7945344 upstream.

When scanning for movable pages, filter out Hugetlb pages if hugepage
migration is not supported.  Without this we hit an infinite loop in
__offline_pages() where we do

	pfn = scan_movable_pages(start_pfn, end_pfn);
	if (pfn) { /* We have movable pages */
		ret = do_migrate_range(pfn, end_pfn);
		goto repeat;
	}

Fix this by checking hugepage_migration_supported both in
has_unmovable_pages, which is the primary backoff mechanism for page
offlining, and, for consistency reasons, also in scan_movable_pages,
because it doesn't make any sense to return a pfn to a non-migratable
huge page.

This issue was revealed by, but not caused by, 72b39cfc4d75 ("mm,
memory_hotplug: do not fail offlining too early").

Link: http://lkml.kernel.org/r/20180824063314.21981-1-aneesh.kumar@linux.ibm.com
Fixes: 72b39cfc4d75 ("mm, memory_hotplug: do not fail offlining too early")
Signed-off-by: Aneesh Kumar K.V
Reported-by: Haren Myneni
Acked-by: Michal Hocko
Reviewed-by: Naoya Horiguchi
Cc: 
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/memory_hotplug.c |    3 ++-
 mm/page_alloc.c     |    4 ++++
 2 files changed, 6 insertions(+), 1 deletion(-)

--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1341,7 +1341,8 @@ static unsigned long scan_movable_pages(
 		if (__PageMovable(page))
 			return pfn;
 		if (PageHuge(page)) {
-			if (page_huge_active(page))
+			if (hugepage_migration_supported(page_hstate(page)) &&
+			    page_huge_active(page))
 				return pfn;
 			else
 				pfn = round_up(pfn + 1,
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7649,6 +7649,10 @@ bool has_unmovable_pages(struct zone *zo
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
+
+			if (!hugepage_migration_supported(page_hstate(page)))
+				goto unmovable;
+
 			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
 			continue;
 		}
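
The failure mode described in the commit message can be illustrated outside
the kernel.  The sketch below is a minimal, self-contained userspace C
program, not kernel code: struct fake_page, scan_movable() and migrate() are
hypothetical stand-ins for struct page, scan_movable_pages() and
do_migrate_range().  It shows how returning the pfn of a huge page that can
never be migrated makes the offline retry loop spin forever, and how a
hugepage_migration_supported()-style check in the scan breaks the cycle.

/*
 * Illustrative sketch only -- not kernel code.  Models the __offline_pages()
 * retry loop with made-up data structures.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	bool is_huge;
	bool huge_active;
	bool migration_supported;	/* per-hstate/arch dependent in the kernel */
	bool offlined;
};

/* Simplified stand-in for scan_movable_pages(): index of first movable page. */
static long scan_movable(struct fake_page *pages, long n)
{
	for (long i = 0; i < n; i++) {
		struct fake_page *p = &pages[i];

		if (p->offlined)
			continue;
		if (p->is_huge) {
			/* The fix: skip huge pages whose migration is unsupported. */
			if (p->migration_supported && p->huge_active)
				return i;
			continue;
		}
		return i;
	}
	return -1;		/* nothing left to migrate */
}

/* Simplified stand-in for do_migrate_range(): fails for unsupported huge pages. */
static void migrate(struct fake_page *p)
{
	if (!p->is_huge || p->migration_supported)
		p->offlined = true;
	/* else: migration fails and the page stays where it is */
}

int main(void)
{
	struct fake_page pages[] = {
		{ .is_huge = false },
		{ .is_huge = true, .huge_active = true, .migration_supported = false },
	};
	long n = sizeof(pages) / sizeof(pages[0]);
	long pfn;

	/*
	 * Without the migration_supported check in scan_movable(), index 1
	 * would be returned forever, migrate() would never offline it, and
	 * this loop would not terminate -- the infinite loop the patch fixes.
	 */
	while ((pfn = scan_movable(pages, n)) >= 0)
		migrate(&pages[pfn]);

	printf("offlining finished\n");
	return 0;
}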