From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 810EB33CC2;
	Fri, 24 Nov 2023 19:18:43 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key)
	header.d=linuxfoundation.org header.i=@linuxfoundation.org
	header.b="MMZbc8v5"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0F805C433C9;
	Fri, 24 Nov 2023 19:18:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1700853523;
	bh=P+zWD0+IK9HK/RjqL8E+HnY0em+dy9299Su7AR7l7Wc=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=MMZbc8v5hD0OazcP6j0Ve5ivpFeb9CBlIab6MFhOVQ0iIOi7dUcSzc22T9Z9EhhqU
	 bBV1qeYpSfplrp4zP/E/kEUquh6m8Dt+yXTLlq1y5xKMgajsCVPytXKVpm4Rj8oJ35
	 ZMtB27J4VcGDIi9uUmJRIIzmdWGWJA7nzfmYyMMY=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	Zi Yan,
	Muchun Song,
	David Hildenbrand,
	"Matthew Wilcox (Oracle)",
	Mike Kravetz,
	"Mike Rapoport (IBM)",
	Thomas Bogendoerfer,
	Andrew Morton
Subject: [PATCH 5.15 223/297] mm/memory_hotplug: use pfn math in place of direct struct page manipulation
Date: Fri, 24 Nov 2023 17:54:25 +0000
Message-ID: <20231124172008.005622543@linuxfoundation.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20231124172000.087816911@linuxfoundation.org>
References: <20231124172000.087816911@linuxfoundation.org>
User-Agent: quilt/0.67
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Zi Yan

commit 1640a0ef80f6d572725f5b0330038c18e98ea168 upstream.
When dealing with hugetlb pages, manipulating struct page pointers directly
can yield the wrong struct page, since struct page is not guaranteed to be
contiguous on SPARSEMEM without VMEMMAP.  Use pfn calculation to handle it
properly.

Without the fix, a wrong number of pages might be skipped.  Since skip
cannot be negative, scan_movable_pages() will end early and might miss a
movable page, returning -ENOENT.  This might cause offline_pages() to
fail.  No bug is reported.  The fix comes from code inspection.

Link: https://lkml.kernel.org/r/20230913201248.452081-4-zi.yan@sent.com
Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
Signed-off-by: Zi Yan
Reviewed-by: Muchun Song
Acked-by: David Hildenbrand
Cc: Matthew Wilcox (Oracle)
Cc: Mike Kravetz
Cc: Mike Rapoport (IBM)
Cc: Thomas Bogendoerfer
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 mm/memory_hotplug.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1677,7 +1677,7 @@ static int scan_movable_pages(unsigned l
 		 */
 		if (HPageMigratable(head))
 			goto found;
-		skip = compound_nr(head) - (page - head);
+		skip = compound_nr(head) - (pfn - page_to_pfn(head));
 		pfn += skip - 1;
 	}
 	return -ENOENT;