From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc:
 Greg Kroah-Hartman, patches@lists.linux.dev, "Ritesh Harjani (IBM)",
 Sourabh Jain, Madhavan Srinivasan, Sasha Levin
Subject: [PATCH 6.12 002/590] powerpc/book3s64/hugetlb: Fix disabling hugetlb when fadump is active
Date: Wed, 5 Feb 2025 14:35:57 +0100
Message-ID: <20250205134455.323235319@linuxfoundation.org>
In-Reply-To: <20250205134455.220373560@linuxfoundation.org>
References: <20250205134455.220373560@linuxfoundation.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Sourabh Jain

[ Upstream commit d629d7a8efc33d05d62f4805c0ffb44727e3d99f ]

Commit 8597538712eb ("powerpc/fadump: Do not use hugepages when fadump
is active") disabled hugetlb support when fadump is active by returning
early from hugetlbpage_init() in arch/powerpc/mm/hugetlbpage.c, so that
hpage_shift/HPAGE_SHIFT was never populated.

Later, commit 2354ad252b66 ("powerpc/mm: Update default hugetlb size
early") moved the setup of hpage_shift/HPAGE_SHIFT to early boot, which
inadvertently re-enabled hugetlb support when fadump is active.

Fix this by implementing hugepages_supported() on powerpc. This ensures
that disabling hugetlb for the fadump kernel is independent of
hpage_shift/HPAGE_SHIFT.
Fixes: 2354ad252b66 ("powerpc/mm: Update default hugetlb size early")
Reviewed-by: Ritesh Harjani (IBM)
Signed-off-by: Sourabh Jain
Signed-off-by: Madhavan Srinivasan
Link: https://patch.msgid.link/20241217074640.1064510-1-sourabhjain@linux.ibm.com
Signed-off-by: Sasha Levin
---
 arch/powerpc/include/asm/hugetlb.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index 18a3028ac3b6d..dad2e7980f245 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -15,6 +15,15 @@

 extern bool hugetlb_disabled;

+static inline bool hugepages_supported(void)
+{
+	if (hugetlb_disabled)
+		return false;
+
+	return HPAGE_SHIFT != 0;
+}
+#define hugepages_supported hugepages_supported
+
 void __init hugetlbpage_init_defaultsize(void);

 int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
-- 
2.39.5