From: "David Hildenbrand (Arm)" <david@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
	"David Hildenbrand (Arm)", Andrew Morton,
	Madhavan Srinivasan, Nicholas Piggin, Michael Ellerman,
	"Christophe Leroy (CS GROUP)", Muchun Song, Oscar Salvador,
	Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Jann Horn, Pedro Falcato,
	Paolo Bonzini, Dan Williams
Subject: [PATCH v2 1/4] mm: move vma_kernel_pagesize() from hugetlb to mm.h
Date: Mon, 9 Mar 2026 16:18:58 +0100
Message-ID: <20260309151901.123947-2-david@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260309151901.123947-1-david@kernel.org>
References: <20260309151901.123947-1-david@kernel.org>
X-Mailing-List: linuxppc-dev@lists.ozlabs.org
Precedence: list
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the past, only hugetlb had special "vma_kernel_pagesize()"
requirements, so it provided its own implementation. In commit
05ea88608d4e ("mm, hugetlbfs: introduce ->pagesize() to
vm_operations_struct") we generalized that approach by providing a
vm_ops->pagesize() callback to be used by device-dax.

Once device-dax started using that callback in commit c1d53b92b95c
("device-dax: implement ->pagesize() for smaps to report MMUPageSize"),
it was missed that CONFIG_DEV_DAX does not depend on hugetlb support. So
building a kernel with CONFIG_DEV_DAX but without CONFIG_HUGETLBFS would
not pick up that value.

Fix it by moving vma_kernel_pagesize() to mm.h, providing only a single
implementation. While at it, improve the kerneldoc a bit.

Ideally, we'd move vma_mmu_pagesize() to the header as well. However,
its __weak symbol might be overridden by a PPC variant in hugetlb code.
So let's leave it in there for now, as it really only matters for some
hugetlb oddities.

This was found by code inspection.
Fixes: c1d53b92b95c ("device-dax: implement ->pagesize() for smaps to report MMUPageSize")
Reviewed-by: Lorenzo Stoakes (Oracle)
Acked-by: Mike Rapoport (Microsoft)
Cc: Dan Williams
Signed-off-by: David Hildenbrand (Arm)
---
 include/linux/hugetlb.h |  7 -------
 include/linux/mm.h      | 20 ++++++++++++++++++++
 mm/hugetlb.c            | 17 -----------------
 3 files changed, 20 insertions(+), 24 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 65910437be1c..44c1848a2c21 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -777,8 +777,6 @@ static inline unsigned long huge_page_size(const struct hstate *h)
 	return (unsigned long)PAGE_SIZE << h->order;
 }
 
-extern unsigned long vma_kernel_pagesize(struct vm_area_struct *vma);
-
 extern unsigned long vma_mmu_pagesize(struct vm_area_struct *vma);
 
 static inline unsigned long huge_page_mask(struct hstate *h)
@@ -1177,11 +1175,6 @@ static inline unsigned long huge_page_mask(struct hstate *h)
 	return PAGE_MASK;
 }
 
-static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
-{
-	return PAGE_SIZE;
-}
-
 static inline unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
 {
 	return PAGE_SIZE;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 44e04a42fe77..227809790f1a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1307,6 +1307,26 @@ static inline bool vma_is_shared_maywrite(const struct vm_area_struct *vma)
 	return is_shared_maywrite(&vma->flags);
 }
 
+/**
+ * vma_kernel_pagesize - Default page size granularity for this VMA.
+ * @vma: The user mapping.
+ *
+ * The kernel page size specifies in which granularity VMA modifications
+ * can be performed. Folios in this VMA will be aligned to, and at least
+ * the size of the number of bytes returned by this function.
+ *
+ * The default kernel page size is not affected by Transparent Huge Pages
+ * being in effect.
+ *
+ * Return: The default page size granularity for this VMA.
+ */
+static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
+{
+	if (unlikely(vma->vm_ops && vma->vm_ops->pagesize))
+		return vma->vm_ops->pagesize(vma);
+	return PAGE_SIZE;
+}
+
 static inline struct vm_area_struct *vma_find(struct vma_iterator *vmi,
 					      unsigned long max)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1d41fa3dd43e..66eadfa9e958 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1017,23 +1017,6 @@ static pgoff_t vma_hugecache_offset(struct hstate *h,
 			(vma->vm_pgoff >> huge_page_order(h));
 }
 
-/**
- * vma_kernel_pagesize - Page size granularity for this VMA.
- * @vma: The user mapping.
- *
- * Folios in this VMA will be aligned to, and at least the size of the
- * number of bytes returned by this function.
- *
- * Return: The default size of the folios allocated when backing a VMA.
- */
-unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
-{
-	if (vma->vm_ops && vma->vm_ops->pagesize)
-		return vma->vm_ops->pagesize(vma);
-	return PAGE_SIZE;
-}
-EXPORT_SYMBOL_GPL(vma_kernel_pagesize);
-
 /*
  * Return the page size being used by the MMU to back a VMA. In the majority
  * of cases, the page size used by the kernel matches the MMU size. On
-- 
2.43.0