Date: Sun, 5 Apr 2026 10:07:14 +0300
From: Mike Rapoport
To: Muchun Song
Cc: Andrew Morton, David Hildenbrand, linux-mm@kvack.org, Muchun Song,
    Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka,
    Suren Baghdasaryan, Michal Hocko, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd()
References: <20260404122105.3989557-1-songmuchun@bytedance.com>
 <20260404122105.3989557-2-songmuchun@bytedance.com>
In-Reply-To: <20260404122105.3989557-2-songmuchun@bytedance.com>

Hi,

On Sat, Apr 04, 2026 at 08:20:54PM +0800, Muchun Song wrote:
> The two weak functions are currently no-ops on every architecture,
> forcing each platform that needs them to duplicate the same handful
> of lines. Provide a generic implementation:
>
> - vmemmap_set_pmd() simply sets a huge PMD with PAGE_KERNEL protection.
>
> - vmemmap_check_pmd() verifies that the PMD is present and leaf,
>   then calls the existing vmemmap_verify() helper.
>
> Architectures that need special handling can continue to override the
> weak symbols; everyone else gets the standard version for free.
>
> Signed-off-by: Muchun Song
> ---
>  mm/sparse-vmemmap.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 6eadb9d116e4..1eb990610d50 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -391,12 +391,17 @@ int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
>  void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
>  				      unsigned long addr, unsigned long next)
>  {
> +	BUG_ON(!pmd_set_huge(pmd, virt_to_phys(p), PAGE_KERNEL));

Do we have to crash the kernel here? Wouldn't it be better to make
vmemmap_set_pmd() return an error and have vmemmap_populate_hugepages()
fall back to base pages when vmemmap_set_pmd() fails?

>  }
>
>  int __weak __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
>  				       unsigned long addr, unsigned long next)
>  {
> -	return 0;
> +	if (!pmd_leaf(pmdp_get(pmd)))
> +		return 0;
> +	vmemmap_verify((pte_t *)pmd, node, addr, next);
> +
> +	return 1;
>  }
>
>  int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
> --
> 2.20.1
>

-- 
Sincerely yours,
Mike.
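
P.S. For illustration, a minimal sketch of the fallback I have in mind.
This is untested, and the -EINVAL error value and the
vmemmap_populate_basepages() call are my assumptions, not something
this patch proposes:

	int __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
					     unsigned long addr, unsigned long next)
	{
		/* pmd_set_huge() returns 0 when it cannot install the mapping. */
		if (!pmd_set_huge(pmd, virt_to_phys(p), PAGE_KERNEL))
			return -EINVAL;

		return 0;
	}

	/* ... and then in the PMD loop of vmemmap_populate_hugepages(): */
	if (vmemmap_set_pmd(pmd, p, node, addr, next)) {
		/* Huge mapping failed, populate this range with base pages. */
		int err = vmemmap_populate_basepages(addr, next, node, altmap);

		if (err)
			return err;
	}

That way a pmd_set_huge() failure only costs us the huge mapping for
the affected range instead of taking the whole machine down.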