Date: Fri, 20 Dec 2024 18:35:50 +0000
From: Catalin Marinas
To: Zhenhua Huang
Cc: will@kernel.org, ardb@kernel.org, ryan.roberts@arm.com, mark.rutland@arm.com,
    joey.gouly@arm.com, dave.hansen@linux.intel.com, akpm@linux-foundation.org,
    chenfeiyang@loongson.cn, chenhuacai@kernel.org, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] arm64: mm: implement vmemmap_check_pmd for arm64
Message-ID:
References: <20241209094227.1529977-1-quic_zhenhuah@quicinc.com>
 <20241209094227.1529977-3-quic_zhenhuah@quicinc.com>
In-Reply-To: <20241209094227.1529977-3-quic_zhenhuah@quicinc.com>

On Mon, Dec 09, 2024 at 05:42:27PM +0800, Zhenhua Huang wrote:
> vmemmap_check_pmd() is used to determine whether base pages need to be
> populated. Implement it for the arm64 architecture.
> 
> Fixes: 2045a3b8911b ("mm/sparse-vmemmap: generalise vmemmap_populate_hugepages()")
> Signed-off-by: Zhenhua Huang
> ---
>  arch/arm64/mm/mmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index fd59ee44960e..41c7978a92be 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1169,7 +1169,8 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
>  				unsigned long addr, unsigned long next)
>  {
>  	vmemmap_verify((pte_t *)pmdp, node, addr, next);
> -	return 1;
> +
> +	return pmd_sect(*pmdp);
>  }
>  
>  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,

Don't we need this patch only if we implement the first one? Please fold
it into the other patch.

-- 
Catalin