From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 3 Feb 2026 19:03:03 +0200
From: Mike Rapoport
To: "Vishal Moola (Oracle)"
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
	akpm@linux-foundation.org, "Matthew Wilcox (Oracle)", Dave Hansen,
	Andy Lutomirski, Peter Zijlstra
Subject: Re: [PATCH v3 1/3] x86/mm/pat: Convert pte code to use ptdescs
Message-ID:
References: <20260202172005.683870-1-vishal.moola@gmail.com>
 <20260202172005.683870-2-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20260202172005.683870-2-vishal.moola@gmail.com>

Hi Vishal,

On Mon, Feb 02, 2026 at 09:20:03AM -0800, Vishal Moola (Oracle) wrote:
> In order to separately allocate ptdescs from pages, we need all allocation
> and free sites to use the appropriate functions. Convert these pte
> allocation/free sites to use ptdescs.
> 
> Signed-off-by: Vishal Moola (Oracle)
> ---
>  arch/x86/mm/pat/set_memory.c | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 6c6eb486f7a6..f9f9d4ca8e71 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -1408,7 +1408,7 @@ static bool try_to_free_pte_page(pte_t *pte)
>  		if (!pte_none(pte[i]))
>  			return false;
>  
> -	free_page((unsigned long)pte);
> +	pagetable_free(virt_to_ptdesc((void *)pte));
>  	return true;
>  }
>  
> @@ -1537,12 +1537,15 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
>  	 */
>  }
>  
> -static int alloc_pte_page(pmd_t *pmd)
> +static int alloc_pte_ptdesc(pmd_t *pmd)
>  {
> -	pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
> -	if (!pte)
> +	pte_t *pte;
> +	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);

AFAIR, x86 folks like reverse xmas tree for variable declarations.

> +
> +	if (!ptdesc)
>  		return -1;
>  
> +	pte = (pte_t *) ptdesc_address(ptdesc);

No need to cast void * to another pointer type.

The same comments are relevant for the two other patches as well.

>  	set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
>  	return 0;
>  }
> @@ -1600,7 +1603,7 @@ static long populate_pmd(struct cpa_data *cpa,
>  	 */
>  	pmd = pmd_offset(pud, start);
>  	if (pmd_none(*pmd))
> -		if (alloc_pte_page(pmd))
> +		if (alloc_pte_ptdesc(pmd))
>  			return -1;
>  
>  	populate_pte(cpa, start, pre_end, cur_pages, pmd, pgprot);
> @@ -1641,7 +1644,7 @@ static long populate_pmd(struct cpa_data *cpa,
>  	if (start < end) {
>  		pmd = pmd_offset(pud, start);
>  		if (pmd_none(*pmd))
> -			if (alloc_pte_page(pmd))
> +			if (alloc_pte_ptdesc(pmd))
>  				return -1;
>  
>  		populate_pte(cpa, start, end, num_pages - cur_pages,
> -- 
> 2.52.0
> 

-- 
Sincerely yours,
Mike.