Date: Thu, 29 Jan 2026 10:08:33 +0200
From: Mike Rapoport
To: "Vishal Moola (Oracle)"
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
	akpm@linux-foundation.org, "Matthew Wilcox (Oracle)", Dave Hansen,
	Andy Lutomirski, Peter Zijlstra
Subject: Re: [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs
References: <20260128224049.385013-1-vishal.moola@gmail.com>
	<20260128224049.385013-2-vishal.moola@gmail.com>
In-Reply-To: <20260128224049.385013-2-vishal.moola@gmail.com>

On Wed, Jan 28, 2026 at 02:40:47PM -0800, Vishal Moola (Oracle) wrote:
> In order to separately allocate ptdescs from pages, we need all allocation
> and free sites to use the appropriate functions. Convert these pte
> allocation/free sites to use ptdescs.
> 
> Signed-off-by: Vishal Moola (Oracle)
> ---
>  arch/x86/mm/pat/set_memory.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 6c6eb486f7a6..2dcb565d8f9b 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -1408,7 +1408,7 @@ static bool try_to_free_pte_page(pte_t *pte)
>  		if (!pte_none(pte[i]))
>  			return false;
>  
> -	free_page((unsigned long)pte);
> +	pagetable_free(virt_to_ptdesc((void *)pte));
>  	return true;
>  }
>  
> @@ -1537,9 +1537,10 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
>  	 */
>  }
>  
> -static int alloc_pte_page(pmd_t *pmd)
> +static int alloc_pte_ptdesc(pmd_t *pmd)
>  {
> -	pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
> +	pte_t *pte = (pte_t *) ptdesc_address(
> +			pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0));

Sorry I missed this last time, but ptdesc_address(NULL) does not return
NULL, so the !pte check below can no longer detect a failed allocation.
The allocation and the conversion should be split IMHO.

This applies to all the instances in all the patches.

>  	if (!pte)
>  		return -1;
>  
> @@ -1600,7 +1601,7 @@ static long populate_pmd(struct cpa_data *cpa,
>  	 */
>  	pmd = pmd_offset(pud, start);
>  	if (pmd_none(*pmd))
> -		if (alloc_pte_page(pmd))
> +		if (alloc_pte_ptdesc(pmd))
>  			return -1;
>  
>  	populate_pte(cpa, start, pre_end, cur_pages, pmd, pgprot);
> @@ -1641,7 +1642,7 @@ static long populate_pmd(struct cpa_data *cpa,
>  	if (start < end) {
>  		pmd = pmd_offset(pud, start);
>  		if (pmd_none(*pmd))
> -			if (alloc_pte_page(pmd))
> +			if (alloc_pte_ptdesc(pmd))
>  				return -1;
>  
>  		populate_pte(cpa, start, end, num_pages - cur_pages,
> -- 
> 2.52.0
> 

-- 
Sincerely yours,
Mike.