Date: Mon, 23 Mar 2026 17:25:44 +0000
X-Mailing-List: stable@vger.kernel.org
Subject: Re: [PATCH v1 2/3] arm64: mm: Handle invalid large leaf mappings correctly
To: 
Kevin Brodsky , Catalin Marinas , Will Deacon ,
 "David Hildenbrand (Arm)" , Dev Jain , Yang Shi ,
 Suzuki K Poulose , Jinjiang Tu 
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
References: <20260323130317.1737522-1-ryan.roberts@arm.com>
 <20260323130317.1737522-3-ryan.roberts@arm.com>
 <588b2b4f-9cf6-43e5-b0e5-55820c74cbbb@arm.com>
From: Ryan Roberts 
In-Reply-To: <588b2b4f-9cf6-43e5-b0e5-55820c74cbbb@arm.com>
Content-Type: text/plain; charset=UTF-8

On 23/03/2026 16:52, Kevin Brodsky wrote:
> On 23/03/2026 14:03, Ryan Roberts wrote:
>> [...]
>>
>> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
>> index 358d1dc9a576f..87dfe4c82fa92 100644
>> --- a/arch/arm64/mm/pageattr.c
>> +++ b/arch/arm64/mm/pageattr.c
>> @@ -25,6 +25,11 @@ static ptdesc_t set_pageattr_masks(ptdesc_t val, struct mm_walk *walk)
>>  {
>>  	struct page_change_data *masks = walk->private;
>>  
>> +	/*
>> +	 * Some users clear and set bits which alias eachother (e.g. PTE_NG and
>
> Nit: "each other"
>
>> +	 * PTE_PRESENT_INVALID). It is therefore important that we always clear
>> +	 * first then set.
>> +	 */
>>  	val &= ~(pgprot_val(masks->clear_mask));
>>  	val |= (pgprot_val(masks->set_mask));
>>  
>> @@ -36,7 +41,7 @@ static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
>>  {
>>  	pud_t val = pudp_get(pud);
>>  
>> -	if (pud_sect(val)) {
>> +	if (pud_leaf(val)) {
>>  		if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
>>  			return -EINVAL;
>>  		val = __pud(set_pageattr_masks(pud_val(val), walk));
>> @@ -52,7 +57,7 @@ static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
>>  {
>>  	pmd_t val = pmdp_get(pmd);
>>  
>> -	if (pmd_sect(val)) {
>> +	if (pmd_leaf(val)) {
>>  		if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
>>  			return -EINVAL;
>>  		val = __pmd(set_pageattr_masks(pmd_val(val), walk));
>> @@ -132,11 +137,12 @@ static int __change_memory_common(unsigned long start, unsigned long size,
>>  	ret = update_range_prot(start, size, set_mask, clear_mask);
>>  
>>  	/*
>> -	 * If the memory is being made valid without changing any other bits
>> -	 * then a TLBI isn't required as a non-valid entry cannot be cached in
>> -	 * the TLB.
>> +	 * If the memory is being switched from present-invalid to valid without
>> +	 * changing any other bits then a TLBI isn't required as a non-valid
>> +	 * entry cannot be cached in the TLB.
>>  	 */
>> -	if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
>> +	if (pgprot_val(set_mask) != (PTE_MAYBE_NG | PTE_VALID) ||
>
> It isn't obvious where all those PTE_MAYBE_NG bits come from if one
> hasn't realised that PTE_PRESENT_INVALID overlays PTE_NG.
>
> Since for this purpose we always set/clear both PTE_VALID and
> PTE_MAYBE_NG, maybe we could define some macro as PTE_VALID |
> PTE_MAYBE_NG, as a counterpart to PTE_PRESENT_INVALID?

How about:

#define PTE_PRESENT_VALID_KERNEL	(PTE_VALID | PTE_MAYBE_NG)

The user space equivalent has NG clear, so it's important to clarify that
this is the kernel value, I think.

Thanks,
Ryan

>
> - Kevin
>
>> [...]