From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 3 Jan 2024 21:54:32 +0000
From: Catalin Marinas
To: Dave Hansen
Cc: Jisheng Zhang, Will Deacon, "Aneesh Kumar K . V", Andrew Morton,
	Nick Piggin, Peter Zijlstra, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Arnd Bergmann, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
	Nadav Amit, Andrea Arcangeli, Andy Lutomirski, Dave Hansen,
	Thomas Gleixner, Yu Zhao, x86@kernel.org
Subject: Re: [PATCH 1/2] mm/tlb: fix fullmm semantics
References: <20231228084642.1765-1-jszhang@kernel.org>
	<20231228084642.1765-2-jszhang@kernel.org>
	<6ee6340a-ffe2-4106-b845-47cf443558c3@intel.com>
In-Reply-To: <6ee6340a-ffe2-4106-b845-47cf443558c3@intel.com>

On Wed, Jan 03, 2024 at 12:26:29PM -0800, Dave Hansen wrote:
> On 1/3/24 10:05, Catalin Marinas wrote:
> >> --- a/mm/mmu_gather.c
> >> +++ b/mm/mmu_gather.c
> >> @@ -384,7 +384,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb)
> >> 		 * On x86 non-fullmm doesn't yield significant difference
> >> 		 * against fullmm.
> >> 		 */
> >> -		tlb->fullmm = 1;
> >> +		tlb->need_flush_all = 1;
> >> 		__tlb_reset_range(tlb);
> >> 		tlb->freed_tables = 1;
> >> 	}
> > The optimisation here was added about a year later in commit
> > 7a30df49f63a ("mm: mmu_gather: remove __tlb_reset_range() for force
> > flush").
> > Do we still need to keep freed_tables = 1 here? I'd say only
> > __tlb_reset_range().
>
> I think the __tlb_reset_range() can be dangerous if it clears
> ->freed_tables. On x86 at least, it might lead to skipping the TLB IPI
> for CPUs that are in lazy TLB mode. When those wake back up they might
> start using the freed page tables.

You are right, I did not realise freed_tables is reset in
__tlb_reset_range().

-- 
Catalin