From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <781a6450-1c0b-4603-91cf-49f16cd78c28@arm.com>
Date: Fri, 12 Sep 2025 10:48:26 +0200
Subject: Re: [PATCH v2 2/7] mm: introduce local state for lazy_mmu sections
From: Kevin Brodsky
To: David Hildenbrand, Alexander Gordeev
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andreas Larsson,
 Andrew Morton, Boris Ostrovsky, Borislav Petkov, Catalin Marinas,
 Christophe Leroy, Dave Hansen, "David S. Miller", "H. Peter Anvin",
 Ingo Molnar, Jann Horn, Juergen Gross, "Liam R. Howlett",
 Lorenzo Stoakes, Madhavan Srinivasan, Michael Ellerman, Michal Hocko,
 Mike Rapoport, Nicholas Piggin, Peter Zijlstra, Ryan Roberts,
 Suren Baghdasaryan, Thomas Gleixner, Vlastimil Babka, Will Deacon,
 Yeoreum Yun, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org,
 xen-devel@lists.xenproject.org, Mark Rutland
In-Reply-To: <7953a735-6129-4d22-be65-ce736630d539@redhat.com>
References: <20250908073931.4159362-1-kevin.brodsky@arm.com>
 <20250908073931.4159362-3-kevin.brodsky@arm.com>
 <2fecfae7-1140-4a23-a352-9fd339fcbae5-agordeev@linux.ibm.com>
 <47ee1df7-1602-4200-af94-475f84ca8d80@arm.com>
 <29383ee2-d6d6-4435-9052-d75a263a5c45@redhat.com>
 <9de08024-adfc-421b-8799-62653468cf63@arm.com>
 <4b4971fd-0445-4d86-8f3a-6ba3d68d15b7@arm.com>
 <4aa28016-5678-4c66-8104-8dcc3fa2f5ce@redhat.com>
 <15d01c8b-5475-442e-9df5-ca37b0d5dc04@arm.com>
 <7953a735-6129-4d22-be65-ce736630d539@redhat.com>
Content-Type: text/plain; charset=UTF-8
On 12/09/2025 10:04, David Hildenbrand wrote:
>>>
>>> struct lazy_mmu_state {
>>>      uint8_t enabled_count;
>>>      bool paused;
>>
>> Looking at the arm64 implementation, I'm thinking: instead of the
>> paused member, how about a PF_LAZY_MMU task flag? It would be set when
>> lazy_mmu is actually enabled (i.e. inside an enter()/leave() section,
>> and not inside a pause()/resume() section). This way, architectures
>> could use that flag directly to tell if lazy_mmu is enabled instead of
>> reinventing the wheel, all in slightly different ways. Namely:
>>
>> * arm64 uses a thread flag (TIF_LAZY_MMU) - this is trivially replaced
>> with PF_LAZY_MMU
>> * powerpc and sparc use batch->active, where batch is a per-CPU
>> variable; I expect this can also be replaced with PF_LAZY_MMU
>> * x86/xen is more complex as it has xen_lazy_mode, which tracks both
>> LAZY_MMU and LAZY_CPU modes. I'd probably leave that one alone, unless
>> a Xen expert is motivated to refactor it.
>>
>> With that approach, the implementation of arch_enter() and arch_leave()
>> becomes very simple (no tracking of lazy_mmu status) on arm64, powerpc
>> and sparc.
>>
>> (Of course we could also have an "enabled" member in lazy_mmu_state
>> instead of PF_LAZY_MMU; there is no functional difference.)
>>
>
> No strong opinion, but to me it feels like PF_LAZY_MMU is rather "the
> effective state when combining nested+paused", and might complicate
> the code + sanity checks?
>
> Of course, we could fairly easily maintain that in addition from the
> core, instead of letting archs do it manually.
>
> I would probably have to see the end result to judge whether removing
> the "paused" bool makes things look more complicated or not.

Agreed, it is a little difficult to consider all the cases ahead of time.
What is clear to me though is that [paused] can be inferred from [count
+ enabled], and conversely [enabled] from [count + paused]. As a result
I really wouldn't store both paused and enabled in task_struct -
duplicating information is how you create inconsistent states. We can
very easily introduce helpers to get the enabled/paused status
regardless of how they're stored. Since "enabled" is what we need to
know in most cases (an arch checking the status), I would rather store
"enabled" than "paused".

But indeed, let's see how it turns out in practice.

>
>>> }
>>>
>>> c) With that config, common-code lazy_mmu_*() functions implement the
>>> updating of the lazy_mmu_state in task_struct and call into arch code
>>> on the transition from 0->1, 1->0 etc.
>>
>> Indeed, this is how I thought about it. There is actually quite a lot
>> that can be moved to the generic functions:
>> * Updating lazy_mmu_state
>> * Sanity checks on lazy_mmu_state (e.g. underflow/overflow)
>> * Bailing out if in_interrupt() (not done consistently across arch's
>> at the moment)
>>
>>>
>>> Maybe that can be done through the existing
>>> arch_enter_lazy_mmu_mode()/arch_leave_lazy_mmu_mode() callbacks,
>>> maybe we need more. I feel like we might be able to implement that
>>> through the existing helpers.
>>
>> We might want to rename them to align with the new generic helpers,
>> but yes, otherwise the principle should remain unchanged.
>>
>> In fact, we will also need to revive arch_flush_lazy_mmu_mode().
>
> That's okay if it's all hidden behind a sane core API.
>
>> Indeed, in the nested situation, we need the following arch calls:
>>
>> enter() -> arch_enter()
>>      enter() -> [nothing]
>>      leave() -> arch_flush()
>> leave() -> arch_leave()
>>
>> leave() must always flush whatever arch state was batched, as may be
>> expected by the caller.
>>
>> How does all that sound?
>
> I am no expert on the "always flush when leaving", but it sounds
> reasonable to me.
This is a core expectation for lazy_mmu: when leave() is called, any
batched state is flushed. The fact that this currently happens
unconditionally when calling leave() is in fact what stops nesting from
exploding on arm64 with DEBUG_PAGEALLOC [1].

[1] https://lore.kernel.org/all/aEhKSq0zVaUJkomX@arm.com/

>
> Which arch operations would you call from
>
> pause()
> continue()

I also wondered about that. I think the safest is to make them
respectively arch_leave() and arch_enter() - the flushing entailed by
arch_leave() might not be required, but it is safer. Additionally,
powerpc/sparc disable preemption while in lazy_mmu, so it seems like a
good idea to re-enable it while paused (by calling arch_leave()).

- Kevin