Date: Fri, 5 Sep 2025 14:13:50 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Alexander Gordeev, Andreas Larsson, Andrew Morton,
	Boris Ostrovsky, Borislav Petkov, Catalin Marinas,
	Christophe Leroy, Dave Hansen, David Hildenbrand,
	"David S. Miller", "H. Peter Anvin", Ingo Molnar, Jann Horn,
	Juergen Gross, "Liam R. Howlett", Lorenzo Stoakes,
	Madhavan Srinivasan, Michael Ellerman, Michal Hocko,
	Nicholas Piggin, Peter Zijlstra, Ryan Roberts,
	Suren Baghdasaryan, Thomas Gleixner, Vlastimil Babka,
	Will Deacon, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 7/7] mm: update lazy_mmu documentation
References: <20250904125736.3918646-1-kevin.brodsky@arm.com>
	<20250904125736.3918646-8-kevin.brodsky@arm.com>
In-Reply-To: <20250904125736.3918646-8-kevin.brodsky@arm.com>
X-Mailing-List: linuxppc-dev@lists.ozlabs.org

On Thu, Sep 04, 2025 at 01:57:36PM +0100, Kevin Brodsky wrote:
> We now support nested lazy_mmu sections on all architectures
> implementing the API. Update the API comment accordingly.
> 
> Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

(A sketch of how an implementation might honour the new contract is in
the P.S. below.)

> ---
>  include/linux/pgtable.h | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 6932c8e344ab..be0f059beb4d 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -228,8 +228,18 @@ static inline int pmd_dirty(pmd_t pmd)
>   * of the lazy mode. So the implementation must assume preemption may be enabled
>   * and cpu migration is possible; it must take steps to be robust against this.
>   * (In practice, for user PTE updates, the appropriate page table lock(s) are
> - * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
> - * and the mode cannot be used in interrupt context.
> + * held, but for kernel PTE updates, no lock is held). The mode cannot be used
> + * in interrupt context.
> + *
> + * Calls may be nested: an arch_{enter,leave}_lazy_mmu_mode() pair may be called
> + * while the lazy MMU mode has already been enabled. An implementation should
> + * handle this using the state returned by enter() and taken by the matching
> + * leave() call; the LAZY_MMU_{DEFAULT,NESTED} flags can be used to indicate
> + * whether this enter/leave pair is nested inside another or not. (It is up to
> + * the implementation to track whether the lazy MMU mode is enabled at any point
> + * in time.) The expectation is that leave() will flush any batched state
> + * unconditionally, but only leave the lazy MMU mode if the passed state is not
> + * LAZY_MMU_NESTED.
>   */
>  #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
>  typedef int lazy_mmu_state_t;
> -- 
> 2.47.0
> 

-- 
Sincerely yours,
Mike.
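
P.S. Purely for illustration, here is a minimal sketch of an arch
implementation honouring the contract described in the new comment.
The TIF_LAZY_MMU thread flag and the arch_flush_lazy_mmu_updates()
helper are assumptions invented for this sketch, not part of the
patch:

	#include <linux/thread_info.h>

	static inline lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
	{
		/* Flag already set: this pair is nested inside another */
		if (test_and_set_thread_flag(TIF_LAZY_MMU))
			return LAZY_MMU_NESTED;

		return LAZY_MMU_DEFAULT;
	}

	static inline void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
	{
		/* leave() flushes any batched state unconditionally... */
		arch_flush_lazy_mmu_updates();	/* hypothetical flush helper */

		/* ...but only the outermost leave() exits lazy MMU mode */
		if (state != LAZY_MMU_NESTED)
			clear_thread_flag(TIF_LAZY_MMU);
	}

Tracking the mode with a thread flag rather than per-CPU state keeps
the sketch robust against preemption and CPU migration, which the
comment requires implementations to handle.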