From: Kevin Brodsky
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Alexander Gordeev, Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov, Catalin Marinas, Christophe Leroy, Dave Hansen, David Hildenbrand, "David S. Miller", "H. Peter Anvin", Ingo Molnar, Jann Horn, Juergen Gross, "Liam R. Howlett", Lorenzo Stoakes, Madhavan Srinivasan, Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin, Peter Zijlstra, Ryan Roberts, Suren Baghdasaryan, Thomas Gleixner, Vlastimil Babka, Will Deacon, Yeoreum Yun, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: [PATCH v2 7/7] mm: update lazy_mmu documentation
Date: Mon, 8 Sep 2025 08:39:31 +0100
Message-ID: <20250908073931.4159362-8-kevin.brodsky@arm.com>
In-Reply-To: <20250908073931.4159362-1-kevin.brodsky@arm.com>
References: <20250908073931.4159362-1-kevin.brodsky@arm.com>

We now support nested lazy_mmu sections on all architectures implementing
the API. Update the API comment accordingly.
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Kevin Brodsky
---
 include/linux/pgtable.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index df0eb898b3fc..85cd1fdb914f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -228,8 +228,18 @@ static inline int pmd_dirty(pmd_t pmd)
  * of the lazy mode. So the implementation must assume preemption may be enabled
  * and cpu migration is possible; it must take steps to be robust against this.
  * (In practice, for user PTE updates, the appropriate page table lock(s) are
- * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
- * and the mode cannot be used in interrupt context.
+ * held, but for kernel PTE updates, no lock is held). The mode cannot be used
+ * in interrupt context.
+ *
+ * Calls may be nested: an arch_{enter,leave}_lazy_mmu_mode() pair may be called
+ * while the lazy MMU mode has already been enabled. An implementation should
+ * handle this using the state returned by enter() and taken by the matching
+ * leave() call; the LAZY_MMU_{DEFAULT,NESTED} flags can be used to indicate
+ * whether this enter/leave pair is nested inside another or not. (It is up to
+ * the implementation to track whether the lazy MMU mode is enabled at any point
+ * in time.) The expectation is that leave() will flush any batched state
+ * unconditionally, but only leave the lazy MMU mode if the passed state is not
+ * LAZY_MMU_NESTED.
  */
 #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 typedef int lazy_mmu_state_t;
-- 
2.47.0
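
The nesting contract the updated comment describes can be sketched in a minimal,
self-contained user-space model. This is not the kernel implementation of any
architecture; the state variables (`lazy_mmu_active`, `batched_updates`) and the
way flushing is modelled are assumptions made purely to illustrate how an
implementation could use the state returned by enter() and consumed by the
matching leave():

```c
#include <assert.h>
#include <stdbool.h>

/* Mirror of the kernel's lazy_mmu_state_t and its two values. */
typedef int lazy_mmu_state_t;
#define LAZY_MMU_DEFAULT 0
#define LAZY_MMU_NESTED  1

/* Hypothetical per-CPU model state: whether lazy MMU mode is active,
 * and how many page-table updates are currently batched. */
static bool lazy_mmu_active;
static int batched_updates;

static lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
{
	/* A nested enter() reports that the mode was already on, so the
	 * matching leave() knows not to disable it. */
	if (lazy_mmu_active)
		return LAZY_MMU_NESTED;
	lazy_mmu_active = true;
	return LAZY_MMU_DEFAULT;
}

static void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
{
	/* Flush any batched state unconditionally... */
	batched_updates = 0;
	/* ...but only the outermost leave() exits the mode. */
	if (state != LAZY_MMU_NESTED)
		lazy_mmu_active = false;
}
```

A caller-side view of the same contract: an outer enter()/leave() pair may have
an inner pair nested inside it; the inner leave() flushes batched updates but
leaves the mode enabled, and only the outer leave() (holding LAZY_MMU_DEFAULT)
turns it off.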