From: Dev Jain
To: akpm@linux-foundation.org, david@kernel.org, hughd@google.com, chrisl@kernel.org
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org, rppt@kernel.org, surenb@google.com, mhocko@suse.com, kasong@tencent.com, qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, riel@surriel.com, harry@kernel.org, jannh@google.com, pfalcato@suse.de, baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ryan.roberts@arm.com, anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH v2 3/9] mm/rmap: refactor some code around lazyfree folio unmapping
Date: Fri, 10 Apr 2026 16:01:58 +0530
Message-Id: <20260410103204.120409-4-dev.jain@arm.com>
In-Reply-To: <20260410103204.120409-1-dev.jain@arm.com>
References: <20260410103204.120409-1-dev.jain@arm.com>

For lazyfree folio unmapping, after clearing the ptes we must abort the
operation if the folio got dirtied or has unexpected references.
Refactor this logic into a function that returns whether we need to
abort. If we abort, we restore the ptes and bail out of
try_to_unmap_one(); otherwise we adjust the rss stats of the mm and
jump to a label.
Also rename that label from "discard" to "finish_unmap": the former is
appropriate in the lazyfree context, but the code following the label is
executed on other successful unmap paths too, so "discard" does not read
correctly for them.

Signed-off-by: Dev Jain
---
 mm/rmap.c | 95 ++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 55 insertions(+), 40 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index a9c43e2f6e695..fa5d6599dedf0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1978,6 +1978,56 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
 			FPB_RESPECT_WRITE | FPB_RESPECT_SOFT_DIRTY);
 }
 
+static inline bool can_unmap_lazyfree_folio_range(struct vm_area_struct *vma,
+		struct folio *folio, unsigned long address, pte_t *ptep,
+		pte_t pteval, unsigned long nr_pages)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int ref_count, map_count;
+
+	/*
+	 * Synchronize with gup_pte_range():
+	 * - clear PTE; barrier; read refcount
+	 * - inc refcount; barrier; read PTE
+	 */
+	smp_mb();
+
+	ref_count = folio_ref_count(folio);
+	map_count = folio_mapcount(folio);
+
+	/*
+	 * Order reads for page refcount and dirty flag
+	 * (see comments in __remove_mapping()).
+	 */
+	smp_rmb();
+
+	if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
+		/*
+		 * redirtied either using the page table or a previously
+		 * obtained GUP reference.
+		 */
+		set_ptes(mm, address, ptep, pteval, nr_pages);
+		folio_set_swapbacked(folio);
+		return false;
+	}
+
+	if (ref_count != 1 + map_count) {
+		/*
+		 * Additional reference. Could be a GUP reference or any
+		 * speculative reference. GUP users must mark the folio
+		 * dirty if there was a modification. This folio cannot be
+		 * reclaimed right now either way, so act just like nothing
+		 * happened.
+		 * We'll come back here later and detect if the folio was
+		 * dirtied when the additional reference is gone.
+		 */
+		set_ptes(mm, address, ptep, pteval, nr_pages);
+		return false;
+	}
+
+	return true;
+}
+
 static inline bool unmap_hugetlb_folio(struct vm_area_struct *vma,
 		struct folio *folio, struct page_vma_mapped_walk *pvmw,
 		struct page *page, enum ttu_flags flags, pte_t *pteval,
@@ -2256,47 +2306,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		/* MADV_FREE page check */
 		if (!folio_test_swapbacked(folio)) {
-			int ref_count, map_count;
-
-			/*
-			 * Synchronize with gup_pte_range():
-			 * - clear PTE; barrier; read refcount
-			 * - inc refcount; barrier; read PTE
-			 */
-			smp_mb();
-
-			ref_count = folio_ref_count(folio);
-			map_count = folio_mapcount(folio);
-
-			/*
-			 * Order reads for page refcount and dirty flag
-			 * (see comments in __remove_mapping()).
-			 */
-			smp_rmb();
-
-			if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
-				/*
-				 * redirtied either using the page table or a previously
-				 * obtained GUP reference.
-				 */
-				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
-				folio_set_swapbacked(folio);
+			if (!can_unmap_lazyfree_folio_range(vma, folio, address,
+					pvmw.pte, pteval, nr_pages))
 				goto walk_abort;
-			} else if (ref_count != 1 + map_count) {
-				/*
-				 * Additional reference. Could be a GUP reference or any
-				 * speculative reference. GUP users must mark the folio
-				 * dirty if there was a modification. This folio cannot be
-				 * reclaimed right now either way, so act just like nothing
-				 * happened.
-				 * We'll come back here later and detect if the folio was
-				 * dirtied when the additional reference is gone.
-				 */
-				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
-				goto walk_abort;
-			}
+
 			add_mm_counter(mm, MM_ANONPAGES, -nr_pages);
-			goto discard;
+			goto finish_unmap;
 		}
 
 		if (folio_dup_swap(folio, subpage) < 0) {
@@ -2359,7 +2374,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		 */
 		add_mm_counter(mm, mm_counter_file(folio), -nr_pages);
 	}
-discard:
+finish_unmap:
 	if (unlikely(folio_test_hugetlb(folio))) {
 		hugetlb_remove_rmap(folio);
 	} else {
-- 
2.34.1