From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
	by smtp.subspace.kernel.org (Postfix) with ESMTP id 1DE12292B44
	for ; Tue, 10 Mar 2026 07:31:14 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1773127876; cv=none;
	b=tFTWbj7CLx5L5peu9j7sxfSKhW/w5vWAbmeY6FLhjwSLHVAlEBcXYmW+h1BmKuM1zSKLo6FjJnOoH3WUFtaz7HSXPyk7N1YrSrjNZ3O5hnT+qh3ECnW57GL9TIIF3qmotNm2OxKfgsJwh95LJeIap0mm3CDIhH1etvTTDsSqYi8=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org;
	s=arc-20240116; t=1773127876; c=relaxed/simple;
	bh=q2rLzpigrF7jknYgtD0LPRZ+vKpHZJSDe1tMDWjXK78=;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:
	 MIME-Version;
	b=BL4/BF/FbQvoKS5iqRC6HR8DZ/nQpjfiEuGrlIiNTuF6kHoOUxzvyydC2vf9yBNHAAq6eftlKDa5ObsLrkuAVsmRP9RdW3ztnEKgYJAQx5lLwObPY9fhjU3dX+R1UoE3sA3mML9Wtl/70lev5i5W//2lNRLlaL8Ea80VPVL1JQ8=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org;
	dmarc=pass (p=none dis=none) header.from=arm.com;
	spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172
Authentication-Results: smtp.subspace.kernel.org;
	dmarc=pass (p=none dis=none) header.from=arm.com
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4534216A3;
	Tue, 10 Mar 2026 00:31:08 -0700 (PDT)
Received: from a080796.blr.arm.com (a080796.arm.com [10.164.21.51])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8EA0F3F73B;
	Tue, 10 Mar 2026 00:31:05 -0700 (PDT)
From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, axelrasmussen@google.com, yuanchu@google.com,
	david@kernel.org, hughd@google.com, chrisl@kernel.org,
	kasong@tencent.com
Cc: weixugc@google.com, ljs@kernel.org, Liam.Howlett@oracle.com,
	vbabka@kernel.org, rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	riel@surriel.com, harry.yoo@oracle.com, jannh@google.com,
	pfalcato@suse.de, baolin.wang@linux.alibaba.com,
	shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com,
	baohua@kernel.org, youngjun.park@lge.com, ziy@nvidia.com,
	kas@kernel.org, willy@infradead.org, yuzhao@google.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	ryan.roberts@arm.com, anshuman.khandual@arm.com,
	Dev Jain <dev.jain@arm.com>
Subject: [PATCH 3/9] mm/rmap: refactor lazyfree unmap commit path to commit_ttu_lazyfree_folio()
Date: Tue, 10 Mar 2026 13:00:07 +0530
Message-Id: <20260310073013.4069309-4-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260310073013.4069309-1-dev.jain@arm.com>
References: <20260310073013.4069309-1-dev.jain@arm.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Clean up the code by refactoring the post-pte-clearing path of lazyfree
folio unmapping into commit_ttu_lazyfree_folio(). No functional change
is intended.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/rmap.c | 93 ++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 54 insertions(+), 39 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 1fa020edd954a..a61978141ee3f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1966,6 +1966,57 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
 					FPB_RESPECT_WRITE | FPB_RESPECT_SOFT_DIRTY);
 }
 
+static inline int commit_ttu_lazyfree_folio(struct vm_area_struct *vma,
+		struct folio *folio, unsigned long address, pte_t *ptep,
+		pte_t pteval, long nr_pages)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int ref_count, map_count;
+
+	/*
+	 * Synchronize with gup_pte_range():
+	 * - clear PTE; barrier; read refcount
+	 * - inc refcount; barrier; read PTE
+	 */
+	smp_mb();
+
+	ref_count = folio_ref_count(folio);
+	map_count = folio_mapcount(folio);
+
+	/*
+	 * Order reads for page refcount and dirty flag
+	 * (see comments in __remove_mapping()).
+	 */
+	smp_rmb();
+
+	if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
+		/*
+		 * redirtied either using the page table or a previously
+		 * obtained GUP reference.
+		 */
+		set_ptes(mm, address, ptep, pteval, nr_pages);
+		folio_set_swapbacked(folio);
+		return 1;
+	}
+
+	if (ref_count != 1 + map_count) {
+		/*
+		 * Additional reference. Could be a GUP reference or any
+		 * speculative reference. GUP users must mark the folio
+		 * dirty if there was a modification. This folio cannot be
+		 * reclaimed right now either way, so act just like nothing
+		 * happened.
+		 * We'll come back here later and detect if the folio was
+		 * dirtied when the additional reference is gone.
+		 */
+		set_ptes(mm, address, ptep, pteval, nr_pages);
+		return 1;
+	}
+
+	add_mm_counter(mm, MM_ANONPAGES, -nr_pages);
+	return 0;
+}
+
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
@@ -2227,46 +2278,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 
 		/* MADV_FREE page check */
 		if (!folio_test_swapbacked(folio)) {
-			int ref_count, map_count;
-
-			/*
-			 * Synchronize with gup_pte_range():
-			 * - clear PTE; barrier; read refcount
-			 * - inc refcount; barrier; read PTE
-			 */
-			smp_mb();
-
-			ref_count = folio_ref_count(folio);
-			map_count = folio_mapcount(folio);
-
-			/*
-			 * Order reads for page refcount and dirty flag
-			 * (see comments in __remove_mapping()).
-			 */
-			smp_rmb();
-
-			if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
-				/*
-				 * redirtied either using the page table or a previously
-				 * obtained GUP reference.
-				 */
-				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
-				folio_set_swapbacked(folio);
+			if (commit_ttu_lazyfree_folio(vma, folio, address,
+						      pvmw.pte, pteval,
+						      nr_pages))
 				goto walk_abort;
-			} else if (ref_count != 1 + map_count) {
-				/*
-				 * Additional reference. Could be a GUP reference or any
-				 * speculative reference. GUP users must mark the folio
-				 * dirty if there was a modification. This folio cannot be
-				 * reclaimed right now either way, so act just like nothing
-				 * happened.
-				 * We'll come back here later and detect if the folio was
-				 * dirtied when the additional reference is gone.
-				 */
-				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
-				goto walk_abort;
-			}
-			add_mm_counter(mm, MM_ANONPAGES, -nr_pages);
 			goto discard;
 		}
 
-- 
2.34.1