From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dev Jain
To: akpm@linux-foundation.org, david@kernel.org, ljs@kernel.org, hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: Dev Jain, riel@surriel.com, liam@infradead.org, vbabka@kernel.org, harry@kernel.org, jannh@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, rppt@kernel.org, surenb@google.com, mhocko@suse.com, baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com, pfalcato@suse.de, ryan.roberts@arm.com, anshuman.khandual@arm.com
Subject: [PATCH v3 5/9] mm/rmap: batch unmap folios belonging to uffd-wp VMAs
Date: Wed, 6 May 2026 15:15:00 +0530
Message-Id: <20260506094504.2588857-6-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260506094504.2588857-1-dev.jain@arm.com>
References: <20260506094504.2588857-1-dev.jain@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit a67fe41e214f ("mm: rmap: support batched unmapping for file
large folios") extended batched unmapping to file folios. That also
required making install_uffd_wp_ptes_if_needed() support batching, but
this was left out at the time; correctness was maintained by stopping
the batch whenever the VMA the folio belongs to is marked uffd-wp.

Now that install_uffd_wp_ptes_if_needed() supports batching, simply
call it with the full batch size. folio_unmap_pte_batch() ensures that
the original state of the ptes in a batch is either all uffd-wp or all
non-uffd-wp, so correctness is maintained.
If the uffd-wp bit is set, the ptes undergo the following transitions
after unmapping:

1) anon folio: present -> uffd-wp swap entry
2) file folio: present -> uffd-wp marker

We must ensure that these ptes are not reprocessed by the while loop:
if the batch length is less than the number of pages in the folio, we
must skip over this batch. The page_vma_mapped_walk API ensures this -
check_pte() returns true only if the pte maps a pfn in
[pvmw->pfn, pvmw->pfn + nr_pages). There is no pfn underlying either a
uffd-wp swap pte or a uffd-wp marker pte, so check_pte() returns false
and we keep skipping until we hit a present entry, which is where the
next batch should start.

Signed-off-by: Dev Jain
---
 mm/rmap.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index b17dce752a1ea..25813e3605991 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1965,9 +1965,6 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
 	if (pte_unused(pte))
 		return 1;
 
-	if (userfaultfd_wp(vma))
-		return 1;
-
 	/*
 	 * If unmap fails, we need to restore the ptes. To avoid accidentally
 	 * upgrading write permissions for ptes that were not originally
@@ -2266,7 +2263,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	 * we may want to replace a none pte with a marker pte if
 	 * it's file-backed, so we don't lose the tracking info.
 	 */
-	install_uffd_wp_ptes_if_needed(vma, address, pvmw.pte, pteval, 1);
+	install_uffd_wp_ptes_if_needed(vma, address, pvmw.pte, pteval, nr_pages);
 
 	/* Update high watermark before we lower rss */
 	update_hiwater_rss(mm);
-- 
2.34.1
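[Editor's note: the commit message's correctness argument rests on the
pfn-range check performed by the page_vma_mapped_walk machinery. The
following is a toy, self-contained C sketch of that property only; the
type `pte_t`, the bit layout, and `check_pte_sketch()` are invented for
illustration and are not the kernel's actual definitions.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy pte: pfn in bits 63..12, bit 0 = present. Invented layout. */
typedef uint64_t pte_t;
#define PTE_PRESENT (1ULL << 0)

bool pte_present(pte_t pte) { return pte & PTE_PRESENT; }
uint64_t pte_pfn(pte_t pte) { return pte >> 12; }

/*
 * Sketch of the property relied on above: a pte matches the walk only
 * if it is present and its pfn lies in [walk_pfn, walk_pfn + nr_pages).
 * uffd-wp swap entries and uffd-wp markers are non-present and carry
 * no pfn, so they never match and the unmap loop skips over them until
 * it reaches the next present entry.
 */
bool check_pte_sketch(pte_t pte, uint64_t walk_pfn, unsigned int nr_pages)
{
	if (!pte_present(pte))
		return false; /* swap entry or marker: no underlying pfn */
	return pte_pfn(pte) >= walk_pfn && pte_pfn(pte) < walk_pfn + nr_pages;
}
```

Under this toy model, a batch of entries converted to uffd-wp swap or
marker ptes fails the check, so the walk cannot revisit them.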