From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dev Jain
To: akpm@linux-foundation.org, david@kernel.org, hughd@google.com,
	chrisl@kernel.org
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com, kasong@tencent.com,
	qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org,
	axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
	riel@surriel.com, harry@kernel.org, jannh@google.com, pfalcato@suse.de,
	baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
	nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
	anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH v2 1/9] mm/rmap: initialize nr_pages to 1 at loop start in try_to_unmap_one
Date: Fri, 10 Apr 2026 16:01:56 +0530
Message-Id: <20260410103204.120409-2-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260410103204.120409-1-dev.jain@arm.com>
References: <20260410103204.120409-1-dev.jain@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Initialize nr_pages to 1 at the start of each loop iteration, like
folio_referenced_one() does. Without this, the nr_pages computed by a
previous folio_unmap_pte_batch() call can be reused on a later iteration
that does not run folio_unmap_pte_batch() again.

I don't think this causes a bug today, but it is fragile. A real bug
would require the following sequence within the same try_to_unmap_one()
call:

1. Hit the pte_present(pteval) branch and set nr_pages > 1.
2. Later hit the else branch, do pte_clear() for a device-exclusive PTE,
   and execute the rest of the code with nr_pages > 1.

Executing the above would imply that a lazyfree folio is mapped by a mix
of present PTEs and device-exclusive PTEs. In practice, device-exclusive
PTEs imply a GUP pin on the folio, and lazyfree unmapping aborts
try_to_unmap_one() when it detects that condition. So this likely does
not manifest today, but initializing nr_pages per iteration is still the
correct and safer behavior.

Signed-off-by: Dev Jain
---
 mm/rmap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 78b7fb5f367ce..62a8c912fd788 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1991,7 +1991,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	struct page *subpage;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
-	unsigned long nr_pages = 1, end_addr;
+	unsigned long nr_pages;
+	unsigned long end_addr;
 	unsigned long pfn;
 	unsigned long hsz = 0;
 	int ptes = 0;
@@ -2030,6 +2031,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
+		nr_pages = 1;
		/*
		 * If the folio is in an mlock()d vma, we must not swap it out.
		 */
-- 
2.34.1
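[Not part of the patch: as a standalone illustration, the stale-value hazard the change removes can be sketched in plain C. The struct and function names below are hypothetical stand-ins for the rmap walk, not kernel code; a "batched" slot models an iteration where folio_unmap_pte_batch() runs, a non-batched slot models one where it does not.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of one page_vma_mapped_walk() iteration: either the
 * batching helper runs (batched != 0, yielding batch_len pages) or the
 * iteration never computes a batch size at all, like the device-exclusive
 * branch in try_to_unmap_one(). */
struct pte_slot {
	int batched;
	size_t batch_len;
};

/* Pre-patch pattern: nr_pages is initialized once, before the loop, so a
 * non-batched iteration silently reuses the previous iteration's value. */
static size_t last_nr_buggy(const struct pte_slot *slots, size_t n)
{
	size_t nr_pages = 1;	/* set once, outside the loop */
	size_t last = 0;

	for (size_t i = 0; i < n; i++) {
		if (slots[i].batched)
			nr_pages = slots[i].batch_len;
		/* stale nr_pages leaks into non-batched iterations */
		last = nr_pages;
	}
	return last;
}

/* Patched pattern: nr_pages is re-initialized to 1 at the top of every
 * iteration, so each iteration starts from a known value. */
static size_t last_nr_fixed(const struct pte_slot *slots, size_t n)
{
	size_t last = 0;

	for (size_t i = 0; i < n; i++) {
		size_t nr_pages = 1;	/* per-iteration, as in the patch */

		if (slots[i].batched)
			nr_pages = slots[i].batch_len;
		last = nr_pages;
	}
	return last;
}
```

With a batched iteration (batch of 4) followed by a non-batched one, the pre-patch pattern reports 4 pages for the second iteration while the patched pattern correctly reports 1.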