From: Dev Jain
To: akpm@linux-foundation.org, axelrasmussen@google.com, yuanchu@google.com, david@kernel.org, hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: weixugc@google.com, ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org, rppt@kernel.org, surenb@google.com, mhocko@suse.com,
    riel@surriel.com, harry.yoo@oracle.com, jannh@google.com, pfalcato@suse.de, baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com, baohua@kernel.org, youngjun.park@lge.com, ziy@nvidia.com, kas@kernel.org, willy@infradead.org, yuzhao@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ryan.roberts@arm.com, anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH 2/9] mm/rmap: initialize nr_pages to 1 at loop start in try_to_unmap_one
Date: Tue, 10 Mar 2026 13:00:06 +0530
Message-Id: <20260310073013.4069309-3-dev.jain@arm.com>
In-Reply-To: <20260310073013.4069309-1-dev.jain@arm.com>
References: <20260310073013.4069309-1-dev.jain@arm.com>

Initialize nr_pages to 1 at the start of the loop, similar to what
folio_referenced_one() does. Otherwise, the nr_pages computed by a previous
call to folio_unmap_pte_batch() may be reused on a later iteration that does
not go through folio_unmap_pte_batch() again, leaving a stale batch size.

I don't think there is any bug right now. A bug could arise only if, within
a single call to try_to_unmap_one(), we took the pte_present(pteval) branch
on one iteration and then the else branch doing pte_clear() for a
device-exclusive pte on another. That would mean a lazyfree folio is mapped
by both present entries and device-exclusive entries; but since a pte being
device-exclusive implies that a GUP reference on the underlying folio is
held, the lazyfree unmapping path will notice this and abort
try_to_unmap_one().
Signed-off-by: Dev Jain
---
 mm/rmap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 087c9f5b884fe..1fa020edd954a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1982,7 +1982,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	unsigned long end_addr;
 	unsigned long pfn;
 	unsigned long hsz = 0;
-	long nr_pages = 1;
+	long nr_pages;
 	int ptes = 0;
 
 	/*
@@ -2019,6 +2019,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
+		nr_pages = 1;
+
 		/*
 		 * If the folio is in an mlock()d vma, we must not swap it out.
 		 */
-- 
2.34.1