From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 25 Jun 2025 20:58:12 +0800
MIME-Version: 1.0
Subject: Re: [PATCH v4 3/4] mm: Support batched unmap for lazyfree large folios during reclamation
Content-Language: en-US
To: David Hildenbrand, Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, baolin.wang@linux.alibaba.com,
 chrisl@kernel.org, kasong@tencent.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-riscv@lists.infradead.org, lorenzo.stoakes@oracle.com,
 ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org,
 ying.huang@intel.com, zhengtangquan@oppo.com, Lance Yang
References:
 <2c19a6cf-0b42-477b-a672-ed8c1edd4267@redhat.com>
 <20250624162503.78957-1-ioworker0@gmail.com>
 <27d174e0-c209-4851-825a-0baeb56df86f@redhat.com>
 <938c4726-b93e-46df-bceb-65c7574714a6@linux.dev>
 <5ba95609-302b-456a-a863-2bd5df51baf2@redhat.com>
 <6179dd30-5351-4a79-b0d6-f0e85650a926@redhat.com>
 <5db6fb4c-079d-4237-80b3-637565457f39@redhat.com>
From: Lance Yang <lance.yang@linux.dev>
In-Reply-To: <5db6fb4c-079d-4237-80b3-637565457f39@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 2025/6/25 20:09, David Hildenbrand wrote:
> On 25.06.25 13:42, Barry Song wrote:
>> On Wed, Jun 25, 2025 at 11:27 PM David Hildenbrand wrote:
>>>
>>> On 25.06.25 13:15, Barry Song wrote:
>>>> On Wed, Jun 25, 2025 at 11:01 PM David Hildenbrand wrote:
>>>>>
>>>>> On 25.06.25 12:57, Barry Song wrote:
>>>>>>>>
>>>>>>>> Note that I don't quite understand why we have to batch the whole
>>>>>>>> thing or fall back to individual pages. Why can't we perform other
>>>>>>>> batches that span only some PTEs? What's special about 1 PTE vs.
>>>>>>>> 2 PTEs vs. all PTEs?
>>>>>>>
>>>>>>> That's a good point about the "all-or-nothing" batching logic ;)
>>>>>>>
>>>>>>> It seems the "all-or-nothing" approach is specific to the lazyfree
>>>>>>> use case, which needs to unmap the entire folio for reclamation. If
>>>>>>> that's not possible, it falls back to the single-page slow path.
>>>>>>
>>>>>> Other cases advance the PTE themselves, while try_to_unmap_one()
>>>>>> relies on page_vma_mapped_walk() to advance the PTE. Unless we want
>>>>>> to manually modify pvmw.pte and pvmw.address outside of
>>>>>> page_vma_mapped_walk(), which to me seems like a violation of
>>>>>> layers. :-)
>>>>>
>>>>> Please explain to me why the following is not clearer and better:
>>>>
>>>> This part is much clearer, but that doesn't necessarily improve the
>>>> overall picture. The main challenge is how to exit the iteration of
>>>> while (page_vma_mapped_walk(&pvmw)).
>>>
>>> Okay, I get what you mean now.
>>>
>>>> Right now, we have it laid out quite straightforwardly:
>>>>
>>>>                 /* We have already batched the entire folio */
>>>>                 if (nr_pages > 1)
>>>>                         goto walk_done;
>>>
>>> Given that the comment is completely confusing when seeing the
>>> check ... :)
>>>
>>> /*
>>>  * If we are sure that we batched the entire folio and cleared all
>>>  * PTEs, we can just optimize and stop right here.
>>>  */
>>> if (nr_pages == folio_nr_pages(folio))
>>>         goto walk_done;
>>>
>>> would make the comment match.
>>
>> Yes, that clarifies it.
>>
>>>> with any nr between 1 and folio_nr_pages(), we have to consider two
>>>> issues:
>>>> 1. How to skip PTE checks inside page_vma_mapped_walk() for entries
>>>> that were already handled in the previous batch;
>>>
>>> They are cleared if we reach that point. So the pte_none() checks will
>>> simply skip them?
>>>
>>>> 2. How to break the iteration when this batch has arrived at the end.
>>>
>>> page_vma_mapped_walk() should be doing that?
>>
>> It seems you might have missed the part in my reply that says:
>> "Of course, we could avoid both, but that would mean performing
>> unnecessary checks inside page_vma_mapped_walk()."
>
>> That's true for both. But I'm wondering why we're still doing the
>> check, even when we're fairly sure they've already been cleared or
>> we've reached the end :-)
>
> :)
>
>> Somehow, I feel we could combine your cleanup code—which handles a
>> batch size of "nr" between 1 and nr_pages—with the
>> "if (nr_pages == folio_nr_pages(folio)) goto walk_done" check.
>
> Yeah, that's what I was suggesting. It would have to be part of the
> cleanup I think.
>
> I'm still wondering if there is a case where
>
> if (nr_pages == folio_nr_pages(folio))
>     goto walk_done;
>
> would be wrong when dealing with small folios.
We can make the check more explicit to avoid any future trouble ;)

if (nr_pages > 1 && nr_pages == folio_nr_pages(folio))
        goto walk_done;

It should be safe for small folios.

Thanks,
Lance

>
>> In practice, this would let us skip almost all unnecessary checks,
>> except for a few rare corner cases.
>>
>> For those corner cases where "nr" truly falls between 1 and nr_pages,
>> we can just leave them as-is—performing the redundant check inside
>> page_vma_mapped_walk().
>
> I mean, batching mapcount+refcount updates etc. is always a win. If we
> end up doing some unnecessary pte_none() checks, that might be
> suboptimal but mostly noise in contrast to the other stuff we will
> optimize out :)
>
> Agreed that if we can easily avoid these pte_none() checks, we should
> do that. Optimizing that for "nr_pages == folio_nr_pages(folio)" makes
> sense.
>