From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <6fa2581f-0df9-4cfb-a00c-d2cdbe86aeb1@arm.com>
Date: Tue, 12 May 2026 10:49:20 +0530
Subject: Re: [PATCH v3 3/9] mm/rmap: refactor some code around lazyfree folio
 unmapping
From: Dev Jain
To: "David Hildenbrand (Arm)", akpm@linux-foundation.org, ljs@kernel.org,
 hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: riel@surriel.com, liam@infradead.org, vbabka@kernel.org, harry@kernel.org,
 jannh@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org,
 axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
 rppt@kernel.org, surenb@google.com, mhocko@suse.com,
 baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com, nphamcs@gmail.com,
 bhe@redhat.com, youngjun.park@lge.com, pfalcato@suse.de,
 ryan.roberts@arm.com, anshuman.khandual@arm.com
References: <20260506094504.2588857-1-dev.jain@arm.com>
 <20260506094504.2588857-4-dev.jain@arm.com>
Content-Type: text/plain; charset=UTF-8

On 11/05/26 12:58 pm, David Hildenbrand (Arm) wrote:
> On 5/6/26 11:44, Dev Jain wrote:
>> For lazyfree folio unmapping, after clearing the PTEs we must abort the
>> operation if the folio was dirtied or has unexpected references.
>>
>> Refactor this logic into a function that returns whether we need to
>> abort.
>>
>> If we abort, we restore the PTEs and bail out of try_to_unmap_one().
>> Otherwise, adjust the RSS stats of the mm and jump to a label.
>>
>> Also rename that label from "discard" to "finish_unmap"; the former
>> is appropriate in the lazyfree context, but the code following the label
>> is executed for other successful unmap code paths too, so 'discard' does
>> not sound correct for them.
>>
>> Signed-off-by: Dev Jain
>> ---
>>   mm/rmap.c | 95 ++++++++++++++++++++++++++++++++-----------------------
>>   1 file changed, 55 insertions(+), 40 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index a98acdea0530a..bd4e3639e26ed 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1978,6 +1978,56 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
>>   			FPB_RESPECT_WRITE | FPB_RESPECT_SOFT_DIRTY);
>>   }
>>
>> +static inline bool can_unmap_lazyfree_folio_range(struct vm_area_struct *vma,
>> +		struct folio *folio, unsigned long address, pte_t *ptep,
>> +		pte_t pteval, unsigned long nr_pages)
>
> Similar comment: ttu_...*

Ack

>
>> +{
>> +	struct mm_struct *mm = vma->vm_mm;
>> +	int ref_count, map_count;
>> +
>> +	/*
>> +	 * Synchronize with gup_pte_range():
>> +	 * - clear PTE; barrier; read refcount
>> +	 * - inc refcount; barrier; read PTE
>> +	 */
>> +	smp_mb();
>> +
>> +	ref_count = folio_ref_count(folio);
>> +	map_count = folio_mapcount(folio);
>> +
>> +	/*
>> +	 * Order reads for page refcount and dirty flag
>> +	 * (see comments in __remove_mapping()).
>> +	 */
>> +	smp_rmb();
>> +
>> +	if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
>> +		/*
>> +		 * redirtied either using the page table or a previously
>> +		 * obtained GUP reference.
>> +		 */
>> +		set_ptes(mm, address, ptep, pteval, nr_pages);
>> +		folio_set_swapbacked(folio);
>> +		return false;
>> +	}
>> +
>> +	if (ref_count != 1 + map_count) {
>> +		/*
>> +		 * Additional reference. Could be a GUP reference or any
>> +		 * speculative reference. GUP users must mark the folio
>> +		 * dirty if there was a modification. This folio cannot be
>> +		 * reclaimed right now either way, so act just like nothing
>> +		 * happened.
>> +		 * We'll come back here later and detect if the folio was
>> +		 * dirtied when the additional reference is gone.
>> +		 */
>> +		set_ptes(mm, address, ptep, pteval, nr_pages);
>> +		return false;
>> +	}
>> +
>> +	return true;
>
> Doing the set_ptes() in a function called "can_unmap_lazyfree_folio_range"
> is not appropriate.
>
> Can we just leave that in the caller? We only do that when we return false.
>
> And hey, then you can call this function ttu_can_unmap_lazyfree_folio() and
> avoid passing pte ranges. :)

Yep, great, I'll do that.