Subject: Re: [PATCH v3 8/9] mm/rmap: Add batched version of folio_try_share_anon_rmap_pte
Date: Tue, 12 May 2026 14:27:14 +0530
From: Dev Jain
To: "David Hildenbrand (Arm)", akpm@linux-foundation.org, ljs@kernel.org, hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: riel@surriel.com, liam@infradead.org, vbabka@kernel.org, harry@kernel.org, jannh@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, rppt@kernel.org, surenb@google.com, mhocko@suse.com, baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com, pfalcato@suse.de, ryan.roberts@arm.com, anshuman.khandual@arm.com
References: <20260506094504.2588857-1-dev.jain@arm.com> <20260506094504.2588857-9-dev.jain@arm.com> <95b8224d-3ed4-4fe3-9954-d5ba0aa373f8@kernel.org>
In-Reply-To: <95b8224d-3ed4-4fe3-9954-d5ba0aa373f8@kernel.org>
On 11/05/26 1:43 pm, David Hildenbrand (Arm) wrote:
> On 5/6/26 11:45, Dev Jain wrote:
>> To enable batched unmapping of anonymous folios, we need to handle the
>> sharing of exclusive pages. Hence, a batched version of
>> folio_try_share_anon_rmap_pte is required.
>>
>> Currently, the sole purpose of nr_pages in __folio_try_share_anon_rmap is
>> to do some rmap sanity checks. Add helpers to clear the PageAnonExclusive
>> bit on a batch of nr_pages. Note that __folio_try_share_anon_rmap can
>> receive nr_pages == HPAGE_PMD_NR from the PMD path, but currently we only
>> clear the bit on the head page. Retain this behaviour by setting
>> nr_pages = 1 in case the caller is folio_try_share_anon_rmap_pmd.
>>
>> While at it, convert nr_pages to unsigned long to future-proof against
>> overflow in case P4D-huge mappings etc. get supported down the road.
>> I haven't made such a change in each function receiving nr_pages in
>> try_to_unmap_one - perhaps this can be done incrementally.
>>
>> Signed-off-by: Dev Jain
>> ---
>>   include/linux/mm.h   | 11 +++++++++++
>>   include/linux/rmap.h | 27 ++++++++++++++++++++-------
>>   2 files changed, 31 insertions(+), 7 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 31e27ff6a35fa..0b77329cf57a4 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -243,6 +243,17 @@ static inline unsigned long folio_page_idx(const struct folio *folio,
>>       return page - &folio->page;
>>   }
>>
>> +static __always_inline void folio_clear_pages_anon_exclusive(struct page *page,
>> +                                                             unsigned long nr_pages)
>> +{
>> +    for (;;) {
>> +        ClearPageAnonExclusive(page);
>> +        if (--nr_pages == 0)
>> +            break;
>> +        ++page;
>> +    }
>> +}
>
> Something called folio that doesn't consume a folio is odd. I'd prefer we don't add this.
>
> Is there a chance to simply change __folio_try_share_anon_rmap, so we get a single loop
> inline?
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 8dc0871e5f00..5a1c874b2112 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -708,16 +708,13 @@ static inline int folio_try_dup_anon_rmap_pmd(struct folio *folio,
>  static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
>                 struct page *page, int nr_pages, enum pgtable_level level)
>  {
> +       /* device private folios cannot get pinned via GUP. */
> +       const bool pinnable = !folio_is_device_private(folio);
> +
>         VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
>         VM_WARN_ON_FOLIO(!PageAnonExclusive(page), folio);
>         __folio_rmap_sanity_checks(folio, page, nr_pages, level);
>
> -       /* device private folios cannot get pinned via GUP. */
> -       if (unlikely(folio_is_device_private(folio))) {
> -               ClearPageAnonExclusive(page);
> -               return 0;
> -       }
> -
>         /*
>          * We have to make sure that when we clear PageAnonExclusive, that
>          * the page is not pinned and that concurrent GUP-fast won't succeed in
> @@ -760,19 +757,21 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
>          * so we use explicit ones here.
>          */
>
> -       /* Paired with the memory barrier in try_grab_folio(). */
> -       if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
> -               smp_mb();
> +       if (pinnable) {
> +               /* Paired with the memory barrier in try_grab_folio(). */
> +               if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
> +                       smp_mb();
>
> -       if (unlikely(folio_maybe_dma_pinned(folio)))
> -               return -EBUSY;
> +               if (unlikely(folio_maybe_dma_pinned(folio)))
> +                       return -EBUSY;
> +       }
>         ClearPageAnonExclusive(page);
>
>         /*
>          * This is conceptually a smp_wmb() paired with the smp_rmb() in
>          * gup_must_unshare().
>          */
> -       if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
> +       if (pinnable && IS_ENABLED(CONFIG_HAVE_GUP_FAST))
>                 smp_mb__after_atomic();
>         return 0;

Okay, this is better! I'll do this.
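
Something like the following, then (a rough, untested sketch on top of
your diff; as in my patch, the PMD path would keep passing nr_pages = 1
so only the head page gets cleared there, and nr_pages would carry the
unsigned long conversion from this series):

        /* Clear PageAnonExclusive on each page of the batch. */
        for (; nr_pages; nr_pages--, page++)
                ClearPageAnonExclusive(page);

i.e. replacing the single ClearPageAnonExclusive(page) between the
pinned check and the smp_mb__after_atomic().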