Date: Thu, 23 Dec 2021 00:21:06 +0000
From: Matthew Wilcox <willy@infradead.org>
To: David Hildenbrand
Cc: Jan Kara, Jason Gunthorpe, Linus Torvalds, Nadav Amit,
    Linux Kernel Mailing List, Andrew Morton, Hugh Dickins, David Rientjes,
    Shakeel Butt, John Hubbard, Mike Kravetz, Mike Rapoport, Yang Shi,
    "Kirill A. Shutemov", Vlastimil Babka, Jann Horn, Michal Hocko,
    Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu, Donald Dutile,
    Christoph Hellwig, Oleg Nesterov, Linux-MM,
    "open list:KERNEL SELFTEST FRAMEWORK", "open list:DOCUMENTATION"
Subject: Re: [PATCH v1 06/11] mm: support GUP-triggered unsharing via FAULT_FLAG_UNSHARE (!hugetlb)
In-Reply-To: <4a28e8a0-2efa-8b5e-10b5-38f1fc143a98@redhat.com>
References: <20211221010312.GC1432915@nvidia.com>
 <900b7d4a-a5dc-5c7b-a374-c4a8cc149232@redhat.com>
 <20211221190706.GG1432915@nvidia.com>
 <3e0868e6-c714-1bf8-163f-389989bf5189@redhat.com>
 <20211222124141.GA685@quack2.suse.cz>
 <4a28e8a0-2efa-8b5e-10b5-38f1fc143a98@redhat.com>

On Wed, Dec 22, 2021 at 02:09:41PM +0100, David Hildenbrand wrote:
> Right, from an API perspective we really want people to use FOLL_PIN.
>
> To optimize this case in particular it would help if we would have the
> FOLL flags on the unpin path. Then we could just decide internally
> "well, short-term R/O FOLL_PIN can be really lightweight, we can treat
> this like a FOLL_GET instead". And we would need that as well if we were
> to keep different counters for R/O vs. R/W pinned.

FYI, in my current tree, there's a gup_put_folio() which replaces
put_compound_head():

static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
{
        if (flags & FOLL_PIN) {
                node_stat_mod_folio(folio, NR_FOLL_PIN_RELEASED, refs);
                if (hpage_pincount_available(&folio->page))
                        hpage_pincount_sub(&folio->page, refs);
                else
                        refs *= GUP_PIN_COUNTING_BIAS;
        }

        folio_put_refs(folio, refs);
}

That can become non-static if it's needed. I'm still working on that
series, because I'd like to get it to a point where we return one folio
pointer instead of N page pointers. Not quite there yet.