From: David Hildenbrand <david@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Fuad Tabba <tabba@google.com>,
	Christoph Hellwig <hch@infradead.org>,
	John Hubbard <jhubbard@nvidia.com>,
	Elliot Berman <quic_eberman@quicinc.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Shuah Khan <shuah@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	maz@kernel.org, kvm@vger.kernel.org,
	linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	pbonzini@redhat.com
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning
Date: Thu, 20 Jun 2024 20:53:07 +0200
Message-ID: <66a285fc-e54e-4247-8801-e7e17ad795a6@redhat.com>
In-Reply-To: <20240620163626.GK2494510@nvidia.com>

On 20.06.24 18:36, Jason Gunthorpe wrote:
> On Thu, Jun 20, 2024 at 04:45:08PM +0200, David Hildenbrand wrote:
> 
>> If we could disallow pinning any shared pages, that would make life a lot
>> easier, but I think there were reasons why we might require it. To
>> convert shared->private, simply unmap that folio (only the shared parts
>> could possibly be mapped) from all user page tables.
> 
> IMHO it should be reasonable to make it work like ZONE_MOVABLE and
> FOLL_LONGTERM. Making a shared page private is really no different
> from moving it.
> 
> And if you have built a VMM that uses VMA-mapped shared pages and
> short-term pinning, then you should really also ensure that the VM is
> aware when the pins go away. For instance, if you are doing some virtio
> thing with O_DIRECT pinning, then the guest will know the pins are gone
> when it observes virtio completions.
> 
> In this way making private is just like moving, we unmap the page and
> then drive the refcount to zero, then move it.

Yes, but here is the catch: what if a single shared subpage of a large 
folio is (validly) longterm pinned and you want to convert another 
shared subpage to private?

Sure, we can unmap the whole large folio (including all shared parts) 
before the conversion, just like we would do for migration. But we 
cannot detect whether anybody still holds a pin on the subpage that we 
want to convert to private.

Core-mm does not, and will not, track pins per subpage.
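
To illustrate (can_convert_to_private() is made up; 
folio_maybe_dma_pinned() is the existing helper), the best core-mm can 
tell us is whether *some* part of the folio might be pinned:

#include <linux/mm.h>

/*
 * Conceptual sketch only: pin tracking is folio-wide, so we can answer
 * "is some part of this folio maybe pinned?", but never "is *this*
 * subpage pinned?".
 */
static bool can_convert_to_private(struct folio *folio, pgoff_t subpage)
{
        /*
         * folio_maybe_dma_pinned() inspects the folio-wide refcount
         * (or pincount for large folios); a longterm pin on any one
         * subpage makes the whole folio look pinned, so @subpage
         * cannot help us here.
         */
        if (folio_maybe_dma_pinned(folio))
                return false;
        return true;
}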

So I only see two options:

a) Disallow long-term pinning. That means we can, with a bit of
    waiting, always convert subpages shared->private after unmapping
    them and waiting for any short-term pins to go away. Not too bad,
    and we already have other mechanisms that disallow long-term
    pinning (especially for writable fs mappings!). A rough sketch of
    what that could look like follows right after this list.

b) Expose the large folio as multiple 4k folios to the core-mm.
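
For a), the GUP-time check could simply refuse FOLL_LONGTERM for these 
folios, just like ZONE_MOVABLE and CMA folios are refused today. A 
sketch (folio_test_guestmem() is made up; folio_is_longterm_pinnable() 
is the existing check GUP consults):

#include <linux/mm.h>

static bool gmem_folio_is_longterm_pinnable(struct folio *folio)
{
        /* hypothetical flag marking guest_memfd-backed folios */
        if (folio_test_guestmem(folio))
                return false;
        /* existing policy: rejects e.g. ZONE_MOVABLE and CMA folios */
        return folio_is_longterm_pinnable(folio);
}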


b) would look as follows: we allocate a gigantic page from the (hugetlb) 
reserve into guest_memfd. Then, we break it down into individual 4k 
folios by splitting/demoting the folio. We make sure that all 4k folios 
are unmovable (raised refcount). We track internally that these 4k 
folios comprise a single gigantic page.
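
A conceptual sketch of that bookkeeping (all names made up, and the 
split/demote primitive for such folios would still have to be written; 
folio_get()/page_folio() are the existing helpers):

#include <linux/mm.h>

/* guest_memfd-internal record: these 4k folios form one gigantic page */
struct gmem_giant {
        struct page     *first; /* first 4k page of the gigantic page */
        unsigned long   nr;     /* e.g. 1 GiB / 4 KiB = 262144 */
};

static void gmem_make_unmovable(struct gmem_giant *g)
{
        unsigned long i;

        /* one extra reference per 4k folio makes migration bail out */
        for (i = 0; i < g->nr; i++)
                folio_get(page_folio(g->first + i));
}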

Core-mm can then track GUP pins and page table mappings per (previously 
subpage, now) small folio for us, without any modifications.

Once we unmap the gigantic page from guest_memfd, we reconstruct the 
gigantic page and hand it back to the reserve (only possible once all 
pins are gone).
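
Again just a sketch, continuing the made-up helpers from above: we can 
only reconstruct once every 4k folio is down to the single reference we 
took in gmem_make_unmovable(), i.e., all pins and mappings are gone:

static bool gmem_can_reconstruct(struct gmem_giant *g)
{
        unsigned long i;

        for (i = 0; i < g->nr; i++) {
                /* more than our own reference: someone still uses it */
                if (folio_ref_count(page_folio(g->first + i)) > 1)
                        return false;
        }
        return true;
}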

We can still map the whole thing into the KVM guest+iommu using a single 
large unit, because guest_memfd knows the origin/relationship of these 
pages. But we would only map individual pages into user page tables 
(unless we use large VM_PFNMAP mappings, but then pinning would not work 
either, so that's likely also not what we want).

The downside is that we won't benefit from hugetlb's vmemmap 
optimization for large folios, and that we have more tracking overhead 
when mapping individual pages into user page tables.

OTOH, maybe we really *need* per-page tracking and this might be the 
simplest way forward, making GUP and friends just work naturally with it.

> 
>>> I'm kind of surprised the CC folks don't want the same thing for
>>> exactly the same reason. It is much easier to recover the huge
>>> mappings for the S2 in the presence of shared holes if you track it
>>> this way. Even CC will have this problem, to some degree, too.
>>
>> Precisely! RH (and therefore, me) is primarily interested in existing
>> guest_memfd users at this point ("CC"), and I don't see an easy way to get
>> that running with huge pages in the existing model reasonably well ...
> 
> IMHO it is an important topic so I'm glad you are thinking about it.

Thank my manager ;)

> 
> There is definitely some overlap here where if you do teach
> guest_memfd about huge pages then you must also provide a way to map
> the fragments of them that have become shared. I think there is little
> option here unless you double allocate and/or destroy the performance
> properties of the huge pages.

Right, and that's not what we want.

> 
> It is just the nature of our system that shared pages must be in VMAs
> and must be copy_to/from_user/GUP'able/etc.

Right. Longterm GUP is not a real requirement.

-- 
Cheers,

David / dhildenb



