From: David Hildenbrand <david@redhat.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>, Fuad Tabba <tabba@google.com>,
Christoph Hellwig <hch@infradead.org>,
John Hubbard <jhubbard@nvidia.com>,
Elliot Berman <quic_eberman@quicinc.com>,
Andrew Morton <akpm@linux-foundation.org>,
Shuah Khan <shuah@kernel.org>,
Matthew Wilcox <willy@infradead.org>,
maz@kernel.org, kvm@vger.kernel.org,
linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
pbonzini@redhat.com
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning
Date: Thu, 20 Jun 2024 20:56:20 +0200 [thread overview]
Message-ID: <bf8e96be-c6e7-40c9-a914-cd022d1fd056@redhat.com> (raw)
In-Reply-To: <ZnRTDUqLQ4XBRykl@google.com>
On 20.06.24 18:04, Sean Christopherson wrote:
> On Thu, Jun 20, 2024, David Hildenbrand wrote:
>> On 20.06.24 16:29, Jason Gunthorpe wrote:
>>> On Thu, Jun 20, 2024 at 04:01:08PM +0200, David Hildenbrand wrote:
>>>> On 20.06.24 15:55, Jason Gunthorpe wrote:
>>>>> On Thu, Jun 20, 2024 at 09:32:11AM +0100, Fuad Tabba wrote:
>>>> Regarding huge pages: assume the huge page (e.g., 1 GiB hugetlb) is shared,
>>>> and now the VM requests to make one subpage private.
>>>
>>> I think the general CC model has the shared/private setup earlier in
>>> the VM lifecycle, with large runs of contiguous pages. It would only
>>> become a problem if you intend to do high-rate, fine-granularity
>>> shared/private switching, which is why I am asking what the actual
>>> "why" is here.
>>
>> I am not an expert on that, but I remember that the way memory
>> shared<->private conversion happens can heavily depend on the VM use case,
>
> Yeah, I forget the details, but there are scenarios where the guest will share
> (and unshare) memory at 4KiB (give or take) granularity, at runtime. There's an
> RFC[*] for making SWIOTLB operate at 2MiB granularity that is driven by the same
> underlying problems.
>
> But even if Linux-as-a-guest were better behaved, we (the host) can't prevent the
> guest from doing suboptimal conversions. In practice, killing the guest or
> refusing to convert memory isn't an option, i.e. we can't completely push the
> problem into the guest.
Agreed!
>
> https://lore.kernel.org/all/20240112055251.36101-1-vannapurve@google.com
>
>> and that under pKVM we might see more frequent conversion, without even
>> going to user space.
>>
>>>
>>>> How to handle that without eventually running into a double
>>>> memory-allocation? (in the worst case, allocating a 1GiB huge page
>>>> for shared and for private memory).
>>>
>>> I expect you'd take the linear range of 1G of PFNs and fragment it
>>> into three ranges private/shared/private that span the same 1G.
>>>
>>> When you construct a page table (i.e., an S2) that holds these three
>>> ranges and has permission to access all the memory, you want the page
>>> table to automatically join them back together into a 1GB entry.
>>>
>>> When you construct a page table that only has access to the shared
>>> parts, then you'd only install the shared hole at its natural best size.
>>>
>>> So, I think there are two challenges: how to build an allocator and
>>> uAPI to manage this sort of thing, so you can keep track of any
>>> fractured PFNs and ensure things remain in physical order.
>>>
>>> Then how to re-consolidate this for the KVM side of the world.
>>
>> Exactly!
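
Purely to illustrate the "keep track of fractured PFNs in physical order"
part, here is a rough sketch of one possible shape for that tracking.
Everything below is made up for illustration, not a proposal for the
actual data structures:

/*
 * Sketch: a physically contiguous 1 GiB run, fragmented into a few
 * private/shared segments that stay sorted by PFN, so the original
 * 1 GiB extent can always be reconstructed.
 */
enum gmem_seg_state { GMEM_PRIVATE, GMEM_SHARED };

struct gmem_segment {
        unsigned long start_pfn;
        unsigned long nr_pages;
        enum gmem_seg_state state;
};

struct gmem_run {
        unsigned long base_pfn;         /* 1 GiB aligned, physically contiguous */
        unsigned int nr_segments;
        struct gmem_segment seg[16];    /* sorted, non-overlapping, covers the run */
};

/*
 * A page table that may access the whole run (e.g. the "full" S2) can
 * install a single 1 GiB entry; one restricted to the shared parts
 * installs each shared segment at its natural best size.
 */
static bool gmem_run_is_uniform(const struct gmem_run *run,
                                enum gmem_seg_state state)
{
        unsigned int i;

        for (i = 0; i < run->nr_segments; i++)
                if (run->seg[i].state != state)
                        return false;
        return true;
}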
>>
>>>
>>> guest_memfd, or something like it, is really just a good answer. You
>>> have it obtain the huge folio, and keep track on its own of which
>>> subpages can be mapped to a VMA because they are shared. KVM will obtain
>>> the PFNs directly from the fd and KVM will not see the shared
>>> holes. This means your S2s can be trivially constructed correctly.
>>>
>>> No need to double-allocate.
>>
>> Yes, that's why my thinking so far was:
>>
>> Let guest_memfd (or something like that) consume huge pages (somehow, let it
>> access the hugetlb reserves). Preallocate that memory once, as the VM starts
>> up: just like we do with hugetlb in VMs.
>>
>> Let KVM track which parts are shared/private, and if required, let it map
>> only the shared parts to user space. KVM has all information to make these
>> decisions.
>>
>> If we could disallow pinning any shared pages, that would make life a lot
>> easier, but I think there were reasons why we might require it. To
>> convert shared->private, simply unmap that folio (only the shared parts
>> could possibly be mapped) from all user page tables.
>>
>> Of course, there might be alternatives, and I'll be happy to learn about
>> them. The allocator part would be fairly easy, and the uAPI part would
>> be comparably easy. So far the theory. :)
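
Just to make the conversion flow a bit more concrete, a rough sketch of
what I have in mind. The gmem_* names and fields are invented for
illustration; only unmap_mapping_range() and the bitmap helpers are
existing kernel interfaces:

/*
 * Sketch: guest_memfd tracks which base pages of a huge folio are
 * currently shared. Converting shared->private only has to zap the
 * (shared-only) user-space mappings of that range and clear the bits.
 */
struct gmem_hugepage {
        unsigned long *shared_bitmap;   /* one bit per base page */
        pgoff_t pgoff;                  /* offset of the folio in the file */
        unsigned int nr_pages;          /* e.g. 262144 for 1 GiB */
};

static void gmem_convert_to_private(struct address_space *mapping,
                                    struct gmem_hugepage *hp,
                                    unsigned int first, unsigned int nr)
{
        /* Only the shared parts can possibly be mapped into user space. */
        unmap_mapping_range(mapping,
                            (loff_t)(hp->pgoff + first) << PAGE_SHIFT,
                            (loff_t)nr << PAGE_SHIFT, 1);

        bitmap_clear(hp->shared_bitmap, first, nr);

        /* KVM/IOMMU invalidation and S2 rebuild would be driven from here. */
}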
>>
>>>
>>> I'm kind of surprised the CC folks don't want the same thing for
>>> exactly the same reason. It is much easier to recover the huge
>>> mappings for the S2 in the presence of shared holes if you track it
>>> this way. Even CC will have this problem, to some degree, too.
>>
>> Precisely! RH (and therefore I) am primarily interested in the existing
>> guest_memfd users at this point ("CC"), and I don't see an easy way to get
>> that running reasonably well with huge pages in the existing model ...
>
> This is the general direction guest_memfd is headed, but getting there is easier
> said than done. E.g. as alluded to above, "simply unmap that folio" is quite
> difficult, bordering on infeasible if the kernel is allowed to gup() shared
> guest_memfd memory.
Right. I think the ways forward are the ones stated in my mail to Jason:
disallow long-term GUP, or expose the huge page as unmovable small folios
to core-mm.

Maybe there are other alternatives, but it all feels like we want the MM
to track at small-page granularity, but map the memory into the KVM/IOMMU
page tables using large pages.
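
Roughly, and only as a sketch (the helper below is invented; nothing in
it is an existing interface), the KVM/IOMMU side could then derive the
largest usable mapping size from that small-granularity tracking by
checking whether the naturally aligned 2M/1G block around a page has a
uniform shared/private state:

/*
 * Sketch: find the largest naturally aligned block around @idx in which
 * every base page has the same shared/private state, so the S2 / IOMMU
 * page table can use a 2M or 1G block mapping for it.
 */
static unsigned int gmem_max_mapping_order(const unsigned long *shared_bitmap,
                                           unsigned int nr_pages,
                                           unsigned int idx)
{
        const bool shared = !!test_bit(idx, shared_bitmap);
        unsigned int order = 0;
        unsigned int next, start, i;

        /* Candidate orders with 4k base pages: 9 (2M) and 18 (1G). */
        for (next = 9; next <= 18; next += 9) {
                start = round_down(idx, 1U << next);
                if (start + (1U << next) > nr_pages)
                        break;
                for (i = start; i < start + (1U << next); i++)
                        if (!!test_bit(i, shared_bitmap) != shared)
                                return order;
                order = next;
        }
        return order;
}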
--
Cheers,
David / dhildenb