From: David Hildenbrand <david@redhat.com>
To: Quentin Perret <qperret@google.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	Fuad Tabba <tabba@google.com>,
	kvm@vger.kernel.org, kvmarm@lists.linux.dev, pbonzini@redhat.com,
	chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com,
	viro@zeniv.linux.org.uk, brauner@kernel.org,
	akpm@linux-foundation.org, xiaoyao.li@intel.com,
	yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com,
	mic@digikod.net, vbabka@suse.cz, vannapurve@google.com,
	ackerleytng@google.com, mail@maciej.szmigiero.name,
	michael.roth@amd.com, wei.w.wang@intel.com,
	liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_mnalajal@quicinc.com,
	quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
	quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, catalin.marinas@arm.com,
	james.morse@arm.com, yuzenghui@huawei.com,
	oliver.upton@linux.dev, maz@kernel.org, will@kernel.org,
	keirf@google.com, linux-mm@kvack.org
Subject: Re: folio_mmapped
Date: Thu, 29 Feb 2024 11:04:09 +0100
Message-ID: <fc486cb4-0fe3-403f-b5e6-26d2140fcef9@redhat.com>
In-Reply-To: <Zd82V1aY-ZDyaG8U@google.com>

On 28.02.24 14:34, Quentin Perret wrote:
> On Wednesday 28 Feb 2024 at 14:00:50 (+0100), David Hildenbrand wrote:
>>>>> To add a layer of paint to the shed, the usage of SIGBUS for
>>>>> something that is really a permission access problem doesn't feel
>>>>
>>>> SIGBUS stands for "BUS error (bad memory access)."
>>>>
>>>> Which makes sense, if you try accessing something that can no longer be
>>>> accessed. It's now inaccessible. Even if only temporarily.
>>>>
>>>> Just like a page with an MCE error. Swapin errors. Etc. You cannot access
>>>> it.
>>>>
>>>> It might be a permission problem on the pKVM side, but it's not the
>>>> traditional "permission problem" as in mprotect() and friends. You cannot
>>>> resolve that permission problem yourself. It's a higher entity that turned
>>>> that memory inaccessible.
>>>
>>> Well that's where I'm not sure to agree. Userspace can, in fact, get
>>> back all of that memory by simply killing the protected VM. With the
>>
>> Right, but that would likely "wipe" the pages so they can be made accessible
>> again, right?
> 
> Yep, indeed.
> 
>> That's the whole point why we are handing the pages over to the "higher
>> entity", and allow someone else (the VM) to turn them into a state where we
>> can no longer read them.
>>
>> (if you follow the other discussion, it would actually be nice if we could
>> read them and would get encrypted content back, like s390x does; but that's
>> a different discussion and I assume pretty much out of scope :) )
> 
> Interesting, I'll read up. On a side note, I'm also considering adding a
> guest-facing hypervisor interface to let the guest decide to opt out of
> the hypervisor wipe as discussed above. That would be useful for a guest
> that is shutting itself down (which could be cooperating with the host
> Linux) and that knows it has erased its secrets. That is in general
> difficult to do for an OS, but a simple approach could be to poison all
> its memory (or maybe encrypt it?) before opting out of that wipe.
> 
> The hypervisor wipe is done in hypervisor context (obviously), which is
> non-preemptible, so avoiding wiping (or encrypting) loads of memory
> there is highly desirable. Also pKVM doesn't have a linear map of all
> memory for security reasons, so we need to map/unmap the pages one by
> one, which sucks as much as it sounds.
> 
> But yes, we're digressing, that is all for later :)

:) Sounds like an interesting optimization.

An alternative would be to remember in pKVM that a page needs a wipe 
before it can be re-accessed. Once re-accessed by anybody (hypervisor or 
new guest), it first has to be wiped by pKVM.

... but that also sounds complicated and similarly requires the linear 
map+unmap in pKVM page-by-page as the pages are reused. But at least a 
guest shutdown would be faster.
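
Purely to illustrate the idea, a made-up, self-contained sketch of such
deferred-wipe tracking (pKVM tracks page state quite differently, and
would have to map+unmap each page around the memset):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define NR_PAGES	1024
#define PAGE_SIZE	4096
#define BITS_PER_WORD	(8 * sizeof(unsigned long))

static unsigned long needs_wipe[NR_PAGES / BITS_PER_WORD];

/* Marked at guest teardown time instead of wiping right away. */
static void mark_needs_wipe(size_t pfn)
{
	needs_wipe[pfn / BITS_PER_WORD] |= 1UL << (pfn % BITS_PER_WORD);
}

static bool test_and_clear_needs_wipe(size_t pfn)
{
	unsigned long *word = &needs_wipe[pfn / BITS_PER_WORD];
	unsigned long bit = 1UL << (pfn % BITS_PER_WORD);
	bool was_set = *word & bit;

	*word &= ~bit;
	return was_set;
}

/*
 * Called when a page previously donated to a (now dead) protected guest
 * is handed back to the hypervisor or to a new guest: the wipe is
 * deferred until this point rather than done during teardown.
 */
static void reclaim_page(void *va, size_t pfn)
{
	if (test_and_clear_needs_wipe(pfn))
		memset(va, 0, PAGE_SIZE);
}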

> 
>>> approach suggested here, the guestmem pages are entirely accessible to
>>> the host until they are attached to a running protected VM which
>>> triggers the protection. It is very much userspace saying "I promise not
>>> to touch these pages from now on" when it does that, in a way that I
>>> personally find very comparable to the mprotect case. It is not some
>>> other entity that pulls the carpet from under userspace's feet, it is
>>> userspace being inconsistent with itself that causes the issue here, and
>>> that's why SIGBUS feels kinda wrong as it tends to be used to report
>>> external errors of some sort.
>>
>> I recall that user space can also trigger SIGBUS when doing some
>> mmap()+truncate() thingies, and probably a bunch more, that could be fixed
>> up later.
> 
> Right, so that probably still falls into the "there is no page" bucket
> rather than the "there is a page that is already accounted against the
> userspace process, but it doesn't have the permission to access it"
> bucket. But yes that's probably an infinite debate.

Yes, we should rather focus on the bigger idea: have inaccessible memory 
where access fails at page-fault time instead of at mmap() time.
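
Roughly what I have in mind on the fault path, as a sketch only;
gmem_folio_is_shared() is a made-up helper for whatever shared/private
tracking we'd end up using:

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Sketch: a guest_memfd ->fault handler that installs shared pages and
 * fails the fault (-> SIGBUS) for guest-private ones. A real version
 * would also have to clear/prepare freshly allocated folios.
 */
static vm_fault_t gmem_fault(struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vmf->vma->vm_file);
	struct folio *folio;

	folio = filemap_grab_folio(inode->i_mapping, vmf->pgoff);
	if (IS_ERR(folio))
		return VM_FAULT_SIGBUS;

	if (!gmem_folio_is_shared(folio)) {
		/* Private to the guest: don't map it, user space gets a signal. */
		folio_unlock(folio);
		folio_put(folio);
		return VM_FAULT_SIGBUS;
	}

	vmf->page = folio_file_page(folio, vmf->pgoff);
	return VM_FAULT_LOCKED;
}

static const struct vm_operations_struct gmem_vm_ops = {
	.fault = gmem_fault,
};

If we later want the "shared back as RO" case, the same handler could
allow read faults and only fail write faults with a signal.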

> 
>> I don't see a problem with SIGBUS here, but I do understand your view. I
>> consider the exact signal a minor detail, though.
>>
>>>
>>>>> appropriate. Allocating memory via guestmem and donating that to a
>>>>> protected guest is a way for userspace to voluntarily relinquish access
>>>>> permissions to the memory it allocated. So a userspace process violating
>>>>> that could, IMO, reasonably expect a SEGV instead of SIGBUS. By the
>>>>> point that signal would be sent, the page would have been accounted
>>>>> against that userspace process, so not sure the paging examples that
>>>>> were discussed earlier are exactly comparable. To illustrate that
>>>>> differently, given that pKVM and Gunyah use MMU-based protection, there
>>>>> is nothing architecturally that prevents a guest from sharing a page
>>>>> back with Linux as RO.
>>>>
>>>> Sure, then allow page faults that allow for reads and give a signal on write
>>>> faults.
>>>>
>>>> In that scenario, it makes even more sense to not constantly require new
>>>> mmap()s from user space just to access a now-shared page.
>>>>
>>>>> Note that we don't currently support this, so I
>>>>> don't want to conflate this use case, but that hopefully makes it a
>>>>> little more obvious that this is a "there is a page, but you don't
>>>>> currently have the permission to access it" problem rather than "sorry
>>>>> but we ran out of pages" problem.
>>>>
>>>> We could use other signals, at least as long as the semantics are clear and it's
>>>> documented. Maybe SIGSEGV would be warranted.
>>>>
>>>> I consider that a minor detail, though.
>>>>
>>>> Requiring mmap()/munmap() dances just to access a page that is now shared
>>>> from user space sounds a bit suboptimal. But I don't know all the details of
>>>> the user space implementation.
>>>
>>> Agreed, if we could save having to mmap() each page that gets shared
>>> back that would be a nice performance optimization.
>>>
>>>> "mmap() the whole thing once and only access what you are supposed to
>>>> access" sounds reasonable to me. If you don't play by the rules, you get a
>>>> signal.
>>>
>>> "... you get a signal, or maybe you don't". But yes I understand your
>>> point, and as per the above there are real benefits to this approach so
>>> why not.
>>>
>>> What do we expect userspace to do when a page goes from shared back to
>>> being guest-private, because e.g. the guest decides to unshare? Use
>>> munmap() on that page? Or perhaps an madvise() call of some sort? Note
>>> that this will be needed when starting a guest as well, as userspace
>>> needs to copy the guest payload in the guestmem file prior to starting
>>> the protected VM.
>>
>> Let's assume we have the whole guest_memfd mapped exactly once in our
>> process, a single VMA.
>>
>> When setting up the VM, we'll write the payload and then fire up the VM.
>>
>> That will (I assume) trigger some shared -> private conversion.
>>
>> When we want to convert shared -> private in the kernel, we would first
>> check if the page is currently mapped. If it is, we could try unmapping that
>> page using an rmap walk.
> 
> I had not considered that. That would most certainly be slow, but a
> well-behaved userspace process shouldn't hit it, so that's probably not
> a problem...

If there really is only a single VMA that covers the page (or that even 
maps the whole guest_memfd), it should not be too bad. For example, any 
fallocate(FALLOC_FL_PUNCH_HOLE) has to do the same: unmap the page before 
discarding it from the pagecache.

But of course, no rmap walk is always better.
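
For completeness, the kind of flow I mean; a sketch with the actual
conversion and most error handling left out. unmap_mapping_pages() is
the same mechanism the hole-punch/truncate path relies on:

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Sketch: before converting a page to private, zap it from any user
 * mapping of the guest_memfd and check that nobody else holds it.
 */
static int gmem_convert_to_private(struct inode *inode, pgoff_t index)
{
	struct address_space *mapping = inode->i_mapping;
	struct folio *folio;
	int ret = 0;

	/* We know the one mapping, so no full rmap walk is needed. */
	unmap_mapping_pages(mapping, index, 1, false);

	folio = filemap_lock_folio(mapping, index);
	if (IS_ERR(folio))
		return PTR_ERR(folio);

	/*
	 * Still mapped (raced with a fault) or possibly pinned (GUP)?
	 * Then we must not convert; a real implementation would need a
	 * precise refcount check and protection against concurrent
	 * faults here.
	 */
	if (folio_mapped(folio) || folio_maybe_dma_pinned(folio))
		ret = -EBUSY;

	/* ... the actual shared -> private conversion would go here ... */

	folio_unlock(folio);
	folio_put(folio);
	return ret;
}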

> 
>> Then, we'd make sure that there are really no other references and protect
>> against concurrent mapping of the page. Now we can convert the page to
>> private.
> 
> Right.
> 
> Alternatively, the shared->private conversion happens in the KVM vcpu
> run loop, so we'd be in a good position to exit the KVM_RUN ioctl with a
> new exit reason saying "can't donate that page while it's shared" and
> have userspace use MADV_DONTNEED or munmap(), or whatever on the back
> of that. But I tend to prefer the rmap option if it's workable as that
> avoids adding new KVM userspace ABI.

As discussed in the sub-thread, that might still be required.

One could think about completely forbidding GUP on these mmap'ed 
guest_memfds. But there might well be use cases in the future where you 
want to use GUP on shared memory inside a guest_memfd.

(the io_uring example I gave might currently not work because 
FOLL_PIN|FOLL_LONGTERM|FOLL_WRITE only works on shmem+hugetlb, and 
guest_memfd will likely not be detected as shmem; commit 8ac268436e6d 
contains some details)
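
To spell the io_uring example out, a user-space sketch using liburing,
assuming guest_memfd can be mmap'ed as discussed; whether the
registration succeeds is exactly the FOLL_LONGTERM question above, and
how the guest_memfd fd is obtained is left out:

#include <liburing.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/uio.h>

/*
 * Sketch: register a mmap'ed guest_memfd range as an io_uring fixed
 * buffer, which takes a FOLL_PIN|FOLL_LONGTERM pin on the pages. This
 * may well fail today for file mappings that are not shmem/hugetlb.
 */
static int try_register_gmem_buffer(int gmem_fd, size_t size)
{
	struct io_uring ring;
	struct iovec iov;
	void *addr;
	int ret;

	addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    gmem_fd, 0);
	if (addr == MAP_FAILED)
		return -1;

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0)
		goto out_unmap;

	iov.iov_base = addr;
	iov.iov_len = size;

	ret = io_uring_register_buffers(&ring, &iov, 1);
	if (ret < 0)
		fprintf(stderr, "register_buffers failed: %d\n", ret);

	io_uring_queue_exit(&ring);
out_unmap:
	munmap(addr, size);
	return ret;
}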

-- 
Cheers,

David / dhildenb


