public inbox for linux-media@vger.kernel.org
From: Demi Marie Obenour <demiobenour@gmail.com>
To: "Christian König" <christian.koenig@amd.com>,
	dri-devel@lists.freedesktop.org,
	"Xen developer discussion" <xen-devel@lists.xenproject.org>,
	linux-media@vger.kernel.org
Cc: Val Packett <val@invisiblethingslab.com>,
	Sumit Semwal <sumit.semwal@linaro.org>
Subject: Re: Pinned, non-revocable mappings of VRAM: will bad things happen?
Date: Mon, 20 Apr 2026 14:46:15 -0400	[thread overview]
Message-ID: <08ad2301-3163-4497-8869-fa4cea30b384@gmail.com> (raw)
In-Reply-To: <964c3670-fad3-44ce-bd93-2057bca2dcb8@amd.com>



On 4/20/26 13:58, Christian König wrote:
> On 4/20/26 19:03, Demi Marie Obenour wrote:
>> On 4/20/26 04:49, Christian König wrote:
>>> On 4/17/26 21:35, Demi Marie Obenour wrote:
> ...
>>>> Are any of the following reasonable options?
>>>>
>>>> 1. Change the guest kernel to only map (and thus pin) a small subset
>>>>    of VRAM at any given time.  If unmapped VRAM is accessed the guest
>>>>    traps the page fault, evicts an old VRAM mapping, and creates a
>>>>    new one.
>>>
>>> Yeah, that could potentially work.
>>>
>>> This is basically what we do on the host kernel driver when we can't resize the BAR for some reason. In that use case VRAM buffers are shuffled in and out of the CPU accessible window of VRAM on demand.
>>
>> How much is this going to hurt performance?
> 
> Hard to say, resizing the BAR can easily give you 10-15% more performance on some use cases.
> 
> But that involves physically transferring the data using a DMA. For this solution we basically only have to transfer a few messages between host and guest.
> 
> No idea how performant that is.

In this use case, 20-30% performance penalties are likely to be
"business as usual".  Close to native performance would be ideal, but
to be useful it just needs to beat software rendering by a wide margin,
and not cause data corruption or vulnerabilities.
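For concreteness, the evict-on-fault scheme from option 1 might look
like the following userspace model.  All names here are invented for
illustration; a real guest driver would do this in its page fault
handler and would actually unmap pages and flush TLBs on eviction.

```c
/* Sketch of option 1: keep only a small window of VRAM mapped (and
 * thus pinned) at once.  On a fault for an unmapped VRAM page, reuse
 * the least-recently-used slot.  Hypothetical names throughout. */
#include <assert.h>
#include <stdint.h>

#define WINDOW_SLOTS 4          /* pinned mappings allowed at once */

struct slot {
	uint64_t vram_page;     /* which VRAM page occupies this slot */
	uint64_t last_use;      /* LRU timestamp */
	int      in_use;
};

static struct slot window[WINDOW_SLOTS];
static uint64_t clock_ticks;
static unsigned long evictions;

/* Resolve a fault on vram_page: reuse an existing mapping or evict. */
static int vram_fault(uint64_t vram_page)
{
	int victim = 0;

	clock_ticks++;
	for (int i = 0; i < WINDOW_SLOTS; i++) {
		if (window[i].in_use && window[i].vram_page == vram_page) {
			window[i].last_use = clock_ticks; /* hit: touch */
			return i;
		}
		if (!window[i].in_use)
			victim = i;
		else if (window[victim].in_use &&
			 window[i].last_use < window[victim].last_use)
			victim = i;
	}
	if (window[victim].in_use)
		evictions++;    /* would unmap + flush TLBs here */
	window[victim].vram_page = vram_page;
	window[victim].last_use = clock_ticks;
	window[victim].in_use = 1;
	return victim;
}
```

The eviction count is a proxy for the host/guest message traffic this
scheme would generate, which is what the performance question above
comes down to.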

>>>> 2. Pretend that resizable BAR is not enabled, so the guest doesn't
>>>>    think it can map much of VRAM at once.  If resizable BAR is enabled
>>>>    on the host, it might be possible to split the large BAR mapping
>>>>    in a lot of ways.
>>>
>>> That won't work. The userspace parts of the driver stack don't care how large the BAR to access VRAM with the CPU is.
>>>
>>> The expectation is that the kernel driver makes things CPU accessible as needed in the page fault handler.
>>>
>>> It is still a good idea for your solution #1 to give the amount of "pin-able" VRAM to the userspace stack as CPU visible VRAM limit so that test cases and applications try to lower their usage of VRAM, e.g. use system memory bounce buffers when possible.
>>
>> That makes sense.
>>
>>>> Or does Xen really need to allow the host to handle guest page faults?
>>>> That adds a huge amount of complexity to trusted and security-critical
>>>> parts of the system, so it really is a last resort.  Putting the
>>>> complexity in to the guest virtio-GPU driver is vastly preferable if
>>>> it can be made to work well.
>>>
>>> Well, the nested page fault handling KVM offers has proven to be extremely useful. So if XEN can't do this, it is clearly lacking an important feature.
>>
>> I agree. However, it is a lot of work to implement, which is why I'm
>> looking for alternatives if possible.
>>
>> KVM is part of the Linux kernel, so it can just call the Linux kernel
>> functions used to handle userspace page faults.  Xen is separate from
>> Linux, so it can't do that.  Instead, it will need to:
>>
>> 1. Determine that the fault needs to be handled by another VM, and
>>    the ID of the VM that needs to handle the fault.
>> 2. Send a message to the VM asking it to handle the fault.
>> 3. Block the vCPU until it gets a response.
>>
>> Then the VM owning the memory will need to call the page fault handler
>> and provide the memory to Xen.  Xen then needs to:
>>
>> 4. Map the memory into the nested page tables of the VM that faulted.
>> 5. Resume the vCPU.
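A toy model of those five steps, with the hypervisor, the faulting
vCPU, and the backend VM collapsed into plain functions (all names
invented; real Xen would use vCPU pause/unpause and something like an
event channel for the messaging):

```c
/* Toy model of the fault-forwarding protocol above. */
#include <assert.h>
#include <stdint.h>

enum vcpu_state { VCPU_RUNNING, VCPU_BLOCKED };

struct fault_req {
	int      backend_vm;    /* step 1: who must handle this fault */
	uint64_t gfn;           /* faulting guest frame number */
};

static enum vcpu_state vcpu = VCPU_RUNNING;
static uint64_t nested_pt[16];  /* gfn -> host frame, 0 = unmapped */

/* Steps 2-3: send the fault to the backend VM and block the vCPU. */
static struct fault_req forward_fault(int backend_vm, uint64_t gfn)
{
	vcpu = VCPU_BLOCKED;
	return (struct fault_req){ .backend_vm = backend_vm, .gfn = gfn };
}

/* The backend VM resolves the fault and hands a frame back to "Xen". */
static uint64_t backend_handle(struct fault_req req)
{
	return 0x1000 + req.gfn;    /* pretend allocation */
}

/* Steps 4-5: map the frame into the nested page tables and resume. */
static void complete_fault(uint64_t gfn, uint64_t frame)
{
	nested_pt[gfn] = frame;
	vcpu = VCPU_RUNNING;
}
```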
>>
>>> But I have one question: When XEN has a problem handling faults from the guest on the host then how does that work for system memory mappings?
>>>
>>> There is really no difference between VRAM and system memory in the handling for the GPU driver stack.
>>>
>>> Regards,
>>> Christian.
>>
>> Generally, Xen makes the frontend (usually an unprivileged VM)
>> responsible for providing mappings to the backend (usually the host).
>> That is possible with system RAM but not with VRAM, because Xen has
>> no awareness of VRAM.  To Xen, VRAM is just a PCI BAR.
> 
> No, that doesn't work with system memory allocations of GPU drivers either.
> 
> We have already seen multiple times that people tried to be clever and incremented the page reference counter on driver-allocated system memory, and were totally surprised that this can result in security issues and data corruption.
> 
> I seriously hope that this isn't the case here again. As far as I know XEN already has support for accessing VMAs with VM_PFNMAP; otherwise I don't know how access to driver-allocated system memory could work at all.
> 
> Accessing VRAM is pretty much the same use case as far as I can see.
> 
> Regards,
> Christian.

The Xen-native approach would be for system memory allocations to
be made using the Xen driver and then imported into the virtio-GPU
driver via dmabuf.  Is there any chance this could be made to happen?

If it's a lost cause, then how large is the memory overhead of pinning
everything that has ever been used in a dmabuf?  It should be possible
to account pinned host memory against a guest's quota, but if that
leads to an unusable system it isn't going to be good.
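The accounting could be as simple as charging every pinned page
against a per-guest quota and refusing pins past the limit.  A sketch,
with made-up names and a made-up policy:

```c
/* Per-guest accounting of pinned host pages.  Illustration only. */
#include <assert.h>
#include <stddef.h>

struct guest_acct {
	size_t pinned_pages;
	size_t quota_pages;
};

/* Charge npages against the guest's quota; fail if it would exceed. */
static int pin_charge(struct guest_acct *g, size_t npages)
{
	if (g->pinned_pages + npages > g->quota_pages)
		return -1;  /* would make host memory unreclaimable */
	g->pinned_pages += npages;
	return 0;
}

/* Release the charge when the dmabuf mapping goes away. */
static void pin_uncharge(struct guest_acct *g, size_t npages)
{
	g->pinned_pages -= npages;
}
```

The open question is whether a quota small enough to keep the host
usable is large enough for real workloads.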

Is supporting page faults in Xen the only solution that will be viable
long-term, considering the tolerance for very substantial performance
overheads compared to native?  AAA gaming isn't the initial goal here.
Qubes OS already supports PCI passthrough for that.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)



Thread overview: 15+ messages
2026-04-15 23:27 Pinned, non-revocable mappings of VRAM: will bad things happen? Demi Marie Obenour
2026-04-16  9:57 ` Christian König
2026-04-16 16:13   ` Demi Marie Obenour
2026-04-17  7:53     ` Christian König
2026-04-17 19:35       ` Demi Marie Obenour
2026-04-20  8:49         ` Christian König
2026-04-20 17:03           ` Demi Marie Obenour
2026-04-20 17:58             ` Christian König
2026-04-20 18:46               ` Demi Marie Obenour [this message]
2026-04-20 18:53                 ` Christian König
2026-04-20 19:12                   ` Demi Marie Obenour
2026-04-21 16:55                     ` Val Packett
2026-04-21 17:43                       ` Christian König
2026-04-22  1:27                       ` Demi Marie Obenour
2026-04-22  2:03                         ` Alex Deucher
