public inbox for linux-media@vger.kernel.org
From: "Christian König" <christian.koenig@amd.com>
To: Demi Marie Obenour <demiobenour@gmail.com>,
	dri-devel@lists.freedesktop.org,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	linux-media@vger.kernel.org
Cc: Val Packett <val@invisiblethingslab.com>,
	Sumit Semwal <sumit.semwal@linaro.org>,
	"Pelloux-Prayer,
	Pierre-Eric" <Pierre-eric.Pelloux-prayer@amd.com>
Subject: Re: Pinned, non-revocable mappings of VRAM: will bad things happen?
Date: Mon, 20 Apr 2026 20:53:35 +0200	[thread overview]
Message-ID: <e5c00f2c-0819-48b4-b66e-71b9a40a7235@amd.com> (raw)
In-Reply-To: <08ad2301-3163-4497-8869-fa4cea30b384@gmail.com>

On 4/20/26 20:46, Demi Marie Obenour wrote:
> On 4/20/26 13:58, Christian König wrote:
>> On 4/20/26 19:03, Demi Marie Obenour wrote:
>>> On 4/20/26 04:49, Christian König wrote:
>>>> On 4/17/26 21:35, Demi Marie Obenour wrote:
>> ...
>>>>> Are any of the following reasonable options?
>>>>>
>>>>> 1. Change the guest kernel to only map (and thus pin) a small subset
>>>>>    of VRAM at any given time.  If unmapped VRAM is accessed the guest
>>>>>    traps the page fault, evicts an old VRAM mapping, and creates a
>>>>>    new one.
>>>>
>>>> Yeah, that could potentially work.
>>>>
>>>> This is basically what we do on the host kernel driver when we can't resize the BAR for some reason. In that use case VRAM buffers are shuffled in and out of the CPU accessible window of VRAM on demand.
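[The eviction scheme sketched in option 1 can be modeled in a few lines of userspace Python. This is a toy model only, not driver code; the slot count, integer page identifiers, and the LRU policy are illustrative assumptions, not anything the kernel driver actually implements:]

```python
from collections import OrderedDict

class VramWindow:
    """Toy model of fault-driven VRAM mapping: only `slots` pages of
    VRAM are mapped at any one time.  Touching an unmapped page
    "faults", evicts the least-recently-used mapping, and maps the
    requested page in its place."""

    def __init__(self, slots):
        self.slots = slots
        self.mapped = OrderedDict()  # page -> None, ordered by last use
        self.faults = 0

    def access(self, page):
        """Return True if the access faulted (page had to be mapped)."""
        if page in self.mapped:
            self.mapped.move_to_end(page)  # refresh its LRU position
            return False                   # hit: no fault taken
        self.faults += 1
        if len(self.mapped) >= self.slots:
            self.mapped.popitem(last=False)  # evict the oldest mapping
        self.mapped[page] = None
        return True                          # miss: fault taken
```

[The interesting cost question in the thread is exactly the `faults` counter: each fault in the real design would be a message round-trip between guest and host rather than a local remap.]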
>>>
>>> How much is this going to hurt performance?
>>
>> Hard to say, resizing the BAR can easily give you 10-15% more performance in some use cases.
>>
>> But that involves physically transferring the data using DMA. For this solution we basically only have to transfer a few messages between host and guest.
>>
>> No idea how performant that is.
> 
> In this use-case, 20-30% performance penalties are likely to be
> "business as usual".

Well, that is quite a bit.

> Close to native performance would be ideal, but
> to be useful it just needs to beat software rendering by a wide margin,
> and not cause data corruption or vulnerabilities.

That should still easily be the case; even trivial use cases are multiple orders of magnitude faster on GPUs compared to software rendering.

>>>
>>>> But I have one question: When XEN has a problem handling faults from the guest on the host then how does that work for system memory mappings?
>>>>
>>>> There is really no difference between VRAM and system memory in the handling for the GPU driver stack.
>>>>
>>>> Regards,
>>>> Christian.
>>>
>>> Generally, Xen makes the frontend (usually an unprivileged VM)
>>> responsible for providing mappings to the backend (usually the host).
>>> That is possible with system RAM but not with VRAM, because Xen has
>>> no awareness of VRAM.  To Xen, VRAM is just a PCI BAR.
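[The grant-table flow described above can be caricatured in a few lines of Python. A toy model only; the class and method names are invented for illustration and correspond to no real Xen API. The point it demonstrates is the one made above: Xen can only grant pages it accounts as guest RAM, so a VRAM/BAR address has nothing to grant:]

```python
class ToyXen:
    """Toy model of the grant flow: the frontend grants access to pages
    it owns, and the backend maps them by grant reference.  VRAM is
    invisible to Xen (it is just a PCI BAR), so it cannot be granted."""

    def __init__(self, guest_ram_pages):
        self.guest_ram = set(guest_ram_pages)  # pages Xen knows about
        self.grants = {}
        self.next_ref = 0

    def frontend_grant(self, page):
        """Frontend offers a page to the backend; fails for non-RAM."""
        if page not in self.guest_ram:
            raise ValueError("not guest RAM: cannot grant a VRAM/BAR page")
        ref = self.next_ref
        self.next_ref += 1
        self.grants[ref] = page
        return ref

    def backend_map(self, ref):
        """Backend resolves a grant reference to the underlying page."""
        return self.grants[ref]
```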
>>
>> No, that doesn't work with system memory allocations of GPU drivers either.
>>
>> We have already seen it multiple times that people tried to be clever and incremented the page reference counter on driver-allocated system memory, and were totally surprised that this can result in security issues and data corruption.
>>
>> I seriously hope that this isn't the case here again. As far as I know XEN already has support for accessing VMAs with VM_PFNMAP; otherwise I don't know how access to driver-allocated system memory could work at all.
>>
>> Accessing VRAM is pretty much the same use case as far as I can see.
>>
>> Regards,
>> Christian.
> 
> The Xen-native approach would be for system memory allocations to
> be made using the Xen driver and then imported into the virtio-GPU
> driver via dmabuf.  Is there any chance this could be made to happen?

That could be. Adding Pierre-Eric to comment, since he knows that use case much better than I do.

> If it's a lost cause, then how much is the memory overhead of pinning
> everything ever used in a dmabuf?  It should be possible to account
> pinned host memory against a guest's quota, but if that leads to an
> unusable system it isn't going to be good.

That won't work at all.

We have use cases where you *must* migrate a DMA-buf to VRAM or otherwise the GPU can't use it.

A simple scanout to a monitor is such a use case, for example; that is usually not possible from system memory.

> Is supporting page faults in Xen the only solution that will be viable
> long-term, considering the tolerance for very substantial performance
> overheads compared to native?  AAA gaming isn't the initial goal here.
> Qubes OS already supports PCI passthrough for that.

We have had AAA gaming working on XEN through native context for quite a while.

Pierre-Eric can tell you more about that.

Regards,
Christian.


Thread overview: 15+ messages
2026-04-15 23:27 Pinned, non-revocable mappings of VRAM: will bad things happen? Demi Marie Obenour
2026-04-16  9:57 ` Christian König
2026-04-16 16:13   ` Demi Marie Obenour
2026-04-17  7:53     ` Christian König
2026-04-17 19:35       ` Demi Marie Obenour
2026-04-20  8:49         ` Christian König
2026-04-20 17:03           ` Demi Marie Obenour
2026-04-20 17:58             ` Christian König
2026-04-20 18:46               ` Demi Marie Obenour
2026-04-20 18:53                 ` Christian König [this message]
2026-04-20 19:12                   ` Demi Marie Obenour
2026-04-21 16:55                     ` Val Packett
2026-04-21 17:43                       ` Christian König
2026-04-22  1:27                       ` Demi Marie Obenour
2026-04-22  2:03                         ` Alex Deucher
