From: Demi Marie Obenour <demiobenour@gmail.com>
To: "Jürgen Groß" <jgross@suse.com>,
	"Val Packett" <val@invisiblethingslab.com>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	"Oleksandr Tyshchenko" <oleksandr_tyshchenko@epam.com>,
	"Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	Qubes Developer Mailing List <qubes-devel@googlegroups.com>
Subject: Re: [RFC PATCH] xen: privcmd: fix ioeventfd/ioreq crashing PV domain
Date: Tue, 4 Nov 2025 18:05:04 -0500
Message-ID: <5c651be7-eac5-4c9e-a209-6db3a06c3d2e@gmail.com>
In-Reply-To: <5a3660c9-1b18-4d87-a1f7-efa8d68239d8@suse.com>



On 11/4/25 07:15, Jürgen Groß wrote:
> On 15.10.25 21:57, Val Packett wrote:
>> Starting a virtio backend in a PV domain would panic the kernel in
>> alloc_ioreq, which dereferenced vma->vm_private_data as a pages pointer
>> when it actually still held PRIV_VMA_LOCKED.
>>
>> Fix by allocating a pages array in mmap_resource in the PV case,
>> filling it with page info converted from the pfn array. This allows
>> ioreq to function successfully with a backend provided by a PV dom0.
>>
>> Signed-off-by: Val Packett <val@invisiblethingslab.com>
>> ---
>> I've been porting the xen-vhost-frontend[1] to Qubes, which runs on amd64
>> and we (still) use PV for dom0. The x86 part didn't give me much trouble,
>> but the first thing I found was this crash due to using a PV domain to host
>> the backend. alloc_ioreq was dereferencing the '1' constant and panicking
>> the dom0 kernel.
>>
>> I figured out that, in the PV case, I can build a pages array in the
>> expected format from the pfn array at the point where the actual memory
>> mapping happens. With that fix, the ioreq part works: the vhost frontend
>> replies to the probing sequence and the guest recognizes which virtio
>> device is being provided.
>>
>> I still have another thing to debug: the MMIO accesses from the inner driver
>> (e.g. virtio_rng) don't get through to the vhost provider (ioeventfd does
>> not get notified), and manually kicking the eventfd from the frontend
>> seems to crash... Xen itself?? (no Linux panic on console, just a freeze and
>> quick reboot - will try to set up a serial console now)
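
For anyone following along, here is a minimal sketch of the conversion
described above, roughly as it would sit in drivers/xen/privcmd.c. The
helper name is mine, not taken from the actual patch, and it assumes each
entry in the pfn array is something pfn_to_page() can translate in dom0
(the corresponding kvfree() on unmap is omitted):

/*
 * Illustrative only: build the struct page ** array that alloc_ioreq()
 * expects to find in vma->vm_private_data, instead of leaving the
 * PRIV_VMA_LOCKED marker there on a PV dom0.
 */
static int privcmd_fill_pv_pages(struct vm_area_struct *vma,
				 xen_pfn_t *pfns, unsigned int numpgs)
{
	struct page **pages;
	unsigned int i;

	pages = kvcalloc(numpgs, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* Assumes every entry in pfns is translatable by pfn_to_page(). */
	for (i = 0; i < numpgs; i++)
		pages[i] = pfn_to_page(pfns[i]);

	vma->vm_private_data = pages;
	return 0;
}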
> 
> IMHO, to make the MMIO accesses work you'd need to implement ioreq-server
> support for PV domains in the hypervisor. This will be a major endeavor, so
> before taking your Linux kernel patch I'd like to see this covered.

Would fixing this be a good use of time, or would it be better to
focus on switching to PVH dom0?  I'm not sure it makes sense to
spend effort on PV dom0 when dom0 won't remain PV indefinitely.

Edera might well be interested in the PV case, as they run in cloud
VMs without nested virtualization.  That's not relevant to Qubes
OS, though.

>> But I figured I'd post this as an RFC already, since the other bug may be
>> unrelated and the ioreq area itself does work now. I'd like to hear some
>> feedback on this from people who actually know Xen :)
> 
> My main problem with your patch is that it adds a memory allocation for a
> very rare use case, impacting all current users of that functionality.
> 
> You could avoid that by using a different ioctl which could be selected by
> specifying a new flag when calling xenforeignmemory_open() (have a look
> into the Xen sources under tools/libs/foreignmemory/core.c).
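
If I understand the suggestion correctly, the opt-in would look roughly
like this from the library side. Every name below is hypothetical and only
meant to show the shape, not a proposed API; as far as I know the library
defines no open flags today:

#include <xenforeignmemory.h>

/* Hypothetical flag; not part of the current xenforeignmemory API. */
#define XENFOREIGNMEMORY_OPEN_PV_IOREQ  (1U << 0)

static xenforeignmemory_handle *open_for_pv_backend(void)
{
	/*
	 * The existing entry point is
	 * xenforeignmemory_open(xentoollog_logger *logger, unsigned open_flags);
	 * a new flag here would let the library pick the alternate ioctl,
	 * leaving the common path allocation-free.
	 */
	return xenforeignmemory_open(NULL, XENFOREIGNMEMORY_OPEN_PV_IOREQ);
}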

Should there at least be a check to prevent the kernel from crashing?
I'd expect an unsupported use of the API to return an error, not
cause the kernel to oops.
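
Concretely, something along these lines early in the ioreq setup path
would turn the oops into a clean error (sketch only, untested; the exact
placement inside privcmd's alloc_ioreq() may differ):

	/*
	 * On a PV dom0, privcmd_ioctl_mmap_resource() leaves the
	 * PRIV_VMA_LOCKED marker in vm_private_data rather than a
	 * struct page ** array, so bail out instead of dereferencing it.
	 */
	if (vma->vm_private_data == PRIV_VMA_LOCKED)
		return ERR_PTR(-EOPNOTSUPP);
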
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)



Thread overview: 8+ messages
2025-10-15 19:57 [RFC PATCH] xen: privcmd: fix ioeventfd/ioreq crashing PV domain Val Packett
2025-10-19  0:42 ` Demi Marie Obenour
2025-10-19  1:07   ` Demi Marie Obenour
2025-11-04 12:15 ` Jürgen Groß
2025-11-04 23:05   ` Demi Marie Obenour [this message]
2025-11-04 23:06   ` Demi Marie Obenour
2025-11-05  1:16   ` Val Packett
2025-11-05 20:42     ` Demi Marie Obenour
