public inbox for linux-kernel@vger.kernel.org
From: Val Packett <val@invisiblethingslab.com>
To: "Teddy Astie" <teddy.astie@vates.tech>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Jason Wang" <jasowang@redhat.com>,
	"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
	"Eugenio Pérez" <eperezma@redhat.com>
Cc: "Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>,
	"Viresh Kumar" <viresh.kumar@linaro.org>,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux.dev
Subject: Re: [RFC PATCH] virtio-mmio: add xenbus probing
Date: Thu, 30 Apr 2026 05:48:59 -0300	[thread overview]
Message-ID: <74953b6a-d195-4a12-800d-af324ff35b29@invisiblethingslab.com> (raw)
In-Reply-To: <1777536698.8631fc262581453bbf619ec5b2062170.19ddd7187da000f373@vates.tech>


On 4/30/26 5:11 AM, Teddy Astie wrote:
> On 30/04/2026 at 06:06, Val Packett wrote:
>> On 4/29/26 11:41 AM, Teddy Astie wrote:
>>> Hello,
>>>
>>> On 29/04/2026 at 16:18, Val Packett wrote:
>>>> […]
>>>>
>>>> I've been working on porting virtio-mmio support from Arm to x86_64,
>>>> with the goal of running vhost-user-gpu to power Wayland/GPU integration
>>>> for Qubes OS. (I'm aware of various proposals for alternative virtio
>>>> transports but virtio-mmio seems to be the only one that *is* upstream
>>>> already and just Works..) Setting up virtio-mmio through xenbus,
>>>> initially motivated just by event channels being the only real way
>>>> to get interrupts working on HVM, turned out to generally be quite
>>>> pleasant and nice :)
>>> Is it HVM specific, or can we also make it work for PVH (we can
>>> actually attach an ioreq server to PVH guests)?
>> Sorry, typo, I did mean PVH of course!
>>
>> I've been testing this with PVH guests + PV dom0, with my PV alloc_ioreq
>> fix:
>> https://lore.kernel.org/all/20251126062124.117425-1-val@invisiblethingslab.com/
>>
>> (Time to resend that one as a non-RFC I guess…)
>>
>> HVM actually does have legacy ISA interrupts (which are often used with
>> virtio-mmio on KVM), funnily enough, and I've tried firing those from a
>> DMOP but that silly thing didn't work properly.
>>
>>>> I'd like to get some early feedback for this patch, particularly
>>>> the general stuff:
>>>>
>>>> * is this whole thing acceptable in general?
>>>> * should it be extracted into a different file?
>>>> * (from the Xen side) any input on the xenstore keys, what goes where?
>>>> * anything else to keep in mind?
>>>>
>>>> It does seem simple enough, so hopefully this can be done?
>>>>
>>>> The corresponding userspace-side WIP is available at:
>>>> https://github.com/QubesOS/xen-vhost-frontend
>>>>
>>>> And the required DMOP for firing the evtchn events will be sent
>>>> to xen-devel shortly as well.
>>> Could that be done through evtchn_send (or its userland counterpart) ?
>> Actually, yes… The use of DMOPs is only dictated by the current Linux
>> privcmd.c code (the irqfds created by the kernel react to events by
>> executing HYPERVISOR_dm_op with a stored operation), we can avoid the
>> need to modify Xen by simply expanding the privcmd driver to make
>> "evtchn fds". Sounds good, will do.
>>
> Given that the event channel used by device models is exposed through
> ioreq.vp_eport ("evtchn for notifications to/from device model"), I
> don't think you need to expand the privcmd interface; you should be
> able to do this instead:
>
> open /dev/xen/evtchn
> perform IOCTL_EVTCHN_BIND_INTERDOMAIN (for each guest vCPU)
>     with remote_domain=guest_domid, remote_port=ioreq.vp_eport
>
> Then interact with the event channel through IOCTL_EVTCHN_NOTIFY (with
> local port given by IOCTL_EVTCHN_BIND_INTERDOMAIN) and read/write on the
> file descriptor.

So the reason there's currently an ioctl to bind an eventfd to fire a 
stored DMOP is that the whole idea is to (efficiently!) support generic, 
hypervisor-neutral device server implementations via the vhost-user 
protocol.

Now of course, the current implementation isn't *entirely* 
hypervisor-neutral as e.g. the vm-memory Rust crate (inside of the 
"neutral" vhost-user device servers) does need to be built with the 
`xen` feature. But still, that's how it works. What can be made generic 
is generic.

xen-vhost-frontend, which is the thing that integrates these with Xen, 
actually used to handle the interrupts in userspace[1] by firing the 
DMOP itself (which is where I could "just replace that with 
IOCTL_EVTCHN_NOTIFY") but that was offloaded to the kernel with the 
introduction of IOCTL_PRIVCMD_IRQFD[2], similarly to KVM_IRQFD.

Switching back to handling the eventfd in userspace would be a literal 
deoptimization :)

Throwing away the whole generic layer to do a fully integrated, 
use-case-specific thing sounds more difficult/tedious than this, and 
not necessarily desirable in general.

[1]: https://github.com/vireshk/xen-vhost-frontend/commit/06d59035f8a387c0f600931d09dfaa27b80ede7f
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=f8941e6c4c712948663ec5d7bbb546f1a0f4e3f6

~val



Thread overview: 6+ messages
2026-04-29 13:52 [RFC PATCH] virtio-mmio: add xenbus probing Val Packett
2026-04-29 15:35 ` Jürgen Groß
2026-04-30  4:04   ` Val Packett
     [not found] ` <1777473712.8631fc262581453bbf619ec5b2062170.19dd9b07146000f373@vates.tech>
2026-04-30  4:01   ` Val Packett
     [not found]     ` <1777536698.8631fc262581453bbf619ec5b2062170.19ddd7187da000f373@vates.tech>
2026-04-30  8:48       ` Val Packett [this message]
     [not found]         ` <1777556830.8631fc262581453bbf619ec5b2062170.19ddea4b728000f373@vates.tech>
2026-04-30 18:50           ` Val Packett
