From: "Michael S. Tsirkin" <mst@redhat.com>
To: Parav Pandit <parav@nvidia.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
Manivannan Sadhasivam <mani@kernel.org>,
"Bill Mills (bill.mills@linaro.org)" <bill.mills@linaro.org>,
"virtio-comment@lists.linux.dev" <virtio-comment@lists.linux.dev>,
"Edgar E . Iglesias" <edgar.iglesias@amd.com>,
Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>,
Viresh Kumar <viresh.kumar@linaro.org>,
Alex Bennee <alex.bennee@linaro.org>,
Armelle Laine <armellel@google.com>
Subject: Re: [PATCH v1 0/4] virtio-msg transport layer
Date: Wed, 25 Feb 2026 09:49:54 -0500
Message-ID: <20260225094902-mutt-send-email-mst@kernel.org>
In-Reply-To: <SJ0PR12MB68068FA1D1893B774C5A8A15DC75A@SJ0PR12MB6806.namprd12.prod.outlook.com>
On Wed, Feb 25, 2026 at 02:45:35PM +0000, Parav Pandit wrote:
>
> > From: Bertrand Marquis <Bertrand.Marquis@arm.com>
> > Sent: 25 February 2026 04:06 PM
> >
> > Hi Parav,
> >
> > > On 25 Feb 2026, at 11:24, Parav Pandit <parav@nvidia.com> wrote:
> > >
> > >>
> > >> From: Manivannan Sadhasivam <mani@kernel.org>
> > >> Sent: 25 February 2026 03:37 PM
> > >>
> > >> On Wed, Feb 25, 2026 at 08:03:48AM +0000, Bertrand Marquis wrote:
> > >>> Hi Manivannan,
> > >>>
> > >>>> On 25 Feb 2026, at 08:45, Manivannan Sadhasivam <mani@kernel.org> wrote:
> > >>>>
> > >>>> Hi Bertrand,
> > >>>>
> > >>>> On Fri, Feb 20, 2026 at 09:02:12AM +0000, Bertrand Marquis wrote:
> > >>>>> Hi Parav,
> > >>>>>
> > >>>>>> On 20 Feb 2026, at 07:13, Parav Pandit <parav@nvidia.com> wrote:
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>>> From: Michael S. Tsirkin <mst@redhat.com>
> > >>>>>>> Sent: 20 February 2026 05:25 AM
> > >>>>>>>
> > >>>>>>> On Fri, Feb 13, 2026 at 01:52:06PM +0000, Parav Pandit wrote:
> > >>>>>>>> Hi Bill,
> > >>>>>>>>
> > >>>>>>>>> From: Bill Mills <bill.mills@linaro.org>
> > >>>>>>>>> Sent: 26 January 2026 10:02 PM
> > >>>>>>>>>
> > >>>>>>>>> This series adds the virtio-msg transport layer.
> > >>>>>>>>>
> > >>>>>>>>> The individuals and organizations involved in this effort have had difficulty in
> > >>>>>>>>> using the existing virtio-transports in various situations and desire to add one
> > >>>>>>>>> more transport that performs its transport layer operations by sending and
> > >>>>>>>>> receiving messages.
> > >>>>>>>>>
> > >>>>>>>>> Implementations of virtio-msg will normally be done in multiple layers:
> > >>>>>>>>> * common / device level
> > >>>>>>>>> * bus level
> > >>>>>>>>>
> > >>>>>>>>> The common / device level defines the messages exchanged between the driver
> > >>>>>>>>> and a device. This common part should lead to a common driver holding most
> > >>>>>>>>> of the virtio specifics and can be shared by all virtio-msg bus implementations.
> > >>>>>>>>> The kernel implementation in [3] shows this separation. As with other transport
> > >>>>>>>>> layers, virtio-msg should not require modifications to existing virtio device
> > >>>>>>>>> implementations (virtio-net, virtio-blk etc). The common / device level is the
> > >>>>>>>>> main focus of this version of the patch series.
> > >>>>>>>>>
> > >>>>>>>>> The virtio-msg bus level implements the normal things a bus defines
> > >>>>>>>>> (enumeration, dma operations, etc) but also implements the message send and
> > >>>>>>>>> receive operations. A number of bus implementations are envisioned,
> > >>>>>>>>> some of which will be reusable and general purpose. Other bus implementations
> > >>>>>>>>> might be unique to a given situation, for example only used by a PCIe card
> > >>>>>>>>> and its driver.
> > >>>>>>>>>
> > >>>>>>>>> The standard bus messages are an effort to avoid different bus implementations
> > >>>>>>>>> doing the same thing in different ways for no good reason. However the
> > >>>>>>>>> different environments will require different things. Instead of trying to
> > >>>>>>>>> anticipate all needs and provide something very abstract, we think
> > >>>>>>>>> implementation specific messages will be needed at the bus level. Over time,
> > >>>>>>>>> if we see similar messages across multiple bus implementations, we will move to
> > >>>>>>>>> standardize a bus level message for that.
> > >>>>>>>>>
> > >>>>>>>>
> > >>>>>>>> I will review more; this was a first round of sparse review.
> > >>>>>>>> Please find a few comments/questions below.
> > >>>>>>>
> > >>>>>>> I'd like to comment that I think it makes sense to have a basic simple transport and
> > >>>>>>> then add performance features on top as appropriate.
> > >>>>>> Sounds good. Simple but complete is needed.
> > >>>>>
> > >>>>> Agree.
> > >>>>>
> > >>>>>>
> > >>>>>>> So one way to address some of these comments is to show how
> > >>>>>>> they can be addressed with a feature bit down the road.
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> 1. The device number should be 32-bit in struct virtio_msg_header.
> > >>>>>>>> From SIOV_R2 experience, we learnt that some users have a use case for more than 64K devices.
> > >>>>>>>> Also, mapping a PCI BDF won't fit in 16 bits once the domain field is considered.
> > >>>>>>>>
> > >>>>>>>> 2. A 16-bit msg_size (64KB minus 8 bytes) is too small for data transfer.
> > >>>>>>>> For example, a TCP stream that wants to send 64KB of data plus payload needs more than 64KB.
> > >>>>>>>> It needs 32 bits.
> > >>>>>>>>
> > >>>>>>>> 3. BUS_MSG_EVENT_DEVICE should have symmetric names, ADDED and REMOVED (instead of READY).
> > >>>>>>>> But more below.
> > >>>>>>>>
> > >>>>>>>> 4. I don't find transport messages to read and write the driver memory supplied in the VIRTIO_MSG_SET_VQUEUE addresses to operate
> > >>>>>>>> the virtqueues.
> > >>>>>>>> Don't we need VIRTIO_MEM_READ and VIRTIO_MEM_WRITE requests and responses?
> > >>>>>>>
> > >>>>>>> Surely this can be an optional transport feature bit.
> > >>>>>>>
> > >>>>>> How is this optional?
> > >>>>>
> > >>>>> As said in a previous mail, we have messages already for that.
> > >>>>> Please confirm whether that answers your question.
> > >>>>>
> > >>>>>> How can one implement a transport without defining the basic data transfer semantics?
> > >>>>>
> > >>>>> We did a lot of experiments, and we are feature-equivalent to PCI, MMIO, or Channel I/O.
> > >>>>> If anything is missing, we are more than happy to discuss it and solve the issue.
> > >>>>>
> > >>>>
> > >>>> I'd love to have this transport over PCI because it addresses the shortcomings
> > >>>> of the existing PCI transport, which just assumes that every config space access
> > >>>> is trap-and-emulate.
> > >>>
> > >>> Agree and AMD did exactly that in their demonstrator.
> > >>> I will give you answers here as I know them, but Edgar will probably give you more
> > >>> details (and probably fix my mistakes).
> > >>>
> > >>>>
> > >>>> But that being said, I somewhat agree with Parav that we should define the bus
> > >>>> implementations in the spec to avoid fixing the ABI in the implementations. For
> > >>>> instance, if we try to use this transport over PCI, we've got questions like:
> > >>>>
> > >>>> 1. How should the device be bound to the virtio-msg-pci bus driver and not to
> > >>>> the existing virtio-pci driver? Should it use a new Vendor ID or Sub-IDs?
> > >>>
> > >>> One bus appears as one PCI device with its own Vendor ID.
> > >>>
> > >>
> > >> What should be the 'own Vendor ID' here?
> > >>
> > >> The existing virtio-pci driver binds to all devices with the Vendor ID of
> > >> PCI_VENDOR_ID_REDHAT_QUMRANET. So are you expecting the Vendors to use their own
> > >> VID for exposing the Virtio devices? That would mean the drivers on the host
> > >> need updates as well, which will not scale.
> > >>
> > >> It would be good if the existing virtio-pci devices can use this new transport
> > >> with only device side modifications.
> > >>
> > >>>>
> > >>>> 2. How should the Virtio messages be transferred? Is it through the endpoint config
> > >>>> space or through some other means?
> > >>>
> > >>> The virtio messages are transferred using FIFOs stored in the BAR of the PCI
> > >>> device (ending up as memory shared between both sides).
> > >>>
> > >>
> > >> What should be the BAR number and size?
> > >>
> > >>>>
> > >>>> 3. How should the notification be delivered from the device to the host? Through
> > >>>> INTx/MSI/MSI-X, or even polling?
> > >>>
> > >>> Notifications are delivered through MSI.
> > >>>
> > >>
> > >> So no INTx or MSI-X? Why so?
> > >>
> > >> Anyhow, my objective is not to get answers for my above questions here in this
> > >> thread, but to state the reality that it would be hard for us to make use of
> > >> this new transport without defining the bus implementation.
> > >>
> > > +1 to most of the points that Manivannan explained.
> > >
> > > Defining a whole new message layer for PCI does not make any sense when the expectation is for the device to build yet
> > > another interface for _everything_ that already exists,
> > > and the device still has to implement all the existing things because the device does not know which driver will operate it.
> > >
> > > And that, too, via an inefficient register-based interface.
> > > Just to reset the device, one needs to fully set up the new message interface, yet the device still has to be working.
> > > That defeats the whole purpose of reset_1 and reset_2 in the device.
> > >
> > > This does not bring anything better for the PCI devices at all.
> > >
> > > A transport binding should be defined for the bus binding.
> > > A bus that chooses a message interface should be listed that way, and buses that choose inline messages can continue the way they are.
> > >
> > > If we are creating something brand-new, for PCI the only thing needed is:
> > > 1. Reset the device
> > > 2. Create an admin virtqueue
> > > 3. Transport everything needed through this virtqueue, including features, configs, and control.
> > >
> > > And this will work for any other bus or message-based transport too, given that the only contract needed is creating the aq.
> >
> > I think you misunderstood the point of virtio-msg bus over PCI a bit, so let me try to explain.
> >
> > You see one PCI device (regular, not virtio) which is a "virtio-msg bus over PCI".
> >
> > The virtio-msg bus over PCI will communicate through this device with an external
> > system connected through the PCI bus.
> > The driver will enumerate virtio devices available behind this bus and register them so that
> > the corresponding virtio drivers are probed for them.
> > All virtio-msg messages required to communicate with those devices will be transferred through
> > a FIFO stored in the BAR of the PCI device, and standard PCI DMA will be used to share the
> > virtqueues with all the devices on the bus.
> >
> > So the PCI device is not one virtio device but one bus behind which there can be many devices.
> >
> > Is this making the concept a bit clearer?
> >
> Yes. This makes a lot of sense now.
>
> This is a virtio-msg-transport device that needs its own device id in the table,
> and its own binding to the PCI transport.
OK. How about an RFC of that idea on the list?
> So that a device producer can implement this standard device and a driver developer can develop the driver for multiplexing by reading the spec.
>
> > Cheers
> > Bertrand
> >
> >
> > >
> > >>>>
> > >>>> And these are just a few questions that come to the top of my head. There could
> > >>>> be plenty more.
> > >>>>
> > >>>> How can we expect all the virtio-msg bus implementations to adhere to the same
> > >>>> format so that the interoperability offered by the Virtio spec is guaranteed?
> > >>>
> > >>> We spent a lot of time thinking about that (this started around 2 years ago); we
> > >>> discussed several use cases and did some PoCs to try to have everything covered
> > >>> (secure to non-secure and VM to VM using FF-A, system to system over PCI or a hardware
> > >>> messaging system, PCI, a Xen-specific implementation) to check the needs and try to
> > >>> cover as much as we can.
> > >>>
> > >>> Now there might be cases we missed, but we think that having a purely message-based
> > >>> interface between the bus and the transport, with responsibilities split the way we did,
> > >>> allows lots of different bus implementations without affecting the transport and
> > >>> driver/device implementations on top.
> > >>>
> > >>> We identified that a common use case will be for the bus to transfer messages using
> > >>> FIFOs to optimize speed (in the end you need a way to share memory between
> > >>> both sides, so why not use a part of it to transfer the messages and reduce the number
> > >>> of data exchanges and copies). This will be used by PCI, Xen, FF-A, and others in
> > >>> practice (so we might standardize the FIFO format in the future to allow even more code
> > >>> reuse between buses).
> > >>>
> > >>
> > >> Not just the FIFO format, but how that FIFO gets shared between the device and
> > >> the host also needs to be documented. Maybe for this initial transport version,
> > >> you can start by defining the FF-A bus implementation?
> >
> >