public inbox for virtio-comment@lists.linux.dev
From: Manivannan Sadhasivam <manisadhasivam.linux@gmail.com>
To: Parav Pandit <parav@nvidia.com>
Cc: "virtio-comment@lists.linux.dev" <virtio-comment@lists.linux.dev>,
	"mie@igel.co.jp" <mie@igel.co.jp>
Subject: Re: MSI for Virtio PCI transport
Date: Tue, 25 Jun 2024 11:13:46 +0530	[thread overview]
Message-ID: <20240625054346.GA2642@thinkpad> (raw)
In-Reply-To: <PH0PR12MB54812829338B9B950001BA9BDCD52@PH0PR12MB5481.namprd12.prod.outlook.com>

On Tue, Jun 25, 2024 at 04:09:07AM +0000, Parav Pandit wrote:
> Hi,
> 
> > From: Manivannan Sadhasivam <manisadhasivam.linux@gmail.com>
> > Sent: Monday, June 24, 2024 9:50 PM
> > 
> > Hi,
> > 
> > We are looking into adapting the Virtio spec for configurable physical PCIe
> > endpoint devices, to expose Virtio devices to a host machine connected
> > over PCIe. This allows us to use the existing frontend drivers on the host
> > machine, thus minimizing the development effort. This idea is not new, as
> > some vendors like NVIDIA have already released customized PCIe devices
> > exposing Virtio devices to host machines. But we are working on making
> > configurable PCIe devices running the Linux kernel expose Virtio devices
> > using the PCI Endpoint (EP) subsystem.
> > 
> > Below is a simplified representation of the idea, with virtio-net as an
> > example, but this could be extended to any supported Virtio device:
> > 
> >            HOST                                    ENDPOINT
> > 
> > +-----------------------------+         +-----------------------------+
> > |                             |         |                             |
> > |         Linux Kernel        |         |         Linux Kernel        |
> > |                             |         |                             |
> > |                             |         |     +------------------+    |
> > |                             |         |     |                  |    |
> > |                             |         |     |       Modem      |    |
> > |                             |         |     |                  |    |
> > |                             |         |     +---------|--------+    |
> > |                             |         |               |             |
> > |     +------------------+    |         |     +---------|--------+    |
> > |     |                  |    |         |     |                  |    |
> > |     |     Virt-net     |    |         |     |    Virtio EPF    |    |
> > |     |                  |    |         |     |                  |    |
> > |     +---------|--------+    |         |     +---------|--------+    |
> > |               |             |         |               |             |
> > |     +---------|--------+    |         |     +---------|--------+    |
> > |     |                  |    |         |     |                  |    |
> > |     |    Virtio PCI    |    |         |     | PCI EP Subsystem |    |
> > |     |                  |    |         |     |                  |    |
> > |     +---------|--------+    |         |     +---------|--------+    |
> > | SW            |             |         | SW            |             |
> > ----------------|--------------         ----------------|--------------
> > | HW            |             |         | HW            |             |
> > |     +---------|--------+    |         |     +---------|--------+    |
> > |     |                  |    |         |     |                  |    |
> > |     |      PCIe RC     |    |         |     |     PCIe EP      |    |
> > |     |                  |    |         |     |                  |    |
> > +-----+---------|--------+----+         +-----+---------|--------+----+
> >                 |                                       |
> >                 |                                       |
> >                 |                                       |
> >                 |                PCIe                   |
> >                 -----------------------------------------
> > 
> Can you please explain what the PCIe EP subsystem is?
> I assume it is a subsystem to somehow configure the PCIe EP HW instances?
> If yes, it is not connected to any PCIe RC in your diagram.
> 

The PCIe EP subsystem is a Linux kernel framework for configuring the PCIe EP IP
inside an SoC/device. Here, 'Endpoint' is a separate SoC/device that runs the
Linux kernel and uses the PCIe EP subsystem [1] to configure the PCIe EP IP
based on the product use case, such as a GPU card, NVMe, modem, WLAN, etc.

[1] https://docs.kernel.org/PCI/endpoint/pci-endpoint.html
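
For context, an endpoint function is typically instantiated and bound through
configfs on the endpoint side. Here is a sketch along the lines of the kernel's
pci-endpoint documentation; the pci_epf_test driver, the vendor/device IDs, and
<epc-name> below are just placeholders (a Virtio EPF driver would be configured
the same way):

```shell
# Load an endpoint function driver (pci-epf-test used as a stand-in).
modprobe pci-epf-test
cd /sys/kernel/config/pci_ep/

# Create a function instance and set its config-space identity
# (placeholder IDs).
mkdir functions/pci_epf_test/func1
echo 0x104c > functions/pci_epf_test/func1/vendorid
echo 0xb500 > functions/pci_epf_test/func1/deviceid

# Bind the function to the endpoint controller and start the link
# (the controller name depends on the SoC).
ln -s functions/pci_epf_test/func1 controllers/<epc-name>/
echo 1 > controllers/<epc-name>/start
```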

> So how does the MSI help in this case?
> 

I think you are missing the point that 'Endpoint' is a separate SoC/device that
is connected to a host machine over PCIe, just like a PCIe-based GPU card
connected to a desktop PC. The only difference is that most PCIe cards run
proprietary firmware supplied by the vendor, whereas here the firmware itself
can be built and configured by the user. And this is where Virtio is going to
be exposed.

> 
> > While doing so, we faced an issue due to the lack of MSI support in the Virtio
> > spec for the PCI transport. Currently, the PCI transport (starting from 0.9.5) has
> > defined only INTx (legacy) and MSI-X interrupts for the device to send
> > notifications to the guest. While this works well for hypervisor-to-guest
> > communication, when a physical PCIe device is used as a Virtio device, the
> > lack of MSI support hurts performance (when there is no MSI-X).
> > 
> I am familiar with the scale issue of MSI-X, and that MSI is simpler (relative to MSI-X).
> What prevents implementing MSI-X?
> 

As I said, most of the devices I'm aware of don't support MSI-X in the hardware
itself (I mean, in the PCIe EP IP inside the SoC/device). For simple use cases
like WLAN or modem, MSI-X is not really required.

> > Most of the physical PCIe endpoint devices support MSI interrupts over MSI-
> I am not sure if this is true. :)
> But not a concern either.
> 

It really depends on the use case, I would say.

> > X for simplicity, and with Virtio not supporting MSI, falling back to legacy
> > INTx interrupts hurts performance.
> > 
> > First of all, INTx requires the PCIe device to send two MSG TLPs
> > (Assert/Deassert) to emulate a level-triggered interrupt on the host. And
> > there could be some delay between the assert and deassert messages to make
> > sure that the host recognizes it as a (level-triggered) interrupt. Also,
> > INTx interrupts are limited to 1 per function, so all the notifications
> > from the device have to share this single interrupt (INTA).
> > 
> Yes, INTx deprecation is on my list but I didn't get there yet.
> 
> > On the other hand, MSI requires only one MWr TLP from the device to the host,
> > and since it is a posted write, there is no delay involved. Also, a single PCIe
> > function can use up to 32 MSI vectors, making it possible to use one MSI vector
> > per virtqueue (32 is more than enough for most use cases).
> > 
> > So my question is, why does the Virtio spec not support MSI? If there are
> > no major blockers to supporting MSI, could we propose adding it to the
> > Virtio spec?
> > 
> MSI addition is good for virtio, for small-scale devices of 1 to 32 vectors.
> A PCIe EP may support both the MSI and MSI-X capabilities, and software can give preference to MSI when the need is <= 32 vectors.
> 

The PCIe specification only mandates that devices support either MSI or MSI-X.

Reference: PCIe spec r5.0, sec 6.1.4:

"All PCI Express device Functions that are capable of generating interrupts must
support MSI or MSI-X or both."

So MSI-X is clearly an optional feature, which simple devices tend to omit. If
both are supported, then Virtio will obviously make use of MSI-X, but that's
not the case here.

> Though I don't see how it relates to the PCIe EP configuration in your diagram.
> In other words, the PCI EP subsystem can still work with MSI-X.
> Can you please elaborate?
> 

I hope the above info clarifies. If not, please let me know.

- Mani

-- 
மணிவண்ணன் சதாசிவம்


Thread overview: 13+ messages
2024-06-24 16:19 MSI for Virtio PCI transport Manivannan Sadhasivam
2024-06-25  4:09 ` Parav Pandit
2024-06-25  5:43   ` Manivannan Sadhasivam [this message]
2024-06-25  6:18     ` Parav Pandit
2024-06-25  7:55       ` Michael S. Tsirkin
2024-06-25  8:00         ` Parav Pandit
2024-06-25  8:09           ` Michael S. Tsirkin
2024-06-25  8:18             ` Parav Pandit
2024-06-25  8:29               ` Michael S. Tsirkin
2024-06-25  9:11       ` Manivannan Sadhasivam
2024-06-25  9:59         ` Parav Pandit
2024-06-25  7:52 ` Michael S. Tsirkin
2024-06-25  9:19   ` Manivannan Sadhasivam
