From: "Michael S. Tsirkin" <mst@redhat.com>
To: Parav Pandit <parav@nvidia.com>
Cc: "virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>,
"cohuck@redhat.com" <cohuck@redhat.com>,
"david.edmondson@oracle.com" <david.edmondson@oracle.com>,
"sburla@marvell.com" <sburla@marvell.com>,
"jasowang@redhat.com" <jasowang@redhat.com>,
Yishai Hadas <yishaih@nvidia.com>,
Maor Gottlieb <maorg@nvidia.com>,
"virtio-comment@lists.oasis-open.org"
<virtio-comment@lists.oasis-open.org>,
Shahaf Shuler <shahafs@nvidia.com>
Subject: Re: [virtio-comment] Re: [PATCH v2 0/2] transport-pci: Introduce legacy registers access using AQ
Date: Tue, 16 May 2023 00:32:48 -0400 [thread overview]
Message-ID: <20230516002024-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <PH0PR12MB5481CDAD2698E8078BFFA4DFDC789@PH0PR12MB5481.namprd12.prod.outlook.com>
On Mon, May 15, 2023 at 08:56:42PM +0000, Parav Pandit wrote:
>
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Monday, May 15, 2023 4:30 PM
> > >
> > > I am not sure if this is a real issue, because even legacy guests
> > > have MSI-X enabled by default. In theory, yes, it can fall back to INTx.
> >
> > Well. I feel we should be closer to being sure it's not an issue if we are going to
> > ignore it.
> > Some actual data here:
> >
> > Even Linux only enabled MSI-X in 2009.
> > Of course, other guests took longer. E.g.
> > a quick Google search gave me this for one BSD variant (2017):
> > https://twitter.com/dragonflybsd/status/834494984229421057
> >
> > Many guests have tunables to disable MSI-X. Why?
> > E.g. BSD keeps maintaining it at
> > hw.virtio.pci.disable_msix
> > Is this not a real use-case, and do you know with 100% certainty that no guest
> > has set this to work around some bug, e.g. in the BSD MSI-X core? How can you be sure?
> >
> >
> >
> > INTx is used when guests run out of MSI-X vectors; these setups are not hard
> > to create at all: just constrain the number of vCPUs while creating lots of
> > devices.
> >
> >
> > I could go on.
> >
> >
> >
> > > There are a few options.
> > > 1. A hypervisor driver can be conservative and steal an MSI-X vector of the VF
> > > for transporting INTx.
> > > Pros: Does not need special things in the device
> > > Cons:
> > > a. Fairly intrusive in the hypervisor VF driver.
> > > b. May never be used, as the guest is unlikely to fail on MSI-X
> >
> > Yeah, I do not like this since we are burning up MSI-X vectors.
> > More reasons: this "pass through" MSI-X has no chance to set ISR properly, since
> > MSI-X does not set ISR.
> >
> >
> > > 2. Since multiple VFs' INTx need to be serviced, one command per VF in the AQ is
> > > too much overhead for the device, which needs to map each request to a VF.
> > >
> > > A better way is to have an eventq of depth = num_vfs, like many other
> > > virtio devices have.
> > >
> > > An eventq can hold a per-VF interrupt entry, including the isr value that
> > > you suggest above.
> > >
> > > Something like,
> > >
> > > union eventq_entry {
> > >     u8 raw_data[16];
> > >     struct intx_entry {
> > >         u8 event_opcode;
> > >         u8 group_type;
> > >         u8 reserved[6];
> > >         le64 group_identifier;
> > >         u8 isr_status;
> > >     };
> > > };
> > >
> > > This eventq resides on the owner parent PF.
> > > isr_status is read-on-clear, like today.
> >
> > This is what I wrote, no?
> > lore.kernel.org/all/20230507050146-mutt-send-email-
> > mst%40kernel.org/t.mbox.gz
> >
> > How about a special command that is used when the device would
> > normally send INTx#? It can also return ISR to reduce latency.
> >
> In response to your above suggestion of an AQ command,
> I suggested the eventq that contains the isr_status, which reduces latency as you suggest.
I don't see why we need to keep adding queues though.
Just use one of the admin queues.
> > > Maybe such an eventq can be useful in the future for wider cases.
> >
> > There's no maybe here, is there? Things like live migration need events for sure.
> >
> > > We may have to find a different name for it, as other devices have
> > > device-specific eventqs.
> >
> > We don't need a special name for it. Just use an adminq with a special
> > command that is only consumed when there is an event.
> This requires too many commands to be issued on the PF device,
> potentially one per VF. And the device needs to keep track of the command-to-VF mapping.
>
> > Note you only need to queue a command if MSI is disabled.
> > Which is nice.
> Yes, it is nice.
> An eventq is a variation of it, where the device can keep reporting events without doing the extra mapping and without too many commands.
I don't get the difference then. The format you showed seems very close
to an admin command. What is the difference? How do you avoid the need
to add a command per VF using INTx#?
> Additionally, the eventq also works for 1.x devices, which will read the ISR status register directly from the device.
>
> >
> > > I am inclined to defer this to a later point, until one can identify a
> > > real failure with MSI-X for the guest VM.
> > > So far we don't see this ever happening.
> >
> > What is the question exactly?
> >
> > Just have more devices than vectors:
> > an Intel CPU only has ~200 of these, and current drivers want to use 2 vectors
> > and then fall back on INTx, since that is shared.
> > Extremely easy to create - do you want a qemu command line to try?
> >
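[Editor's note: such a command line might look like the following. This is a hypothetical sketch, not from the thread: the image path, memory size, and device count of 28 are illustrative; the point is many virtio devices against a single vCPU's vector budget.]

```shell
# Hypothetical sketch: a 1-vCPU guest with many virtio-net devices.
# With each driver wanting ~2 MSI-X vectors, enough devices exhaust
# the vector budget and drivers fall back to shared INTx.
qemu-system-x86_64 \
    -enable-kvm -smp 1 -m 2G \
    -drive file=guest.img,if=virtio \
    $(for i in $(seq 0 27); do
        printf -- '-netdev user,id=n%d -device virtio-net-pci,netdev=n%d ' "$i" "$i"
      done)
```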
> An Intel CPU has 256 per core (per vCPU), so there are really a lot.
> One needs to connect a lot more devices to the CPU to run out of them.
> So yes, I would like to try the command that makes it fail.
On the order of 128 functions then for a 1-vCPU VM. You were previously talking
about tens of 1000s of functions as justification for avoiding config
space.
> > Do specific customers even use guests with MSI-X disabled? Maybe no.
> > Does anyone use virtio with MSI-X disabled? Most likely yes.
> I just feel that INTx emulation is an extremely rare/narrow case for some applications that may never find use on HW-based devices.
If we use a dedicated command for this, I guess devices can just
avoid implementing the command if they do not feel like it?
> > So if we are going for legacy PCI emulation, let's have a comprehensive legacy
> > PCI emulation please, where the host can either enable it for a guest or deny it
> > completely, not kind of start running and then fail mysteriously.
> A driver will easily be able to fail the call on INTx configuration, failing the guest.
There's no configuration - INTx is the default - and no way to fail gracefully
for legacy. That is one of the things we should fix; at least the hypervisor
should be able to detect failures.
> But let's see if we can align on the eventq/AQ scheme to make it work.
This publicly archived list offers a means to provide input to the
OASIS Virtual I/O Device (VIRTIO) TC.
In order to verify user consent to the Feedback License terms and
to minimize spam in the list archive, subscription is required
before posting.
Subscribe: virtio-comment-subscribe@lists.oasis-open.org
Unsubscribe: virtio-comment-unsubscribe@lists.oasis-open.org
List help: virtio-comment-help@lists.oasis-open.org
List archive: https://lists.oasis-open.org/archives/virtio-comment/
Feedback License: https://www.oasis-open.org/who/ipr/feedback_license.pdf
List Guidelines: https://www.oasis-open.org/policies-guidelines/mailing-lists
Committee: https://www.oasis-open.org/committees/virtio/
Join OASIS: https://www.oasis-open.org/join/