From: "Zhu, Lingshan" <lingshan.zhu@intel.com>
To: Parav Pandit <parav@nvidia.com>, Jason Wang <jasowang@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
"eperezma@redhat.com" <eperezma@redhat.com>,
"cohuck@redhat.com" <cohuck@redhat.com>,
"stefanha@redhat.com" <stefanha@redhat.com>,
"virtio-comment@lists.oasis-open.org"
<virtio-comment@lists.oasis-open.org>,
"virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>
Subject: [virtio-dev] Re: [virtio-comment] [PATCH 5/5] virtio-pci: implement VIRTIO_F_QUEUE_STATE
Date: Mon, 11 Sep 2023 16:46:49 +0800 [thread overview]
Message-ID: <88b8b14c-88d8-1f76-0e6e-7b5f334171f1@intel.com> (raw)
In-Reply-To: <PH0PR12MB5481D11A7DF45292103D9290DCF2A@PH0PR12MB5481.namprd12.prod.outlook.com>
On 9/11/2023 4:12 PM, Parav Pandit wrote:
>> From: Zhu, Lingshan <lingshan.zhu@intel.com>
>> Sent: Monday, September 11, 2023 1:28 PM
>
>>> Basic facilities are added in [1] for passthrough devices.
>>> You can leverage them in your v2 for supporting p2p devices, dirty page
>> tracking, passthrough support, shorter downtime and more.
>> Basic facilities should be better not depend on others, but admin vq can re-use
>> the basic facilities.
>>
>> For P2P, what if the devices are placed in different IOMMU group?
> IOMMU grouping is one OS specific notion of an older API.
> Hypervisor needs to do right setup anyway for using PCI spec define access control and other semantics which is outside the scope of [1].
> It is outside primarily because proposal [1] is not migrating the whole "PCI device".
> It is migrating the virtio device, so that we can migrate from PCI VF member to some software based device too.
> And vis-versa.
Since you mentioned P2P: the IOMMU is fundamentally for address-space
isolation. For security reasons, it is usually suggested to pass through
all devices in one IOMMU group to a single guest.
That means, if you want the VF to perform P2P with the PF where the AQ
resides, you have to place them in the same IOMMU group and pass them
all through to one guest. How can the AQ then serve other purposes?
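To make the constraint I mean concrete, here is a minimal sketch of the VFIO-style rule that all devices in one IOMMU group must go to the same guest. The BDF-to-group mapping and the group numbers are made up purely for illustration:

```python
# Sketch: devices sharing an IOMMU group cannot be split across
# different guests (VFIO would reject such an assignment).
# The BDFs and group numbers below are hypothetical.
from collections import defaultdict

iommu_group = {
    "0000:3b:00.0": 7,   # PF hosting the admin vq (hypothetical)
    "0000:3b:00.1": 7,   # VF doing P2P with that PF -> same group
    "0000:3b:00.2": 8,   # another VF, isolated in its own group
}

def split_groups(assignment):
    """Return IOMMU groups whose members are assigned to more than
    one owner, i.e. assignments an IOMMU-group model would reject."""
    owners = defaultdict(set)
    for bdf, owner in assignment.items():
        owners[iommu_group[bdf]].add(owner)
    return {g for g, s in owners.items() if len(s) > 1}

# Keeping the AQ's PF on the host while the P2P VF goes to a guest
# splits group 7, which is exactly the problem raised above:
bad = split_groups({"0000:3b:00.0": "host", "0000:3b:00.1": "vm1"})
ok = split_groups({"0000:3b:00.0": "vm1", "0000:3b:00.1": "vm1"})
```

So with P2P, the PF carrying the AQ ends up inside the same guest as the VF, which is the scenario questioned above.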
>
>>> QOS is such a broad term that is hard to debate unless you get to a specific
>> point.
>> E.g., there can be hundreds or thousands of VMs, how many admin vq are
>> required to serve them when LM? To converge, no timeout.
> How many RSS queues are required to reach 800Gbs NIC performance at what q depth at what interrupt moderation level?
> Such details are outside the scope of virtio specification.
> Those are implementation details of the device.
>
> Similarly here for AQ too.
> The inherent nature of AQ to queue commands and execute them out of order in the device is the fundamental reason, AQ is introduced.
> And one can have more AQs to do unrelated work, mainly from the hypervisor owner device who wants to enqueue unrelated commands in parallel.
As pointed out above, insufficient RSS capability may cause performance
overhead, but not a failure; the device stays functional.
Too few AQs serving too high a volume of VMs, however, may be a real problem.
Yes, the number of AQs is negotiable, but how many exactly should the HW
provide?
Naming a number or an algorithm for the devices-to-AQs ratio is beyond
this topic, but the point stands.
>
>>>> E.g., do you need 100 admin vqs for 1000 VMs? How do you decide the
>>>> number in HW implementation and how does the driver get informed?
>>> Usually just one AQ is enough as proposal [1] is built around inherent
>> downtime reduction.
>>> You can ask similar question for RSS, how does hw device how many RSS
>>> queues are needed. 😊
>>> Device exposes number of supported AQs that driver is free to use.
>> RSS is not a must for the transition through maybe performance overhead.
>> But if the host can not finish Live Migration in the due time, then it is a failed
>> LM.
> It can aborts the LM and restore it back by resuming the device.
An abort means the LM failed.
>
>>> Most sane sys admins do not migrate 1000 VMs at same time for obvious
>> reasons.
>>> But when such requirements arise, a device may support it.
>>> Just like how a net device can support from 1 to 32K txqueues at spec level.
>> The orchestration layer may do that for host upgrade or power-saving.
>> And the VMs may be required to migrate together, for example:
>> a cluster of VMs in the same subnet.
>>
> Sure. AQ of depth 1K can support 1K outstanding commands at a time for 1000 member devices.
PCI transactions are FIFO; can a depth of 1K introduce significant
latency? And a depth of 1K is almost identical to 2 x 500 queue depths,
so the same problem remains: how many resources does the HW need to
reserve to serve the worst case?
Let's set the exact numbers aside; the point is clear.
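As a back-of-envelope illustration of the FIFO latency concern (the per-command service time below is an assumed figure, not from any spec or measurement):

```python
# With strictly in-order (FIFO) completion, the last of N outstanding
# admin commands waits behind all N service times.
# 10 us per command is an assumption chosen only for arithmetic.
SERVICE_US = 10          # assumed per-command device service time, in us
N_OUTSTANDING = 1000     # a 1K-deep AQ fully occupied during mass migration

worst_case_us = N_OUTSTANDING * SERVICE_US
print(worst_case_us / 1000.0, "ms worst-case wait for the last command")
```

Whatever the real service time is, the worst-case wait scales linearly with the number of outstanding commands, which is the resource-sizing question raised above.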
>
>> Lets do not introduce new frangibility
> I don’t see any frangibility added by [1].
> If you see one, please let me know.
The resource and latency issues explained above.
>
>>>>>> Remember CSP migrates all VMs on a host for powersaving or upgrade.
>>>>> I am not sure why the migration reason has any influence on the design.
>>>> Because this design is for live migration.
>>>>> The CSPs that we had discussed, care for performance more and hence
>>>> prefers passthrough instead or mediation and don’t seem to be doing
>>>> any nesting.
>>>>> CPU doesnt have support for 3 level of page table nesting either.
>>>>> I agree that there could be other users who care for nested functionality.
>>>>>
>>>>> Any ways, nesting and non-nesting are two different requirements.
>>>> The LM facility should server both,
>>> I don’t see how PCI spec let you do it.
>>> PCI community already handed over this to SR-PCIM interface outside of the
>> PCI spec domain.
>>> Hence, its done over admin queue for passthrough devices.
>>>
>>> If you can explain, how your proposal addresses passthrough support without
>> mediation and also does DMA, I am very interested to learn that.
>> Do you mean nested?
> Before nesting, just like to see basic single level passthrough to see functional and performant like [1].
I think we have discussed this: the nested guest is not aware of the
admin vq and cannot access it, because the admin vq is a host-side
facility.
>
>> Why this series can not support nested?
> I don’t see all the aspects that I covered in series [1] ranging from flr, device context migration, virtio level reset, dirty page tracking, p2p support, etc. covered in some device, vq suspend resume piece.
>
> [1] https://lists.oasis-open.org/archives/virtio-comment/202309/msg00061.html
We have discussed many other issues in this thread.
>
>>>> And it does not serve bare-metal live migration either.
>>> A bare-metal migration seems a distance theory as one need side cpu,
>> memory accessor apart from device accessor.
>>> But somehow if that exists, there will be similar admin device to migrate it
>> may be TDDISP will own this whole piece one day.
>> Bare metal live migration require other components like firmware OS and
>> partitioning, that's why the device live migration should not be a blocker.
> Device migration is not blocker.
> In-fact it facilitates for this future in case if that happens where side cpu like DPU or similar sideband virtio admin device can migrate over its admin vq.
>
> Long ago when admin commands were discussed, this was discussed too where a admin device may not be an owner device.
The admin vq cannot migrate itself; therefore bare metal cannot be
migrated by the admin vq.