public inbox for virtio-dev@lists.linux.dev
From: "Zhu, Lingshan" <lingshan.zhu@intel.com>
To: Parav Pandit <parav@nvidia.com>, Jason Wang <jasowang@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	"eperezma@redhat.com" <eperezma@redhat.com>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	"stefanha@redhat.com" <stefanha@redhat.com>,
	"virtio-comment@lists.oasis-open.org"
	<virtio-comment@lists.oasis-open.org>,
	"virtio-dev@lists.oasis-open.org"
	<virtio-dev@lists.oasis-open.org>
Subject: [virtio-dev] Re: [virtio-comment] [PATCH 5/5] virtio-pci: implement VIRTIO_F_QUEUE_STATE
Date: Wed, 13 Sep 2023 12:01:15 +0800	[thread overview]
Message-ID: <7424d2ae-2366-882f-bd84-04ee5714764b@intel.com> (raw)
In-Reply-To: <PH0PR12MB5481687C209231F8C4CE8A1EDCF1A@PH0PR12MB5481.namprd12.prod.outlook.com>



On 9/12/2023 9:43 PM, Parav Pandit wrote:
>> From: Zhu, Lingshan <lingshan.zhu@intel.com>
>> Sent: Tuesday, September 12, 2023 6:33 PM
>>
>> On 9/12/2023 5:21 PM, Parav Pandit wrote:
>>>> From: Zhu, Lingshan <lingshan.zhu@intel.com>
>>>> Sent: Tuesday, September 12, 2023 2:33 PM
>>>>
>>>> Admin vqs require fixed and dedicated resources to serve the VMs, so
>>>> the question still remains: does this scale to serving the migration
>>>> of a large number of devices? How many admin vqs do you need to serve
>>>> 10 VMs, how many for 100, and so on? How does it scale?
>>>>
>>> Yes, it scales within the AQ and across multiple AQs.
>>> Please consult your board designers to know such limits for your device.
>> If scaling requires multiple AQs, then how many should a vendor
>> provide for the worst case?
>>
>> I am tired of repeating the same questions.
> I said it scales within the AQ (and across AQs).
> I have answered enough times, so I will stop responding to the same
> repeated question.
> Your repeated question is not helping anyone, as it is not in the scope
> of virtio.
>
> If you think it is, please get it written first for RSS and MQ in net section and post for review.
You missed the point of the question, and I agree there is no need to
discuss this anymore.
>
>>>> If one admin vq can serve 100 VMs, can it migrate 1000 VMs in a
>>>> reasonable time?
>>>> If not, how many exactly?
>>>>
>>> Yes, it can serve both 100 and 1000 VMs in reasonable time.
>> I am not sure. Is the AQ limitless? Can it serve thousands of VMs in a
>> reasonable time, like 300 ms?
>>
> Yes.
Really? Limitless?
>
>> If you say that requires multiple AQs, then how many should a vendor
>> provide?
>>
> I didn't say multiple AQs must be used.
> It is the same as NIC RQs.
Don't you agree that a single vq has its own performance limitations?
>
>> Don't say the board designer owns the risk.
>>>> And registers do not need to scale; they reside on the VF and only
>>>> serve the VF.
>>>>
>>> Since it is per VF, it is by nature a linearly growing entity whose
>>> reads and writes the board design needs to support with guaranteed
>>> timing.
>>> It clearly scales worse than a queue.
>> Please read my series. For example, we introduce a new SUSPEND bit in
>> \field{device status}; are there any scalability issues here?
> That must behave like queue_reset: the device must acknowledge that it
> is suspended.
> And that brings the scale issue.
In this series, it says:
+When setting SUSPEND, the driver MUST re-read \field{device status} to 
ensure the SUSPEND bit is set.

And this has nothing to do with scale.
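The quoted requirement is a simple set-then-confirm handshake. As a minimal C sketch of it, with a toy in-memory status byte standing in for the real MMIO register (the 0x40 bit value and the accessor names are illustrative assumptions, not the proposal's actual encoding):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed bit value for the proposed SUSPEND status bit; the series
 * introduces the bit, but 0x40 here is only an illustrative choice. */
#define VIRTIO_STATUS_SUSPEND 0x40u

/* Toy model of the device status byte; a real driver would issue MMIO
 * reads/writes to the common configuration structure instead. */
static uint8_t device_status;

static uint8_t read_status(void) { return device_status; }

static void write_status(uint8_t s)
{
    /* This toy device quiesces instantly; real hardware may need
     * several re-reads before it reports the bit as set. */
    device_status = s;
}

/* Set SUSPEND, then re-read the status until the device confirms it,
 * as the quoted normative text requires. Returns false on timeout. */
static bool virtio_suspend(unsigned max_polls)
{
    write_status(read_status() | VIRTIO_STATUS_SUSPEND);
    while (max_polls--) {
        if (read_status() & VIRTIO_STATUS_SUSPEND)
            return true; /* device acknowledged the suspend */
    }
    return false;
}
```

The re-read loop is what keeps the acknowledgment per-device: each VF confirms its own status byte, without involving any shared facility.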
> On top of that, once the device is SUSPENDED, it cannot accept some
> other command such as RESET_VQ.
So, as SiWei suggested, a new feature bit for vq reset will be
introduced in V2.
>
>>>> It does not reside on the PF to migrate the VFs.
>>> Hence it does not scale and cannot do parallel operations within the
>>> VF, unless each register is replicated.
>> Why does it not scale? It is a per-device facility.
> Because the device needs to answer per device through some large-scale
> memory to fit within the response time.
Again, it is a per-device facility; it is register-based and serves only
the one device itself.
And we do not plan to log the dirty pages in the BAR.
>
>> Why do you need parallel operation against the LM facility?
> Because your downtime was 300msec for 1000 VMs.
The LM facility in this series is per-device; it only serves the device
itself.
>
>> That doesn't make a lot of sense.
>>> Using registers instead of a queue for bulk data transfer is a
>>> question that was settled when the virtio spec was born.
>>> I don't see a point in discussing it.
>>> Snippet from the spec: "As a device can have zero or more virtqueues
>>> for bulk data transport"
>> Where do you see that this series intends to transfer bulk data
>> through registers?
>>>> The VF's config space can use the device's dedicated resources, like
>>>> bandwidth.
>>>>
>>>> For the AQ, you still need to reserve resources, and how much?
>>> It depends on your board; please consult your board designer, as it
>>> depends on the implementation.
>>>   From the spec's point of view, it should be the same as any other
>>> virtqueue.
>> So the vendor owns the risk of implementing AQ LM? Why do they have
>> to?
>>>>> No, I do not agree. It can fail, and it is very hard for board
>>>>> designers.
>>>>> AQs are a more reliable way to transport bulk data in a scalable
>>>>> manner for tens of member devices.
>>>> Really? How often do you observe the virtio config space failing?
>>> On an Intel Icelake server, we have seen it failing with 128 VFs.
>>> And the device needs to do very weird things to support 1000+ VFs
>>> with a forever-expanding config space, which is not the topic of this
>>> discussion anyway.
>> That is your setup problem.
>>>
>>>>>> Please allow me to provide an extreme example: is one single admin
>>>>>> vq limitless, such that it can serve the migration of hundreds to
>>>>>> thousands of VMs?
>>>>> It is left to the device implementation, just like RSS and
>>>>> multi-queue support.
>>>>> Is one Q enough for 800Gbps to 10Mbps links?
>>>>> The answer is: it is not in the scope of the specification; the
>>>>> spec provides the framework to scale this way but does not impose
>>>>> it on the device.
>>>> Even without RSS or MQ support, the device can still work, with
>>>> performance overhead, rather than fail.
>>>>
>>> _work_ is subjective.
>>> The financial transaction (application) failed. The packets worked.
>>> The LM commands were successful, but they were not timely.
>>>
>>> It is the same thing.
>>>
>>>> A live migration failure caused by insufficient bandwidth and
>>>> resources is totally different.
>>> A very abstract point, and unrelated to administration commands.
>> It is your design facing the problem.
>>>>>> If not, two or three, or what number?
>>>>> It really does not matter; it is the wrong point to discuss here.
>>>>> The number of queues and command execution depend on the device
>>>>> implementation.
>>>>> A financial transaction application can time out when the device's
>>>>> queuing delay for a virtio-net rx queue is long.
>>>>> And we don't put details about such things in the specification.
>>>>> The spec takes the requirements and provides a driver-device
>>>>> interface to implement and scale.
>>>>> I still don’t follow the motivation behind the question.
>>>>> Is your question: how many admin queues are needed to migrate N
>>>>> member devices? If so, it is implementation specific.
>>>>> It is similar to how such things depend on the implementation for
>>>>> the 30 virtio device types.
>>>>> And if you are implying that, because it is implementation
>>>>> specific, the administration queue should not be used and some
>>>>> configuration register should be used instead,
>>>>> then you should propose a config register interface to post
>>>>> virtqueue descriptors that way for the 30 device types!
>>>> If so, is it left undefined? A potential risk for device
>>>> implementations?
>>>> Then why must it be the admin vq?
>>> Because administration commands and the admin vq do not require
>>> devices to implement thousands of registers that must have a
>>> time-bound completion guarantee.
>>> A large part of the industry, including the SIOV devices led by Intel
>>> and others, is moving away from the register access mode.
>>> To summarize, administration commands and the queue offer the
>>> following benefits.
>>>
>>> 1. Ability to do bulk data transfers between the driver and the
>>> device
>>>
>>> 2. Ability to parallelize the work within the driver and within the
>>> device, using single or multiple virtqueues
>>>
>>> 3. Eliminates implementing PCI read/write MMIO registers, which
>>> demand a low-latency response interval
>>>
>>> 4. Better utilizes the host CPU, as no one needs to poll on a device
>>> register for completion
>>>
>>> 5. Ability to handle variability in command completion by the device,
>>> and the ability to notify the driver
>>>
>>> If this does not satisfy you, please refer to some of the past email
>>> discussions from the administration virtqueue days.
>> I think you mixed up the facility and the implementation in my
>> series; please read it.
> I don't know what you are referring to. You asked "why is AQ a must?"
> I answered above what the AQ has to offer over some synchronous
> register.
Again, we are implementing facilities; V2 will include inflight
descriptors and dirty page tracking. That works for LM.
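For contrast, the batching claim behind the admin-queue position earlier in the thread can be sketched as a shared command ring: one queue carries per-VF commands that the device completes asynchronously. Everything here (the command layout, the opcode, the ring depth) is a hypothetical illustration, not the spec's admin command format:

```c
#include <stddef.h>
#include <stdint.h>

#define AQ_DEPTH       256   /* hypothetical ring depth */
#define AQ_CMD_SUSPEND 0x01u /* hypothetical opcode */

struct aq_cmd {
    uint8_t  opcode;
    uint16_t vf_id;  /* member device the command targets */
    uint8_t  status; /* written by the device on completion */
};

struct admin_queue {
    struct aq_cmd ring[AQ_DEPTH];
    size_t head; /* next free slot (driver-owned) */
    size_t tail; /* next completion (device-owned) */
};

/* Driver side: queue one suspend command per VF in a single batch.
 * The driver posts and moves on; no per-VF register polling. */
static size_t aq_submit_suspend(struct admin_queue *aq,
                                const uint16_t *vf_ids, size_t n)
{
    size_t queued = 0;
    while (queued < n && (aq->head - aq->tail) < AQ_DEPTH) {
        struct aq_cmd *cmd = &aq->ring[aq->head % AQ_DEPTH];
        cmd->opcode = AQ_CMD_SUSPEND;
        cmd->vf_id  = vf_ids[queued];
        cmd->status = 0;
        aq->head++;
        queued++;
    }
    return queued; /* commands now pending for the device to complete */
}
```

Whether one such ring is deep or fast enough for 100 or 1000 member devices is exactly the sizing question the thread leaves to the implementation.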

>



