From: Si-Wei Liu <si-wei.liu@oracle.com>
To: Eli Cohen <elic@nvidia.com>
Cc: "lvivier@redhat.com" <lvivier@redhat.com>,
	"mst@redhat.com" <mst@redhat.com>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>,
	"eperezma@redhat.com" <eperezma@redhat.com>
Subject: Re: [PATCH v1 1/2] vdpa: Add support for querying vendor statistics
Date: Tue, 8 Mar 2022 19:32:42 -0800	[thread overview]
Message-ID: <74495f15-8f1c-93db-1277-50198ac3284e@oracle.com> (raw)
In-Reply-To: <DM8PR12MB5400E03D7AD7833CEBF8DF9DAB099@DM8PR12MB5400.namprd12.prod.outlook.com>



On 3/8/2022 6:13 AM, Eli Cohen wrote:
>
>> -----Original Message-----
>> From: Si-Wei Liu <si-wei.liu@oracle.com>
>> Sent: Tuesday, March 8, 2022 8:16 AM
>> To: Eli Cohen <elic@nvidia.com>
>> Cc: mst@redhat.com; jasowang@redhat.com; virtualization@lists.linux-
>> foundation.org; eperezma@redhat.com; amorenoz@redhat.com;
>> lvivier@redhat.com; sgarzare@redhat.com; Parav Pandit <parav@nvidia.com>
>> Subject: Re: [PATCH v1 1/2] vdpa: Add support for querying vendor statistics
>>
>>
>>
>> On 3/6/2022 11:57 PM, Eli Cohen wrote:
>>>> -----Original Message-----
>>>> From: Si-Wei Liu <si-wei.liu@oracle.com>
>>>> Sent: Saturday, March 5, 2022 12:34 AM
>>>> To: Eli Cohen <elic@nvidia.com>
>>>> Cc: mst@redhat.com; jasowang@redhat.com; virtualization@lists.linux-
>>>> foundation.org; eperezma@redhat.com; amorenoz@redhat.com;
>>>> lvivier@redhat.com; sgarzare@redhat.com; Parav Pandit
>>>> <parav@nvidia.com>
>>>> Subject: Re: [PATCH v1 1/2] vdpa: Add support for querying vendor
>>>> statistics
>>>>
>>>> Sorry, I somehow missed this after my break. Please see comments in line.
>>>>
>>>> On 2/16/2022 10:46 PM, Eli Cohen wrote:
>>>>> On Wed, Feb 16, 2022 at 10:49:26AM -0800, Si-Wei Liu wrote:
>>>>>> On 2/16/2022 12:00 AM, Eli Cohen wrote:
>>>>>>> Allows reading vendor statistics of a vdpa device. The specific
>>>>>>> statistics data is received by the upstream driver in the form of
>>>>>>> (attribute name, attribute value) pairs.
>>>>>>>
>>>>>>> Examples of statistics for an mlx5_vdpa device are:
>>>>>>>
>>>>>>> received_desc - number of descriptors received by the virtqueue
>>>>>>> completed_desc - number of descriptors completed by the virtqueue
>>>>>>>
>>>>>>> A descriptor using indirect buffers is still counted as 1. In
>>>>>>> addition, N chained descriptors are correctly counted N times, as
>>>>>>> one would expect.
>>>>>>> A new callback was added to vdpa_config_ops which provides the
>>>>>>> means for the vdpa driver to return statistics results.
>>>>>>>
>>>>>>> The interface allows for reading all the supported virtqueues,
>>>>>>> including the control virtqueue if it exists.
>>>>>>>
>>>>>>> Below are some examples taken from mlx5_vdpa which are introduced
>>>>>>> in the following patch:
>>>>>>>
>>>>>>> 1. Read statistics for the virtqueue at index 1
>>>>>>>
>>>>>>> $ vdpa dev vstats show vdpa-a qidx 1
>>>>>>> vdpa-a:
>>>>>>> queue_type tx queue_index 1 received_desc 3844836 completed_desc 3844836
>>>>>>>
>>>>>>> 2. Read statistics for the virtqueue at index 32
>>>>>>>
>>>>>>> $ vdpa dev vstats show vdpa-a qidx 32
>>>>>>> vdpa-a:
>>>>>>> queue_type control_vq queue_index 32 received_desc 62 completed_desc 62
>>>>>>>
>>>>>>> 3. Read statistics for the virtqueue at index 0 with json output
>>>>>>>
>>>>>>> $ vdpa -j dev vstats show vdpa-a qidx 0
>>>>>>> {"vstats":{"vdpa-a":{
>>>>>>> "queue_type":"rx","queue_index":0,"name":"received_desc","value":417776,\
>>>>>>>      "name":"completed_desc","value":417548}}}
>>>>>>>
>>>>>>> 4. Read statistics for the virtqueue at index 0 with pretty json
>>>>>>> output
>>>>>>>
>>>>>>> $ vdpa -jp dev vstats show vdpa-a qidx 0
>>>>>>> {
>>>>>>>         "vstats": {
>>>>>>>             "vdpa-a": {
>>>>>>>
>>>>>>>                 "queue_type": "rx",
>>>>>> I wonder where this info can be inferred? I don't see a relevant
>>>>>> change in the patch series that helps gather the
>>>>>> VDPA_ATTR_DEV_QUEUE_TYPE. Is this an arbitrary string defined by
>>>>>> the vendor as well? If so, how does the user expect to consume it?
>>>>> The queue type is deduced from the index and whether we have a
>>>>> control virtqueue. Even numbers are rx, odd numbers are tx, and if
>>>>> there is a CVQ, the last one is the CVQ.
>>>> OK, then VDPA_ATTR_DEV_QUEUE_TYPE attribute introduced in this patch
>>>> might not be useful at all?
>>> Right, will remove.
>>>
>>>> And how do you determine in the vdpa tool if CVQ is negotiated or
>>>> not?
>>> I make a netlink call to get the same information as "vdpa dev config show"
>>> retrieves. I use the negotiated features to determine if a CVQ is available.
>>> If it is, the number of VQs equals the control VQ index. So there are two
>>> netlink calls under the hood.
>> The vdpa_dev_mutex lock won't be held across the two separate netlink calls,
>> which may end up with inconsistent state - theoretically, the first call
>> could see the CVQ negotiated, while the later get_vendor_vq_stats() call on
>> the cvq gets -EINVAL due to a device reset in between. Can the negotiated
>> status and the stat query be done within one single netlink call?
> I see your concern.
> The only reason I do the extra call is to know if we have a control VQ and what
> index it is, just to print a descriptive string telling whether it's an rx, tx or control VQ.
>
> So the cure can be simple. Let's have a new attribute that returns the type of
> virtqueue.
I am not sure I follow the cure. Wouldn't it be possible to get both the 
negotiated status and the queue stats in vdpa_nl_cmd_dev_stats_get_doit() 
under the same vdpa_dev_mutex lock? And I am not even sure it is a must 
to display the queue type - the output doesn't seem to include the vdpa 
class info, which makes it hard for scripts to parse this field in a 
generic way.

>   I think Jason did not like the idea of communicating the kind of VQ
> from kernel to userspace but under these circumstances, maybe he would approve.
> Jason?
>
>> What worried me is that the queue index being dynamic and dependent on
>> negotiation status would make it quite hard for the host admin user to
>> follow. The guest may or may not advertise F_MQ and/or F_CTRL_VQ across
>> various phases, e.g. firmware (UEFI), boot loader (grub), until the OS
>> driver is up and running, which can be agnostic to the host admin. For the
>> most part it's not easy to script and predict the queue index, which can
>> change from time to time. Can we define an order of host-predictable queue
>> indices, independent from any guest-negotiated state?
Here I think we can just use the plain queue index in the host view - 
say, if a vdpa net device has 4 pairs of data vqs and 1 control vq, the 
user may use qindex 8 across the board to identify the control vq, 
regardless of whether the F_MQ feature is negotiated in the guest.
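
To make the two schemes concrete, here is a rough sketch (Python; purely
illustrative, the helper names are made up and not part of the patch):

```python
def queue_type_negotiated(idx, num_data_vqs, has_cvq):
    """Current proposal: type deduced from the *negotiated* layout.
    Even index = rx, odd = tx, and the CVQ (if present) is the last queue."""
    if has_cvq and idx == num_data_vqs:
        return "control_vq"
    return "rx" if idx % 2 == 0 else "tx"

def cvq_index_host_view(max_vq_pairs):
    """Suggested alternative: a fixed, host-predictable index. With 4 data
    queue pairs the CVQ is always qidx 8, whether or not F_MQ/F_CTRL_VQ
    have been negotiated by the guest yet."""
    return 2 * max_vq_pairs

# With F_MQ negotiated (4 pairs -> 8 data VQs), both views agree:
assert queue_type_negotiated(8, 8, True) == "control_vq"
assert cvq_index_host_view(4) == 8
# Without F_MQ (1 pair), the negotiated layout moves the CVQ to index 2,
# while the host view keeps it at 8 - this is the moving target a host
# admin script would have to chase:
assert queue_type_negotiated(2, 2, True) == "control_vq"
```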


Regards,
-Siwei

>>
>>>> Looks to me there are still some loose ends I don't quite yet
>>>> understand.
>>>>
>>>>
>>>>>>>                 "queue_index": 0,
>>> I think this can be removed since the command is for a specific index.
>>>
>>>>>>>                 "name": "received_desc",
>>>>>>>                 "value": 417776,
>>>>>>>                 "name": "completed_desc",
>>>>>>>                 "value": 417548
>>>>>> Not for this kernel patch, but IMHO it's best to put the name &
>>>>>> value pairs in an array instead of flat entries in json's
>>>>>> hash/dictionary. The hash entries can be re-ordered deliberately by
>>>>>> an external json parsing tool, ending up with inconsistent stat values.
>>>> This comment was missed for some reason. Please change the example in
>>>> the log if you agree to address it in the vdpa tool. Or justify why
>>>> keeping the order of the json hash/dictionary entries is fine.
>>> Sorry for skipping this comment.
>>> Do you mean to present the information like:
>>> "received_desc": 417776,
>>> "completed_desc": 417548,
>> I mean the following presentation:
>>
>> $ vdpa -jp dev vstats show vdpa-a qidx 0
>> {
>>       "vstats": {
>>           "vdpa-a": {
>>               "queue_stats": [{
>>                   "queue_index": 0,
>>                   "queue_type": "rx",
>>                   "stat_name": [ "received_desc","completed_desc" ],
>>                   "stat_value": [ 417776,417548 ]
>>               }]
>>           }
>>       }
>> }
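
To illustrate the concern with the flat form: an ordinary JSON parser 
silently collapses duplicate keys within one object, so the flat entries 
from the commit log lose data, while the array form keeps the name/value 
pairing intact however the entries are reordered. A quick sketch (Python; 
purely illustrative):

```python
import json

# Flat form from the commit log: duplicate "name"/"value" keys in one
# object. A standard JSON parser keeps only the last occurrence.
flat = ('{"queue_type":"rx","name":"received_desc","value":417776,'
        '"name":"completed_desc","value":417548}')
parsed = json.loads(flat)
assert parsed["name"] == "completed_desc"   # received_desc is lost
assert parsed["value"] == 417548

# Array form proposed above: the pairing survives parsing and reordering.
arrayed = '''{"queue_stats": [{
    "queue_index": 0, "queue_type": "rx",
    "stat_name": ["received_desc", "completed_desc"],
    "stat_value": [417776, 417548]}]}'''
rec = json.loads(arrayed)["queue_stats"][0]
stats = dict(zip(rec["stat_name"], rec["stat_value"]))
assert stats == {"received_desc": 417776, "completed_desc": 417548}
```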
>>
>> I think Parav had similar suggestion, too.
>>
>> Thanks,
>> -Siwei
>>
>>>> Thanks,
>>>> -Siwei
>>>>
>>>>>> Thanks,
>>>>>> -Siwei
>>>>>>>             }
>>>>>>>         }
>>>>>>> }
>>>>>>>
>>>>>>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>>>>>>> ---
>>>>>>>      drivers/vdpa/vdpa.c       | 129 ++++++++++++++++++++++++++++++++++++++
>>>>>>>      include/linux/vdpa.h      |   5 ++
>>>>>>>      include/uapi/linux/vdpa.h |   7 +++
>>>>>>>      3 files changed, 141 insertions(+)
>>>>>>>
>>>>>>> diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
>>>>>>> index 9846c9de4bfa..d0ff671baf88 100644
>>>>>>> --- a/drivers/vdpa/vdpa.c
>>>>>>> +++ b/drivers/vdpa/vdpa.c
>>>>>>> @@ -909,6 +909,74 @@ vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid,
>>>>>>>      	return err;
>>>>>>>      }
>>>>>>> +static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg,
>>>>>>> +			       struct genl_info *info, u32 index)
>>>>>>> +{
>>>>>>> +	int err;
>>>>>>> +
>>>>>>> +	if (nla_put_u32(msg, VDPA_ATTR_DEV_QUEUE_INDEX, index))
>>>>>>> +		return -EMSGSIZE;
>>>>>>> +
>>>>>>> +	err = vdev->config->get_vendor_vq_stats(vdev, index, msg, info->extack);
>>>>>>> +	if (err)
>>>>>>> +		return err;
>>>>>>> +
>>>>>>> +	return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>> +static int vendor_stats_fill(struct vdpa_device *vdev, struct sk_buff *msg,
>>>>>>> +			     struct genl_info *info, u32 index)
>>>>>>> +{
>>>>>>> +	int err;
>>>>>>> +
>>>>>>> +	if (!vdev->config->get_vendor_vq_stats)
>>>>>>> +		return -EOPNOTSUPP;
>>>>>>> +
>>>>>>> +	err = vdpa_fill_stats_rec(vdev, msg, info, index);
>>>>>>> +	if (err)
>>>>>>> +		return err;
>>>>>>> +
>>>>>>> +	return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>> +static int vdpa_dev_vendor_stats_fill(struct vdpa_device *vdev,
>>>>>>> +				      struct sk_buff *msg,
>>>>>>> +				      struct genl_info *info, u32 index)
>>>>>>> +{
>>>>>>> +	u32 device_id;
>>>>>>> +	void *hdr;
>>>>>>> +	int err;
>>>>>>> +	u32 portid = info->snd_portid;
>>>>>>> +	u32 seq = info->snd_seq;
>>>>>>> +	u32 flags = 0;
>>>>>>> +
>>>>>>> +	hdr = genlmsg_put(msg, portid, seq, &vdpa_nl_family, flags,
>>>>>>> +			  VDPA_CMD_DEV_VSTATS_GET);
>>>>>>> +	if (!hdr)
>>>>>>> +		return -EMSGSIZE;
>>>>>>> +
>>>>>>> +	if (nla_put_string(msg, VDPA_ATTR_DEV_NAME, dev_name(&vdev->dev))) {
>>>>>>> +		err = -EMSGSIZE;
>>>>>>> +		goto undo_msg;
>>>>>>> +	}
>>>>>>> +
>>>>>>> +	device_id = vdev->config->get_device_id(vdev);
>>>>>>> +	if (nla_put_u32(msg, VDPA_ATTR_DEV_ID, device_id)) {
>>>>>>> +		err = -EMSGSIZE;
>>>>>>> +		goto undo_msg;
>>>>>>> +	}
>>>>>>> +
>>>>>>> +	err = vendor_stats_fill(vdev, msg, info, index);
>>>>>>> +
>>>>>>> +	genlmsg_end(msg, hdr);
>>>>>>> +
>>>>>>> +	return err;
>>>>>>> +
>>>>>>> +undo_msg:
>>>>>>> +	genlmsg_cancel(msg, hdr);
>>>>>>> +	return err;
>>>>>>> +}
>>>>>>> +
>>>>>>>      static int vdpa_nl_cmd_dev_config_get_doit(struct sk_buff *skb, struct genl_info *info)
>>>>>>>      {
>>>>>>>      	struct vdpa_device *vdev;
>>>>>>> @@ -990,6 +1058,60 @@ vdpa_nl_cmd_dev_config_get_dumpit(struct sk_buff *msg, struct netlink_callback *
>>>>>>>      	return msg->len;
>>>>>>>      }
>>>>>>> +static int vdpa_nl_cmd_dev_stats_get_doit(struct sk_buff *skb,
>>>>>>> +					  struct genl_info *info)
>>>>>>> +{
>>>>>>> +	struct vdpa_device *vdev;
>>>>>>> +	struct sk_buff *msg;
>>>>>>> +	const char *devname;
>>>>>>> +	struct device *dev;
>>>>>>> +	u32 index;
>>>>>>> +	int err;
>>>>>>> +
>>>>>>> +	if (!info->attrs[VDPA_ATTR_DEV_NAME])
>>>>>>> +		return -EINVAL;
>>>>>>> +
>>>>>>> +	if (!info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX])
>>>>>>> +		return -EINVAL;
>>>>>>> +
>>>>>>> +	devname = nla_data(info->attrs[VDPA_ATTR_DEV_NAME]);
>>>>>>> +	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
>>>>>>> +	if (!msg)
>>>>>>> +		return -ENOMEM;
>>>>>>> +
>>>>>>> +	index = nla_get_u32(info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX]);
>>>>>>> +	mutex_lock(&vdpa_dev_mutex);
>>>>>>> +	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
>>>>>>> +	if (!dev) {
>>>>>>> +		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
>>>>>>> +		err = -ENODEV;
>>>>>>> +		goto dev_err;
>>>>>>> +	}
>>>>>>> +	vdev = container_of(dev, struct vdpa_device, dev);
>>>>>>> +	if (!vdev->mdev) {
>>>>>>> +		NL_SET_ERR_MSG_MOD(info->extack, "unmanaged vdpa device");
>>>>>>> +		err = -EINVAL;
>>>>>>> +		goto mdev_err;
>>>>>>> +	}
>>>>>>> +	err = vdpa_dev_vendor_stats_fill(vdev, msg, info, index);
>>>>>>> +	if (!err)
>>>>>>> +		err = genlmsg_reply(msg, info);
>>>>>>> +
>>>>>>> +	put_device(dev);
>>>>>>> +	mutex_unlock(&vdpa_dev_mutex);
>>>>>>> +
>>>>>>> +	if (err)
>>>>>>> +		nlmsg_free(msg);
>>>>>>> +
>>>>>>> +	return err;
>>>>>>> +
>>>>>>> +mdev_err:
>>>>>>> +	put_device(dev);
>>>>>>> +dev_err:
>>>>>>> +	mutex_unlock(&vdpa_dev_mutex);
>>>>>>> +	return err;
>>>>>>> +}
>>>>>>> +
>>>>>>>      static const struct nla_policy vdpa_nl_policy[VDPA_ATTR_MAX + 1] = {
>>>>>>>      	[VDPA_ATTR_MGMTDEV_BUS_NAME] = { .type = NLA_NUL_STRING },
>>>>>>>      	[VDPA_ATTR_MGMTDEV_DEV_NAME] = { .type = NLA_STRING },
>>>>>>> @@ -997,6 +1119,7 @@ static const struct nla_policy vdpa_nl_policy[VDPA_ATTR_MAX + 1] = {
>>>>>>>      	[VDPA_ATTR_DEV_NET_CFG_MACADDR] = NLA_POLICY_ETH_ADDR,
>>>>>>>      	/* virtio spec 1.1 section 5.1.4.1 for valid MTU range */
>>>>>>>      	[VDPA_ATTR_DEV_NET_CFG_MTU] = NLA_POLICY_MIN(NLA_U16, 68),
>>>>>>> +	[VDPA_ATTR_DEV_QUEUE_INDEX] = NLA_POLICY_RANGE(NLA_U32, 0, 65535),
>>>>>>>      };
>>>>>>>      static const struct genl_ops vdpa_nl_ops[] = {
>>>>>>> @@ -1030,6 +1153,12 @@ static const struct genl_ops vdpa_nl_ops[] = {
>>>>>>>      		.doit = vdpa_nl_cmd_dev_config_get_doit,
>>>>>>>      		.dumpit = vdpa_nl_cmd_dev_config_get_dumpit,
>>>>>>>      	},
>>>>>>> +	{
>>>>>>> +		.cmd = VDPA_CMD_DEV_VSTATS_GET,
>>>>>>> +		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
>>>>>>> +		.doit = vdpa_nl_cmd_dev_stats_get_doit,
>>>>>>> +		.flags = GENL_ADMIN_PERM,
>>>>>>> +	},
>>>>>>>      };
>>>>>>>      static struct genl_family vdpa_nl_family __ro_after_init = {
>>>>>>> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
>>>>>>> index 2de442ececae..274203845cfc 100644
>>>>>>> --- a/include/linux/vdpa.h
>>>>>>> +++ b/include/linux/vdpa.h
>>>>>>> @@ -275,6 +275,9 @@ struct vdpa_config_ops {
>>>>>>>      			    const struct vdpa_vq_state *state);
>>>>>>>      	int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
>>>>>>>      			    struct vdpa_vq_state *state);
>>>>>>> +	int (*get_vendor_vq_stats)(struct vdpa_device *vdev, u16 idx,
>>>>>>> +				   struct sk_buff *msg,
>>>>>>> +				   struct netlink_ext_ack *extack);
>>>>>>>      	struct vdpa_notification_area
>>>>>>>      	(*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
>>>>>>>      	/* vq irq is not expected to be changed once DRIVER_OK is set */
>>>>>>> @@ -466,4 +469,6 @@ struct vdpa_mgmt_dev {
>>>>>>>      int vdpa_mgmtdev_register(struct vdpa_mgmt_dev *mdev);
>>>>>>>      void vdpa_mgmtdev_unregister(struct vdpa_mgmt_dev *mdev);
>>>>>>> +#define VDPA_INVAL_QUEUE_INDEX 0xffff
>>>>>>> +
>>>>>>>      #endif /* _LINUX_VDPA_H */
>>>>>>> diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
>>>>>>> index 1061d8d2d09d..c5f229a41dc2 100644
>>>>>>> --- a/include/uapi/linux/vdpa.h
>>>>>>> +++ b/include/uapi/linux/vdpa.h
>>>>>>> @@ -18,6 +18,7 @@ enum vdpa_command {
>>>>>>>      	VDPA_CMD_DEV_DEL,
>>>>>>>      	VDPA_CMD_DEV_GET,		/* can dump */
>>>>>>>      	VDPA_CMD_DEV_CONFIG_GET,	/* can dump */
>>>>>>> +	VDPA_CMD_DEV_VSTATS_GET,
>>>>>>>      };
>>>>>>>      enum vdpa_attr {
>>>>>>> @@ -46,6 +47,12 @@ enum vdpa_attr {
>>>>>>>      	VDPA_ATTR_DEV_NEGOTIATED_FEATURES,	/* u64 */
>>>>>>>      	VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,		/* u32 */
>>>>>>>      	VDPA_ATTR_DEV_SUPPORTED_FEATURES,	/* u64 */
>>>>>>> +
>>>>>>> +	VDPA_ATTR_DEV_QUEUE_INDEX,              /* u16 */
>>>>>>> +	VDPA_ATTR_DEV_QUEUE_TYPE,               /* string */
>>>>>>> +	VDPA_ATTR_DEV_VENDOR_ATTR_NAME,		/* string */
>>>>>>> +	VDPA_ATTR_DEV_VENDOR_ATTR_VALUE,        /* u64 */
>>>>>>> +
>>>>>>>      	/* new attributes must be added above here */
>>>>>>>      	VDPA_ATTR_MAX,
>>>>>>>      };
