From: Jason Wang <jasowang@redhat.com>
To: Cindy Lu <lulu@redhat.com>,
mst@redhat.com, armbru@redhat.com, eblake@redhat.com,
cohuck@redhat.com
Cc: mhabets@solarflare.com, qemu-devel@nongnu.org,
rob.miller@broadcom.com, saugatm@xilinx.com, hanand@xilinx.com,
hch@infradead.org, eperezma@redhat.com, jgg@mellanox.com,
shahafs@mellanox.com, kevin.tian@intel.com, parav@mellanox.com,
vmireyno@marvell.com, cunming.liang@intel.com, gdawar@xilinx.com,
jiri@mellanox.com, xiao.w.wang@intel.com, stefanha@redhat.com,
zhihong.wang@intel.com, aadam@redhat.com, rdunlap@infradead.org,
maxime.coquelin@redhat.com, lingshan.zhu@intel.com
Subject: Re: [RFC v1 4/4] vhost: introduce vhost_set_vring_ready method
Date: Tue, 21 Apr 2020 11:59:49 +0800
Message-ID: <c70efeb6-e664-2f5b-dc90-8929f1033e35@redhat.com>
In-Reply-To: <20200420093241.4238-5-lulu@redhat.com>

On 2020/4/20 5:32 PM, Cindy Lu wrote:
> From: Jason Wang <jasowang@redhat.com>
>
> Vhost-vdpa introduces VHOST_VDPA_SET_VRING_ENABLE, which complies with
> the semantics of queue_enable as defined in the virtio spec. This
> method can be used to prevent the device from executing requests for a
> specific virtqueue. This patch introduces the vhost_ops for it.
>
> Note that we already have vhost_set_vring_enable, which has different
> semantics: it allows enabling or disabling a specific virtqueue for
> some kinds of vhost backends. E.g. vhost-user uses this to change the
> number of active queue pairs.

This patch seems to mix four things; please use dedicated patches for:

1) introduce the queue_enabled method for virtio-bus
2) implement queue_enabled for virtio-pci
3) introduce the vhost_set_vring_ready method for vhost ops (see the
semantics sketch below)
4) implement vhost_set_vring_ready for vdpa (this one needs to be
squashed into the vhost-vdpa patch)
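
As an aside, since the two vring ops are easy to confuse, here is a
minimal sketch of the semantic difference at the net layer. It uses only
the signatures visible in this patch; the caller, its queue layout and
the array-of-peers argument are hypothetical, merely in the style of
vhost-user multiqueue:

/* Sketch only; assumes the declarations from include/net/vhost_net.h. */

/*
 * Reversible semantics: a vhost-user style backend can bring queue
 * pairs up and down at runtime, e.g. enable the first 'active' queues
 * and disable the rest.
 */
static int set_active_queues(NetClientState *peers, int total, int active)
{
    int i, r;

    for (i = 0; i < total; i++) {
        r = vhost_set_vring_enable(&peers[i], i < active);
        if (r < 0) {
            return r;
        }
    }

    return 0;
}

/*
 * One-way semantics: vhost_set_vring_ready() only marks the peer's
 * vrings ready, matching queue_enable in the virtio spec; there is no
 * "unready" counterpart short of resetting the device.
 */
static int mark_queues_ready(NetClientState *peers, int total)
{
    int i, r;

    for (i = 0; i < total; i++) {
        r = vhost_set_vring_ready(&peers[i]);
        if (r < 0) {
            return r;
        }
    }

    return 0;
}
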
Thanks
>
> Author: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  hw/net/vhost_net-stub.c           |  5 +++++
>  hw/net/vhost_net.c                | 16 ++++++++++++++++
>  hw/virtio/vhost-vdpa.c            |  9 +++------
>  hw/virtio/virtio-pci.c            | 13 +++++++++++++
>  hw/virtio/virtio.c                |  6 ++++++
>  include/hw/virtio/vhost-backend.h |  2 ++
>  include/hw/virtio/virtio-bus.h    |  4 ++++
>  include/net/vhost_net.h           |  1 +
>  8 files changed, 50 insertions(+), 6 deletions(-)
>
> diff --git a/hw/net/vhost_net-stub.c b/hw/net/vhost_net-stub.c
> index aac0e98228..f5ef1e3055 100644
> --- a/hw/net/vhost_net-stub.c
> +++ b/hw/net/vhost_net-stub.c
> @@ -86,6 +86,11 @@ int vhost_set_vring_enable(NetClientState *nc, int enable)
>     return 0;
> }
>
> +int vhost_set_vring_ready(NetClientState *nc)
> +{
> +    return 0;
> +}
> +
> int vhost_net_set_mtu(struct vhost_net *net, uint16_t mtu)
> {
>     return 0;
> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> index 0d13fda2fc..463e333531 100644
> --- a/hw/net/vhost_net.c
> +++ b/hw/net/vhost_net.c
> @@ -380,6 +380,10 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
>                 goto err_start;
>             }
>         }
> +
> +        if (virtio_queue_enabled(dev, i)) {
> +            vhost_set_vring_ready(peer);
> +        }
>     }
>
>     return 0;
> @@ -487,6 +491,18 @@ int vhost_set_vring_enable(NetClientState *nc, int enable)
>     return 0;
> }
>
> +int vhost_set_vring_ready(NetClientState *nc)
> +{
> +    VHostNetState *net = get_vhost_net(nc);
> +    const VhostOps *vhost_ops = net->dev.vhost_ops;
> +
> +    if (vhost_ops && vhost_ops->vhost_set_vring_ready) {
> +        return vhost_ops->vhost_set_vring_ready(&net->dev);
> +    }
> +
> +    return 0;
> +}
> +
> int vhost_net_set_mtu(struct vhost_net *net, uint16_t mtu)
> {
>     const VhostOps *vhost_ops = net->dev.vhost_ops;
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 213b327600..49224ef9f8 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -325,18 +325,15 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
>     return idx - dev->vq_index;
> }
>
> -static int vhost_vdpa_set_vring_enable(struct vhost_dev *dev, int enable)
> +static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
> {
>     int i;
>
>     for (i = 0; i < dev->nvqs; ++i) {
>         struct vhost_vring_state state = {
>             .index = dev->vq_index + i,
> -            .num = enable,
> +            .num = 1,
>         };
> -
> -        state.num = 1;
> -
>         vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
>     }
>
> @@ -368,7 +365,7 @@ const VhostOps vdpa_ops = {
>         .vhost_set_owner = vhost_vdpa_set_owner,
>         .vhost_reset_device = vhost_vdpa_reset_device,
>         .vhost_get_vq_index = vhost_vdpa_get_vq_index,
> -        .vhost_set_vring_enable = vhost_vdpa_set_vring_enable,
> +        .vhost_set_vring_ready = vhost_vdpa_set_vring_ready,
>         .vhost_requires_shm_log = NULL,
>         .vhost_migration_done = NULL,
>         .vhost_backend_can_merge = NULL,
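
One observation on the loop above, not a blocker: the status of
vhost_vdpa_call() is dropped. If callers ever need to see a failure, a
variant that propagates the first error would be straightforward (a
sketch against the exact code above, nothing more):

static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
{
    int i, r;

    for (i = 0; i < dev->nvqs; ++i) {
        struct vhost_vring_state state = {
            .index = dev->vq_index + i,
            .num = 1,
        };

        /* stop at the first vring the backend refuses to make ready */
        r = vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
        if (r < 0) {
            return r;
        }
    }

    return 0;
}
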
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index c6b47a9c73..4aaf5d953e 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -1103,6 +1103,18 @@ static AddressSpace *virtio_pci_get_dma_as(DeviceState *d)
>     return pci_get_address_space(dev);
> }
>
> +static bool virtio_pci_queue_enabled(DeviceState *d, int n)
> +{
> +    VirtIOPCIProxy *proxy = VIRTIO_PCI(d);
> +    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
> +
> +    if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
> +        return proxy->vqs[vdev->queue_sel].enabled;
> +    }
> +
> +    return virtio_queue_get_desc_addr(vdev, n) != 0;
> +}
> +
> static int virtio_pci_add_mem_cap(VirtIOPCIProxy *proxy,
>                                   struct virtio_pci_cap *cap)
> {
> @@ -2053,6 +2065,7 @@ static void virtio_pci_bus_class_init(ObjectClass *klass, void *data)
>     k->ioeventfd_enabled = virtio_pci_ioeventfd_enabled;
>     k->ioeventfd_assign = virtio_pci_ioeventfd_assign;
>     k->get_dma_as = virtio_pci_get_dma_as;
> +    k->queue_enabled = virtio_pci_queue_enabled;
> }
>
> static const TypeInfo virtio_pci_bus_info = {
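
Also worth noting for other transports: the new callback is optional. A
transport can leave queue_enabled NULL and let virtio_queue_enabled() in
the virtio.c hunk below fall back to the descriptor-address check, which
is the same test virtio-pci keeps for the pre-VERSION_1 case. That
default is equivalent to this sketch:

static bool default_queue_enabled(VirtIODevice *vdev, int n)
{
    /* a queue whose descriptor table was never set cannot be enabled */
    return virtio_queue_get_desc_addr(vdev, n) != 0;
}
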
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 04716b5f6c..09732a8836 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -3169,6 +3169,12 @@ hwaddr virtio_queue_get_desc_addr(VirtIODevice *vdev, int n)
>
> bool virtio_queue_enabled(VirtIODevice *vdev, int n)
> {
> +    BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
> +    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> +
> +    if (k->queue_enabled)
> +        return k->queue_enabled(qbus->parent, n);
> +
>     return virtio_queue_get_desc_addr(vdev, n) != 0;
> }
>
> diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> index d81bd9885f..ce8de6d308 100644
> --- a/include/hw/virtio/vhost-backend.h
> +++ b/include/hw/virtio/vhost-backend.h
> @@ -78,6 +78,7 @@ typedef int (*vhost_reset_device_op)(struct vhost_dev *dev);
> typedef int (*vhost_get_vq_index_op)(struct vhost_dev *dev, int idx);
> typedef int (*vhost_set_vring_enable_op)(struct vhost_dev *dev,
>                                          int enable);
> +typedef int (*vhost_set_vring_ready_op)(struct vhost_dev *dev);
> typedef bool (*vhost_requires_shm_log_op)(struct vhost_dev *dev);
> typedef int (*vhost_migration_done_op)(struct vhost_dev *dev,
>                                        char *mac_addr);
> @@ -140,6 +141,7 @@ typedef struct VhostOps {
>     vhost_reset_device_op vhost_reset_device;
>     vhost_get_vq_index_op vhost_get_vq_index;
>     vhost_set_vring_enable_op vhost_set_vring_enable;
> +    vhost_set_vring_ready_op vhost_set_vring_ready;
>     vhost_requires_shm_log_op vhost_requires_shm_log;
>     vhost_migration_done_op vhost_migration_done;
>     vhost_backend_can_merge_op vhost_backend_can_merge;
> diff --git a/include/hw/virtio/virtio-bus.h b/include/hw/virtio/virtio-bus.h
> index 38c9399cd4..0f6f215925 100644
> --- a/include/hw/virtio/virtio-bus.h
> +++ b/include/hw/virtio/virtio-bus.h
> @@ -83,6 +83,10 @@ typedef struct VirtioBusClass {
>      */
>     int (*ioeventfd_assign)(DeviceState *d, EventNotifier *notifier,
>                             int n, bool assign);
> +    /*
> +     * Whether queue number n is enabled.
> +     */
> +    bool (*queue_enabled)(DeviceState *d, int n);
>     /*
>      * Does the transport have variable vring alignment?
>      * (ie can it ever call virtio_queue_set_align()?)
> diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
> index 6f3a624cf7..db473ff4d2 100644
> --- a/include/net/vhost_net.h
> +++ b/include/net/vhost_net.h
> @@ -35,6 +35,7 @@ int vhost_net_notify_migration_done(VHostNetState *net, char* mac_addr);
> VHostNetState *get_vhost_net(NetClientState *nc);
>
> int vhost_set_vring_enable(NetClientState * nc, int enable);
> +int vhost_set_vring_ready(NetClientState * nc);
>
> uint64_t vhost_net_get_acked_features(VHostNetState *net);
>