From: Jason Wang <jasowang@redhat.com>
To: mst@redhat.com, jasowang@redhat.com, qemu-devel@nongnu.org
Cc: eperezma@redhat.com, elic@nvidia.com, gdawar@xilinx.com,
lingshan.zhu@intel.com, lulu@redhat.com
Subject: [PATCH V4 09/10] virtio-net: vhost control virtqueue support
Date: Mon, 11 Oct 2021 12:28:28 +0800
Message-ID: <20211011042829.4159-10-jasowang@redhat.com>
In-Reply-To: <20211011042829.4159-1-jasowang@redhat.com>

This patch implements control virtqueue support for vhost. To do so,
virtio-net must distinguish the datapath queue pairs from the control
virtqueue via each peer NetClientState's is_datapath flag, and pass
the counts of both virtqueue types to vhost_net_start()/vhost_net_stop().
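
For illustration only, here is a minimal standalone sketch of the
accounting this patch introduces. The struct is a stripped-down
stand-in for QEMU's NetClientState (only the is_datapath flag from
the earlier "net: introduce control client" patch matters), and the
peer layout of two datapath pairs plus one control client is an
assumed example, not taken from the patch:

    #include <stdio.h>
    #include <stdbool.h>

    /* Stripped-down stand-in for QEMU's NetClientState. */
    typedef struct {
        bool is_datapath;
    } NetClientState;

    int main(void)
    {
        /* Assumed layout: a backend exposing two datapath queue
         * pairs plus one control virtqueue client. */
        NetClientState peers[] = {
            { .is_datapath = true },
            { .is_datapath = true },
            { .is_datapath = false },
        };
        int max_ncs = sizeof(peers) / sizeof(peers[0]);
        int max_queue_pairs = 0;

        /* Count only the datapath clients, as the realize hunk
         * below does. */
        for (int i = 0; i < max_ncs; i++) {
            if (peers[i].is_datapath) {
                ++max_queue_pairs;
            }
        }

        /* Whatever the peers provide beyond the datapath pairs is
         * the control virtqueue, as computed in
         * virtio_net_vhost_status(). */
        int cvq = max_ncs - max_queue_pairs;

        printf("data queue pairs: %d, cvq: %d\n",
               max_queue_pairs, cvq);
        /* QEMU would then call:
         *   vhost_net_start(vdev, n->nic->ncs, max_queue_pairs, cvq);
         */
        return 0;
    }

For that layout the sketch prints "data queue pairs: 2, cvq: 1",
matching the counts vhost_net_start() would receive.
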
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20210907090322.1756-10-jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
hw/net/vhost_net.c | 2 +-
hw/net/virtio-net.c | 23 +++++++++++++++++++----
include/hw/virtio/virtio-net.h | 1 +
3 files changed, 21 insertions(+), 5 deletions(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 3aabab06ea..0d888f29a6 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -326,7 +326,7 @@ int vhost_net_start(VirtIODevice *dev, NetClientState *ncs,
VirtIONet *n = VIRTIO_NET(dev);
int nvhosts = data_queue_pairs + cvq;
struct vhost_net *net;
- int r, e, i, last_index = data_qps * 2;
+ int r, e, i, last_index = data_queue_pairs * 2;
NetClientState *peer;
if (!cvq) {
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 2ade019b22..57a0cbc6cd 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -244,6 +244,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
VirtIODevice *vdev = VIRTIO_DEVICE(n);
NetClientState *nc = qemu_get_queue(n->nic);
int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
+ int cvq = n->max_ncs - n->max_queue_pairs;
if (!get_vhost_net(nc->peer)) {
return;
@@ -285,14 +286,14 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
}
n->vhost_started = 1;
- r = vhost_net_start(vdev, n->nic->ncs, queue_pairs, 0);
+ r = vhost_net_start(vdev, n->nic->ncs, queue_pairs, cvq);
if (r < 0) {
error_report("unable to start vhost net: %d: "
"falling back on userspace virtio", -r);
n->vhost_started = 0;
}
} else {
- vhost_net_stop(vdev, n->nic->ncs, queue_pairs, 0);
+ vhost_net_stop(vdev, n->nic->ncs, queue_pairs, cvq);
n->vhost_started = 0;
}
}
@@ -3393,9 +3394,23 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
return;
}
- n->max_queue_pairs = MAX(n->nic_conf.peers.queues, 1);
+ n->max_ncs = MAX(n->nic_conf.peers.queues, 1);
+
+ /*
+ * Figure out the datapath queue pairs since the backend could
+ * provide control queue via peers as well.
+ */
+ if (n->nic_conf.peers.queues) {
+ for (i = 0; i < n->max_ncs; i++) {
+ if (n->nic_conf.peers.ncs[i]->is_datapath) {
+ ++n->max_queue_pairs;
+ }
+ }
+ }
+ n->max_queue_pairs = MAX(n->max_queue_pairs, 1);
+
if (n->max_queue_pairs * 2 + 1 > VIRTIO_QUEUE_MAX) {
- error_setg(errp, "Invalid number of queue_pairs (= %" PRIu32 "), "
+ error_setg(errp, "Invalid number of queue pairs (= %" PRIu32 "), "
"must be a positive integer less than %d.",
n->max_queue_pairs, (VIRTIO_QUEUE_MAX - 1) / 2);
virtio_cleanup(vdev);
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 71cbdc26d7..08ee6dea39 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -196,6 +196,7 @@ struct VirtIONet {
int multiqueue;
uint16_t max_queue_pairs;
uint16_t curr_queue_pairs;
+ uint16_t max_ncs;
size_t config_size;
char *netclient_name;
char *netclient_type;
--
2.25.1
Thread overview: 17+ messages
2021-10-11 4:28 [PATCH V4 00/10] vhost-vDPA multiqueue Jason Wang
2021-10-11 4:28 ` [PATCH V4 01/10] vhost-vdpa: open device fd in net_init_vhost_vdpa() Jason Wang
2021-10-11 4:28 ` [PATCH V4 02/10] vhost-vdpa: classify one time request Jason Wang
2021-10-11 4:28 ` [PATCH V4 03/10] vhost-vdpa: prepare for the multiqueue support Jason Wang
2021-10-18 15:44 ` Stefano Garzarella
2021-10-20 2:50 ` Jason Wang
2021-10-11 4:28 ` [PATCH V4 04/10] vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState * Jason Wang
2021-10-11 4:28 ` [PATCH V4 05/10] net: introduce control client Jason Wang
2021-10-11 4:28 ` [PATCH V4 06/10] vhost-net: control virtqueue support Jason Wang
2021-10-11 4:28 ` [PATCH V4 07/10] virtio-net: use "queue_pairs" instead of "queues" when possible Jason Wang
2021-10-11 4:28 ` [PATCH V4 08/10] vhost: record the last virtqueue index for the virtio device Jason Wang
2021-10-11 4:28 ` Jason Wang [this message]
2021-10-11 4:28 ` [PATCH V4 10/10] vhost-vdpa: multiqueue support Jason Wang
2021-10-19 7:21 ` [PATCH V4 00/10] vhost-vDPA multiqueue Michael S. Tsirkin
2021-10-19 7:24 ` Jason Wang
2021-10-19 10:44 ` Michael S. Tsirkin
2021-10-19 11:17 ` Michael S. Tsirkin