From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Laurent Vivier <lvivier@redhat.com>,
Parav Pandit <parav@mellanox.com>, Cindy Lu <lulu@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Cornelia Huck <cohuck@redhat.com>,
Markus Armbruster <armbru@redhat.com>,
Gautam Dawar <gdawar@xilinx.com>,
Harpreet Singh Anand <hanand@xilinx.com>,
"Gonglei \(Arei\)" <arei.gonglei@huawei.com>,
Peter Xu <peterx@redhat.com>, Eli Cohen <eli@mellanox.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Zhu Lingshan <lingshan.zhu@intel.com>,
Eric Blake <eblake@redhat.com>,
Liuxiangdong <liuxiangdong5@huawei.com>
Subject: [RFC PATCH v5 14/23] vdpa: control virtqueue support on shadow virtqueue
Date: Fri, 8 Apr 2022 15:34:06 +0200
Message-ID: <20220408133415.1371760-15-eperezma@redhat.com>
In-Reply-To: <20220408133415.1371760-1-eperezma@redhat.com>
Introduce control virtqueue support for the vDPA shadow virtqueue. This
is needed for advanced networking features like multiqueue.

To demonstrate command handling, the VIRTIO_NET_CTRL_MAC and
VIRTIO_NET_CTRL_MQ command classes are implemented. If the vDPA device
is started with SVQ support and the virtio-net driver changes the MAC
address or the number of queue pairs, the virtio-net device model is
updated with the new value.
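
For context, each CVQ command the handler receives is split in two
parts: a device-readable one (the command header plus its payload, in
elem->out_sg) and a device-writable one (a single ack byte, in
elem->in_sg). A minimal sketch of how a MAC address change looks on the
control virtqueue, using only the layouts from the standard virtio-net
headers (the concrete MAC bytes and variable names are illustrative):

#include "qemu/osdep.h"
#include "standard-headers/linux/virtio_net.h"

/* Device-readable part: class/cmd header followed by the new MAC */
struct virtio_net_ctrl_hdr hdr = {
    .class = VIRTIO_NET_CTRL_MAC,
    .cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET,
};
uint8_t mac[6] = {0x52, 0x54, 0x00, 0x12, 0x34, 0x56};

/* Device-writable part: the device stores VIRTIO_NET_OK here on success */
virtio_net_ctrl_ack ack = VIRTIO_NET_ERR;

struct iovec out_sg[] = {
    { .iov_base = &hdr, .iov_len = sizeof(hdr) },
    { .iov_base = mac,  .iov_len = sizeof(mac) },
};
struct iovec in_sg[] = { { .iov_base = &ack, .iov_len = sizeof(ack) } };

The handler below only needs ctrl.class from the out part to decide
whether the command should be forwarded to the device model, and the
ack from the in part to know whether the device accepted it.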
Other CVQ commands could be added here straightforwardly, but they have
not been tested. Note that x-svq no longer rejects devices with a
control virtqueue outright: initialization now fails only if the device
offers feature bits outside of vdpa_svq_device_features.
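
As a hypothetical sketch (untested, as said above), letting the model
also track rx filter state would only require widening the class
filter, since virtio_net_handle_ctrl_iov() already knows how to parse
the rest of the command:

switch (ctrl.class) {
case VIRTIO_NET_CTRL_MAC:
case VIRTIO_NET_CTRL_MQ:
case VIRTIO_NET_CTRL_RX:    /* hypothetical: promisc/allmulti commands */
    break;
default:
    return;
}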
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
net/vhost-vdpa.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 77 insertions(+), 3 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 290aa01e13..a83da4616c 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -11,6 +11,7 @@
 
 #include "qemu/osdep.h"
 #include "clients.h"
+#include "hw/virtio/virtio-net.h"
 #include "net/vhost_net.h"
 #include "net/vhost-vdpa.h"
 #include "hw/virtio/vhost-vdpa.h"
@@ -69,6 +70,30 @@ const int vdpa_feature_bits[] = {
     VHOST_INVALID_FEATURE_BIT
 };
 
+/** Supported device specific feature bits with SVQ */
+static const uint64_t vdpa_svq_device_features =
+    BIT_ULL(VIRTIO_NET_F_CSUM) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_CSUM) |
+    BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS) |
+    BIT_ULL(VIRTIO_NET_F_MTU) |
+    BIT_ULL(VIRTIO_NET_F_MAC) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_TSO4) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_TSO6) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_ECN) |
+    BIT_ULL(VIRTIO_NET_F_GUEST_UFO) |
+    BIT_ULL(VIRTIO_NET_F_HOST_TSO4) |
+    BIT_ULL(VIRTIO_NET_F_HOST_TSO6) |
+    BIT_ULL(VIRTIO_NET_F_HOST_ECN) |
+    BIT_ULL(VIRTIO_NET_F_HOST_UFO) |
+    BIT_ULL(VIRTIO_NET_F_MRG_RXBUF) |
+    BIT_ULL(VIRTIO_NET_F_STATUS) |
+    BIT_ULL(VIRTIO_NET_F_CTRL_VQ) |
+    BIT_ULL(VIRTIO_NET_F_MQ) |
+    BIT_ULL(VIRTIO_F_ANY_LAYOUT) |
+    BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR) |
+    BIT_ULL(VIRTIO_NET_F_RSC_EXT) |
+    BIT_ULL(VIRTIO_NET_F_STANDBY);
+
 VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
@@ -199,6 +224,46 @@ static int vhost_vdpa_get_iova_range(int fd,
     return ret < 0 ? -errno : 0;
 }
 
+static void vhost_vdpa_net_handle_ctrl(VirtIODevice *vdev,
+                                       const VirtQueueElement *elem)
+{
+    struct virtio_net_ctrl_hdr ctrl;
+    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
+    size_t s;
+    struct iovec in = {
+        .iov_base = &status,
+        .iov_len = sizeof(status),
+    };
+
+    s = iov_to_buf(elem->out_sg, elem->out_num, 0, &ctrl, sizeof(ctrl.class));
+    if (s != sizeof(ctrl.class)) {
+        return;
+    }
+
+    switch (ctrl.class) {
+    case VIRTIO_NET_CTRL_MAC:
+    case VIRTIO_NET_CTRL_MQ:
+        break;
+    default:
+        return;
+    }
+
+    s = iov_to_buf(elem->in_sg, elem->in_num, 0, &status, sizeof(status));
+    if (s != sizeof(status) || status != VIRTIO_NET_OK) {
+        return;
+    }
+
+    status = VIRTIO_NET_ERR;
+    virtio_net_handle_ctrl_iov(vdev, &in, 1, elem->out_sg, elem->out_num);
+    if (status != VIRTIO_NET_OK) {
+        error_report("Bad CVQ processing in model");
+    }
+}
+
+static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
+    .used_elem_handler = vhost_vdpa_net_handle_ctrl,
+};
+
 static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                            const char *device,
                                            const char *name,
@@ -226,6 +291,9 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     s->vhost_vdpa.device_fd = vdpa_device_fd;
     s->vhost_vdpa.index = queue_pair_index;
     s->vhost_vdpa.shadow_vqs_enabled = svq;
+    if (!is_datapath) {
+        s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
+    }
     s->vhost_vdpa.iova_tree = iova_tree;
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
@@ -314,9 +382,15 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     }
     if (opts->x_svq) {
         struct vhost_vdpa_iova_range iova_range;
-
-        if (has_cvq) {
-            error_setg(errp, "vdpa svq does not work with cvq");
+        uint64_t invalid_dev_features =
+            features & ~vdpa_svq_device_features &
+            /* Transport features are all accepted at this point */
+            ~MAKE_64BIT_MASK(VIRTIO_TRANSPORT_F_START,
+                             VIRTIO_TRANSPORT_F_END - VIRTIO_TRANSPORT_F_START);
+
+        if (invalid_dev_features) {
+            error_setg(errp, "vdpa svq does not work with features 0x%" PRIx64,
+                       invalid_dev_features);
             goto err_svq;
         }
         vhost_vdpa_get_iova_range(vdpa_device_fd, &iova_range);
--
2.27.0