From: Vlad Yasevich <vyasevic@redhat.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
mst@redhat.com, aconole@redhat.com, jasowang@redhat.com,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v2] virtio_net: Bypass backends for MTU feature negotiation
Date: Tue, 23 May 2017 09:11:43 -0400 [thread overview]
Message-ID: <54836ca8-7cf3-6f27-f519-a32fc3e7a5d4@redhat.com> (raw)
In-Reply-To: <20170523123119.7414-1-maxime.coquelin@redhat.com>
On 05/23/2017 08:31 AM, Maxime Coquelin wrote:
> This patch adds a new internal "x-mtu-bypass-backend" property
> to bypass backends for MTU feature negotiation.
>
> When this property is set, the MTU feature is negotiated as long
> as it is supported by the guest and an MTU value has been set via
> the host_mtu parameter. If the backend also advertises the feature
> (e.g. DPDK's vhost-user backend), the feature negotiation is
> propagated down to the backend.
>
> When this property is not set, the backend has to support the MTU
> feature for its negotiation to succeed.
>
> For compatibility purposes, this property is disabled for machine
> types v2.9 and older.
>
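A minimal example of how the new default behaviour can be exercised with
the vhost-net kernel backend (illustrative only; the tap options and the
machine type below are assumptions on my side, not taken from the patch):

  qemu-system-x86_64 -M q35 \
      -netdev tap,id=net0,vhost=on,script=no,downscript=no \
      -device virtio-net-pci,netdev=net0,host_mtu=2000

With the default machine type the guest is offered VIRTIO_NET_F_MTU and
picks up an MTU of 2000 even though the kernel vhost backend does not
advertise the feature; with -M pc-q35-2.9 (or with
x-mtu-bypass-backend=off on the device) the old behaviour is kept and the
guest stays at its default MTU of 1500.
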
> Cc: Aaron Conole <aconole@redhat.com>
> Suggested-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Vlad Yasevich <vyasevic@redhat.com>
-vlad
> ---
>
> Tests performed:
> - Vhost-net kernel backend, host_mtu=2000:
>   * default machine type: guest MTU = 2000
>   * pc-q35-2.9 machine type: guest MTU = 1500
>
> - Vhost-user backend, host_mtu=2000:
>   * DPDK v17.05 (MTU feature supported on backend side)
>     - default machine type: guest MTU = 2000
>     - pc-q35-2.9 machine type: guest MTU = 2000
>   * DPDK v16.11 (MTU feature not supported on backend side)
>     - default machine type: guest MTU = 2000
>     - pc-q35-2.9 machine type: guest MTU = 1500
>
> hw/net/virtio-net.c | 17 ++++++++++++++++-
> include/hw/compat.h | 4 ++++
> include/hw/virtio/virtio-net.h | 1 +
> include/hw/virtio/virtio.h | 1 +
> 4 files changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 7d091c9..39c336e 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -589,7 +589,15 @@ static uint64_t virtio_net_get_features(VirtIODevice *vdev, uint64_t features,
> if (!get_vhost_net(nc->peer)) {
> return features;
> }
> - return vhost_net_get_features(get_vhost_net(nc->peer), features);
> + features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> + vdev->backend_features = features;
> +
> + if (n->mtu_bypass_backend &&
> + (n->host_features & 1ULL << VIRTIO_NET_F_MTU)) {
> + features |= (1ULL << VIRTIO_NET_F_MTU);
> + }
> +
> + return features;
> }
>
> static uint64_t virtio_net_bad_features(VirtIODevice *vdev)
> @@ -640,6 +648,11 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
> VirtIONet *n = VIRTIO_NET(vdev);
> int i;
>
> + if (n->mtu_bypass_backend &&
> + !virtio_has_feature(vdev->backend_features, VIRTIO_NET_F_MTU)) {
> + features &= ~(1ULL << VIRTIO_NET_F_MTU);
> + }
> +
> virtio_net_set_multiqueue(n,
> virtio_has_feature(features, VIRTIO_NET_F_MQ));
>
> @@ -2090,6 +2103,8 @@ static Property virtio_net_properties[] = {
> DEFINE_PROP_UINT16("rx_queue_size", VirtIONet, net_conf.rx_queue_size,
> VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE),
> DEFINE_PROP_UINT16("host_mtu", VirtIONet, net_conf.mtu, 0),
> + DEFINE_PROP_BOOL("x-mtu-bypass-backend", VirtIONet, mtu_bypass_backend,
> + true),
> DEFINE_PROP_END_OF_LIST(),
> };
>
> diff --git a/include/hw/compat.h b/include/hw/compat.h
> index 55b1765..181f450 100644
> --- a/include/hw/compat.h
> +++ b/include/hw/compat.h
> @@ -6,6 +6,10 @@
> .driver = "pci-bridge",\
> .property = "shpc",\
> .value = "off",\
> + },{\
> + .driver = "virtio-net-device",\
> + .property = "x-mtu-bypass-backend",\
> + .value = "off",\
> },
>
> #define HW_COMPAT_2_8 \
> diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
> index 1eec9a2..602b486 100644
> --- a/include/hw/virtio/virtio-net.h
> +++ b/include/hw/virtio/virtio-net.h
> @@ -97,6 +97,7 @@ typedef struct VirtIONet {
> QEMUTimer *announce_timer;
> int announce_counter;
> bool needs_vnet_hdr_swap;
> + bool mtu_bypass_backend;
> } VirtIONet;
>
> void virtio_net_set_netclient_name(VirtIONet *n, const char *name,
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 7b6edba..80c45c3 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -79,6 +79,7 @@ struct VirtIODevice
> uint16_t queue_sel;
> uint64_t guest_features;
> uint64_t host_features;
> + uint64_t backend_features;
> size_t config_len;
> void *config;
> uint16_t config_vector;
>
Thread overview: 6+ messages
2017-05-20 8:06 [Qemu-devel] [PATCH] vhost_net: do not expose MTU feature bit to kernel backend Maxime Coquelin
2017-05-20 11:43 ` Aaron Conole
2017-05-22 17:24 ` Michael S. Tsirkin
2017-05-23 9:39 ` Maxime Coquelin
2017-05-23 12:31 ` [Qemu-devel] [PATCH v2] virtio_net: Bypass backends for MTU feature negotiation Maxime Coquelin
2017-05-23 13:11 ` Vlad Yasevich [this message]