From: "Michael S. Tsirkin" <mst@redhat.com>
To: Wei Wang <wei.w.wang@intel.com>
Cc: jasowang@redhat.com, stefanha@gmail.com,
marcandre.lureau@gmail.com, pbonzini@redhat.com,
virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org,
jan.scheurich@ericsson.com, eblake@redhat.com, armbru@redhat.com
Subject: Re: [Qemu-devel] [PATCH v1] virtio-net: enable configurable tx queue size
Date: Mon, 5 Jun 2017 18:38:28 +0300
Message-ID: <20170605182514-mutt-send-email-mst@kernel.org>
In-Reply-To: <1496653049-44530-1-git-send-email-wei.w.wang@intel.com>
On Mon, Jun 05, 2017 at 04:57:29PM +0800, Wei Wang wrote:
> This patch makes the virtio-net tx queue size user-configurable in the
> range 256 to 1024. The queue size specified by the user must be a
> power of 2. If "tx_queue_size" is not given, the default queue size
> of 1024 will be used.
>
> For the traditional QEMU backend, setting the tx queue size to 1024
> requires the guest virtio driver to support the VIRTIO_F_MAX_CHAIN_SIZE
> feature. This feature keeps the guest driver from chaining 1024 vring
> descriptors, which could otherwise cause the device-side implementation
> to pass more than 1024 iovec entries to writev().
>
> VIRTIO_F_MAX_CHAIN_SIZE is a common transport feature added for all
> virtio devices. However, each device is free to choose its own max
> chain size to limit how many vring descriptors its driver may chain.
> Currently, the max chain size of the virtio-net device is set to 1023.
>
> In the case that the tx queue size is set to 1024 and the
> VIRTIO_F_MAX_CHAIN_SIZE feature is not supported by the guest driver,
> the tx queue size will be reconfigured to 512.
I'd like to see the reverse. Start with the current default.
If VIRTIO_F_MAX_CHAIN_SIZE is negotiated, increase the queue size.
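E.g. something like the below (an untested sketch;
virtio_net_grow_tx_queue_size() is a hypothetical helper, the mirror
image of your virtio_net_reconfig_tx_queue_size()):

static void virtio_net_grow_tx_queue_size(VirtIONet *n)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(n);
    int i, num_queues = virtio_get_num_queues(vdev);

    for (i = 0; i < num_queues; i++) {
        if (is_tx(i)) {
            /* grow each tx vq from the 256 default to the configured size */
            virtio_queue_set_num(vdev, i, n->net_conf.tx_queue_size);
        }
    }
}

/* then, in virtio_net_set_features(): */
if (!get_vhost_net(nc->peer) &&
    virtio_has_feature(features, VIRTIO_F_MAX_CHAIN_SIZE)) {
    virtio_net_grow_tx_queue_size(n);
}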
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
The text below should come after the --- separator.
> RFC to v1 changes:
> 1) change VIRTIO_F_MAX_CHAIN_SIZE to be a common virtio feature (was
> virtio-net specific);
> 2) change the default tx queue size to be 1024 (was 256);
> 3) For the vhost backend case, setting the tx queue size to 1024 doesn't
> require VIRTIO_F_MAX_CHAIN_SIZE feature support.
> ---
> hw/net/virtio-net.c | 69 ++++++++++++++++++++++++--
> include/hw/virtio/virtio-net.h | 1 +
> include/hw/virtio/virtio.h | 2 +
> include/standard-headers/linux/virtio_config.h | 3 ++
> include/standard-headers/linux/virtio_net.h | 2 +
> 5 files changed, 73 insertions(+), 4 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 7d091c9..5c82f54 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -33,8 +33,13 @@
>
> /* previously fixed value */
> #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> +#define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 1024
> +
> /* for now, only allow larger queues; with virtio-1, guest can downsize */
> #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
> +#define VIRTIO_NET_TX_QUEUE_MIN_SIZE 256
> +
> +#define VIRTIO_NET_MAX_CHAIN_SIZE 1023
>
> /*
> * Calculate the number of bytes up to and including the given 'field' of
> @@ -57,6 +62,8 @@ static VirtIOFeature feature_sizes[] = {
> .end = endof(struct virtio_net_config, max_virtqueue_pairs)},
> {.flags = 1 << VIRTIO_NET_F_MTU,
> .end = endof(struct virtio_net_config, mtu)},
> + {.flags = 1 << VIRTIO_F_MAX_CHAIN_SIZE,
> + .end = endof(struct virtio_net_config, max_chain_size)},
> {}
> };
>
You are using a generic VIRTIO_F_MAX_CHAIN_SIZE flag, so this field
should go into a generic virtio config section, not into virtio_net_config.
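Purely illustrative (no such struct exists today), but the field would
then live somewhere device-independent, e.g.:

/* illustrative sketch only: a transport-level config field that any
 * device type could expose, instead of struct virtio_net_config */
struct virtio_common_cfg_ext {
    uint16_t max_chain_size;  /* max vring descriptors per chain */
};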
> @@ -84,6 +91,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
> virtio_stw_p(vdev, &netcfg.status, n->status);
> virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
> virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
> + virtio_stw_p(vdev, &netcfg.max_chain_size, VIRTIO_NET_MAX_CHAIN_SIZE);
> memcpy(netcfg.mac, n->mac, ETH_ALEN);
> memcpy(config, &netcfg, n->config_size);
> }
> @@ -635,9 +643,33 @@ static inline uint64_t virtio_net_supported_guest_offloads(VirtIONet *n)
> return virtio_net_guest_offloads_by_features(vdev->guest_features);
> }
>
> +static bool is_tx(int queue_index)
> +{
> + return queue_index % 2 == 1;
> +}
> +
> +static void virtio_net_reconfig_tx_queue_size(VirtIONet *n)
> +{
> + VirtIODevice *vdev = VIRTIO_DEVICE(n);
> + int i, num_queues = virtio_get_num_queues(vdev);
> +
> + /* Return when the queue size is already less than 1024 */
> + if (n->net_conf.tx_queue_size < VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE) {
> + return;
> + }
> +
> + for (i = 0; i < num_queues; i++) {
> + if (is_tx(i)) {
> + n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE / 2;
> + virtio_queue_set_num(vdev, i, n->net_conf.tx_queue_size);
> + }
> + }
> +}
> +
> static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
> {
> VirtIONet *n = VIRTIO_NET(vdev);
> + NetClientState *nc = qemu_get_queue(n->nic);
> int i;
>
> virtio_net_set_multiqueue(n,
> @@ -649,6 +681,16 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
> virtio_has_feature(features,
> VIRTIO_F_VERSION_1));
>
> + /*
> + * When the traditional QEMU backend is used, a tx queue size of 1024
> + * requires the driver to support the VIRTIO_F_MAX_CHAIN_SIZE feature.
> + * If the feature is not supported, reconfigure the tx queue size to 512.
> + */
> + if (!get_vhost_net(nc->peer) &&
> + !virtio_has_feature(features, VIRTIO_F_MAX_CHAIN_SIZE)) {
> + virtio_net_reconfig_tx_queue_size(n);
> + }
> +
> if (n->has_vnet_hdr) {
> n->curr_guest_offloads =
> virtio_net_guest_offloads_by_features(features);
> @@ -1297,8 +1339,8 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>
> out_num = elem->out_num;
> out_sg = elem->out_sg;
> - if (out_num < 1) {
> - virtio_error(vdev, "virtio-net header not in first element");
> + if (out_num < 1 || out_num > VIRTIO_NET_MAX_CHAIN_SIZE) {
> + virtio_error(vdev, "no packet or too large vring desc chain");
> virtqueue_detach_element(q->tx_vq, elem, 0);
> g_free(elem);
> return -EINVAL;
What about the rx vq? We need to limit that as well, do we not?
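Something along these lines on the receive path, mirroring the tx check
(untested sketch; exact placement and error handling are illustrative):

/* e.g. in virtio_net_receive(), after virtqueue_pop() on the rx vq */
if (elem->in_num < 1 || elem->in_num > VIRTIO_NET_MAX_CHAIN_SIZE) {
    virtio_error(vdev, "rx desc chain too long");
    virtqueue_detach_element(q->rx_vq, elem, 0);
    g_free(elem);
    return -EINVAL;
}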
> @@ -1496,13 +1538,15 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
> virtio_net_handle_rx);
> if (n->net_conf.tx && !strcmp(n->net_conf.tx, "timer")) {
> n->vqs[index].tx_vq =
> - virtio_add_queue(vdev, 256, virtio_net_handle_tx_timer);
> + virtio_add_queue(vdev, n->net_conf.tx_queue_size,
> + virtio_net_handle_tx_timer);
> n->vqs[index].tx_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> virtio_net_tx_timer,
> &n->vqs[index]);
> } else {
> n->vqs[index].tx_vq =
> - virtio_add_queue(vdev, 256, virtio_net_handle_tx_bh);
> + virtio_add_queue(vdev, n->net_conf.tx_queue_size,
> + virtio_net_handle_tx_bh);
> n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
> }
>
> @@ -1891,6 +1935,10 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> n->host_features |= (0x1 << VIRTIO_NET_F_MTU);
> }
>
> + if (virtio_host_has_feature(vdev, VIRTIO_F_MAX_CHAIN_SIZE)) {
> + n->host_features |= (0x1 << VIRTIO_F_MAX_CHAIN_SIZE);
> + }
> +
> virtio_net_set_config_size(n, n->host_features);
> virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size);
>
> @@ -1910,6 +1958,17 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> return;
> }
>
> + if (n->net_conf.tx_queue_size < VIRTIO_NET_TX_QUEUE_MIN_SIZE ||
> + n->net_conf.tx_queue_size > VIRTQUEUE_MAX_SIZE ||
> + (n->net_conf.tx_queue_size & (n->net_conf.tx_queue_size - 1))) {
Please use is_power_of_2().
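i.e. (is_power_of_2() comes from qemu/host-utils.h):

if (n->net_conf.tx_queue_size < VIRTIO_NET_TX_QUEUE_MIN_SIZE ||
    n->net_conf.tx_queue_size > VIRTQUEUE_MAX_SIZE ||
    !is_power_of_2(n->net_conf.tx_queue_size)) {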
> + error_setg(errp, "Invalid tx_queue_size (= %" PRIu16 "), "
> + "must be a power of 2 between %d and %d.",
> + n->net_conf.tx_queue_size, VIRTIO_NET_TX_QUEUE_MIN_SIZE,
> + VIRTQUEUE_MAX_SIZE);
I think management will need a way to discover the limits for this
value; I'm not sure how. Cc'ing the QAPI maintainers.
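For reference, management can already enumerate the property itself with
the existing device-list-properties QMP command:

{ "execute": "device-list-properties",
  "arguments": { "typename": "virtio-net-pci" } }

but the reply only carries the property name and type (uint16), not the
256..1024 power-of-2 constraint, so some new mechanism may be needed.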
> + virtio_cleanup(vdev);
> + return;
> + }
> +
> n->max_queues = MAX(n->nic_conf.peers.queues, 1);
> if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
> error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
> @@ -2089,6 +2148,8 @@ static Property virtio_net_properties[] = {
> DEFINE_PROP_STRING("tx", VirtIONet, net_conf.tx),
> DEFINE_PROP_UINT16("rx_queue_size", VirtIONet, net_conf.rx_queue_size,
> VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE),
> + DEFINE_PROP_UINT16("tx_queue_size", VirtIONet, net_conf.tx_queue_size,
> + VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE),
> DEFINE_PROP_UINT16("host_mtu", VirtIONet, net_conf.mtu, 0),
> DEFINE_PROP_END_OF_LIST(),
> };
> diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
> index 1eec9a2..fd944ba 100644
> --- a/include/hw/virtio/virtio-net.h
> +++ b/include/hw/virtio/virtio-net.h
> @@ -36,6 +36,7 @@ typedef struct virtio_net_conf
> int32_t txburst;
> char *tx;
> uint16_t rx_queue_size;
> + uint16_t tx_queue_size;
> uint16_t mtu;
> } virtio_net_conf;
>
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 7b6edba..8e85e41 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -260,6 +260,8 @@ typedef struct VirtIORNGConf VirtIORNGConf;
> VIRTIO_F_NOTIFY_ON_EMPTY, true), \
> DEFINE_PROP_BIT64("any_layout", _state, _field, \
> VIRTIO_F_ANY_LAYOUT, true), \
> + DEFINE_PROP_BIT64("max_chain_size", _state, _field, \
> + VIRTIO_F_MAX_CHAIN_SIZE, true), \
> DEFINE_PROP_BIT64("iommu_platform", _state, _field, \
> VIRTIO_F_IOMMU_PLATFORM, false)
>
> diff --git a/include/standard-headers/linux/virtio_config.h b/include/standard-headers/linux/virtio_config.h
> index b777069..b70cbfe 100644
> --- a/include/standard-headers/linux/virtio_config.h
> +++ b/include/standard-headers/linux/virtio_config.h
> @@ -60,6 +60,9 @@
> #define VIRTIO_F_ANY_LAYOUT 27
> #endif /* VIRTIO_CONFIG_NO_LEGACY */
>
> +/* Guest chains vring descriptors within a limit */
> +#define VIRTIO_F_MAX_CHAIN_SIZE 31
> +
Please use a flag in the high 32 bits.
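E.g. (the bit number below is only illustrative -- any free bit above 31
would do, and DEFINE_PROP_BIT64 already copes with bits >= 32):

#define VIRTIO_F_MAX_CHAIN_SIZE 34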
> /* v1.0 compliant. */
> #define VIRTIO_F_VERSION_1 32
>
> diff --git a/include/standard-headers/linux/virtio_net.h b/include/standard-headers/linux/virtio_net.h
> index 30ff249..729aaa8 100644
> --- a/include/standard-headers/linux/virtio_net.h
> +++ b/include/standard-headers/linux/virtio_net.h
> @@ -76,6 +76,8 @@ struct virtio_net_config {
> uint16_t max_virtqueue_pairs;
> /* Default maximum transmit unit advice */
> uint16_t mtu;
> + /* Maximum number of vring descriptors that can be chained */
> + uint16_t max_chain_size;
> } QEMU_PACKED;
>
> /*
> --
> 2.7.4