From: "Michael S. Tsirkin" <mst@redhat.com>
To: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, Jason Wang <jasowang@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Subject: Re: [PATCH v2 3/3] virtio-net: enable virtio indirect cache
Date: Mon, 1 Nov 2021 04:33:38 -0400 [thread overview]
Message-ID: <20211101043229-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20211028104919.3393-4-xuanzhuo@linux.alibaba.com>
On Thu, Oct 28, 2021 at 06:49:19PM +0800, Xuan Zhuo wrote:
> If VIRTIO_RING_F_INDIRECT_DESC negotiation succeeds and the number of
> sgs used for sending packets is greater than 1, we must constantly
> call __kmalloc/kfree to allocate/release indirect descs.
>
> In the case of extremely fast packet transmission, this overhead
> cannot be ignored:
>
> 27.46% [kernel] [k] virtqueue_add
> 16.66% [kernel] [k] detach_buf_split
> 16.51% [kernel] [k] virtnet_xsk_xmit
> 14.04% [kernel] [k] virtqueue_add_outbuf
> 5.18% [kernel] [k] __kmalloc
> 4.08% [kernel] [k] kfree
> 2.80% [kernel] [k] virtqueue_get_buf_ctx
> 2.22% [kernel] [k] xsk_tx_peek_desc
> 2.08% [kernel] [k] memset_erms
> 0.83% [kernel] [k] virtqueue_kick_prepare
> 0.76% [kernel] [k] virtnet_xsk_run
> 0.62% [kernel] [k] __free_old_xmit_ptr
> 0.60% [kernel] [k] vring_map_one_sg
> 0.53% [kernel] [k] native_apic_mem_write
> 0.46% [kernel] [k] sg_next
> 0.43% [kernel] [k] sg_init_table
> 0.41% [kernel] [k] kmalloc_slab
>
> With the indirect desc cache enabled, virtio-net gets a ~16%
> performance improvement over not caching.
>
> In the test case, the CPU sending packets is at 100% utilization.
> The following are the PPS for the two cases:
>
> indirect desc cache | no cache
> 3074658 | 2685132
> 3111866 | 2666118
> 3152527 | 2653632
> 3125867 | 2669820
> 3027147 | 2644464
> 3069211 | 2669777
> 3038522 | 2675645
> 3034507 | 2671302
> 3102257 | 2685504
> 3083712 | 2692800
> 3051771 | 2676928
> 3080684 | 2695040
> 3147816 | 2720876
> 3123887 | 2705492
> 3180963 | 2699520
> 3191579 | 2676480
> 3161670 | 2686272
> 3189768 | 2692588
> 3174272 | 2686692
> 3143434 | 2682416
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
> drivers/net/virtio_net.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 4ad25a8b0870..e1ade176ab46 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -31,6 +31,13 @@ module_param(csum, bool, 0444);
> module_param(gso, bool, 0444);
> module_param(napi_tx, bool, 0644);
>
> +/* The virtio desc cache increases memory overhead, so users can turn it
> + * off or select an acceptable value. The maximum value is 2 + MAX_SKB_FRAGS.
> + */
Maybe add code to validate it and cap it at acceptable values then.
> +static u32 virtio_desc_cache_thr = 4;
Wouldn't something like CACHE_LINE_SIZE make more sense here?
> +module_param(virtio_desc_cache_thr, uint, 0644);
> +
> /* FIXME: MTU in config. */
> #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
> #define GOOD_COPY_LEN 128
> @@ -3214,6 +3221,11 @@ static int virtnet_probe(struct virtio_device *vdev)
> vi->curr_queue_pairs = num_online_cpus();
> vi->max_queue_pairs = max_queue_pairs;
>
> + if (virtio_desc_cache_thr > 2 + MAX_SKB_FRAGS)
> + virtio_set_desc_cache(vdev, 2 + MAX_SKB_FRAGS);
> + else
> + virtio_set_desc_cache(vdev, virtio_desc_cache_thr);
> +
> /* Allocate/initialize the rx/tx queues, and invoke find_vqs */
> err = init_vqs(vi);
> if (err)
> --
> 2.31.0
Thread overview: 9+ messages
2021-10-28 10:49 [PATCH v2 0/3] virtio support cache indirect desc Xuan Zhuo
2021-10-28 10:49 ` [PATCH v2 1/3] virtio: cache indirect desc for split Xuan Zhuo
2021-10-29 2:20 ` Jason Wang
2021-10-31 14:46 ` Michael S. Tsirkin
2021-11-01 8:35 ` Michael S. Tsirkin
2021-10-28 10:49 ` [PATCH v2 2/3] virtio: cache indirect desc for packed Xuan Zhuo
2021-10-28 10:49 ` [PATCH v2 3/3] virtio-net: enable virtio indirect cache Xuan Zhuo
2021-10-29 1:19 ` kernel test robot
2021-11-01 8:33 ` Michael S. Tsirkin [this message]