From: "Michael S. Tsirkin" <mst@redhat.com>
To: Parav Pandit <parav@nvidia.com>
Cc: "virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>,
"sridhar.samudrala@intel.com" <sridhar.samudrala@intel.com>,
"jesse.brandeburg@intel.com" <jesse.brandeburg@intel.com>,
Gavi Teitz <gavi@nvidia.com>,
"virtualization@lists.linux-foundation.org"
<virtualization@lists.linux-foundation.org>,
"stephen@networkplumber.org" <stephen@networkplumber.org>,
"loseweigh@gmail.com" <loseweigh@gmail.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"kuba@kernel.org" <kuba@kernel.org>,
"davem@davemloft.net" <davem@davemloft.net>,
Gavin Li <gavinl@nvidia.com>
Subject: Re: [PATCH v5 2/2] virtio-net: use mtu size as buffer length for big packets
Date: Wed, 7 Sep 2022 10:29:50 -0400 [thread overview]
Message-ID: <20220907101335-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <PH0PR12MB54812EC7F4711C1EA4CAA119DC419@PH0PR12MB5481.namprd12.prod.outlook.com>
On Wed, Sep 07, 2022 at 02:08:18PM +0000, Parav Pandit wrote:
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, September 7, 2022 5:27 AM
> >
> > On Wed, Sep 07, 2022 at 04:08:54PM +0800, Gavin Li wrote:
> > >
> > > On 9/7/2022 1:31 PM, Michael S. Tsirkin wrote:
> > > > External email: Use caution opening links or attachments
> > > >
> > > >
> > > > On Thu, Sep 01, 2022 at 05:10:38AM +0300, Gavin Li wrote:
> > > > > Currently add_recvbuf_big() allocates MAX_SKB_FRAGS segments for
> > > > > big packets even when GUEST_* offloads are not present on the
> > > > > device.
> > > > > However, if guest GSO is not supported, it would be sufficient to
> > > > > allocate segments to cover just up to the MTU size and no further.
> > > > > Allocating the maximum amount of segments results in a large waste
> > > > > of buffer space in the queue, which limits the number of packets
> > > > > that can be buffered and can result in reduced performance.
> >
> > Actually, how does this waste space? Is this because your device does not
> > have INDIRECT?
> VQ is 256 entries deep.
> Driver posted total of 256 descriptors.
> Each descriptor points to a page of 4K.
> These descriptors are chained as 4K * 16.
So without INDIRECT then? With INDIRECT, each descriptor can
point to 16 entries.
> So total packets that can be serviced are 256/16 = 16.
> So effective queue depth = 16.
>
> So, when GSO is off, for a 9K MTU the packet buffer needs only 3 pages (12K).
> So 13 descriptors (13 x 4K = 52K) per packet buffer are wasted.
>
> After this improvement, those 13 descriptors become available, increasing the effective queue depth to 256/3 = 85.
>
> [..]
> > > > >
> > > > > MTU (Bytes)   Bandwidth before (Gbit/s)   Bandwidth after (Gbit/s)
> > > > > 1500          22.5                        22.4
> > > > > 9000          12.8                        25.9
> >
> >
> > is this buffer space?
> The performance numbers above show the improvement in bandwidth, in Gbit/s.
>
> > just the overhead of allocating/freeing the buffers?
> > of using INDIRECT?
> The effective queue depth is so small that the device cannot receive all the packets at the given bandwidth-delay product.
>
> > > >
> > > > Which configurations were tested?
> > > I tested it with DPDK vDPA + qemu vhost. Do you mean the feature set
> > > of the VM?
> >
> The configuration of interest is the MTU, not the backend.
> The different MTUs are shown in the perf numbers above.
>
> > > > Did you test devices without VIRTIO_NET_F_MTU ?
> > > No. It will need code changes.
> No. It doesn't need any code changes. This is misleading/vague.
>
> This patch doesn't have any relation to a device which doesn't offer VIRTIO_NET_F_MTU.
> Only the code restructuring touches this area, which may require running some existing tests.
> I assume virtio tree will have some automation tests for such a device?
I have some automated tests but I also expect developer to do testing.
> > > > >
> > > > > @@ -3853,12 +3866,10 @@ static int virtnet_probe(struct
> > > > > virtio_device *vdev)
> > > > >
> > > > > dev->mtu = mtu;
> > > > > dev->max_mtu = mtu;
> > > > > -
> > > > > - /* TODO: size buffers correctly in this case. */
> > > > > - if (dev->mtu > ETH_DATA_LEN)
> > > > > - vi->big_packets = true;
> > > > > }
> > > > >
> > > > > + virtnet_set_big_packets_fields(vi, mtu);
> > > > > +
> > > > If VIRTIO_NET_F_MTU is off, then mtu is uninitialized.
> > > > You should move it to within if () above to fix.
> > > mtu was initialized to 0 at the beginning of probe if VIRTIO_NET_F_MTU
> > > is off.
> > >
> > > In this case, big_packets_num_skbfrags will be set according to guest gso.
> > >
> > > If guest GSO is supported, it will be set to MAX_SKB_FRAGS, else to
> > > zero. Do you think this is a bug to be fixed?
> >
> >
> > Yes, I think with no MTU this should behave as it did historically.
> >
> Michael is right.
> It should behave as today. There is no new bug introduced by this patch.
> dev->mtu and dev->max_mtu are set only when VIRTIO_NET_F_MTU is offered, with or without this patch.
>
> Please have mtu related fix/change in different patch.
>
> > > >
> > > > > if (vi->any_header_sg)
> > > > > dev->needed_headroom = vi->hdr_len;
> > > > >
> > > > > --
> > > > > 2.31.1
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
Thread overview: 29+ messages
[not found] <20220901021038.84751-1-gavinl@nvidia.com>
[not found] ` <20220901021038.84751-3-gavinl@nvidia.com>
2022-09-07 2:17 ` [virtio-dev] [PATCH v5 2/2] virtio-net: use mtu size as buffer length for big packets Jason Wang
2022-09-07 5:31 ` Michael S. Tsirkin
[not found] ` <0355d1e4-a3cf-5b16-8c7f-b39b1ec14ade@nvidia.com>
2022-09-07 9:26 ` Michael S. Tsirkin
2022-09-07 14:08 ` Parav Pandit via Virtualization
2022-09-07 14:29 ` Michael S. Tsirkin [this message]
2022-09-07 14:33 ` Parav Pandit via Virtualization
2022-09-07 14:40 ` Michael S. Tsirkin
2022-09-07 16:12 ` Parav Pandit via Virtualization
2022-09-07 18:15 ` Michael S. Tsirkin
2022-09-07 19:06 ` Parav Pandit via Virtualization
2022-09-07 19:11 ` Michael S. Tsirkin
2022-09-07 19:18 ` Parav Pandit via Virtualization
2022-09-07 19:23 ` Michael S. Tsirkin
2022-09-07 19:27 ` Parav Pandit via Virtualization
2022-09-07 19:36 ` Michael S. Tsirkin
2022-09-07 19:37 ` Michael S. Tsirkin
2022-09-07 19:54 ` Parav Pandit via Virtualization
2022-09-07 19:51 ` Parav Pandit via Virtualization
2022-09-07 21:39 ` [virtio-dev] " Si-Wei Liu
2022-09-07 22:11 ` Parav Pandit via Virtualization
2022-09-07 22:57 ` Si-Wei Liu
2022-09-22 9:26 ` Michael S. Tsirkin
2022-09-22 10:07 ` Parav Pandit via Virtualization
2022-09-07 20:04 ` Parav Pandit via Virtualization
2022-09-22 9:35 ` Michael S. Tsirkin
2022-09-22 10:04 ` Parav Pandit via Virtualization
2022-09-22 10:14 ` Michael S. Tsirkin
2022-09-22 10:29 ` Parav Pandit via Virtualization
[not found] ` <20220922053458.66f31136@kernel.org>
2022-10-05 10:29 ` Parav Pandit via Virtualization