From: "Michael S. Tsirkin" <mst@redhat.com>
To: Parav Pandit <parav@nvidia.com>
Cc: "virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>,
"sridhar.samudrala@intel.com" <sridhar.samudrala@intel.com>,
"jesse.brandeburg@intel.com" <jesse.brandeburg@intel.com>,
Gavi Teitz <gavi@nvidia.com>,
"virtualization@lists.linux-foundation.org"
<virtualization@lists.linux-foundation.org>,
"stephen@networkplumber.org" <stephen@networkplumber.org>,
"loseweigh@gmail.com" <loseweigh@gmail.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"kuba@kernel.org" <kuba@kernel.org>,
"davem@davemloft.net" <davem@davemloft.net>,
Gavin Li <gavinl@nvidia.com>
Subject: Re: [PATCH v5 2/2] virtio-net: use mtu size as buffer length for big packets
Date: Wed, 7 Sep 2022 15:23:57 -0400 [thread overview]
Message-ID: <20220907152156-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <PH0PR12MB54811F1234CB7822F47DD1B9DC419@PH0PR12MB5481.namprd12.prod.outlook.com>
On Wed, Sep 07, 2022 at 07:18:06PM +0000, Parav Pandit wrote:
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, September 7, 2022 3:12 PM
>
> > > Because the queue is shallow, only 16 entries deep.
> >
> > but why is the queue just 16 entries?
> I explained the calculation behind the 16 entries in [1].
>
> [1] https://lore.kernel.org/netdev/PH0PR12MB54812EC7F4711C1EA4CAA119DC419@PH0PR12MB5481.namprd12.prod.outlook.com/
>
> > does the device not support indirect?
> >
> Yes, the indirect feature bit is disabled on the device.
OK that explains it.
> > because with indirect you get 256 entries, with 16 s/g each.
> >
> Sure. I explained below that indirect comes with a 7x memory cost, which is not desired.
> (Ignoring the indirect table allocation cost and the extra latency.)
Oh sure, it's a waste. I wonder what effect the patch has
on bandwidth with indirect enabled, though.
> Hence we don't want to enable indirect in this scenario.
> This optimization also works with indirect, using a smaller indirect table.
>
> >
> > > With the driver's turnaround time to repost buffers, the device sits idle
> > > without any RQ buffers.
> > > With this improvement, the device has 85 buffers instead of 16 to receive
> > > packets.
> > >
> > > Enabling indirect in the device can help, at the cost of 7x higher memory
> > > per VQ in the guest VM.
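The buffer-count arithmetic discussed in this thread (16 posted buffers before the patch vs. 85 after) can be sketched as follows. This is an illustration only, not the driver's actual code; QUEUE_SIZE = 256, PAGE_SIZE = 4096, and the 9000-byte MTU are assumed example values consistent with the numbers quoted above.

```python
QUEUE_SIZE = 256   # assumed RQ depth, no indirect descriptors
PAGE_SIZE = 4096   # assumed guest page size

def posted_buffers(buf_bytes):
    # Each receive buffer is a chain of page-sized descriptors,
    # so it consumes ceil(buf_bytes / PAGE_SIZE) queue entries.
    descs_per_buf = -(-buf_bytes // PAGE_SIZE)  # ceiling division
    return QUEUE_SIZE // descs_per_buf

# Pre-patch big-packet mode posts ~64K buffers: 16 descriptors each,
# so only 16 buffers fit in the queue.
print(posted_buffers(65536))  # -> 16

# With MTU-sized (9K) buffers: 3 descriptors each, so 85 buffers fit.
print(posted_buffers(9000))   # -> 85
```

With indirect descriptors each buffer would consume a single queue entry instead, at the cost of an extra descriptor table allocation per buffer, which is the 7x memory trade-off discussed above.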