From: "Michael S. Tsirkin" <mst@redhat.com>
To: Parav Pandit <parav@nvidia.com>
Cc: "virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>,
"sridhar.samudrala@intel.com" <sridhar.samudrala@intel.com>,
"jesse.brandeburg@intel.com" <jesse.brandeburg@intel.com>,
Gavi Teitz <gavi@nvidia.com>,
"virtualization@lists.linux-foundation.org"
<virtualization@lists.linux-foundation.org>,
"stephen@networkplumber.org" <stephen@networkplumber.org>,
"loseweigh@gmail.com" <loseweigh@gmail.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"kuba@kernel.org" <kuba@kernel.org>,
"davem@davemloft.net" <davem@davemloft.net>,
Gavin Li <gavinl@nvidia.com>
Subject: Re: [PATCH v5 2/2] virtio-net: use mtu size as buffer length for big packets
Date: Wed, 7 Sep 2022 14:15:43 -0400 [thread overview]
Message-ID: <20220907141447-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <PH0PR12MB5481066A18907753997A6F0CDC419@PH0PR12MB5481.namprd12.prod.outlook.com>
On Wed, Sep 07, 2022 at 04:12:47PM +0000, Parav Pandit wrote:
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Wednesday, September 7, 2022 10:40 AM
> >
> > On Wed, Sep 07, 2022 at 02:33:02PM +0000, Parav Pandit wrote:
> > >
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: Wednesday, September 7, 2022 10:30 AM
> > >
> > > [..]
> > > > > > actually how does this waste space? Is this because your device
> > > > > > does not have INDIRECT?
> > > > > VQ is 256 entries deep.
> > > > > Driver posted total of 256 descriptors.
> > > > > Each descriptor points to a page of 4K.
> > > > > These descriptors are chained in groups of 16, i.e. 4K * 16 = 64K per buffer.
> > > >
> > > > So without indirect then? with indirect each descriptor can point to
> > > > 16 entries.
> > > >
> > > With indirect, can the driver post 256 * 16 page-sized buffers even though the VQ depth is 256 entries?
> > > I recall that the total number of descriptors, counting both direct and indirect ones, is limited to the VQ depth.
> >
> >
> > > Was there some recent clarification in the spec about this?
> >
> >
> > This would make INDIRECT completely pointless. So I don't think we ever
> > had such a limitation.
> > The only thing that comes to mind is this:
> >
> > A driver MUST NOT create a descriptor chain longer than the Queue Size of the device.
> >
> > but this limits individual chain length, not the total length of all chains.
> >
> Right.
> I double-checked in virtqueue_add_split(), which doesn't count indirect table entries towards the VQ's descriptor count.
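[To make the capacity difference concrete, here is a quick back-of-the-envelope sketch. The 256-entry VQ, 4K pages, and 16-page chains are from the example above; the arithmetic is mine.]

```python
PAGE = 4096          # page size used for big-packet buffers
QUEUE_SIZE = 256     # VQ depth from the example above
CHAIN_LEN = 16       # pages chained per 64K buffer

# Without INDIRECT: every descriptor in a chain occupies a ring slot,
# so 256 slots chained 16-deep leave room for only 16 in-flight buffers.
chains_direct = QUEUE_SIZE // CHAIN_LEN
capacity_direct = chains_direct * CHAIN_LEN * PAGE

# With INDIRECT: each ring slot points to a separate descriptor table
# whose entries do not consume ring slots, so all 256 slots can each
# carry a 16-page (64K) buffer at once.
chains_indirect = QUEUE_SIZE
capacity_indirect = chains_indirect * CHAIN_LEN * PAGE

assert capacity_direct == 1 * 1024 * 1024       # 1 MiB in flight
assert capacity_indirect == 16 * 1024 * 1024    # 16 MiB in flight
```

This is why a per-chain limit of Queue Size, rather than a total limit, is what makes INDIRECT useful at all.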
>
> With indirect descriptors, the memory usage without this patch is even worse:
> the driver allocates 64K * 256 = 16 MB of buffers per VQ, while only about 2.3 MB is actually needed (and used).
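[As a sanity check on those numbers: the 16 MB figure follows directly from 256 buffers of 64K each, and the ~2.3 MB figure is consistent with MTU-sized usage at a 9000-byte jumbo MTU. The 9000-byte MTU is my assumption; the thread only states the totals.]

```python
KB = 1024
QUEUE_SIZE = 256

# Before the patch: each receive buffer is sized for the 64K maximum
# GSO packet, regardless of the configured MTU.
total_allocated = QUEUE_SIZE * 64 * KB          # 16 MiB per VQ

# Actual usage at an assumed 9000-byte jumbo MTU: only about MTU bytes
# of each 64K buffer are ever filled.
mtu = 9000
used = QUEUE_SIZE * mtu                         # 2,304,000 bytes

assert total_allocated == 16 * KB * KB          # the "16MB" in the thread
assert round(used / 1e6, 1) == 2.3              # the "2.3 Mbytes" in the thread
```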
Yes. So, just so we understand: is the reason for the performance
improvement the memory usage? Or is it that the device does not
have INDIRECT?
Thanks,
--
MST