From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Wang, Wei W" <wei.w.wang@intel.com>,
"Stefan Hajnoczi" <stefanha@gmail.com>,
"Marc-André Lureau" <marcandre.lureau@gmail.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"Jan Scheurich" <jan.scheurich@ericsson.com>
Subject: Re: [Qemu-devel] virtio-net: configurable TX queue size
Date: Fri, 5 May 2017 23:36:33 +0300
Message-ID: <20170505233541-mutt-send-email-mst@kernel.org>
In-Reply-To: <056500d7-6a91-12e5-be1d-2b2beebd0430@redhat.com>
On Fri, May 05, 2017 at 10:27:13AM +0800, Jason Wang wrote:
>
>
> On 2017-05-04 18:58, Wang, Wei W wrote:
> > Hi,
> >
> > I want to re-open a discussion left off a long time ago:
> > https://lists.gnu.org/archive/html/qemu-devel/2015-11/msg06194.html
> > and discuss the possibility of changing the hardcoded TX queue size
> > (256) to be configurable between 256 and 1024.
>
> Yes, I think we probably need this.
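For concreteness, the validation such a knob would need is small.  A
runnable sketch (the bounds follow the proposal above; none of the names
here are a final interface):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical check for a configurable TX queue size: split virtio
 * rings must be a power of two, and the proposal bounds the size
 * between 256 and 1024. */
#define TX_QUEUE_MIN 256
#define TX_QUEUE_MAX 1024

static bool tx_queue_size_valid(unsigned size)
{
    if (size < TX_QUEUE_MIN || size > TX_QUEUE_MAX)
        return false;
    return (size & (size - 1)) == 0;  /* power of two, per the virtio spec */
}

int main(void)
{
    printf("256:%d 512:%d 1000:%d 1024:%d\n",
           tx_queue_size_valid(256), tx_queue_size_valid(512),
           tx_queue_size_valid(1000), tx_queue_size_valid(1024));
    return 0;  /* prints 256:1 512:1 1000:0 1024:1 */
}

On the QEMU side this would presumably surface as a qdev property on
virtio-net, but that is an assumption about the eventual interface.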
>
> >
> > The reason to propose this request is that a severe issue of packet drops
> > in the TX direction was observed with the existing hardcoded queue size of
> > 256, which causes performance problems for packet-drop-sensitive guest
> > applications that cannot use indirect descriptor tables. The issue goes
> > away with a 1K queue size.
>
> Do we need even more? What if we find that 1K is still not sufficient in
> the future? Modern NICs have queue sizes of up to ~8192.
>
> >
> > The concern mentioned in the previous discussion (please check the link
> > above) is that the number of chained descriptors would exceed
> > UIO_MAXIOV (1024) supported by Linux.
>
> We could try to address this limitation, but we'd probably need a new
> feature bit to allow more than UIO_MAXIOV sgs.
I'd say we should split the queue size and the sg size.
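To illustrate what splitting them means (a sketch, not QEMU or vhost
code): the ring size and the per-chain scatter-gather count are
independent limits, so a ring larger than 1024 is fine as long as no
single descriptor chain needs more than UIO_MAXIOV iovec slots when it
is handed to the kernel.

#include <stdio.h>

#define UIO_MAXIOV 1024  /* kernel cap on iovec entries per writev()/readv() */

/* Hypothetical pair of limits, enforced independently. */
struct tx_limits {
    unsigned queue_size;     /* total descriptors in the ring    */
    unsigned max_chain_sgs;  /* longest allowed descriptor chain */
};

static int limits_ok(struct tx_limits l)
{
    return l.max_chain_sgs <= UIO_MAXIOV && l.max_chain_sgs <= l.queue_size;
}

int main(void)
{
    struct tx_limits big_ring = { 8192, 1024 };  /* big ring, capped chains  */
    struct tx_limits bad      = { 8192, 2048 };  /* chain exceeds UIO_MAXIOV */
    printf("%d %d\n", limits_ok(big_ring), limits_ok(bad));  /* prints: 1 0 */
    return 0;
}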
> >
> > From the code, I think the number of chained descriptors is limited to
> > MAX_SKB_FRAGS + 2 (~18), which is much less than UIO_MAXIOV.
>
> This is the limit on the number of page frags for an skb, not the iov limit.
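For reference, here is where the figure comes from, assuming 4 KiB pages
and the kernel's definition of MAX_SKB_FRAGS at the time (the arithmetic
is illustrative, not a statement about every configuration):

#include <stdio.h>

#define PAGE_SIZE     4096
#define MAX_SKB_FRAGS (65536 / PAGE_SIZE + 1)  /* = 17 with 4 KiB pages */

int main(void)
{
    /* One descriptor per page frag, plus two more for the virtio-net
     * header and the skb linear data, gives the per-packet worst case. */
    int per_packet = MAX_SKB_FRAGS + 2;  /* = 19, the "~18" above */
    printf("max descriptors per packet: %d (UIO_MAXIOV is 1024)\n",
           per_packet);
    return 0;
}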
>
> Thanks
>
> > Please point out if I missed anything. Thanks.
> >
> > Best,
> > Wei
> >
> >