From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 2/2] vhost_net: conditionally enable tx polling
Date: Mon, 30 May 2016 18:55:21 +0300 [thread overview]
Message-ID: <20160530155521.GA5427@redhat.com> (raw)
In-Reply-To: <1464590874-39539-3-git-send-email-jasowang@redhat.com>
On Mon, May 30, 2016 at 02:47:54AM -0400, Jason Wang wrote:
> We always poll the socket for tx, which is suboptimal since:
>
> - it is only needed when we exceed the sndbuf of the socket.
> - since we use two independent polls for tx and vq, this slightly
>   increases the waitqueue traversal time and, more importantly, vhost
>   cannot benefit from commit
>   9e641bdcfa4ef4d6e2fbaa59c1be0ad5d1551fd5 ("net-tun: restructure
>   tun_do_read for better sleep/wakeup efficiency") even if we've
>   stopped rx polling during handle_rx, since the tx poll is still left
>   in the waitqueue.
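
(For context: vhost_net registers two independent vhost_poll entries on the
backend socket when the device is opened; roughly the following, paraphrased
from drivers/vhost/net.c of this era:

	vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, POLLOUT, dev);
	vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, POLLIN, dev);

Both entries land on the same sk_sleep() wait queue of the tun/macvtap socket,
so even with rx polling stopped during handle_rx, the permanent tx entry keeps
that wait queue non-empty.)
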
Why is this an issue?
sock_def_write_space only wakes writers up when the queue is at least
half empty, not on each packet:
if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf)
I suspect the issue is with your previous patch:
it now pokes at the spinlock on the data path
where it used not to.
Is that right?
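
For reference, sock_def_write_space() in net/core/sock.c of this era looks
roughly like the sketch below (paraphrased, not a verbatim copy): a writer is
only woken once at least half of sk_sndbuf has drained, so the socket's wait
queue is not walked for every transmitted packet.

static void sock_def_write_space(struct sock *sk)
{
	struct socket_wq *wq;

	rcu_read_lock();
	/* Only wake writers once they can make "significant" progress,
	 * i.e. at least half of sk_sndbuf has been freed. */
	if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf) {
		wq = rcu_dereference(sk->sk_wq);
		if (skwq_has_sleeper(wq))
			wake_up_interruptible_sync_poll(&wq->wait,
				POLLOUT | POLLWRNORM | POLLWRBAND);
		if (sock_writeable(sk))
			sk_wake_async(sk, SOCK_WAKE_SPACE, POLL_OUT);
	}
	rcu_read_unlock();
}
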
>
> Fix this by enabling tx polling conditionally, only when sending hits
> -EAGAIN.
>
> Tests show about an 8% improvement in guest rx pps.
>
> Before: ~1350000
> After: ~1460000
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
> drivers/vhost/net.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index e91603b..5a05fa0 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -378,6 +378,7 @@ static void handle_tx(struct vhost_net *net)
>  		goto out;
> 
>  	vhost_disable_notify(&net->dev, vq);
> +	vhost_net_disable_vq(net, vq);
> 
>  	hdr_size = nvq->vhost_hlen;
>  	zcopy = nvq->ubufs;
> @@ -459,6 +460,8 @@ static void handle_tx(struct vhost_net *net)
>  					% UIO_MAXIOV;
>  			}
>  			vhost_discard_vq_desc(vq, 1);
> +			if (err == -EAGAIN)
> +				vhost_net_enable_vq(net, vq);
>  			break;
>  		}
>  		if (err != len)
> --
> 1.8.3.1
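
The pattern in the diff above is the same one ordinary userspace event loops
use: do not watch for write-readiness at all until a write actually fails with
EAGAIN, and drop the interest again once the backlog drains. A minimal,
self-contained sketch using epoll (a hypothetical illustration, not vhost or
tun code):

/*
 * Arm EPOLLOUT only after a write fails with EAGAIN; start with no
 * write-space polling at all, as the patched handle_tx() does.
 *
 * Build: cc -o txpoll txpoll.c
 */
#define _GNU_SOURCE
#include <errno.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

static char payload[1 << 20];	/* usually enough to overflow the default sndbuf */

/* Flush as much as possible; 1 = done, 0 = hit EAGAIN, -1 = real error. */
static int try_flush(int fd, size_t *off)
{
	while (*off < sizeof(payload)) {
		ssize_t n = write(fd, payload + *off, sizeof(payload) - *off);
		if (n < 0) {
			if (errno == EAGAIN || errno == EWOULDBLOCK)
				return 0;	/* "sndbuf exceeded" */
			return -1;
		}
		*off += n;
	}
	return 1;
}

int main(void)
{
	int sv[2], epfd, rc;
	size_t off = 0;
	struct epoll_event ev = { 0 };

	if (socketpair(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0, sv) < 0)
		return 1;
	epfd = epoll_create1(0);
	memset(payload, 'x', sizeof(payload));

	/* Register the sender with no events: tx polling starts disabled. */
	ev.events = 0;
	ev.data.fd = sv[0];
	epoll_ctl(epfd, EPOLL_CTL_ADD, sv[0], &ev);

	rc = try_flush(sv[0], &off);
	if (rc == 0) {
		/* Hit EAGAIN: only now ask for write-space notifications. */
		ev.events = EPOLLOUT;
		epoll_ctl(epfd, EPOLL_CTL_MOD, sv[0], &ev);
		/* A real loop would epoll_wait() here, let the peer (sv[1])
		 * drain, retry try_flush(), and disarm EPOLLOUT again once
		 * it returns 1, so idle traffic never wakes the sender. */
	}

	close(sv[0]);
	close(sv[1]);
	close(epfd);
	return rc < 0;
}

The design point matches the patch: a permanently armed write-space watcher
costs a wait-queue entry (and wakeups) on every transmit, while arming it only
on EAGAIN makes the common case free.
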
Thread overview: 7+ messages
2016-05-30 6:47 [PATCH V2 0/2] vhost_net polling optimization Jason Wang
2016-05-30 6:47 ` [PATCH V2 1/2] vhost_net: stop polling socket during rx processing Jason Wang
2016-05-30 15:47 ` Michael S. Tsirkin
2016-05-31 3:14 ` Jason Wang
2016-05-30 6:47 ` [PATCH V2 2/2] vhost_net: conditionally enable tx polling Jason Wang
2016-05-30 15:55 ` Michael S. Tsirkin [this message]
2016-05-31 3:23 ` Jason Wang