From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next RFC 2/2] vhost_net: basic polling support
Date: Fri, 23 Oct 2015 16:39:26 +0300 [thread overview]
Message-ID: <20151023163839-mutt-send-email-mst@redhat.com> (raw)
In-Reply-To: <5629DE03.7030902@redhat.com>
On Fri, Oct 23, 2015 at 03:13:07PM +0800, Jason Wang wrote:
>
>
> On 10/22/2015 05:33 PM, Michael S. Tsirkin wrote:
> > On Thu, Oct 22, 2015 at 01:27:29AM -0400, Jason Wang wrote:
> >> This patch tries to poll for newly added tx buffers for a while at
> >> the end of tx processing. The maximum time spent on polling is
> >> limited through a module parameter. To avoid blocking rx, the loop
> >> ends when there is other work queued on vhost, so in effect the
> >> socket receive queue is also polled.
> >>
> >> busyloop_timeout = 50 gives us the following improvement on a TCP_RR test:
> >>
> >> size/sessions/+throughput%/+normalized%
> >> 1/ 1/ +5%/ -20%
> >> 1/ 50/ +17%/ +3%
> > Is there a measurable increase in CPU utilization
> > with busyloop_timeout = 0?
>
> Just ran TCP_RR; no increase. I will run a complete test for the next version.
>
> >
> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > We might be able to shave off the minor regression
> > by careful use of likely/unlikely, or maybe
> > deferring
>
> Yes, but what did "deferring" mean here?
Don't call local_clock until we know we'll need it.
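E.g. only sample the clock once the ring actually turns up empty. A rough,
untested sketch of the handle_tx hunk (this assumes endtime is reset to 0 at
the top of the for (;;) loop, which is not how the posted patch does it):

        /* Nothing new? Busy poll for a while before falling back to
         * the eventfd notification path.
         */
        if (head == vq->num) {
                if (!endtime)
                        /* First empty result: only now read the clock. */
                        endtime = (local_clock() >> 10) + busyloop_timeout;
                if (tx_can_busy_poll(vq->dev, endtime)) {
                        cpu_relax();
                        goto again;
                }
                /* vhost_enable_notify()/continue exactly as in the patch */
        }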
> >
> >> ---
> >> drivers/vhost/net.c | 19 +++++++++++++++++++
> >> 1 file changed, 19 insertions(+)
> >>
> >> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> >> index 9eda69e..bbb522a 100644
> >> --- a/drivers/vhost/net.c
> >> +++ b/drivers/vhost/net.c
> >> @@ -31,7 +31,9 @@
> >> #include "vhost.h"
> >>
> >> static int experimental_zcopytx = 1;
> >> +static int busyloop_timeout = 50;
> >> module_param(experimental_zcopytx, int, 0444);
> >> +module_param(busyloop_timeout, int, 0444);
> > Please add a description, including the units and the special
> > value 0.
>
> Ok.
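Something along these lines would do (the wording is only a suggestion; the
units follow the local_clock() >> 10 conversion in the patch, i.e. roughly
microseconds):

        MODULE_PARM_DESC(busyloop_timeout,
                         "Max time (in ~microseconds) to busy poll for more tx buffers;"
                         " 0 - disable busy polling");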
>
> >
> >> MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;"
> >> " 1 -Enable; 0 - Disable");
> >>
> >> @@ -287,12 +289,23 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
> >> rcu_read_unlock_bh();
> >> }
> >>
> >> +static bool tx_can_busy_poll(struct vhost_dev *dev,
> >> + unsigned long endtime)
> >> +{
> >> + unsigned long now = local_clock() >> 10;
> > local_clock might go backwards if we jump between CPUs.
> > One way to fix would be to record the CPU id and break
> > out of loop if that changes.
>
> Right, or maybe disable preemption in this case?
>
> >
> > Also - defer this until we actually know we need it?
>
> Right.
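For both points, one possible shape (illustrative only, not tested; the extra
cpu argument is my assumption, not something in the posted patch) is to record
the CPU when endtime is sampled, do the cheap checks first, and only then
touch the clock:

        static bool tx_can_busy_poll(struct vhost_dev *dev,
                                     unsigned long endtime, int cpu)
        {
                /* Cheap checks first: no clock read unless we may poll. */
                if (!busyloop_timeout || need_resched() ||
                    vhost_has_work(dev) || !single_task_running())
                        return false;

                /* local_clock() is only guaranteed monotonic per CPU, so
                 * stop polling if we migrated since endtime was sampled.
                 */
                if (cpu != raw_smp_processor_id())
                        return false;

                return !time_after(local_clock() >> 10, endtime);
        }

Disabling preemption would avoid the migration issue too, but then we would be
busy-spinning with preemption off, which might be worse than simply giving up
when we migrate.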
>
> >
> >> +
> >> + return busyloop_timeout && !need_resched() &&
> >> + !time_after(now, endtime) && !vhost_has_work(dev) &&
> >> + single_task_running();
> > signal pending as well?
>
> Yes.
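I.e. extend the condition with signal_pending(), something like (a sketch
against the condition quoted above):

        return busyloop_timeout && !need_resched() &&
               !signal_pending(current) &&
               !time_after(now, endtime) && !vhost_has_work(dev) &&
               single_task_running();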
>
> >> +}
> >> +
> >> /* Expects to be always run from workqueue - which acts as
> >> * read-size critical section for our kind of RCU. */
> >> static void handle_tx(struct vhost_net *net)
> >> {
> >> struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
> >> struct vhost_virtqueue *vq = &nvq->vq;
> >> + unsigned long endtime;
> >> unsigned out, in;
> >> int head;
> >> struct msghdr msg = {
> >> @@ -331,6 +344,8 @@ static void handle_tx(struct vhost_net *net)
> >> % UIO_MAXIOV == nvq->done_idx))
> >> break;
> >>
> >> + endtime = (local_clock() >> 10) + busyloop_timeout;
> >> +again:
> >> head = vhost_get_vq_desc(vq, vq->iov,
> >> ARRAY_SIZE(vq->iov),
> >> &out, &in,
> >> @@ -340,6 +355,10 @@ static void handle_tx(struct vhost_net *net)
> >> break;
> >> /* Nothing new? Wait for eventfd to tell us they refilled. */
> >> if (head == vq->num) {
> >> + if (tx_can_busy_poll(vq->dev, endtime)) {
> >> + cpu_relax();
> >> + goto again;
> >> + }
> >> if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> >> vhost_disable_notify(&net->dev, vq);
> >> continue;
> >> --
> >> 1.8.3.1