From: David Miller <davem@davemloft.net>
To: xii@google.com
Cc: netdev@vger.kernel.org, jasowang@redhat.com, mst@redhat.com,
	maxk@qti.qualcomm.com, ncardwell@google.com, edumazet@google.com
Subject: Re: [PATCH v2] net-tun: restructure tun_do_read for better sleep/wakeup efficiency
Date: Wed, 21 May 2014 15:51:00 -0400 (EDT)
Message-ID: <20140521.155100.1364245684110064848.davem@davemloft.net>
In-Reply-To: <1400278308-25372-1-git-send-email-xii@google.com>

From: Xi Wang <xii@google.com>
Date: Fri, 16 May 2014 15:11:48 -0700

> tun_do_read always adds the current thread to the wait queue, even if a
> packet is ready to read. This is inefficient because both the sleeper and
> the waker want to acquire the wait queue spin lock when the packet rate
> is high.
> 
> We restructure the read function and use common kernel networking
> routines to handle receive, sleep and wakeup. With this change,
> available packets are checked before the reading thread is added
> to the wait queue.
> 
> Ran performance tests with the following configuration:
> 
>  - my packet generator -> tap1 -> br0 -> tap0 -> my packet consumer
>  - sender pinned to one core and receiver pinned to another core
>  - sender sends small UDP packets (64 bytes total) as fast as it can
>  - Sandy Bridge cores
>  - throughput numbers are receiver-side goodput
> 
> The results are
> 
> baseline: 731k pkts/sec, cpu utilization at 1.50 cpus
>  changed: 783k pkts/sec, cpu utilization at 1.53 cpus
> 
> The performance difference is largely determined by packet rate and
> inter-CPU communication cost. For example, if the sender and
> receiver are pinned to different CPU sockets, the results are
> 
> baseline: 558k pkts/sec, cpu utilization at 1.71 cpus
>  changed: 690k pkts/sec, cpu utilization at 1.67 cpus
> 
> Co-authored-by: Eric Dumazet <edumazet@google.com>
> Signed-off-by: Xi Wang <xii@google.com>

Applied to net-next, thanks.
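
The sketch below illustrates, in simplified C, the check-before-sleep
pattern the commit message describes. It is not the actual tun_do_read()
diff: reader_wait_old(), reader_wait_new() and their parameters are
placeholder names, and signal handling, timeouts and the non-blocking
path are omitted.

#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/wait.h>

/* Before: the reader always joins the wait queue, so every wakeup from
 * the sender has to take the wait-queue spinlock, even when a packet is
 * already sitting in the receive queue. */
static struct sk_buff *reader_wait_old(struct sk_buff_head *q,
				       wait_queue_head_t *wq)
{
	DECLARE_WAITQUEUE(wait, current);
	struct sk_buff *skb;

	add_wait_queue(wq, &wait);		/* takes the wait-queue lock */
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		skb = skb_dequeue(q);		/* re-check after queueing */
		if (skb)
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
	remove_wait_queue(wq, &wait);		/* takes the lock again */
	return skb;
}

/* After: try the receive queue first and only fall back to sleeping when
 * it is empty, so the wait-queue lock is never touched on the fast path. */
static struct sk_buff *reader_wait_new(struct sk_buff_head *q,
				       wait_queue_head_t *wq)
{
	struct sk_buff *skb;

	skb = skb_dequeue(q);			/* fast path: no wait-queue lock */
	if (skb)
		return skb;

	return reader_wait_old(q, wq);		/* slow path: sleep as before */
}

In the applied patch the slow path is handled by the common kernel
networking routines mentioned in the commit message rather than an
open-coded loop like the one above.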

Thread overview: 18+ messages
2014-05-16 22:11 [PATCH v2] net-tun: restructure tun_do_read for better sleep/wakeup efficiency Xi Wang
2014-05-19  9:27 ` Jason Wang
2014-05-19 14:09   ` Eric Dumazet
2014-05-20  4:44     ` Jason Wang
2014-05-20  4:52       ` Eric Dumazet
2014-05-20  6:35         ` Michael S. Tsirkin
2014-05-20  5:11       ` Eric Dumazet
2014-05-20  6:03         ` Jason Wang
2014-05-20  6:34           ` Michael S. Tsirkin
2014-05-20  6:55             ` Jason Wang
2014-05-20 13:59           ` Eric Dumazet
2014-05-21  4:45             ` Jason Wang
2014-05-19 16:06   ` Michael S. Tsirkin
2014-05-20  4:51     ` Jason Wang
2014-05-20  6:22       ` Michael S. Tsirkin
2014-05-20  6:40         ` Jason Wang
2014-05-21  7:54 ` Michael S. Tsirkin
2014-05-21 19:51 ` David Miller [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20140521.155100.1364245684110064848.davem@davemloft.net \
    --to=davem@davemloft.net \
    --cc=edumazet@google.com \
    --cc=jasowang@redhat.com \
    --cc=maxk@qti.qualcomm.com \
    --cc=mst@redhat.com \
    --cc=ncardwell@google.com \
    --cc=netdev@vger.kernel.org \
    --cc=xii@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.

This is a public inbox; see the mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for NNTP
newsgroup(s).