public inbox for linux-kernel@vger.kernel.org
From: Mike Galbraith <bitbucket@online.de>
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, Ingo Molnar <mingo@elte.hu>
Subject: Re: sched: context tracking demolishes pipe-test
Date: Tue, 02 Jul 2013 06:03:23 +0200	[thread overview]
Message-ID: <1372737803.7363.101.camel@marge.simpson.net> (raw)
In-Reply-To: <1372670432.7678.114.camel@marge.simpson.net>

On Mon, 2013-07-01 at 11:20 +0200, Mike Galbraith wrote: 
> On Mon, 2013-07-01 at 11:12 +0200, Mike Galbraith wrote: 
> > On Mon, 2013-07-01 at 10:06 +0200, Peter Zijlstra wrote:
> > 
> > > So aside from the context tracking stuff, there's still a regression
> > > we might want to look at. That's still a ~10% drop against 2.6.32 for
> > > TCP_RR and few percents for tbench.
> > 
> > Yeah, known, and some of it's ours.
> 
> (btw tbench has a ~5% phase-of-moon jitter, you can pretty much
> disregard that one)

Hm.  Seems we don't own much of the TCP_RR regression after all;
somewhere along the line, while my silly-tester hat was moldering, we
got some cycles back.. in the light config case anyway.

With wakeup granularity set to zero, per pipe-test, the scheduler is
within variance of .32, sometimes appearing a tad lighter, though
usually a wee bit heavier.  The TCP_RR throughput delta does not
correlate with that.

echo 0 > /proc/sys/kernel/sched_wakeup_granularity_ns

pipe-test
2.6.32-regress    689.8 KHz            1.000
3.10.0-regress    682.5 KHz             .989

netperf TCP_RR
2.6.32-regress   117910.11 Trans/sec   1.000
3.10.0-regress    96955.12 Trans/sec    .822

It should be closer than this. 

     3.10.0-regress                                              2.6.32-regress
     3.85%  [kernel]        [k] tcp_ack                          4.04%  [kernel]        [k] tcp_sendmsg
     3.34%  [kernel]        [k] __schedule                       3.63%  [kernel]        [k] schedule
     2.93%  [kernel]        [k] tcp_sendmsg                      2.86%  [kernel]        [k] tcp_recvmsg
     2.54%  [kernel]        [k] tcp_rcv_established              2.83%  [kernel]        [k] tcp_ack
     2.26%  [kernel]        [k] tcp_transmit_skb                 2.19%  [kernel]        [k] system_call
     1.90%  [kernel]        [k] __netif_receive_skb_core         2.16%  [kernel]        [k] tcp_transmit_skb
     1.87%  [kernel]        [k] tcp_v4_rcv                       2.07%  libc-2.14.1.so  [.] __libc_recv
     1.84%  [kernel]        [k] tcp_write_xmit                   1.95%  [kernel]        [k] _spin_lock_bh
     1.70%  [kernel]        [k] __switch_to                      1.89%  libc-2.14.1.so  [.] __libc_send
     1.57%  [kernel]        [k] tcp_recvmsg                      1.77%  [kernel]        [k] tcp_rcv_established
     1.54%  [kernel]        [k] _raw_spin_lock_bh                1.70%  [kernel]        [k] netif_receive_skb
     1.52%  libc-2.14.1.so  [.] __libc_recv                      1.61%  [kernel]        [k] tcp_v4_rcv
     1.43%  [kernel]        [k] ip_rcv                           1.49%  [kernel]        [k] native_sched_clock
     1.35%  [kernel]        [k] local_bh_enable                  1.49%  [kernel]        [k] tcp_write_xmit
     1.33%  [kernel]        [k] _raw_spin_lock_irqsave           1.46%  [kernel]        [k] __switch_to
     1.26%  [kernel]        [k] ip_queue_xmit                    1.35%  [kernel]        [k] dev_queue_xmit
     1.16%  [kernel]        [k] __inet_lookup_established        1.29%  [kernel]        [k] __alloc_skb
     1.14%  [kernel]        [k] mod_timer                        1.27%  [kernel]        [k] skb_release_data
     1.13%  [kernel]        [k] process_backlog                  1.26%  netserver       [.] recv_tcp_rr
     1.13%  [kernel]        [k] read_tsc                         1.22%  [kernel]        [k] local_bh_enable
     1.13%  libc-2.14.1.so  [.] __libc_send                      1.18%  netperf         [.] send_tcp_rr
     1.12%  [kernel]        [k] system_call                      1.18%  [kernel]        [k] sched_clock_local
     1.07%  [kernel]        [k] tcp_event_data_recv              1.11%  [kernel]        [k] copy_user_generic_string
     1.04%  [kernel]        [k] ip_finish_output                 1.07%  [kernel]        [k] _spin_lock_irqsave

	-Mike


Thread overview: 8+ messages
2013-06-30  7:57 sched: context tracking demolishes pipe-test Mike Galbraith
2013-06-30 21:29 ` Peter Zijlstra
2013-07-01  6:07   ` Mike Galbraith
2013-07-01  8:06     ` Peter Zijlstra
2013-07-01  9:12       ` Mike Galbraith
2013-07-01  9:20         ` Mike Galbraith
2013-07-02  4:03           ` Mike Galbraith [this message]
2013-07-02  7:19             ` Mike Galbraith
