public inbox for linux-kernel@vger.kernel.org
* [rcu] 5057f55e543: -23.5% qperf.udp.recv_bw
@ 2014-06-03 10:08 Jet Chen
  2014-06-03 14:17 ` Paul E. McKenney
  0 siblings, 1 reply; 4+ messages in thread
From: Jet Chen @ 2014-06-03 10:08 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: LKML, lkp, Fengguang Wu

[-- Attachment #1: Type: text/plain, Size: 6346 bytes --]

Hi Paul,

FYI, we noticed the below changes on

git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/fixes
commit 5057f55e543b7859cfd26bc281291795eac93f8a ("rcu: Bind RCU grace-period kthreads if NO_HZ_FULL")

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
 2.127e+09 ~ 0%     -23.5%  1.628e+09 ~ 4%  bens/qperf/600s
 2.127e+09 ~ 0%     -23.5%  1.628e+09 ~ 4%  TOTAL qperf.udp.recv_bw

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
 2.128e+09 ~ 0%     -23.3%  1.633e+09 ~ 4%  bens/qperf/600s
 2.128e+09 ~ 0%     -23.3%  1.633e+09 ~ 4%  TOTAL qperf.udp.send_bw

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
 2.101e+10 ~ 2%     -18.7%  1.707e+10 ~ 2%  bens/iperf/300s-tcp
 2.101e+10 ~ 2%     -18.7%  1.707e+10 ~ 2%  TOTAL iperf.tcp.sender.bps

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
 2.101e+10 ~ 2%     -18.7%  1.707e+10 ~ 2%  bens/iperf/300s-tcp
 2.101e+10 ~ 2%     -18.7%  1.707e+10 ~ 2%  TOTAL iperf.tcp.receiver.bps

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
 1.331e+09 ~ 2%      -5.8%  1.255e+09 ~ 2%  bens/qperf/600s
   2.4e+09 ~ 6%     -30.4%  1.671e+09 ~12%  brickland3/qperf/600s
 2.384e+09 ~ 7%     -12.1%  2.096e+09 ~ 3%  lkp-sb03/qperf/600s
 6.115e+09 ~ 5%     -17.9%  5.022e+09 ~ 6%  TOTAL qperf.sctp.bw

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
  2.83e+09 ~ 1%     -12.5%  2.476e+09 ~ 3%  bens/qperf/600s
  2.83e+09 ~ 1%     -12.5%  2.476e+09 ~ 3%  TOTAL qperf.tcp.bw

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
 2.272e+08 ~ 1%     -13.3%   1.97e+08 ~ 2%  bens/qperf/600s
 2.272e+08 ~ 1%     -13.3%   1.97e+08 ~ 2%  TOTAL proc-vmstat.pgalloc_dma32

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
     53062 ~ 2%     -35.1%      34464 ~ 3%  bens/qperf/600s
    109531 ~13%     +46.9%     160928 ~ 5%  brickland3/qperf/600s
     67902 ~ 1%     +13.8%      77302 ~ 3%  lkp-sb03/qperf/600s
    230496 ~ 7%     +18.3%     272694 ~ 4%  TOTAL softirqs.RCU

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
     80344 ~ 1%     -26.2%      59325 ~ 2%  bens/qperf/600s
     80344 ~ 1%     -26.2%      59325 ~ 2%  TOTAL softirqs.SCHED

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
      1036 ~ 4%     -17.6%        853 ~ 4%  brickland3/qperf/600s
      1036 ~ 4%     -17.6%        853 ~ 4%  TOTAL proc-vmstat.nr_page_table_pages

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
     48.12 ~ 0%     -11.7%      42.46 ~ 6%  brickland3/qperf/600s
     48.12 ~ 0%     -11.7%      42.46 ~ 6%  TOTAL turbostat.%pc2

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
  74689352 ~ 1%     -13.3%   64771743 ~ 2%  bens/qperf/600s
  74689352 ~ 1%     -13.3%   64771743 ~ 2%  TOTAL proc-vmstat.pgalloc_normal

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
 3.019e+08 ~ 1%     -13.3%  2.618e+08 ~ 2%  bens/qperf/600s
 3.019e+08 ~ 1%     -13.3%  2.618e+08 ~ 2%  TOTAL proc-vmstat.pgfree

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
  23538414 ~ 0%     -12.9%   20506157 ~ 2%  bens/qperf/600s
  23538414 ~ 0%     -12.9%   20506157 ~ 2%  TOTAL proc-vmstat.numa_local

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
  23538414 ~ 0%     -12.9%   20506157 ~ 2%  bens/qperf/600s
  23538414 ~ 0%     -12.9%   20506157 ~ 2%  TOTAL proc-vmstat.numa_hit

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
     12789 ~ 1%     -10.9%      11391 ~ 2%  bens/qperf/600s
     12789 ~ 1%     -10.9%      11391 ~ 2%  TOTAL softirqs.HRTIMER

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
    481253 ~ 0%      -8.9%     438624 ~ 0%  bens/qperf/600s
    481253 ~ 0%      -8.9%     438624 ~ 0%  TOTAL softirqs.TIMER

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
      1297 ~33%    +565.9%       8640 ~ 7%  bens/iperf/300s-tcp
      2788 ~ 3%    +588.8%      19204 ~ 4%  bens/qperf/600s
      1191 ~ 5%   +1200.9%      15493 ~ 4%  brickland3/qperf/600s
      1135 ~26%   +1195.9%      14709 ~ 4%  lkp-sb03/qperf/600s
      6411 ~13%    +805.3%      58047 ~ 4%  TOTAL time.involuntary_context_switches

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
     72398 ~ 1%      -5.4%      68503 ~ 0%  bens/qperf/600s
      8789 ~ 4%     +22.3%      10749 ~15%  lkp-sb03/qperf/600s
     81187 ~ 1%      -2.4%      79253 ~ 2%  TOTAL vmstat.system.in

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
    141174 ~ 1%      -5.4%     133551 ~ 0%  bens/qperf/600s
    143982 ~ 1%      -4.4%     137600 ~ 0%  brickland3/qperf/600s
    285156 ~ 1%      -4.9%     271152 ~ 0%  TOTAL vmstat.system.cs

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
  42351859 ~ 0%      -5.3%   40114932 ~ 0%  bens/qperf/600s
  43015383 ~ 1%      -4.4%   41143092 ~ 0%  brickland3/qperf/600s
  85367242 ~ 1%      -4.8%   81258025 ~ 0%  TOTAL time.voluntary_context_switches

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
       146 ~ 0%      -2.2%        143 ~ 0%  bens/qperf/600s
       147 ~ 1%      -4.8%        140 ~ 1%  brickland3/qperf/600s
       293 ~ 0%      -3.5%        283 ~ 0%  TOTAL time.percent_of_cpu_this_job_got

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
       872 ~ 0%      -2.3%        853 ~ 0%  bens/qperf/600s
       874 ~ 1%      -4.6%        834 ~ 1%  brickland3/qperf/600s
      1747 ~ 0%      -3.4%       1687 ~ 0%  TOTAL time.system_time


Legend:
	~XX%    - stddev percent
	[+-]XX% - change percent
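
The two legend entries can be reproduced with a short sketch from repeated benchmark runs ("~XX%" is the relative standard deviation of the samples, "[+-]XX%" the relative change of the new mean versus the old mean); the sample values below are illustrative, not taken from this report:

```python
# Sketch of how the legend's percentages are plausibly derived from
# repeated runs.  Sample bandwidth values are made up for illustration.
from statistics import mean, pstdev

def stddev_percent(samples):
    """Standard deviation as a percentage of the mean ("~XX%")."""
    return 100.0 * pstdev(samples) / mean(samples)

def change_percent(old_samples, new_samples):
    """Percent change of the new mean relative to the old ("[+-]XX%")."""
    old, new = mean(old_samples), mean(new_samples)
    return 100.0 * (new - old) / old

old = [2.127e9, 2.130e9, 2.125e9]   # e.g. qperf.udp.recv_bw, parent commit
new = [1.628e9, 1.700e9, 1.580e9]   # same metric, tested commit
print(f"~{stddev_percent(new):.0f}%  {change_percent(old, new):+.1f}%")
```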




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Jet



[-- Attachment #2: reproduce --]
[-- Type: text/plain, Size: 324 bytes --]

echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
./runtest.py unlink1 32 1 2 3 4


* Re: [rcu] 5057f55e543: -23.5% qperf.udp.recv_bw
  2014-06-03 10:08 [rcu] 5057f55e543: -23.5% qperf.udp.recv_bw Jet Chen
@ 2014-06-03 14:17 ` Paul E. McKenney
  2014-06-04 12:33   ` Fengguang Wu
  0 siblings, 1 reply; 4+ messages in thread
From: Paul E. McKenney @ 2014-06-03 14:17 UTC (permalink / raw)
  To: Jet Chen; +Cc: LKML, lkp, Fengguang Wu

On Tue, Jun 03, 2014 at 06:08:41PM +0800, Jet Chen wrote:
> Hi Paul,
> 
> FYI, we noticed the below changes on
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/fixes
> commit 5057f55e543b7859cfd26bc281291795eac93f8a ("rcu: Bind RCU grace-period kthreads if NO_HZ_FULL")

My guess would be that some of these workloads generated enough callbacks
that binding all the rcuo callback-offloading kthreads to CPU 0 resulted
in a bottleneck.  If that was the case, CPU 0 would often hit 100%
CPU utilization, and there would be more wait time on other CPUs because
callback execution was delayed.
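
One hypothetical way to check this from userspace (my own sketch, not an existing tool) is to list the rcuo kthreads and the CPUs each is allowed to run on; if the binding is the bottleneck, they would all report CPU 0:

```python
# Hypothetical helper: map each rcuo* kthread to its allowed-CPU set by
# scanning /proc (Linux only).  Names and structure are mine, not from
# any kernel tooling.
import os, re

def parse_cpu_list(s):
    """Expand a Cpus_allowed_list string like '0,4-6' into a set of CPUs."""
    cpus = set()
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.update(range(lo, hi + 1))
        elif part:
            cpus.add(int(part))
    return cpus

def rcuo_affinities():
    """Return {pid: allowed-CPU set} for all rcuo* kthreads."""
    result = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
            if not comm.startswith("rcuo"):
                continue
            with open(f"/proc/{pid}/status") as f:
                m = re.search(r"^Cpus_allowed_list:\s*(\S+)", f.read(), re.M)
            if m:
                result[int(pid)] = parse_cpu_list(m.group(1))
        except OSError:
            continue  # thread exited while we were scanning
    return result

if __name__ == "__main__":
    for pid, cpus in sorted(rcuo_affinities().items()):
        print(pid, sorted(cpus))
```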

Does that match what you are seeing?

							Thanx, Paul

> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>  2.127e+09 ~ 0%     -23.5%  1.628e+09 ~ 4%  bens/qperf/600s
>  2.127e+09 ~ 0%     -23.5%  1.628e+09 ~ 4%  TOTAL qperf.udp.recv_bw
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>  2.128e+09 ~ 0%     -23.3%  1.633e+09 ~ 4%  bens/qperf/600s
>  2.128e+09 ~ 0%     -23.3%  1.633e+09 ~ 4%  TOTAL qperf.udp.send_bw
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>  2.101e+10 ~ 2%     -18.7%  1.707e+10 ~ 2%  bens/iperf/300s-tcp
>  2.101e+10 ~ 2%     -18.7%  1.707e+10 ~ 2%  TOTAL iperf.tcp.sender.bps
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>  2.101e+10 ~ 2%     -18.7%  1.707e+10 ~ 2%  bens/iperf/300s-tcp
>  2.101e+10 ~ 2%     -18.7%  1.707e+10 ~ 2%  TOTAL iperf.tcp.receiver.bps
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>  1.331e+09 ~ 2%      -5.8%  1.255e+09 ~ 2%  bens/qperf/600s
>    2.4e+09 ~ 6%     -30.4%  1.671e+09 ~12%  brickland3/qperf/600s
>  2.384e+09 ~ 7%     -12.1%  2.096e+09 ~ 3%  lkp-sb03/qperf/600s
>  6.115e+09 ~ 5%     -17.9%  5.022e+09 ~ 6%  TOTAL qperf.sctp.bw
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>   2.83e+09 ~ 1%     -12.5%  2.476e+09 ~ 3%  bens/qperf/600s
>   2.83e+09 ~ 1%     -12.5%  2.476e+09 ~ 3%  TOTAL qperf.tcp.bw
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>  2.272e+08 ~ 1%     -13.3%   1.97e+08 ~ 2%  bens/qperf/600s
>  2.272e+08 ~ 1%     -13.3%   1.97e+08 ~ 2%  TOTAL proc-vmstat.pgalloc_dma32
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>      53062 ~ 2%     -35.1%      34464 ~ 3%  bens/qperf/600s
>     109531 ~13%     +46.9%     160928 ~ 5%  brickland3/qperf/600s
>      67902 ~ 1%     +13.8%      77302 ~ 3%  lkp-sb03/qperf/600s
>     230496 ~ 7%     +18.3%     272694 ~ 4%  TOTAL softirqs.RCU
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>      80344 ~ 1%     -26.2%      59325 ~ 2%  bens/qperf/600s
>      80344 ~ 1%     -26.2%      59325 ~ 2%  TOTAL softirqs.SCHED
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>       1036 ~ 4%     -17.6%        853 ~ 4%  brickland3/qperf/600s
>       1036 ~ 4%     -17.6%        853 ~ 4%  TOTAL proc-vmstat.nr_page_table_pages
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>      48.12 ~ 0%     -11.7%      42.46 ~ 6%  brickland3/qperf/600s
>      48.12 ~ 0%     -11.7%      42.46 ~ 6%  TOTAL turbostat.%pc2
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>   74689352 ~ 1%     -13.3%   64771743 ~ 2%  bens/qperf/600s
>   74689352 ~ 1%     -13.3%   64771743 ~ 2%  TOTAL proc-vmstat.pgalloc_normal
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>  3.019e+08 ~ 1%     -13.3%  2.618e+08 ~ 2%  bens/qperf/600s
>  3.019e+08 ~ 1%     -13.3%  2.618e+08 ~ 2%  TOTAL proc-vmstat.pgfree
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>   23538414 ~ 0%     -12.9%   20506157 ~ 2%  bens/qperf/600s
>   23538414 ~ 0%     -12.9%   20506157 ~ 2%  TOTAL proc-vmstat.numa_local
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>   23538414 ~ 0%     -12.9%   20506157 ~ 2%  bens/qperf/600s
>   23538414 ~ 0%     -12.9%   20506157 ~ 2%  TOTAL proc-vmstat.numa_hit
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>      12789 ~ 1%     -10.9%      11391 ~ 2%  bens/qperf/600s
>      12789 ~ 1%     -10.9%      11391 ~ 2%  TOTAL softirqs.HRTIMER
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>     481253 ~ 0%      -8.9%     438624 ~ 0%  bens/qperf/600s
>     481253 ~ 0%      -8.9%     438624 ~ 0%  TOTAL softirqs.TIMER
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>       1297 ~33%    +565.9%       8640 ~ 7%  bens/iperf/300s-tcp
>       2788 ~ 3%    +588.8%      19204 ~ 4%  bens/qperf/600s
>       1191 ~ 5%   +1200.9%      15493 ~ 4%  brickland3/qperf/600s
>       1135 ~26%   +1195.9%      14709 ~ 4%  lkp-sb03/qperf/600s
>       6411 ~13%    +805.3%      58047 ~ 4%  TOTAL time.involuntary_context_switches
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>      72398 ~ 1%      -5.4%      68503 ~ 0%  bens/qperf/600s
>       8789 ~ 4%     +22.3%      10749 ~15%  lkp-sb03/qperf/600s
>      81187 ~ 1%      -2.4%      79253 ~ 2%  TOTAL vmstat.system.in
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>     141174 ~ 1%      -5.4%     133551 ~ 0%  bens/qperf/600s
>     143982 ~ 1%      -4.4%     137600 ~ 0%  brickland3/qperf/600s
>     285156 ~ 1%      -4.9%     271152 ~ 0%  TOTAL vmstat.system.cs
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>   42351859 ~ 0%      -5.3%   40114932 ~ 0%  bens/qperf/600s
>   43015383 ~ 1%      -4.4%   41143092 ~ 0%  brickland3/qperf/600s
>   85367242 ~ 1%      -4.8%   81258025 ~ 0%  TOTAL time.voluntary_context_switches
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>        146 ~ 0%      -2.2%        143 ~ 0%  bens/qperf/600s
>        147 ~ 1%      -4.8%        140 ~ 1%  brickland3/qperf/600s
>        293 ~ 0%      -3.5%        283 ~ 0%  TOTAL time.percent_of_cpu_this_job_got
> 
> 71a9b26963f8c2d  5057f55e543b7859cfd26bc28
> ---------------  -------------------------
>        872 ~ 0%      -2.3%        853 ~ 0%  bens/qperf/600s
>        874 ~ 1%      -4.6%        834 ~ 1%  brickland3/qperf/600s
>       1747 ~ 0%      -3.4%       1687 ~ 0%  TOTAL time.system_time
> 
> 
> Legend:
> 	~XX%    - stddev percent
> 	[+-]XX% - change percent
> 
> 
> 
> 
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
> 
> Thanks,
> Jet
> 
> 

> echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
> echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
> ./runtest.py unlink1 32 1 2 3 4



* Re: [rcu] 5057f55e543: -23.5% qperf.udp.recv_bw
  2014-06-03 14:17 ` Paul E. McKenney
@ 2014-06-04 12:33   ` Fengguang Wu
  2014-06-04 22:17     ` Paul E. McKenney
  0 siblings, 1 reply; 4+ messages in thread
From: Fengguang Wu @ 2014-06-04 12:33 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: Jet Chen, LKML, lkp

Hi Paul,

On Tue, Jun 03, 2014 at 07:17:20AM -0700, Paul E. McKenney wrote:
> On Tue, Jun 03, 2014 at 06:08:41PM +0800, Jet Chen wrote:
> > Hi Paul,
> > 
> > FYI, we noticed the below changes on
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/fixes
> > commit 5057f55e543b7859cfd26bc281291795eac93f8a ("rcu: Bind RCU grace-period kthreads if NO_HZ_FULL")
> 
> My guess would be that some of these workloads generated enough callbacks
> that binding all the rcuo callback-offloading kthreads to CPU 0 resulted
> in a bottleneck.  If that was the case, CPU 0 would often hit 100%
> CPU utilization, and there would be more wait time on other CPUs because
> callback execution was delayed.
> 
> Does that match what you are seeing?

This is the /proc/stat contents on completion of test
brickland3/qperf/600s. It shows that CPU 0 is not busy.
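
Here is a small sketch (mine, following the column legend quoted further down) of how the busy fraction can be read off a cpuN line, counting everything except the idle and iowait fields as busy:

```python
# Sketch: busy fraction of one /proc/stat "cpuN ..." line.  The field
# order is user nice system idle iowait irq softirq steal guest guest_nice.
def busy_fraction(stat_line):
    """Fraction of total ticks a cpuN line spent non-idle, non-iowait."""
    ticks = list(map(int, stat_line.split()[1:]))
    idle = ticks[3] + ticks[4]          # idle + iowait columns
    return 1.0 - idle / sum(ticks)

# The cpu0 line from the dump below:
line = "cpu0 2 0 575 69345 29 0 8 0 0 0"
print(f"{busy_fraction(line):.2%}")     # well under 1% busy
```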

cpu  1707 0 57716 8569826 33 0 14572 0 0 0
cpu0 2 0 575 69345 29 0 8 0 0 0
cpu1 0 0 86 72203 3 0 1 0 0 0
cpu2 116 0 386 71437 0 0 0 0 0 0

The columns of numbers are:

- user: normal processes executing in user mode
- nice: niced processes executing in user mode
- system: processes executing in kernel mode
- idle: twiddling thumbs
- iowait: waiting for I/O to complete
- irq: servicing interrupts
- softirq: servicing softirqs
- steal: involuntary wait
- guest: running a normal guest
- guest_nice: running a niced guest

cpu3 32 0 59 72213 0 0 0 0 0 0
cpu4 8 0 35 72272 0 0 0 0 0 0
cpu5 1 0 3 72318 0 0 0 0 0 0
cpu6 1 0 3 72320 0 0 0 0 0 0
cpu7 0 0 2 72327 0 0 0 0 0 0
cpu8 0 0 2 72323 0 0 0 0 0 0
cpu9 0 0 1 72323 0 0 0 0 0 0
cpu10 0 0 2 72314 0 0 0 0 0 0
cpu11 0 0 2 72317 0 0 0 0 0 0
cpu12 0 0 3 72311 0 0 0 0 0 0
cpu13 0 0 2 72313 0 0 0 0 0 0
cpu14 0 0 20 72296 0 0 0 0 0 0
cpu15 0 0 90 72192 0 0 1 0 0 0
cpu16 0 0 13 72288 0 0 1 0 0 0
cpu17 115 0 137 71848 0 0 0 0 0 0
cpu18 28 0 32 72106 0 0 0 0 0 0
cpu19 7 0 5 72256 0 0 0 0 0 0
cpu20 2 0 72 71938 0 0 0 0 0 0
cpu21 0 0 3 72273 0 0 0 0 0 0
cpu22 0 0 2 72284 0 0 0 0 0 0
cpu23 0 0 1 72284 0 0 0 0 0 0
cpu24 0 0 2 72279 0 0 0 0 0 0
cpu25 0 0 1 72281 0 0 0 0 0 0
cpu26 0 0 2 72274 0 0 0 0 0 0
cpu27 0 0 5 72274 0 0 0 0 0 0
cpu28 0 0 22 72255 0 0 0 0 0 0
cpu29 0 0 1 72273 0 0 0 0 0 0
cpu30 24 0 748 70165 0 0 407 0 0 0
cpu31 28 0 906 70251 0 0 465 0 0 0
cpu32 62 0 2276 68376 0 0 604 0 0 0
cpu33 27 0 1100 70059 0 0 453 0 0 0
cpu34 23 0 1359 69933 0 0 418 0 0 0
cpu35 32 0 1663 69109 0 0 775 0 0 0
cpu36 103 0 8569 60806 0 0 2306 0 0 0
cpu37 88 0 7052 63141 0 0 518 0 0 0
cpu38 51 0 2881 67927 0 0 498 0 0 0
cpu39 36 0 1705 69166 0 0 430 0 0 0
cpu40 99 0 8657 60720 0 0 1767 0 0 0
cpu41 103 0 7088 61185 0 0 1749 0 0 0
cpu42 42 0 1567 69244 0 0 525 0 0 0
cpu43 43 0 1661 68865 0 0 457 0 0 0
cpu44 39 0 2610 68046 0 0 660 0 0 0
cpu45 3 0 248 71852 0 0 30 0 0 0
cpu46 4 0 16 72140 0 0 6 0 0 0
cpu47 115 0 297 71435 0 0 159 0 0 0
cpu48 30 0 81 71940 0 0 61 0 0 0
cpu49 14 0 163 71815 0 0 118 0 0 0
cpu50 6 0 144 71832 0 0 129 0 0 0
cpu51 12 0 508 71217 0 0 276 0 0 0
cpu52 18 0 498 71082 0 0 378 0 0 0
cpu53 11 0 440 71236 0 0 282 0 0 0
cpu54 4 0 205 71710 0 0 144 0 0 0
cpu55 4 0 136 71898 0 0 78 0 0 0
cpu56 13 0 400 71305 0 0 288 0 0 0
cpu57 14 0 377 71346 0 0 316 0 0 0
cpu58 2 0 109 71954 0 0 73 0 0 0
cpu59 1 0 8 72174 0 0 3 0 0 0
cpu60 0 0 0 72191 0 0 0 0 0 0
cpu61 86 0 313 71749 0 0 0 0 0 0
cpu62 3 0 9 72173 0 0 0 0 0 0
cpu63 2 0 8 72175 0 0 0 0 0 0
cpu64 0 0 2 72178 0 0 0 0 0 0
cpu65 0 0 2 72177 0 0 0 0 0 0
cpu66 0 0 2 72175 0 0 0 0 0 0
cpu67 0 0 2 72173 0 0 0 0 0 0
cpu68 0 0 4 72169 0 0 0 0 0 0
cpu69 0 0 2 72168 0 0 0 0 0 0
cpu70 0 0 1 72166 0 0 0 0 0 0
cpu71 0 0 1 72164 0 0 0 0 0 0
cpu72 0 0 0 72163 0 0 0 0 0 0
cpu73 0 0 0 72161 0 0 0 0 0 0
cpu74 0 0 1 72158 0 0 0 0 0 0
cpu75 0 0 0 72156 0 0 0 0 0 0
cpu76 73 0 330 71709 0 0 0 0 0 0
cpu77 4 0 10 72137 0 0 0 0 0 0
cpu78 1 0 4 72145 0 0 0 0 0 0
cpu79 0 0 50 72077 0 0 0 0 0 0
cpu80 0 0 5 72138 0 0 0 0 0 0
cpu81 0 0 1 72141 0 0 0 0 0 0
cpu82 0 0 66 72074 0 0 0 0 0 0
cpu83 0 0 1 72137 0 0 0 0 0 0
cpu84 0 0 0 72136 0 0 0 0 0 0
cpu85 0 0 1 72133 0 0 0 0 0 0
cpu86 0 0 0 72131 0 0 0 0 0 0
cpu87 0 0 0 72129 0 0 0 0 0 0
cpu88 0 0 1 72127 0 0 0 0 0 0
cpu89 0 0 0 72124 0 0 0 0 0 0
cpu90 0 0 0 72122 0 0 0 0 0 0
cpu91 63 0 1377 70501 0 0 162 0 0 0
cpu92 2 0 14 72100 0 0 0 0 0 0
cpu93 1 0 5 72109 0 0 0 0 0 0
cpu94 0 0 4 72108 0 0 0 0 0 0
cpu95 1 0 4 72106 0 0 0 0 0 0
cpu96 0 0 1 72106 0 0 0 0 0 0
cpu97 0 0 0 72105 0 0 0 0 0 0
cpu98 0 0 1 72102 0 0 0 0 0 0
cpu99 0 0 2 72099 0 0 0 0 0 0
cpu100 0 0 1 72099 0 0 0 0 0 0
cpu101 0 0 0 72096 0 0 0 0 0 0
cpu102 0 0 0 72094 0 0 0 0 0 0
cpu103 0 0 1 72092 0 0 0 0 0 0
cpu104 0 0 0 72089 0 0 0 0 0 0
cpu105 0 0 1 72086 0 0 0 0 0 0
cpu106 66 0 363 71601 0 0 9 0 0 0
cpu107 2 0 4 72075 0 0 0 0 0 0
cpu108 0 0 2 72077 0 0 0 0 0 0
cpu109 0 0 8 72069 0 0 0 0 0 0
cpu110 0 0 1 72073 0 0 0 0 0 0
cpu111 0 0 9 72063 0 0 0 0 0 0
cpu112 0 0 1 72070 0 0 0 0 0 0
cpu113 0 0 0 72068 0 0 0 0 0 0
cpu114 0 0 1 72064 0 0 0 0 0 0
cpu115 0 0 1 72063 0 0 0 0 0 0
cpu116 0 0 1 72060 0 0 0 0 0 0
cpu117 0 0 0 72059 0 0 0 0 0 0
cpu118 0 0 0 72056 0 0 0 0 0 0
cpu119 0 0 1 72053 0 0 0 0 0 0
intr 5639794 132 0 0 9 856 0 0 0 1 0 0 0 0 0 0 0 31 0 0 0 0 0 0 311 [... several thousand per-IRQ counts, almost all zero, snipped ...]
ctxt 84441175
btime 1401871947
processes 23608
procs_running 2
procs_blocked 0
softirq 84702758 9 1574595 174 82369760 28 0 305 456968 84738 216181

Thanks,
Fengguang


* Re: [rcu] 5057f55e543: -23.5% qperf.udp.recv_bw
  2014-06-04 12:33   ` Fengguang Wu
@ 2014-06-04 22:17     ` Paul E. McKenney
  0 siblings, 0 replies; 4+ messages in thread
From: Paul E. McKenney @ 2014-06-04 22:17 UTC (permalink / raw)
  To: Fengguang Wu; +Cc: Jet Chen, LKML, lkp

On Wed, Jun 04, 2014 at 08:33:38PM +0800, Fengguang Wu wrote:
> Hi Paul,
> 
> On Tue, Jun 03, 2014 at 07:17:20AM -0700, Paul E. McKenney wrote:
> > On Tue, Jun 03, 2014 at 06:08:41PM +0800, Jet Chen wrote:
> > > Hi Paul,
> > > 
> > > FYI, we noticed the below changes on
> > > 
> > > git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/fixes
> > > commit 5057f55e543b7859cfd26bc281291795eac93f8a ("rcu: Bind RCU grace-period kthreads if NO_HZ_FULL")
> > 
> > My guess would be that some of these workloads generated enough callbacks
> > that binding all the rcuo callback-offloading kthreads to CPU 0 resulted
> > in a bottleneck.  If that was the case, CPU 0 would often hit 100%
> > CPU utilization, and there would be more wait time on other CPUs because
> > callback execution was delayed.
> > 
> > Does that match what you are seeing?
> 
> This is the /proc/stat contents on completion of test
> brickland3/qperf/600s. It shows that CPU 0 is not busy.

Hmmm...  Might be sometimes busy, thus blocking grace periods?
I posted a speculative fix at a558e309f99cf8b809691417324c6770e10bb614,
please let me know how it goes!
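
If cpu0 is only transiently busy, the cumulative /proc/stat totals would average it away; a sketch like the following (mine, not the posted fix) samples the cpu0 line twice and reports the busy fraction over just that window, which could catch bursts if run in a loop during the benchmark:

```python
# Speculative sketch: delta-based busy fraction for cpu0, since
# cumulative /proc/stat counters can hide short saturation bursts.
import time

def read_cpu0_ticks(path="/proc/stat"):
    """Return the tick counters from the cpu0 line of a stat file."""
    with open(path) as f:
        for line in f:
            if line.startswith("cpu0 "):
                return list(map(int, line.split()[1:]))
    raise RuntimeError("no cpu0 line in " + path)

def cpu0_busy_over(interval=1.0, path="/proc/stat"):
    """Busy (non-idle, non-iowait) fraction of cpu0 over one window."""
    a = read_cpu0_ticks(path)
    time.sleep(interval)
    b = read_cpu0_ticks(path)
    delta = [y - x for x, y in zip(a, b)]
    idle = delta[3] + delta[4]              # idle + iowait columns
    total = sum(delta) or 1                 # avoid divide-by-zero
    return 1.0 - idle / total

if __name__ == "__main__":
    # Run repeatedly while the benchmark executes to spot bursts.
    print(f"cpu0 busy over 1s: {cpu0_busy_over():.1%}")
```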

							Thanx, Paul

> cpu  1707 0 57716 8569826 33 0 14572 0 0 0
> cpu0 2 0 575 69345 29 0 8 0 0 0
> cpu1 0 0 86 72203 3 0 1 0 0 0
> cpu2 116 0 386 71437 0 0 0 0 0 0
> 
> The columns of numbers are:
> 
> - user: normal processes executing in user mode
> - nice: niced processes executing in user mode
> - system: processes executing in kernel mode
> - idle: twiddling thumbs
> - iowait: waiting for I/O to complete
> - irq: servicing interrupts
> - softirq: servicing softirqs
> - steal: involuntary wait
> - guest: running a normal guest
> - guest_nice: running a niced guest
> 
> cpu3 32 0 59 72213 0 0 0 0 0 0
> cpu4 8 0 35 72272 0 0 0 0 0 0
> cpu5 1 0 3 72318 0 0 0 0 0 0
> cpu6 1 0 3 72320 0 0 0 0 0 0
> cpu7 0 0 2 72327 0 0 0 0 0 0
> cpu8 0 0 2 72323 0 0 0 0 0 0
> cpu9 0 0 1 72323 0 0 0 0 0 0
> cpu10 0 0 2 72314 0 0 0 0 0 0
> cpu11 0 0 2 72317 0 0 0 0 0 0
> cpu12 0 0 3 72311 0 0 0 0 0 0
> cpu13 0 0 2 72313 0 0 0 0 0 0
> cpu14 0 0 20 72296 0 0 0 0 0 0
> cpu15 0 0 90 72192 0 0 1 0 0 0
> cpu16 0 0 13 72288 0 0 1 0 0 0
> cpu17 115 0 137 71848 0 0 0 0 0 0
> cpu18 28 0 32 72106 0 0 0 0 0 0
> cpu19 7 0 5 72256 0 0 0 0 0 0
> cpu20 2 0 72 71938 0 0 0 0 0 0
> cpu21 0 0 3 72273 0 0 0 0 0 0
> cpu22 0 0 2 72284 0 0 0 0 0 0
> cpu23 0 0 1 72284 0 0 0 0 0 0
> cpu24 0 0 2 72279 0 0 0 0 0 0
> cpu25 0 0 1 72281 0 0 0 0 0 0
> cpu26 0 0 2 72274 0 0 0 0 0 0
> cpu27 0 0 5 72274 0 0 0 0 0 0
> cpu28 0 0 22 72255 0 0 0 0 0 0
> cpu29 0 0 1 72273 0 0 0 0 0 0
> cpu30 24 0 748 70165 0 0 407 0 0 0
> cpu31 28 0 906 70251 0 0 465 0 0 0
> cpu32 62 0 2276 68376 0 0 604 0 0 0
> cpu33 27 0 1100 70059 0 0 453 0 0 0
> cpu34 23 0 1359 69933 0 0 418 0 0 0
> cpu35 32 0 1663 69109 0 0 775 0 0 0
> cpu36 103 0 8569 60806 0 0 2306 0 0 0
> cpu37 88 0 7052 63141 0 0 518 0 0 0
> cpu38 51 0 2881 67927 0 0 498 0 0 0
> cpu39 36 0 1705 69166 0 0 430 0 0 0
> cpu40 99 0 8657 60720 0 0 1767 0 0 0
> cpu41 103 0 7088 61185 0 0 1749 0 0 0
> cpu42 42 0 1567 69244 0 0 525 0 0 0
> cpu43 43 0 1661 68865 0 0 457 0 0 0
> cpu44 39 0 2610 68046 0 0 660 0 0 0
> cpu45 3 0 248 71852 0 0 30 0 0 0
> cpu46 4 0 16 72140 0 0 6 0 0 0
> cpu47 115 0 297 71435 0 0 159 0 0 0
> cpu48 30 0 81 71940 0 0 61 0 0 0
> cpu49 14 0 163 71815 0 0 118 0 0 0
> cpu50 6 0 144 71832 0 0 129 0 0 0
> cpu51 12 0 508 71217 0 0 276 0 0 0
> cpu52 18 0 498 71082 0 0 378 0 0 0
> cpu53 11 0 440 71236 0 0 282 0 0 0
> cpu54 4 0 205 71710 0 0 144 0 0 0
> cpu55 4 0 136 71898 0 0 78 0 0 0
> cpu56 13 0 400 71305 0 0 288 0 0 0
> cpu57 14 0 377 71346 0 0 316 0 0 0
> cpu58 2 0 109 71954 0 0 73 0 0 0
> cpu59 1 0 8 72174 0 0 3 0 0 0
> cpu60 0 0 0 72191 0 0 0 0 0 0
> cpu61 86 0 313 71749 0 0 0 0 0 0
> cpu62 3 0 9 72173 0 0 0 0 0 0
> cpu63 2 0 8 72175 0 0 0 0 0 0
> cpu64 0 0 2 72178 0 0 0 0 0 0
> cpu65 0 0 2 72177 0 0 0 0 0 0
> cpu66 0 0 2 72175 0 0 0 0 0 0
> cpu67 0 0 2 72173 0 0 0 0 0 0
> cpu68 0 0 4 72169 0 0 0 0 0 0
> cpu69 0 0 2 72168 0 0 0 0 0 0
> cpu70 0 0 1 72166 0 0 0 0 0 0
> cpu71 0 0 1 72164 0 0 0 0 0 0
> cpu72 0 0 0 72163 0 0 0 0 0 0
> cpu73 0 0 0 72161 0 0 0 0 0 0
> cpu74 0 0 1 72158 0 0 0 0 0 0
> cpu75 0 0 0 72156 0 0 0 0 0 0
> cpu76 73 0 330 71709 0 0 0 0 0 0
> cpu77 4 0 10 72137 0 0 0 0 0 0
> cpu78 1 0 4 72145 0 0 0 0 0 0
> cpu79 0 0 50 72077 0 0 0 0 0 0
> cpu80 0 0 5 72138 0 0 0 0 0 0
> cpu81 0 0 1 72141 0 0 0 0 0 0
> cpu82 0 0 66 72074 0 0 0 0 0 0
> cpu83 0 0 1 72137 0 0 0 0 0 0
> cpu84 0 0 0 72136 0 0 0 0 0 0
> cpu85 0 0 1 72133 0 0 0 0 0 0
> cpu86 0 0 0 72131 0 0 0 0 0 0
> cpu87 0 0 0 72129 0 0 0 0 0 0
> cpu88 0 0 1 72127 0 0 0 0 0 0
> cpu89 0 0 0 72124 0 0 0 0 0 0
> cpu90 0 0 0 72122 0 0 0 0 0 0
> cpu91 63 0 1377 70501 0 0 162 0 0 0
> cpu92 2 0 14 72100 0 0 0 0 0 0
> cpu93 1 0 5 72109 0 0 0 0 0 0
> cpu94 0 0 4 72108 0 0 0 0 0 0
> cpu95 1 0 4 72106 0 0 0 0 0 0
> cpu96 0 0 1 72106 0 0 0 0 0 0
> cpu97 0 0 0 72105 0 0 0 0 0 0
> cpu98 0 0 1 72102 0 0 0 0 0 0
> cpu99 0 0 2 72099 0 0 0 0 0 0
> cpu100 0 0 1 72099 0 0 0 0 0 0
> cpu101 0 0 0 72096 0 0 0 0 0 0
> cpu102 0 0 0 72094 0 0 0 0 0 0
> cpu103 0 0 1 72092 0 0 0 0 0 0
> cpu104 0 0 0 72089 0 0 0 0 0 0
> cpu105 0 0 1 72086 0 0 0 0 0 0
> cpu106 66 0 363 71601 0 0 9 0 0 0
> cpu107 2 0 4 72075 0 0 0 0 0 0
> cpu108 0 0 2 72077 0 0 0 0 0 0
> cpu109 0 0 8 72069 0 0 0 0 0 0
> cpu110 0 0 1 72073 0 0 0 0 0 0
> cpu111 0 0 9 72063 0 0 0 0 0 0
> cpu112 0 0 1 72070 0 0 0 0 0 0
> cpu113 0 0 0 72068 0 0 0 0 0 0
> cpu114 0 0 1 72064 0 0 0 0 0 0
> cpu115 0 0 1 72063 0 0 0 0 0 0
> cpu116 0 0 1 72060 0 0 0 0 0 0
> cpu117 0 0 0 72059 0 0 0 0 0 0
> cpu118 0 0 0 72056 0 0 0 0 0 0
> cpu119 0 0 1 72053 0 0 0 0 0 0
> intr 5639794 132 0 0 9 856 0 0 0 1 0 0 0 0 0 0 0 31 0 0 0 0 0 0 311 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 0 2794 395 411 385 399 378 399 375 396 384 383 396 379 470 386 375 375 386 382 512 375 375 375 375 375 375 375 375 375 375 375 375 378 375 375 375 375 375 375 375 375 375 379 381 388 375 375 375 376 379 375 375 375 375 375 375 375 375 375 375 375 382 375 1 78 64 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> ctxt 84441175
> btime 1401871947
> processes 23608
> procs_running 2
> procs_blocked 0
> softirq 84702758 9 1574595 174 82369760 28 0 305 456968 84738 216181
> 
> Thanks,
> Fengguang
> 
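For reference, the per-CPU rows quoted above follow the standard /proc/stat column order: after the `cpuN` label come user, nice, system, idle, iowait, irq, softirq, steal, guest, guest_nice (all in USER_HZ ticks). A minimal sketch (hypothetical helper names, not from the report) of parsing such rows to spot the busy CPUs:

```python
# Sketch: parse per-CPU /proc/stat rows and compute a busy fraction.
# Column order per proc(5): user nice system idle iowait irq softirq
# steal guest guest_nice, all in USER_HZ ticks.
SAMPLE = """\
cpu36 103 0 8569 60806 0 0 2306 0 0 0
cpu60 0 0 0 72191 0 0 0 0 0 0
"""

FIELDS = ("user", "nice", "system", "idle", "iowait",
          "irq", "softirq", "steal", "guest", "guest_nice")

def parse_cpu_lines(text):
    """Return {cpu_name: {field: ticks}} for each per-CPU row."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        # Skip the aggregate "cpu" row; keep only cpu0, cpu1, ...
        if parts and parts[0].startswith("cpu") and parts[0] != "cpu":
            stats[parts[0]] = dict(zip(FIELDS, map(int, parts[1:])))
    return stats

def busy_fraction(row):
    """Fraction of ticks spent outside idle and iowait."""
    total = sum(row.values())
    return (total - row["idle"] - row["iowait"]) / total if total else 0.0

if __name__ == "__main__":
    for name, row in sorted(parse_cpu_lines(SAMPLE).items()):
        print(name, round(busy_fraction(row), 4))
```

Applied to the dump above, this makes the skew obvious: cpu30-cpu58 show substantial system and softirq time while most other CPUs are essentially idle, consistent with the networking load being concentrated on a subset of CPUs.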




Thread overview: 4+ messages
2014-06-03 10:08 [rcu] 5057f55e543: -23.5% qperf.udp.recv_bw Jet Chen
2014-06-03 14:17 ` Paul E. McKenney
2014-06-04 12:33   ` Fengguang Wu
2014-06-04 22:17     ` Paul E. McKenney
