* [lkp] [x86/build] b2c51106c75: -18.1% will-it-scale.per_process_ops
@ 2015-08-01  5:17 kernel test robot
From: kernel test robot @ 2015-08-01  5:17 UTC (permalink / raw)
  To: Andy Lutomirski; +Cc: lkp, LKML, Ingo Molnar

[-- Attachment #1: Type: text/plain, Size: 6227 bytes --]

FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm
commit b2c51106c7581866c37ffc77c5d739f3d4b7cbc9 ("x86/build: Fix detection of GCC -mpreferred-stack-boundary support")
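
For context: the probe for -mpreferred-stack-boundary=3 only succeeds when
-mno-sse is passed as well, since 64-bit GCC rejects a preferred stack
boundary below 4 while SSE is enabled; the commit fixes the Makefile test
accordingly. A minimal sketch of the underlying gcc behaviour (plain gcc
invocations, not the kernel's verbatim cc-option test):

        # Without -mno-sse, 64-bit gcc rejects the 8-byte stack boundary:
        gcc -mpreferred-stack-boundary=3 -S -x c /dev/null -o /dev/null
        # With -mno-sse, as the kernel is actually compiled, it is accepted:
        gcc -mno-sse -mpreferred-stack-boundary=3 -S -x c /dev/null -o /dev/null \
                && echo supported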


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  wsm/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/readseek2

commit: 
  f2a50f8b7da45ff2de93a71393e715a2ab9f3b68
  b2c51106c7581866c37ffc77c5d739f3d4b7cbc9

f2a50f8b7da45ff2 b2c51106c7581866c37ffc77c5 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    879002 ±  0%     -18.1%     720270 ±  7%  will-it-scale.per_process_ops
      0.02 ±  0%     +34.5%       0.02 ±  7%  will-it-scale.scalability
  26153173 ±  0%      +7.0%   27977076 ±  0%  will-it-scale.time.voluntary_context_switches
     15.70 ±  2%     +14.5%      17.98 ±  0%  time.user_time
    370683 ±  0%      +6.2%     393491 ±  0%  vmstat.system.cs
    830343 ± 56%     -54.0%     382128 ± 39%  cpuidle.C1E-NHM.time
    788.25 ± 14%     -21.7%     617.25 ± 16%  cpuidle.C1E-NHM.usage
      3947 ±  2%     +10.6%       4363 ±  3%  slabinfo.kmalloc-192.active_objs
      3947 ±  2%     +10.6%       4363 ±  3%  slabinfo.kmalloc-192.num_objs
   1082762 ±162%    -100.0%       0.00 ± -1%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
   1082762 ±162%    -100.0%       0.00 ± -1%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
   1082762 ±162%    -100.0%       0.00 ± -1%  latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      2.58 ±  8%     +19.5%       3.09 ±  3%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.finish_wait.__wait_on_bit_lock.__lock_page.find_lock_entry
      7.02 ±  3%      +9.2%       7.67 ±  2%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.prepare_to_wait_exclusive.__wait_on_bit_lock.__lock_page.find_lock_entry
      3.07 ±  2%     +14.8%       3.53 ±  3%  perf-profile.cpu-cycles.finish_wait.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp
      3.05 ±  5%      -8.4%       2.79 ±  4%  perf-profile.cpu-cycles.hrtimer_start_range_ns.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry
      0.98 ±  3%     -25.1%       0.74 ±  7%  perf-profile.cpu-cycles.is_ftrace_trampoline.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
      1.82 ± 18%     +46.6%       2.67 ±  3%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.finish_wait.__wait_on_bit_lock.__lock_page
      8.05 ±  3%      +9.5%       8.82 ±  3%  perf-profile.cpu-cycles.prepare_to_wait_exclusive.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp
      7.75 ± 34%     -64.5%       2.75 ± 64%  sched_debug.cfs_rq[2]:/.nr_spread_over
      1135 ± 20%     -43.6%     640.75 ± 49%  sched_debug.cfs_rq[3]:/.blocked_load_avg
      1215 ± 21%     -43.1%     691.50 ± 46%  sched_debug.cfs_rq[3]:/.tg_load_contrib
     38.50 ± 21%    +129.9%      88.50 ± 36%  sched_debug.cfs_rq[4]:/.load
     26.00 ± 20%     +98.1%      51.50 ± 46%  sched_debug.cfs_rq[4]:/.runnable_load_avg
    128.25 ± 18%    +227.5%     420.00 ± 43%  sched_debug.cfs_rq[4]:/.utilization_load_avg
      1015 ± 78%    +101.1%       2042 ± 25%  sched_debug.cfs_rq[6]:/.blocked_load_avg
      1069 ± 72%    +100.2%       2140 ± 23%  sched_debug.cfs_rq[6]:/.tg_load_contrib
     88.75 ± 14%     -47.3%      46.75 ± 36%  sched_debug.cfs_rq[9]:/.load
     59.25 ± 23%     -41.4%      34.75 ± 34%  sched_debug.cfs_rq[9]:/.runnable_load_avg
    315.50 ± 45%     -64.6%     111.67 ±  1%  sched_debug.cfs_rq[9]:/.utilization_load_avg
   2246758 ±  7%     +87.6%    4213925 ± 65%  sched_debug.cpu#0.nr_switches
   2249376 ±  7%     +87.4%    4215969 ± 65%  sched_debug.cpu#0.sched_count
   1121438 ±  7%     +81.0%    2030313 ± 61%  sched_debug.cpu#0.sched_goidle
   1151160 ±  7%     +86.5%    2146608 ± 64%  sched_debug.cpu#0.ttwu_count
     33.75 ± 15%     -22.2%      26.25 ±  6%  sched_debug.cpu#1.cpu_load[3]
     33.25 ± 10%     -18.0%      27.25 ±  7%  sched_debug.cpu#1.cpu_load[4]
     40.00 ± 18%     +24.4%      49.75 ± 18%  sched_debug.cpu#10.cpu_load[2]
     39.25 ± 14%     +22.3%      48.00 ± 10%  sched_debug.cpu#10.cpu_load[3]
     39.50 ± 15%     +20.3%      47.50 ±  6%  sched_debug.cpu#10.cpu_load[4]
   5269004 ±  1%     +27.8%    6731790 ± 30%  sched_debug.cpu#10.nr_switches
   5273193 ±  1%     +27.8%    6736526 ± 30%  sched_debug.cpu#10.sched_count
   2633974 ±  1%     +27.8%    3365271 ± 30%  sched_debug.cpu#10.sched_goidle
   2644149 ±  1%     +26.9%    3356318 ± 30%  sched_debug.cpu#10.ttwu_count
     30.75 ± 15%     +66.7%      51.25 ± 31%  sched_debug.cpu#11.cpu_load[1]
   5116609 ±  4%     +34.5%    6884238 ± 33%  sched_debug.cpu#9.nr_switches
   5120531 ±  4%     +34.5%    6889156 ± 33%  sched_debug.cpu#9.sched_count
   2557822 ±  4%     +34.5%    3441428 ± 33%  sched_debug.cpu#9.sched_goidle
      0.00 ±141%  +4.2e+05%       4.76 ±173%  sched_debug.rt_rq[10]:/.rt_time


wsm: Westmere
Memory: 6G

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang

[-- Attachment #2: job.yaml --]
[-- Type: text/plain, Size: 3216 bytes --]

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: will-it-scale
default-monitors:
  wait: activate-monitor
  kmsg: 
  uptime: 
  iostat: 
  vmstat: 
  numa-numastat: 
  numa-vmstat: 
  numa-meminfo: 
  proc-vmstat: 
  proc-stat:
    interval: 10
  meminfo: 
  slabinfo: 
  interrupts: 
  lock_stat: 
  latency_stats: 
  softirqs: 
  bdi_dev_mapping: 
  diskstats: 
  nfsstat: 
  cpuidle: 
  cpufreq-stats: 
  turbostat: 
  pmeter: 
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer: 
  watchdog: 
commit: 11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b
model: Westmere
memory: 6G
nr_hdd_partitions: 1
hdd_partitions: ata-ATAPI_iHES208_2_3782208017_250915404
swap_partitions: 
rootfs_partition: 
netconsole_port: 6667
category: benchmark
perf-profile:
  freq: 800
will-it-scale:
  test: readseek2
queue: cyclic
testbox: wsm
tbox_group: wsm
kconfig: x86_64-rhel
enqueue_time: 2015-07-28 04:08:48.371752899 +08:00
user: lkp
compiler: gcc-4.9
head_commit: 11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b
base_commit: cbfe8fa6cd672011c755c3cd85c9ffd4e2d10a6f
branch: linux-devel/devel-hourly-2015072818
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b/vmlinuz-4.2.0-rc4-02469-g11b9de4"
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/will-it-scale/performance-readseek2/wsm/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b/0"
job_file: "/lkp/scheduled/wsm/cyclic_will-it-scale-performance-readseek2-x86_64-rhel-CYCLIC_HEAD-11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b-20150728-22560-192715z-0.yaml"
dequeue_time: 2015-07-29 00:56:27.676781673 +08:00
nr_cpu: "$(nproc)"
max_uptime: 1500
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/wsm/cyclic_will-it-scale-performance-readseek2-x86_64-rhel-CYCLIC_HEAD-11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b-20150728-22560-192715z-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2015072818
- commit=11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b/vmlinuz-4.2.0-rc4-02469-g11b9de4
- max_uptime=1500
- RESULT_ROOT=/result/will-it-scale/performance-readseek2/wsm/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b/0
- LKP_SERVER=inn
- |2-


  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/11b9de4b33595f8168c4a8fa2bff3ca4c9b38c7b/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/will-it-scale.cgz"
job_state: finished
loadavg: 9.93 5.11 2.06 1/186 4844
start_time: '1438102760'
end_time: '1438103064'
version: "/lkp/lkp/.src-20150728-100132"

[-- Attachment #3: reproduce --]
[-- Type: text/plain, Size: 918 bytes --]

# Pin every CPU's cpufreq scaling governor to "performance" for the run:
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
# Run the will-it-scale readseek2 testcase (arguments as generated by lkp):
./runtest.py readseek2 32 both 1 6 9 12


* Re: [lkp] [x86/build] b2c51106c75: -18.1% will-it-scale.per_process_ops
From: Ingo Molnar @ 2015-08-05  8:38 UTC (permalink / raw)
  To: kernel test robot
  Cc: Andy Lutomirski, lkp, LKML, Peter Zijlstra, Thomas Gleixner,
	Denys Vlasenko


* kernel test robot <ying.huang@intel.com> wrote:

> FYI, we noticed the following changes on
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm
> commit b2c51106c7581866c37ffc77c5d739f3d4b7cbc9 ("x86/build: Fix detection of GCC -mpreferred-stack-boundary support")

Does the performance regression go away reproducibly if you do:

   git revert b2c51106c7581866c37ffc77c5d739f3d4b7cbc9

?

Thanks,

	Ingo


* Re: [lkp] [x86/build] b2c51106c75: -18.1% will-it-scale.per_process_ops
From: Andy Lutomirski @ 2015-08-10 20:39 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: kernel test robot, Andy Lutomirski, LKP, LKML, Peter Zijlstra,
	Thomas Gleixner, Denys Vlasenko

On Wed, Aug 5, 2015 at 1:38 AM, Ingo Molnar <mingo@kernel.org> wrote:
>
> * kernel test robot <ying.huang@intel.com> wrote:
>
>> FYI, we noticed the following changes on
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm
>> commit b2c51106c7581866c37ffc77c5d739f3d4b7cbc9 ("x86/build: Fix detection of GCC -mpreferred-stack-boundary support")
>
> Does the performance regression go away reproducibly if you do:
>
>    git revert b2c51106c7581866c37ffc77c5d739f3d4b7cbc9
>
> ?

FWIW, I spot-checked the generated code.  All I saw was the deletion of
some dummy subtractions from rsp to align the stack for children,
changes to stack frame offsets, and a couple of instances in which
instructions got reordered.
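
A rough sketch of that kind of spot-check, assuming vmlinux images built
from the two commits are at hand (file names here are hypothetical):

    # Disassemble both builds and diff them; the dropped stack-alignment
    # padding shows up as deleted 'sub ...,%rsp' instructions:
    objdump -d vmlinux.f2a50f8b > before.s
    objdump -d vmlinux.b2c51106 > after.s
    diff -u before.s after.s | grep -E '^[-+].*%rsp' | head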

--Andy


* Re: [lkp] [x86/build] b2c51106c75: -18.1% will-it-scale.per_process_ops
From: Huang Ying @ 2015-09-02  2:40 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Andy Lutomirski, lkp, LKML, Peter Zijlstra, Thomas Gleixner,
	Denys Vlasenko

On Wed, 2015-08-05 at 10:38 +0200, Ingo Molnar wrote:
> * kernel test robot <ying.huang@intel.com> wrote:
> 
> > FYI, we noticed the following changes on
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm
> > commit b2c51106c7581866c37ffc77c5d739f3d4b7cbc9 ("x86/build: Fix detection of GCC -mpreferred-stack-boundary support")
> 
> Does the performance regression go away reproducibly if you do:
> 
>    git revert b2c51106c7581866c37ffc77c5d739f3d4b7cbc9
> 
> ?

Sorry for replying so late!

Reverting the commit restores part of the performance, as shown below.
parent commit: f2a50f8b7da45ff2de93a71393e715a2ab9f3b68
the commit:    b2c51106c7581866c37ffc77c5d739f3d4b7cbc9
revert commit: 987d12601a4a82cc2f2151b1be704723eb84cb9d
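
For reference, the revert kernel can be generated on top of the commit
under test (the resulting commit id is specific to the local tree):

    git checkout b2c51106c7581866c37ffc77c5d739f3d4b7cbc9
    git revert --no-edit b2c51106c7581866c37ffc77c5d739f3d4b7cbc9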

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  wsm/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/readseek2

commit: 
  f2a50f8b7da45ff2de93a71393e715a2ab9f3b68
  b2c51106c7581866c37ffc77c5d739f3d4b7cbc9
  987d12601a4a82cc2f2151b1be704723eb84cb9d

f2a50f8b7da45ff2 b2c51106c7581866c37ffc77c5 987d12601a4a82cc2f2151b1be 
---------------- -------------------------- -------------------------- 
         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \  
    879002 ±  0%     -18.1%     720270 ±  7%      -3.6%     847011 ±  2%  will-it-scale.per_process_ops
      0.02 ±  0%     +34.5%       0.02 ±  7%      +5.6%       0.02 ±  2%  will-it-scale.scalability
     11144 ±  0%      +0.1%      11156 ±  0%     +10.6%      12320 ±  0%  will-it-scale.time.minor_page_faults
    769.30 ±  0%      -0.9%     762.15 ±  0%      +1.1%     777.42 ±  0%  will-it-scale.time.system_time
  26153173 ±  0%      +7.0%   27977076 ±  0%      +3.5%   27078124 ±  0%  will-it-scale.time.voluntary_context_switches
      2964 ±  2%      +1.4%       3004 ±  1%     -51.9%       1426 ±  2%  proc-vmstat.pgactivate
      0.06 ± 27%    +154.5%       0.14 ± 44%    +122.7%       0.12 ± 24%  turbostat.CPU%c3
    370683 ±  0%      +6.2%     393491 ±  0%      +2.4%     379575 ±  0%  vmstat.system.cs
     11144 ±  0%      +0.1%      11156 ±  0%     +10.6%      12320 ±  0%  time.minor_page_faults
     15.70 ±  2%     +14.5%      17.98 ±  0%      +1.5%      15.94 ±  1%  time.user_time
    830343 ± 56%     -54.0%     382128 ± 39%     -22.3%     645308 ± 65%  cpuidle.C1E-NHM.time
    788.25 ± 14%     -21.7%     617.25 ± 16%     -12.3%     691.00 ±  3%  cpuidle.C1E-NHM.usage
   2489132 ± 20%     +79.3%    4464147 ± 33%     +78.4%    4440574 ± 21%  cpuidle.C3-NHM.time
   1082762 ±162%    -100.0%       0.00 ± -1%    +189.3%    3132030 ±110%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    102189 ±  2%      -2.1%     100087 ±  5%     -32.9%      68568 ±  2%  latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
   1082762 ±162%    -100.0%       0.00 ± -1%    +289.6%    4217977 ±109%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
   1082762 ±162%    -100.0%       0.00 ± -1%    +478.5%    6264061 ±110%  latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      5.10 ±  2%      -8.0%       4.69 ±  1%     +13.0%       5.76 ±  1%  perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
      2.58 ±  8%     +19.5%       3.09 ±  3%      -1.8%       2.54 ± 11%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.finish_wait.__wait_on_bit_lock.__lock_page.find_lock_entry
      7.02 ±  3%      +9.2%       7.67 ±  2%      +7.1%       7.52 ±  3%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.prepare_to_wait_exclusive.__wait_on_bit_lock.__lock_page.find_lock_entry
      3.07 ±  2%     +14.8%       3.53 ±  3%      -1.4%       3.03 ±  5%  perf-profile.cpu-cycles.finish_wait.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp
      3.05 ±  5%      -8.4%       2.79 ±  4%      -5.2%       2.90 ±  5%  perf-profile.cpu-cycles.hrtimer_start_range_ns.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry
      0.89 ±  5%      -7.6%       0.82 ±  3%     +16.3%       1.03 ±  5%  perf-profile.cpu-cycles.is_ftrace_trampoline.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
      0.98 ±  3%     -25.1%       0.74 ±  7%     -16.8%       0.82 ±  2%  perf-profile.cpu-cycles.is_ftrace_trampoline.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
      1.58 ±  3%      -5.2%       1.50 ±  2%     +44.2%       2.28 ±  1%  perf-profile.cpu-cycles.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
      1.82 ± 18%     +46.6%       2.67 ±  3%     -32.6%       1.23 ± 56%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.finish_wait.__wait_on_bit_lock.__lock_page
      8.05 ±  3%      +9.5%       8.82 ±  3%      +5.4%       8.49 ±  2%  perf-profile.cpu-cycles.prepare_to_wait_exclusive.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp
      1.16 ±  2%      +6.9%       1.25 ±  5%     +11.4%       1.30 ±  5%  perf-profile.cpu-cycles.put_page.shmem_file_read_iter.__vfs_read.vfs_read.sys_read
     11102 ±  1%      +0.0%      11102 ±  1%     -95.8%     468.00 ±  0%  slabinfo.Acpi-ParseExt.active_objs
    198.25 ±  1%      +0.0%     198.25 ±  1%     -93.9%      12.00 ±  0%  slabinfo.Acpi-ParseExt.active_slabs
     11102 ±  1%      +0.0%      11102 ±  1%     -95.8%     468.00 ±  0%  slabinfo.Acpi-ParseExt.num_objs
    198.25 ±  1%      +0.0%     198.25 ±  1%     -93.9%      12.00 ±  0%  slabinfo.Acpi-ParseExt.num_slabs
    341.25 ± 14%      +2.9%     351.00 ± 11%    -100.0%       0.00 ± -1%  slabinfo.blkdev_ioc.active_objs
    341.25 ± 14%      +2.9%     351.00 ± 11%    -100.0%       0.00 ± -1%  slabinfo.blkdev_ioc.num_objs
    438.00 ± 16%      -8.3%     401.50 ± 20%    -100.0%       0.00 ± -1%  slabinfo.file_lock_ctx.active_objs
    438.00 ± 16%      -8.3%     401.50 ± 20%    -100.0%       0.00 ± -1%  slabinfo.file_lock_ctx.num_objs
      4398 ±  1%      +1.4%       4462 ±  0%     -14.5%       3761 ±  2%  slabinfo.ftrace_event_field.active_objs
      4398 ±  1%      +1.4%       4462 ±  0%     -14.5%       3761 ±  2%  slabinfo.ftrace_event_field.num_objs
      3947 ±  2%     +10.6%       4363 ±  3%    +107.1%       8175 ±  2%  slabinfo.kmalloc-192.active_objs
     93.00 ±  2%     +10.8%     103.00 ±  3%    +120.2%     204.75 ±  2%  slabinfo.kmalloc-192.active_slabs
      3947 ±  2%     +10.6%       4363 ±  3%    +118.4%       8620 ±  2%  slabinfo.kmalloc-192.num_objs
     93.00 ±  2%     +10.8%     103.00 ±  3%    +120.2%     204.75 ±  2%  slabinfo.kmalloc-192.num_slabs
      1794 ±  0%      +3.2%       1851 ±  2%     +12.2%       2012 ±  3%  slabinfo.trace_event_file.active_objs
      1794 ±  0%      +3.2%       1851 ±  2%     +12.2%       2012 ±  3%  slabinfo.trace_event_file.num_objs
      7065 ±  7%      -5.4%       6684 ±  8%    -100.0%       0.00 ± -1%  slabinfo.vm_area_struct.active_objs
    160.50 ±  7%      -5.5%     151.75 ±  8%    -100.0%       0.00 ± -1%  slabinfo.vm_area_struct.active_slabs
      7091 ±  7%      -5.6%       6694 ±  8%    -100.0%       0.00 ± -1%  slabinfo.vm_area_struct.num_objs
    160.50 ±  7%      -5.5%     151.75 ±  8%    -100.0%       0.00 ± -1%  slabinfo.vm_area_struct.num_slabs
    857.50 ± 29%     +75.7%       1506 ± 78%    +157.6%       2209 ± 33%  sched_debug.cfs_rq[11]:/.blocked_load_avg
     52.75 ± 29%     -29.4%      37.25 ± 60%    +103.3%     107.25 ± 43%  sched_debug.cfs_rq[11]:/.load
    914.50 ± 29%     +69.9%       1553 ± 77%    +155.6%       2337 ± 32%  sched_debug.cfs_rq[11]:/.tg_load_contrib
      7.75 ± 34%     -64.5%       2.75 ± 64%     -12.9%       6.75 ±115%  sched_debug.cfs_rq[2]:/.nr_spread_over
      1135 ± 20%     -43.6%     640.75 ± 49%     -18.8%     922.50 ± 51%  sched_debug.cfs_rq[3]:/.blocked_load_avg
      1215 ± 21%     -43.1%     691.50 ± 46%     -21.3%     956.25 ± 50%  sched_debug.cfs_rq[3]:/.tg_load_contrib
     38.50 ± 21%    +129.9%      88.50 ± 36%     +96.1%      75.50 ± 56%  sched_debug.cfs_rq[4]:/.load
     26.00 ± 20%     +98.1%      51.50 ± 46%    +142.3%      63.00 ± 53%  sched_debug.cfs_rq[4]:/.runnable_load_avg
    128.25 ± 18%    +227.5%     420.00 ± 43%    +152.4%     323.75 ± 68%  sched_debug.cfs_rq[4]:/.utilization_load_avg
     28320 ± 12%      -6.3%      26545 ± 11%     -19.4%      22813 ± 13%  sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
      1015 ± 78%    +101.1%       2042 ± 25%     +64.4%       1669 ± 73%  sched_debug.cfs_rq[6]:/.blocked_load_avg
      1069 ± 72%    +100.2%       2140 ± 23%     +61.2%       1722 ± 70%  sched_debug.cfs_rq[6]:/.tg_load_contrib
    619.25 ± 12%      -6.3%     580.25 ± 11%     -19.2%     500.25 ± 13%  sched_debug.cfs_rq[6]:/.tg_runnable_contrib
     88.75 ± 14%     -47.3%      46.75 ± 36%     -24.5%      67.00 ± 11%  sched_debug.cfs_rq[9]:/.load
     59.25 ± 23%     -41.4%      34.75 ± 34%      -6.3%      55.50 ± 12%  sched_debug.cfs_rq[9]:/.runnable_load_avg
    315.50 ± 45%     -64.6%     111.67 ±  1%     -12.1%     277.25 ±  3%  sched_debug.cfs_rq[9]:/.utilization_load_avg
   2246758 ±  7%     +87.6%    4213925 ± 65%      -2.2%    2197475 ±  4%  sched_debug.cpu#0.nr_switches
   2249376 ±  7%     +87.4%    4215969 ± 65%      -2.2%    2199216 ±  4%  sched_debug.cpu#0.sched_count
   1121438 ±  7%     +81.0%    2030313 ± 61%      -2.2%    1096479 ±  4%  sched_debug.cpu#0.sched_goidle
   1151160 ±  7%     +86.5%    2146608 ± 64%      -1.9%    1129264 ±  3%  sched_debug.cpu#0.ttwu_count
     33.75 ± 15%     -22.2%      26.25 ±  6%      -8.9%      30.75 ± 10%  sched_debug.cpu#1.cpu_load[3]
     33.25 ± 10%     -18.0%      27.25 ±  7%      -3.8%      32.00 ± 11%  sched_debug.cpu#1.cpu_load[4]
     41.75 ± 29%     +23.4%      51.50 ± 33%     +53.9%      64.25 ± 16%  sched_debug.cpu#10.cpu_load[1]
     40.00 ± 18%     +24.4%      49.75 ± 18%     +49.4%      59.75 ±  8%  sched_debug.cpu#10.cpu_load[2]
     39.25 ± 14%     +22.3%      48.00 ± 10%     +38.9%      54.50 ±  7%  sched_debug.cpu#10.cpu_load[3]
     39.50 ± 15%     +20.3%      47.50 ±  6%     +30.4%      51.50 ±  7%  sched_debug.cpu#10.cpu_load[4]
   5269004 ±  1%     +27.8%    6731790 ± 30%      +1.4%    5342560 ±  2%  sched_debug.cpu#10.nr_switches
   5273193 ±  1%     +27.8%    6736526 ± 30%      +1.4%    5345791 ±  2%  sched_debug.cpu#10.sched_count
   2633974 ±  1%     +27.8%    3365271 ± 30%      +1.4%    2670901 ±  2%  sched_debug.cpu#10.sched_goidle
   2644149 ±  1%     +26.9%    3356318 ± 30%      +1.9%    2693295 ±  1%  sched_debug.cpu#10.ttwu_count
     26.50 ± 37%    +116.0%      57.25 ± 48%    +109.4%      55.50 ± 29%  sched_debug.cpu#11.cpu_load[0]
     30.75 ± 15%     +66.7%      51.25 ± 31%     +65.9%      51.00 ± 21%  sched_debug.cpu#11.cpu_load[1]
     33.50 ± 10%     +37.3%      46.00 ± 22%     +39.6%      46.75 ± 17%  sched_debug.cpu#11.cpu_load[2]
     37.00 ± 11%     +15.5%      42.75 ± 19%     +29.7%      48.00 ± 11%  sched_debug.cpu#11.cpu_load[4]
    508300 ± 11%      -0.6%     505024 ±  1%     +18.1%     600291 ±  7%  sched_debug.cpu#4.avg_idle
    454696 ±  9%      -5.9%     427894 ± 25%     +21.8%     553608 ±  4%  sched_debug.cpu#5.avg_idle
     66.00 ± 27%     +11.0%      73.25 ± 37%     -46.6%      35.25 ± 22%  sched_debug.cpu#6.cpu_load[0]
     62.00 ± 36%     +12.5%      69.75 ± 45%     -41.5%      36.25 ± 11%  sched_debug.cpu#6.cpu_load[1]
    247681 ± 19%     +21.0%     299747 ± 10%     +28.7%     318764 ± 17%  sched_debug.cpu#8.avg_idle
   5116609 ±  4%     +34.5%    6884238 ± 33%     +55.2%    7942254 ± 34%  sched_debug.cpu#9.nr_switches
   5120531 ±  4%     +34.5%    6889156 ± 33%     +55.2%    7945270 ± 34%  sched_debug.cpu#9.sched_count
   2557822 ±  4%     +34.5%    3441428 ± 33%     +55.2%    3970337 ± 34%  sched_debug.cpu#9.sched_goidle
   2565307 ±  4%     +32.9%    3410042 ± 33%     +54.0%    3949696 ± 34%  sched_debug.cpu#9.ttwu_count
      0.00 ±141%  +4.2e+05%       4.76 ±173%     +47.7%       0.00 ±-59671%  sched_debug.rt_rq[10]:/.rt_time
    155259 ±  0%      +0.0%     155259 ±  0%     -42.2%      89723 ±  0%  sched_debug.sysctl_sched.sysctl_sched_features

Best Regards,
Huang, Ying



