linux-kernel.vger.kernel.org archive mirror
* [lkp] [sched/numa] b52da86e0a: -1.4% will-it-scale.per_thread_ops
@ 2015-10-21  7:31 kernel test robot
From: kernel test robot @ 2015-10-21  7:31 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: lkp, LKML, Thomas Gleixner, Rik van Riel, Mike Galbraith,
	Mel Gorman, Linus Torvalds, Peter Zijlstra, Ingo Molnar

[-- Attachment #1: Type: text/plain, Size: 6946 bytes --]

FYI, we noticed the following changes on

https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit b52da86e0ad58f096710977fcda856fd84da9233 ("sched/numa: Fix task_tick_fair() from disabling numa_balancing")


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  nhm4/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/readseek1

commit: 
  e2bf1c4b17aff25f07e0d2952d8c1c66643f33fe
  b52da86e0ad58f096710977fcda856fd84da9233

e2bf1c4b17aff25f b52da86e0ad58f096710977fcd 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   1868374 ±  0%      -1.4%    1843108 ±  0%  will-it-scale.per_thread_ops
      0.57 ±  0%     +14.1%       0.65 ± 10%  will-it-scale.scalability
     14052 ±  0%     -29.6%       9896 ±  0%  will-it-scale.time.minor_page_faults
    161.75 ± 45%     -53.2%      75.75 ± 23%  cpuidle.C1E-NHM.usage
     14052 ±  0%     -29.6%       9896 ±  0%  time.minor_page_faults
      6943 ±  0%    -100.0%       0.00 ± -1%  proc-vmstat.numa_hint_faults
      6943 ±  0%    -100.0%       0.00 ± -1%  proc-vmstat.numa_hint_faults_local
      7780 ±  0%    -100.0%       0.00 ± -1%  proc-vmstat.numa_pte_updates
      1.19 ±  4%     -14.1%       1.02 ±  4%  perf-profile.cpu-cycles.__fget_light.sys_lseek.entry_SYSCALL_64_fastpath
      5.47 ±  1%     -11.8%       4.83 ±  2%  perf-profile.cpu-cycles.entry_SYSCALL_64
      1.17 ±  6%     -16.7%       0.98 ± 12%  perf-profile.cpu-cycles.fsnotify.vfs_read.sys_read.entry_SYSCALL_64_fastpath
      1.70 ±  4%     -15.0%       1.45 ±  4%  perf-profile.cpu-cycles.shmem_file_llseek.sys_lseek.entry_SYSCALL_64_fastpath
      5.39 ±  1%     -12.8%       4.70 ±  5%  perf-profile.cpu-cycles.sys_lseek.entry_SYSCALL_64_fastpath
    116.50 ± 14%     -35.4%      75.25 ± 13%  sched_debug.cfs_rq[2]:/.load
     95126 ±  8%     +17.5%     111795 ± 10%  sched_debug.cpu#0.nr_load_updates
      2464 ±  6%     -39.4%       1494 ± 29%  sched_debug.cpu#2.curr->pid
    116.50 ± 14%     -35.4%      75.25 ± 13%  sched_debug.cpu#2.load
      1243 ±  2%     +50.4%       1870 ± 22%  sched_debug.cpu#3.curr->pid
     17134 ± 29%   +7834.3%    1359530 ±110%  sched_debug.cpu#4.nr_switches
     17204 ± 29%   +7802.5%    1359602 ±110%  sched_debug.cpu#4.sched_count
      4538 ± 78%  +12299.8%     562734 ±119%  sched_debug.cpu#4.sched_goidle
    950401 ±  4%      -7.9%     875553 ±  4%  sched_debug.cpu#7.avg_idle
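The %change column above is a plain relative difference between the two commits' means. A minimal sketch of the computation, using the will-it-scale.per_thread_ops values from the table:

```shell
# %change = (new - old) / old * 100, here for will-it-scale.per_thread_ops
old=1868374   # parent commit e2bf1c4b17aff25f
new=1843108   # commit b52da86e0ad58f09
awk -v o="$old" -v n="$new" 'BEGIN { printf "%.1f%%\n", (n - o) / o * 100 }'
# → -1.4%
```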


nhm4: Nehalem
Memory: 4G


                           proc-vmstat.numa_pte_updates

  8000 ++---------------------------------------------------*---*-*-----*-*-+
       |                                                  *   *     *.*     *
  7000 ++                                                 :                 |
       |                                                  :                 |
       |                                                  :                 |
  6000 ++                                                :                  |
       |                                                 :                  |
  5000 ++                                                :                  |
       |                                                 :                  |
  4000 ++                                                :                  |
       |                                                 :                  |
       |                                                :                   |
  3000 ++                                               :                   |
       *.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*..*.*.*.*.*.*.*.*                   |
  2000 ++-------------------------------------------------------------------+


                           proc-vmstat.numa_hint_faults

  7000 ++-------------------------------------------------*-*-*-*-*-*-*-*-*-*
       |                                                  :                 |
  6000 ++                                                 :                 |
       |                                                  :                 |
       |                                                 :                  |
  5000 ++                                                :                  |
       |                                                 :                  |
  4000 ++                                                :                  |
       |                                                 :                  |
  3000 ++                                                :                  |
       |                                                :                   |
       |                                                :                   |
  2000 ++                                               :                   |
       *.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*..*.*.*.*.*.*.*.*                   |
  1000 ++-------------------------------------------------------------------+


                        proc-vmstat.numa_hint_faults_local

  7000 ++-------------------------------------------------*-*-*-*-*-*-*-*-*-*
       |                                                  :                 |
  6000 ++                                                 :                 |
       |                                                  :                 |
       |                                                 :                  |
  5000 ++                                                :                  |
       |                                                 :                  |
  4000 ++                                                :                  |
       |                                                 :                  |
  3000 ++                                                :                  |
       |                                                :                   |
       |                                                :                   |
  2000 ++                                               :                   |
       *.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*..*.*.*.*.*.*.*.*                   |
  1000 ++-------------------------------------------------------------------+


	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
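Whether NUMA balancing was active during a run can be checked from the proc-vmstat counters summarized above; on a live system that would be a grep over /proc/vmstat. A self-contained sketch that filters vmstat-style output (the sample values are the parent commit's column from the table, not live data):

```shell
# Filter the NUMA-balancing counters out of vmstat-style "name value" lines.
# On the parent commit these are nonzero; on b52da86e0a they drop to 0.
sample='numa_pte_updates 7780
numa_hint_faults 6943
numa_hint_faults_local 6943'
printf '%s\n' "$sample" | awk '$1 ~ /^numa_/ { print $1 "=" $2 }'
```

On a real kernel with CONFIG_NUMA_BALANCING, replace the sample with `cat /proc/vmstat`.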


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang

[-- Attachment #2: job.yaml --]
[-- Type: text/plain, Size: 3410 bytes --]

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: will-it-scale
default-monitors:
  wait: activate-monitor
  kmsg: 
  uptime: 
  iostat: 
  vmstat: 
  numa-numastat: 
  numa-vmstat: 
  numa-meminfo: 
  proc-vmstat: 
  proc-stat:
    interval: 10
  meminfo: 
  slabinfo: 
  interrupts: 
  lock_stat: 
  latency_stats: 
  softirqs: 
  bdi_dev_mapping: 
  diskstats: 
  nfsstat: 
  cpuidle: 
  cpufreq-stats: 
  turbostat: 
  pmeter: 
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer: 
  watchdog: 
commit: fd78775cf5c9ccf3d03275f6d0edf0d07b191990
model: Nehalem
nr_cpu: 8
memory: 4G
hdd_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part1"
swap_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part2"
rootfs_partition: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part3"
netconsole_port: 6649
category: benchmark
perf-profile:
  freq: 800
will-it-scale:
  test: readseek1
queue: cyclic
testbox: nhm4
tbox_group: nhm4
kconfig: x86_64-rhel
enqueue_time: 2015-10-19 12:26:12.826631656 +08:00
id: b461bee5bed499ceb2fed5e5bac5945895db5988
user: lkp
compiler: gcc-4.9
head_commit: fd78775cf5c9ccf3d03275f6d0edf0d07b191990
base_commit: 7379047d5585187d1288486d4627873170d0005a
branch: linux-devel/devel-spot-201510190712
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/fd78775cf5c9ccf3d03275f6d0edf0d07b191990/vmlinuz-4.3.0-rc6-06968-gfd78775"
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/will-it-scale/performance-readseek1/nhm4/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/fd78775cf5c9ccf3d03275f6d0edf0d07b191990/0"
job_file: "/lkp/scheduled/nhm4/cyclic_will-it-scale-performance-readseek1-x86_64-rhel-CYCLIC_HEAD-fd78775cf5c9ccf3d03275f6d0edf0d07b191990-20151019-32198-vi9ohb-0.yaml"
dequeue_time: 2015-10-19 16:53:19.866990276 +08:00
max_uptime: 1500
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/nhm4/cyclic_will-it-scale-performance-readseek1-x86_64-rhel-CYCLIC_HEAD-fd78775cf5c9ccf3d03275f6d0edf0d07b191990-20151019-32198-vi9ohb-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-spot-201510190712
- commit=fd78775cf5c9ccf3d03275f6d0edf0d07b191990
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/fd78775cf5c9ccf3d03275f6d0edf0d07b191990/vmlinuz-4.3.0-rc6-06968-gfd78775
- max_uptime=1500
- RESULT_ROOT=/result/will-it-scale/performance-readseek1/nhm4/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/fd78775cf5c9ccf3d03275f6d0edf0d07b191990/0
- LKP_SERVER=inn
- |-
  libata.force=1.5Gbps

  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/fd78775cf5c9ccf3d03275f6d0edf0d07b191990/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/will-it-scale.cgz"
job_state: finished
loadavg: 6.79 3.44 1.39 1/152 4754
start_time: '1445244835'
end_time: '1445245139'
version: "/lkp/lkp/.src-20151019-093117"

[-- Attachment #3: reproduce --]
[-- Type: text/plain, Size: 623 bytes --]

# Pin all CPUs (nr_cpu: 8 on this box) to the performance cpufreq governor
for cpu in /sys/devices/system/cpu/cpu[0-7]; do
        echo performance > "$cpu/cpufreq/scaling_governor"
done
./runtest.py readseek1 32 both 1 4 6 8

