public inbox for linux-kernel@vger.kernel.org
From: Aaron Lu <aaron.lu@intel.com>
To: Rik van Riel <riel@redhat.com>
Cc: LKML <linux-kernel@vger.kernel.org>, lkp@01.org
Subject: [LKP] [sched/numa] a43455a1d57: +94.1% proc-vmstat.numa_hint_faults_local
Date: Tue, 29 Jul 2014 13:24:05 +0800	[thread overview]
Message-ID: <53D72FF5.90908@intel.com> (raw)
In-Reply-To: <53d70ee6.JsUEmW5dWsv8dev+%fengguang.wu@intel.com>

[-- Attachment #1: Type: text/plain, Size: 5602 bytes --]

FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit a43455a1d572daf7b730fe12eb747d1e17411365 ("sched/numa: Ensure task_numa_migrate() checks the preferred node")

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
     94500 ~ 3%    +115.6%     203711 ~ 6%  ivb42/hackbench/50%-threads-pipe
     67745 ~ 4%     +64.1%     111174 ~ 5%  lkp-snb01/hackbench/50%-threads-socket
    162245 ~ 3%     +94.1%     314885 ~ 6%  TOTAL proc-vmstat.numa_hint_faults_local
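
The TOTAL row is simply the sum of the per-machine rows above it; a quick
sanity check with the numbers from this table:

```shell
# 94500 (ivb42) + 67745 (lkp-snb01) should equal the TOTAL of 162245
awk 'BEGIN { print 94500 + 67745 }'
```

which prints 162245.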

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
    147474 ~ 3%     +70.6%     251650 ~ 5%  ivb42/hackbench/50%-threads-pipe
     94889 ~ 3%     +46.3%     138815 ~ 5%  lkp-snb01/hackbench/50%-threads-socket
    242364 ~ 3%     +61.1%     390465 ~ 5%  TOTAL proc-vmstat.numa_pte_updates

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
    147104 ~ 3%     +69.5%     249306 ~ 5%  ivb42/hackbench/50%-threads-pipe
     94431 ~ 3%     +43.9%     135902 ~ 5%  lkp-snb01/hackbench/50%-threads-socket
    241535 ~ 3%     +59.5%     385209 ~ 5%  TOTAL proc-vmstat.numa_hint_faults
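
Taken together with the numa_hint_faults_local table above, this implies the
share of hint faults that were local went up noticeably, which is presumably
the intended effect of the commit. A quick calculation from the two TOTAL rows:

```shell
# local hint faults / all hint faults, before and after the commit
awk 'BEGIN {
	printf "before: %.1f%%  after: %.1f%%\n",
		162245 / 241535 * 100, 314885 / 385209 * 100
}'
```

which prints "before: 67.2%  after: 81.7%".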

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
       308 ~ 8%     +24.1%        382 ~ 5%  lkp-snb01/hackbench/50%-threads-socket
       308 ~ 8%     +24.1%        382 ~ 5%  TOTAL numa-vmstat.node0.nr_page_table_pages

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
      1234 ~ 8%     +24.0%       1530 ~ 5%  lkp-snb01/hackbench/50%-threads-socket
      1234 ~ 8%     +24.0%       1530 ~ 5%  TOTAL numa-meminfo.node0.PageTables

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
       381 ~ 6%     -17.9%        313 ~ 6%  lkp-snb01/hackbench/50%-threads-socket
       381 ~ 6%     -17.9%        313 ~ 6%  TOTAL numa-vmstat.node1.nr_page_table_pages

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
      1528 ~ 6%     -18.0%       1253 ~ 6%  lkp-snb01/hackbench/50%-threads-socket
      1528 ~ 6%     -18.0%       1253 ~ 6%  TOTAL numa-meminfo.node1.PageTables

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
     24533 ~ 2%     -16.2%      20560 ~ 3%  ivb42/hackbench/50%-threads-pipe
     13551 ~ 2%     -10.7%      12096 ~ 2%  lkp-snb01/hackbench/50%-threads-socket
     38084 ~ 2%     -14.2%      32657 ~ 3%  TOTAL proc-vmstat.numa_pages_migrated

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
     24533 ~ 2%     -16.2%      20560 ~ 3%  ivb42/hackbench/50%-threads-pipe
     13551 ~ 2%     -10.7%      12096 ~ 2%  lkp-snb01/hackbench/50%-threads-socket
     38084 ~ 2%     -14.2%      32657 ~ 3%  TOTAL proc-vmstat.pgmigrate_success

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
      3538 ~ 7%     +11.6%       3949 ~ 7%  lkp-snb01/hackbench/50%-threads-socket
      3538 ~ 7%     +11.6%       3949 ~ 7%  TOTAL numa-vmstat.node0.nr_anon_pages

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
     14154 ~ 7%     +11.6%      15799 ~ 7%  lkp-snb01/hackbench/50%-threads-socket
     14154 ~ 7%     +11.6%      15799 ~ 7%  TOTAL numa-meminfo.node0.AnonPages

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
      3511 ~ 7%     +11.0%       3898 ~ 7%  lkp-snb01/hackbench/50%-threads-socket
      3511 ~ 7%     +11.0%       3898 ~ 7%  TOTAL numa-vmstat.node0.nr_active_anon

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
     14044 ~ 7%     +11.1%      15597 ~ 7%  lkp-snb01/hackbench/50%-threads-socket
     14044 ~ 7%     +11.1%      15597 ~ 7%  TOTAL numa-meminfo.node0.Active(anon)

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
    187958 ~ 2%     +56.6%     294375 ~ 5%  ivb42/hackbench/50%-threads-pipe
    124490 ~ 2%     +35.0%     168004 ~ 4%  lkp-snb01/hackbench/50%-threads-socket
    312448 ~ 2%     +48.0%     462379 ~ 5%  TOTAL time.minor_page_faults

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
     11.47 ~ 1%      -2.8%      11.15 ~ 1%  ivb42/hackbench/50%-threads-pipe
     11.47 ~ 1%      -2.8%      11.15 ~ 1%  TOTAL turbostat.RAM_W

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
 3.649e+08 ~ 0%      -2.4%  3.562e+08 ~ 0%  lkp-snb01/hackbench/50%-threads-socket
 3.649e+08 ~ 0%      -2.4%  3.562e+08 ~ 0%  TOTAL time.involuntary_context_switches

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
   1924472 ~ 0%      -2.6%    1874425 ~ 0%  ivb42/hackbench/50%-threads-pipe
   1924472 ~ 0%      -2.6%    1874425 ~ 0%  TOTAL vmstat.system.in

ebe06187bf2aec1  a43455a1d572daf7b730fe12e  
---------------  -------------------------  
  1.38e+09 ~ 0%      -1.8%  1.355e+09 ~ 0%  lkp-snb01/hackbench/50%-threads-socket
  1.38e+09 ~ 0%      -1.8%  1.355e+09 ~ 0%  TOTAL time.voluntary_context_switches


Legend:
	~XX%    - standard deviation as a percentage of the mean
	[+-]XX% - change percent relative to the base commit (left column)


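
For reference, the change percent is presumably just the relative delta
between the two columns; for example, for the TOTAL row of
proc-vmstat.numa_hint_faults_local:

```shell
# change% = (new - old) / old * 100
awk 'BEGIN { o = 162245; n = 314885; printf "%+.1f%%\n", (n - o) / o * 100 }'
```

which prints "+94.1%", matching the figure in the subject line.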


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Aaron

[-- Attachment #2: reproduce --]
[-- Type: text/plain, Size: 2931 bytes --]

# Set every CPU (cpu0..cpu31) to the performance cpufreq governor
for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor
do
	echo performance > $g
done
# Run the workload 13 times
for i in $(seq 1 13)
do
	/usr/bin/hackbench -g 16 --threads -l 60000
done

