public inbox for linux-kernel@vger.kernel.org
From: Ye Xiaolong <xiaolong.ye@intel.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	lkp@01.org, viro@zeniv.linux.org.uk
Subject: Re: [lkp-robot] [fs]  3deb642f0d:  will-it-scale.per_process_ops -8.8% regression
Date: Tue, 26 Jun 2018 14:03:38 +0800	[thread overview]
Message-ID: <20180626060338.GU12146@yexl-desktop> (raw)
In-Reply-To: <20180622150251.GA12802@lst.de>

Hi,

On 06/22, Christoph Hellwig wrote:
>Hi Xiaolong,
>
>can you retest this workload on the following branch:
>
>    git://git.infradead.org/users/hch/vfs.git remove-get-poll-head
>
>Gitweb:
>
>    http://git.infradead.org/users/hch/vfs.git/shortlog/refs/heads/remove-get-poll-head

Here is the comparison between commit 3deb642f0d and commit 8fbedc1 ("fs: replace f_ops->get_poll_head with a static ->f_poll_head pointer") from the remove-get-poll-head branch.

3deb642f0de4c14f  8fbedc19c94fd25a2b9b327015  
----------------  --------------------------  
         %stddev      change         %stddev
             \          |                \  
    457120              -7%     424608        will-it-scale.per_process_ops
    238978                      233862        will-it-scale.per_thread_ops
      1755 ± 13%         8%       1899 ± 18%  will-it-scale.time.voluntary_context_switches
      2332                        2342        will-it-scale.time.system_time
       310                         310        will-it-scale.time.elapsed_time
       310                         310        will-it-scale.time.elapsed_time.max
      4096                        4096        will-it-scale.time.page_size
      0.54                        0.54        will-it-scale.scalability
       807                         806        will-it-scale.time.percent_of_cpu_this_job_got
     17218                       17179        will-it-scale.time.minor_page_faults
      9931                        9862        will-it-scale.time.maximum_resident_set_size
       173              -6%        163        will-it-scale.time.user_time
  49024375              -6%   46155690        will-it-scale.workload
     17818 ± 10%       -19%      14397 ±  4%  will-it-scale.time.involuntary_context_switches
    116842 ± 12%        -4%     112098 ±  5%  interrupts.CAL:Function_call_interrupts
     32735                       32635        vmstat.system.in
      2112 ±  7%       -13%       1845 ±  3%  vmstat.system.cs
       150                         150        turbostat.PkgWatt
       123                         122        turbostat.CorWatt
      1573                        1573        turbostat.Avg_MHz
     15.73              13%      17.77 ± 19%  boot-time.kernel_boot
     15.07              12%      16.93 ± 21%  boot-time.dhcp
       771               8%        834 ± 12%  boot-time.idle
     25.69               8%      27.69 ± 12%  boot-time.boot
      1755 ± 13%         8%       1899 ± 18%  time.voluntary_context_switches
      2332                        2342        time.system_time
       310                         310        time.elapsed_time
       310                         310        time.elapsed_time.max
      4096                        4096        time.page_size
       807                         806        time.percent_of_cpu_this_job_got
     17218                       17179        time.minor_page_faults
      9931                        9862        time.maximum_resident_set_size
       173              -6%        163        time.user_time
     17818 ± 10%       -19%      14397 ±  4%  time.involuntary_context_switches
    428813 ±  9%        57%     672385        proc-vmstat.pgalloc_normal
     41736 ± 15%        22%      50828        proc-vmstat.nr_free_cma
     18116               8%      19506 ±  8%  proc-vmstat.nr_slab_unreclaimable
      1029                        1033        proc-vmstat.nr_page_table_pages
      8453                        8471        proc-vmstat.nr_kernel_stack
      6486                        6499        proc-vmstat.nr_mapped
   3193607                     3194517        proc-vmstat.nr_dirty_threshold
   1594853                     1595308        proc-vmstat.nr_dirty_background_threshold
  16061877                    16064831        proc-vmstat.nr_free_pages
     20009                       20005        proc-vmstat.nr_anon_pages
      6303                        6294        proc-vmstat.numa_other
    799772                      797937        proc-vmstat.pgfault
    667803                      665906        proc-vmstat.pgfree
    666440                      663786        proc-vmstat.numa_hit
    660136                      657491        proc-vmstat.numa_local
    313125                      310062        proc-vmstat.nr_file_pages
      1941 ±  5%                  1917 ±  8%  proc-vmstat.numa_pte_updates
      1448 ±  7%                  1421 ±  9%  proc-vmstat.numa_hint_faults_local
      1596 ±  6%                  1558 ± 10%  proc-vmstat.numa_hint_faults
     12893              -6%      12152 ± 11%  proc-vmstat.nr_slab_reclaimable
     22885            -100%          0        proc-vmstat.nr_indirectly_reclaimable
    245443 ± 16%      -100%          0        proc-vmstat.pgalloc_movable
  19861107 ± 14%        34%   26619357 ± 35%  perf-stat.node-load-misses
  51734389 ±  5%        22%   63014695 ± 25%  perf-stat.node-loads
 1.924e+09 ±  3%        21%   2.32e+09 ±  5%  perf-stat.iTLB-load-misses
 2.342e+09 ±  8%        15%  2.695e+09 ±  4%  perf-stat.cache-references
 3.251e+08 ±  7%        11%  3.622e+08 ±  5%  perf-stat.iTLB-loads
 2.106e+08 ±  4%        10%  2.323e+08 ± 11%  perf-stat.cache-misses
      0.74               7%       0.79        perf-stat.cpi
 1.605e+08 ±  7%         6%  1.703e+08 ±  6%  perf-stat.node-stores
  50804799 ± 16%         5%   53535896 ± 18%  perf-stat.node-store-misses
     27.63 ±  8%         5%      29.07 ±  8%  perf-stat.node-load-miss-rate%
     85.55                       86.49        perf-stat.iTLB-load-miss-rate%
      0.25                        0.25        perf-stat.branch-miss-rate%
    778741                      776946        perf-stat.minor-faults
    778753                      776948        perf-stat.page-faults
     23.93 ±  9%                 23.75 ± 12%  perf-stat.node-store-miss-rate%
      9117 ±  4%                  8969 ±  4%  perf-stat.cpu-migrations
  1.59e+13              -4%  1.533e+13        perf-stat.cpu-cycles
    439328 ±  3%        -5%     419250 ±  5%  perf-stat.path-length
      9.05 ±  8%        -5%       8.62 ±  9%  perf-stat.cache-miss-rate%
      0.44 ± 39%        -6%       0.42 ± 31%  perf-stat.dTLB-load-miss-rate%
      1.35              -7%       1.26        perf-stat.ipc
 3.294e+12 ±  3%        -9%  2.988e+12 ±  3%  perf-stat.dTLB-stores
 5.451e+12 ±  4%       -10%  4.905e+12 ±  4%  perf-stat.dTLB-loads
 4.667e+12 ±  3%       -10%  4.195e+12 ±  4%  perf-stat.branch-instructions
 2.154e+13 ±  3%       -10%  1.935e+13 ±  4%  perf-stat.instructions
 1.161e+10 ±  4%       -10%  1.043e+10 ±  5%  perf-stat.branch-misses
 2.401e+10 ± 34%       -13%  2.093e+10 ± 36%  perf-stat.dTLB-load-misses
    653927 ±  8%       -13%     568299 ±  3%  perf-stat.context-switches
     11203 ±  4%       -26%       8344        perf-stat.instructions-per-iTLB-miss
      0.02 ± 41%       -50%       0.01 ± 47%  perf-stat.dTLB-store-miss-rate%
 7.557e+08 ± 37%       -53%  3.521e+08 ± 49%  perf-stat.dTLB-store-misses

Thanks,
Xiaolong


Thread overview: 24+ messages
2018-06-22  8:27 [lkp-robot] [fs] 3deb642f0d: will-it-scale.per_process_ops -8.8% regression kernel test robot
2018-06-22  9:25 ` Linus Torvalds
2018-06-22  9:56   ` Christoph Hellwig
2018-06-22 10:00     ` Christoph Hellwig
2018-06-22 11:01       ` Al Viro
2018-06-22 11:53         ` Christoph Hellwig
2018-06-22 11:56           ` Al Viro
2018-06-22 12:07             ` Christoph Hellwig
2018-06-22 12:17               ` Al Viro
2018-06-22 12:33                 ` Christoph Hellwig
2018-06-22 12:29                   ` Al Viro
2018-06-22 19:06         ` Sean Paul
2018-06-22 10:02     ` Linus Torvalds
2018-06-22 10:05       ` Linus Torvalds
2018-06-22 15:02 ` Christoph Hellwig
2018-06-22 15:14   ` Al Viro
2018-06-22 15:28     ` Christoph Hellwig
2018-06-22 16:18       ` Christoph Hellwig
2018-06-22 20:02         ` Al Viro
2018-06-23  7:15           ` Christoph Hellwig
2018-06-26  6:03   ` Ye Xiaolong [this message]
2018-06-27  7:07     ` Christoph Hellwig
2018-06-28  0:38       ` Ye Xiaolong
2018-06-28 13:38         ` Christoph Hellwig