public inbox for linux-kernel@vger.kernel.org
From: Ye Xiaolong <xiaolong.ye@intel.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	lkp@01.org, viro@zeniv.linux.org.uk
Subject: Re: [lkp-robot] [fs]  3deb642f0d:  will-it-scale.per_process_ops -8.8% regression
Date: Thu, 28 Jun 2018 08:38:34 +0800	[thread overview]
Message-ID: <20180628003834.GH18756@yexl-desktop> (raw)
In-Reply-To: <20180627070745.GA9765@lst.de>

On 06/27, Christoph Hellwig wrote:
>On Tue, Jun 26, 2018 at 02:03:38PM +0800, Ye Xiaolong wrote:
>> Hi,
>> 
>> On 06/22, Christoph Hellwig wrote:
>> >Hi Xiaolong,
>> >
>> >can you retest this workload on the following branch:
>> >
>> >    git://git.infradead.org/users/hch/vfs.git remove-get-poll-head
>> >
>> >Gitweb:
>> >
>> >    http://git.infradead.org/users/hch/vfs.git/shortlog/refs/heads/remove-get-poll-head
>> 
>> Here is the comparison between commit 3deb642f0d and commit 8fbedc1 ("fs: replace f_ops->get_poll_head with a static ->f_poll_head pointer") from the remove-get-poll-head branch.
>
>The boot-time numbers in particular, and some of the others, look like
>they include additional changes.
>
>Can you compare the baseline of my tree, which is
>894b8c00 ("Merge tag 'for_v4.18-rc2' of
>git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs"), against
>8fbedc1 ("fs: replace f_ops->get_poll_head with a static ->f_poll_head pointer")?

Here is the updated result:

testcase/path_params/tbox_group/run: will-it-scale/poll2-performance/lkp-sb03

894b8c000ae6c106  8fbedc19c94fd25a2b9b327015  
----------------  --------------------------  
         %stddev      change         %stddev
             \          |                \  
    404611 ±  4%         5%     424608        will-it-scale.per_process_ops
      1489 ± 21%        28%       1899 ± 18%  will-it-scale.time.voluntary_context_switches
  45828560                    46155690        will-it-scale.workload
      2337                        2342        will-it-scale.time.system_time
       806                         806        will-it-scale.time.percent_of_cpu_this_job_got
       310                         310        will-it-scale.time.elapsed_time
       310                         310        will-it-scale.time.elapsed_time.max
      4096                        4096        will-it-scale.time.page_size
    233917                      233862        will-it-scale.per_thread_ops
     17196                       17179        will-it-scale.time.minor_page_faults
      9901                        9862        will-it-scale.time.maximum_resident_set_size
     14705 ±  3%                 14397 ±  4%  will-it-scale.time.involuntary_context_switches
       167                         163        will-it-scale.time.user_time
      0.66 ± 25%       -17%       0.54        will-it-scale.scalability
    120508 ± 15%        -7%     112098 ±  5%  interrupts.CAL:Function_call_interrupts
      1670 ±  3%        10%       1845 ±  3%  vmstat.system.cs
     32707                       32635        vmstat.system.in
       121                         122        turbostat.CorWatt
       149                         150        turbostat.PkgWatt
      1573                        1573        turbostat.Avg_MHz
     17.54 ± 19%                 17.77 ± 19%  boot-time.kernel_boot
       824 ± 12%                   834 ± 12%  boot-time.idle
     27.45 ± 12%                 27.69 ± 12%  boot-time.boot
     16.96 ± 21%                 16.93 ± 21%  boot-time.dhcp
      1489 ± 21%        28%       1899 ± 18%  time.voluntary_context_switches
      2337                        2342        time.system_time
       806                         806        time.percent_of_cpu_this_job_got
       310                         310        time.elapsed_time
       310                         310        time.elapsed_time.max
      4096                        4096        time.page_size
     17196                       17179        time.minor_page_faults
      9901                        9862        time.maximum_resident_set_size
     14705 ±  3%                 14397 ±  4%  time.involuntary_context_switches
       167                         163        time.user_time
     18320               6%      19506 ±  8%  proc-vmstat.nr_slab_unreclaimable
      1518 ±  7%                  1558 ± 10%  proc-vmstat.numa_hint_faults
      1387 ±  8%                  1421 ±  9%  proc-vmstat.numa_hint_faults_local
      1873 ±  5%                  1917 ±  8%  proc-vmstat.numa_pte_updates
     19987                       20005        proc-vmstat.nr_anon_pages
      8464                        8471        proc-vmstat.nr_kernel_stack
    309815                      310062        proc-vmstat.nr_file_pages
     50828                       50828        proc-vmstat.nr_free_cma
  16065590                    16064831        proc-vmstat.nr_free_pages
   3194669                     3194517        proc-vmstat.nr_dirty_threshold
   1595384                     1595308        proc-vmstat.nr_dirty_background_threshold
    798886                      797937        proc-vmstat.pgfault
      6510                        6499        proc-vmstat.nr_mapped
    659089                      657491        proc-vmstat.numa_local
    665458                      663786        proc-vmstat.numa_hit
      1037                        1033        proc-vmstat.nr_page_table_pages
    669923                      665906        proc-vmstat.pgfree
    676982                      672385        proc-vmstat.pgalloc_normal
      6368                        6294        proc-vmstat.numa_other
     13013              -7%      12152 ± 11%  proc-vmstat.nr_slab_reclaimable
  51213164 ± 18%        23%   63014695 ± 25%  perf-stat.node-loads
  22096136 ± 28%        20%   26619357 ± 35%  perf-stat.node-load-misses
 2.079e+08 ±  9%        12%  2.323e+08 ± 11%  perf-stat.cache-misses
    515039 ±  3%        10%     568299 ±  3%  perf-stat.context-switches
 3.283e+08 ± 22%        10%  3.622e+08 ±  5%  perf-stat.iTLB-loads

Thanks,
Xiaolong


Thread overview: 24+ messages
2018-06-22  8:27 [lkp-robot] [fs] 3deb642f0d: will-it-scale.per_process_ops -8.8% regression kernel test robot
2018-06-22  9:25 ` Linus Torvalds
2018-06-22  9:56   ` Christoph Hellwig
2018-06-22 10:00     ` Christoph Hellwig
2018-06-22 11:01       ` Al Viro
2018-06-22 11:53         ` Christoph Hellwig
2018-06-22 11:56           ` Al Viro
2018-06-22 12:07             ` Christoph Hellwig
2018-06-22 12:17               ` Al Viro
2018-06-22 12:33                 ` Christoph Hellwig
2018-06-22 12:29                   ` Al Viro
2018-06-22 19:06         ` Sean Paul
2018-06-22 10:02     ` Linus Torvalds
2018-06-22 10:05       ` Linus Torvalds
2018-06-22 15:02 ` Christoph Hellwig
2018-06-22 15:14   ` Al Viro
2018-06-22 15:28     ` Christoph Hellwig
2018-06-22 16:18       ` Christoph Hellwig
2018-06-22 20:02         ` Al Viro
2018-06-23  7:15           ` Christoph Hellwig
2018-06-26  6:03   ` Ye Xiaolong
2018-06-27  7:07     ` Christoph Hellwig
2018-06-28  0:38       ` Ye Xiaolong [this message]
2018-06-28 13:38         ` Christoph Hellwig
