public inbox for linux-fsdevel@vger.kernel.org
From: kernel test robot <oliver.sang@intel.com>
To: Yu Ma <yu.ma@intel.com>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
	Al Viro <viro@zeniv.linux.org.uk>, Jan Kara <jack@suse.cz>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Christian Brauner <brauner@kernel.org>,
	<linux-fsdevel@vger.kernel.org>, <ying.huang@intel.com>,
	<feng.tang@intel.com>, <fengwei.yin@intel.com>,
	<oliver.sang@intel.com>
Subject: [linux-next:master] [fs/file.c]  0c40bf47cf: will-it-scale.per_thread_ops 6.7% improvement
Date: Wed, 13 Nov 2024 22:14:48 +0800	[thread overview]
Message-ID: <202411132104.c3e2d29f-oliver.sang@intel.com> (raw)

Hello,

kernel test robot noticed a 6.7% improvement of will-it-scale.per_thread_ops on:


commit: 0c40bf47cf2d9e1413b1e62826c89c2341e66e40 ("fs/file.c: add fast path in find_next_fd()")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
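For context on the commit under test: find_next_fd() searches a two-level bitmap (open_fds for per-fd bits, full_fds_bits as a summary of fully-occupied words), and the patch adds a fast path that probes the open_fds word containing the search start before falling back to the summary bitmap. The following is a rough userspace sketch of that idea, not the kernel code — names, types, and helpers are simplified stand-ins:

```c
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define MAX_FDS 256
#define NWORDS (MAX_FDS / BITS_PER_LONG)

static unsigned long open_fds[NWORDS];  /* bit set => fd in use */
static unsigned long full_fds_bits[1];  /* bit set => that open_fds word is all-ones */

/* First zero bit in one word at or after 'from'; BITS_PER_LONG if none. */
static unsigned int zero_bit_in_word(unsigned long w, unsigned int from)
{
	for (unsigned int b = from; b < BITS_PER_LONG; b++)
		if (!(w & (1UL << b)))
			return b;
	return BITS_PER_LONG;
}

static unsigned int find_next_fd(unsigned int start)
{
	unsigned int word = start / BITS_PER_LONG;
	unsigned int bit;

	/* Fast path: the word holding 'start' usually has a free slot. */
	bit = zero_bit_in_word(open_fds[word], start % BITS_PER_LONG);
	if (bit < BITS_PER_LONG)
		return word * BITS_PER_LONG + bit;

	/* Slow path: use the summary bitmap to skip fully-occupied words. */
	for (word = word + 1; word < NWORDS; word++) {
		if (full_fds_bits[0] & (1UL << word))
			continue;  /* word is full, skip it */
		bit = zero_bit_in_word(open_fds[word], 0);
		if (bit < BITS_PER_LONG)
			return word * BITS_PER_LONG + bit;
	}
	return MAX_FDS;  /* table exhausted */
}
```

In the common case (lowest free fd sits in the same word as the search start) this avoids touching the second-level bitmap at all, which fits the reduced perf-stat.overall.path-length reported below.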


testcase: will-it-scale
config: x86_64-rhel-8.3
compiler: gcc-12
test machine: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480CTDX (Sapphire Rapids) with 512G memory
parameters:

	nr_task: 100%
	mode: thread
	test: dup1
	cpufreq_governor: performance
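The dup1 case stresses exactly the path the commit touches: every iteration allocates the lowest free fd (dup) and releases it (close), so fd-bitmap search runs once per operation on all 224 threads. A minimal sketch of one thread's inner loop — the real benchmark lives in the will-it-scale repository, and the function name here is made up for illustration:

```c
#include <unistd.h>

/* Simplified per-thread loop of the dup1 testcase: duplicate an open
 * fd and close the copy, exercising fd allocation and release. */
static unsigned long dup1_iterations(unsigned long n)
{
	unsigned long ops = 0;

	for (unsigned long i = 0; i < n; i++) {
		int fd = dup(1);   /* kernel hands back the lowest free fd */
		if (fd < 0)
			break;
		close(fd);         /* free it again for the next iteration */
		ops++;
	}
	return ops;
}
```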


Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20241113/202411132104.c3e2d29f-oliver.sang@intel.com

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
  gcc-12/performance/x86_64-rhel-8.3/thread/100%/debian-12-x86_64-20240206.cgz/lkp-spr-2sp4/dup1/will-it-scale

commit: 
  c9a3019603 ("fs/file.c: conditionally clear full_fds")
  0c40bf47cf ("fs/file.c: add fast path in find_next_fd()")

c9a3019603b8a851 0c40bf47cf2d9e1413b1e62826c 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     33.83 ± 20%     +27.1%      43.00 ± 12%  perf-c2c.DRAM.local
      0.42 ±  6%     +26.9%       0.54 ±  8%  perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.vfs_open
      0.29 ±118%     -58.2%       0.12 ±  2%  perf-sched.sch_delay.max.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      0.42 ±  6%     +26.9%       0.54 ±  8%  perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.vfs_open
    878341 ±  2%      +6.7%     937496 ±  2%  will-it-scale.224.threads
      3920 ±  2%      +6.7%       4184 ±  2%  will-it-scale.per_thread_ops
    878341 ±  2%      +6.7%     937496 ±  2%  will-it-scale.workload
      0.06            +0.0        0.08 ±  6%  perf-profile.children.cycles-pp.__fput_sync
      0.00            +0.1        0.06 ±  6%  perf-profile.children.cycles-pp.find_next_fd
      0.06            +0.0        0.08 ±  6%  perf-profile.self.cycles-pp.__fput_sync
      0.09            +0.0        0.11        perf-profile.self.cycles-pp._raw_spin_lock
      0.00            +0.1        0.05        perf-profile.self.cycles-pp.find_next_fd
      0.18 ±  2%     +11.5%       0.20 ±  3%  perf-stat.i.MPKI
     26.36 ±  3%      +2.4       28.75 ±  4%  perf-stat.i.cache-miss-rate%
   8636784 ±  2%     +11.9%    9662915 ±  3%  perf-stat.i.cache-misses
     76932 ±  2%     -10.9%      68510 ±  4%  perf-stat.i.cycles-between-cache-misses
      0.17 ±  2%     +11.8%       0.19 ±  3%  perf-stat.overall.MPKI
     24.92 ±  3%      +2.2       27.16 ±  4%  perf-stat.overall.cache-miss-rate%
     74960 ±  2%     -10.5%      67085 ±  4%  perf-stat.overall.cycles-between-cache-misses
  17263265 ±  2%      -6.3%   16180844 ±  2%  perf-stat.overall.path-length
   8605013 ±  2%     +11.9%    9625712 ±  3%  perf-stat.ps.cache-misses




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

