public inbox for oe-lkp@lists.linux.dev
From: kernel test robot <oliver.sang@intel.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
	<linux-kernel@vger.kernel.org>, Borislav Petkov <bp@alien8.de>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Mateusz Guzik <mjguzik@gmail.com>, <oliver.sang@intel.com>
Subject: [linus:master] [x86]  284922f4c5:  stress-ng.sockfd.ops_per_sec 6.1% improvement
Date: Fri, 28 Nov 2025 14:30:22 +0800	[thread overview]
Message-ID: <202511281306.51105b46-lkp@intel.com> (raw)



Hello,

The kernel test robot noticed a 6.1% improvement in stress-ng.sockfd.ops_per_sec on:


commit: 284922f4c563aa3a8558a00f2a05722133237fe8 ("x86: uaccess: don't use runtime-const rewriting in modules")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master


testcase: stress-ng
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480CTDX (Sapphire Rapids) with 256G memory
parameters:

	nr_threads: 100%
	testtime: 60s
	test: sockfd
	cpufreq_governor: performance
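The parameters above map onto a stress-ng invocation roughly like the following. This is an illustrative sketch, not the robot's exact command line (the real invocation comes from the lkp job file, which is not shown here):

```shell
# Hypothetical mapping of the job parameters above to stress-ng flags:
#   nr_threads: 100%  -> one sockfd stressor instance per CPU
#   testtime: 60s     -> --timeout 60s
#   test: sockfd      -> the sockfd stressor (passes fds over AF_UNIX sockets)
# cpufreq_governor: performance is set on the host, e.g. via cpupower.
stress-ng --sockfd "$(nproc)" --timeout 60s --metrics-brief
```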



Details are as follows:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20251128/202511281306.51105b46-lkp@intel.com
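The linked archive normally ships a job file alongside the kernel config. The usual 0-day reproduction flow with lkp-tests looks roughly like the sketch below; "job.yaml" stands in for the job file in the archive, and the generated file name varies per job:

```shell
# Sketch of the standard lkp-tests reproduction flow (file names illustrative).
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml                 # install dependencies for this job
sudo bin/lkp split-job --compatible job.yaml  # emit a runnable per-test job file
sudo bin/lkp run generated-yaml-file          # run it and collect results
```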

=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
  gcc-14/performance/x86_64-rhel-9.4/100%/debian-13-x86_64-20250902.cgz/lkp-spr-r02/sockfd/stress-ng/60s

commit: 
  17d85f33a8 ("Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma")
  284922f4c5 ("x86: uaccess: don't use runtime-const rewriting in modules")

17d85f33a83b84e7 284922f4c563aa3a8558a00f2a0 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
  55674763            +6.1%   59075135        stress-ng.sockfd.ops
    927326            +6.1%     983845        stress-ng.sockfd.ops_per_sec
      3555 ±  3%     +10.6%       3932 ±  3%  perf-c2c.DRAM.remote
      4834 ±  3%     +12.0%       5415 ±  3%  perf-c2c.HITM.local
      2714 ±  2%     +12.5%       3054 ±  3%  perf-c2c.HITM.remote
      0.51            +3.9%       0.53        perf-stat.i.MPKI
  34903541            +5.2%   36715161        perf-stat.i.cache-misses
 1.072e+08            +5.8%  1.133e+08        perf-stat.i.cache-references
     18971            -5.5%      17932        perf-stat.i.cycles-between-cache-misses
      0.46 ± 30%     +13.6%       0.52        perf-stat.overall.MPKI
  31330827 ± 30%     +14.9%   36004895        perf-stat.ps.cache-misses
  96530576 ± 30%     +15.3%  1.113e+08        perf-stat.ps.cache-references
     48.32            -0.2       48.16        perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_del_edges.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
     48.23            -0.2       48.07        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.unix_del_edges.unix_stream_read_generic.unix_stream_recvmsg
     48.34            -0.2       48.18        perf-profile.calltrace.cycles-pp.unix_del_edges.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.____sys_recvmsg
      0.56 ±  4%      +0.1        0.65 ±  9%  perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
      0.62 ±  3%      +0.1        0.71 ±  8%  perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.stress_sockfd
      0.56 ±  3%      +0.1        0.65 ±  8%  perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
     48.34            -0.2       48.18        perf-profile.children.cycles-pp.unix_del_edges
      0.15 ±  3%      +0.0        0.17 ±  2%  perf-profile.children.cycles-pp.__scm_recv_common
      0.08 ±  7%      +0.0        0.10 ±  7%  perf-profile.children.cycles-pp.lockref_put_return
      0.09 ±  5%      +0.0        0.11 ±  6%  perf-profile.children.cycles-pp.__fput
      0.35 ±  5%      +0.1        0.43 ± 12%  perf-profile.children.cycles-pp.do_open
      0.63 ±  3%      +0.1        0.72 ±  8%  perf-profile.children.cycles-pp.do_sys_openat2
      0.56 ±  3%      +0.1        0.65 ±  8%  perf-profile.children.cycles-pp.do_filp_open
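As a sanity check, the headline %change figures follow directly from the raw counters in the two ops rows of the comparison table:

```python
# Recompute the headline %change from the raw counters in the table above.
base_ops, new_ops = 55_674_763, 59_075_135    # stress-ng.sockfd.ops
base_rate, new_rate = 927_326, 983_845        # stress-ng.sockfd.ops_per_sec

def pct_change(old, new):
    """Percentage change from old to new, as reported in the %change column."""
    return (new - old) / old * 100

print(f"ops:         {pct_change(base_ops, new_ops):+.1f}%")
print(f"ops_per_sec: {pct_change(base_rate, new_rate):+.1f}%")
```

Both come out to +6.1%, matching the table.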




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


             reply	other threads:[~2025-11-28  6:30 UTC|newest]

Thread overview: 3+ messages
2025-11-28  6:30 kernel test robot [this message]
2025-11-28 10:11 ` [linus:master] [x86] 284922f4c5: stress-ng.sockfd.ops_per_sec 6.1% improvement Mateusz Guzik
2025-12-02  2:58   ` Oliver Sang
