From: kernel test robot <oliver.sang@intel.com>
To: Boris Burkov <boris@bur.io>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	Matthew Wilcox <willy@infradead.org>,
	"Michal Hocko" <mhocko@kernel.org>,
	Muchun Song <muchun.song@linux.dev>, Qu Wenruo <wqu@suse.com>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	<linux-btrfs@vger.kernel.org>, <oliver.sang@intel.com>
Subject: [akpm-mm:mm-stable] [btrfs]  b55102826d:  filebench.sum_operations/s 12.4% regression
Date: Thu, 25 Sep 2025 15:24:03 +0800
Message-ID: <202509251432.59b331b7-lkp@intel.com>



Hello,

kernel test robot noticed a 12.4% regression of filebench.sum_operations/s on:


commit: b55102826d7d3d41a5777931689c746207308c95 ("btrfs: set AS_KERNEL_FILE on the btree_inode")
https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-stable
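
For context, the bisected commit marks the btrfs metadata (btree) inode's page
cache as a kernel-internal file via the AS_KERNEL_FILE address_space flag that
the parent series introduces. A minimal sketch of the idea follows; only the
flag name comes from the commit subject, while the helper name and call site
shown here are assumptions, so the actual patch may differ:

    /*
     * Sketch only: flag the btree inode's mapping so its page cache pages are
     * accounted as kernel file pages. The real patch may use a dedicated
     * mapping helper rather than a raw set_bit(), and may hook into btrfs
     * inode initialization differently.
     */
    static void mark_btree_inode_kernel_file(struct btrfs_fs_info *fs_info)
    {
            struct address_space *mapping = fs_info->btree_inode->i_mapping;

            set_bit(AS_KERNEL_FILE, &mapping->flags);
    }

Whatever the exact form, the change appears to interact with dirty tracking and
writeback of btree metadata pages, consistent with the meminfo.Dirty drop and
the proc-vmstat.nr_dirtied/pgpgout increases in the comparison below.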


testcase: filebench
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 192 threads 4 sockets Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz (Cascade Lake) with 176G memory
parameters:

	disk: 1SSD
	fs: btrfs
	test: randomwrite.f
	cpufreq_governor: performance




If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202509251432.59b331b7-lkp@intel.com


Details are as follows:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20250925/202509251432.59b331b7-lkp@intel.com

=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/rootfs/tbox_group/test/testcase:
  gcc-14/performance/1SSD/btrfs/x86_64-rhel-9.4/debian-13-x86_64-20250902.cgz/lkp-csl-2sp10/randomwrite.f/filebench

commit: 
  e3a9ac4e86 ("mm: add vmstat for kernel_file pages")
  b55102826d ("btrfs: set AS_KERNEL_FILE on the btree_inode")

e3a9ac4e866ea746 b55102826d7d3d41a5777931689 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      2.90            +2.2%       2.96        iostat.cpu.system
   1293622           -21.8%    1011948        meminfo.Dirty
    111179           +22.7%     136404 ±  2%  meminfo.Shmem
      0.02 ±  4%      -0.0        0.02 ±  6%  mpstat.cpu.all.iowait%
      0.39            +0.1        0.47        mpstat.cpu.all.sys%
      1300            +2.7%       1335        turbostat.Bzy_MHz
      0.16 ±  3%      +0.1        0.23 ±  4%  turbostat.C1%
     24580 ± 18%    +368.5%     115156 ± 97%  numa-meminfo.node0.Active
     24573 ± 18%    +368.6%     115148 ± 97%  numa-meminfo.node0.Active(anon)
     80724 ± 25%     +33.1%     107431 ±  9%  numa-meminfo.node3.Shmem
      9462 ±  4%     +48.2%      14025 ±  3%  sched_debug.cpu.nr_switches.avg
    104544 ±  9%     +42.0%     148483 ± 22%  sched_debug.cpu.nr_switches.max
     18711 ±  6%     +40.0%      26196 ±  7%  sched_debug.cpu.nr_switches.stddev
      6144 ± 18%    +368.1%      28763 ± 97%  numa-vmstat.node0.nr_active_anon
    609471 ±216%    +427.6%    3215506 ± 68%  numa-vmstat.node0.nr_dirtied
    602202 ±217%    +429.1%    3186249 ± 68%  numa-vmstat.node0.nr_written
      6144 ± 18%    +368.1%      28762 ± 97%  numa-vmstat.node0.nr_zone_active_anon
     20153 ± 26%     +33.3%      26862 ±  9%  numa-vmstat.node3.nr_shmem
    800.20           -12.4%     701.12        filebench.sum_bytes_mb/s
   6146050           -12.4%    5384959        filebench.sum_operations
    102425           -12.4%      89741        filebench.sum_operations/s
      0.01 ±  4%     +13.8%       0.01        filebench.sum_time_ms/op
    102425           -12.4%      89741        filebench.sum_writes/s
  28873566 ±  2%     +32.2%   38160857 ±  2%  filebench.time.file_system_outputs
      4.01 ± 29%   +4383.5%     180.01 ±203%  perf-sched.sch_delay.max.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
      4.01 ± 29%   +4383.5%     180.01 ±203%  perf-sched.total_sch_delay.max.ms
    179.61 ±  6%     -78.9%      37.91 ± 10%  perf-sched.total_wait_and_delay.average.ms
      7539 ±  6%    +412.6%      38652 ±  9%  perf-sched.total_wait_and_delay.count.ms
      4983           -25.5%       3713 ± 13%  perf-sched.total_wait_and_delay.max.ms
    179.60 ±  6%     -78.9%      37.89 ± 10%  perf-sched.total_wait_time.average.ms
      4983           -25.5%       3713 ± 13%  perf-sched.total_wait_time.max.ms
    179.61 ±  6%     -78.9%      37.91 ± 10%  perf-sched.wait_and_delay.avg.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
      7539 ±  6%    +412.6%      38652 ±  9%  perf-sched.wait_and_delay.count.[unknown].[unknown].[unknown].[unknown].[unknown]
      4983           -25.5%       3713 ± 13%  perf-sched.wait_and_delay.max.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
    179.60 ±  6%     -78.9%      37.89 ± 10%  perf-sched.wait_time.avg.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
      4983           -25.5%       3713 ± 13%  perf-sched.wait_time.max.ms.[unknown].[unknown].[unknown].[unknown].[unknown]
    191802            +3.3%     198131        proc-vmstat.nr_active_anon
   3666016 ±  2%     +33.5%    4895484 ±  2%  proc-vmstat.nr_dirtied
    324179           -21.8%     253508        proc-vmstat.nr_dirty
   2228424            +1.9%    2271558        proc-vmstat.nr_file_pages
   1283451            +2.9%    1320253        proc-vmstat.nr_inactive_file
     27774           +22.7%      34080 ±  2%  proc-vmstat.nr_shmem
     35203            +1.7%      35786        proc-vmstat.nr_slab_reclaimable
     97029            +2.5%      99413        proc-vmstat.nr_slab_unreclaimable
   3628151 ±  2%     +33.4%    4838878 ±  2%  proc-vmstat.nr_written
    191802            +3.3%     198131        proc-vmstat.nr_zone_active_anon
   1283451            +2.9%    1320253        proc-vmstat.nr_zone_inactive_file
    325032           -21.8%     254299        proc-vmstat.nr_zone_write_pending
   2438711            +4.5%    2548843        proc-vmstat.numa_hit
   2140583            +5.2%    2250864        proc-vmstat.numa_local
   2587325            +5.2%    2722312        proc-vmstat.pgalloc_normal
   2441172            +1.9%    2488658        proc-vmstat.pgfree
  14693823 ±  2%     +34.6%   19777902 ±  2%  proc-vmstat.pgpgout
      5.99            -2.4%       5.84        perf-stat.i.MPKI
 7.594e+08           +12.2%  8.523e+08        perf-stat.i.branch-instructions
      7.74 ±  2%      +0.6        8.37 ±  3%  perf-stat.i.cache-miss-rate%
  12470451           +11.0%   13837828        perf-stat.i.cache-misses
 1.627e+08            +2.6%   1.67e+08        perf-stat.i.cache-references
     17709 ±  3%     +42.7%      25268 ±  3%  perf-stat.i.context-switches
 1.189e+10            +3.9%  1.236e+10        perf-stat.i.cpu-cycles
 3.777e+09           +14.4%  4.323e+09        perf-stat.i.instructions
      0.28            +7.5%       0.30        perf-stat.i.ipc
      5.53            -0.5        5.00        perf-stat.overall.branch-miss-rate%
      7.66 ±  2%      +0.6        8.28 ±  2%  perf-stat.overall.cache-miss-rate%
      3.15 ±  2%      -9.2%       2.86        perf-stat.overall.cpi
    954.54 ±  3%      -6.3%     894.29 ±  3%  perf-stat.overall.cycles-between-cache-misses
      0.32 ±  2%     +10.1%       0.35        perf-stat.overall.ipc
 7.538e+08           +12.2%  8.459e+08        perf-stat.ps.branch-instructions
  12385908           +10.9%   13741541        perf-stat.ps.cache-misses
 1.618e+08            +2.6%   1.66e+08        perf-stat.ps.cache-references
     17574 ±  3%     +42.6%      25062 ±  3%  perf-stat.ps.context-switches
 1.182e+10            +3.9%  1.228e+10        perf-stat.ps.cpu-cycles
 3.749e+09           +14.4%   4.29e+09        perf-stat.ps.instructions
 6.365e+11 ±  2%     +15.1%  7.326e+11        perf-stat.total.instructions




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

