From: kernel test robot <oliver.sang@intel.com>
To: Naohiro Aota <naohiro.aota@wdc.com>
Cc: <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
<linux-btrfs@vger.kernel.org>,
Naohiro Aota <naohiro.aota@wdc.com>, <oliver.sang@intel.com>
Subject: Re: [PATCH 08/11] btrfs: introduce btrfs_space_info sub-group
Date: Tue, 24 Dec 2024 22:21:34 +0800
Message-ID: <202412241603.d5f0c18f-lkp@intel.com>
In-Reply-To: <ab8eb232d1acbf02af7352a2224a31b53ece01f5.1733384172.git.naohiro.aota@wdc.com>
Hello,
kernel test robot noticed a 13.0% regression of aim7.jobs-per-min on:
commit: 1d2d783b0ef24d58eae07a32493d1e1e78b4351c ("[PATCH 08/11] btrfs: introduce btrfs_space_info sub-group")
url: https://github.com/intel-lab-lkp/linux/commits/Naohiro-Aota/btrfs-take-btrfs_space_info-in-btrfs_reserve_data_bytes/20241205-195311
base: https://git.kernel.org/cgit/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/all/ab8eb232d1acbf02af7352a2224a31b53ece01f5.1733384172.git.naohiro.aota@wdc.com/
patch subject: [PATCH 08/11] btrfs: introduce btrfs_space_info sub-group
testcase: aim7
config: x86_64-rhel-9.4
compiler: gcc-12
test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
parameters:
disk: 1BRD_48G
fs: btrfs
test: disk_cp
load: 1500
cpufreq_governor: performance
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202412241603.d5f0c18f-lkp@intel.com
Details are as follows:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20241224/202412241603.d5f0c18f-lkp@intel.com
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-12/performance/1BRD_48G/btrfs/x86_64-rhel-9.4/1500/debian-12-x86_64-20240206.cgz/lkp-icl-2sp2/disk_cp/aim7
commit:
d437de1e3e ("btrfs: pass space_info for block group creation")
1d2d783b0e ("btrfs: introduce btrfs_space_info sub-group")
d437de1e3ee21349 1d2d783b0ef24d58eae07a32493
---------------- ---------------------------
%stddev %change %stddev
\ | \
3.914e+09 ± 2% +9.0% 4.266e+09 cpuidle..time
13.96 -5.6% 13.17 iostat.cpu.idle
0.28 ± 3% -0.0 0.24 ± 2% mpstat.cpu.all.usr%
283.52 ± 2% +11.1% 315.09 uptime.boot
145120 ± 40% +51.6% 220066 ± 2% meminfo.AnonHugePages
9544499 ± 8% +11.8% 10670421 ± 4% meminfo.DirectMap2M
124537 -7.6% 115017 vmstat.system.cs
216746 -4.5% 207084 vmstat.system.in
235158 ± 22% +89.9% 446683 ± 20% numa-meminfo.node0.Shmem
92328 ± 88% +138.0% 219719 ± 2% numa-meminfo.node1.AnonHugePages
906862 ± 6% -25.2% 678748 ± 12% numa-meminfo.node1.Shmem
58803 ± 22% +90.0% 111697 ± 20% numa-vmstat.node0.nr_shmem
163496 ± 2% +12.8% 184425 ± 4% numa-vmstat.node0.nr_written
45.09 ± 88% +138.1% 107.38 ± 2% numa-vmstat.node1.nr_anon_transparent_hugepages
226731 ± 6% -25.1% 169747 ± 12% numa-vmstat.node1.nr_shmem
163188 ± 2% +12.8% 184147 ± 2% numa-vmstat.node1.nr_written
39739 ± 2% -13.0% 34581 aim7.jobs-per-min
226.75 ± 2% +14.8% 260.40 aim7.time.elapsed_time
226.75 ± 2% +14.8% 260.40 aim7.time.elapsed_time.max
201270 ± 3% +18.8% 239157 aim7.time.involuntary_context_switches
24819 ± 2% +15.8% 28731 aim7.time.system_time
14211174 +5.9% 15052342 aim7.time.voluntary_context_switches
20288 +1.0% 20501 proc-vmstat.nr_inactive_file
30666 ± 2% -5.6% 28940 ± 2% proc-vmstat.nr_mapped
326687 +13.2% 369933 ± 2% proc-vmstat.nr_written
20288 +1.0% 20501 proc-vmstat.nr_zone_inactive_file
1098889 +6.7% 1172947 proc-vmstat.pgfault
1317807 +13.2% 1492099 ± 2% proc-vmstat.pgpgout
69223 ± 4% +8.5% 75099 ± 3% proc-vmstat.pgreuse
7.248e+09 -2.6% 7.06e+09 perf-stat.i.branch-instructions
0.55 ± 2% -0.0 0.51 perf-stat.i.branch-miss-rate%
50144736 ± 8% -7.5% 46399318 perf-stat.i.cache-misses
2.114e+08 -7.4% 1.958e+08 perf-stat.i.cache-references
125888 -7.9% 115986 perf-stat.i.context-switches
8.96 +4.5% 9.36 perf-stat.i.cpi
5667 ± 7% +8.4% 6144 perf-stat.i.cycles-between-cache-misses
3.119e+10 -3.1% 3.021e+10 perf-stat.i.instructions
0.17 -7.4% 0.16 perf-stat.i.ipc
0.71 ± 10% -77.0% 0.16 ± 27% perf-stat.i.metric.K/sec
4337 ± 2% -6.0% 4075 perf-stat.i.minor-faults
4339 ± 2% -6.0% 4078 perf-stat.i.page-faults
5.59 ± 81% +72.6% 9.64 perf-stat.overall.cpi
3401 ± 82% +84.6% 6279 perf-stat.overall.cycles-between-cache-misses
4.276e+12 ± 81% +84.2% 7.875e+12 perf-stat.total.instructions
9128951 +37.9% 12584528 sched_debug.cfs_rq:/.avg_vruntime.avg
19113988 ± 16% +52.5% 29146821 ± 21% sched_debug.cfs_rq:/.avg_vruntime.max
7853850 ± 2% +38.2% 10851391 ± 3% sched_debug.cfs_rq:/.avg_vruntime.min
1473135 ± 16% +60.2% 2360403 ± 24% sched_debug.cfs_rq:/.avg_vruntime.stddev
577.10 ± 20% -31.6% 394.77 ± 20% sched_debug.cfs_rq:/.load_avg.max
128.74 ± 9% -26.2% 94.97 ± 14% sched_debug.cfs_rq:/.load_avg.stddev
9128951 +37.9% 12584527 sched_debug.cfs_rq:/.min_vruntime.avg
19113988 ± 16% +52.5% 29146821 ± 21% sched_debug.cfs_rq:/.min_vruntime.max
7853850 ± 2% +38.2% 10851391 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
1473135 ± 16% +60.2% 2360403 ± 24% sched_debug.cfs_rq:/.min_vruntime.stddev
257.90 -19.0% 209.00 ± 3% sched_debug.cfs_rq:/.removed.load_avg.max
132.30 ± 3% -19.2% 106.93 ± 3% sched_debug.cfs_rq:/.removed.runnable_avg.max
131.35 ± 2% -18.6% 106.93 ± 3% sched_debug.cfs_rq:/.removed.util_avg.max
506.87 ± 2% +10.4% 559.52 ± 2% sched_debug.cfs_rq:/.util_est.avg
1264 ± 9% +14.3% 1445 ± 4% sched_debug.cfs_rq:/.util_est.max
71296 ± 6% +16.6% 83111 ± 6% sched_debug.cpu.avg_idle.min
131900 ± 8% -14.7% 112551 ± 9% sched_debug.cpu.avg_idle.stddev
146287 +19.1% 174200 sched_debug.cpu.clock.avg
146298 +19.1% 174211 sched_debug.cpu.clock.max
146275 +19.1% 174187 sched_debug.cpu.clock.min
145437 +19.0% 173116 sched_debug.cpu.clock_task.avg
145601 +19.0% 173298 sched_debug.cpu.clock_task.max
136631 +20.2% 164246 sched_debug.cpu.clock_task.min
6491 ± 8% +19.4% 7753 ± 4% sched_debug.cpu.curr->pid.max
84360 +24.8% 105323 sched_debug.cpu.nr_switches.avg
112587 ± 3% +25.5% 141314 ± 8% sched_debug.cpu.nr_switches.max
79225 +26.6% 100293 sched_debug.cpu.nr_switches.min
146275 +19.1% 174187 sched_debug.cpu_clk
145107 +19.2% 173021 sched_debug.ktime
147232 +18.9% 175035 sched_debug.sched_clk
0.03 ±100% +707.4% 0.26 ±167% perf-sched.sch_delay.avg.ms.__cond_resched.kmem_cache_alloc_noprof.alloc_extent_state.__clear_extent_bit.btrfs_dirty_folio
0.01 ± 47% +265.2% 0.05 ± 74% perf-sched.sch_delay.avg.ms.__cond_resched.stop_one_cpu.migrate_task_to.task_numa_migrate.isra
0.00 ± 51% +150.0% 0.01 ± 39% perf-sched.sch_delay.avg.ms.btrfs_start_ordered_extent.lock_and_cleanup_extent_if_need.btrfs_buffered_write.btrfs_do_write_iter
0.12 ± 53% -55.6% 0.05 ± 32% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.03 ± 14% +111.5% 0.06 ± 67% perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.01 ±136% +1221.8% 0.08 ± 58% perf-sched.sch_delay.avg.ms.usleep_range_state.wait_for_tpm_stat.tpm_tis_send_data.tpm_tis_send_main
0.14 ± 39% +300.9% 0.56 ± 95% perf-sched.sch_delay.max.ms.__cond_resched.down_write.btrfs_tree_lock_nested.btrfs_lock_root_node.btrfs_search_slot
0.08 ±116% +979.5% 0.90 ±156% perf-sched.sch_delay.max.ms.__cond_resched.kmem_cache_alloc_noprof.alloc_extent_state.__clear_extent_bit.btrfs_dirty_folio
2.45 ± 13% +26.1% 3.09 ± 11% perf-sched.sch_delay.max.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
1256 ± 80% +98.2% 2490 perf-sched.sch_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
0.07 ± 9% +749.4% 0.57 ±121% perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
2.38 ± 36% -56.0% 1.05 ± 45% perf-sched.sch_delay.max.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.27 ±126% +188.4% 0.77 ± 95% perf-sched.sch_delay.max.ms.usleep_range_state.tpm_try_transmit.tpm_transmit.tpm_transmit_cmd
0.02 ±139% +5117.2% 0.98 ± 77% perf-sched.sch_delay.max.ms.usleep_range_state.wait_for_tpm_stat.tpm_tis_send_data.tpm_tis_send_main
2092 ± 4% +19.1% 2491 perf-sched.total_sch_delay.max.ms
286268 -11.6% 253047 ± 2% perf-sched.total_wait_and_delay.count.ms
4111 ± 5% +18.3% 4862 ± 4% perf-sched.total_wait_and_delay.max.ms
115.81 ± 5% +32.9% 153.90 ± 6% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range.do_poll.constprop.0.do_sys_poll
130.80 -19.1% 105.83 ± 4% perf-sched.wait_and_delay.count.schedule_hrtimeout_range.do_poll.constprop.0.do_sys_poll
259644 -12.4% 227389 ± 3% perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
10660 ± 2% -8.6% 9745 ± 2% perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.btrfs_tree_lock_nested
1639 ± 3% -16.8% 1364 ± 5% perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
2784 ± 3% -11.2% 2471 perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
2977 ± 30% +38.9% 4137 ± 27% perf-sched.wait_and_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
2204 ± 2% +49.8% 3301 ± 33% perf-sched.wait_and_delay.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.btrfs_tree_lock_nested
2158 ± 28% +40.6% 3033 ± 16% perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
0.03 ± 52% +60.1% 0.04 ± 12% perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.extent_write_cache_pages.btrfs_writepages
114.21 ± 5% +31.8% 150.51 ± 5% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range.do_poll.constprop.0.do_sys_poll
0.22 ±122% +180.9% 0.61 ± 6% perf-sched.wait_time.avg.ms.usleep_range_state.wait_for_tpm_stat.tpm_tis_send_data.tpm_tis_send_main
1717 ± 50% +44.7% 2484 perf-sched.wait_time.max.ms.__cond_resched.__alloc_pages_noprof.alloc_pages_mpol_noprof.folio_alloc_noprof.__filemap_get_folio
2192 ± 2% +13.0% 2476 perf-sched.wait_time.max.ms.__cond_resched.__filemap_get_folio.prepare_one_folio.constprop.0
2186 ± 2% +14.2% 2496 perf-sched.wait_time.max.ms.__cond_resched.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write
2162 ± 3% +14.8% 2482 perf-sched.wait_time.max.ms.__cond_resched.kmem_cache_alloc_noprof.alloc_extent_map.btrfs_get_extent.btrfs_set_extent_delalloc
2154 ± 2% +15.7% 2493 perf-sched.wait_time.max.ms.__cond_resched.kmem_cache_alloc_noprof.alloc_extent_state.__set_extent_bit.set_extent_bit
2207 ± 2% +13.9% 2513 perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
2204 ± 2% +13.9% 2509 perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.btrfs_tree_lock_nested
2204 ± 2% +14.0% 2512 perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
2206 ± 2% +13.8% 2510 perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
0.66 ±123% +164.6% 1.74 ± 42% perf-sched.wait_time.max.ms.usleep_range_state.tpm_try_transmit.tpm_transmit.tpm_transmit_cmd
0.41 ±122% +296.7% 1.63 ± 36% perf-sched.wait_time.max.ms.usleep_range_state.wait_for_tpm_stat.tpm_tis_send_data.tpm_tis_send_main
1850 ± 19% +55.3% 2873 ± 21% perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
33.43 -0.2 33.19 perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_do_write_iter.vfs_write
33.43 -0.2 33.20 perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write
33.33 -0.2 33.10 perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write.btrfs_do_write_iter
33.25 -0.2 33.02 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_buffered_write
27.98 +0.1 28.11 perf-profile.calltrace.cycles-pp.btrfs_dirty_folio.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write
98.01 +0.1 98.15 perf-profile.calltrace.cycles-pp.write
97.93 +0.2 98.08 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
97.87 +0.2 98.02 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
97.88 +0.2 98.04 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
97.92 +0.2 98.08 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
97.84 +0.2 98.00 perf-profile.calltrace.cycles-pp.btrfs_do_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
97.81 +0.2 97.97 perf-profile.calltrace.cycles-pp.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write.do_syscall_64
34.96 +0.2 35.14 perf-profile.calltrace.cycles-pp._raw_spin_lock.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write
26.26 +0.2 26.44 perf-profile.calltrace.cycles-pp.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_clear_delalloc_extent.clear_state_bit.__clear_extent_bit
26.20 +0.2 26.38 perf-profile.calltrace.cycles-pp._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_clear_delalloc_extent.clear_state_bit
26.27 +0.2 26.45 perf-profile.calltrace.cycles-pp.btrfs_inode_rsv_release.btrfs_clear_delalloc_extent.clear_state_bit.__clear_extent_bit.btrfs_dirty_folio
26.14 +0.2 26.32 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.btrfs_block_rsv_release.btrfs_inode_rsv_release.btrfs_clear_delalloc_extent
34.86 +0.2 35.05 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata
35.29 +0.2 35.48 perf-profile.calltrace.cycles-pp.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_do_write_iter.vfs_write
35.29 +0.2 35.48 perf-profile.calltrace.cycles-pp.__reserve_bytes.btrfs_reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_do_write_iter
35.41 +0.2 35.62 perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_do_write_iter.vfs_write.ksys_write
26.60 +0.3 26.90 perf-profile.calltrace.cycles-pp.__clear_extent_bit.btrfs_dirty_folio.btrfs_buffered_write.btrfs_do_write_iter.vfs_write
26.55 +0.3 26.85 perf-profile.calltrace.cycles-pp.btrfs_clear_delalloc_extent.clear_state_bit.__clear_extent_bit.btrfs_dirty_folio.btrfs_buffered_write
26.55 +0.3 26.86 perf-profile.calltrace.cycles-pp.clear_state_bit.__clear_extent_bit.btrfs_dirty_folio.btrfs_buffered_write.btrfs_do_write_iter
0.34 ± 5% -0.1 0.29 ± 2% perf-profile.children.cycles-pp.read
0.40 ± 3% -0.0 0.36 ± 3% perf-profile.children.cycles-pp.down_write
0.23 ± 5% -0.0 0.20 ± 4% perf-profile.children.cycles-pp.ksys_read
0.47 ± 2% -0.0 0.43 ± 3% perf-profile.children.cycles-pp.start_secondary
0.18 ± 8% -0.0 0.15 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.41 -0.0 0.38 ± 2% perf-profile.children.cycles-pp.cpuidle_idle_call
0.47 ± 2% -0.0 0.44 ± 2% perf-profile.children.cycles-pp.common_startup_64
0.47 ± 2% -0.0 0.44 ± 2% perf-profile.children.cycles-pp.cpu_startup_entry
0.47 ± 2% -0.0 0.44 ± 2% perf-profile.children.cycles-pp.do_idle
0.19 ± 6% -0.0 0.15 ± 6% perf-profile.children.cycles-pp.filemap_read
0.39 ± 3% -0.0 0.36 ± 2% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
0.21 ± 5% -0.0 0.18 ± 5% perf-profile.children.cycles-pp.vfs_read
0.34 ± 3% -0.0 0.31 ± 2% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.09 ± 4% -0.0 0.06 ± 11% perf-profile.children.cycles-pp.btrfs_bin_search
0.10 ± 11% -0.0 0.07 ± 14% perf-profile.children.cycles-pp.filemap_get_pages
0.37 ± 3% -0.0 0.34 ± 2% perf-profile.children.cycles-pp.open_last_lookups
0.30 ± 2% -0.0 0.27 ± 2% perf-profile.children.cycles-pp.acpi_idle_enter
0.31 -0.0 0.29 ± 2% perf-profile.children.cycles-pp.cpuidle_enter_state
0.37 ± 3% -0.0 0.34 ± 2% perf-profile.children.cycles-pp.__x64_sys_creat
0.37 ± 3% -0.0 0.34 ± 2% perf-profile.children.cycles-pp.creat64
0.37 ± 3% -0.0 0.34 ± 2% perf-profile.children.cycles-pp.do_filp_open
0.37 ± 3% -0.0 0.34 ± 2% perf-profile.children.cycles-pp.path_openat
0.30 -0.0 0.27 ± 2% perf-profile.children.cycles-pp.acpi_safe_halt
0.31 -0.0 0.29 ± 2% perf-profile.children.cycles-pp.cpuidle_enter
0.16 ± 8% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.osq_lock
0.30 -0.0 0.27 ± 2% perf-profile.children.cycles-pp.acpi_idle_do_entry
0.09 -0.0 0.07 ± 10% perf-profile.children.cycles-pp.read_block_for_search
0.24 ± 3% -0.0 0.22 ± 3% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.18 ± 3% -0.0 0.16 perf-profile.children.cycles-pp.__set_extent_bit
0.13 ± 3% -0.0 0.12 ± 4% perf-profile.children.cycles-pp.set_extent_bit
0.34 -0.0 0.32 perf-profile.children.cycles-pp.__close
0.34 -0.0 0.32 perf-profile.children.cycles-pp.__dentry_kill
0.34 -0.0 0.32 perf-profile.children.cycles-pp.__fput
0.34 -0.0 0.32 perf-profile.children.cycles-pp.__x64_sys_close
0.34 -0.0 0.32 perf-profile.children.cycles-pp.dput
0.08 ± 5% -0.0 0.07 perf-profile.children.cycles-pp.btrfs_space_info_update_bytes_may_use
0.06 -0.0 0.05 perf-profile.children.cycles-pp.kmem_cache_free
0.06 -0.0 0.05 perf-profile.children.cycles-pp.up_write
0.05 +0.0 0.06 perf-profile.children.cycles-pp.calc_available_free_space
0.07 ± 7% +0.0 0.08 ± 4% perf-profile.children.cycles-pp.btrfs_folio_clamp_clear_checked
0.07 +0.0 0.09 perf-profile.children.cycles-pp.btrfs_drop_folio
99.26 +0.1 99.32 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.25 +0.1 99.32 perf-profile.children.cycles-pp.do_syscall_64
0.30 ± 19% +0.1 0.40 ± 8% perf-profile.children.cycles-pp.btrfs_reserve_data_bytes
0.32 ± 17% +0.1 0.42 ± 8% perf-profile.children.cycles-pp.btrfs_check_data_free_space
0.24 ± 16% +0.1 0.36 ± 7% perf-profile.children.cycles-pp.btrfs_free_reserved_data_space_noquota
27.98 +0.1 28.11 perf-profile.children.cycles-pp.btrfs_dirty_folio
98.05 +0.1 98.20 perf-profile.children.cycles-pp.write
97.91 +0.2 98.07 perf-profile.children.cycles-pp.ksys_write
97.89 +0.2 98.05 perf-profile.children.cycles-pp.vfs_write
97.84 +0.2 98.00 perf-profile.children.cycles-pp.btrfs_do_write_iter
97.81 +0.2 97.98 perf-profile.children.cycles-pp.btrfs_buffered_write
35.43 +0.2 35.62 perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
35.42 +0.2 35.63 perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
26.73 +0.3 27.02 perf-profile.children.cycles-pp.__clear_extent_bit
26.63 +0.3 26.93 perf-profile.children.cycles-pp.clear_state_bit
35.67 +0.3 35.97 perf-profile.children.cycles-pp.__reserve_bytes
26.59 +0.3 26.89 perf-profile.children.cycles-pp.btrfs_clear_delalloc_extent
95.31 +0.3 95.62 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
95.80 +0.3 96.12 perf-profile.children.cycles-pp._raw_spin_lock
0.66 -0.0 0.63 perf-profile.self.cycles-pp._raw_spin_lock
0.09 ± 5% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.btrfs_bin_search
0.16 ± 6% -0.0 0.14 ± 5% perf-profile.self.cycles-pp.osq_lock
0.08 ± 5% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.btrfs_space_info_update_bytes_may_use
0.15 ± 3% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.acpi_safe_halt
0.06 -0.0 0.05 perf-profile.self.cycles-pp.kmem_cache_alloc_noprof
0.07 +0.0 0.08 perf-profile.self.cycles-pp.btrfs_block_rsv_release
0.06 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.btrfs_folio_clamp_clear_checked
0.10 +0.0 0.12 ± 3% perf-profile.self.cycles-pp.need_preemptive_reclaim
94.52 +0.3 94.86 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki