* [cifs:for-next-next] [smb] 6c18ca82c3: filebench.sum_operations/s -16.4% regression
From: kernel test robot @ 2024-09-03 13:45 UTC (permalink / raw)
To: Enzo Matsumiya
Cc: oe-lkp, lkp, linux-cifs, samba-technical, Steve French,
Henrique Carvalho, ying.huang, feng.tang, fengwei.yin,
oliver.sang
Hello,
kernel test robot noticed a -16.4% regression of filebench.sum_operations/s on:
commit: 6c18ca82c3155bea26e0080ffc613e100b99f706 ("smb: client: force dentry revalidation if nohandlecache is set")
git://git.samba.org/sfrench/cifs-2.6.git for-next-next
testcase: filebench
test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
parameters:
disk: 1HDD
fs: btrfs
fs2: cifs
test: webproxy.f
cpufreq_governor: performance
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202409032100.5acabffb-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240903/202409032100.5acabffb-oliver.sang@intel.com
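These reports are normally reproduced with the lkp-tests harness; a sketch of the usual workflow, assuming `job.yaml` is the job file bundled in the archive linked above (the exact file name may differ):

```shell
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml          # install dependencies needed by the job
sudo bin/lkp split-job --any job.yaml  # expand the parameter matrix into runnable yaml files
sudo bin/lkp run generated-yaml-file   # substitute the file emitted by split-job
```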
=========================================================================================
compiler/cpufreq_governor/disk/fs2/fs/kconfig/rootfs/tbox_group/test/testcase:
gcc-12/performance/1HDD/cifs/btrfs/x86_64-rhel-8.3/debian-12-x86_64-20240206.cgz/lkp-icl-2sp6/webproxy.f/filebench
commit:
a493f1a10b ("smb: client: fix hang in wait_for_response() for negproto")
6c18ca82c3 ("smb: client: force dentry revalidation if nohandlecache is set")
a493f1a10b62652c 6c18ca82c3155bea26e0080ffc6
---------------- ---------------------------
%stddev %change %stddev
\ | \
34.06 -1.8% 33.45 boot-time.boot
1.18 +70.6% 2.01 iostat.cpu.system
128.33 ± 15% -28.4% 91.83 ± 12% perf-c2c.DRAM.local
117016 ± 2% +19.4% 139662 ± 3% numa-meminfo.node1.Active(anon)
125998 ± 4% +25.9% 158613 ± 5% numa-meminfo.node1.Shmem
126046 +17.2% 147675 meminfo.Active(anon)
70015 ± 2% -9.4% 63434 meminfo.KernelStack
163656 +13.7% 186105 meminfo.Shmem
3.10 ± 5% +37.7% 4.26 ± 2% vmstat.procs.r
15063 +12.7% 16975 vmstat.system.cs
11960 +3.6% 12391 vmstat.system.in
0.37 -0.1 0.31 mpstat.cpu.all.iowait%
1.11 +0.8 1.95 mpstat.cpu.all.sys%
18.33 ± 4% +134.5% 43.00 ± 13% mpstat.max_utilization.seconds
4.76 ± 3% +54.2% 7.34 ± 2% mpstat.max_utilization_pct
427754 ± 4% -15.7% 360658 ± 8% numa-numastat.node0.local_node
53543 ± 71% +115.3% 115274 ± 11% numa-numastat.node0.other_node
433291 ± 4% +16.0% 502556 ± 6% numa-numastat.node1.local_node
87012 ± 44% -72.9% 23607 ± 54% numa-numastat.node1.other_node
427339 ± 4% -15.7% 360284 ± 8% numa-vmstat.node0.numa_local
53543 ± 71% +115.3% 115274 ± 11% numa-vmstat.node0.numa_other
29247 ± 2% +19.3% 34891 ± 3% numa-vmstat.node1.nr_active_anon
31500 ± 4% +25.8% 39633 ± 5% numa-vmstat.node1.nr_shmem
29247 ± 2% +19.3% 34891 ± 3% numa-vmstat.node1.nr_zone_active_anon
432962 ± 4% +15.8% 501460 ± 6% numa-vmstat.node1.numa_local
87012 ± 44% -72.9% 23607 ± 54% numa-vmstat.node1.numa_other
14.95 -13.6% 12.92 filebench.sum_bytes_mb/s
261178 -16.4% 218311 filebench.sum_operations
4352 -16.4% 3638 filebench.sum_operations/s
1145 -16.4% 957.17 filebench.sum_reads/s
22.93 +19.6% 27.42 filebench.sum_time_ms/op
229.17 -16.4% 191.50 filebench.sum_writes/s
177.33 +4.0% 184.48 filebench.time.elapsed_time
177.33 +4.0% 184.48 filebench.time.elapsed_time.max
530197 -8.4% 485534 filebench.time.file_system_outputs
2085 ± 3% +85.6% 3871 filebench.time.involuntary_context_switches
87.67 +108.6% 182.83 filebench.time.percent_of_cpu_this_job_got
155.38 +117.4% 337.73 filebench.time.system_time
373095 +40.8% 525150 filebench.time.voluntary_context_switches
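The %change column is the relative delta of the patched commit (6c18ca82c3) against its parent (a493f1a10b); a minimal sketch of the arithmetic behind the headline number:

```python
def pct_change(parent: float, patched: float) -> float:
    """Relative change of the patched value vs. the parent baseline, in percent."""
    return (patched - parent) / parent * 100.0

# filebench.sum_operations/s from the table above: 4352 -> 3638 ops/s
print(round(pct_change(4352, 3638), 1))  # -16.4
```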
7689 ± 12% +155.9% 19679 ± 4% sched_debug.cfs_rq:/.avg_vruntime.avg
386.01 ± 56% +834.0% 3605 ± 49% sched_debug.cfs_rq:/.avg_vruntime.min
7689 ± 12% +155.9% 19679 ± 4% sched_debug.cfs_rq:/.min_vruntime.avg
386.01 ± 56% +834.0% 3605 ± 49% sched_debug.cfs_rq:/.min_vruntime.min
433.64 ± 19% -35.8% 278.31 ± 12% sched_debug.cfs_rq:/.removed.load_avg.max
86.64 ± 24% -31.6% 59.25 ± 23% sched_debug.cfs_rq:/.removed.load_avg.stddev
227.69 ± 18% -34.5% 149.18 ± 16% sched_debug.cfs_rq:/.removed.runnable_avg.max
37.03 ± 22% -28.9% 26.34 ± 24% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
227.53 ± 18% -34.5% 149.10 ± 17% sched_debug.cfs_rq:/.removed.util_avg.max
37.00 ± 22% -28.9% 26.29 ± 24% sched_debug.cfs_rq:/.removed.util_avg.stddev
186314 ± 76% +108.7% 388769 ± 8% sched_debug.cpu.avg_idle.min
300393 ± 81% -57.8% 126916 ± 9% sched_debug.cpu.avg_idle.stddev
10939 ± 19% +51.8% 16603 ± 6% sched_debug.cpu.nr_switches.avg
38505 ± 16% +52.2% 58598 ± 18% sched_debug.cpu.nr_switches.max
492.25 ± 26% +454.4% 2728 ± 40% sched_debug.cpu.nr_switches.min
0.32 ± 19% -35.5% 0.21 ± 10% sched_debug.cpu.nr_uninterruptible.avg
31486 +17.2% 36902 proc-vmstat.nr_active_anon
252034 -1.7% 247778 proc-vmstat.nr_anon_pages
144469 -9.1% 131275 proc-vmstat.nr_dirtied
259096 -1.6% 255042 proc-vmstat.nr_inactive_anon
27566 +1.5% 27977 proc-vmstat.nr_inactive_file
69963 ± 2% -9.4% 63387 proc-vmstat.nr_kernel_stack
40920 +13.7% 46536 proc-vmstat.nr_shmem
95280 -1.3% 93995 proc-vmstat.nr_slab_unreclaimable
116777 ± 2% -6.6% 109072 ± 3% proc-vmstat.nr_written
31486 +17.2% 36902 proc-vmstat.nr_zone_active_anon
259096 -1.6% 255042 proc-vmstat.nr_zone_inactive_anon
27566 +1.5% 27977 proc-vmstat.nr_zone_inactive_file
1021 +3.5% 1057 ± 2% proc-vmstat.numa_huge_pte_updates
157598 ± 5% +22.6% 193263 ± 8% proc-vmstat.numa_pages_migrated
666331 +5.0% 699975 proc-vmstat.pgfault
157598 ± 5% +22.6% 193263 ± 8% proc-vmstat.pgmigrate_success
31631 ± 2% +10.1% 34822 ± 4% proc-vmstat.pgreuse
1.117e+09 +17.5% 1.313e+09 perf-stat.i.branch-instructions
2.86 -0.0 2.81 perf-stat.i.branch-miss-rate%
15245638 +4.0% 15859539 perf-stat.i.branch-misses
56242131 +4.5% 58762772 perf-stat.i.cache-references
15103 +12.5% 16988 perf-stat.i.context-switches
1.62 +10.6% 1.80 perf-stat.i.cpi
6.063e+09 +59.6% 9.679e+09 perf-stat.i.cpu-cycles
255.96 +13.8% 291.33 perf-stat.i.cpu-migrations
2441 ± 2% +34.7% 3289 ± 4% perf-stat.i.cycles-between-cache-misses
4.996e+09 +15.7% 5.781e+09 perf-stat.i.instructions
0.69 -9.0% 0.63 perf-stat.i.ipc
0.43 ± 4% -10.8% 0.38 ± 4% perf-stat.overall.MPKI
1.36 -0.2 1.21 perf-stat.overall.branch-miss-rate%
1.21 +38.0% 1.67 perf-stat.overall.cpi
2853 ± 3% +54.7% 4414 ± 4% perf-stat.overall.cycles-between-cache-misses
0.82 -27.5% 0.60 perf-stat.overall.ipc
1.112e+09 +17.6% 1.308e+09 perf-stat.ps.branch-instructions
15163646 +4.0% 15775672 perf-stat.ps.branch-misses
55997377 +4.5% 58544094 perf-stat.ps.cache-references
15036 +12.6% 16924 perf-stat.ps.context-switches
6.038e+09 +59.7% 9.644e+09 perf-stat.ps.cpu-cycles
254.65 +13.9% 289.98 perf-stat.ps.cpu-migrations
4.974e+09 +15.8% 5.759e+09 perf-stat.ps.instructions
8.889e+11 +20.4% 1.07e+12 perf-stat.total.instructions
0.65 ± 6% -0.4 0.29 ±100% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.__flush_smp_call_function_queue.flush_smp_call_function_queue.do_idle.cpu_startup_entry
1.66 ± 13% -0.3 1.39 ± 8% perf-profile.calltrace.cycles-pp.evsel__read_counter.read_counters.process_interval.dispatch_events.cmd_stat
3.62 ± 7% +0.4 4.07 ± 2% perf-profile.calltrace.cycles-pp.tick_nohz_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
4.33 ± 6% +0.6 4.92 ± 5% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.50 ± 23% -0.5 1.00 ± 18% perf-profile.children.cycles-pp.walk_component
0.97 ± 27% -0.4 0.60 ± 19% perf-profile.children.cycles-pp.__lookup_slow
1.66 ± 13% -0.3 1.39 ± 8% perf-profile.children.cycles-pp.evsel__read_counter
0.66 ± 4% -0.2 0.50 ± 25% perf-profile.children.cycles-pp.sched_ttwu_pending
0.17 ± 38% -0.1 0.04 ± 80% perf-profile.children.cycles-pp.up_read
0.34 ± 22% -0.1 0.22 ± 31% perf-profile.children.cycles-pp.evlist__id2evsel
0.24 ± 39% -0.1 0.12 ± 39% perf-profile.children.cycles-pp.ct_kernel_exit_state
0.20 ± 46% -0.1 0.08 ± 75% perf-profile.children.cycles-pp.copy_page
0.19 ± 45% -0.1 0.08 ± 78% perf-profile.children.cycles-pp.vm_area_dup
0.14 ± 44% +0.1 0.28 ± 24% perf-profile.children.cycles-pp.task_work_run
0.08 ± 85% +0.1 0.23 ± 29% perf-profile.children.cycles-pp.__get_user_8
0.10 ± 56% +0.1 0.24 ± 21% perf-profile.children.cycles-pp.run_timer_softirq
0.11 ± 90% +0.2 0.26 ± 54% perf-profile.children.cycles-pp.rseq_ip_fixup
0.13 ± 53% +0.2 0.30 ± 13% perf-profile.children.cycles-pp.__run_timers
0.73 ± 15% +0.2 0.95 ± 11% perf-profile.children.cycles-pp.dequeue_entity
0.50 ± 32% +0.2 0.74 ± 9% perf-profile.children.cycles-pp.wp_page_copy
0.81 ± 19% +0.2 1.06 ± 15% perf-profile.children.cycles-pp.dequeue_task_fair
4.04 ± 5% +0.4 4.39 ± 3% perf-profile.children.cycles-pp.tick_nohz_handler
4.76 ± 5% +0.5 5.28 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.19 ± 51% -0.1 0.07 ± 71% perf-profile.self.cycles-pp.copy_page
0.15 ± 24% -0.1 0.08 ± 25% perf-profile.self.cycles-pp.do_idle
0.08 ± 85% +0.1 0.22 ± 32% perf-profile.self.cycles-pp.__get_user_8
0.04 ± 7% +25.1% 0.05 ± 8% perf-sched.sch_delay.avg.ms.io_schedule.folio_wait_bit_common.filemap_update_page.filemap_get_pages
0.05 ± 4% +83.8% 0.08 ± 3% perf-sched.sch_delay.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
0.03 ± 41% +150.6% 0.07 ± 39% perf-sched.sch_delay.avg.ms.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
0.04 ± 9% +42.9% 0.05 ± 6% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.cifs_call_async
0.09 ± 2% -26.7% 0.07 ± 17% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.kthread.ret_from_fork.ret_from_fork_asm
0.02 ± 3% +419.1% 0.10 ±111% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
0.04 ± 18% +69.0% 0.06 ± 18% perf-sched.sch_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
0.19 ± 13% +115.0% 0.42 ± 49% perf-sched.sch_delay.max.ms.__cond_resched.__kmalloc_noprof.cifs_strndup_to_utf16.cifs_convert_path_to_utf16.smb2_compound_op
0.61 ± 40% +206.7% 1.86 ± 36% perf-sched.sch_delay.max.ms.__cond_resched.dput.cifsFileInfo_put_final._cifsFileInfo_put.process_one_work
0.32 ± 10% +342.4% 1.43 ±163% perf-sched.sch_delay.max.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
0.28 ± 6% +685.7% 2.16 ±109% perf-sched.sch_delay.max.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
0.21 ± 76% +132.6% 0.48 ± 44% perf-sched.sch_delay.max.ms.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
0.17 ± 9% +28.1% 0.22 ± 5% perf-sched.sch_delay.max.ms.schedule_preempt_disabled.kthread.ret_from_fork.ret_from_fork_asm
0.70 ± 31% +112.4% 1.49 ± 74% perf-sched.sch_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
42.95 ± 2% -20.5% 34.15 ± 2% perf-sched.total_wait_and_delay.average.ms
100335 +11.8% 112159 perf-sched.total_wait_and_delay.count.ms
42.79 ± 2% -20.6% 33.98 ± 2% perf-sched.total_wait_time.average.ms
93.35 ± 12% -29.6% 65.67 ± 13% perf-sched.wait_and_delay.avg.ms.__cond_resched.__kmalloc_noprof.cifs_strndup_to_utf16.cifs_convert_path_to_utf16.smb2_compound_op
206.36 ± 6% +19.4% 246.43 ± 4% perf-sched.wait_and_delay.avg.ms.__cond_resched.kmem_cache_alloc_noprof.cifs_do_create.isra.0
52.82 ± 19% -51.7% 25.49 ± 17% perf-sched.wait_and_delay.avg.ms.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
0.21 ± 2% -12.6% 0.18 ± 4% perf-sched.wait_and_delay.avg.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
0.76 +17.7% 0.89 perf-sched.wait_and_delay.avg.ms.io_schedule.folio_wait_bit_common.filemap_update_page.filemap_get_pages
0.44 ± 5% +67.8% 0.73 ± 3% perf-sched.wait_and_delay.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
19.27 ± 2% +52.6% 29.42 ± 7% perf-sched.wait_and_delay.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
14.40 ± 11% +70.0% 24.48 ± 10% perf-sched.wait_and_delay.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
0.35 ± 2% -20.0% 0.28 ± 2% perf-sched.wait_and_delay.avg.ms.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
20.75 -61.4% 8.02 ± 2% perf-sched.wait_and_delay.avg.ms.wait_for_response.compound_send_recv.smb2_compound_op.smb2_query_path_info
0.42 ± 2% +11.0% 0.46 ± 3% perf-sched.wait_and_delay.avg.ms.wait_for_response.compound_send_recv.smb2_compound_op.smb2_unlink
22.05 ± 5% +22.6% 27.04 ± 2% perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
93.17 ± 4% +117.5% 202.67 ± 7% perf-sched.wait_and_delay.count.__cond_resched.__kmalloc_noprof.cifs_strndup_to_utf16.cifs_convert_path_to_utf16.smb2_compound_op
41.67 ± 24% +144.8% 102.00 ± 14% perf-sched.wait_and_delay.count.__cond_resched.cancel_work_sync._cifsFileInfo_put.process_one_work.worker_thread
1458 ± 4% +86.4% 2718 perf-sched.wait_and_delay.count.__lock_sock.lock_sock_nested.tcp_recvmsg.inet6_recvmsg
282.67 ± 70% +155.7% 722.67 ± 6% perf-sched.wait_and_delay.count.__lock_sock.lock_sock_nested.tcp_sendmsg.sock_sendmsg
703.00 ± 2% +49.2% 1049 ± 4% perf-sched.wait_and_delay.count.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
1280 ± 4% +52.2% 1948 ± 2% perf-sched.wait_and_delay.count.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
8403 ± 2% -22.3% 6525 ± 2% perf-sched.wait_and_delay.count.futex_wait_queue.__futex_wait.futex_wait.do_futex
5561 -17.5% 4590 perf-sched.wait_and_delay.count.io_schedule.folio_wait_bit_common.filemap_update_page.filemap_get_pages
1115 -15.6% 941.67 perf-sched.wait_and_delay.count.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
1118 -14.8% 952.83 perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
1111 -16.0% 934.00 perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
196.83 ± 22% +62.2% 319.17 ± 24% perf-sched.wait_and_delay.count.schedule_timeout.__wait_for_common.wait_for_completion_killable.__kthread_create_on_node
15475 ± 2% +19.7% 18526 perf-sched.wait_and_delay.count.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
6670 -17.1% 5528 perf-sched.wait_and_delay.count.wait_for_response.compound_send_recv.cifs_send_recv.SMB2_open
6713 -16.3% 5617 perf-sched.wait_and_delay.count.wait_for_response.compound_send_recv.cifs_send_recv.__SMB2_close
1112 -15.6% 939.17 perf-sched.wait_and_delay.count.wait_for_response.compound_send_recv.cifs_send_recv.query_info
9037 +139.4% 21632 perf-sched.wait_and_delay.count.wait_for_response.compound_send_recv.smb2_compound_op.smb2_query_path_info
1116 -15.6% 942.00 perf-sched.wait_and_delay.count.wait_for_response.compound_send_recv.smb2_compound_op.smb2_unlink
16955 -11.5% 15012 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
253.45 ± 2% +16.9% 296.20 ± 2% perf-sched.wait_and_delay.max.ms.__cond_resched.__kmalloc_noprof.cifs_strndup_to_utf16.cifs_convert_path_to_utf16.smb2_compound_op
251.26 ± 2% +19.1% 299.18 ± 2% perf-sched.wait_and_delay.max.ms.__cond_resched.kmem_cache_alloc_noprof.cifs_do_create.isra.0
252.33 ± 2% +19.8% 302.34 ± 2% perf-sched.wait_and_delay.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
252.86 ± 2% +18.1% 298.54 ± 2% perf-sched.wait_and_delay.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
257.81 ± 2% +18.5% 305.46 ± 2% perf-sched.wait_and_delay.max.ms.wait_for_response.compound_send_recv.cifs_send_recv.SMB2_open
257.83 ± 2% +18.6% 305.78 ± 2% perf-sched.wait_and_delay.max.ms.wait_for_response.compound_send_recv.smb2_compound_op.smb2_query_path_info
93.27 ± 12% -29.7% 65.60 ± 13% perf-sched.wait_time.avg.ms.__cond_resched.__kmalloc_noprof.cifs_strndup_to_utf16.cifs_convert_path_to_utf16.smb2_compound_op
206.26 ± 6% +19.4% 246.33 ± 4% perf-sched.wait_time.avg.ms.__cond_resched.kmem_cache_alloc_noprof.cifs_do_create.isra.0
45.44 ± 9% -49.3% 23.03 ± 17% perf-sched.wait_time.avg.ms.__lock_sock.lock_sock_nested.tcp_sendmsg.sock_sendmsg
52.52 ± 19% -51.6% 25.45 ± 17% perf-sched.wait_time.avg.ms.__lock_sock.lock_sock_nested.tcp_sock_set_cork.__smb_send_rqst
0.14 ± 2% -16.0% 0.12 ± 3% perf-sched.wait_time.avg.ms.__lock_sock.sk_wait_data.tcp_recvmsg_locked.tcp_recvmsg
0.72 +17.2% 0.84 perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.filemap_update_page.filemap_get_pages
0.39 ± 5% +65.8% 0.65 ± 3% perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
27.92 ± 20% -38.2% 17.24 ± 26% perf-sched.wait_time.avg.ms.kthreadd.ret_from_fork.ret_from_fork_asm
0.65 ± 6% +40.2% 0.92 perf-sched.wait_time.avg.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.cifs_call_async
0.19 ± 45% +679.3% 1.49 ±139% perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
19.25 ± 2% +52.3% 29.32 ± 7% perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
14.32 ± 11% +70.2% 24.38 ± 11% perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
0.27 -24.1% 0.21 ± 2% perf-sched.wait_time.avg.ms.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg_locked
20.69 -61.5% 7.96 ± 2% perf-sched.wait_time.avg.ms.wait_for_response.compound_send_recv.smb2_compound_op.smb2_query_path_info
0.34 ± 2% +12.5% 0.38 ± 3% perf-sched.wait_time.avg.ms.wait_for_response.compound_send_recv.smb2_compound_op.smb2_unlink
21.95 ± 5% +22.1% 26.79 ± 2% perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
253.37 ± 2% +16.9% 296.11 ± 2% perf-sched.wait_time.max.ms.__cond_resched.__kmalloc_noprof.cifs_strndup_to_utf16.cifs_convert_path_to_utf16.smb2_compound_op
1.97 ± 2% +11.7% 2.20 ± 7% perf-sched.wait_time.max.ms.__cond_resched.cifs_demultiplex_thread.kthread.ret_from_fork.ret_from_fork_asm
251.18 ± 2% +19.1% 299.06 ± 2% perf-sched.wait_time.max.ms.__cond_resched.kmem_cache_alloc_noprof.cifs_do_create.isra.0
5.19 ± 78% +2302.7% 124.73 ±144% perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
252.23 ± 2% +19.8% 302.20 ± 2% perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
252.74 ± 2% +18.1% 298.41 ± 2% perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.open_last_lookups
669.39 ± 69% -50.0% 334.44 ±140% perf-sched.wait_time.max.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
257.71 ± 2% +18.5% 305.36 ± 2% perf-sched.wait_time.max.ms.wait_for_response.compound_send_recv.cifs_send_recv.SMB2_open
257.72 ± 2% +18.6% 305.67 ± 2% perf-sched.wait_time.max.ms.wait_for_response.compound_send_recv.smb2_compound_op.smb2_query_path_info
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki