* [lkp] [xfs] fbcc025613: -5.6% fsmark.files_per_sec
@ 2016-02-19 2:52 kernel test robot
2016-02-19 6:49 ` Dave Chinner
0 siblings, 1 reply; 4+ messages in thread
From: kernel test robot @ 2016-02-19 2:52 UTC (permalink / raw)
To: Dave Chinner; +Cc: lkp, LKML, Christoph Hellwig, Dave Chinner
[-- Attachment #1: Type: text/plain, Size: 37177 bytes --]
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit fbcc025613590d7b1d15521555dcc6393a148a6b ("xfs: Introduce writeback context for writepages")
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/md/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/8BRD_12G/4M/xfs/1x/x86_64-rhel/RAID0/1t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-hsx02/60G/fsmark
commit:
150d5be09ce49a9bed6feb7b7dc4e5ae188778ec
fbcc025613590d7b1d15521555dcc6393a148a6b
150d5be09ce49a9b fbcc025613590d7b1d15521555
---------------- --------------------------
%stddev %change %stddev
\ | \
36122 ± 0% -57.4% 15404 ± 0% fsmark.time.involuntary_context_switches
95.30 ± 0% +1.8% 97.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
25339 ± 32% +5756.0% 1483901 ± 1% fsmark.time.voluntary_context_switches
881.80 ± 45% +14258.5% 126613 ± 10% latency_stats.hits.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
3548 ± 48% +11967.4% 428200 ± 7% latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
36122 ± 0% -57.4% 15404 ± 0% time.involuntary_context_switches
25339 ± 32% +5756.0% 1483901 ± 1% time.voluntary_context_switches
0.75 ± 0% +6.3% 0.80 ± 0% turbostat.%Busy
22.00 ± 0% +4.5% 23.00 ± 0% turbostat.Avg_MHz
4991 ± 4% +709.3% 40394 ± 3% vmstat.system.cs
1578 ± 1% +5.4% 1663 ± 0% vmstat.system.in
2856 ± 10% -33.7% 1894 ± 10% slabinfo.file_lock_cache.active_objs
2856 ± 10% -33.7% 1894 ± 10% slabinfo.file_lock_cache.num_objs
10424 ± 15% -65.8% 3561 ± 35% slabinfo.scsi_data_buffer.active_objs
10424 ± 15% -65.8% 3561 ± 35% slabinfo.scsi_data_buffer.num_objs
3978 ± 15% -65.4% 1375 ± 37% slabinfo.xfs_da_state.active_objs
3978 ± 15% -65.4% 1375 ± 37% slabinfo.xfs_da_state.num_objs
2452 ± 15% -65.8% 837.70 ± 35% slabinfo.xfs_efd_item.active_objs
2452 ± 15% -65.8% 837.70 ± 35% slabinfo.xfs_efd_item.num_objs
10181 ± 3% -17.4% 8414 ± 3% slabinfo.xfs_ili.active_objs
10181 ± 3% -17.4% 8414 ± 3% slabinfo.xfs_ili.num_objs
2819 ± 13% -62.6% 1054 ± 32% slabinfo.xfs_log_ticket.active_objs
2819 ± 13% -62.6% 1054 ± 32% slabinfo.xfs_log_ticket.num_objs
4369 ± 14% -64.3% 1559 ± 33% slabinfo.xfs_trans.active_objs
4369 ± 14% -64.3% 1559 ± 33% slabinfo.xfs_trans.num_objs
154727 ± 25% +56.9% 242693 ± 9% sched_debug.cpu.avg_idle.stddev
3668 ± 4% +43.9% 5278 ± 27% sched_debug.cpu.nr_load_updates.103
3752 ± 3% +44.3% 5414 ± 31% sched_debug.cpu.nr_load_updates.104
3721 ± 3% +45.8% 5425 ± 28% sched_debug.cpu.nr_load_updates.105
3734 ± 2% +39.5% 5210 ± 28% sched_debug.cpu.nr_load_updates.106
3738 ± 5% +38.4% 5175 ± 29% sched_debug.cpu.nr_load_updates.107
3719 ± 3% +39.0% 5169 ± 27% sched_debug.cpu.nr_load_updates.108
3653 ± 3% +40.9% 5148 ± 27% sched_debug.cpu.nr_load_updates.109
5678 ± 14% +19.2% 6768 ± 21% sched_debug.cpu.nr_load_updates.11
3664 ± 2% +40.0% 5131 ± 28% sched_debug.cpu.nr_load_updates.110
3607 ± 4% +43.5% 5178 ± 28% sched_debug.cpu.nr_load_updates.111
3647 ± 2% +41.7% 5169 ± 28% sched_debug.cpu.nr_load_updates.112
3699 ± 4% +40.9% 5212 ± 28% sched_debug.cpu.nr_load_updates.113
3595 ± 3% +41.3% 5080 ± 29% sched_debug.cpu.nr_load_updates.114
3545 ± 5% +42.7% 5058 ± 29% sched_debug.cpu.nr_load_updates.115
3654 ± 3% +39.3% 5090 ± 28% sched_debug.cpu.nr_load_updates.117
3627 ± 4% +39.4% 5058 ± 28% sched_debug.cpu.nr_load_updates.118
3612 ± 4% +42.1% 5135 ± 28% sched_debug.cpu.nr_load_updates.119
3583 ± 4% +40.3% 5028 ± 29% sched_debug.cpu.nr_load_updates.120
3594 ± 2% +42.2% 5109 ± 29% sched_debug.cpu.nr_load_updates.121
3553 ± 4% +43.0% 5082 ± 29% sched_debug.cpu.nr_load_updates.122
3622 ± 4% +38.1% 5002 ± 29% sched_debug.cpu.nr_load_updates.123
3551 ± 3% +31.3% 4664 ± 22% sched_debug.cpu.nr_load_updates.124
3552 ± 3% +41.3% 5018 ± 30% sched_debug.cpu.nr_load_updates.125
3512 ± 8% +44.3% 5066 ± 29% sched_debug.cpu.nr_load_updates.126
3489 ± 6% +42.9% 4987 ± 29% sched_debug.cpu.nr_load_updates.127
3483 ± 6% +42.6% 4966 ± 29% sched_debug.cpu.nr_load_updates.128
3612 ± 5% +38.6% 5007 ± 29% sched_debug.cpu.nr_load_updates.129
3484 ± 7% +43.7% 5007 ± 29% sched_debug.cpu.nr_load_updates.130
3507 ± 3% +42.2% 4986 ± 29% sched_debug.cpu.nr_load_updates.131
3483 ± 3% +44.4% 5029 ± 29% sched_debug.cpu.nr_load_updates.132
3551 ± 4% +40.3% 4984 ± 28% sched_debug.cpu.nr_load_updates.134
3519 ± 3% +38.5% 4876 ± 30% sched_debug.cpu.nr_load_updates.135
3456 ± 3% +43.5% 4960 ± 30% sched_debug.cpu.nr_load_updates.136
3462 ± 4% +42.3% 4925 ± 29% sched_debug.cpu.nr_load_updates.137
3469 ± 5% +43.5% 4980 ± 29% sched_debug.cpu.nr_load_updates.138
3435 ± 5% +43.6% 4933 ± 29% sched_debug.cpu.nr_load_updates.139
3535 ± 4% +37.8% 4873 ± 28% sched_debug.cpu.nr_load_updates.140
3470 ± 5% +41.4% 4906 ± 29% sched_debug.cpu.nr_load_updates.141
3461 ± 5% +40.9% 4876 ± 29% sched_debug.cpu.nr_load_updates.142
3508 ± 6% +41.4% 4960 ± 29% sched_debug.cpu.nr_load_updates.143
5052 ± 11% +32.9% 6714 ± 21% sched_debug.cpu.nr_load_updates.39
4872 ± 13% +29.7% 6321 ± 20% sched_debug.cpu.nr_load_updates.54
5037 ± 14% +28.5% 6472 ± 20% sched_debug.cpu.nr_load_updates.56
5937 ± 12% +22.0% 7244 ± 20% sched_debug.cpu.nr_load_updates.6
4966 ± 18% +29.9% 6451 ± 19% sched_debug.cpu.nr_load_updates.63
4972 ± 17% +29.4% 6436 ± 19% sched_debug.cpu.nr_load_updates.64
4892 ± 15% +30.3% 6373 ± 20% sched_debug.cpu.nr_load_updates.65
4764 ± 20% +35.5% 6453 ± 18% sched_debug.cpu.nr_load_updates.66
4818 ± 14% +31.0% 6313 ± 19% sched_debug.cpu.nr_load_updates.67
4764 ± 14% +35.1% 6438 ± 26% sched_debug.cpu.nr_load_updates.68
4773 ± 15% +32.0% 6302 ± 20% sched_debug.cpu.nr_load_updates.69
4634 ± 15% +35.5% 6280 ± 20% sched_debug.cpu.nr_load_updates.70
4684 ± 14% +33.6% 6258 ± 21% sched_debug.cpu.nr_load_updates.71
3912 ± 6% +39.5% 5459 ± 25% sched_debug.cpu.nr_load_updates.72
4007 ± 4% +37.3% 5502 ± 27% sched_debug.cpu.nr_load_updates.73
3937 ± 3% +33.3% 5246 ± 22% sched_debug.cpu.nr_load_updates.74
3935 ± 4% +37.8% 5425 ± 25% sched_debug.cpu.nr_load_updates.75
3868 ± 4% +39.1% 5382 ± 27% sched_debug.cpu.nr_load_updates.76
3928 ± 3% +38.3% 5432 ± 26% sched_debug.cpu.nr_load_updates.77
3846 ± 4% +37.7% 5295 ± 27% sched_debug.cpu.nr_load_updates.78
3966 ± 3% +34.4% 5330 ± 26% sched_debug.cpu.nr_load_updates.79
5758 ± 15% +21.9% 7017 ± 19% sched_debug.cpu.nr_load_updates.8
3942 ± 5% +36.4% 5377 ± 26% sched_debug.cpu.nr_load_updates.81
3921 ± 3% +37.1% 5376 ± 26% sched_debug.cpu.nr_load_updates.82
3884 ± 3% +38.0% 5359 ± 26% sched_debug.cpu.nr_load_updates.83
3886 ± 5% +37.5% 5342 ± 27% sched_debug.cpu.nr_load_updates.84
3826 ± 5% +37.5% 5259 ± 24% sched_debug.cpu.nr_load_updates.85
3909 ± 2% +35.6% 5299 ± 27% sched_debug.cpu.nr_load_updates.86
3783 ± 3% +40.8% 5328 ± 27% sched_debug.cpu.nr_load_updates.88
5676 ± 15% +21.8% 6915 ± 20% sched_debug.cpu.nr_load_updates.9
3773 ± 3% +40.6% 5303 ± 27% sched_debug.cpu.nr_load_updates.92
3775 ± 2% +39.3% 5258 ± 28% sched_debug.cpu.nr_load_updates.93
3817 ± 5% +39.5% 5323 ± 27% sched_debug.cpu.nr_load_updates.94
3798 ± 4% +40.0% 5318 ± 29% sched_debug.cpu.nr_load_updates.95
3776 ± 3% +39.6% 5271 ± 27% sched_debug.cpu.nr_load_updates.96
3752 ± 4% +38.9% 5211 ± 27% sched_debug.cpu.nr_load_updates.97
4501 ± 2% +31.5% 5920 ± 24% sched_debug.cpu.nr_load_updates.avg
1642 ± 4% +552.3% 10714 ± 3% sched_debug.cpu.nr_switches.avg
12081 ± 14% +726.7% 99874 ± 12% sched_debug.cpu.nr_switches.max
2088 ± 11% +1143.5% 25972 ± 6% sched_debug.cpu.nr_switches.stddev
671.69 ± 4% +679.1% 5233 ± 3% sched_debug.cpu.sched_goidle.avg
4936 ± 18% +877.5% 48256 ± 12% sched_debug.cpu.sched_goidle.max
816.17 ± 12% +1457.0% 12707 ± 6% sched_debug.cpu.sched_goidle.stddev
838.29 ± 4% +543.4% 5393 ± 3% sched_debug.cpu.ttwu_count.avg
9306 ± 9% +461.0% 52212 ± 11% sched_debug.cpu.ttwu_count.max
1333 ± 8% +899.5% 13325 ± 6% sched_debug.cpu.ttwu_count.stddev
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1BRD_48G/4M/xfs/1x/x86_64-rhel/1t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-hsx04/40G/fsmark
commit:
150d5be09ce49a9bed6feb7b7dc4e5ae188778ec
fbcc025613590d7b1d15521555dcc6393a148a6b
150d5be09ce49a9b fbcc025613590d7b1d15521555
---------------- --------------------------
%stddev %change %stddev
\ | \
296253 ± 0% +7.3% 317785 ± 0% fsmark.app_overhead
260.88 ± 0% -5.6% 246.35 ± 0% fsmark.files_per_sec
28610 ± 0% -64.0% 10291 ± 0% fsmark.time.involuntary_context_switches
94.50 ± 0% +2.6% 97.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
13641 ± 20% +5289.6% 735225 ± 1% fsmark.time.voluntary_context_switches
393.50 ± 3% -27.8% 284.00 ± 14% proc-vmstat.nr_writeback
179855 ± 88% -99.3% 1304 ± 12% proc-vmstat.pgalloc_dma32
28610 ± 0% -64.0% 10291 ± 0% time.involuntary_context_switches
13641 ± 20% +5289.6% 735225 ± 1% time.voluntary_context_switches
1010090 ± 1% -5.1% 958337 ± 0% vmstat.io.bo
5530 ± 2% +578.6% 37527 ± 0% vmstat.system.cs
1283915 ± 0% +95.7% 2512835 ± 81% latency_stats.avg.async_synchronize_cookie_domain.async_synchronize_full.do_init_module.load_module.SyS_finit_module.entry_SYSCALL_64_fastpath
356.00 ± 57% +20640.0% 73834 ± 17% latency_stats.hits.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1679 ± 63% +14400.8% 243468 ± 1% latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
10257570 ± 97% -99.6% 36846 ± 5% numa-numastat.node0.local_node
10257571 ± 97% -99.6% 36847 ± 5% numa-numastat.node0.numa_hit
5159497 ±166% -99.3% 34470 ± 5% numa-numastat.node3.local_node
5159497 ±166% -99.3% 34470 ± 5% numa-numastat.node3.numa_hit
0.97 ± 13% +289.7% 3.80 ± 5% turbostat.%Busy
31.25 ± 14% +255.2% 111.00 ± 5% turbostat.Avg_MHz
3221 ± 0% -9.3% 2923 ± 0% turbostat.Bzy_MHz
2.30 ± 61% +252.2% 8.11 ± 43% turbostat.CPU%c1
193.92 ± 1% +9.2% 211.83 ± 1% turbostat.PkgWatt
898.00 ± 35% +68.3% 1511 ± 12% cpuidle.C1-HSW.usage
27387698 ±168% +439.5% 1.477e+08 ± 94% cpuidle.C1E-HSW.time
147.25 ± 38% +1074.2% 1729 ± 23% cpuidle.C3-HSW.usage
67775 ± 4% +14.8% 77808 ± 3% cpuidle.C6-HSW.usage
12851618 ± 57% +1331.2% 1.839e+08 ± 6% cpuidle.POLL.time
6455 ± 3% +891.3% 63993 ± 18% cpuidle.POLL.usage
2581 ± 6% -47.0% 1367 ± 14% slabinfo.file_lock_cache.active_objs
2581 ± 6% -47.0% 1367 ± 14% slabinfo.file_lock_cache.num_objs
9911 ± 12% -64.0% 3572 ± 33% slabinfo.scsi_data_buffer.active_objs
9911 ± 12% -64.0% 3572 ± 33% slabinfo.scsi_data_buffer.num_objs
3834 ± 14% -64.6% 1359 ± 36% slabinfo.xfs_da_state.active_objs
3834 ± 14% -64.6% 1359 ± 36% slabinfo.xfs_da_state.num_objs
2332 ± 12% -64.0% 840.25 ± 33% slabinfo.xfs_efd_item.active_objs
2332 ± 12% -64.0% 840.25 ± 33% slabinfo.xfs_efd_item.num_objs
7323 ± 3% -25.7% 5443 ± 4% slabinfo.xfs_ili.active_objs
7323 ± 3% -25.7% 5443 ± 4% slabinfo.xfs_ili.num_objs
6149 ± 1% -15.0% 5225 ± 1% slabinfo.xfs_inode.active_objs
6149 ± 1% -15.0% 5225 ± 1% slabinfo.xfs_inode.num_objs
2817 ± 11% -64.7% 995.50 ± 29% slabinfo.xfs_log_ticket.active_objs
2817 ± 11% -64.7% 995.50 ± 29% slabinfo.xfs_log_ticket.num_objs
4189 ± 11% -63.6% 1523 ± 30% slabinfo.xfs_trans.active_objs
4189 ± 11% -63.6% 1523 ± 30% slabinfo.xfs_trans.num_objs
2431648 ± 95% -100.0% 66.25 ± 57% numa-vmstat.node0.nr_dirtied
2461928 ± 93% -98.8% 29788 ± 0% numa-vmstat.node0.nr_file_pages
671.50 ±126% -87.1% 86.75 ± 14% numa-vmstat.node0.nr_inactive_anon
2455143 ± 94% -99.0% 23476 ± 1% numa-vmstat.node0.nr_inactive_file
690.75 ±122% -84.8% 105.25 ± 13% numa-vmstat.node0.nr_shmem
78482 ± 90% -94.9% 4022 ± 14% numa-vmstat.node0.nr_slab_reclaimable
12214 ± 7% -11.5% 10804 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
2431323 ± 95% -100.0% 60.25 ± 57% numa-vmstat.node0.nr_written
5105548 ± 91% -96.3% 188791 ± 7% numa-vmstat.node0.numa_hit
5072639 ± 92% -96.9% 155667 ± 8% numa-vmstat.node0.numa_local
1506 ± 49% -49.5% 761.50 ± 1% numa-vmstat.node1.nr_mapped
665.00 ±130% +154.7% 1693 ± 49% numa-vmstat.node2.nr_shmem
1190411 ±166% -100.0% 1.00 ±100% numa-vmstat.node3.nr_dirtied
1220156 ±161% -97.5% 30211 ± 2% numa-vmstat.node3.nr_file_pages
1213900 ±162% -98.1% 23410 ± 0% numa-vmstat.node3.nr_inactive_file
40402 ±149% -91.3% 3503 ± 13% numa-vmstat.node3.nr_slab_reclaimable
1190259 ±166% -100.0% 0.50 ±100% numa-vmstat.node3.nr_written
9848197 ± 93% -98.8% 119153 ± 0% numa-meminfo.node0.FilePages
9823758 ± 94% -99.0% 94254 ± 0% numa-meminfo.node0.Inactive
2686 ±126% -87.0% 348.50 ± 14% numa-meminfo.node0.Inactive(anon)
9821071 ± 94% -99.0% 93906 ± 1% numa-meminfo.node0.Inactive(file)
20295204 ± 92% -97.1% 584915 ± 9% numa-meminfo.node0.MemUsed
313945 ± 90% -94.9% 16093 ± 14% numa-meminfo.node0.SReclaimable
48861 ± 7% -11.5% 43225 ± 2% numa-meminfo.node0.SUnreclaim
2765 ±122% -84.7% 422.25 ± 13% numa-meminfo.node0.Shmem
362807 ± 78% -83.7% 59318 ± 4% numa-meminfo.node0.Slab
6028 ± 49% -49.5% 3045 ± 1% numa-meminfo.node1.Mapped
2660 ±130% +154.7% 6776 ± 49% numa-meminfo.node2.Shmem
4881563 ±161% -97.5% 120847 ± 2% numa-meminfo.node3.FilePages
4856947 ±162% -98.0% 95995 ± 4% numa-meminfo.node3.Inactive
4856548 ±162% -98.1% 93643 ± 0% numa-meminfo.node3.Inactive(file)
10148394 ±158% -95.3% 475091 ± 3% numa-meminfo.node3.MemUsed
669.25 ± 4% +22.5% 819.50 ± 13% numa-meminfo.node3.PageTables
161638 ±149% -91.3% 14017 ± 13% numa-meminfo.node3.SReclaimable
204385 ±117% -73.0% 55109 ± 4% numa-meminfo.node3.Slab
2473 ± 82% -87.6% 305.96 ±122% sched_debug.cfs_rq:/.exec_clock.102
321.21 ± 65% -74.9% 80.77 ± 61% sched_debug.cfs_rq:/.exec_clock.103
327.77 ± 55% -53.0% 154.12 ± 35% sched_debug.cfs_rq:/.exec_clock.106
329.30 ± 59% -67.4% 107.49 ± 41% sched_debug.cfs_rq:/.exec_clock.109
299.92 ± 44% -68.7% 93.76 ± 62% sched_debug.cfs_rq:/.exec_clock.120
304.61 ± 61% -64.5% 108.23 ± 51% sched_debug.cfs_rq:/.exec_clock.123
317.37 ± 49% -57.8% 133.97 ± 52% sched_debug.cfs_rq:/.exec_clock.124
346.76 ± 24% -48.7% 177.98 ± 28% sched_debug.cfs_rq:/.exec_clock.13
270.01 ± 25% -46.2% 145.32 ± 42% sched_debug.cfs_rq:/.exec_clock.131
317.15 ± 35% -61.8% 121.14 ± 43% sched_debug.cfs_rq:/.exec_clock.141
515.18 ± 35% -62.2% 194.89 ± 45% sched_debug.cfs_rq:/.exec_clock.19
1740 ±117% -88.5% 200.72 ± 18% sched_debug.cfs_rq:/.exec_clock.22
442.34 ± 39% -43.0% 252.26 ± 26% sched_debug.cfs_rq:/.exec_clock.25
335.97 ± 11% -35.5% 216.59 ± 36% sched_debug.cfs_rq:/.exec_clock.3
478.25 ± 42% -48.9% 244.26 ± 39% sched_debug.cfs_rq:/.exec_clock.40
369.23 ± 31% -35.8% 237.02 ± 34% sched_debug.cfs_rq:/.exec_clock.41
465.87 ± 47% -55.3% 208.45 ± 23% sched_debug.cfs_rq:/.exec_clock.43
381.79 ± 38% -53.6% 177.20 ± 25% sched_debug.cfs_rq:/.exec_clock.50
347.70 ± 16% -51.2% 169.70 ± 32% sched_debug.cfs_rq:/.exec_clock.71
284.55 ± 41% -66.0% 96.64 ± 89% sched_debug.cfs_rq:/.exec_clock.74
368.83 ± 55% -70.5% 108.91 ± 61% sched_debug.cfs_rq:/.exec_clock.79
1299 ±135% -95.2% 61.80 ± 30% sched_debug.cfs_rq:/.exec_clock.92
13.00 ±136% +501.9% 78.25 ± 65% sched_debug.cfs_rq:/.load_avg.140
1.33 ± 93% +17168.8% 230.25 ±167% sched_debug.cfs_rq:/.load_avg.25
2.00 ± 61% +2912.5% 60.25 ± 88% sched_debug.cfs_rq:/.load_avg.39
2.00 ± 70% +1550.0% 33.00 ±134% sched_debug.cfs_rq:/.load_avg.41
91.75 ±112% -75.5% 22.50 ±163% sched_debug.cfs_rq:/.load_avg.55
3.75 ± 51% +5740.0% 219.00 ± 89% sched_debug.cfs_rq:/.load_avg.60
1.25 ± 34% +18240.0% 229.25 ±146% sched_debug.cfs_rq:/.load_avg.7
53.75 ± 75% -98.6% 0.75 ± 57% sched_debug.cfs_rq:/.load_avg.98
50.61 ± 17% -24.6% 38.15 ± 15% sched_debug.cfs_rq:/.load_avg.avg
42372 ± 29% -52.5% 20110 ± 4% sched_debug.cfs_rq:/.min_vruntime.0
24964 ± 33% -49.5% 12599 ± 52% sched_debug.cfs_rq:/.min_vruntime.10
28449 ± 61% -77.0% 6548 ± 53% sched_debug.cfs_rq:/.min_vruntime.102
25429 ± 68% -78.1% 5579 ± 97% sched_debug.cfs_rq:/.min_vruntime.103
22244 ± 32% -61.8% 8497 ± 77% sched_debug.cfs_rq:/.min_vruntime.108
24582 ± 65% -72.6% 6733 ± 65% sched_debug.cfs_rq:/.min_vruntime.109
23957 ± 73% -71.4% 6853 ± 83% sched_debug.cfs_rq:/.min_vruntime.110
23019 ± 66% -64.7% 8134 ± 52% sched_debug.cfs_rq:/.min_vruntime.118
24409 ± 62% -63.1% 9018 ± 39% sched_debug.cfs_rq:/.min_vruntime.119
20859 ± 33% -56.4% 9085 ± 60% sched_debug.cfs_rq:/.min_vruntime.12
22914 ± 53% -80.8% 4408 ±132% sched_debug.cfs_rq:/.min_vruntime.120
24011 ± 64% -67.6% 7783 ± 66% sched_debug.cfs_rq:/.min_vruntime.122
25835 ± 56% -72.5% 7102 ± 53% sched_debug.cfs_rq:/.min_vruntime.125
22529 ± 35% -61.7% 8628 ± 45% sched_debug.cfs_rq:/.min_vruntime.13
22852 ± 48% -69.8% 6908 ± 55% sched_debug.cfs_rq:/.min_vruntime.141
24299 ± 46% -57.3% 10375 ± 46% sched_debug.cfs_rq:/.min_vruntime.17
32100 ± 43% -66.6% 10732 ± 47% sched_debug.cfs_rq:/.min_vruntime.19
29947 ± 59% -59.7% 12061 ± 42% sched_debug.cfs_rq:/.min_vruntime.22
21889 ± 57% -51.5% 10609 ± 27% sched_debug.cfs_rq:/.min_vruntime.25
19883 ± 18% -61.4% 7677 ± 80% sched_debug.cfs_rq:/.min_vruntime.3
30560 ± 66% -70.7% 8942 ± 66% sched_debug.cfs_rq:/.min_vruntime.34
23910 ± 38% -63.1% 8818 ± 52% sched_debug.cfs_rq:/.min_vruntime.4
24186 ± 45% -59.2% 9879 ± 35% sched_debug.cfs_rq:/.min_vruntime.41
30645 ± 61% -66.1% 10385 ± 50% sched_debug.cfs_rq:/.min_vruntime.43
26137 ± 59% -57.6% 11071 ± 42% sched_debug.cfs_rq:/.min_vruntime.50
24390 ± 66% -71.2% 7013 ± 41% sched_debug.cfs_rq:/.min_vruntime.58
24634 ± 41% -59.0% 10091 ± 40% sched_debug.cfs_rq:/.min_vruntime.61
27874 ± 54% -61.7% 10665 ± 52% sched_debug.cfs_rq:/.min_vruntime.65
23946 ± 75% -60.4% 9478 ± 36% sched_debug.cfs_rq:/.min_vruntime.66
21961 ± 49% -50.8% 10800 ± 38% sched_debug.cfs_rq:/.min_vruntime.70
21587 ± 34% -72.6% 5924 ± 43% sched_debug.cfs_rq:/.min_vruntime.71
23441 ± 74% -63.4% 8574 ± 58% sched_debug.cfs_rq:/.min_vruntime.73
17619 ± 53% -74.4% 4505 ±122% sched_debug.cfs_rq:/.min_vruntime.74
25305 ± 59% -75.4% 6232 ± 84% sched_debug.cfs_rq:/.min_vruntime.79
11691 ± 3% -54.6% 5307 ± 78% sched_debug.cfs_rq:/.min_vruntime.82
21995 ± 49% -83.0% 3735 ± 67% sched_debug.cfs_rq:/.min_vruntime.92
16851 ± 11% -59.4% 6849 ± 72% sched_debug.cfs_rq:/.min_vruntime.95
21138 ± 51% -51.7% 10211 ± 36% sched_debug.cfs_rq:/.min_vruntime.avg
0.05 ± 17% -46.2% 0.02 ± 31% sched_debug.cfs_rq:/.nr_running.avg
0.21 ± 8% -26.6% 0.15 ± 16% sched_debug.cfs_rq:/.nr_running.stddev
1.65 ± 40% -59.2% 0.68 ± 20% sched_debug.cfs_rq:/.runnable_load_avg.avg
101.50 ± 96% -78.8% 21.50 ± 28% sched_debug.cfs_rq:/.runnable_load_avg.max
9.14 ± 85% -71.8% 2.57 ± 21% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-25583 ±-20% -63.9% -9234 ±-45% sched_debug.cfs_rq:/.spread0.1
-18990 ±-29% -49.9% -9510 ±-52% sched_debug.cfs_rq:/.spread0.100
-32882 ±-53% -67.3% -10737 ±-42% sched_debug.cfs_rq:/.spread0.104
-18188 ±-31% -44.1% -10169 ±-38% sched_debug.cfs_rq:/.spread0.105
-31547 ±-55% -66.3% -10623 ±-34% sched_debug.cfs_rq:/.spread0.107
-27581 ±-43% -62.4% -10384 ±-34% sched_debug.cfs_rq:/.spread0.11
-22027 ±-16% -39.5% -13324 ±-37% sched_debug.cfs_rq:/.spread0.111
-18752 ±-25% -60.0% -7508 ±-55% sched_debug.cfs_rq:/.spread0.112
-35576 ±-48% -72.7% -9707 ±-36% sched_debug.cfs_rq:/.spread0.113
-32611 ±-53% -71.2% -9377 ±-48% sched_debug.cfs_rq:/.spread0.114
-23140 ±-38% -53.5% -10766 ±-55% sched_debug.cfs_rq:/.spread0.115
-36650 ±-44% -65.7% -12585 ±-38% sched_debug.cfs_rq:/.spread0.116
-17970 ±-25% -38.3% -11093 ±-24% sched_debug.cfs_rq:/.spread0.119
-18769 ±-30% -43.2% -10669 ±-44% sched_debug.cfs_rq:/.spread0.121
-18368 ±-21% -32.9% -12328 ±-39% sched_debug.cfs_rq:/.spread0.122
-30913 ±-61% -63.3% -11341 ±-53% sched_debug.cfs_rq:/.spread0.126
-21803 ±-26% -48.0% -11333 ±-36% sched_debug.cfs_rq:/.spread0.127
-21165 ±-24% -67.7% -6840 ±-85% sched_debug.cfs_rq:/.spread0.128
-26241 ±-26% -41.2% -15428 ±-18% sched_debug.cfs_rq:/.spread0.129
-32533 ±-55% -62.2% -12312 ±-38% sched_debug.cfs_rq:/.spread0.130
-24553 ±-32% -60.5% -9708 ±-43% sched_debug.cfs_rq:/.spread0.131
-18241 ±-22% -48.1% -9475 ±-58% sched_debug.cfs_rq:/.spread0.132
-23675 ±-43% -69.3% -7265 ±-75% sched_debug.cfs_rq:/.spread0.133
-31623 ±-58% -57.5% -13450 ±-33% sched_debug.cfs_rq:/.spread0.134
-18208 ±-26% -41.8% -10590 ±-48% sched_debug.cfs_rq:/.spread0.135
-19697 ±-30% -45.2% -10793 ±-33% sched_debug.cfs_rq:/.spread0.136
-23217 ±-39% -60.6% -9144 ±-46% sched_debug.cfs_rq:/.spread0.137
-22219 ±-20% -44.0% -12439 ±-20% sched_debug.cfs_rq:/.spread0.138
-29239 ±-61% -66.0% -9946 ±-59% sched_debug.cfs_rq:/.spread0.139
-28928 ±-28% -70.5% -8519 ±-48% sched_debug.cfs_rq:/.spread0.14
-18983 ±-14% -59.1% -7765 ±-64% sched_debug.cfs_rq:/.spread0.140
-19527 ± -7% -32.4% -13204 ±-26% sched_debug.cfs_rq:/.spread0.141
-18157 ±-27% -49.8% -9109 ±-36% sched_debug.cfs_rq:/.spread0.142
-23564 ±-34% -53.2% -11036 ±-56% sched_debug.cfs_rq:/.spread0.143
-23378 ±-22% -67.8% -7528 ±-65% sched_debug.cfs_rq:/.spread0.15
-18714 ±-36% -62.8% -6960 ±-45% sched_debug.cfs_rq:/.spread0.16
-18073 ± -8% -46.1% -9735 ±-39% sched_debug.cfs_rq:/.spread0.17
-25957 ±-34% -72.3% -7199 ±-77% sched_debug.cfs_rq:/.spread0.2
-27734 ±-48% -74.4% -7107 ±-77% sched_debug.cfs_rq:/.spread0.20
-21427 ± -9% -63.3% -7863 ±-88% sched_debug.cfs_rq:/.spread0.21
-18151 ±-47% -80.5% -3539 ±-246% sched_debug.cfs_rq:/.spread0.23
-20749 ±-12% -67.5% -6752 ±-52% sched_debug.cfs_rq:/.spread0.24
-20483 ±-13% -53.6% -9500 ±-27% sched_debug.cfs_rq:/.spread0.25
-22052 ±-15% -75.0% -5518 ±-67% sched_debug.cfs_rq:/.spread0.26
-19597 ±-17% -59.5% -7929 ±-51% sched_debug.cfs_rq:/.spread0.28
-15827 ±-46% -73.5% -4189 ±-54% sched_debug.cfs_rq:/.spread0.30
-20549 ±-39% -72.5% -5651 ±-73% sched_debug.cfs_rq:/.spread0.31
-16967 ±-28% -62.7% -6336 ±-66% sched_debug.cfs_rq:/.spread0.32
-17327 ± -7% -61.1% -6744 ±-89% sched_debug.cfs_rq:/.spread0.33
-20174 ±-15% -58.2% -8435 ±-41% sched_debug.cfs_rq:/.spread0.36
-28004 ±-73% -72.6% -7665 ±-51% sched_debug.cfs_rq:/.spread0.37
-18462 ±-21% -38.8% -11292 ±-35% sched_debug.cfs_rq:/.spread0.4
-19841 ±-29% -50.0% -9922 ±-38% sched_debug.cfs_rq:/.spread0.40
-18190 ± -7% -43.8% -10231 ±-28% sched_debug.cfs_rq:/.spread0.41
-20178 ±-14% -59.4% -8185 ±-57% sched_debug.cfs_rq:/.spread0.45
-18888 ±-11% -55.4% -8416 ±-44% sched_debug.cfs_rq:/.spread0.46
-21537 ±-21% -57.4% -9175 ±-70% sched_debug.cfs_rq:/.spread0.47
-16341 ±-14% -93.0% -1142 ±-867% sched_debug.cfs_rq:/.spread0.48
-18314 ± -7% -61.1% -7120 ±-62% sched_debug.cfs_rq:/.spread0.5
-16238 ±-23% -44.3% -9040 ±-41% sched_debug.cfs_rq:/.spread0.50
-27605 ±-56% -71.0% -8018 ±-53% sched_debug.cfs_rq:/.spread0.51
-26365 ±-42% -70.5% -7781 ±-53% sched_debug.cfs_rq:/.spread0.52
-25178 ±-19% -70.7% -7377 ±-65% sched_debug.cfs_rq:/.spread0.53
-17872 ±-23% -68.5% -5635 ±-86% sched_debug.cfs_rq:/.spread0.55
-16539 ±-23% -38.3% -10208 ±-40% sched_debug.cfs_rq:/.spread0.6
-20649 ±-23% -56.1% -9069 ±-55% sched_debug.cfs_rq:/.spread0.60
-17745 ±-24% -43.5% -10019 ±-35% sched_debug.cfs_rq:/.spread0.61
-18436 ±-32% -71.1% -5332 ±-144% sched_debug.cfs_rq:/.spread0.62
-20372 ±-20% -65.8% -6960 ±-77% sched_debug.cfs_rq:/.spread0.63
-16717 ±-17% -61.0% -6520 ±-79% sched_debug.cfs_rq:/.spread0.64
-19957 ±-37% -54.3% -9119 ±-45% sched_debug.cfs_rq:/.spread0.68
-21056 ±-11% -63.2% -7756 ±-90% sched_debug.cfs_rq:/.spread0.69
-30470 ±-48% -79.6% -6229 ±-75% sched_debug.cfs_rq:/.spread0.7
-20418 ±-14% -54.4% -9310 ±-42% sched_debug.cfs_rq:/.spread0.70
-20792 ±-24% -31.8% -14187 ±-15% sched_debug.cfs_rq:/.spread0.71
-24760 ±-19% -37.0% -15605 ±-30% sched_debug.cfs_rq:/.spread0.74
-25495 ±-21% -36.9% -16078 ±-15% sched_debug.cfs_rq:/.spread0.75
-25503 ±-16% -56.2% -11178 ±-24% sched_debug.cfs_rq:/.spread0.76
-32196 ±-52% -59.9% -12917 ±-52% sched_debug.cfs_rq:/.spread0.77
-29920 ±-29% -59.7% -12070 ±-35% sched_debug.cfs_rq:/.spread0.8
-30688 ±-41% -51.8% -14805 ±-30% sched_debug.cfs_rq:/.spread0.82
-20513 ±-22% -43.8% -11518 ±-31% sched_debug.cfs_rq:/.spread0.87
-27216 ±-26% -60.3% -10800 ±-33% sched_debug.cfs_rq:/.spread0.88
-26359 ±-63% -107.2% 1904 ±875% sched_debug.cfs_rq:/.spread0.9
-30731 ±-56% -51.6% -14887 ±-21% sched_debug.cfs_rq:/.spread0.90
-33391 ±-47% -58.6% -13809 ±-10% sched_debug.cfs_rq:/.spread0.91
-20385 ± -8% -19.7% -16376 ±-18% sched_debug.cfs_rq:/.spread0.92
-26920 ±-62% -74.0% -7005 ±-33% sched_debug.cfs_rq:/.spread0.93
-34217 ±-54% -59.2% -13959 ±-35% sched_debug.cfs_rq:/.spread0.94
-25528 ±-52% -48.0% -13262 ±-31% sched_debug.cfs_rq:/.spread0.95
-25938 ±-23% -66.5% -8680 ±-55% sched_debug.cfs_rq:/.spread0.98
-21239 ± -9% -53.4% -9900 ±-30% sched_debug.cfs_rq:/.spread0.avg
-41851 ±-30% -52.1% -20048 ± -4% sched_debug.cfs_rq:/.spread0.min
104.25 ± 18% -69.5% 31.75 ± 65% sched_debug.cfs_rq:/.util_avg.113
13.33 ±136% +509.4% 81.25 ± 58% sched_debug.cfs_rq:/.util_avg.140
223.25 ±141% -91.5% 19.00 ± 30% sched_debug.cfs_rq:/.util_avg.45
224.25 ± 54% -78.4% 48.50 ± 51% sched_debug.cfs_rq:/.util_avg.54
120.00 ± 71% -78.8% 25.50 ±116% sched_debug.cfs_rq:/.util_avg.55
7.25 ±102% +2924.1% 219.25 ± 92% sched_debug.cfs_rq:/.util_avg.60
58.50 ±100% -94.4% 3.25 ±173% sched_debug.cfs_rq:/.util_avg.79
65.25 ± 44% -78.9% 13.75 ±116% sched_debug.cfs_rq:/.util_avg.98
994.50 ± 2% -15.1% 844.00 ± 9% sched_debug.cfs_rq:/.util_avg.max
550561 ± 35% +57.8% 868662 ± 26% sched_debug.cpu.avg_idle.92
6.12 ± 29% -28.4% 4.38 ± 8% sched_debug.cpu.clock.stddev
6.12 ± 29% -28.4% 4.38 ± 8% sched_debug.cpu.clock_task.stddev
3.32 ± 85% -64.5% 1.18 ± 32% sched_debug.cpu.load.avg
27.19 ±123% -74.7% 6.89 ± 15% sched_debug.cpu.load.stddev
0.00 ± 15% -36.2% 0.00 ± 33% sched_debug.cpu.next_balance.stddev
0.04 ± 16% -33.3% 0.03 ± 35% sched_debug.cpu.nr_running.avg
0.20 ± 8% -18.9% 0.16 ± 18% sched_debug.cpu.nr_running.stddev
363.50 ± 37% -35.1% 235.75 ± 19% sched_debug.cpu.nr_switches.108
666.00 ± 18% -36.8% 421.00 ± 24% sched_debug.cpu.nr_switches.11
246.25 ± 16% +67.7% 413.00 ± 19% sched_debug.cpu.nr_switches.119
1400 ±102% -74.0% 364.75 ± 6% sched_debug.cpu.nr_switches.21
864.00 ± 15% -42.4% 498.00 ± 20% sched_debug.cpu.nr_switches.5
652.50 ± 4% -36.8% 412.25 ± 23% sched_debug.cpu.nr_switches.57
740.50 ± 75% -66.3% 249.25 ± 30% sched_debug.cpu.nr_switches.75
637.75 ± 64% -64.7% 225.25 ± 20% sched_debug.cpu.nr_switches.92
-16.00 ±-158% -104.2% 0.67 ±187% sched_debug.cpu.nr_uninterruptible.14
-1.67 ±-149% -160.0% 1.00 ±122% sched_debug.cpu.nr_uninterruptible.17
-6.75 ±-46% -81.5% -1.25 ±-34% sched_debug.cpu.nr_uninterruptible.18
1.50 ±191% -200.0% -1.50 ±-74% sched_debug.cpu.nr_uninterruptible.26
0.67 ±187% -587.5% -3.25 ±-59% sched_debug.cpu.nr_uninterruptible.57
-0.25 ±-173% +800.0% -2.25 ±-72% sched_debug.cpu.nr_uninterruptible.58
0.75 ±317% -600.0% -3.75 ±-39% sched_debug.cpu.nr_uninterruptible.67
-1.67 ±-28% -160.0% 1.00 ± 81% sched_debug.cpu.nr_uninterruptible.68
-1.25 ±-34% -80.0% -0.25 ±-331% sched_debug.cpu.nr_uninterruptible.69
5.00 ± 28% -100.0% 0.00 ± 0% sched_debug.cpu.nr_uninterruptible.79
1.75 ± 47% -100.0% 0.00 ± 0% sched_debug.cpu.nr_uninterruptible.90
365.75 ± 37% -35.3% 236.50 ± 19% sched_debug.cpu.sched_count.108
672.25 ± 18% -26.4% 494.50 ± 25% sched_debug.cpu.sched_count.11
248.00 ± 16% +67.1% 414.50 ± 19% sched_debug.cpu.sched_count.119
1497 ± 94% -75.4% 368.25 ± 6% sched_debug.cpu.sched_count.21
954.00 ± 36% -36.9% 602.25 ± 25% sched_debug.cpu.sched_count.45
538.75 ± 16% +100.9% 1082 ± 59% sched_debug.cpu.sched_count.47
871.25 ± 15% -42.4% 501.50 ± 20% sched_debug.cpu.sched_count.5
1057 ± 55% -60.8% 414.75 ± 23% sched_debug.cpu.sched_count.57
742.00 ± 75% -66.3% 250.00 ± 30% sched_debug.cpu.sched_count.75
639.25 ± 64% -64.6% 226.25 ± 20% sched_debug.cpu.sched_count.92
170.25 ± 36% -36.4% 108.25 ± 17% sched_debug.cpu.sched_goidle.108
316.75 ± 16% -36.5% 201.00 ± 25% sched_debug.cpu.sched_goidle.11
114.75 ± 16% +66.0% 190.50 ± 18% sched_debug.cpu.sched_goidle.119
87.00 ± 18% +33.0% 115.75 ± 23% sched_debug.cpu.sched_goidle.126
370.75 ± 50% -53.5% 172.25 ± 4% sched_debug.cpu.sched_goidle.21
246.50 ± 16% +98.0% 488.00 ± 68% sched_debug.cpu.sched_goidle.47
420.50 ± 14% -43.5% 237.50 ± 20% sched_debug.cpu.sched_goidle.5
270.25 ± 25% -33.4% 180.00 ± 25% sched_debug.cpu.sched_goidle.55
304.50 ± 6% -35.6% 196.25 ± 24% sched_debug.cpu.sched_goidle.57
361.75 ± 76% -68.1% 115.50 ± 29% sched_debug.cpu.sched_goidle.75
40.25 ± 26% +614.3% 287.50 ±118% sched_debug.cpu.ttwu_count.119
41.50 ± 58% +197.0% 123.25 ± 89% sched_debug.cpu.ttwu_count.133
917.25 ± 69% -83.7% 149.50 ± 78% sched_debug.cpu.ttwu_count.22
187.50 ± 82% +545.7% 1210 ± 78% sched_debug.cpu.ttwu_count.36
178.75 ± 27% +228.0% 586.25 ± 91% sched_debug.cpu.ttwu_count.44
133.75 ± 56% +212.5% 418.00 ± 74% sched_debug.cpu.ttwu_count.47
107.25 ± 29% +434.3% 573.00 ± 58% sched_debug.cpu.ttwu_count.48
25.25 ± 19% +36.6% 34.50 ± 16% sched_debug.cpu.ttwu_count.88
25.25 ± 30% -42.6% 14.50 ± 14% sched_debug.cpu.ttwu_local.110
19.50 ± 20% +43.6% 28.00 ± 23% sched_debug.cpu.ttwu_local.117
377.00 ±139% -92.7% 27.50 ± 10% sched_debug.cpu.ttwu_local.21
30.00 ± 46% +104.2% 61.25 ± 44% sched_debug.cpu.ttwu_local.36
151.25 ± 39% -71.6% 43.00 ± 21% sched_debug.cpu.ttwu_local.45
40.25 ± 43% +87.0% 75.25 ± 27% sched_debug.cpu.ttwu_local.48
0.00 ± 23% -61.7% 0.00 ±106% sched_debug.rt_rq:/.rt_time.stddev
lkp-hsx02: Brickland Haswell-EX
Memory: 128G
lkp-hsx04: Brickland Haswell-EX
Memory: 512G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[-- Attachment #2: job.yaml --]
[-- Type: text/plain, Size: 3711 bytes --]
---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: fsmark
default-monitors:
wait: activate-monitor
kmsg:
uptime:
iostat:
heartbeat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
interval: 10
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 60
cpufreq_governor: performance
default-watchdogs:
oom-killer:
watchdog:
commit: fbcc025613590d7b1d15521555dcc6393a148a6b
model: Brickland Haswell-EX
nr_cpu: 120
memory: 512G
hdd_partitions:
swap_partitions:
pmeter_server: lkp-st01
pmeter_device: yokogawa-wt310
category: benchmark
iterations: 1x
nr_threads: 1t
disk: 1BRD_48G
need_memory: 50G
fs: xfs
fs2:
fsmark:
filesize: 4M
test_size: 40G
sync_method: fsyncBeforeClose
queue: bisect
testbox: lkp-hsx04
tbox_group: lkp-hsx04
kconfig: x86_64-rhel
enqueue_time: 2016-02-17 21:48:14.760380573 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: 2030c727296a90ef073d34bf338163142a94ebb4
user: lkp
head_commit: f7d31753c4d70c12e290de202be90df07f7f6dee
base_commit: 18558cae0272f8fd9647e69d3fec1565a7949865
branch: linux-devel/devel-hourly-2016021709
result_root: "/result/fsmark/performance-1x-1t-1BRD_48G-xfs-4M-40G-fsyncBeforeClose/lkp-hsx04/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/fbcc025613590d7b1d15521555dcc6393a148a6b/1"
job_file: "/lkp/scheduled/lkp-hsx04/bisect_fsmark-performance-1x-1t-1BRD_48G-xfs-4M-40G-fsyncBeforeClose-debian-x86_64-2015-02-07.cgz-x86_64-rhel-fbcc025613590d7b1d15521555dcc6393a148a6b-20160217-51683-9iljon-1.yaml"
max_uptime: 777.48
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/lkp-hsx04/bisect_fsmark-performance-1x-1t-1BRD_48G-xfs-4M-40G-fsyncBeforeClose-debian-x86_64-2015-02-07.cgz-x86_64-rhel-fbcc025613590d7b1d15521555dcc6393a148a6b-20160217-51683-9iljon-1.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2016021709
- commit=fbcc025613590d7b1d15521555dcc6393a148a6b
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/fbcc025613590d7b1d15521555dcc6393a148a6b/vmlinuz-4.5.0-rc2-00003-gfbcc025
- max_uptime=777
- RESULT_ROOT=/result/fsmark/performance-1x-1t-1BRD_48G-xfs-4M-40G-fsyncBeforeClose/lkp-hsx04/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/fbcc025613590d7b1d15521555dcc6393a148a6b/1
- LKP_SERVER=inn
- |2-
earlyprintk=ttyS0,115200 systemd.log_level=err
debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
console=ttyS0,115200 console=tty0 vga=normal
rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/fbcc025613590d7b1d15521555dcc6393a148a6b/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs2.cgz,/lkp/benchmarks/fsmark.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/fbcc025613590d7b1d15521555dcc6393a148a6b/linux-headers.cgz"
repeat_to: 2
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/fbcc025613590d7b1d15521555dcc6393a148a6b/vmlinuz-4.5.0-rc2-00003-gfbcc025"
dequeue_time: 2016-02-17 22:09:02.572511814 +08:00
job_state: finished
loadavg: 0.79 0.27 0.10 2/1320 3927
start_time: '1455718204'
end_time: '1455718246'
version: "/lkp/lkp/.src-20160217-183926"
[-- Attachment #3: reproduce.sh --]
[-- Type: application/x-sh, Size: 13759 bytes --]
* Re: [lkp] [xfs] fbcc025613: -5.6% fsmark.files_per_sec
2016-02-19 2:52 [lkp] [xfs] fbcc025613: -5.6% fsmark.files_per_sec kernel test robot
@ 2016-02-19 6:49 ` Dave Chinner
2016-02-22 8:54 ` Christoph Hellwig
0 siblings, 1 reply; 4+ messages in thread
From: Dave Chinner @ 2016-02-19 6:49 UTC (permalink / raw)
To: kernel test robot; +Cc: Dave Chinner, lkp, LKML, Christoph Hellwig, xfs
On Fri, Feb 19, 2016 at 10:52:08AM +0800, kernel test robot wrote:
> FYI, we noticed the below changes on
>
> https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> commit fbcc025613590d7b1d15521555dcc6393a148a6b ("xfs: Introduce writeback context for writepages")
>
>
> =========================================================================================
> compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/md/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
> gcc-4.9/performance/8BRD_12G/4M/xfs/1x/x86_64-rhel/RAID0/1t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-hsx02/60G/fsmark
>
> commit:
> 150d5be09ce49a9bed6feb7b7dc4e5ae188778ec
> fbcc025613590d7b1d15521555dcc6393a148a6b
>
> 150d5be09ce49a9b fbcc025613590d7b1d15521555
> ---------------- --------------------------
> %stddev %change %stddev
> \ | \
> 36122 ± 0% -57.4% 15404 ± 0% fsmark.time.involuntary_context_switches
> 95.30 ± 0% +1.8% 97.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
> 25339 ± 32% +5756.0% 1483901 ± 1% fsmark.time.voluntary_context_switches
> 881.80 ± 45% +14258.5% 126613 ± 10% latency_stats.hits.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
> 3548 ± 48% +11967.4% 428200 ± 7% latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
.....
> 2016-02-17 22:08:15 mount -t xfs -o nobarrier,inode64 /dev/ram0 /fs/ram0
> 2016-02-17 22:08:16 ./fs_mark -d /fs/ram0/1 -n 10240 -L 1 -S 1 -s 4194304
Ok, so 10k x 4MB files, fsync after each. Let me go look.
So the problem above appears to be that fsync is waiting on 125,000
more pages to complete IO, causing a major increase in context
switches during fsync. With only 10,000 files being created, that's
more than 10 pages per file being waited on.
That doesn't really seem right. The writeback should be done as a
single ioend, with a single completion, with a single setsize
transaction, and then all the pages are marked clean sequentially.
The above behaviour implies we are ending up doing something like:
    fsync proc                          io completion
    wait on page 0
                                        end page 0 writeback
                                        wake up page 0
    wait on page 1
                                        end page 1 writeback
                                        wake up page 1
    wait on page 2
                                        end page 2 writeback
                                        wake up page 2
Though in slightly larger batches than a single page (10 wakeups a
file, so batches of around 100 pages per wakeup?). i.e. the fsync
IO wait appears to be racing with IO completion marking pages as
done. I simply cannot see how the above change would cause that, as
it was simply a change in the IO submission code that doesn't affect
overall size or shape of the IOs being submitted.
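(For reference, the wait path in the latency_stats stack above boils down
to a per-page wait loop. The sketch below illustrates that loop only; it
is not the actual __filemap_fdatawait_range() source, which batches the
lookups with a tagged pagevec, and the fdatawait_sketch() name and the
simple index walk are purely illustrative.)

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/page-flags.h>
#include <linux/pagemap.h>

/*
 * Illustrative sketch of the fsync-side wait implied by the
 * wait_on_page_bit <- __filemap_fdatawait_range stack above.  The real
 * kernel code uses tagged pagevec lookups and error handling; this just
 * shows why every page still under writeback costs a sleep/wakeup pair.
 */
static void fdatawait_sketch(struct address_space *mapping,
			     pgoff_t index, pgoff_t end)
{
	for (; index <= end; index++) {
		struct page *page = find_get_page(mapping, index);

		if (!page)
			continue;
		if (PageWriteback(page))
			/*
			 * Sleeps until the IO completion side clears
			 * PG_writeback and wakes this waitqueue; each
			 * sleep is a voluntary context switch.
			 */
			wait_on_page_bit(page, PG_writeback);
		put_page(page);
	}
}

Every iteration that finds PG_writeback still set accounts for one of the
extra wait_on_page_bit hits in the latency_stats numbers above.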
This is a patch in the middle of a larger patch set - if any of the
patches in that series should affect behaviour, it's the commit a
couple further on that removed the XFS page clustering code. That
changes the size and shape of the ioends and bios we build and
submit, so if anything is perturbing IO completion behaviour I'd
expect those later commits to be the culprit....
Does this same context switch problem still exist when the entire
set of patches in the XFS series has been applied? I can't
reproduce it here locally with the entire series applied, but I have
only tested on 16p with 48GB RAM. The context switch rate does not
go above 3,000/s, regardless of whether I use a ramdisk or high
throughput SSDs. Tracing also clearly indicates a single 4MB ioend
is being built and submitted by the new code, and so that implies
that the IO completions occurring before and after this patchset
are identical, and therefore the way pages are moving from
dirty -> writeback -> clean is identical before and after the
patch series.
So, like I said - this makes no sense to me from the perspective of
the change that you've bisected it down to. This smells to me of a
scheduler change in the linux-next tree, especially as I can't
reproduce the behaviour with just the XFS commits in isolation on a
4.5-rc3 tree.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [lkp] [xfs] fbcc025613: -5.6% fsmark.files_per_sec
2016-02-19 6:49 ` Dave Chinner
@ 2016-02-22 8:54 ` Christoph Hellwig
2016-02-22 11:22 ` Dave Chinner
0 siblings, 1 reply; 4+ messages in thread
From: Christoph Hellwig @ 2016-02-22 8:54 UTC (permalink / raw)
To: Dave Chinner
Cc: kernel test robot, Dave Chinner, lkp, LKML, Christoph Hellwig,
xfs
On Fri, Feb 19, 2016 at 05:49:32PM +1100, Dave Chinner wrote:
> That doesn't really seem right. The writeback should be done as a
> single ioend, with a single completion, with a single setsize
> transaction, and then all the pages are marked clean sequentially.
> The above behaviour implies we are ending up doing something like:
>
>     fsync proc                          io completion
>     wait on page 0
>                                         end page 0 writeback
>                                         wake up page 0
>     wait on page 1
>                                         end page 1 writeback
>                                         wake up page 1
>     wait on page 2
>                                         end page 2 writeback
>                                         wake up page 2
>
> Though in slightly larger batches than a single page (10 wakeups a
> file, so batches of around 100 pages per wakeup?). i.e. the fsync
> IO wait appears to be racing with IO completion marking pages as
> done. I simply cannot see how the above change would cause that, as
> it was simply a change in the IO submission code that doesn't affect
> overall size or shape of the IOs being submitted.
Could this be the lack of blk plugs, which will cause us to complete
too early?
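(For reference, block plugging around writeback submission typically
looks like the sketch below. This is a generic illustration using the
stock blk_start_plug()/blk_finish_plug() and generic_writepages()
helpers, not the XFS code under discussion, and the function name is
made up.)

#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/writeback.h>

/*
 * Generic illustration of plugging around IO submission in a
 * ->writepages implementation; not the XFS code being discussed.
 * Bios submitted while the plug is held sit on the task's plug list
 * and are only dispatched once the plug is finished (or the task
 * sleeps), giving the block layer a chance to merge them.
 */
static int writepages_plug_sketch(struct address_space *mapping,
				  struct writeback_control *wbc)
{
	struct blk_plug plug;
	int ret;

	blk_start_plug(&plug);
	ret = generic_writepages(mapping, wbc);	/* submits bios under the plug */
	blk_finish_plug(&plug);			/* flush plugged bios to the device */

	return ret;
}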
* Re: [lkp] [xfs] fbcc025613: -5.6% fsmark.files_per_sec
2016-02-22 8:54 ` Christoph Hellwig
@ 2016-02-22 11:22 ` Dave Chinner
0 siblings, 0 replies; 4+ messages in thread
From: Dave Chinner @ 2016-02-22 11:22 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: kernel test robot, Dave Chinner, lkp, LKML, xfs
On Mon, Feb 22, 2016 at 09:54:09AM +0100, Christoph Hellwig wrote:
> On Fri, Feb 19, 2016 at 05:49:32PM +1100, Dave Chinner wrote:
> > That doesn't really seem right. The writeback should be done as a
> > single ioend, with a single completion, with a single setsize
> > transaction, and then all the pages are marked clean sequentially.
> > The above behaviour implies we are ending up doing something like:
> >
> >     fsync proc                          io completion
> >     wait on page 0
> >                                         end page 0 writeback
> >                                         wake up page 0
> >     wait on page 1
> >                                         end page 1 writeback
> >                                         wake up page 1
> >     wait on page 2
> >                                         end page 2 writeback
> >                                         wake up page 2
> >
> > Though in slightly larger batches than a single page (10 wakeups a
> > file, so batches of around 100 pages per wakeup?). i.e. the fsync
> > IO wait appears to be racing with IO completion marking pages as
> > done. I simply cannot see how the above change would cause that, as
> > it was simply a change in the IO submission code that doesn't affect
> > overall size or shape of the IOs being submitted.
>
> Could this be the lack of blk plugs, which will cause us to complete
> too early?
No, because block plugging is still in place in the patch that the
regression is reported against. The difference it makes is that we
don't do any IO submission while building the ioend chain, and
submit it all in one hit at the end of the ->writepages call.
However, this is an intermediate patch in the series, and later
patches correct this: 4 commits later we end up with bios being
built directly and submitted the moment they are full. With the
entire series in place, I can't reproduce any sort of bad
behaviour, nor do I see any repeatable performance differential.
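(The shape of the change being described here is roughly the pattern
sketched below: accumulate ioends in a writeback context while walking
the dirty pages, then submit at the end of ->writepages. The structure
and helper names are invented for illustration and the real
xfs_vm_writepages() differs in detail.)

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/page-flags.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

/*
 * Illustrative writeback context; names and layout here are made up,
 * the real patch chains xfs_ioend structures off a per-writepages
 * context instead.
 */
struct wb_ctx_sketch {
	struct list_head ioends;	/* ioends accumulated during the walk */
};

/*
 * Hypothetical per-page callback: would map the page and append it to
 * the current ioend rather than submitting IO immediately.
 */
static int wb_add_page_sketch(struct page *page,
			      struct writeback_control *wbc, void *data)
{
	struct wb_ctx_sketch *ctx = data;

	/* ... extend or start an ioend on ctx->ioends for this page ... */
	(void)ctx;
	set_page_writeback(page);
	unlock_page(page);
	return 0;
}

static int writepages_ctx_sketch(struct address_space *mapping,
				 struct writeback_control *wbc)
{
	struct wb_ctx_sketch ctx;
	int ret;

	INIT_LIST_HEAD(&ctx.ioends);

	/* walk the dirty pages without submitting any IO yet */
	ret = write_cache_pages(mapping, wbc, wb_add_page_sketch, &ctx);

	/*
	 * Submit everything in one hit once the walk is complete.
	 * Later patches in the series instead submit each bio as soon
	 * as it is full.
	 */
	/* submit_ioends(&ctx);  hypothetical submission step */

	return ret;
}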
So I really want to know if this regression is seen with the entire
patchset applied, and if I can't reproduce on a local ramdisk or
real storage then we need to decide how much we care about fsync
performance on a volatile ramdisk...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com