public inbox for linux-kernel@vger.kernel.org
From: kernel test robot <ying.huang@intel.com>
To: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: lkp@01.org
Cc: LKML <linux-kernel@vger.kernel.org>
Subject: [lkp] [f2fs] 88a70a69c0: +49.4% fsmark.files_per_sec
Date: Sun, 23 Aug 2015 07:32:22 +0800	[thread overview]
Message-ID: <87mvxikigp.fsf@yhuang-dev.intel.com> (raw)

[-- Attachment #1: Type: text/plain, Size: 45193 bytes --]

FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 88a70a69c088933011615fe26242e0335b012284 ("f2fs: fix wrong condition check to trigger f2fs_sync_fs")


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
  nhm4/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/f2fs/16MB/60G/NoSync/16d/256fpd
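The parameter string above encodes the fsmark job: 32 threads writing 16MB files to a 60G working set on f2fs, with no per-file sync (NoSync), spread over 16 directories at 256 files per directory. As a rough sketch (not the actual lkp job file), the implied fs_mark invocation and its derived per-thread file count can be reconstructed like this; the mount point and exact flag set are assumptions:

```shell
#!/bin/sh
# Hypothetical reconstruction of the fsmark run implied by the parameter
# string (filesize=16MB, test_size=60G, nr_threads=32, NoSync, 16d/256fpd).
# Flag names are standard fs_mark options; the real lkp job may differ.
FILESIZE=$((16 * 1024 * 1024))                 # 16MB per file, in bytes
TEST_SIZE_MB=$((60 * 1024))                    # 60G total working set, in MB
NR_THREADS=32
NR_FILES=$(( TEST_SIZE_MB / 16 / NR_THREADS )) # files each thread creates

# NoSync -> -S 0 (fs_mark issues no fsync/sync calls itself)
echo fs_mark -d /fs/f2fs -S 0 -t $NR_THREADS \
     -s $FILESIZE -n $NR_FILES -D 16 -N 256
```

With these numbers each thread creates 120 files (60G / 16MB / 32), and fsmark.files_per_sec is the aggregate creation rate across all 32 threads.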

commit: 
  cd52b6368f1301b55d0e484105c876930e443d83
  88a70a69c088933011615fe26242e0335b012284

cd52b6368f1301b5 88a70a69c088933011615fe262 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    698894 ±  2%     -10.7%     624267 ±  1%  fsmark.app_overhead
      6.43 ±  0%     +49.4%       9.60 ±  0%  fsmark.files_per_sec
    496.30 ±  0%      -3.7%     477.72 ±  0%  fsmark.time.elapsed_time
    496.30 ±  0%      -3.7%     477.72 ±  0%  fsmark.time.elapsed_time.max
     58356 ±  9%     -49.3%      29591 ±  7%  fsmark.time.involuntary_context_switches
     11.75 ±  3%     -14.9%      10.00 ±  0%  fsmark.time.percent_of_cpu_this_job_got
     59.14 ±  0%     -16.3%      49.50 ±  0%  fsmark.time.system_time
    609502 ±  0%     -35.9%     390479 ±  0%  fsmark.time.voluntary_context_switches
      2134 ±  5%     -29.7%       1499 ±  6%  uptime.idle
    229046 ±  0%     -13.3%     198655 ±  1%  meminfo.Dirty
    146785 ±  1%     +57.8%     231574 ±  1%  meminfo.Writeback
      2034 ±  0%     +10.0%       2238 ±  1%  slabinfo.kmalloc-256.active_objs
    448.00 ±  0%     +23.1%     551.50 ±  1%  slabinfo.kmalloc-4096.active_objs
    482.25 ±  0%     +21.6%     586.25 ±  1%  slabinfo.kmalloc-4096.num_objs
     24996 ±  0%     -18.5%      20379 ±  1%  softirqs.BLOCK
     93299 ±  1%     +21.1%     112999 ±  0%  softirqs.SCHED
    132179 ±  0%     +18.6%     156829 ±  1%  softirqs.TIMER
     58356 ±  9%     -49.3%      29591 ±  7%  time.involuntary_context_switches
     11.75 ±  3%     -14.9%      10.00 ±  0%  time.percent_of_cpu_this_job_got
     59.14 ±  0%     -16.3%      49.50 ±  0%  time.system_time
    609502 ±  0%     -35.9%     390479 ±  0%  time.voluntary_context_switches
    126546 ±  0%      +3.6%     131110 ±  0%  vmstat.io.bo
     15.25 ±  2%    +103.3%      31.00 ±  0%  vmstat.procs.b
     23495 ±  0%      +1.6%      23881 ±  0%  vmstat.system.cs
     10330 ±  0%      +4.8%      10825 ±  0%  vmstat.system.in
      3.14 ±  3%      -8.0%       2.89 ±  0%  turbostat.%Busy
     96.75 ±  3%     -12.4%      84.75 ±  0%  turbostat.Avg_MHz
     12.01 ±  2%     +12.6%      13.53 ±  4%  turbostat.CPU%c1
     40.58 ±  1%     +73.6%      70.47 ±  0%  turbostat.CPU%c3
     44.27 ±  0%     -70.4%      13.12 ±  3%  turbostat.CPU%c6
 1.506e+08 ±  2%     -18.6%  1.225e+08 ±  5%  cpuidle.C1-NHM.time
  63432793 ±  2%     +45.7%   92408180 ±  5%  cpuidle.C1E-NHM.time
     85526 ±  1%     -27.4%      62121 ±  2%  cpuidle.C1E-NHM.usage
 1.024e+09 ±  0%     +45.5%   1.49e+09 ±  1%  cpuidle.C3-NHM.time
    266717 ±  0%     +43.9%     383733 ±  1%  cpuidle.C3-NHM.usage
 2.619e+09 ±  0%     -22.9%  2.018e+09 ±  0%  cpuidle.C6-NHM.time
    194213 ±  2%     -30.4%     135255 ±  1%  cpuidle.C6-NHM.usage
    155.50 ± 58%     -91.9%      12.67 ± 73%  proc-vmstat.allocstall
    951.00 ±  5%    +230.8%       3145 ±  1%  proc-vmstat.kswapd_high_wmark_hit_quickly
      3011 ±  3%     -31.6%       2060 ± 10%  proc-vmstat.kswapd_low_wmark_hit_quickly
     57266 ±  0%     -13.3%      49663 ±  1%  proc-vmstat.nr_dirty
      0.75 ±110%  +13700.0%     103.50 ± 26%  proc-vmstat.nr_vmscan_immediate_reclaim
     36690 ±  1%     +57.8%      57885 ±  1%  proc-vmstat.nr_writeback
      4229 ±  1%     +24.9%       5283 ±  3%  proc-vmstat.pageoutrun
      8779 ±  2%     -45.8%       4755 ±  1%  proc-vmstat.pgactivate
     78067 ± 23%     -97.8%       1738 ± 73%  proc-vmstat.pgscan_direct_dma32
     18409 ±  3%     -32.8%      12373 ±  3%  sched_debug.cfs_rq[0]:/.min_vruntime
     18497 ±  4%     -18.9%      14995 ±  2%  sched_debug.cfs_rq[1]:/.min_vruntime
     87.71 ±1609%   +2889.8%       2622 ± 27%  sched_debug.cfs_rq[1]:/.spread0
      2068 ± 13%     +39.3%       2880 ± 20%  sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
     18322 ±  1%     -27.5%      13284 ±  5%  sched_debug.cfs_rq[2]:/.min_vruntime
     44.00 ± 13%     +41.5%      62.25 ± 20%  sched_debug.cfs_rq[2]:/.tg_runnable_contrib
      2260 ± 17%     +69.9%       3840 ± 39%  sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
     18810 ±  2%     -27.2%      13703 ±  4%  sched_debug.cfs_rq[3]:/.min_vruntime
     48.50 ± 16%     +70.1%      82.50 ± 39%  sched_debug.cfs_rq[3]:/.tg_runnable_contrib
      4839 ±  5%     -43.8%       2718 ± 14%  sched_debug.cfs_rq[4]:/.exec_clock
     15393 ±  2%     -55.7%       6817 ± 12%  sched_debug.cfs_rq[4]:/.min_vruntime
     -3016 ±-27%     +84.2%      -5555 ±-13%  sched_debug.cfs_rq[4]:/.spread0
      4957 ±  3%     -47.1%       2622 ±  4%  sched_debug.cfs_rq[5]:/.exec_clock
     16150 ±  2%     -52.8%       7615 ±  9%  sched_debug.cfs_rq[5]:/.min_vruntime
     -2259 ±-26%    +110.5%      -4757 ±-22%  sched_debug.cfs_rq[5]:/.spread0
      4977 ±  3%     -50.7%       2454 ± 11%  sched_debug.cfs_rq[6]:/.exec_clock
     15474 ±  1%     -51.2%       7558 ±  6%  sched_debug.cfs_rq[6]:/.min_vruntime
     -2935 ±-33%     +64.0%      -4814 ±-13%  sched_debug.cfs_rq[6]:/.spread0
      1797 ±  4%     +27.7%       2295 ±  4%  sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
      4930 ±  1%     -39.2%       2998 ±  8%  sched_debug.cfs_rq[7]:/.exec_clock
     16149 ±  4%     -48.4%       8326 ±  5%  sched_debug.cfs_rq[7]:/.min_vruntime
     38.25 ±  5%     +28.8%      49.25 ±  5%  sched_debug.cfs_rq[7]:/.tg_runnable_contrib
      1.25 ±103%    +420.0%       6.50 ± 53%  sched_debug.cpu#0.cpu_load[1]
   -499.50 ±-14%     -96.9%     -15.25 ±-143%  sched_debug.cpu#0.nr_uninterruptible
   1523859 ± 91%     -94.7%      80781 ±  8%  sched_debug.cpu#1.ttwu_count
     29758 ±  1%     +70.6%      50758 ± 21%  sched_debug.cpu#2.nr_load_updates
     27535 ±  1%   +3818.0%    1078840 ± 97%  sched_debug.cpu#2.ttwu_local
     28162 ±  1%     +35.3%      38116 ±  4%  sched_debug.cpu#3.nr_load_updates
    183204 ± 16%     -37.4%     114749 ±  4%  sched_debug.cpu#3.nr_switches
    183360 ± 16%     -37.4%     114830 ±  4%  sched_debug.cpu#3.sched_count
     80066 ± 18%     -32.0%      54416 ±  4%  sched_debug.cpu#3.sched_goidle
     25169 ±  2%     +20.2%      30259 ±  4%  sched_debug.cpu#3.ttwu_local
    205.25 ± 14%     -70.5%      60.50 ±  7%  sched_debug.cpu#4.nr_uninterruptible
     22718 ± 37%     -50.2%      11302 ±  6%  sched_debug.cpu#5.nr_load_updates
   1620691 ± 90%     -93.7%     102382 ± 14%  sched_debug.cpu#5.nr_switches
    177.25 ± 13%     -74.0%      46.00 ± 58%  sched_debug.cpu#5.nr_uninterruptible
   1620849 ± 90%     -93.7%     102471 ± 14%  sched_debug.cpu#5.sched_count
    794157 ± 91%     -93.9%      48455 ± 15%  sched_debug.cpu#5.sched_goidle
    810491 ± 88%     -91.3%      70436 ± 26%  sched_debug.cpu#5.ttwu_count
    729454 ± 98%     -99.2%       6150 ±  7%  sched_debug.cpu#5.ttwu_local
    131.00 ± 37%     -60.5%      51.75 ± 25%  sched_debug.cpu#6.nr_uninterruptible
      3.25 ±155%   +1130.8%      40.00 ± 83%  sched_debug.cpu#7.cpu_load[0]
      3.75 ± 82%    +160.0%       9.75 ± 53%  sched_debug.cpu#7.cpu_load[4]
    198.00 ± 20%     -71.7%      56.00 ±  8%  sched_debug.cpu#7.nr_uninterruptible
     13180 ±  6%     -43.2%       7481 ± 13%  sched_debug.cpu#7.ttwu_local
     43097 ±  0%     +25.3%      54021 ±  0%  latency_stats.avg.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
   1557144 ±  0%    -100.0%       0.00 ± -1%  latency_stats.avg.call_rwsem_down_read_failed.f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
    970823 ± 23%    -100.0%       0.00 ± -1%  latency_stats.avg.call_rwsem_down_read_failed.f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
      0.00 ± -1%      +Inf%     214155 ± 70%  latency_stats.avg.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
      0.00 ± -1%      +Inf%      60170 ± 93%  latency_stats.avg.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
    225094 ± 16%    -100.0%       0.00 ± -1%  latency_stats.avg.call_rwsem_down_read_failed.f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
      0.00 ± -1%      +Inf%     101419 ± 96%  latency_stats.avg.get_request.blk_queue_bio.generic_make_request.submit_bio.__submit_merged_bio.[f2fs].f2fs_submit_merged_bio.[f2fs].f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write
      0.00 ± -1%      +Inf%    1566660 ±  2%  latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].__set_data_blkaddr.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
      0.00 ± -1%      +Inf%      60796 ± 69%  latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
      0.00 ± -1%      +Inf%    1643593 ± 34%  latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter
      0.00 ± -1%      +Inf%     210894 ±117%  latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
      0.00 ± -1%      +Inf%     885323 ± 71%  latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
   1593081 ± 27%    -100.0%       0.00 ± -1%  latency_stats.avg.wait_on_page_bit.find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
    166104 ±  0%     +43.7%     238737 ±  1%  latency_stats.hits.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
     89938 ±  2%     -77.3%      20398 ±  5%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
     90810 ±  2%     -77.3%      20638 ±  5%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
     73358 ±  3%     -77.9%      16217 ±  5%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
     95785 ±  2%     -76.4%      22618 ±  5%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
   2926891 ±  2%    -100.0%       0.00 ± -1%  latency_stats.max.call_rwsem_down_read_failed.f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
   1609764 ± 29%    -100.0%       0.00 ± -1%  latency_stats.max.call_rwsem_down_read_failed.f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
      0.00 ± -1%      +Inf%     493421 ± 62%  latency_stats.max.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
      0.00 ± -1%      +Inf%      99336 ± 98%  latency_stats.max.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
   2242545 ± 22%    -100.0%       0.00 ± -1%  latency_stats.max.call_rwsem_down_read_failed.f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
      0.00 ± -1%      +Inf%     101419 ± 96%  latency_stats.max.get_request.blk_queue_bio.generic_make_request.submit_bio.__submit_merged_bio.[f2fs].f2fs_submit_merged_bio.[f2fs].f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write
      0.00 ± -1%      +Inf%    2155762 ±  1%  latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].__set_data_blkaddr.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
      0.00 ± -1%      +Inf%     547532 ±147%  latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
      0.00 ± -1%      +Inf%    1643593 ± 34%  latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter
      0.00 ± -1%      +Inf%     936362 ±100%  latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
      0.00 ± -1%      +Inf%     981143 ± 74%  latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
   2311901 ±  1%    -100.0%       0.00 ± -1%  latency_stats.max.wait_on_page_bit.find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
 7.159e+09 ±  1%     +80.1%   1.29e+10 ±  0%  latency_stats.sum.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
 7.392e+09 ±  1%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
   4641241 ± 65%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
      0.00 ± -1%      +Inf%    2970972 ±119%  latency_stats.sum.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
      0.00 ± -1%      +Inf%    1400735 ±115%  latency_stats.sum.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
  34303529 ± 35%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
    173732 ±  4%     -43.4%      98372 ±  9%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
  12335489 ±  3%     -82.6%    2148753 ±  7%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
  12643502 ±  3%     -82.9%    2161679 ±  7%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write.system_call_fastpath
  10460959 ±  4%     -82.8%    1796848 ±  5%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
     46303 ± 10%     -71.7%      13097 ± 23%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat
     24455 ± 18%     -66.6%       8164 ± 29%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.do_last.path_openat.do_filp_open
  13132122 ±  3%     -81.7%    2403201 ±  7%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
    689498 ±  4%     -70.8%     201181 ±  6%  latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
    107001 ±  7%     -67.6%      34703 ± 29%  latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open
    637933 ±  4%     -72.5%     175156 ±  3%  latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
     98057 ±  3%     -71.3%      28111 ± 17%  latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open
   1107127 ±  0%      +5.5%    1168564 ±  1%  latency_stats.sum.do_wait.SyS_wait4.system_call_fastpath
      0.00 ± -1%      +Inf%     101419 ± 96%  latency_stats.sum.get_request.blk_queue_bio.generic_make_request.submit_bio.__submit_merged_bio.[f2fs].f2fs_submit_merged_bio.[f2fs].f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write
    946288 ±128%  +18540.4%  1.764e+08 ± 44%  latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath
      0.00 ± -1%      +Inf%  9.825e+08 ±  4%  latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].__set_data_blkaddr.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
      0.00 ± -1%      +Inf%    6369185 ± 77%  latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
      0.00 ± -1%      +Inf%    1643593 ± 34%  latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter
      0.00 ± -1%      +Inf%   14652195 ±106%  latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].sync_inode_page.[f2fs].reserve_new_block.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
      0.00 ± -1%      +Inf%    2354965 ±107%  latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.SyS_write
   6949333 ± 55%    -100.0%       0.00 ± -1%  latency_stats.sum.wait_on_page_bit.find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.do_last.path_openat.do_filp_open.do_sys_open.SyS_open.system_call_fastpath


nhm4: Nehalem
Memory: 4G

                               fsmark.files_per_sec

   10 ++--------------------------------------------------------------------+
      O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O  O O
  9.5 ++                                                                    |
    9 ++                                                                    |
      |                                                                     |
  8.5 ++                                                                    |
      |                                                                     |
    8 ++                                                                    |
      |                                                                     |
  7.5 ++                                                                    |
    7 ++                                                                    |
      |                                                                     |
  6.5 *+.*.*..    .*.  .*.*..      .*..           .*                        |
      |       *.*.   *.      *.*..*    *.*..*.*..*                          |
    6 ++--------------------------------------------------------------------+


                             fsmark.time.system_time

  62 ++---------------------------------------------------------------------+
     |    *..*                                                              |
  60 *+. +    +       .*.*..*.*..*.*..   *.                                 |
     |  *      *.. .*.                 ..  *..*.*..*                        |
  58 ++           *                   *                                     |
     |                                                                      |
  56 ++                                                                     |
     |                                                                      |
  54 ++                                                                     |
     |                                                                      |
  52 ++                                                                     |
     O            O    O    O O  O    O  O                                  |
  50 ++ O O  O O    O    O         O       O  O O  O         O           O  O
     |                                               O  O O    O  O O  O    |
  48 ++---------------------------------------------------------------------+


                             fsmark.time.elapsed_time

  620 ++--------------------------------------------------------------------+
      |                                                                     |
  600 ++                                 *                                  |
  580 ++                                 :                                  |
      |                                  ::                                 |
  560 ++                                : :                                 |
      |                                 : :                                 |
  540 ++                                :  :                                |
      |                                 :  :                                |
  520 ++                               :   :                                |
  500 ++                               :    :                               |
      *..*.*..*.*..*.*..*.*..*.*..*.*..*    *.*..*.*                        |
  480 O+ O O  O O  O O  O O  O O    O  O O  O O  O O  O O    O  O O  O O  O |
      |                           O                        O                O
  460 ++--------------------------------------------------------------------+


                           fsmark.time.elapsed_time.max

  620 ++--------------------------------------------------------------------+
      |                                                                     |
  600 ++                                 *                                  |
  580 ++                                 :                                  |
      |                                  ::                                 |
  560 ++                                : :                                 |
      |                                 : :                                 |
  540 ++                                :  :                                |
      |                                 :  :                                |
  520 ++                               :   :                                |
  500 ++                               :    :                               |
      *..*.*..*.*..*.*..*.*..*.*..*.*..*    *.*..*.*                        |
  480 O+ O O  O O  O O  O O  O O    O  O O  O O  O O  O O    O  O O  O O  O |
      |                           O                        O                O
  460 ++--------------------------------------------------------------------+


                       fsmark.time.voluntary_context_switches

  650000 ++-----------------------------------------------------------------+
         *                                .*.                               |
  600000 ++  .*.*..*.*.*..*.*..*.*.*..*.*.   *.*..*.*                       |
         | *.                                                               |
         |                                                                  |
  550000 ++                                                                 |
         |                                                                  |
  500000 ++                                                                 |
         |                                                                  |
  450000 ++                                                                 |
         |                                                                  |
         |                O             O                                   |
  400000 O+O  O O  O O O    O  O O O  O    O O O  O O O  O O  O O O  O O  O O
         |                                                                  |
  350000 ++-----------------------------------------------------------------+


                                time.system_time

  62 ++---------------------------------------------------------------------+
     |    *..*                                                              |
  60 *+. +    +       .*.*..*.*..*.*..   *.                                 |
     |  *      *.. .*.                 ..  *..*.*..*                        |
  58 ++           *                   *                                     |
     |                                                                      |
  56 ++                                                                     |
     |                                                                      |
  54 ++                                                                     |
     |                                                                      |
  52 ++                                                                     |
     O            O    O    O O  O    O  O                                  |
  50 ++ O O  O O    O    O         O       O  O O  O         O           O  O
     |                                               O  O O    O  O O  O    |
  48 ++---------------------------------------------------------------------+


                           time.voluntary_context_switches

  650000 ++-----------------------------------------------------------------+
         *                                .*.                               |
  600000 ++  .*.*..*.*.*..*.*..*.*.*..*.*.   *.*..*.*                       |
         | *.                                                               |
         |                                                                  |
  550000 ++                                                                 |
         |                                                                  |
  500000 ++                                                                 |
         |                                                                  |
  450000 ++                                                                 |
         |                                                                  |
         |                O             O                                   |
  400000 O+O  O O  O O O    O  O O O  O    O O O  O O O  O O  O O O  O O  O O
         |                                                                  |
  350000 ++-----------------------------------------------------------------+


                                 vmstat.procs.b

  32 ++---------------------------------------------------------------------+
     O  O O  O O  O O  O O  O O  O O  O  O O  O O  O O  O O  O O  O O  O O  O
  30 ++                                                                     |
  28 ++                                                                     |
     |                                                                      |
  26 ++                                                                     |
  24 ++                                                                     |
     |                                                                      |
  22 ++                                                                     |
  20 ++                                                                     |
     |                                                                      |
  18 ++  .*..    .*.                                                        |
  16 *+.*    *.*.   *..*.*..*.*..*.*..*..*.*..                              |
     |                                        *.*..*                        |
  14 ++---------------------------------------------------------------------+


                                    vmstat.io.bo

  132000 ++-----------------------------------------------------------------+
         |                                     O    O      O                |
  131000 O+O    O  O O O  O O  O O O  O O  O O    O   O  O    O O O  O O    O
         |    O                                                           O |
         |                                                                  |
  130000 ++                                                                 |
         |                                                                  |
  129000 ++                                                                 |
         |                                                                  |
  128000 ++                                                                 |
         |                                                                  |
         |                                                                  |
  127000 ++  .*.*..   .*..*.  .*.    .*.  .*.      .*                       |
         *.*.      *.*      *.   *.*.   *.   *.*..*                         |
  126000 ++-----------------------------------------------------------------+


                              proc-vmstat.nr_writeback

  60000 ++--------------------------------------------------------O---------+
        | O  O O  O           O             O  O O  O    O O O  O    O O  O |
  55000 O+          O  O O O    O  O O  O O           O                     O
        |                                                                   |
        |                                                                   |
  50000 ++                                                                  |
        |                                                                   |
  45000 ++                                                                  |
        |                                                                   |
  40000 ++                                                                  |
        |.*..*.*..                  .*..                                    |
        *         *.*..*.*.*..*.*..*    *   *..*.*..*                       |
  35000 ++                               : :                                |
        |                                : :                                |
  30000 ++--------------------------------*---------------------------------+


                      proc-vmstat.nr_vmscan_immediate_reclaim

  180 ++--------------------------------------------------------------------+
      |  O                                                                  |
  160 ++                                                                    |
  140 ++                          O                                       O |
      |            O                   O O                   O         O    |
  120 O+   O                   O    O                             O         |
  100 ++      O O    O  O O  O                O    O    O                   |
      |                                                    O    O           |
   80 ++                                    O    O    O              O      O
   60 ++                                                                    |
      |                                                                     |
   40 ++                                                                    |
   20 ++                                                                    |
      |                                                                     |
    0 *+-*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*------------------------+


                               proc-vmstat.pgactivate

  10000 ++------------------------------------------------------------------+
        *                                 *.                                |
   9000 ++    .*.. .*..*.    .*.*.. .*.. +  *.. .*..                        |
        | *..*    *      *.*.      *    *      *    *                       |
        |                                                                   |
   8000 ++                                                                  |
        |                                                                   |
   7000 ++                                                                  |
        |                                                                   |
   6000 ++                                                                  |
        |                                                                   |
        |                                                                   |
   5000 O+O  O O    O  O O      O  O O    O O    O    O    O    O    O O  O O
        |         O        O  O         O      O    O    O   O    O         |
   4000 ++------------------------------------------------------------------+


                     proc-vmstat.kswapd_high_wmark_hit_quickly

  3500 ++-------------------------------------------------------------------+
       |                   O O         O  O   O    O    O  O O  O         O |
  3000 O+O  O O  O O  O O       O O  O      O    O    O           O  O O    O
       |                                                                    |
       |                                                                    |
  2500 ++                                                                   |
       |                                                                    |
  2000 ++                                                                   |
       |                                                                    |
  1500 ++                                                                   |
       |                                                                    |
       |                  .*.    .*..                                       |
  1000 *+*..*.  .*.*..*.*.   *..*    *.*..*.*.*..*.*                        |
       |      *.                                                            |
   500 ++-------------------------------------------------------------------+


                                  meminfo.Writeback

  240000 ++-------------------------------------------------------O---------+
         | O  O O  O           O             O O  O O    O O  O O    O O  O |
  220000 O+          O O  O O    O O  O O  O          O                     O
         |                                                                  |
         |                                                                  |
  200000 ++                                                                 |
         |                                                                  |
  180000 ++                                                                 |
         |                                                                  |
  160000 ++                                                                 |
         |.*..*.*..                  .*.                                    |
         *         *.*.*..*.*..*.*.*.   *    *.*..*.*                       |
  140000 ++                              +  :                               |
         |                                + :                               |
  120000 ++--------------------------------*--------------------------------+


                         slabinfo.kmalloc-4096.active_objs

  580 ++--------------------------------------------------------------------+
      |                      O                                            O |
  560 O+ O O  O O  O    O O         O  O O    O    O    O  O O  O O  O O    |
  540 ++             O         O  O         O    O    O                     O
      |                                                                     |
  520 ++                                                                    |
  500 ++                                                                    |
      |                                                                     |
  480 ++                                                                    |
  460 ++                                                                    |
      *..*.*..*.*..*.*..*.  .*.*..*.*..     *.*..*.*                        |
  440 ++                  *.           *   +                                |
  420 ++                                + +                                 |
      |                                  *                                  |
  400 ++--------------------------------------------------------------------+


                          slabinfo.kmalloc-4096.num_objs

  600 ++------------------------------------------------------------------O-+
      O  O O  O    O    O    O      O  O O    O    O    O  O O  O O  O O    |
  580 ++        O         O    O  O         O    O    O                     O
  560 ++             O                                                      |
      |                                                                     |
  540 ++                                                                    |
      |                                                                     |
  520 ++                                                                    |
      |                                                                     |
  500 ++                                                                    |
  480 *+.*.*..*.*..*.*..*.  .*.*..*.*..      .*..*.*                        |
      |                   *.           *    *                               |
  460 ++                                :  +                                |
      |                                 : +                                 |
  440 ++---------------------------------*----------------------------------+


	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang

[-- Attachment #2: job.yaml --]
[-- Type: text/plain, Size: 3714 bytes --]

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: fsmark
default-monitors:
  wait: activate-monitor
  kmsg: 
  uptime: 
  iostat: 
  vmstat: 
  numa-numastat: 
  numa-vmstat: 
  numa-meminfo: 
  proc-vmstat: 
  proc-stat:
    interval: 10
  meminfo: 
  slabinfo: 
  interrupts: 
  lock_stat: 
  latency_stats: 
  softirqs: 
  bdi_dev_mapping: 
  diskstats: 
  nfsstat: 
  cpuidle: 
  cpufreq-stats: 
  turbostat: 
  pmeter: 
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer: 
  watchdog: 
commit: 39a8804455fb23f09157341d3ba7db6d7ae6ee76
model: Nehalem
nr_cpu: 8
memory: 4G
hdd_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part1"
swap_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part2"
rootfs_partition: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part3"
netconsole_port: 6649
category: benchmark
iterations: 1x
nr_threads: 32t
disk: 1HDD
fs: f2fs
fs2: 
fsmark:
  filesize: 16MB
  test_size: 60G
  sync_method: NoSync
  nr_directories: 16d
  nr_files_per_directory: 256fpd
queue: reconfirm
testbox: nhm4
tbox_group: nhm4
kconfig: x86_64-rhel
enqueue_time: 2015-08-09 16:47:04.941036235 +08:00
id: 5fc4c7525d2ff02a13953db1f24c9bc46f6d8c90
user: lkp
compiler: gcc-4.9
head_commit: 3c19d237dd8148926e49259e495ee41dddd1f09c
base_commit: 74d33293e467df61de1b1d8b2fbe29e550dec33b
branch: soc-thermal/fixes
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/fsmark/performance-1x-32t-1HDD-f2fs-16MB-60G-NoSync-16d-256fpd/nhm4/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/39a8804455fb23f09157341d3ba7db6d7ae6ee76/0"
job_file: "/lkp/scheduled/nhm4/reconfirm_fsmark-performance-1x-32t-1HDD-f2fs-16MB-60G-NoSync-16d-256fpd-debian-x86_64-2015-02-07.cgz-x86_64-rhel-39a8804455fb23f09157341d3ba7db6d7ae6ee76-20150809-112891-1xduv68-0.yaml"
max_uptime: 1608.56
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/nhm4/reconfirm_fsmark-performance-1x-32t-1HDD-f2fs-16MB-60G-NoSync-16d-256fpd-debian-x86_64-2015-02-07.cgz-x86_64-rhel-39a8804455fb23f09157341d3ba7db6d7ae6ee76-20150809-112891-1xduv68-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=soc-thermal/fixes
- commit=39a8804455fb23f09157341d3ba7db6d7ae6ee76
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/39a8804455fb23f09157341d3ba7db6d7ae6ee76/vmlinuz-4.0.0
- max_uptime=1608
- RESULT_ROOT=/result/fsmark/performance-1x-32t-1HDD-f2fs-16MB-60G-NoSync-16d-256fpd/nhm4/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/39a8804455fb23f09157341d3ba7db6d7ae6ee76/0
- LKP_SERVER=inn
- |-
  libata.force=1.5Gbps

  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/39a8804455fb23f09157341d3ba7db6d7ae6ee76/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs2.cgz,/lkp/benchmarks/fsmark.cgz"
repeat_to: 5
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/39a8804455fb23f09157341d3ba7db6d7ae6ee76/vmlinuz-4.0.0"
dequeue_time: 2015-08-09 16:52:48.698594073 +08:00
job_state: finished
loadavg: 31.68 26.07 13.20 1/155 6992
start_time: '1439109837'
end_time: '1439110314'
version: "/lkp/lkp/.src-20150807-183152"

[-- Attachment #3: reproduce --]
[-- Type: text/plain, Size: 1163 bytes --]

for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor
do
        echo performance > "$gov"
done
mkfs -t f2fs /dev/sdb1
mount -t f2fs /dev/sdb1 /fs/sdb1
./fs_mark -d /fs/sdb1/1 -d /fs/sdb1/2 -d /fs/sdb1/3 -d /fs/sdb1/4 -d /fs/sdb1/5 -d /fs/sdb1/6 -d /fs/sdb1/7 -d /fs/sdb1/8 -d /fs/sdb1/9 -d /fs/sdb1/10 -d /fs/sdb1/11 -d /fs/sdb1/12 -d /fs/sdb1/13 -d /fs/sdb1/14 -d /fs/sdb1/15 -d /fs/sdb1/16 -d /fs/sdb1/17 -d /fs/sdb1/18 -d /fs/sdb1/19 -d /fs/sdb1/20 -d /fs/sdb1/21 -d /fs/sdb1/22 -d /fs/sdb1/23 -d /fs/sdb1/24 -d /fs/sdb1/25 -d /fs/sdb1/26 -d /fs/sdb1/27 -d /fs/sdb1/28 -d /fs/sdb1/29 -d /fs/sdb1/30 -d /fs/sdb1/31 -d /fs/sdb1/32 -D 16 -N 256 -n 120 -L 1 -S 0 -s 16777216
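For readability, the 32 repeated `-d` arguments above can be generated with a loop rather than typed out. A minimal bash sketch (same directories and fs_mark options as the command above; assumes /fs/sdb1 is already mounted):

```shell
# Build "-d /fs/sdb1/1 ... -d /fs/sdb1/32" programmatically,
# then hand the list to fs_mark along with the remaining options.
args=""
for i in $(seq 1 32); do
        args="$args -d /fs/sdb1/$i"
done
# Print the resulting command line; drop the leading "echo" to run it.
echo ./fs_mark $args -D 16 -N 256 -n 120 -L 1 -S 0 -s 16777216
```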
