From: kernel test robot
Subject: [lkp] [sched/preempt] fe32d3cd5e: -5.0% fsmark.files_per_sec
To: Konstantin Khlebnikov
Cc: lkp@01.org, LKML, Ingo Molnar
Date: Thu, 10 Sep 2015 10:02:15 +0800
Message-ID: <874mj3c9qw.fsf@yhuang-dev.intel.com>

FYI, we noticed the below changes on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit fe32d3cd5e8eb0f82e459763374aa80797023403 ("sched/preempt: Fix cond_resched_lock() and cond_resched_softirq()")

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/fs2/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
  nhm4/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/f2fs/nfsv4/5K/400M/fsyncBeforeClose/16d/256fpd

commit:
  c56dadf39761a6157239cac39e3988998c994f98
  fe32d3cd5e8eb0f82e459763374aa80797023403

c56dadf39761a615 fe32d3cd5e8eb0f82e45976337
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
  24905032 ±  2%     -23.3%   19104197 ±  2%  fsmark.app_overhead
    128.00 ±  0%      -5.0%     121.60 ±  0%  fsmark.files_per_sec
    640.73 ±  0%      +5.6%     676.69 ±  0%  fsmark.time.elapsed_time
    640.73 ±  0%      +5.6%     676.69 ±  0%  fsmark.time.elapsed_time.max
     88485 ±  0%     +14.6%     101386 ±  0%  fsmark.time.involuntary_context_switches
    374807 ±  0%      -2.8%     364148 ±  0%  fsmark.time.voluntary_context_switches
     88485 ±  0%     +14.6%     101386 ±  0%  time.involuntary_context_switches
     18539 ±  1%     -11.8%      16345 ±  1%  slabinfo.kmalloc-128.active_objs
     19786 ±  1%     -10.7%      17675 ±  1%  slabinfo.kmalloc-128.num_objs
  4.23e+08 ±  1%     +19.2%  5.042e+08 ±  3%  cpuidle.C1-NHM.time
  84188847 ±  1%     +22.9%  1.034e+08 ±  1%  cpuidle.C1E-NHM.time
     85925 ±  0%     +21.8%     104644 ±  0%  cpuidle.C1E-NHM.usage
      1.81 ±  0%      -5.4%       1.71 ±  0%  turbostat.%Busy
     53.00 ±  0%      -5.7%      50.00 ±  0%  turbostat.Avg_MHz
     19.57 ±  1%     +11.1%      21.73 ±  2%  turbostat.CPU%c1
      1589 ±  0%      -5.0%       1510 ±  0%  vmstat.io.bo
     22835 ±  0%      -3.8%      21965 ±  0%  vmstat.system.cs
      7892 ±  0%      -5.7%       7445 ±  0%  vmstat.system.in
     75995 ±  0%     +10.1%      83680 ±  0%  latency_stats.avg.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
    370488 ± 65%     -48.0%     192671 ±  8%  latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
   1353699 ±  0%      +4.2%    1410361 ±  0%  latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
   2451859 ±  1%      +4.4%    2559496 ±  0%  latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
  18995177 ±  4%     -26.9%   13876200 ±  1%  latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_lookup.[nfsv4].nfs4_proc_lookup_common.[nfsv4].nfs4_proc_lookup.[nfsv4].nfs_lookup_revalidate.nfs4_lookup_revalidate.lookup_dcache.__lookup_hash
 5.941e+09 ±  0%      +2.9%  6.116e+09 ±  0%  latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_do_close.[nfsv4].__nfs4_close.[nfsv4].nfs4_close_sync.[nfsv4].nfs4_close_context.[nfsv4].__put_nfs_open_context.nfs_release.nfs_file_release.__fput.____fput.task_work_run
 6.257e+09 ±  0%     +10.1%   6.89e+09 ±  0%  latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
 4.633e+09 ±  0%      +6.9%  4.951e+09 ±  0%  latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
    429.00 ± 10%     -10.5%     384.00 ±  6%  sched_debug.cfs_rq[0]:/.tg->runnable_avg
    430.75 ± 10%     -10.6%     385.25 ±  6%  sched_debug.cfs_rq[1]:/.tg->runnable_avg
     15.50 ±  5%     +21.0%      18.75 ±  4%  sched_debug.cfs_rq[2]:/.nr_spread_over
    431.50 ± 10%     -10.2%     387.50 ±  6%  sched_debug.cfs_rq[2]:/.tg->runnable_avg
    437.75 ± 10%     -10.1%     393.75 ±  7%  sched_debug.cfs_rq[4]:/.tg->runnable_avg
    441.00 ±  9%     -10.3%     395.75 ±  6%  sched_debug.cfs_rq[5]:/.tg->runnable_avg
    442.75 ±  9%     -10.2%     397.75 ±  6%  sched_debug.cfs_rq[6]:/.tg->runnable_avg
      1888 ± 41%     -55.4%     843.00 ± 39%  sched_debug.cfs_rq[7]:/.blocked_load_avg
    443.50 ±  9%      -9.3%     402.25 ±  6%  sched_debug.cfs_rq[7]:/.tg->runnable_avg
      1916 ± 41%     -54.9%     865.25 ± 39%  sched_debug.cfs_rq[7]:/.tg_load_contrib
    320180 ±  0%      +9.7%     351102 ±  0%  sched_debug.cpu#0.clock
    320180 ±  0%      +9.7%     351102 ±  0%  sched_debug.cpu#0.clock_task
     46298 ± 11%     +20.5%      55770 ± 16%  sched_debug.cpu#0.nr_load_updates
     -3942 ± -2%      -7.1%      -3663 ± -1%  sched_debug.cpu#0.nr_uninterruptible
    320181 ±  0%      +9.7%     351102 ±  0%  sched_debug.cpu#1.clock
    320181 ±  0%      +9.7%     351102 ±  0%  sched_debug.cpu#1.clock_task
    320181 ±  0%      +9.7%     351104 ±  0%  sched_debug.cpu#2.clock
    320181 ±  0%      +9.7%     351104 ±  0%  sched_debug.cpu#2.clock_task
    418.25 ±  7%     +35.0%     564.50 ±  0%  sched_debug.cpu#2.nr_uninterruptible
    320180 ±  0%      +9.7%     351103 ±  0%  sched_debug.cpu#3.clock
    320180 ±  0%      +9.7%     351103 ±  0%  sched_debug.cpu#3.clock_task
     43361 ±  0%     +28.5%      55731 ± 21%  sched_debug.cpu#3.nr_load_updates
    503.00 ±  5%     +16.1%     583.75 ±  3%  sched_debug.cpu#3.nr_uninterruptible
     60027 ±  0%    +997.9%     659006 ±155%  sched_debug.cpu#3.ttwu_local
    320179 ±  0%      +9.7%     351103 ±  0%  sched_debug.cpu#4.clock
    320179 ±  0%      +9.7%     351103 ±  0%  sched_debug.cpu#4.clock_task
      3.50 ± 95%    +135.7%       8.25 ± 51%  sched_debug.cpu#4.cpu_load[2]
      1090 ±  2%     +13.7%       1239 ±  2%  sched_debug.cpu#4.nr_uninterruptible
    320178 ±  0%      +9.7%     351087 ±  0%  sched_debug.cpu#5.clock
    320178 ±  0%      +9.7%     351087 ±  0%  sched_debug.cpu#5.clock_task
    547.50 ±  3%     -45.8%     296.75 ±  4%  sched_debug.cpu#5.nr_uninterruptible
    320178 ±  0%      +9.7%     351105 ±  0%  sched_debug.cpu#6.clock
    320178 ±  0%      +9.7%     351105 ±  0%  sched_debug.cpu#6.clock_task
    542.75 ±  2%     -46.0%     293.25 ± 13%  sched_debug.cpu#6.nr_uninterruptible
    320182 ±  0%      +9.7%     351104 ±  0%  sched_debug.cpu#7.clock
    320182 ±  0%      +9.7%     351104 ±  0%  sched_debug.cpu#7.clock_task
    495.75 ±  3%     -39.0%     302.50 ±  6%  sched_debug.cpu#7.nr_uninterruptible
    320182 ±  0%      +9.7%     351105 ±  0%  sched_debug.cpu_clk
    320014 ±  0%      +9.7%     350935 ±  0%  sched_debug.ktime
    320182 ±  0%      +9.7%     351105 ±  0%  sched_debug.sched_clk

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/fs2/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
  nhm4/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/xfs/nfsv4/8K/400M/fsyncBeforeClose/16d/256fpd

commit:
  c56dadf39761a6157239cac39e3988998c994f98
  fe32d3cd5e8eb0f82e459763374aa80797023403

c56dadf39761a615 fe32d3cd5e8eb0f82e45976337
---------------- --------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
           2:4          -50%            :4   kmsg.Spurious_LAPIC_timer_interrupt_on_cpu
         %stddev     %change         %stddev
             \          |                \
   5006474 ±  0%      +8.3%    5423397 ±  1%  fsmark.app_overhead
     55958 ±  0%      +8.2%      60571 ±  0%  fsmark.time.involuntary_context_switches
    229728 ±  0%      -1.8%     225635 ±  0%  fsmark.time.voluntary_context_switches
   2156965 ±  0%     +15.7%    2495398 ±  0%  latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_lookup.[nfsv4].nfs4_proc_lookup_common.[nfsv4].nfs4_proc_lookup.[nfsv4].nfs_lookup_revalidate.nfs4_lookup_revalidate.lookup_dcache.__lookup_hash
      2481 ±  2%     +10.1%       2731 ±  2%  slabinfo.kmalloc-256.num_objs
      5226 ±  8%      -8.5%       4783 ±  4%  slabinfo.vm_area_struct.active_objs
      2584 ±  6%     -21.8%       2021 ±  8%  sched_debug.cfs_rq[2]:/.min_vruntime
     -1883 ±-12%     +38.9%      -2614 ± -8%  sched_debug.cfs_rq[2]:/.spread0
      7666 ±165%    -100.0%       0.00 ± -1%  sched_debug.cfs_rq[4]:/.load
    229.25 ±110%    -100.0%       0.00 ± -1%  sched_debug.cfs_rq[4]:/.runnable_load_avg
     -2331 ± -9%     +17.2%      -2731 ± -6%  sched_debug.cfs_rq[4]:/.spread0
    437.25 ± 61%    -100.0%       0.00 ± -1%  sched_debug.cfs_rq[4]:/.utilization_load_avg
      2091 ±  7%     +22.3%       2558 ±  8%  sched_debug.cfs_rq[5]:/.min_vruntime
     -2376 ± -6%     -12.6%      -2077 ± -8%  sched_debug.cfs_rq[5]:/.spread0
   1537704 ± 96%     -93.1%     105402 ± 33%  sched_debug.cpu#7.nr_switches
   1537765 ± 96%     -93.1%     105462 ± 33%  sched_debug.cpu#7.sched_count
    756633 ± 98%     -94.6%      41165 ± 43%  sched_debug.cpu#7.sched_goidle

nhm4: Nehalem
Memory: 4G

fsmark.time.involuntary_context_switches

  [ASCII plot: bisect-good (*) runs cluster around 55000-56500; bisect-bad (O) runs sit higher, roughly 60000-64500]

fsmark.time.voluntary_context_switches

  [ASCII plot: bisect-good (*) runs cluster around 228500-230500; bisect-bad (O) runs sit lower, roughly 224500-226500]

	[*] bisect-good sample
	[O] bisect-bad sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
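For reference, each %change column above is simply the relative difference between the two per-commit means. As a sketch (the awk one-liner is our illustration, not part of lkp-tests), the headline fsmark.files_per_sec regression can be recomputed from the first table:

```shell
# Recompute %change for fsmark.files_per_sec from the two per-commit means
# (parent c56dadf3 vs patched fe32d3cd; values copied from the table above).
parent=128.00
patched=121.60
awk -v a="$parent" -v b="$patched" \
    'BEGIN { printf "%+.1f%%\n", (b - a) / a * 100 }'
# prints -5.0%
```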
Thanks,
Ying Huang

[attachment: job.yaml]

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: fsmark
default-monitors:
  wait: activate-monitor
  kmsg:
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
    interval: 10
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer:
  watchdog:
commit: 9cfcc658da9693f65e7224e8329e40ada2f3c699
model: Nehalem
nr_cpu: 8
memory: 4G
hdd_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part1"
swap_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part2"
rootfs_partition: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part3"
netconsole_port: 6649
category: benchmark
iterations: 1x
nr_threads: 32t
disk: 1HDD
fs: xfs
fs2: nfsv4
fsmark:
  filesize: 8K
  test_size: 400M
  sync_method: fsyncBeforeClose
  nr_directories: 16d
  nr_files_per_directory: 256fpd
queue: cyclic
testbox: nhm4
tbox_group: nhm4
kconfig: x86_64-rhel
enqueue_time: 2015-09-05 22:01:12.417466845 +08:00
id: 6cafd078675b335a6d47e6ddc8daebd0059f0c88
user: lkp
compiler: gcc-4.9
head_commit: 9cfcc658da9693f65e7224e8329e40ada2f3c699
base_commit: bf59e6623a3a92a2bf428f2d6592c81aae6317e1
branch: linus/master
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/9cfcc658da9693f65e7224e8329e40ada2f3c699/vmlinuz-4.2.0-09628-g9cfcc65"
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/fsmark/performance-1x-32t-1HDD-xfs-nfsv4-8K-400M-fsyncBeforeClose-16d-256fpd/nhm4/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/9cfcc658da9693f65e7224e8329e40ada2f3c699/0"
job_file: "/lkp/scheduled/nhm4/cyclic_fsmark-performance-1x-32t-1HDD-xfs-nfsv4-8K-400M-fsyncBeforeClose-16d-256fpd-x86_64-rhel-CYCLIC_HEAD-9cfcc658da9693f65e7224e8329e40ada2f3c699-20150905-98956-61twsm-0.yaml"
dequeue_time: 2015-09-07 03:13:57.049455437 +08:00
max_uptime: 917.1600000000001
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/nhm4/cyclic_fsmark-performance-1x-32t-1HDD-xfs-nfsv4-8K-400M-fsyncBeforeClose-16d-256fpd-x86_64-rhel-CYCLIC_HEAD-9cfcc658da9693f65e7224e8329e40ada2f3c699-20150905-98956-61twsm-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linus/master
- commit=9cfcc658da9693f65e7224e8329e40ada2f3c699
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/9cfcc658da9693f65e7224e8329e40ada2f3c699/vmlinuz-4.2.0-09628-g9cfcc65
- max_uptime=917
- RESULT_ROOT=/result/fsmark/performance-1x-32t-1HDD-xfs-nfsv4-8K-400M-fsyncBeforeClose-16d-256fpd/nhm4/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/9cfcc658da9693f65e7224e8329e40ada2f3c699/0
- LKP_SERVER=inn
- |-
  libata.force=1.5Gbps earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/9cfcc658da9693f65e7224e8329e40ada2f3c699/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs2.cgz,/lkp/benchmarks/fsmark.cgz"
job_state: finished
loadavg: 27.12 11.13 4.14 1/187 2699
start_time: '1441566896'
end_time: '1441567023'
version: "/lkp/lkp/.src-20150906-205656"

[attachment: reproduce]

# pin all eight CPUs to the performance governor
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor

# make the xfs filesystem and re-export it to localhost over NFSv4
mkfs -t xfs /dev/sda1
mount -t xfs -o nobarrier,inode64 /dev/sda1 /fs/sda1
/etc/init.d/rpcbind start
/etc/init.d/nfs-common start
/etc/init.d/nfs-kernel-server start
mount -t nfs -o vers=4 localhost:/fs/sda1 /nfs/sda1

# fs_mark over 32 target directories (one per thread): 16 subdirectories
# each (-D 16), 256 files per subdirectory (-N 256), 1600 files per thread
# (-n 1600), one loop (-L 1), fsync before close (-S 1), 8 KiB files (-s 8192)
./fs_mark -d /nfs/sda1/1 -d /nfs/sda1/2 -d /nfs/sda1/3 -d /nfs/sda1/4 -d /nfs/sda1/5 -d /nfs/sda1/6 -d /nfs/sda1/7 -d /nfs/sda1/8 -d /nfs/sda1/9 -d /nfs/sda1/10 -d /nfs/sda1/11 -d /nfs/sda1/12 -d /nfs/sda1/13 -d /nfs/sda1/14 -d /nfs/sda1/15 -d /nfs/sda1/16 -d /nfs/sda1/17 -d /nfs/sda1/18 -d /nfs/sda1/19 -d /nfs/sda1/20 -d /nfs/sda1/21 -d /nfs/sda1/22 -d /nfs/sda1/23 -d /nfs/sda1/24 -d /nfs/sda1/25 -d /nfs/sda1/26 -d /nfs/sda1/27 -d /nfs/sda1/28 -d /nfs/sda1/29 -d /nfs/sda1/30 -d /nfs/sda1/31 -d /nfs/sda1/32 -D 16 -N 256 -n 1600 -L 1 -S 1 -s 8192
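As a sanity check on how the fs_mark arguments follow from the job spec (this derivation is our reading, not something lkp-tests states explicitly): 400M of test size split into 8K files across 32 threads works out to the per-thread file count passed as -n.

```shell
# test_size / filesize / nr_threads = files written per thread;
# should match the "-n 1600" argument in the reproduce script.
test_size_kb=$(( 400 * 1024 ))  # test_size: 400M
filesize_kb=8                   # filesize: 8K
nr_threads=32                   # nr_threads: 32t
echo $(( test_size_kb / filesize_kb / nr_threads ))
# prints 1600
```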