From mboxrd@z Thu Jan  1 00:00:00 1970
From: Stan Hoeppner
Subject: Re: Re: Re: md raid5 performance 6x SSD RAID5
Date: Fri, 29 Nov 2013 00:23:46 -0600
Message-ID: <529832F2.8010206@hardwarefreak.com>
References: <1385118796.8091.31.camel@bews002.euractiv.com>
 <528FBBE5.80404@hardwarefreak.com>
 <1385369796.2076.16.camel@bews002.euractiv.com>
 <5293EF32.9090301@hardwarefreak.com>
 <20131126025210.GL8803@dastard>
 <52941C5D.1000305@hardwarefreak.com>
 <20131126061458.GM8803@dastard>
 <529455CB.6050907@hardwarefreak.com>
 <3c94e3bd-c74f-44ed-a1da-443b08edc43e@aliyun.com>
 <36ffd6f7-bfb0-4298-a18c-f45b07cab326@aliyun.com>
 <5296C98D.8000302@hardwarefreak.com>
 <5297FE1C.6080504@hardwarefreak.com>
Reply-To: stan@hardwarefreak.com
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
In-Reply-To: <5297FE1C.6080504@hardwarefreak.com>
Sender: linux-raid-owner@vger.kernel.org
To: lilofile , Linux RAID
List-Id: linux-raid.ids

On 11/28/2013 8:38 PM, Stan Hoeppner wrote:
> On 11/28/2013 4:02 AM, lilofile wrote:
>> Thank you for your advice. I have now tested the multi-thread patch;
>> single raid5 performance improved by about 30%.
>>
>> But I have another problem: when writing to a single raid, write
>> performance is approx 1.1GB/s
> ...
>> [1]-  Done    dd if=/dev/zero of=/dev/md126 count=100000 bs=1M
>> [2]+  Done    dd if=/dev/zero of=/dev/md127 count=100000 bs=1M
>
> No.  This is not a parallel IO test.
>
> ...
>> To address #3 use FIO or a similar testing tool that can issue IOs in
>> parallel.  With SSD based storage you will never reach maximum
>> throughput with a serial data stream.
>
> This is a parallel IO test, one command line:
>
> ~# fio --directory=/dev/md126 --zero_buffers --numjobs=16
> --group_reporting --blocksize=64k --ioengine=libaio --iodepth=16
> --direct=1 --size=64g --name=read --rw=read --stonewall --name=write
> --rw=write --stonewall

Correction.  The --size value is per job, not per fio run.
We use 16 jobs in parallel to maximize the hardware throughput, so use
--size=4g for 64GB total written in the test.  If you use --size=64g as
I stated above you'll write 1TB total in the test, and it will take
forever to finish.  With --size=4g the read test should take ~30 seconds
and the write test ~40s, not including the fio initialization time.

> Normally this targets a filesystem, not a raw block device.  This
> command line should work for a raw md device.

-- 
Stan
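[Editor's note: a hedged sketch of the corrected invocation, not from the
original thread.  The device path /dev/md126 comes from the thread; note
that fio addresses a raw block device via --filename, whereas --directory
expects a directory in which fio creates per-job files, so --filename is
assumed here.  The arithmetic below shows why --size=4g gives 64GB total.]

```shell
# Corrected fio run with per-job sizing (sketch; fio and /dev/md126 assumed):
#
#   fio --filename=/dev/md126 --zero_buffers --numjobs=16 \
#       --group_reporting --blocksize=64k --ioengine=libaio --iodepth=16 \
#       --direct=1 --size=4g --name=read --rw=read --stonewall \
#       --name=write --rw=write --stonewall
#
# fio's --size applies per job, so each stonewalled phase moves
# numjobs * size bytes in total:
numjobs=16
size_per_job_gb=4                          # --size=4g
total_gb=$((numjobs * size_per_job_gb))
echo "each phase transfers ${total_gb} GB total"   # 16 jobs x 4 GB = 64 GB
```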