From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: with ECARTIS (v1.0.0; list xfs); Fri, 12 Jan 2007 13:00:24 -0800 (PST)
Received: from raad.intranet ([212.12.190.123]) by oss.sgi.com (8.12.10/8.12.10/SuSE Linux 0.7) with ESMTP id l0CL0Aqw001179 for ; Fri, 12 Jan 2007 13:00:12 -0800
From: Al Boldi
Subject: Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)
Date: Sat, 13 Jan 2007 00:00:48 +0300
References:
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset="windows-1256"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Message-Id: <200701130000.48717.a1426z@gawab.com>
Sender: xfs-bounce@oss.sgi.com
Errors-to: xfs-bounce@oss.sgi.com
List-Id: xfs
To: Justin Piszcz
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com

Justin Piszcz wrote:
> Btw, max_sectors did improve my performance a little bit, but
> stripe_cache + read_ahead were the main optimizations that made
> everything go faster, by about ~1.5x.  I have individual bonnie++
> benchmarks of [only] the max_sectors_kb tests as well; it improved the
> times from 8 min per bonnie run to 7 min 11 seconds or so.  See below,
> and after that is what you requested.
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/md3 of=/dev/null bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s
> # for i in sde sdg sdi sdk; do \
>     echo 192 > /sys/block/"$i"/queue/max_sectors_kb; \
>     echo "Set /sys/block/$i/queue/max_sectors_kb to 192kb"; done
> Set /sys/block/sde/queue/max_sectors_kb to 192kb
> Set /sys/block/sdg/queue/max_sectors_kb to 192kb
> Set /sys/block/sdi/queue/max_sectors_kb to 192kb
> Set /sys/block/sdk/queue/max_sectors_kb to 192kb
> # echo 3 > /proc/sys/vm/drop_caches
> # dd if=/dev/md3 of=/dev/null bs=1M count=10240
> 10240+0 records in
> 10240+0 records out
> 10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s

Awful performance with your numbers/drop_caches settings!  Can you repeat
with /dev/sda only?  After a fresh reboot to a shell, then:

$ cat /sys/block/sda/queue/max_sectors_kb
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/sda of=/dev/null bs=1M count=10240

$ echo 192 > /sys/block/sda/queue/max_sectors_kb
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/sda of=/dev/null bs=1M count=10240

$ echo 128 > /sys/block/sda/queue/max_sectors_kb
$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/sda of=/dev/null bs=1M count=10240

> What were your tests designed to show?

A problem with the block-io.

Thanks!

--
Al
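[Editor's note: the single-disk sweep requested above repeats the same three steps (set max_sectors_kb, drop the page cache, time a 10 GB sequential read) for each candidate value. A minimal sketch of a helper that generates that command sequence; the `sweep` function, device name, and value list are illustrative, not from the thread. It only prints the commands, so the privileged sysfs writes stay explicit.]

```shell
#!/bin/sh
# Print the benchmark sequence for one disk: read the current
# max_sectors_kb, drop the page cache, do a baseline 10 GB read,
# then repeat the cache-drop + dd for each candidate value.
# Nothing is executed here; pipe the output to sh as root to run it.

sweep() {
    dev=$1; shift
    echo "cat /sys/block/$dev/queue/max_sectors_kb"
    echo "echo 3 > /proc/sys/vm/drop_caches"
    echo "dd if=/dev/$dev of=/dev/null bs=1M count=10240"
    for kb in "$@"; do
        echo "echo $kb > /sys/block/$dev/queue/max_sectors_kb"
        echo "echo 3 > /proc/sys/vm/drop_caches"
        echo "dd if=/dev/$dev of=/dev/null bs=1M count=10240"
    done
}

sweep sda 192 128
```

Running `sweep sda 192 128 | sudo sh` would execute the full sequence; generating the plan first makes it easy to review before touching sysfs.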
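[Editor's note: the stripe_cache + read_ahead tuning Justin credits with the ~1.5x gain is normally done through /sys/block/mdX/md/stripe_cache_size and blockdev --setra. The array name and values below are illustrative examples, not the ones from his runs; the memory estimate assumes 4 KiB pages and a 4-disk array.]

```shell
#!/bin/sh
# Illustrative stripe cache / read-ahead tuning for a raid5 md array.
# stripe_cache_size is counted in pages per member device, so the cache
# costs roughly stripe_cache_size * 4096 bytes * number_of_member_disks.
# Commands are printed, not executed; run them as root on a real array.

MD=md3            # assumed array name (example)
CACHE=8192        # stripe_cache_size in pages (example value)
RA=16384          # read-ahead in 512-byte sectors (example value)
DISKS=4           # member disks, for the memory estimate

echo "echo $CACHE > /sys/block/$MD/md/stripe_cache_size"
echo "blockdev --setra $RA /dev/$MD"
echo "approx stripe cache memory: $(( CACHE * 4096 * DISKS / 1048576 )) MiB"
```

The memory line is worth checking before raising stripe_cache_size: at 8192 pages on a 4-disk array the cache alone is on the order of 128 MiB.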