linux-kernel.vger.kernel.org archive mirror
From: Andrew Morton <akpm@zip.com.au>
To: dementiev@mpi-sb.mpg.de
Cc: linux-kernel mailing list <linux-kernel@vger.kernel.org>
Subject: Re: Multi disk performance (8 disks), limit 230 MB/s
Date: Sat, 31 Aug 2002 13:11:32 -0700
Message-ID: <3D7122F4.3FE3BD07@zip.com.au>
In-Reply-To: <3D7104D5.8AD2086B@mpi-sb.mpg.de>

Roman Dementiev wrote:
> 
> Hi all,
> 
> I have been doing some benchmarking experiments on kernel 2.4.19 with 8
> IDE disks. Due to the poor performance of 2 disks on one IDE channel we
> bought 4 Promise Ultra 100 TX2 (32-bit 66 MHz) controllers and, to avoid
> bus saturation, a Supermicro P4PDE motherboard with multiple PCI buses
> (64-bit 66 MHz) and 2 Xeons. I have already reported the PCI slot
> placement problems to the mailing list, but theoretically I can live
> with the current IDE controllers -> PCI slots assignment.
> 
> The assignment is the following: 3 IDE controllers are connected to one
> 64-bit PCI hub with a bandwidth of 1 GByte/s, and the 4th controller is
> on another hub with the same characteristics.
> 
> Theoretically, with the 6 IBM disks (47 MB/s from the first cylinders) I
> should achieve about 266 MB/s (32 bit x 66 MHz), since 266 < 6*47 = 282
> MB/s, and the last two disks should add 2*47 = 94 MB/s < 266 MB/s.
> Thus the total rate should be 266 + 94 = 360 MB/s.
> 
> BUT no matter from which set of the disks I read or write I have got the
> 
> following parallel read/write rates (raw access):
> 
>            write (MB/s)  read (MB/s)  sys time (top)  real/user/sys (s)
> 
> 1 disk :    48            45            3 %            3.0 / 0.1 /  0.4
> 2 disks:    83            94           10 %            3.5 / 0.1 /  0.6
> 4 disks:   131           189           21 %            4.3 / 0.4 /  2.8
> 5 disks:   172           233                           4.5 / 0.5 /  4.5
> 6 disks:   197           234 ?         30 %            5.2 / 0.6 /  6.6
> 7 disks:   209 ?         230 ?                         5.9 / 0.6 /  8.8
> 8 disks:   214 ?         229 ?         40 %            6.7 / 0.8 / 10.8
> 
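For concreteness, a parallel sequential-read test like the one described
above might be structured as in the following minimal sketch: one reader
process per device, large requests, and per-disk bandwidth printed at the
end.  This is only an assumption about the test's shape - Roman's actual
benchmark program is not shown in the thread, and the device paths are
placeholders.

/*
 * Minimal sketch of a parallel sequential-read benchmark:
 * one child process per device, each reading REQ_SIZE bytes per
 * read() call until TOTAL bytes have been transferred, then
 * reporting its own bandwidth.  Device paths are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>
#include <sys/time.h>

#define REQ_SIZE  (512 * 1024)            /* bytes per read() call */
#define TOTAL     (256UL * 1024 * 1024)   /* bytes read from each disk */

static void read_disk(const char *dev)
{
	char *buf = malloc(REQ_SIZE);
	int fd = open(dev, O_RDONLY);
	unsigned long done = 0;
	struct timeval t0, t1;
	double secs;

	if (!buf || fd < 0) {
		perror(dev);
		exit(1);
	}
	gettimeofday(&t0, NULL);
	while (done < TOTAL) {
		ssize_t n = read(fd, buf, REQ_SIZE);
		if (n <= 0)
			break;
		done += n;
	}
	gettimeofday(&t1, NULL);
	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%s: %.1f MB/s\n", dev, done / secs / 1e6);
	exit(0);
}

int main(int argc, char **argv)
{
	int i;

	/* e.g. ./bench /dev/raw/raw1 /dev/raw/raw2 ... (placeholders) */
	for (i = 1; i < argc; i++)
		if (fork() == 0)
			read_disk(argv[i]);
	while (wait(NULL) > 0)
		;			/* wait for all readers */
	return 0;
}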

Raw access in 2.4 isn't very good - it uses 512-byte chunks.  If you
can hunt down the `rawvary' patch, that might help, but I don't know
whether it works against IDE.

Testing 2.5 would be interesting ;)

Try the same test with O_DIRECT reads or writes against ext2 filesystems.
That will use 4k blocks.
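
A minimal sketch of that O_DIRECT variant, assuming a file on an ext2
filesystem (the path is a placeholder): O_DIRECT requires the user buffer
and the transfer size to be aligned to the filesystem block size, 4k here.

/*
 * Sketch: direct reads from a file on ext2.  The buffer comes from
 * posix_memalign() so it satisfies the 4k alignment requirement,
 * and every read() is a multiple of the 4k block size.
 */
#define _GNU_SOURCE               /* exposes O_DIRECT on Linux */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	const size_t blk = 4096;          /* ext2 block size */
	const size_t req = 128 * 1024;    /* request size, multiple of blk */
	void *buf;
	int fd;
	ssize_t n;

	if (posix_memalign(&buf, blk, req))
		return 1;

	fd = open("/mnt/disk1/testfile", O_RDONLY | O_DIRECT);  /* placeholder path */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	while ((n = read(fd, buf, req)) > 0)
		;			/* each read() is one direct IO */
	close(fd);
	return 0;
}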

Be sceptical about the `top' figures. They're just statistical and
are subject to gross errors under some circumstances - you may have
run out of CPU (unlikely with direct IO).

direct IO is synchronous: the kernel waits for completion of each
IO request before submitting the next.  You _must_ submit large
IOs.  That means passing 128k or more to each invocation of the
read and write system calls.  See the bandwidth-versus-request-size
numbers I measured for O_DIRECT ext3:
http://www.geocrawler.com/lists/3/SourceForge/7493/0/9329947/
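
A rough way to reproduce that kind of bandwidth-versus-request-size curve
is to time the same direct read at several request sizes, along these
lines (sketch only; the test file path is a placeholder and the file must
be large enough to cover each run):

/*
 * Sketch: O_DIRECT read bandwidth as a function of request size.
 * Because direct IO is synchronous, small requests pay the full
 * per-request latency each time; larger requests amortize it.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>

int main(void)
{
	const size_t sizes[] = { 4096, 32768, 131072, 524288, 1048576 };
	const unsigned long total = 64UL * 1024 * 1024;   /* bytes per run */
	void *buf;
	int i;

	if (posix_memalign(&buf, 4096, sizes[4]))         /* biggest request */
		return 1;

	for (i = 0; i < 5; i++) {
		int fd = open("/mnt/disk1/testfile", O_RDONLY | O_DIRECT);
		unsigned long done = 0;
		struct timeval t0, t1;
		double secs;

		if (fd < 0) {
			perror("open");
			return 1;
		}
		gettimeofday(&t0, NULL);
		while (done < total) {
			ssize_t n = read(fd, buf, sizes[i]);
			if (n <= 0)
				break;
			done += n;
		}
		gettimeofday(&t1, NULL);
		close(fd);
		secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
		printf("%7lu bytes/request: %.1f MB/s\n",
		       (unsigned long)sizes[i], done / secs / 1e6);
	}
	return 0;
}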

Thread overview: 3+ messages
2002-08-31 18:03 Multi disk performance (8 disks), limit 230 MB/s Roman Dementiev
2002-08-31 20:11 ` Andrew Morton [this message]
2002-09-03 16:00   ` Roman Dementiev
