From: Nuno Silva <nuno.silva@vgertech.com>
To: Ian Godin <Ian.Godin@lowrydigital.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: Drive performance bottleneck
Date: Thu, 03 Feb 2005 17:40:23 +0000
Message-ID: <42026207.4090007@vgertech.com>
In-Reply-To: <c4fc982390674caa2eae4f252bf4fc78@lowrydigital.com>

Ian Godin wrote:
> 
>   I am trying to get very fast disk drive performance and I am seeing 
> some interesting bottlenecks.  We are trying to get 800 MB/sec or more 
> (yes, that is megabytes per second).  We are currently using PCI-Express 
> with a 16 drive raid card (SATA drives).  We have achieved that speed, 
> but only through the SG (SCSI generic) driver.  This is running the 
> stock 2.6.10 kernel.  And the device is not mounted as a file system.  I 
> also set the read ahead size on the device to 16KB (which speeds things 
> up a lot):
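
(For reference, and assuming Ian used the usual mechanism: on 2.6 the
per-device read-ahead can be set with blockdev, in units of 512-byte
sectors, so 16KB is 32 sectors. A sketch; /dev/sdX below is just a
placeholder for the RAID device:

  blockdev --setra 32 /dev/sdX    # read-ahead = 32 * 512 bytes = 16KB
  blockdev --getra /dev/sdX       # verify the setting
)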

I was trying to reproduce this, but got distracted by the following
(page down if you just want to see the odd result):

puma:/tmp/dd# sg_map
/dev/sg0  /dev/sda
/dev/sg1  /dev/sdb
/dev/sg2  /dev/scd0
/dev/sg3  /dev/sdc
puma:/tmp/dd# time sg_dd if=/dev/sg1 of=/tmp/dd/sg1 bs=64k count=1000
Reducing read to 64 blocks per loop
1000+0 records in
1000+0 records out

real    0m0.187s
user    0m0.001s
sys     0m0.141s
puma:/tmp/dd# time dd if=/dev/sdb of=/tmp/dd/sdb bs=64k count=1000
1000+0 records in
1000+0 records out
65536000 bytes transferred in 1.203468 seconds (54455956 bytes/sec)

real    0m1.219s
user    0m0.001s
sys     0m0.138s
puma:/tmp/dd# ls -l
total 128000
-rw-r--r--  1 root root 65536000 Feb  3 17:16 sdb
-rw-r--r--  1 root root 65536000 Feb  3 17:16 sg1
puma:/tmp/dd# md5sum *
ec31224970ddd3fb74501c8e68327e7b  sdb
60d4689227d60e6122f1ffe0ec1b2ad7  sg1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See? Reading sdb with dd does not give the same data as reading sg1 
with sg_dd! Is this supposed to happen?
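
To localize the difference, cmp can list the first differing bytes
(a quick check, assuming both dumps are still in /tmp/dd):

  cmp -l /tmp/dd/sdb /tmp/dd/sg1 | head    # byte offset + differing values

Note the sg_dd timing above, too: 65536000 bytes in 0.187s would be
about 350 MB/sec from a single SATA drive, which already looks suspicious.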

About the 900 MB/sec:
This same sg1 (= sdb, which is a single Hitachi SATA HDD) performs like 
this:

puma:/tmp/dd# time sg_dd if=/dev/sg1 of=/dev/null bs=64k count=1000000 time=1
Reducing read to 64 blocks per loop
time to transfer data was 69.784784 secs, 939.12 MB/sec
1000000+0 records in
1000000+0 records out

real    1m9.787s
user    0m0.063s
sys     0m58.115s

I can assure you that this drive can't do more than 60 MB/sec sustained.
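
As a cross-check, timing a plain buffered read through the block layer
should give a realistic figure for the drive (a sketch; hdparm only
gives a rough approximation):

  hdparm -t /dev/sdb                               # buffered read timing
  dd if=/dev/sdb of=/dev/null bs=64k count=100000  # ~6.4GB sequential read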

My only conclusion is that sg (or sg_dd) is broken? ;)

Peace,
Nuno Silva
