From: Ming Zhang <mingz@ele.uri.edu>
To: Michael Tokarev <mjt@tls.msk.ru>
Cc: Holger Kiehl <Holger.Kiehl@dwd.de>, Jens Axboe <axboe@suse.de>,
	Vojtech Pavlik <vojtech@suse.cz>,
	linux-raid <linux-raid@vger.kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: Where is the performance bottleneck?
Date: Wed, 31 Aug 2005 14:57:00 -0400
Message-ID: <1125514620.6617.11.camel@localhost.localdomain>
In-Reply-To: <1125514340.6617.7.camel@localhost.localdomain>

Forgot to attach the lspci output.

It is a 133MHz PCI-X card, but it is only running at 66MHz.

Quick question: where can I check whether it is running in 64-bit mode?
(The PCI-X Status line in the lspci output below shows 64bit+ 133MHz+,
but as far as I can tell those flags only advertise what the device is
capable of, not the mode the bus actually negotiated.)

66MHz * 32 bit / 8 * 80% bus utilization ~= 211MB/s, which matches the
upper speed I am hitting now...
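
Spelled out (the 80% bus utilization factor is my rough estimate):

  66MHz * 32 bit / 8 = 264MB/s raw, * 0.80 ~= 211MB/s

If the bus were running in the card's rated 64-bit/133MHz mode, the
same estimate would give:

  133MHz * 64 bit / 8 = 1064MB/s raw, * 0.80 ~= 851MB/s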

Ming


02:01.0 SCSI storage controller: Marvell MV88SX5081 8-port SATA I PCI-X Controller (rev 03)
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
        Latency: 128, Cache Line Size 08
        Interrupt: pin A routed to IRQ 24
        Region 0: Memory at fa000000 (64-bit, non-prefetchable)
        Capabilities: [40] Power Management version 2
                Flags: PMEClk+ DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] Message Signalled Interrupts: 64bit+ Queue=0/0 Enable-
                Address: 0000000000000000  Data: 0000
        Capabilities: [60] PCI-X non-bridge device.
                Command: DPERE- ERO- RBC=0 OST=3
                Status: Bus=2 Dev=1 Func=0 64bit+ 133MHz+ SCD- USC-, DC=simple, DMMRBC=0, DMOST=3, DMCRS=0, RSCEM-


On Wed, 2005-08-31 at 14:52 -0400, Ming Zhang wrote:
> join the party. ;)
> 
> 8 400GB SATA disks on the same Marvell 8-port PCI-X 133 card. P4 CPU.
> Supermicro SCT board.
> 
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10] [faulty]
> md0 : active raid0 sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
>       3125690368 blocks 64k chunks
> 
> An 8-disk RAID0 from the same slot and card. The full stripe size is
> 512KB (8 disks x 64KB chunks).
> 
> run oread
> 
> # vmstat 1
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
>  1  1      0 533216 330424  11004    0    0  7128  1610 1069    77  0  2 95  3
>  1  0      0 298464 560828  11004    0    0 230404     0 2595  1389  1 23  0 76
>  0  1      0  64736 792248  11004    0    0 231420     0 2648  1342  0 26  0 74
>  1  0      0   8948 848416   9696    0    0 229376     0 2638  1337  0 29  0 71
>  0  0      0 868896    768   9696    0    0 29696    48 1224   162  0 19 73  8
> 
> # time ./oread /dev/md0
> 
> real    0m6.595s
> user    0m0.004s
> sys     0m0.151s
> 
> run dd
> 
> # vmstat 1
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
>  2  2      0 854008   2932  17108    0    0  7355  1606 1071    80  0  2 95  3
>  0  2      0 848888   3112  21388    0    0 164332     0 2985  3564  2  7  0 91
>  0  2      0 844024   3260  25664    0    0 164040     0 2990  3665  1  7  0 92
>  0  2      0 840328   3380  28920    0    0 164272     0 2932  3791  1  9  0 90
>  0  2      0 836360   3500  32232    0    0 163688   100 3001  5045  2  7  0 91
>  0  2      0 831432   3644  36612    0    0 164120   568 2977  3843  0  9  0 91
>  0  1      0 826056   3752  41688    0    0  7872     0 1267  1474  1  3  0 96
> 
> # time dd if=/dev/md0 of=/dev/null bs=131072 count=8192
> 8192+0 records in
> 8192+0 records out
> 
> real    0m4.771s
> user    0m0.005s
> sys     0m0.973s
> 
> So the reasonable part here is that, because of O_DIRECT, the sys time
> is reduced a lot.
> 
> But the elapsed time is longer! The reason I found is...
> 
> I attached a new oread.c which allows setting the block size of each
> read and the total read count, so I can read a full stripe at a time.
> 
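> The core of it is roughly this (a minimal sketch, not the exact
> attachment; O_DIRECT needs an aligned buffer, and the 4096-byte
> alignment and exact error handling here are assumptions):
> 
> /* oread.c - sequential O_DIRECT reader: oread <device> [bsize] [count] */
> #define _GNU_SOURCE                     /* for O_DIRECT */
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> #include <fcntl.h>
> 
> int main(int argc, char *argv[])
> {
>         size_t bsize = 131072;          /* default: 128KB per read */
>         long count = 8192;              /* default: 1GB total */
>         void *buf;
>         long i;
>         int fd;
> 
>         if (argc < 2) {
>                 fprintf(stderr, "usage: %s <device> [bsize] [count]\n", argv[0]);
>                 return 1;
>         }
>         if (argc > 2) bsize = atol(argv[2]);
>         if (argc > 3) count = atol(argv[3]);
> 
>         fd = open(argv[1], O_RDONLY | O_DIRECT);   /* bypass page cache */
>         if (fd < 0) { perror("open"); return 1; }
> 
>         /* O_DIRECT requires a sector/page aligned buffer */
>         if (posix_memalign(&buf, 4096, bsize)) { perror("memalign"); return 1; }
> 
>         for (i = 0; i < count; i++)
>                 if (read(fd, buf, bsize) < 0) { perror("read"); return 1; }
> 
>         free(buf);
>         close(fd);
>         return 0;
> }
> 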
> # time ./oread /dev/md0 524288 2048
> 
> real    0m4.950s
> user    0m0.000s
> sys     0m0.131s
> 
> compared to 
> 
> # time ./oread /dev/md0 131072 8192
> 
> real    0m6.633s
> user    0m0.002s
> sys     0m0.191s
> 
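> In throughput terms, both runs read the same 1GB total (2048 * 512KB =
> 8192 * 128KB), so that is roughly 217MB/s with full-stripe reads versus
> roughly 162MB/s with 128KB reads.
> 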
> 
> But still, I get linear scaling up to 4 disks, then no speed gain when
> adding more disks to the RAID.
> 
> Ming
> 

