linux-raid.vger.kernel.org archive mirror
* Slow software RAID5
@ 2004-10-15 13:23 linux-raid2eran
  2004-10-15 17:27 ` 3Ware 7506-8, anyone have any luck? I don't buggz
  0 siblings, 1 reply; 10+ messages in thread
From: linux-raid2eran @ 2004-10-15 13:23 UTC (permalink / raw)
  To: linux-raid

[-- Attachment #1: Type: text/plain, Size: 1712 bytes --]

Hi,

I have a Linux box running a software RAID5 array of 3 IDE disks. Each
disk, by itself, has a sequential read bandwidth of over 40MB/sec.
I thus expect the sequential read bandwidth of the RAID array to be at
least 80MB/sec (two data disks' worth per stripe), but it's much lower
(e.g., 26MB/sec for a 128K chunk and 64K readahead). What could be
wrong?

Attached is a file listing the read and write bandwidth near the
beginning of the array (as measured by timing 'dd bs=64k ...') for
various chunk sizes, readahead sizes and degraded modes. As you can see,
it never approaches the theoretical 80MB/sec --- except in some degraded
modes.
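For the record, the timing loop was roughly the following (a sketch,
not the exact script: /dev/md0 is assumed, and since vanilla 2.6.8.1
predates /proc/sys/vm/drop_caches, which arrived in 2.6.16, each run
has to read a region that isn't already cached):

```shell
#!/bin/sh
# Time a sequential read of the first N MiB of a device with dd and
# report MB/sec.  Sketch only; the device name is an assumption.
measure_read() {
    dev=$1; mb=$2
    start=$(date +%s)
    # errors silenced so a missing device doesn't abort the sketch
    dd if="$dev" of=/dev/null bs=64k count=$(( mb * 16 )) 2>/dev/null || true
    secs=$(( $(date +%s) - start ))
    if [ "$secs" -lt 1 ]; then secs=1; fi   # avoid divide-by-zero
    echo "$(( mb / secs )) MB/sec"
}

measure_read /dev/md0 1024   # read the first 1 GiB of the array
```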

Each drive is connected as master on a separate IDE channel with no
slave. The 3 drives are of different brands, all 7200RPM, all ATA/100 or
better. CPU usage is low, nothing else is accessing the disks, the 1GB
of RAM is mostly unused and there's no array sync in progress. Running
vanilla Linux 2.6.8.1 on Fedora Core 2.

Curiously, if I remove one of the drives from the array
("mdadm --fail ...") and run in degraded mode, the bandwidth greatly
increases. Likewise, RAID4 is significantly faster.
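Spelled out, the degraded-mode experiment looks like this (a sketch;
/dev/md0 and the member name are assumptions, since the exact command
was elided above):

```shell
# Fail and remove one member to force degraded mode, benchmark, then
# restore it.  On this kernel re-adding triggers a full resync, so
# wait for it to finish before taking non-degraded numbers again.
mdadm /dev/md0 --fail /dev/hda5
mdadm /dev/md0 --remove /dev/hda5
# ... run the dd read timing in degraded mode ...
mdadm /dev/md0 --add /dev/hda5
cat /proc/mdstat        # watch resync progress
```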

One theory is that reads are slowed in non-degraded RAID5 because each
disk's reads are not contiguous (the parity chunks are skipped). But I
would expect a huge readahead to fix that, and it doesn't, even when
the readahead is larger than a full stripe.
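To visualize that theory: with mdadm's default left-symmetric layout,
parity rotates across the disks, so on a 3-disk array every third chunk
on each disk is parity and a per-disk sequential read is never fully
contiguous. A quick sketch of where parity lands:

```shell
#!/bin/sh
# Which disk holds the parity chunk of each stripe, for RAID5
# left-symmetric (mdadm's default) on a 3-disk array: 2,1,0,2,1,0,...
disks=3
stripe=0
while [ "$stripe" -lt 6 ]; do
    parity=$(( (disks - 1 - stripe % disks) % disks ))
    echo "stripe $stripe: parity on disk $parity"
    stripe=$(( stripe + 1 ))
done
```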

Another theory is that the disks, being of different models, get out of
sync; again, a huge readahead should fix that (a 65536-sector readahead
is much larger than tracksize * #disks), but it doesn't. Stranger still,
with that readahead the sequential read bandwidth is lower than with
smaller readaheads, which doesn't make much sense anyway.
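For completeness, these readahead values are per-device and were
presumably set with something like blockdev; the unit is 512-byte
sectors (device names are this box's):

```shell
# Set a 65536-sector readahead on the array and each member disk,
# then read the values back (blockdev reports 512-byte sectors):
for dev in /dev/md0 /dev/hda /dev/hdc /dev/hdg; do
    blockdev --setra 65536 "$dev"
    echo "$dev: $(blockdev --getra "$dev") sectors"
done
```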

Any ideas?

  Eran


[-- Attachment #2: raidperf.txt --]
[-- Type: text/plain, Size: 6320 bytes --]

With readahead of 128 blocks (on both /dev/hd? and /dev/md?):

RAID: --chunk=4 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 46.135222 MB/sec      write: 39.887844 MB/sec
RAID: --chunk=8 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 47.709570 MB/sec      write: 36.555040 MB/sec
RAID: --chunk=16 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 48.440246 MB/sec      write: 38.898690 MB/sec
RAID: --chunk=32 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 45.970336 MB/sec      write: 38.828235 MB/sec
RAID: --chunk=64 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 45.173847 MB/sec      write: 33.991808 MB/sec
RAID: --chunk=128 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 28.513893 MB/sec      write: 34.574373 MB/sec
RAID: --chunk=256 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 37.830947 MB/sec      write: 32.963090 MB/sec
RAID: --chunk=512 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 35.472600 MB/sec      write: 28.127734 MB/sec
RAID: --chunk=4 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hda5)
  read: 59.807037 MB/sec      write: 38.380247 MB/sec
RAID: --chunk=8 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hda5)
  read: 62.257986 MB/sec      write: 37.675747 MB/sec
RAID: --chunk=16 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hda5)
  read: 65.225659 MB/sec      write: 33.920095 MB/sec
RAID: --chunk=32 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hda5)
  read: 60.940461 MB/sec      write: 32.236842 MB/sec
RAID: --chunk=64 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hda5)
  read: 59.110754 MB/sec      write: 35.835345 MB/sec
RAID: --chunk=128 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hda5)
  read: 44.908054 MB/sec      write: 34.975526 MB/sec
RAID: --chunk=256 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hda5)
  read: 41.654070 MB/sec      write: 30.112395 MB/sec
RAID: --chunk=512 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hda5)
  read: 45.122807 MB/sec      write: 24.445571 MB/sec
RAID: --chunk=4 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdc5)
  read: 75.513699 MB/sec      write: 47.311795 MB/sec
RAID: --chunk=8 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdc5)
  read: 88.462861 MB/sec      write: 44.733944 MB/sec
RAID: --chunk=16 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdc5)
  read: 83.198577 MB/sec      write: 38.626126 MB/sec
RAID: --chunk=32 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdc5)
  read: 88.422319 MB/sec      write: 39.953924 MB/sec
RAID: --chunk=64 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdc5)
  read: 72.806604 MB/sec      write: 32.263796 MB/sec
RAID: --chunk=128 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdc5)
  read: 59.526381 MB/sec      write: 34.981868 MB/sec
RAID: --chunk=256 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdc5)
  read: 59.906832 MB/sec      write: 32.645118 MB/sec
RAID: --chunk=512 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdc5)
  read: 58.260344 MB/sec      write: 26.106374 MB/sec
RAID: --chunk=4 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdg5)
  read: 78.143985 MB/sec      write: 39.318830 MB/sec
RAID: --chunk=8 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdg5)
  read: 77.453834 MB/sec      write: 34.471592 MB/sec
RAID: --chunk=16 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdg5)
  read: 77.828762 MB/sec      write: 32.166972 MB/sec
RAID: --chunk=32 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdg5)
  read: 76.898167 MB/sec      write: 31.566999 MB/sec
RAID: --chunk=64 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdg5)
  read: 73.809296 MB/sec      write: 34.251287 MB/sec
RAID: --chunk=128 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdg5)
  read: 47.600543 MB/sec      write: 34.648886 MB/sec
RAID: --chunk=256 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdg5)
  read: 54.200618 MB/sec      write: 31.072809 MB/sec
RAID: --chunk=512 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=/dev/hdg5)
  read: 53.082003 MB/sec      write: 28.086779 MB/sec

With readahead of 256 blocks (on both /dev/hd? and /dev/md?):

RAID: --chunk=4 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 48.525528 MB/sec      write: 42.591060 MB/sec
RAID: --chunk=8 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 47.474779 MB/sec      write: 39.262821 MB/sec
RAID: --chunk=16 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 49.118508 MB/sec      write: 34.306099 MB/sec
RAID: --chunk=32 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 49.764638 MB/sec      write: 35.755652 MB/sec
RAID: --chunk=64 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 43.298362 MB/sec      write: 39.495906 MB/sec
RAID: --chunk=128 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 41.074090 MB/sec      write: 33.199966 MB/sec
RAID: --chunk=256 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 41.016373 MB/sec      write: 33.782837 MB/sec
RAID: --chunk=512 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 39.634272 MB/sec      write: 30.377953 MB/sec

With readahead of 65536 blocks (on both /dev/hd? and /dev/md?):

RAID: --chunk=4 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 29.646205 MB/sec      write: 42.798913 MB/sec
RAID: --chunk=8 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 31.546354 MB/sec      write: 37.972348 MB/sec
RAID: --chunk=16 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 31.209560 MB/sec      write: 38.945801 MB/sec
RAID: --chunk=32 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 32.475593 MB/sec      write: 34.958779 MB/sec
RAID: --chunk=64 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 36.673161 MB/sec      write: 37.719941 MB/sec
RAID: --chunk=128 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 27.939899 MB/sec      write: 32.561181 MB/sec
RAID: --chunk=256 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 37.139007 MB/sec      write: 33.195663 MB/sec
RAID: --chunk=512 --level=5 -n3 /dev/hda5 /dev/hdc5 /dev/hdg5 (fail=)
  read: 49.537750 MB/sec      write: 28.594723 MB/sec


^ permalink raw reply	[flat|nested] 10+ messages in thread

* 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 13:23 Slow software RAID5 linux-raid2eran
@ 2004-10-15 17:27 ` buggz
  2004-10-15 17:40   ` Scott T. Smith
  0 siblings, 1 reply; 10+ messages in thread
From: buggz @ 2004-10-15 17:27 UTC (permalink / raw)
  To: linux-raid


Hello,

Anyone use the 3ware 7506-8 card in ANY motherboard?
I'm having a bad problem: it won't boot into the 3ware BIOS;
the machine hangs.
I'm using a compatible motherboard, per 3ware's list.
An Asus P4C800-E Deluxe.
I've tried various motherboard BIOS settings.






* Re: 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 17:27 ` 3Ware 7506-8, anyone have any luck? I don't buggz
@ 2004-10-15 17:40   ` Scott T. Smith
  2004-10-15 18:12     ` KELEMEN Peter
  0 siblings, 1 reply; 10+ messages in thread
From: Scott T. Smith @ 2004-10-15 17:40 UTC (permalink / raw)
  To: buggz; +Cc: linux-raid

On Fri, 2004-10-15 at 10:27, buggz wrote:
> Anyone use the 3ware 7506-8 card in ANY motherboard?
> I'm having a bad problem, it won't boot into the 3ware BIOS.
> The machine hangs.
> I'm using a compatible motherboard, per 3ware's list.
> An Asus P4C800-E Deluxe.
> I've tried various mb BIOS settings.

We used three successfully in one motherboard.  We have since switched
to RocketRaid 1820A's for performance reasons though.

	Scott




* Re: 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 17:40   ` Scott T. Smith
@ 2004-10-15 18:12     ` KELEMEN Peter
  2004-10-15 18:21       ` Scott T. Smith
  0 siblings, 1 reply; 10+ messages in thread
From: KELEMEN Peter @ 2004-10-15 18:12 UTC (permalink / raw)
  To: linux-raid

* Scott T. Smith (scott@gelatinous.com) [20041015 10:40]:

> On Fri, 2004-10-15 at 10:27, buggz wrote:
> > Anyone use the 3ware 7506-8 card in ANY motherboard? [...]

> We used three successfully in one motherboard.  We have since
> switched to RocketRaid 1820A's for performance reasons though.

It's only fair to mention that the 3ware 7506 is a 64-bit/66MHz
PCI PATA card, while the HighPoint RocketRAID 1820A is a
64-bit/133MHz PCI-X SATA card, so comparing their performance is
comparing apples to oranges.

In the SATA PCI-X league HighPoint is not such a shiny player
performance-wise but has attractive pricing, see:
http://www.tomshardware.com/storage/20040831/sata-raid-controller-16.html

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 18:12     ` KELEMEN Peter
@ 2004-10-15 18:21       ` Scott T. Smith
  2004-10-15 18:44         ` KELEMEN Peter
  0 siblings, 1 reply; 10+ messages in thread
From: Scott T. Smith @ 2004-10-15 18:21 UTC (permalink / raw)
  To: KELEMEN Peter; +Cc: linux-raid

On Fri, 2004-10-15 at 11:12, KELEMEN Peter wrote:
> * Scott T. Smith (scott@gelatinous.com) [20041015 10:40]:
> 
> > On Fri, 2004-10-15 at 10:27, buggz wrote:
> > > Anyone use the 3ware 7506-8 card in ANY motherboard? [...]
> 
> > We used three successfully in one motherboard.  We have since
> > switched to RocketRaid 1820A's for performance reasons though.
> 
> It's only fair to mention that the 3ware 7506 is a 64-bit/66MHz
> PCI PATA card

You know what, I think I was wrong -- we have the 8506.  Sorry, my bad. 
And ours is SATA, not PATA (is that the difference between the 7xxx and
8xxx?)

> , while the HighPoint RocketRAID 1820A is a
> 64-bit/133MHz PCI-X SATA card, so comparing their performance is
> comparing apples to oranges.

There are other reasons the Highpoint is better than the 3ware,
including the fact that it instantly recognizes when you yank a disk,
unlike the 3ware, which sits there for a while and then hard-resets the
entire controller (thus halting all disk I/O to that controller for a
couple of seconds).  3ware wrote me back and told me it's supposed to
do that!

> In the SATA PCI-X league HighPoint is not such a shiny player
> performance-wise but has attractive pricing, see:
> http://www.tomshardware.com/storage/20040831/sata-raid-controller-16.html

Is the 9500 a PCI-X card?  We have three of the 9xxx (don't know exactly
which one, might be the 9000-8ML) in another chassis (using multilane,
which basically bundles 4 SATA cables on one monster cable), and we get
the same crappy performance from that one too.  Worse yet, the 9xxx
series wants to hijack your disks unless you go out of your way to
enable 'export JBOD' mode.  You can't take a disk from a 9xxx controller
and put it in a box with an 8506 -- the controller won't recognize it!

BTW I'm not talking about RAID or filesystem performance, I'm talking
about direct disk access, JBOD mode, using rather large blocks.  So
really all I want is multiple controllers on a single card.  To that
end, the 1820a works great.  I figure we'd all be running software RAID
anyways ;-)

	Scott




* Re: 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 18:21       ` Scott T. Smith
@ 2004-10-15 18:44         ` KELEMEN Peter
  2004-10-15 18:55           ` Scott T. Smith
  2004-10-15 19:19           ` Ming Zhang
  0 siblings, 2 replies; 10+ messages in thread
From: KELEMEN Peter @ 2004-10-15 18:44 UTC (permalink / raw)
  To: linux-raid

[ Please do not address answers directly to me; I read the list. ]

* Scott T. Smith (scott@gelatinous.com) [20041015 11:21]:

> You know what, I think I was wrong -- we have the 8506.  Sorry,
> my bad.  And ours is SATA, not PATA (is that the difference
> between the 7xxx and 8xxx?)

Yes.

> There are other reasons the Highpoint is better than the 3ware,
> including the fact that it instantly recognizes when you yank a
> disk, unlike the 3ware which sits there for a while, and then
> hard resets the entire controller (thus halting all disk io to
> that controller for a couple of seconds).  3ware wrote me back
> and told me it's supposed to do that!

Cannot comment on this one, never was a problem for us.

> Is the 9500 a PCI-X card?

No, it's 64-bit 66 MHz PCI.
http://www.3ware.com/products/pdf/9000DS_041904.pdf

> [...] and we get the same crappy performance from that one too.

Define `crappy': I get 770 MiB/s read with 3x 3ware 9500S-8MI.
(RAID00 hw-sw)
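A guess at what "RAID00 hw-sw" expands to: each 9500S-8MI exports its
eight disks as one hardware RAID0 unit, and md stripes across the three
resulting units. Something like (device names hypothetical):

```shell
# Software RAID0 across three hardware RAID0 units, one per controller.
# /dev/sda, /dev/sdb, /dev/sdc stand in for the exported units.
mdadm --create /dev/md0 --level=0 --raid-devices=3 \
      --chunk=64 /dev/sda /dev/sdb /dev/sdc
```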

> Worse yet, the 9xxx series wants to hijack your disks unless you
> go out of your way to enable 'export JBOD' mode.  You can't take
> a disk from a 9xxx controller and put it in a box with an 8506
> -- the controller won't recognize it!

Have you tried zapping the first and the last megabyte of the
disk?

> BTW I'm not talking about RAID or filesystem performance, I'm
> talking about direct disk access, JBOD mode, using rather large
> blocks.

Direct disk access gives me nominal disk throughput on 3ware.

> So really all I want is multiple controllers on a single card.
> To that end, the 1820a works great.  I figure we'd all be
> running software RAID anyways ;-)

As I said, attractive pricing.  Cannot get stellar hardware RAID
performance with crappy chipsets. :-)  There's a turnover point
where you are killed by the PCI bandwidth limit when using a lot of
disks.

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'


* Re: 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 18:44         ` KELEMEN Peter
@ 2004-10-15 18:55           ` Scott T. Smith
  2004-10-15 19:28             ` KELEMEN Peter
  2004-10-15 19:19           ` Ming Zhang
  1 sibling, 1 reply; 10+ messages in thread
From: Scott T. Smith @ 2004-10-15 18:55 UTC (permalink / raw)
  To: linux-raid

On Fri, 2004-10-15 at 11:44, KELEMEN Peter wrote:
> [ Please do not address answers directly to me; I read the list. ]

then filter out email based on duplicate message-ids! :-P

> > [...] and we get the same crappy performance from that one too.
> 
> Define `crappy': I get 770 MiB/s read with 3x 3ware 9500S-8MI.
> (RAID00 hw-sw)

When issuing random reads of 1 megabyte each, I get 2 gigabits/sec from
the Highpoint, and 1.5 gigabits/sec from the 3ware, when using a single
controller and 8 disks (WD 250GB).  No RAID, just JBOD mode, all disks
in use at once.
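A sketch of that kind of measurement: 1 MiB reads at random offsets
against one raw disk, no RAID layer. The device name and size are made
up, and awk supplies a portable random offset (seeded per iteration,
since srand()'s default time seed only changes once a second):

```shell
#!/bin/sh
# Issue $3 random 1 MiB reads within the first $2 MiB of device $1.
random_reads() {
    dev=$1; size_mb=$2; count=$3
    i=0
    while [ "$i" -lt "$count" ]; do
        off=$(awk -v max="$size_mb" -v s="$i$$" \
              'BEGIN { srand(s); print int(rand() * max) }')
        # errors silenced so a missing device doesn't abort the sketch
        dd if="$dev" of=/dev/null bs=1M count=1 skip="$off" 2>/dev/null || true
        i=$(( i + 1 ))
    done
}

# e.g. 100 random reads from the first ~230 GiB of one WD 250GB disk:
random_reads /dev/sda 238000 100
```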

> > Worse yet, the 9xxx series wants to hijack your disks unless you
> > go out of your way to enable 'export JBOD' mode.  You can't take
> > a disk from a 9xxx controller and put it in a box with an 8506
> > -- the controller won't recognize it!
> 
> Have you tried zapping the first and the last megabyte of the
> disk?

the 8506 won't even let me set the disk up as a JBOD.  In order to zap
it, I need yet another controller!  (The Highpoint controller is "nice"
in that it will recognize the 3ware superblock and not absorb it
either!)

> As I said, attractive pricing.  Cannot get stellar hardware RAID
> performance with crappy chipsets. :-)  There's a turnover point
> where you are killed by PCI bandwith limit when using a lot of
> disks.

That's the funny thing about the 3ware 12 disk controllers; they don't
even have the bandwidth to support all 12 disks!

It truly depends on what you want to use it for.  If you want RAID5, you
most likely want a hardware solution, so yeah, 3ware would be the way to
go.  If you want JBOD though, I'd go with Highpoint.

Anyone know how the Adaptec holds up?  I've heard they have better
random I/O support than the 3ware, and I think they're a hardware-only
solution too.

	Scott




* Re: 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 18:44         ` KELEMEN Peter
  2004-10-15 18:55           ` Scott T. Smith
@ 2004-10-15 19:19           ` Ming Zhang
  2004-10-16 21:37             ` KELEMEN Peter
  1 sibling, 1 reply; 10+ messages in thread
From: Ming Zhang @ 2004-10-15 19:19 UTC (permalink / raw)
  To: KELEMEN Peter; +Cc: linux-raid

Can you explain a bit that with what kind of configuration you can get
that 770MiB/s performance? Thanks.

ming

On Fri, 2004-10-15 at 14:44, KELEMEN Peter wrote:
> [ Please do not address answers directly to me; I read the list. ]
> 
> * Scott T. Smith (scott@gelatinous.com) [20041015 11:21]:
> 
> > You know what, I think I was wrong -- we have the 8506.  Sorry,
> > my bad.  And ours is SATA, not PATA (is that the difference
> > between the 7xxx and 8xxx?)
> 
> Yes.
> 
> > There are other reasons the Highpoint is better than the 3ware,
> > including the fact that it instantly recognizes when you yank a
> > disk, unlike the 3ware which sits there for a while, and then
> > hard resets the entire controller (thus halting all disk io to
> > that controller for a couple of seconds).  3ware wrote me back
> > and told me it's supposed to do that!
> 
> Cannot comment on this one, never was a problem for us.
> 
> > Is the 9500 a PCI-X card?
> 
> No, it's 64-bit 66 MHz PCI.
> http://www.3ware.com/products/pdf/9000DS_041904.pdf
> 
> > [...] and we get the same crappy performance from that one too.
> 
> Define `crappy': I get 770 MiB/s read with 3x 3ware 9500S-8MI.
> (RAID00 hw-sw)
> 
> > Worse yet, the 9xxx series wants to hijack your disks unless you
> > go out of your way to enable 'export JBOD' mode.  You can't take
> > a disk from a 9xxx controller and put it in a box with an 8506
> > -- the controller won't recognize it!
> 
> Have you tried zapping the first and the last megabyte of the
> disk?
> 
> > BTW I'm not talking about RAID or filesystem performance, I'm
> > talking about direct disk access, JBOD mode, using rather large
> > blocks.
> 
> Direct disk access gives me nominal disk throughput on 3ware.
> 
> > So really all I want is multiple controllers on a single card.
> > To that end, the 1820a works great.  I figure we'd all be
> > running software RAID anyways ;-)
> 
> As I said, attractive pricing.  Cannot get stellar hardware RAID
> performance with crappy chipsets. :-)  There's a turnover point
> where you are killed by the PCI bandwidth limit when using a lot of
> disks.
> 
> Peter



* Re: 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 18:55           ` Scott T. Smith
@ 2004-10-15 19:28             ` KELEMEN Peter
  0 siblings, 0 replies; 10+ messages in thread
From: KELEMEN Peter @ 2004-10-15 19:28 UTC (permalink / raw)
  To: linux-raid

* Scott T. Smith (scott@gelatinous.com) [20041015 11:55]:

> then filter out email based on duplicate message-ids! :-P

I would if the corporate Exchange would _let_ me have duplicates.
It does not.  Moreover, I'm setting Mail-Followup-To: which _you_
are not honoring.

> When issueing random reads of 1 megabyte each, I get 2
> gigabits/sec from the Highpoint, and 1.5 gigabits/sec from the
> 3ware, when using a single controller and 8 disks (WD 250GB).
> No RAID, just JBOD mode, all disks in use at once.

Cannot comment on this one, haven't done measurements myself.  We
don't use random I/O much.

> the 8506 won't even let me set the disk up as a JBOD.  In order
> to zap it, I need yet another controller!

Do it with the 9xxx.  Interesting though... did you let 3ware know
about it?  (We do not have 8506s yet.)

> That's the funny thing about the 3ware 12 disk controllers; they
> don't even have the bandwidth to support all 12 disks!

Theoretically you could get 12*50 MiB/s out of the disks.  The PCI
limit is just about 12% lower; add processing overhead and you are
right at the ceiling.
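Spelling out that arithmetic (a rough check; a 64-bit/66 MHz PCI bus
moves 8 bytes per cycle):

```shell
# Aggregate disk bandwidth vs. the raw 64-bit/66 MHz PCI ceiling.
disk_mb=$(( 12 * 50 ))      # 600 MiB/s from twelve ~50 MiB/s disks
pci_mb=$(( 66 * 8 ))        # 528 MiB/s: 66 MHz x 8 bytes per transfer
echo "disks: ${disk_mb} MiB/s  bus: ${pci_mb} MiB/s"
```

528 of 600 MiB/s is indeed about 12% under what the disks could
deliver, before any protocol or processing overhead.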

> It truly depends on what you want to use it for.  If you want
> RAID5, you most likely want a hardware solution, so yeah, 3ware
> would be the way to go.

Agreed.

> If you want JBOD though, I'd go with Highpoint.

Thanks for the data point.

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'


* Re: 3Ware 7506-8, anyone have any luck?  I don't...
  2004-10-15 19:19           ` Ming Zhang
@ 2004-10-16 21:37             ` KELEMEN Peter
  0 siblings, 0 replies; 10+ messages in thread
From: KELEMEN Peter @ 2004-10-16 21:37 UTC (permalink / raw)
  To: linux-raid

* Ming Zhang (mingz@ele.uri.edu) [20041015 15:19]:

> Can you explain a bit that with what kind of configuration you
> can get that 770MiB/s performance? Thanks.

A HP Itanium box with Raptor disks (3x8).

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'


end of thread, other threads:[~2004-10-16 21:37 UTC | newest]

Thread overview: 10+ messages
2004-10-15 13:23 Slow software RAID5 linux-raid2eran
2004-10-15 17:27 ` 3Ware 7506-8, anyone have any luck? I don't buggz
2004-10-15 17:40   ` Scott T. Smith
2004-10-15 18:12     ` KELEMEN Peter
2004-10-15 18:21       ` Scott T. Smith
2004-10-15 18:44         ` KELEMEN Peter
2004-10-15 18:55           ` Scott T. Smith
2004-10-15 19:28             ` KELEMEN Peter
2004-10-15 19:19           ` Ming Zhang
2004-10-16 21:37             ` KELEMEN Peter
