From: Samuel Flory <sflory@rackable.com>
To: Roy Sigurd Karlsbakk <roy@karlsbakk.net>
Cc: "Jakob Østergaard" <jakob@unthought.net>,
"Martin Dalecki" <dalecki@evision-ventures.com>,
"Pavel Machek" <pavel@suse.cz>,
linux-kernel@vger.kernel.org
Subject: Re: IDE hotplug support?
Date: Thu, 02 May 2002 13:09:58 -0700
Message-ID: <3CD19D16.7070605@rackable.com>
In-Reply-To: <Pine.LNX.4.44.0204301746020.2301-100000@mustard.heime.net> <20020426152943.A413@toy.ucw.cz> <3CD18318.7060407@evision-ventures.com> <20020502215833.V31556@unthought.net>
Why not just grab a pair of 8-port 3ware cards? Run RAID 5 on each
card, and throw RAID 0 or linear on top via the md driver?
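
Something like this would do it -- a minimal sketch, assuming the two
3ware units export their RAID 5 arrays as /dev/sda and /dev/sdb through
the driver's SCSI emulation (the device names, and the use of mdadm
rather than raidtab/mkraid, are illustrative):

  # stripe the two hardware RAID 5 arrays together
  mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      /dev/sda /dev/sdb
  mke2fs /dev/md0

Linear works the same way with --level=linear; striping just lets reads
and writes hit both cards at once.
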
Jakob Østergaard wrote:
>On Thu, May 02, 2002 at 08:19:04PM +0200, Martin Dalecki wrote:
>...
>
>>15 drives == 16 interfaces == 8 channels == 4 controllers
>>with primary and secondary channel.
>
>Usually using both master and slave on an IDE channel spells disaster
>performance-wise, and I would be surprised if the hotplug stuff worked
>with this as well...
>
>>He will have groups of about 4 drives on each channel which
>>serialize each other due to excessive IRQ line sharing and
>>master/slave issues.
>
>Use 8 controllers for the 15 (16) drives.
>
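
With one drive per channel, and the channels probing in order as
ide0-ide7, the masters come up as hda, hdc, hde, hdg, hdi, hdk, hdm
and hdo (hdb, hdd and so on stay empty). A software RAID 5 across the
eight masters would then look roughly like this -- the device names
are only an assumption about probe order:

  mdadm --create /dev/md0 --level=5 --raid-devices=8 \
      /dev/hda /dev/hdc /dev/hde /dev/hdg \
      /dev/hdi /dev/hdk /dev/hdm /dev/hdo
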
>>8 x 130 MB/s >>>> PCI bus throughput... I would rather recommend
>>a classical RAID controller card for this kind of
>>setup.
>
>Because RAID controllers do not use the PCI bus ??? ;)
>
>The bus overhead of RAID-5 is not too bad unless you specifically construct
>a workload to make it so (write-only, and scattered so that the kernel cannot
>cache stripes to avoid reading blocks back in for the parity calculation).
>
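
For reference, that read-in is the RAID-5 read-modify-write cycle: to
rewrite one block of a stripe you read the old data and the old parity,
XOR both against the new data, and write the new data and new parity
back -- two reads plus two writes per block. A toy illustration in
shell, with made-up byte values:

  old_data=0x5A; old_parity=0x33; new_data=0xC4
  new_parity=$(( old_parity ^ old_data ^ new_data ))
  printf 'new parity: 0x%02X\n' "$new_parity"   # prints 0xAD
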
>Sure, the PCI bus will be a bottleneck - 32-bit/33 MHz PCI tops out at a
>theoretical 133 MB/sec, and PCI overhead alone will keep real-world
>performance somewhere below that - but don't let this blind you: 100 MB/sec
>of sustained transfers can still be "good enough" for many people.
>
>By the way, has anyone tried such large multi-controller setups and compared
>the bandwidth of configurations with multiple PCI busses on the board against
>a single PCI bus?
>
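
A quick-and-dirty way to measure it, assuming the striped array is
/dev/md0 and reading enough (1 GB here) that the page cache doesn't
flatter the numbers:

  # sequential read off the array; 1024 MB / elapsed seconds gives
  # the sustained rate (or watch "vmstat 1" in another terminal)
  time dd if=/dev/md0 of=/dev/null bs=1024k count=1024
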
Thread overview: 20+ messages
2002-04-30 15:48 IDE hotplug support? Roy Sigurd Karlsbakk
2002-04-26 15:29 ` Pavel Machek
2002-05-02 18:19 ` Martin Dalecki
2002-05-02 19:58 ` Jakob Østergaard
2002-05-02 20:09 ` Samuel Flory [this message]
2002-05-03 0:31 ` Roy Sigurd Karlsbakk
2002-05-03 3:14 ` jw schultz
2002-05-02 20:26 ` Alan Cox
2002-05-02 21:13 ` Jakob Østergaard
2002-05-02 20:18 ` Martin Dalecki
2002-05-02 22:22 ` Jeff Nguyen
2002-05-02 23:09 ` Jakob Østergaard
2002-05-03 0:16 ` Alan Cox
2002-05-03 0:35 ` Roy Sigurd Karlsbakk
2002-05-03 17:10 ` Roy Sigurd Karlsbakk
2002-05-03 0:25 ` Roy Sigurd Karlsbakk
2002-05-03 0:51 ` Alan Cox
2002-05-03 0:37 ` Roy Sigurd Karlsbakk
2002-04-30 16:22 ` Zwane Mwaikambo
2002-04-30 18:46 ` Ragnar Hojland Espinosa