From: Keld Jørn Simonsen <keld@dkuug.dk>
To: Jon Nelson <jnelson-linux-raid@jamponi.net>
Cc: Matt Garman <matthew.garman@gmail.com>,
David Lethe <david@santools.com>,
linux-raid@vger.kernel.org
Subject: Re: new bottleneck section in wiki
Date: Wed, 2 Jul 2008 21:35:27 +0200
Message-ID: <20080702193527.GA13186@rap.rap.dk>
In-Reply-To: <cccedfc60807021210y2b47917ap227e46fd1666755e@mail.gmail.com>
On Wed, Jul 02, 2008 at 02:10:21PM -0500, Jon Nelson wrote:
> On Wed, Jul 2, 2008 at 2:03 PM, Matt Garman <matthew.garman@gmail.com> wrote:
> > On Wed, Jul 02, 2008 at 12:04:11PM -0500, David Lethe wrote:
> >> The PCI (and PCI-X) bus is shared bandwidth, and operates at the
> >> lowest common denominator. Put a 33MHz card in the PCI bus, and
> >> not only does everything operate at 33MHz, but all of the cards
> >> compete. Grossly simplified, if you have a 133MHz card and a
> >> 33MHz card on the same PCI bus, then that card will operate at
> >> 16MHz. Your motherboard's embedded Ethernet chip and disk
> >> controllers are "on" the PCI bus, so even if you have a single PCI
> >> controller card and a multiple-bus motherboard, it does make
> >> a difference which slot you put the controller in.
> >
> > Is that true for all PCI-X implementations? What's the point, then,
> > of having PCI-X (64 bit/66 MHz or greater) if you have even one PCI
> > card (32 bit/33 MHz)?
>
> This motherboard (EPoX MF570SLI) uses PCI-E.
PCI-E is quite different architecturally from PCI-X.
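Roughly speaking, classic PCI is a shared parallel bus whose clock drops to
the slowest card, whereas PCI-E gives each device its own point-to-point
lanes. A back-of-the-envelope sketch of the theoretical peaks (standard bus
widths and clocks, ignoring protocol overhead, purely illustrative):

  # Peak bandwidth in MB/s:
  echo $(( 32 / 8 * 33 ))   # 132  - PCI 32-bit/33MHz, shared by all cards on that bus
  echo $(( 64 / 8 * 133 ))  # 1064 - PCI-X 64-bit/133MHz, only if no slower card drags the clock down
  echo $(( 250 * 4 ))       # 1000 - PCIe 1.0 x4 link, per direction, dedicated to that one slot

So a slow legacy PCI card can only hurt whatever shares its own PCI segment;
it cannot throttle devices sitting on separate PCI-E links.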
> It has a plain old PCI video card in it:
> Trident Microsystems TGUI 9660/938x/968x
> and yet I appear to be able to sustain plenty of disk bandwidth to 4 drives:
> (dd if=/dev/sd[b,c,d,e] of=/dev/null bs=64k)
> vmstat 1 reports:
> 290000 to 310000 "blocks in", hovering around 300000.
>
> 4x70 MB/s would be more like 280, 4x75 is 300. Clearly the system is not
> bandwidth-challenged.
> (This is with 4500 context switches/second, BTW.)
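To spell out the arithmetic: vmstat's "bi" column counts 1024-byte blocks per
second, so ~300000 blocks in is roughly 300 MB/s, which indeed matches four
drives streaming at about 75 MB/s each. A sketch of the test as quoted above
(assuming the four members really are /dev/sdb through /dev/sde):

  # Read-only, so it is safe, but adjust the device names to your system:
  for d in sdb sdc sdd sde; do
      dd if=/dev/$d of=/dev/null bs=64k &
  done
  vmstat 1            # "bi" column: ~300000 x 1KiB blocks/s ~= 300 MB/s
  echo $(( 4 * 75 ))  # 300 - four drives at ~75 MB/s each, so the numbers line up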
Possibly you are using an on-board disk controller; in that case it most
likely does not use the PCI-E bus for disk I/O.
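One way to check is to look at the PCI topology: lspci can show which bridge
the disk controller actually sits behind. A rough sketch (the 00:0e.0 address
is only an example, use whatever the grep turns up on your box):

  lspci -tv                              # tree view of buses, bridges and devices
  lspci | grep -i -E 'sata|ide|raid'     # locate the disk controller's bus address
  lspci -vv -s 00:0e.0                   # detailed info for that device (example address)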
best regards
keld
Thread overview: 24+ messages
2008-07-02 15:56 new bottleneck section in wiki Keld Jørn Simonsen
2008-07-02 16:43 ` Justin Piszcz
2008-07-02 17:21 ` Keld Jørn Simonsen
2008-07-02 17:04 ` David Lethe
2008-07-02 17:51 ` Keld Jørn Simonsen
2008-07-02 18:08 ` David Lethe
2008-07-02 18:26 ` Keld Jørn Simonsen
2008-07-02 21:55 ` Roger Heflin
2008-07-02 19:45 ` Matt Garman
2008-07-02 20:05 ` Keld Jørn Simonsen
2008-07-02 20:24 ` Richard Scobie
2008-07-02 19:03 ` Matt Garman
2008-07-02 19:10 ` Jon Nelson
2008-07-02 19:35 ` Keld Jørn Simonsen [this message]
2008-07-02 19:38 ` Jon Nelson
2008-07-02 22:07 ` David Lethe
2008-07-03 12:28 ` Jon Nelson
2008-07-03 14:00 ` Justin Piszcz
2008-07-02 19:17 ` Robin Hill
2008-07-02 19:39 ` Keld Jørn Simonsen
2008-07-03 5:10 ` Doug Ledford
2008-07-02 21:45 ` Roger Heflin
2008-07-02 17:33 ` Iustin Pop
2008-07-02 18:14 ` Keld Jørn Simonsen