From: Willy Tarreau <w@1wt.eu>
To: Justin Piszcz <jpiszcz@lucidpixels.com>
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
	xfs@oss.sgi.com
Subject: Re: Limits of the 965 chipset & 3 PCI-e cards/southbridge? ~774MiB/s peak for read, ~650MiB/s peak for write?
Date: Sun, 1 Jun 2008 14:20:33 +0200
Message-ID: <20080601122033.GE5609@1wt.eu>
In-Reply-To: <alpine.DEB.1.10.0806010721230.10729@p34.internal.lan>

On Sun, Jun 01, 2008 at 07:26:09AM -0400, Justin Piszcz wrote:
> 
> 
> On Sun, 1 Jun 2008, Justin Piszcz wrote:
> 
> >
> >On Sun, 1 Jun 2008, Justin Piszcz wrote:
> >
> >>I have 12 enterprise-class Seagate 1TB disks on a 965 desktop board and
> >>it appears I have hit the limit. If I could get the maximum speed out
> >>of all drives, ~70MiB/s avg * 12 = 840MiB/s, but it seems to top out
> >>around 774 MiB/s (currently running badblocks on all drives)..
> >
> >Small correction, they are 7200.11 Seagate Desktop Drives (ST31000340AS), 
> >not enterprise drives:
> >
> >http://www.seagate.com/ww/v/index.jsp?vgnextoid=0732f141e7f43110VgnVCM100000f5ee0a0aRCRD
> >http://www.newegg.com/Product/Product.aspx?Item=N82E16822148274
> >
> >
> 
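
A quick way to watch whether the array really tops out there is to sample
/proc/diskstats while badblocks runs and sum the per-disk read rates. A
minimal sketch in Python (the sda..sdl device names and the 5 second
interval are assumptions, adjust them to your setup; /proc/diskstats
counts 512-byte sectors):

import time

DISKS = ["sd%c" % c for c in "abcdefghijkl"]   # assumed device names
INTERVAL = 5.0                                 # seconds between samples

def sectors_read():
    """Return {disk: sectors read so far} for the disks we care about."""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # fields: major minor name reads reads_merged sectors_read ...
            if fields[2] in DISKS:
                stats[fields[2]] = int(fields[5])
    return stats

before = sectors_read()
time.sleep(INTERVAL)
after = sectors_read()

total = 0.0
for disk in sorted(before):
    mib_s = (after[disk] - before[disk]) * 512 / INTERVAL / 2**20
    total += mib_s
    print("%s: %6.1f MiB/s" % (disk, mib_s))
print("aggregate: %6.1f MiB/s" % total)
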
> http://www.intel.com/products/chipsets/g965/diagram.jpg
> 
> Basically it appears I am hammering the southbridge, since on this board
> the PCI-e (x1) slots also go through the southbridge.
> 
> 6 SATA  -> G965 ICH8
> 3 PCI-e -> G965 ICH8
> 
> From there it has to ship that data across the DMI (~2GB/s) link to the
> northbridge.
> 
> If one used a 12, 16 or 24 port RAID card (but still ran SW RAID) in the
> x16 slot on the northbridge itself, would this barrier still exist, given
> that the GMCH<->CPU link is 8.5GB/s?
> 
> Also on the X38 and X48 the speed increases slightly:
> http://www.intel.com/products/chipsets/X38/X38_Block_Diagram.jpg (10.6GB/s)
> http://www.intel.com/products/chipsets/x48/x48_block_diagram.jpg (12.8GB/s)
> 
> If one asks: why would one need such speed?

It looks like graphics and games are pushing the technology to its
limits, which is good for us. I bought X38 motherboards for 10 Gbps
experiments, and that chipset is perfectly capable of feeding two
Myri10GE NICs (20 Gbps total). That is 2.5 GB/s, not counting
overhead. So I/O bandwidth is a prime requirement today.

Other chipsets I have tested (945 and 965) were very poor in comparison
(about 4.7 and 6.5 Gbps respectively, if my memory serves me right).
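
For reference, here is a rough back-of-envelope check of the figures
quoted in this thread (the per-disk rate, the observed peak and the link
widths are the numbers cited above, not new measurements), written as a
small Python snippet:

# Back-of-envelope check of the bandwidth figures quoted in this thread.
MIB = 2**20

disks = 12
per_disk = 70 * MIB             # ~70 MiB/s average per drive (quoted above)
theoretical = disks * per_disk  # 840 MiB/s if nothing else limits us
observed = 774 * MIB            # peak actually seen through the ICH8

dmi_link = 2 * 10**9            # DMI between ICH8 and GMCH, ~2 GB/s
two_10gbe = 2 * 10 * 10**9 / 8  # two 10 Gbps NICs = 2.5 GB/s of payload

print("theoretical: %4.0f MiB/s" % (theoretical / MIB))
print("observed:    %4.0f MiB/s (%.0f%% of theoretical)"
      % (observed / MIB, 100.0 * observed / theoretical))
print("DMI budget:  %4.0f MiB/s" % (dmi_link / MIB))
print("2x10GbE:     %4.0f MiB/s" % (two_10gbe / MIB))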

Willy


Thread overview: 6+ messages
2008-06-01  9:45 Limits of the 965 chipset & 3 PCI-e cards/southbridge? ~774MiB/s peak for read, ~650MiB/s peak for write? Justin Piszcz
2008-06-01 11:01 ` Justin Piszcz
2008-06-01 11:26   ` Justin Piszcz
2008-06-01 12:20     ` Willy Tarreau [this message]
2008-06-03 18:44 ` Bryan Mesich
  -- strict thread matches above, loose matches on Subject: below --
2008-06-01 13:52 David Lethe
