* SATA on mptsas performance
From: Mirko Benz @ 2006-02-13 14:01 UTC (permalink / raw)
To: linux-scsi
Hello,
We are testing the following setup:
- LSI SAS3442X controller
- Promise J300S SAS JBOD connected via the external port of the SAS
controller
- 10 SATA disks (Seagate) in the JBOD
- Linux kernel 2.6.16-rc2 on an Intel dual-Xeon server, 2.8 GHz, 64-bit mode
LSI SAS driver provided by the kernel finds the JBOD and the disks.
Single-disk performance matches that of a disk attached to a SATA
controller from the chipset.
Testing with multiple parallel drive accesses gives very poor results
and high system utilisation.
Tests performed with parallel invocations of dd with bs=32k. Values in MB/s.
# of disks   READ    AVG   WRITE    AVG
 1             60   60.0      58   58.0
 5            310   62.0     288   57.6
 6            344   57.3     336   56.0
 7            259   37.0     375   53.6
 8            226   28.3     391   48.9
 9            245   27.2     402   44.7
10            265   26.5     405   40.5
Up to 5 drives it scales as it should; beyond that, READ performance
drops significantly.
The card is plugged in a 100 MHz PCI-X slot. No other activity on the
system.
Any hint?
Thanks,
Mirko
^ permalink raw reply [flat|nested] 8+ messages in thread
* RE: SATA on mptsas performance
From: Moore, Eric @ 2006-02-14 16:02 UTC (permalink / raw)
To: Mirko Benz, linux-scsi
On Monday, February 13, 2006 7:02 AM, Mirko Benz wrote:
>
> Hello,
>
> We are testing the following setup:
> - LSI SAS3442X controller
> - Promise J300S SAS JBOD connected via the external port of the SAS
> controller
> - 10 SATA disks (Seagate) in the JBOD
> - Linux kernel 2.6.16RC2 on a INTEL Dual Xeon Server, 2.8
> Ghz, 64 bit mode
>
> LSI SAS driver provided by the kernel finds the JBOD and the disks.
How many links are connected between your controller and the JBOD?
With that many drives, I suggest you connect four links to create
a wide port. There is an expander in your JBOD, right? Meaning it's
not a SATA port multiplier?
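One way to answer the link question from software is to read the SAS
transport class attributes under sysfs. This is only a sketch: it assumes
the kernel exposes /sys/class/sas_phy with a negotiated_linkrate attribute
per phy, and the helper name is mine.

```shell
# Sketch: print each SAS phy's negotiated link rate from sysfs.
# Assumes the SAS transport class layout (/sys/class/sas_phy/phy-*/
# negotiated_linkrate); four phys at the same rate indicate a wide port.
list_phy_rates() {
    dir=${1:-/sys/class/sas_phy}        # fixture dir overridable for testing
    for phy in "$dir"/*; do
        [ -r "$phy/negotiated_linkrate" ] || continue
        printf '%s: %s\n' "${phy##*/}" "$(cat "$phy/negotiated_linkrate")"
    done
}
list_phy_rates
```

On a box with the x4 cable fully connected, this should list four phys.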
> Single disk performance is like attaching via a SATA
> controller from the
> chipset.
Are your disks SATA-II, and do they support NCQ? If so, I suggest
we enable that. That can be done in the NVDATA.
> Testing with multiple parallel drive accesses gives very poor results
> and high system utilisation.
> Tests performed with parallel invocations of dd with bs=32k.
> Values in MB/s.
>
Which benchmarking tool are you using?
* Re: SATA on mptsas performance
From: Mirko Benz @ 2006-02-15 7:52 UTC (permalink / raw)
To: Moore, Eric; +Cc: linux-scsi
Hello,
The SAS3442X is connected to the JBOD via a x4 SAS cable.
The JBOD has an LSI SAS Expander chip - no port multiplier.
Disks are Seagate ST3300831AS (300 GB, SATA-I, NCQ)
Do the SAS controller and the SAS expander communicate at 3 Gb/s when
accessing SATA disks?
If not, this configuration should still give 4 * 150 MB/s r/w throughput.
I am testing with parallel dd invocations on the raw devices, e.g.:
dd if=/dev/sdb of=/dev/null bs=32k count=10000 &
...
dd if=/dev/sdl of=/dev/null bs=32k count=10000 &
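The run above can be wrapped in a small helper so the block size, count,
and device list become parameters. A sketch only: the helper name is mine,
and the device names are the ones from this mail (run as root on the test
box).

```shell
# Sketch: launch one dd reader per device in parallel and wait for all.
# Aggregate MB/s = (devices * bs * count) / elapsed wall time.
parallel_read() {   # usage: parallel_read <bs> <count> <dev>...
    bs=$1; count=$2; shift 2
    for dev in "$@"; do
        dd if="$dev" of=/dev/null bs="$bs" count="$count" 2>/dev/null &
    done
    wait    # returns once every background dd has finished
}
# Example (devices from the mail):
# parallel_read 32k 10000 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```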
I have tested with IOMETER but the results are worse.
I will test SATA-II disks later. But I assume there is a scheduling
problem (mapping 10 disks to 4 channels). The CPU load is also very high.
The disks support NCQ, but it should have no effect here, since each disk
sees sequential access from a single application. How do I switch NCQ
on/off?
Regards,
Mirko
* Re: SATA on mptsas performance
From: Douglas Gilbert @ 2006-03-01 3:30 UTC (permalink / raw)
To: Mirko Benz; +Cc: linux-scsi
Mirko Benz wrote:
> Hello,
>
> The SAS3442X is connected to the JBOD via a x4 SAS cable.
> The JBOD has an LSI SAS Expander chip - no port multiplier.
> Disks are Seagate ST3300831AS (300 GB, SATA-I, NCQ)
>
> Does the SAS controller and the SAS expander communicate at 3 Gb when
> accessing SATA disks?
If the SATA disk does 3 Gb/sec then yes, but SATA-1 disks
run at 1.5 Gb/sec (and don't have NCQ). When a SAS HBA
using STP connects to a SATA-1 disk via an expander, it
rate matches: it substitutes a dummy value (ALIGN, I think)
between each data value on that path (between the HBA and
the expander). Hence it essentially wastes half the available
bandwidth for the duration of the connection. There is talk
of multiplexing in SAS-2, when the physical link rate is
6 Gb/sec and the connection rate is 3 or 1.5 Gb/sec.
> If not it should give 4 * 150 MB/s r/w throughput for this configuration.
4 * 150 MB/sec r/w throughput (half duplex?) would be correct.
BTW, since SAS uses 8b/10b encoding (as do IB, FC, SATA,
PCIe, 10-gigabit ethernet on copper, etc.), which encodes
8 data bits into 10 bits on the wire, one can flip between
"<n> MB/sec" and "(<n> * 10) Mb/sec". The capitalization
(or not) of the "B" is obviously significant.
Perhaps somebody from a SAS vendor company could tell us how
much data one can really send down a 3 Gb/sec SAS/STP
connection under optimum conditions.
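The rate-matching arithmetic above can be checked numerically. A
back-of-envelope sketch using the nominal figures from this thread;
real STP throughput will be lower still due to protocol overhead.

```shell
# Nominal 8b/10b throughput figures, in Mb/s (megabits per second):
line=3000                           # 3 Gb/s SAS signaling rate
data=$(( line * 8 / 10 ))           # 8b/10b: 8 payload bits per 10 wire bits
echo "3 Gb/s link payload ceiling: $(( data / 8 )) MB/s"     # 300 MB/s
echo "rate-matched to SATA-1:      $(( data / 8 / 2 )) MB/s" # 150 MB/s
```

The second line reflects Doug's point that rate matching to a 1.5 Gb/s
target wastes half the bandwidth for the duration of the connection.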
Doug Gilbert
* Re: SATA on mptsas performance
From: Jeff Garzik @ 2006-03-01 6:09 UTC (permalink / raw)
To: dougg; +Cc: Mirko Benz, linux-scsi
Douglas Gilbert wrote:
> If the SATA disk does 3 Gb/sec then yes but SATA-1 disks
> run at 1.5 Gb/sec (and don't have NCQ). When a SAS HBA
To further confuse things, "SATA-1", "SATA-2", etc. don't mean much at
all. I recommend never using these terms.
There are many disks that can do NCQ but not 3 Gb/sec, for example.
It's best just to mention the specific features a drive supports,
because there is no __technical__ definition of "SATA 2" that one can
test in software. "SATA 2" is just a set of features defined by marketing.
Jeff
* Re: SATA on mptsas performance
From: Mirko Benz @ 2006-03-01 8:16 UTC (permalink / raw)
To: Jeff Garzik; +Cc: dougg, linux-scsi
Hello,
As promised, we have tested 3 Gb/s (aka "SATA II") disks.
Configuration: LSI SAS3442X controller, Promise SAS JBOD, 12 Seagate
80 GB SATA disks
Tested with parallel dd invocations with bs=32k.
The results look better:
# disks   Read    AVG   Write    AVG
 1          69   69.0      68   68.0
 6         420   70.0     324   54.0
 7         459   65.6     367   52.4
 8         426   53.3     394   49.3
 9         399   44.3     389   43.2
10         410   41.0     431   43.1
11         438   39.8     428   38.9
12         468   39.0     481   40.1
Available raw bandwidth is 4 * 3 Gb/s = 1.5 GB/s (4 lanes / external
wide SAS port).
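As a sanity check on that figure (a sketch; nominal rates, with the
8b/10b overhead that the raw 1.5 GB/s number does not subtract):

```shell
# x4 wide port arithmetic, per-lane rates in Mb/s:
lanes=4
per_lane=3000                       # 3 Gb/s signaling per lane
raw=$(( lanes * per_lane ))         # 12000 Mb/s total signaling
payload=$(( raw * 8 / 10 / 8 ))     # after 8b/10b, in MB/s
echo "raw: $(( raw / 8 )) MB/s, payload ceiling: $payload MB/s"
```

So the ~480 MB/s observed with 12 disks is well under the ~1200 MB/s
payload ceiling of the wide port.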
So there is still some room for improvement. Does anyone else have a
similar setup and can provide results?
Maybe with the Adaptec SAS controller?
Thanks,
Mirko
Jeff Garzik schrieb:
> Douglas Gilbert wrote:
>> If the SATA disk does 3 Gb/sec then yes but SATA-1 disks
>> run at 1.5 Gb/sec (and don't have NCQ). When a SAS HBA
>
> To further confuse things, "SATA-1", "SATA-2", etc. don't mean much at
> all. I recommend never using these terms.
>
> There are many disks that can do NCQ but not 3 Gb/sec, for example.
>
> Its best just to mention the existence of features, because there is
> no __technical__ definition of "SATA 2" that one can test in software.
> SATA 2 is just a set of features defined by marketing.
>
> Jeff
>
>
* RE: SATA on mptsas performance
From: Moore, Eric @ 2006-03-01 16:04 UTC (permalink / raw)
To: Mirko Benz, Jeff Garzik; +Cc: dougg, linux-scsi
On Wednesday, March 01, 2006 1:16 AM, Mirko Benz wrote:
>
> As promised we have tested 3 Gb (aka SATA II) disks.
> Configuration: LSI SAS3442X controller, Promise SAS JBOD, 12 disks
> Seagate SATA 80 GB
>
> Tested with parallel dd invocations with bs=32k.
> The results look better:
>
> # disks Read AVG Write AVG
> 1 69 69,0 68 68,0
> 6 420 70,0 324 54,0
> 7 459 65,6 367 52,4
> 8 426 53,3 394 49,3
> 9 399 44,3 389 43,2
> 10 410 41,0 431 43,1
> 11 438 39,8 428 38,9
> 12 468 39,0 481 40,1
>
> Available bandwidth is 4 * 3 Gb = 1.5 GB (4 lanes / external wide SAS
> port).
> So there is still some room for improvement. Has anyone else
> a similar
> setup and can provide results?
> Maybe with the Adaptec SAS controller?
>
Sorry, I've not had a chance to replicate this; lately I've been
overloaded.
I asked a co-worker about this. They believe that when SATA devices
open a connection, they complete the entire data transaction during
that single connection, meaning they don't disconnect and yield to
other devices the way SAS devices do. That could be the reason
performance is low when you have 12 SATA devices. Jeff Garzik, is this
true? When I have a chance, I will verify this with a bus analyzer and
do some testing. I believe I can obtain about 10 SATA disks.
Eric
* Re: SATA on mptsas performance
From: Asgeir Eiriksson @ 2006-03-01 17:55 UTC (permalink / raw)
To: dougg, mirko.benz; +Cc: linux-scsi
Doug
I realize that your 8b/10b arithmetic was probably meant to be
approximate, but I'd still like to point out that 10G Ethernet
technology uses the 8b number when quoting bandwidth, whereas e.g. IB
quotes the 10b number (let's refer to this as data rate vs. signaling
rate).
For example: so-called 10 Gbps IB has a data rate limit of 8 Gbps,
whereas 10GE has a data rate limit of 10 Gbps (counting TCP/IP headers
as part of the BW).
For CX4 copper cable (InfiniBand connectors and cable) the serdes rates
are actually different for the two technologies, with IB having 4 lanes
at 2.5 GHz whereas 10G Ethernet has 4 lanes at 3.125 GHz, leading again
to 8 Gbps IB vs. 10 Gbps Ethernet data-rate bandwidth.
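Numerically, the data-rate vs. signaling-rate distinction above works
out as follows (a sketch; per-lane serdes rates in Mb/s, 4 lanes, 8b/10b
encoding):

```shell
# IB vs. 10GE over CX4: 4 lanes each, different per-lane serdes rates.
ib_sig=$(( 4 * 2500 ))      # IB:   4 x 2.5 GHz = 10000 Mb/s signaling
ge_sig=$(( 4 * 3125 ))      # 10GE: 4 x 3.125 GHz = 12500 Mb/s signaling
echo "IB data rate:   $(( ib_sig * 8 / 10 )) Mb/s"   # 8000 Mb/s
echo "10GE data rate: $(( ge_sig * 8 / 10 )) Mb/s"   # 10000 Mb/s
```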
Regards,
Asgeir Eiriksson
Chelsio Communications Inc.
>From: Douglas Gilbert <dougg@torque.net>
>Reply-To: dougg@torque.net
>To: Mirko Benz <mirko.benz@web.de>
>CC: linux-scsi@vger.kernel.org
>Subject: Re: SATA on mptsas performance
>Date: Tue, 28 Feb 2006 22:30:16 -0500
>
...
>
>BTW since SAS uses "8b/10b" encoding (as do IB,
>FC, SATA, PCIe, 10 gigabit ethernet on copper, etc) which
>encodes 8 data bits into 10 bits on the wire then one
>can flip between "<n> MB/sec" and "(<n> * 10) Mb/sec". The
>capitalization (or not) of the "B" is obviously significant.
>Perhaps somebody from a SAS vendor company could tell us how
>much data one can really send down a 3 Gb/sec SAS/STP
>connection under optimum conditions.
>
>
>Doug Gilbert