* Poor RAID5 performance on new SMP system
@ 2004-10-18 2:11 Marc
2004-10-18 3:37 ` Guy
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Marc @ 2004-10-18 2:11 UTC (permalink / raw)
To: linux-raid
Hi,
I recently upgraded my file server to a dual AMD 2800+ on a Tyan Tiger MPX
motherboard. The previous server was using a PIII 700 on an Intel 440BX
motherboard. I basically just took the IDE drives and their controllers
across to the new machine. The strange thing is that the RAID-5 performance
is worse than before! Have a look at the stats below:
I've fiddled with hdparm to make sure the drives are set up correctly
(DMA, 32-bit I/O, unmaskirq) but without much improvement.
As a comparison I ran the same benchmark against the -single- (root)
drive /dev/hda, and it performed better than the RAID array!
I'm using kernel version 2.4.26 SMP. Do you think upgrading to 2.6.8 would
improve matters?
The only guess I can make is that there is an issue with software raid
performance on an SMP system.
Any help/suggestions appreciated!
Thanks...
----------------------------------------------------------
Bonnie++ benchmarks:
Before (PIII 700/440BX), 128k chunk:

             -------Sequential Output--------  ---Sequential Input--  --Random--
             -Per Char- --Block--- -Rewrite--  -Per Char- --Block---  --Seeks--
Machine   MB K/sec %CPU K/sec %CPU K/sec %CPU  K/sec %CPU K/sec %CPU  /sec %CPU
        1000  7594 85.1 36234 43.0 22728 31.1   8812 96.1 58155 52.5 286.8  5.6

After (Dual AMD MP 2800+/Tiger MPX (AMD-760)):

             -------Sequential Output--------  ---Sequential Input--  --Random--
             -Per Char- --Block--- -Rewrite--  -Per Char- --Block---  --Seeks--
Machine   MB K/sec %CPU K/sec %CPU K/sec %CPU  K/sec %CPU K/sec %CPU  /sec %CPU
        1000 16707 97.7 20745 16.4  9633  9.8  17032 67.2 24741 11.4 194.8  2.1
/dev/md0:
Version : 00.90.00
Creation Time : Sat Apr 17 12:19:25 2004
Raid Level : raid5
Array Size : 234444288 (223.58 GiB 240.07 GB)
Device Size : 78148096 (74.53 GiB 80.02 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Oct 18 08:36:52 2004
State : dirty, no-errors
Active Devices : 4
Working Devices : 4
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
Number   Major   Minor   RaidDevice   State
   0      33       1         0        active sync   /dev/hde1
   1      34       1         1        active sync   /dev/hdg1
   2      56       1         2        active sync   /dev/hdi1
   3      57       1         3        active sync   /dev/hdk1
UUID : 775f1dcf:7cbc17ab:86e1e792:669b732f
Events : 0.82
lspci:
0000:00:00.0 Host bridge: Advanced Micro Devices [AMD] AMD-760 MP [IGD4-2P]
System Controller (rev 20)
0000:00:01.0 PCI bridge: Advanced Micro Devices [AMD] AMD-760 MP [IGD4-2P]
AGP Bridge
0000:00:07.0 ISA bridge: Advanced Micro Devices [AMD] AMD-768 [Opus] ISA
(rev 05)
0000:00:07.1 IDE interface: Advanced Micro Devices [AMD] AMD-768 [Opus] IDE
(rev 04)
0000:00:07.3 Bridge: Advanced Micro Devices [AMD] AMD-768 [Opus] ACPI (rev
03)
0000:00:10.0 PCI bridge: Advanced Micro Devices [AMD] AMD-768 [Opus] PCI
(rev 05)
0000:02:00.0 USB Controller: Advanced Micro Devices [AMD] AMD-768 [Opus] USB
(rev 07)
0000:02:04.0 VGA compatible controller: NVidia / SGS Thomson (Joint Venture)
Riva128 (rev 10)
0000:02:05.0 RAID bus controller: Silicon Image, Inc. (formerly CMD
Technology Inc) PCI0680 Ultra ATA-133 Host Controller (rev 02)
0000:02:06.0 RAID bus controller: Silicon Image, Inc. (formerly CMD
Technology Inc) PCI0649 (rev 02)
0000:02:07.0 FireWire (IEEE 1394): Texas Instruments TSB12LV23 IEEE-1394
Controller
0000:02:08.0 Ethernet controller: 3Com Corporation 3c905C-TX/TX-M [Tornado]
(rev 78)
--
* RE: Poor RAID5 performance on new SMP system
2004-10-18 2:11 Poor RAID5 performance on new SMP system Marc
@ 2004-10-18 3:37 ` Guy
2004-10-18 4:04 ` Marc
2004-10-18 3:44 ` Richard Scobie
2004-10-18 5:37 ` Gerd Knops
2 siblings, 1 reply; 13+ messages in thread
From: Guy @ 2004-10-18 3:37 UTC (permalink / raw)
To: 'Marc', linux-raid
I have a P3-500 SMP system. It performs better than your SMP system. So I
don't think SMP is the issue. This is a SCSI system with 3 SCSI buses and
14 disks.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
watkins-home. 1G 3403 98 35480 86 22074 47 3589 99 68735 63 512.6 11
Guy
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: Poor RAID5 performance on new SMP system
2004-10-18 2:11 Poor RAID5 performance on new SMP system Marc
2004-10-18 3:37 ` Guy
@ 2004-10-18 3:44 ` Richard Scobie
2004-10-18 4:56 ` Marc
2004-10-18 17:26 ` Marc Marais
2004-10-18 5:37 ` Gerd Knops
2 siblings, 2 replies; 13+ messages in thread
From: Richard Scobie @ 2004-10-18 3:44 UTC (permalink / raw)
To: linux-raid
Marc wrote:
> Hi,
> I recently upgraded my file server to a dual AMD 2800+ on a Tyan Tiger MPX
> motherboard. The previous server was using a PIII 700 on an Intel 440BX
> motherboard. I basically just took the IDE drives and their controllers
> across to the new machine. The strange thing is that the RAID-5 performance
> is worse than before! Have a look at the stats below:
Hi Marc,
Unfortunately you have a lemon :(
I spent some time trying to get acceptable performance out of an Adaptec
SCSI RAID 0 on the 32-bit, 33 MHz bus of one of these boards and
eventually found this:
http://forums.2cpu.com/showthread.php?s=c8040a4e9c9b6390dd389f1b3cca32de&threadid=31211&perpage=15
The executive summary is:
"After testing AMD determined that the problem was identified as a
bandwidth issue(this was startling information). It appears that the
motherboard has a bandwidth limitation of 25MB/s on PCI devices that are
connected through the AMD 768 Southbridge."
This was in line with my findings.
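That reported ceiling also lines up with the bonnie++ numbers in the original post. A rough sanity check (the throughput figures are the ones quoted in this thread; the assumption that all four array members share the southbridge-attached PCI segment is mine, not confirmed anywhere above):

```python
# Rough sanity check: if all four array members hang off controllers on
# the ~25 MB/s AMD-768-attached PCI segment (an assumption, not confirmed
# in the thread), a RAID-5 sequential read is capped by the shared bus
# rather than by the disks themselves.

BUS_CAP_MB_S = 25.0           # reported AMD-768 southbridge PCI ceiling

# bonnie++ "After" block-read result from the original post: 24741 K/sec
observed_mb_s = 24741 / 1024  # ~24.2 MB/s

# The observed throughput sits essentially at the reported bus ceiling,
# while the old 440BX box managed ~56.8 MB/s (58155 K/sec) on the same
# disks and controllers.
assert abs(observed_mb_s - BUS_CAP_MB_S) < 2.0
print(f"observed {observed_mb_s:.1f} MB/s vs reported cap {BUS_CAP_MB_S} MB/s")
```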
I have not tried it, but it is possible that a 64-bit, 66 MHz IDE card will
be OK.
Regards,
Richard
* RE: Poor RAID5 performance on new SMP system
2004-10-18 3:37 ` Guy
@ 2004-10-18 4:04 ` Marc
2004-10-18 5:12 ` Guy
0 siblings, 1 reply; 13+ messages in thread
From: Marc @ 2004-10-18 4:04 UTC (permalink / raw)
To: Guy, linux-raid
So much for my 'upgrade'! :(
I suspect some kind of hardware problem here... :( As a quick check I did
the following tests:
hdparm -t /dev/hde - 38MB/sec
hdparm -t /dev/hdg - 9MB/sec (and it fluctuates wildly)
hdparm -t /dev/hdi - 50MB/sec
hdparm -t /dev/hdk - 45MB/sec
/dev/hdg doesn't look too healthy - is there any way to check if there are
IDE errors - say related to cabling etc.? I might just swap this cable out
and try again.
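One way to check for cable-related errors (a sketch, not gospel: it assumes the 2.4 IDE driver's usual log messages, and smartctl only helps if smartmontools happens to be installed):

```shell
# Scan the kernel log for IDE DMA/CRC errors on hdg; a bad or 40-wire
# cable typically shows up as "CRC" or "icrc" errors at UDMA speeds.
dmesg 2>/dev/null | grep -i -E 'hdg.*(crc|dma|error)' \
    || echo "no hdg CRC/DMA errors in the kernel log"

# If smartmontools is installed, a climbing UDMA CRC error count in the
# drive's SMART attributes is the classic bad-cable symptom.
if command -v smartctl >/dev/null 2>&1; then
    smartctl -a /dev/hdg | grep -i crc
fi
```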
--
* Re: Poor RAID5 performance on new SMP system
2004-10-18 3:44 ` Richard Scobie
@ 2004-10-18 4:56 ` Marc
2004-10-18 17:26 ` Marc Marais
1 sibling, 0 replies; 13+ messages in thread
From: Marc @ 2004-10-18 4:56 UTC (permalink / raw)
To: linux-raid
That thread is pretty old and I couldn't really find many references (on
Google) about it. Perhaps it's been fixed in later revisions of the AMD-768?
--
* RE: Poor RAID5 performance on new SMP system
2004-10-18 4:04 ` Marc
@ 2004-10-18 5:12 ` Guy
0 siblings, 0 replies; 13+ messages in thread
From: Guy @ 2004-10-18 5:12 UTC (permalink / raw)
To: 'Marc', linux-raid
Since you have one disk that performs badly, I would swap disks: exchange
hdg and hdk if you can. You can put them back after the test. If that
would prevent you from booting, boot from a Knoppix CD, then test the
disks.
You may have a bad disk, cable or card. Swap until you determine which it
is.
Label the disks!!! It is easy to get confused about which is which!
Guy
* Re: Poor RAID5 performance on new SMP system
2004-10-18 2:11 Poor RAID5 performance on new SMP system Marc
2004-10-18 3:37 ` Guy
2004-10-18 3:44 ` Richard Scobie
@ 2004-10-18 5:37 ` Gerd Knops
2004-10-18 6:12 ` Guy
2 siblings, 1 reply; 13+ messages in thread
From: Gerd Knops @ 2004-10-18 5:37 UTC (permalink / raw)
To: Marc; +Cc: linux-raid
On Oct 17, 2004, at 21:11, Marc wrote:
[..]
> State : dirty, no-errors
> Active Devices : 4
> Working Devices : 4
> Failed Devices : 1
> Spare Devices : 0
>
Unless I am missing something, a disk is missing and the RAID runs in
degraded (=slower) mode.
Gerd
* RE: Poor RAID5 performance on new SMP system
2004-10-18 5:37 ` Gerd Knops
@ 2004-10-18 6:12 ` Guy
2004-10-18 7:33 ` Marc
0 siblings, 1 reply; 13+ messages in thread
From: Guy @ 2004-10-18 6:12 UTC (permalink / raw)
To: 'Gerd Knops', 'Marc'; +Cc: linux-raid
You missed something!
"State : dirty, no-errors"
Marc,
If you want, send the output of these 2 commands:
cat /proc/mdstat
mdadm -D /dev/md?
Don't forget, with versions of md (or mdadm) older than about 6 months, the
counts get really off!
My 14 disk array is fine..... Note the: "no-errors"!
But:
/dev/md2:
Version : 00.90.00
Creation Time : Fri Dec 12 17:29:50 2003
Raid Level : raid5
Array Size : 230980672 (220.28 GiB 236.57 GB)
Device Size : 17767744 (16.94 GiB 18.24 GB)
Raid Devices : 14 <<LOOK HERE>>
Total Devices : 12 <<LOOK HERE>>
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Oct 13 01:55:40 2004
State : dirty, no-errors <<LOOK HERE>>
Active Devices : 14 <<LOOK HERE>>
Working Devices : 11 <<LOOK HERE>>
Failed Devices : 1 <<LOOK HERE>>
Spare Devices : 0 <<LOOK HERE>>
Layout : left-symmetric
Chunk Size : 64K
Number   Major   Minor   RaidDevice   State
   0       8      49         0        active sync   /dev/sdd1
   1       8     145         1        active sync   /dev/sdj1
   2       8      65         2        active sync   /dev/sde1
   3       8     161         3        active sync   /dev/sdk1
   4       8      81         4        active sync   /dev/sdf1
   5       8     177         5        active sync   /dev/sdl1
   6       8      97         6        active sync   /dev/sdg1
   7       8     193         7        active sync   /dev/sdm1
   8       8     241         8        active sync   /dev/sdp1
   9       8     209         9        active sync   /dev/sdn1
  10       8     113        10        active sync   /dev/sdh1
  11       8     225        11        active sync   /dev/sdo1
  12       8     129        12        active sync   /dev/sdi1
  13       8      33        13        active sync   /dev/sdc1
UUID : 8357a389:8853c2d1:f160d155:6b4e1b99
#cat /proc/mdstat
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
md2 : active raid5 sdc1[13] sdi1[12] sdo1[11] sdh1[10] sdn1[9] sdp1[8]
sdm1[7] sdg1[6] sdl1[5] sdf1[4] sdk1[3] sde1[2] sdj1[1] sdd1[0]
230980672 blocks level 5, 64k chunk, algorithm 2 [14/14]
[UUUUUUUUUUUUUU]
Guy
* RE: Poor RAID5 performance on new SMP system
2004-10-18 6:12 ` Guy
@ 2004-10-18 7:33 ` Marc
0 siblings, 0 replies; 13+ messages in thread
From: Marc @ 2004-10-18 7:33 UTC (permalink / raw)
To: linux-raid
I took hdg offline and ran tests on it separately with bonnie++ and it seems
OK. The array rebuild is really slow - max 15000kB/s - and the load average is
over 2. The strange thing is that kswapd is actively running whenever I
perform I/O on the array (and my swap file is not used at all). I haven't
noticed this before - I suspect it's related to this issue. Any ideas? Enable
highmem? (I only have 512MB RAM.)
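On the highmem question, the conventional 32-bit x86 numbers suggest it shouldn't matter here (a sketch: 896 MB is the usual lowmem window, assuming the stock 3G/1G kernel split):

```python
# On 32-bit x86 with the stock 3G/1G split, the kernel direct-maps
# roughly the first 896 MB of RAM; CONFIG_HIGHMEM only matters for RAM
# beyond that window.
LOWMEM_WINDOW_MB = 896   # conventional lowmem limit (assumes stock split)
ram_mb = 512             # RAM in the machine under discussion

needs_highmem = ram_mb > LOWMEM_WINDOW_MB
assert not needs_highmem  # 512 MB fits in lowmem: highmem won't help
```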
-----------
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid5 hdg1[1] hdk1[3] hdi1[2] hde1[0]
234444288 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
---------------
mdadm -D /dev/md0 (I got the Debian testing version v1.7.0 - it doesn't
show 'no-errors' now, but maybe that's because I've just rebuilt the array by
removing hdg and then re-adding it).
/dev/md0:
Version : 00.90.00
Creation Time : Sat Apr 17 12:19:25 2004
Raid Level : raid5
Array Size : 234444288 (223.58 GiB 240.07 GB)
Device Size : 78148096 (74.53 GiB 80.02 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Oct 18 15:10:52 2004
State : dirty
Active Devices : 4
Working Devices : 4
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
UUID : 775f1dcf:7cbc17ab:86e1e792:669b732f
Events : 0.86
Number   Major   Minor   RaidDevice   State
   0      33       1         0        active sync   /dev/hde1
   1      34       1         1        active sync   /dev/hdg1
   2      56       1         2        active sync   /dev/hdi1
   3      57       1         3        active sync   /dev/hdk1
--
---------- Original Message -----------
From: "Guy" <bugzilla@watkins-home.com>
To: "'Gerd Knops'" <gerti@bitart.com>, "'Marc'" <linux-raid@liquid-nexus.net>
Cc: <linux-raid@vger.kernel.org>
Sent: Mon, 18 Oct 2004 02:12:30 -0400
Subject: RE: Poor RAID5 performance on new SMP system
> You missed something!
> "State : dirty, no-errors"
>
> Mark,
> If you want, send the output of these 2 commands:
> cat /proc/mdstat
> mdadm -D /dev/md?
>
> Don't forget, with versions of md (or mdadm) older than about 6
> months, the counts get really off! My 14 disk array is fine.....
> Note the: "no-errors"! But: /dev/md2: Version : 00.90.00
> Creation Time : Fri Dec 12 17:29:50 2003 Raid Level : raid5
> Array Size : 230980672 (220.28 GiB 236.57 GB) Device Size :
> 17767744 (16.94 GiB 18.24 GB) Raid Devices : 14 <<LOOK HERE>>
> Total Devices : 12 <<LOOK HERE>> Preferred Minor : 2
> Persistence : Superblock is persistent
>
> Update Time : Wed Oct 13 01:55:40 2004
> State : dirty, no-errors <<LOOK HERE>>
> Active Devices : 14 <<LOOK HERE>>
> Working Devices : 11 <<LOOK HERE>>
> Failed Devices : 1 <<LOOK HERE>>
> Spare Devices : 0 <<LOOK HERE>>
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> 0 8 49 0 active sync /dev/sdd1
> 1 8 145 1 active sync /dev/sdj1
> 2 8 65 2 active sync /dev/sde1
> 3 8 161 3 active sync /dev/sdk1
> 4 8 81 4 active sync /dev/sdf1
> 5 8 177 5 active sync /dev/sdl1
> 6 8 97 6 active sync /dev/sdg1
> 7 8 193 7 active sync /dev/sdm1
> 8 8 241 8 active sync /dev/sdp1
> 9 8 209 9 active sync /dev/sdn1
> 10 8 113 10 active sync /dev/sdh1
> 11 8 225 11 active sync /dev/sdo1
> 12 8 129 12 active sync /dev/sdi1
> 13 8 33 13 active sync /dev/sdc1
> UUID : 8357a389:8853c2d1:f160d155:6b4e1b99
>
> #cat /proc/mdstat
> Personalities : [raid1] [raid5]
> read_ahead 1024 sectors
> md2 : active raid5 sdc1[13] sdi1[12] sdo1[11] sdh1[10] sdn1[9]
> sdp1[8] sdm1[7] sdg1[6] sdl1[5] sdf1[4] sdk1[3] sde1[2] sdj1[1] sdd1[0]
> 230980672 blocks level 5, 64k chunk, algorithm 2 [14/14]
> [UUUUUUUUUUUUUU]
>
> Guy
>
> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org
> [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Gerd Knops
> Sent: Monday, October 18, 2004 1:37 AM
> To: Marc
> Cc: linux-raid@vger.kernel.org
> Subject: Re: Poor RAID5 performance on new SMP system
>
> On Oct 17, 2004, at 21:11, Marc wrote:
>
> > Hi,
> > I recently upgraded my file server to a dual AMD 2800+ on a Tyan Tiger
> > MPX
> > motherboard. The previous server was using a PIII 700 on an Intel 440BX
> > motherboard. I basically just took the IDE drives and their controllers
> > across to the new machine. The strange thing is that the RAID-5
> > performance
> > is worse than before! Have a look at the stats below:
> >
>
> [..]
>
> > State : dirty, no-errors
> > Active Devices : 4
> > Working Devices : 4
> > Failed Devices : 1
> > Spare Devices : 0
> >
>
> Unless I am missing something, a disk is missing and the RAID runs
> in degraded (=slower) mode.
>
> Gerd
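
[A quick way to check Gerd's theory is to look for an underscore in the
device map in /proc/mdstat. This is a sketch against a hypothetical
status line, not output from Marc's machine; a degraded RAID5 shows
fewer active than total members, e.g. [4/3], and a "_" in the [] map.]

```shell
# Hypothetical /proc/mdstat status line for a 4-disk RAID5 with one
# member missing; a healthy array would read [4/4] [UUUU].
mdstat_line='230980672 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]'

# An underscore inside the [] map marks a missing or failed member.
if printf '%s\n' "$mdstat_line" | grep -q '\[U*_[U_]*\]'; then
    echo "array is degraded"
else
    echo "array is healthy"
fi
```

On a live system the same grep can be pointed at /proc/mdstat itself.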
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-
> raid" in the body of a message to majordomo@vger.kernel.org More
> majordomo info at http://vger.kernel.org/majordomo-info.html
>
------- End of Original Message -------
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: Poor RAID5 performance on new SMP system
2004-10-18 3:44 ` Richard Scobie
2004-10-18 4:56 ` Marc
@ 2004-10-18 17:26 ` Marc Marais
2004-10-18 18:41 ` Richard Scobie
1 sibling, 1 reply; 13+ messages in thread
From: Marc Marais @ 2004-10-18 17:26 UTC (permalink / raw)
To: Richard Scobie, linux-raid
I've moved one of the IDE cards to the 64-bit bus and that's improved things a
lot. The 2nd card isn't 3.3V capable and won't go into the 64-bit slot, though,
so I'm going to replace it.
I noticed using vmstat that the average latency (await) is over 50ms for the
card on the secondary PCI bus and less than 20ms on the 64 bit bus... Very
interesting...
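
[The split Marc describes is easy to spot if you tabulate the per-device
wait figures. The device names and millisecond values below are made up
for illustration, in the spirit of the numbers above: devices behind the
32-bit secondary bus versus the 64-bit bus.]

```shell
# Hypothetical per-device average wait (ms); flag anything whose wait
# suggests a saturated bus, using a rough 25 ms threshold.
samples='hde 52.4
hdg 51.1
hdi 18.7
hdk 17.2'
printf '%s\n' "$samples" | awk '$2 > 25 {print $1 " looks bus-limited: " $2 " ms await"}'
```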
Thanks Richard, and everyone else for your input(s).
(Oh, I tried 2.6.8 - it didn't make a difference at all, so I'm sticking with 2.4.26 :)
--
---------- Original Message -----------
From: Richard Scobie <richard@sauce.co.nz>
To: linux-raid@vger.kernel.org
Sent: Mon, 18 Oct 2004 16:44:20 +1300
Subject: Re: Poor RAID5 performance on new SMP system
> Marc wrote:
> > Hi,
> > I recently upgraded my file server to a dual AMD 2800+ on a Tyan Tiger MPX
> > motherboard. The previous server was using a PIII 700 on an Intel 440BX
> > motherboard. I basically just took the IDE drives and their controllers
> > across to the new machine. The strange thing is that the RAID-5 performance
> > is worse than before! Have a look at the stats below:
>
> Hi Marc,
>
> Unfortunately you have a lemon :(
>
> I spent some time trying to get acceptable performance out of an
> Adaptec SCSI RAID 0 on the 32 bit, 33MHz bus of one of these boards
> and eventually found this:
>
>
> http://forums.2cpu.com/showthread.php?s=c8040a4e9c9b6390dd389f1b3cca32de&threadid=31211&perpage=15
>
> The executive summary is
>
> "After testing AMD determined that the problem was identified as a
> bandwidth issue(this was startling information). It appears that the
> motherboard has a bandwidth limitation of 25MB/s on PCI devices that
> are connected through the AMD 768 Southbridge."
>
> This was in line with my findings.
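
[A back-of-envelope calculation shows why that ceiling is enough to
explain the bonnie++ numbers at the top of the thread. The figures below
are assumptions: the ~25 MB/s limit Richard quotes, shared by however
many array members hang off controllers on that bus.]

```shell
bus_ceiling=25        # MB/s, the AMD-768 secondary PCI limit reported above
drives_on_bus=4       # adjust to your controller layout
echo "~$((bus_ceiling / drives_on_bus)) MB/s per drive during a full-stripe read"
```

With four drives streaming at once, each gets only a few MB/s, well
below what a single drive manages on its own.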
>
> I have not tried, but it is possible that a 64 bit 66MHz IDE card
> will be OK.
>
> Regards,
>
> Richard
------- End of Original Message -------
* Re: Poor RAID5 performance on new SMP system
2004-10-18 17:26 ` Marc Marais
@ 2004-10-18 18:41 ` Richard Scobie
2004-10-18 20:45 ` Guy
2004-10-20 3:24 ` Mark Hahn
0 siblings, 2 replies; 13+ messages in thread
From: Richard Scobie @ 2004-10-18 18:41 UTC (permalink / raw)
To: linux-raid
Marc Marais wrote:
> I've moved one of the IDE cards to the 64-bit bus and that's improved things a
> lot. The 2nd card isn't 3.3V capable and won't go into the 64-bit slot, though,
> so I'm going to replace it.
>
> I noticed using vmstat that the average latency (await) is over 50ms for the
> card on the secondary PCI bus and less than 20ms on the 64 bit bus... Very
> interesting...
Glad it's not going to be a total loss.
I find AMD's behaviour over this bug to be very disappointing -
motherboards using this broken south bridge can still be purchased today
and the box does not state "Secondary PCI bus throughput limited to +-
25MB/s". You will not find this problem mentioned on their website either.
The board I tested is only 9 months old and when I pulled the SCSI card
and discs and placed them on the 33MHz bus of an equivalent dual Xeon
board, the throughput went up to +-90MB/s.
I was a big AMD fan prior to this, as the bang for the buck is way
better, but the time, money and effort wasted left a bad taste.
Regards,
Richard
* RE: Poor RAID5 performance on new SMP system
2004-10-18 18:41 ` Richard Scobie
@ 2004-10-18 20:45 ` Guy
2004-10-20 3:24 ` Mark Hahn
1 sibling, 0 replies; 13+ messages in thread
From: Guy @ 2004-10-18 20:45 UTC (permalink / raw)
To: linux-raid
I have never used AMD MBs, but until now I had plans to buy AMD the next
time I build a system. Stick with Intel, seems like a safe bet. The real
thing, not a generic equivalent! :)
Seems like they should have recalled the south bridge chip set. At least
that would stop new boards from hitting the market. And the MB companies
that want to keep customers should offer a free replacement. The lost time
can easily cost more than the board!
Guy
-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Richard Scobie
Sent: Monday, October 18, 2004 2:42 PM
To: linux-raid@vger.kernel.org
Subject: Re: Poor RAID5 performance on new SMP system
Marc Marais wrote:
> I've moved one of the IDE cards to the 64-bit bus and that's improved things
> a lot. The 2nd card isn't 3.3V capable and won't go into the 64-bit slot,
> though, so I'm going to replace it.
>
> I noticed using vmstat that the average latency (await) is over 50ms for the
> card on the secondary PCI bus and less than 20ms on the 64 bit bus... Very
> interesting...
Glad it's not going to be a total loss.
I find AMD's behaviour over this bug to be very disappointing -
motherboards using this broken south bridge can still be purchased today
and the box does not state "Secondary PCI bus throughput limited to +-
25MB/s". You will not find this problem mentioned on their website either.
The board I tested is only 9 months old and when I pulled the SCSI card
and discs and placed them on the 33MHz bus of an equivalent dual Xeon
board, the throughput went up to +-90MB/s.
I was a big AMD fan prior to this, as the bang for the buck is way
better, but the time, money and effort wasted left a bad taste.
Regards,
Richard
* Re: Poor RAID5 performance on new SMP system
2004-10-18 18:41 ` Richard Scobie
2004-10-18 20:45 ` Guy
@ 2004-10-20 3:24 ` Mark Hahn
1 sibling, 0 replies; 13+ messages in thread
From: Mark Hahn @ 2004-10-20 3:24 UTC (permalink / raw)
To: linux-raid
> I was a big AMD fan prior to this, as the bang for the buck is way
> better, but the time, money and effort wasted left a bad taste.
folks, this is long-spilt milk. yes, the Athlon was a good competitor
to the PIII, but AMD took a long time to respond when Intel got its
act together with the P4/xeon/i7500/etc. it's all water under the
southbridge, please don't beat the nice dead horse, move along nothing
to see, this parrot is dead.
opterons make really quite excellent servers, certainly as good as
anything from Intel, possibly better. no performance problems, and
plenty stable (chalk it up to an integrated heat spreader?) it *would*
be nice to start seeing opteron server boards with pci-e, though...
(a dual-opteron is just crying out for a pci-e tunnel off each cpu!)
end of thread, other threads:[~2004-10-20 3:24 UTC | newest]
Thread overview: 13+ messages
2004-10-18 2:11 Poor RAID5 performance on new SMP system Marc
2004-10-18 3:37 ` Guy
2004-10-18 4:04 ` Marc
2004-10-18 5:12 ` Guy
2004-10-18 3:44 ` Richard Scobie
2004-10-18 4:56 ` Marc
2004-10-18 17:26 ` Marc Marais
2004-10-18 18:41 ` Richard Scobie
2004-10-18 20:45 ` Guy
2004-10-20 3:24 ` Mark Hahn
2004-10-18 5:37 ` Gerd Knops
2004-10-18 6:12 ` Guy
2004-10-18 7:33 ` Marc