Matt Darcy wrote:
> It's almost as if there is an "IO leak", which is the only way I can
> think of to describe it. The card / system performs quite well as
> individual disks, but as soon as it's put into a RAID-5 configuration
> using any number of disks, the creation of the array appears to be fine
> until around 20%-30% of the way through the assembly, when the speed of
> the array creation plummets and the machine hangs.

You have 7x250G disks in RAID-5, so that's 6x250G, or 1.5T of total
space. At the beginning of the RAID recovery, while the system is still
good, you're getting 12M/s. It slows, then dies, after 25% to 40% of
completion.

6x250G is 1536000M; at 12M/s that's about 35 hours. You tested the
disks individually (without RAID) for ~12 hours, which is about 34% of
35 hours. So it's possible you'd see the same slowdown and hang if you
tested the individual disks longer.

You're having these problems on a Marvell controller with 2.6.15 and
the in-kernel sata_mv driver, right? I've got a very similar system
with unexplained hard hangs too. On my system the individual disks seem
to work fine, RAID-6 of the disks seems to work fine, and LVM of the
disks seems to work fine, but LVM on top of a RAID-6 of the disks hangs.

One weird thing I've discovered is that if I enable all the kernel
debugging options, the system is perfectly stable, and all the debug
tests report no warnings or errors to the logs. It seems like a race
condition somewhere; I suspect the interaction of RAID-6 and LVM, but I
suppose it could be anywhere.

I've attached the .config of the production (non-debug) kernel that
hangs, and the diff to the debug kernel that works.

-- 
Sebastian Kuzminsky
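
P.S. A quick sanity check on the arithmetic above, as a minimal Python
sketch. The only inputs are the numbers already quoted in this thread,
and I'm assuming G = 1024M throughout, which is what makes 6x250G come
out to 1536000M:

    # RAID-5 over 7 x 250G disks: one disk's worth of parity,
    # six disks' worth of data to resync.
    data_disks = 6
    disk_size_m = 250 * 1024           # 250G per disk, in M (MiB)
    resync_rate = 12                   # observed initial speed, M/s

    total_m = data_disks * disk_size_m       # 1536000 M
    hours = total_m / resync_rate / 3600     # ~35.6 hours

    print(total_m, round(hours, 1))          # 1536000 35.6
    print(round(12 / hours, 2))              # 0.34: a 12-hour disk test
                                             # covers about a third of it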