* Disappointing performance: 5-disk RAID6, 3.11.6
@ 2014-01-21 16:42 Jon Nelson
2014-01-21 16:55 ` Mathias Burén
2014-01-21 22:54 ` NeilBrown
0 siblings, 2 replies; 5+ messages in thread
From: Jon Nelson @ 2014-01-21 16:42 UTC (permalink / raw)
To: LinuxRaid
I have a 5-disk RAID6 using (5) 320GB SATA drives.
I rarely see even sequential I/O approaching that of a single drive's
performance.
Example: (Rarely!) I'll see an aggregate 250MB/s read or write, but
that translates to 50MB/s read or write per-drive. I was hoping for
more.
The partition layout looks like this:
/dev/sda1 : start= 2048, size= 1024000, Id=83, bootable
/dev/sda2 : start= 1026048, size= 1024000, Id=82
/dev/sda3 : start= 2050048, size=623091712, Id=fd
/dev/sda4 : start= 0, size= 0, Id= 0
on all 5 disks, and sd{whatever}3 is used to assemble the raid,
specifically, /dev/md2.
mdadm -D /dev/md2:
/dev/md2:
Version : 1.2
Creation Time : Fri Nov 1 11:13:07 2013
Raid Level : raid6
Array Size : 934242816 (890.96 GiB 956.66 GB)
Used Dev Size : 311414272 (296.99 GiB 318.89 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jan 21 10:33:52 2014
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : turnip:2 (local to host turnip)
UUID : bece804d:eaaeb280:38d2d7f3:1e493146
Events : 21788
Number Major Minor RaidDevice State
0 8 51 0 active sync /dev/sdd3
1 8 35 1 active sync /dev/sdc3
2 8 3 2 active sync /dev/sda3
3 8 19 3 active sync /dev/sdb3
4 8 67 4 active sync /dev/sde3
The filesystem is ext4, and debugfs says:
RAID stride: 16
RAID stripe width: 48
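For reference, those ext4 values line up with the array geometry. A quick sketch of the arithmetic (the 4 KiB ext4 block size is an assumption, though it is the mke2fs default):

```shell
# RAID geometry -> ext4 stride/stripe-width (values from this array;
# the 4 KiB filesystem block size is assumed, not taken from the thread).
chunk_kib=64    # md chunk size
block_kib=4     # ext4 block size
data_disks=3    # 5-disk RAID6 = 5 drives - 2 parity

stride=$((chunk_kib / block_kib))
stripe_width=$((stride * data_disks))

echo "stride=$stride stripe_width=$stripe_width"   # stride=16 stripe_width=48
```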
The processor is an AMD Phenom 9150e (quad-core, x86_64) and the O/S
is openSUSE 13.1, kernel 3.11.6. Some of the hardware looks like this:
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] RS780 Host Bridge
00:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780/RS880 PCI
to PCI bridge (int gfx)
00:07.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] RS780/RS880 PCI
to PCI bridge (PCIE port 3)
00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI]
SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode]
Settings:
The stripe_cache_size is 4096 (see
http://blog.jamponi.net/2013/12/sw-raid6-performance-influenced-by.html
)
readahead is 16384
scheduler is deadline
queue depth per-drive is 1.
nr_requests is 256.
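For anyone reproducing this, a sketch of how those knobs are typically set (sysfs paths from memory, device names are the ones above; none of this persists across a reboot, and it needs root):

```shell
# Apply the tuning described above. NOTE: paths and the readahead units
# (512-byte sectors for blockdev --setra) are my best understanding,
# not copied from the thread -- verify on your own kernel.
echo 4096 > /sys/block/md2/md/stripe_cache_size
blockdev --setra 16384 /dev/md2
for d in sda sdb sdc sdd sde; do
    echo deadline > /sys/block/$d/queue/scheduler
    echo 256      > /sys/block/$d/queue/nr_requests
    echo 1        > /sys/block/$d/device/queue_depth
done
```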
Does this seem out of line? Thoughts?
--
Jon
* Re: Disappointing performance: 5-disk RAID6, 3.11.6
From: Mathias Burén @ 2014-01-21 16:55 UTC (permalink / raw)
To: Jon Nelson; +Cc: LinuxRaid
On 21 January 2014 17:42, Jon Nelson <jnelson-linux-raid@jamponi.net> wrote:
>
> I have a 5-disk RAID6 using (5) 320GB SATA drives.
> I rarely see even sequential I/O approaching that of a single drive's
> performance.
> Example: (Rarely!) I'll see an aggregate 250MB/s read or write, but
> that translates to 50MB/s read or write per-drive. I was hoping for
> more.
> ...
> queue depth per-drive is 1.
> ...
Does enabling NCQ help? (I see "queue depth per-drive is 1")
Mathias
* Re: Disappointing performance: 5-disk RAID6, 3.11.6
From: Jon Nelson @ 2014-01-21 22:00 UTC (permalink / raw)
To: Mathias Burén; +Cc: LinuxRaid
On Tue, Jan 21, 2014 at 10:55 AM, Mathias Burén <mathias.buren@gmail.com> wrote:
> On 21 January 2014 17:42, Jon Nelson <jnelson-linux-raid@jamponi.net> wrote:
>>
>> I have a 5-disk RAID6 using (5) 320GB SATA drives.
>> I rarely see even sequential I/O approaching that of a single drive's
>> performance.
...
> Does enabling NCQ help? (I see "queue depth per-drive is 1")
I didn't see any noticeable performance change with 31 (as high as it would go).
--
Jon
* Re: Disappointing performance: 5-disk RAID6, 3.11.6
From: NeilBrown @ 2014-01-21 22:54 UTC (permalink / raw)
To: Jon Nelson; +Cc: LinuxRaid
On Tue, 21 Jan 2014 10:42:19 -0600 Jon Nelson
<jnelson-linux-raid@jamponi.net> wrote:
> I have a 5-disk RAID6 using (5) 320GB SATA drives.
> I rarely see even sequential I/O approaching that of a single drive's
> performance.
> Example: (Rarely!) I'll see an aggregate 250MB/s read or write, but
> that translates to 50MB/s read or write per-drive. I was hoping for
> more.
A 5 disk RAID6 has 3 data drives (in each stripe), so 250MB/s translates to
250/3 or 83MB/s per drive (skipping over parity data isn't faster than
reading it unless you have a very large chunk size, which brings other costs).
What exactly were you hoping for?
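The arithmetic above, spelled out (numbers from this thread):

```shell
# Sequential reads stream from the data disks only: RAID6 dedicates
# 2 of the 5 drives in each stripe to parity, so aggregate throughput
# is spread over 3 drives, not 5.
disks=5
parity=2
aggregate_mb=250

data_disks=$((disks - parity))
per_drive=$((aggregate_mb / data_disks))
echo "each drive delivers ~${per_drive} MB/s"   # ~83 MB/s, not 250/5 = 50
```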
If you run something like
for i in a b c d e
do dd if=/dev/sd${i}3 of=/dev/null bs=1M count=100 &
done
while the system is otherwise idle, what throughput does each dd report?
NeilBrown
* Re: Disappointing performance: 5-disk RAID6, 3.11.6
From: Jon Nelson @ 2014-01-22 0:24 UTC (permalink / raw)
To: NeilBrown; +Cc: LinuxRaid
On Tue, Jan 21, 2014 at 4:54 PM, NeilBrown <neilb@suse.de> wrote:
> On Tue, 21 Jan 2014 10:42:19 -0600 Jon Nelson
> <jnelson-linux-raid@jamponi.net> wrote:
>
>> I have a 5-disk RAID6 using (5) 320GB SATA drives.
>> I rarely see even sequential I/O approaching that of a single drive's
>> performance.
>> Example: (Rarely!) I'll see an aggregate 250MB/s read or write, but
>> that translates to 50MB/s read or write per-drive. I was hoping for
>> more.
>
> A 5 disk RAID6 has 3 data drives (in each stripe), so 250MB/s translates to
> 250/3 or 83MB/s per drive (skipping over parity data isn't faster than
> reading it unless you have a very large chunk size, which brings other costs).
>
> What exactly were you hoping for?
>
> If you run something like
> for i in a b c d e
> do dd if=/dev/sd${i}3 of=/dev/null bs=1M count=100 &
> done
> while the system is otherwise idle, what throughput does each dd report?
Of course, you're absolutely right about data disks vs. parity disks.
I'd been playing with raid10 for so long...
Bumping that count up to 10000, I get an aggregate of (up to) 360MB/s,
averaging around 325MB/s, lows near 300MB/s.
Something didn't seem right, though, so I re-ran the dd on each drive
individually and found an outlier. Most were 70-75MB/s, but one was
55MB/s. Bummer. Anyway, thanks for reminding me to think twice, post
once.
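For the record, the per-drive check can be scripted like so (a sketch only; it demos against a scratch file so it runs unprivileged, but the real targets would be the member partitions):

```shell
# Time a sequential read from each listed device, one at a time, so
# the drives don't compete for bus bandwidth -- slow outliers stand out.
bench_read() {
    for dev in "$@"; do
        printf '%s: ' "$dev"
        # dd reports throughput on stderr; keep only the summary line
        dd if="$dev" of=/dev/null bs=1M count=100 2>&1 | tail -n 1
    done
}

# Real usage: bench_read /dev/sd[a-e]3
# Demo on a scratch file so the sketch is runnable anywhere:
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=8 2>/dev/null
result=$(bench_read "$scratch")
echo "$result"
rm -f "$scratch"
```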
--
Jon