linux-raid.vger.kernel.org archive mirror
* RAID10 Performance
@ 2011-03-02  8:50 Aaron Sowry
  2011-03-02 11:16 ` NeilBrown
  0 siblings, 1 reply; 20+ messages in thread
From: Aaron Sowry @ 2011-03-02  8:50 UTC (permalink / raw)
  To: linux-raid


Hello,

I have been testing different RAID configurations on a 2-disk setup, and
have a couple of questions regarding performance. The information I have
found online so far seems to contradict itself fairly regularly, so I was
hoping for a more coherent answer :)

1) As I understand it, a RAID10 'near' configuration using two disks is
essentially equivalent to a RAID1 configuration. Is this correct?

2) Does md RAID1 support 'striped' reads? If not, is RAID1 read
performance in any way related to the number of disks in the array?

3) From what I have read so far, a RAID10 'far' configuration on 2 disks
provides increased read performance over an equivalent 'near'
configuration; however, I am struggling to understand exactly why. I
understand the difference between the 'near' and 'far' configurations,
but not *why* this should provide any speed increase. What am I missing?
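
For reference, my understanding of the two layouts on two disks is
roughly this (the numbers are data chunks, each stored once per mirror
copy):

  near=2:   disk1: 0 1 2 3 ...       far=2:   disk1: 0 2 4 ... | 1 3 5 ...
            disk2: 0 1 2 3 ...                disk2: 1 3 5 ... | 0 2 4 ...

So 'far' appears to spread one complete copy across the first half of
both disks in RAID0 fashion, but I still don't see where the gain comes
from in practice.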

4) I have performed an (admittedly fairly basic) benchmark on the same
system under two different configurations, RAID10,n2 and RAID10,f2,
using tiobench with default settings. In short, the results showed a
significant speed increase for single-threaded sequential reads (83MB/s
vs 166MB/s), some increase for single-threaded random reads (1.85MB/s vs
2.25MB/s), but a decrease for every other metric, including
multi-threaded sequential and random reads. I was expecting write
performance to decrease slightly under RAID10,f2 compared to RAID10,n2,
but am slightly confused about the multi-threaded read performance. Is
it my expectations or my testing that needs to be reviewed?
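
For what it's worth, each run was just tiobench with its defaults on a
freshly created filesystem; something like the following should
reproduce the single- and multi-threaded cases (flags quoted from
memory, so check tiobench --help):

  tiobench --threads 1
  tiobench --threads 4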

Cheers,
Aaron



* RAID10 Performance
@ 2011-03-02  9:04 Aaron Sowry
  2011-03-02  9:24 ` Robin Hill
                   ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Aaron Sowry @ 2011-03-02  9:04 UTC (permalink / raw)
  To: linux-raid

Hello,

I have been testing different RAID configurations on a 2-disk setup, and
have a couple of questions regarding performance. The information I have
found online so far seems to contradict itself fairly regularly, so I was
hoping for a more coherent answer :)

1) As I understand it, a RAID10 'near' configuration using two disks is
essentially equivalent to a RAID1 configuration. Is this correct?

2) Does md RAID1 support 'striped' reads? If not, is RAID1 read
performance in any way related to the number of disks in the array?

3) From what I have read so far, a RAID10 'far' configuration on 2 disks
provides increased read performance over an equivalent 'near'
configuration; however, I am struggling to understand exactly why. I
understand the difference between the 'near' and 'far' configurations,
but not *why* this should provide any speed increase. What am I missing?
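
In case the exact invocation matters, the two configurations correspond
to mdadm's n2/f2 layout names, i.e. something like the following (device
names are placeholders):

  mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1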

4) I have performed an (admittedly fairly basic) benchmark on the same
system under two different configurations, RAID10,n2 and RAID10,f2,
using tiobench with default settings. In short, the results showed a
significant speed increase for single-threaded sequential reads (83MB/s
vs 166MB/s), some increase for single-threaded random reads (1.85MB/s vs
2.25MB/s), but a decrease for every other metric, including
multi-threaded sequential and random reads. I was expecting write
performance to decrease under RAID10,f2 compared to RAID10,n2, but am
slightly confused about the multi-threaded read performance. Is it my
expectations or my testing that needs to be reviewed?

Cheers,
Aaron

* RAID10 Performance
@ 2012-07-26 14:16 Adam Goryachev
  2012-07-27  7:07 ` Stan Hoeppner
  2012-07-27 12:05 ` Phil Turmel
  0 siblings, 2 replies; 20+ messages in thread
From: Adam Goryachev @ 2012-07-26 14:16 UTC (permalink / raw)
  To: linux-raid

Hi all,

I've got a system with the following config whose performance I am
trying to improve. Hopefully you can guide me in the best direction.

1 x SSD (OS drive only)
3 x 2TB WDC WD2003FYYS-02W0B1

The three HDDs are configured in a single RAID10 array (I configured
RAID10 to easily support adding additional drives later; I realise/hope
it is currently equivalent to RAID1):
md0 : active raid10 sdb1[0] sdd1[2](S) sdc1[1]
      1953511936 blocks super 1.2 2 near-copies [2/2] [UU]

This is then shared with DRBD to another identical system. LVM is used
to carve the redundant storage into virtual disks, and finally iSCSI is
used to export the virtual disks to the various virtual machines running
on other physical boxes.

When a single VM is accessing data, performance is more than acceptable
(max around 110MB/s as reported by dd).
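
The dd figure above comes from a simple sequential read along the lines
of the following (the LV path is a placeholder):

  dd if=/dev/vg0/vm-disk of=/dev/null bs=1M count=4096 iflag=direct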

The two SAN machines have 1 Gb ethernet crossover between them, and 4 x
Gb bonded to the switch which connects to the physical machines running
the VM's (which have only a single Gb connection).

The issue is poor performance when more than one machine attempts to do
disk-intensive activity at the same time (i.e. when the antivirus scan
starts on all VMs at the same time, or during the backup window, etc.).

During these times, performance can drop to 5MB/s (reported by dd, or
calculated from timings on the Windows VMs, etc.). I'd like to:
a) improve overall performance when multiple VMs are reading/writing
data to the drives
b) hopefully set a minimum performance level for each VM (so one VM
can't starve the others).
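
For (b) I have wondered whether the blkio throttling controller would
do the job, along the lines of the sketch below, though as I understand
it that needs a newer kernel than my 2.6.32 (the cgroup name, mount
point and limit here are just examples):

  mkdir /sys/fs/cgroup/blkio/vm1
  # cap reads from /dev/md0 (major:minor 9:0) at 10MB/s for tasks in this group
  echo "9:0 10485760" > /sys/fs/cgroup/blkio/vm1/blkio.throttle.read_bps_device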

I have adjusted some DRBD-related values and significantly improved
performance there (not to say it is perfect yet).
I am currently using the deadline scheduler on the HDDs, but this
doesn't make much difference.
I have manually balanced IRQs across the available CPUs (two bonded
ethernet ports on one, two bonded ethernet ports on a second, SATA on a
third, and the rest of the IRQs on the fourth; it is a quad-core CPU).
The exact commands are sketched below.
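
The scheduler and IRQ affinity were set along these lines (IRQ numbers
vary per boot, so the 45 and 46 here are just examples):

  echo deadline > /sys/block/sdb/queue/scheduler
  cat /sys/block/sdb/queue/scheduler       # shows: noop [deadline] cfq
  grep eth /proc/interrupts                # find the ethernet IRQ numbers
  echo 1 > /proc/irq/45/smp_affinity       # hex CPU bitmask: CPU0
  echo 2 > /proc/irq/46/smp_affinity       # CPU1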

If I add the third (hot spare) disk into the RAID10 array, could I get
1.5x the total storage capacity and improve performance by approx 30%?
If I add another two disks (on each server), could I extend the array to
2x total storage capacity, double performance, and still keep the hot
spare?
Alternatively, if I add the third (hot spare) disk as another mirror,
could I keep 1x the total storage capacity (i.e. a 3-disk RAID1) and
improve read performance?
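
I assume the reshape for the first option would be something like:

  mdadm --grow /dev/md0 --raid-devices=3

though I'm not sure my 2.6.32 kernel supports reshaping RAID10 at all,
so it may have to be a recreate-and-restore instead.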

Are there other details I should provide, or knobs I can tweak, to get
better performance?

Additional data:
mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jun  4 23:31:20 2012
     Raid Level : raid10
     Array Size : 1953511936 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Jul 27 00:07:21 2012
          State : active
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : near=2
     Chunk Size : 512K

           Name : san2:0  (local to host san2)
           UUID : b402c62b:2ae7eca3:89422456:2cd7c6f3
         Events : 82

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

       2       8       49        -      spare   /dev/sdd1

cat /proc/drbd
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
    ns:295435862 nr:0 dw:298389532 dr:1582870916 al:10877089 bm:53752
lo:2 pe:0 ua:0 ap:1 ep:1 wo:b oos:0

Running debian stable 2.6.32-5-amd64

top - 00:10:57 up 20 days, 18:41,  1 user,  load average: 0.25, 0.50, 0.61
Tasks: 340 total,   1 running, 339 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   7903292k total,  7702696k used,   200596k free,  5871076k buffers
Swap:  3939320k total,        0k used,  3939320k free,  1266756k cached

(Note: the load average peaks at up to 12 during heavy I/O periods)

Thank you for any advice or assistance you can provide.

Regards,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au



Thread overview: 20+ messages
2011-03-02  8:50 RAID10 Performance Aaron Sowry
2011-03-02 11:16 ` NeilBrown
  -- strict thread matches above, loose matches on Subject: below --
2011-03-02  9:04 Aaron Sowry
2011-03-02  9:24 ` Robin Hill
2011-03-02 10:14   ` Keld Jørn Simonsen
2011-03-02 14:42 ` Mark Knecht
2011-03-02 14:47   ` Mathias Burén
2011-03-02 15:02 ` Mario 'BitKoenig' Holbe
2012-07-26 14:16 Adam Goryachev
2012-07-27  7:07 ` Stan Hoeppner
2012-07-27 13:02   ` Adam Goryachev
2012-07-27 18:29     ` Stan Hoeppner
2012-07-28  6:36       ` Adam Goryachev
2012-07-28 15:33         ` Stan Hoeppner
2012-08-08  3:49           ` Adam Goryachev
2012-08-08 16:59             ` Stan Hoeppner
2012-08-08 17:14               ` Roberto Spadim
2012-08-09  1:00               ` Adam Goryachev
2012-08-09 22:37                 ` Stan Hoeppner
2012-07-27 12:05 ` Phil Turmel
