* Soft-/Hardware RAID Performance
@ 2003-02-19 19:56 Daniel Brockhaus
  2003-02-20  0:03 ` Neil Brown
  2003-02-20 10:55 ` Daniel Brockhaus
  0 siblings, 2 replies; 5+ messages in thread
From: Daniel Brockhaus @ 2003-02-19 19:56 UTC (permalink / raw)
  To: linux-raid

Hi all,

I need to build a server for an application that does lots of small writes 
and some small reads. So far I've built the hardware side of the server, 
using an Adaptec 2100S RAID controller and five Fujitsu MAM3184MP. My 
original intention was to build a RAID 10 array (RAID0 on two mirror sets 
of two disks each with one spare). But performance was very poor with this 
setup. I used a custom benchmark which reads and writes 4K blocks in random 
locations in a 2GB file (this is very close to the actual application):

Test results:

Cheap IDE drive: 50 writes/s, 105 reads/s.
MAM3184MP: 195 writes/s, 425 reads/s.

This is as expected. But:

Hardware RAID10 array: 115 writes/s, 405 reads/s.

Which is way slower than a single drive. Now the testing began:

Hardware RAID1: 145 writes/s, 420 reads/s.
Software RAID1: 180 writes/s, 450 reads/s.
Software RAID10: 190 writes/s, 475 reads/s.

Since write performance is more important than read performance, a single 
drive is still faster than any configuration using two or four drives I've 
tried. So the question is: Are there any tunable parameters which might 
increase performance? In theory, read performance on a two-disk RAID1 array 
should be almost twice as high as on a single disk while write performance 
should (almost) stay the same, and a two-disk RAID0 array should double 
both read and write performance. So the whole RAID10 array should be able 
to manage 350 writes/s and 1600 reads/s. What am I missing?
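
(Spelling that out with the numbers above, ignoring all overhead:

   RAID1 pair:          ~195 writes/s (each write goes to both disks in
                        parallel), ~2 x 425 = 850 reads/s
   RAID0 of two pairs:  ~2 x 195 = 390 writes/s, ~2 x 850 = 1700 reads/s

so 350 writes/s and 1600 reads/s is already a slightly conservative version
of that ideal figure.)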

Performance issues aside. If I go for the software RAID10: How can I 
configure the system to use the fifth drive as hot spare for both RAID1 
arrays? Is it safe to add the same drive to both arrays (haven't tried it 
yet)? And would you say that software RAID is stable enough to use in a 
production system?

Thanks a lot,
Daniel Brockhaus 



* Re: Soft-/Hardware RAID Performance
  2003-02-19 19:56 Soft-/Hardware RAID Performance Daniel Brockhaus
@ 2003-02-20  0:03 ` Neil Brown
  2003-02-20 10:55 ` Daniel Brockhaus
  1 sibling, 0 replies; 5+ messages in thread
From: Neil Brown @ 2003-02-20  0:03 UTC (permalink / raw)
  To: Daniel Brockhaus; +Cc: linux-raid

On Wednesday February 19, joker@astonia.com wrote:
> 
> Performance issues aside. If I go for the software RAID10: How can I 
> configure the system to use the fifth drive as hot spare for both RAID1 
> arrays? Is it safe to add the same drive to both arrays (haven't tried it 
> yet)? And would you say that software RAID is stable enough to use in a 
> production system?

This can be done using mdadm in --monitor mode.
If the manual page doesn't make it sufficiently clear, ask and I will
improve the manual page.
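
Roughly: put both RAID1 arrays into the same "spare-group" in
/etc/mdadm.conf and run mdadm in monitor mode; when one array loses a
disk, the monitor moves a spare across from the other array.  A minimal
sketch (device names are made up, adjust to your setup; the fifth disk
starts out as a spare in md0):

   DEVICE /dev/sd[abcde]1
   ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sde1 spare-group=shared
   ARRAY /dev/md1 devices=/dev/sdc1,/dev/sdd1 spare-group=shared

and then something like

   mdadm --monitor --scan --mail=root --delay=60 --daemonise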


mdadm is available from
   http://www.kernel.org/pub/linux/utils/raid/mdadm/

NeilBrown


* Re: Soft-/Hardware RAID Performance
  2003-02-19 19:56 Soft-/Hardware RAID Performance Daniel Brockhaus
  2003-02-20  0:03 ` Neil Brown
@ 2003-02-20 10:55 ` Daniel Brockhaus
  2003-02-20 18:31   ` Gregory Leblanc
  2003-02-21  0:08   ` Neil Brown
  1 sibling, 2 replies; 5+ messages in thread
From: Daniel Brockhaus @ 2003-02-20 10:55 UTC (permalink / raw)
  To: linux-raid

Hi again,

I've received some helpful responses, and I'd like to share those, and the 
new test results. Let me know if I'm boring you to death. ;)

One suggestion to speed up the reads was to issue several reads in 
parallel. Silly me didn't think of that, I was completely focused on the 
writes, which are more important for my application. Anyway. Using parallel 
reads (from four processes), read performance scales almost with the number 
of disks in the array. This goes for both hardware and software RAID, with 
software RAID being about 15% faster.
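
For the curious, "parallel reads" just means something along these lines
(a sketch only -- file name, process count and loop count are made up,
this is not the actual benchmark code):

/* Four processes doing independent random 4K reads from the same file. */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define FILE_SIZE (2048LL * 1024 * 1024)   /* 2GB test file */
#define BLK 4096                           /* 4K reads      */
#define NPROC 4

int main(void)
{
    int p;

    for (p = 0; p < NPROC; p++) {
        if (fork() == 0) {                 /* child: one independent reader */
            char buf[BLK];
            int fd, i;

            fd = open("testfile", O_RDONLY);
            if (fd < 0) { perror("open"); _exit(1); }
            srandom(getpid());
            for (i = 0; i < 10000; i++) {
                off_t pos = (off_t)(random() % (FILE_SIZE - BLK));
                if (pread(fd, buf, BLK, pos) < 0) { perror("pread"); break; }
            }
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                 /* parent waits for all readers */
        ;
    return 0;
}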

Write performance on the other hand does not change at all when using 
multiple processes - for obvious reasons: The kernel queues, sorts and 
merges write requests anyway, so the number of processes doing the writes 
does not matter. But I've noticed something peculiar: If I change my 
benchmark to write 4K blocks at 4K boundaries, write performance increases 
to almost 300%. This is quite logical, since the kernel can write a 'page 
aligned' block directly to the disk, without having to read the remaining 
parts of the page from disk first. The strange thing is that the expected 
performance gain from using RAID0 does show when writing aligned 4K blocks, 
but not when writing unaligned blocks. Non-aligned writes also tend to 
block much more often than aligned writes do. It seems the kernel doesn't 
handle unaligned writes very well. I can't be sure without having read the 
kernel sources (which I don't intend to do, they give me a headache), but I 
think the kernel serializes the reads needed to do the writes, thus killing 
any performance gain from using RAID arrays.
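
To make the aligned/unaligned difference concrete, the write test boils
down to something like this (again just a sketch -- file name, O_SYNC and
the loop count are assumptions, not the real benchmark):

/* Random synchronous 4K writes; run with any argument for the aligned variant. */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define FILE_SIZE (2048LL * 1024 * 1024)   /* 2GB test file */
#define BLK 4096                           /* 4K writes     */

int main(int argc, char **argv)
{
    int aligned = (argc > 1);              /* any argument => 4K-aligned offsets */
    char buf[BLK];
    int fd, i;

    memset(buf, 0xAB, sizeof(buf));
    /* O_SYNC so each write has to reach the disk before the call returns. */
    fd = open("testfile", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    srandom(getpid());
    for (i = 0; i < 10000; i++) {
        off_t pos = (off_t)(random() % (FILE_SIZE - BLK));
        if (aligned)
            pos &= ~((off_t)BLK - 1);      /* snap down to a 4K boundary */
        if (pwrite(fd, buf, BLK, pos) != BLK) { perror("pwrite"); break; }
    }
    close(fd);
    return 0;
}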

Concerning the question of how to use one hot spare for two arrays, Neil 
recommended using mdadm. I'll take a look at it today, thanks :)

Regards,
Daniel Brockhaus

At 20:56 19.02.03 +0100, you wrote:
>I need to build a server for an application that does lots of small writes 
>and some small reads. So far I've built the hardware side of the server, 
>using an Adaptec 2100S RAID controller and five Fujitsu MAM3184MP. My 
>original intention was to build a RAID 10 array (RAID0 on two mirror sets 
>of two disks each with one spare). But performance was very poor with this 
>setup. I used a custom benchmark which reads and writes 4K blocks in 
>random locations in a 2GB file (this is very close to the actual application):
>
>Test results for :
>
>Cheap IDE drive: 50 writes/s, 105 reads/s.
>MAM3184MP: 195 writes/s, 425 reads/s.
>
>This is as expected. But:
>
>Hardware RAID10 array: 115 writes/s, 405 reads/s.
>
>Which is way slower than a single drive. Now the testing began:
>
>Hardware RAID1: 145 writes/s, 420 reads/s.
>Software RAID1: 180 writes/s, 450 reads/s.
>Software RAID10: 190 writes/s, 475 reads/s.
>
>Since write performance is more important than read performance, a single 
>drive is still faster than any configuration using two or four drives I've 
>tried. So the question is: Are there any tunable parameters which might 
>increase performance? In theory, read performance on a two-disk RAID1 
>array should be almost twice as high as on a single disk while write 
>performance should (almost) stay the same, and a two-disk RAID0 array 
>should double both read and write performance. So the whole RAID10 array 
>should be able to manage 350 writes/s and 1600 reads/s. What am I missing?
>
>Performance issues aside. If I go for the software RAID10: How can I 
>configure the system to use the fifth drive as hot spare for both RAID1 
>arrays? Is it safe to add the same drive to both arrays (haven't tried it 
>yet)? And would you say that software RAID is stable enough to use in a 
>production system?



* Re: Soft-/Hardware RAID Performance
  2003-02-20 10:55 ` Daniel Brockhaus
@ 2003-02-20 18:31   ` Gregory Leblanc
  2003-02-21  0:08   ` Neil Brown
  1 sibling, 0 replies; 5+ messages in thread
From: Gregory Leblanc @ 2003-02-20 18:31 UTC (permalink / raw)
  To: linux-raid


On Thu, 2003-02-20 at 02:55, Daniel Brockhaus wrote:
[snip]
> One suggestion to speed up the reads was to issue several reads in 
> parallel. Silly me didn't think of that, I was completely focused on the 
> writes, which are more important for my application. Anyway. Using parallel 
> reads (from four processes), read performance scales almost with the number 
> of disks in the array. This goes for both hardware and software RAID, with 
> software RAID being about 15% faster.
> 
> Write performance on the other hand does not change at all when using 
> multiple processes - for obvious reasons: The kernel queues, sorts and 
> merges write requests anyway, so the number of processes doing the writes 
> does not matter. But I've noticed something peculiar: If I change my 
> benchmark to write 4K blocks at 4K boundaries, write performance increases 
> to almost 300%. This is quite logical, since the kernel can write a 'page 

What was your previous benchmark write size?  And what parameters did
you use when creating the RAID 0 array and filesystem?  There may be
opportunities for tuning there.
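
For example, the RAID0 chunk size and the ext2 stride can be matched,
along these lines (illustrative only, made-up devices and values;
stride = chunk size / block size = 64K / 4K = 16):

   mdadm --create /dev/md2 --level=0 --raid-devices=2 --chunk=64 /dev/md0 /dev/md1
   mke2fs -b 4096 -R stride=16 /dev/md2

(newer e2fsprogs spell the second one -E stride=16)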
	Greg
[snip]



* Re: Soft-/Hardware RAID Performance
  2003-02-20 10:55 ` Daniel Brockhaus
  2003-02-20 18:31   ` Gregory Leblanc
@ 2003-02-21  0:08   ` Neil Brown
  1 sibling, 0 replies; 5+ messages in thread
From: Neil Brown @ 2003-02-21  0:08 UTC (permalink / raw)
  To: Daniel Brockhaus; +Cc: linux-raid

On Thursday February 20, joker@astonia.com wrote:
> Hi again,
> 
> I've received some helpful responses, and I'd like to share those, and the 
> new test results. Let me know if I'm boring you to death. ;)
> 
> One suggestion to speed up the reads was to issue several reads in 
> parallel. Silly me didn't think of that, I was completely focused on the 
> writes, which are more important for my application. Anyway. Using parallel 
> reads (from four processes), read performance scales almost with the number 
> of disks in the array. This goes for both hardware and software RAID, with 
> software RAID being about 15% faster.
> 
> Write performance on the other hand does not change at all when using 
> multiple processes - for obvious reasons: The kernel queues, sorts and 
> merges write requests anyway, so the number of processes doing the writes 
> does not matter. But I've noticed something peculiar: If I change my 
> benchmark to write 4K blocks at 4K boundaries, write performance increases 
> to almost 300%. This is quite logical, since the kernel can write a 'page 
> aligned' block directly to the disk, without having to read the remaining 
> parts of the page from disk first. The strange thing is that the expected 
> performance gain from using RAID0 does show when writing aligned 4K blocks, 
> but not when writing unaligned blocks. Non-aligned writes also tend to 
> block much more often than aligned writes do. It seems the kernel doesn't 
> handle unaligned writes very well. I can't be sure without having read the 
> kernel sources (which I don't intend to do, they give me a headache), but I 
> think the kernel serializes the reads needed to do the writes, thus killing 
> any performance gain from using RAID arrays.

When you do unaligned writes to a block device, it pre-reads the parts
of each page that you don't write.  This causes your loss of
performance.

I *think* (i.e. vague memory from reading code suggests) that if you
open with O_DIRECT and make sure all your accesses are 512 byte
aligned and a multiple of 512 bytes in size, it should avoid the
pre-reading and should give you full performance.
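
For what it's worth, a minimal sketch of that approach (file name is made
up and this is untested; note O_DIRECT also wants the user-space buffer
aligned, hence posix_memalign):

/* One O_DIRECT write with 512-byte-aligned buffer, offset and length. */
#define _GNU_SOURCE                        /* for O_DIRECT */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;               /* multiple of 512             */
    const off_t  pos = 13 * 512;           /* any 512-byte-aligned offset */
    void *buf;
    int fd;

    if (posix_memalign(&buf, 512, len) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    memset(buf, 0xAB, len);

    fd = open("testfile", O_RDWR | O_DIRECT);   /* pre-existing test file */
    if (fd < 0) { perror("open"); return 1; }

    /* Everything is 512-byte aligned, so the kernel should not have to
       pre-read the rest of the page before doing this write. */
    if (pwrite(fd, buf, len, pos) != (ssize_t)len)
        perror("pwrite");

    close(fd);
    free(buf);
    return 0;
}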

NeilBrown

