linux-raid.vger.kernel.org archive mirror
* Observation about RAID 0 performance in 2.6.25 kernel
@ 2008-09-08 13:00 AndrewL733
       [not found] ` <a43edf1b0809080741j72c850f8n6508870ccec01ee7@mail.gmail.com>
  2008-09-09  9:58 ` Peter Grandi
  0 siblings, 2 replies; 3+ messages in thread
From: AndrewL733 @ 2008-09-08 13:00 UTC (permalink / raw)
  To: linux-raid

I'm wondering if anybody has observed something similar to what I am 
seeing. For the past year, my production storage systems have primarily 
been using the 2.6.20.15 kernel (that's what we settled on a while back, 
and generally I have been happy with it).

About 3 months ago, I began experimenting with the 2.6.25 kernel, 
because I wanted to use some kernel-specific features that were only 
introduced in 2.6.23, 2.6.24 and 2.6.25.

My production systems typically consist of servers with two 3ware 9650 
12-port RAID cards and 24 SATA drives, 12 drives on each card. For 
maximum performance, we stripe the two 12-drive "hardware RAIDs" 
together using Linux software RAID 0. The rest of the hardware includes 
a very recent motherboard based on the Intel 5400 chipset, with 4 Gen-2 
x8 PCIe slots, an Intel 5482 3.2 GHz quad-core CPU, and 4 GB of RAM. 
In other words, it's very capable hardware.
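
In case the details matter, the stripe is created roughly like this 
(device names and chunk size below are illustrative, not necessarily 
the exact values we use):

    # stripe the two 3ware exported units together with md RAID 0
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 \
        /dev/sda /dev/sdb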

When comparing the 2.6.20.15 kernel with the 2.6.25 kernel, I have 
noticed that:

For the underlying 3ware devices, all benchmarks -- dd, bonnie++, and my 
own "torture test" that measures performance while doing many random 
reads simultaneously -- show that the 2.6.25 kernel is about 10 percent 
faster than the 2.6.20.15 kernel for both reading and writing.
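
The sequential part of my testing is along these lines (file sizes and 
paths are representative, not the exact commands I ran):

    # sequential write to a test file on the array's filesystem
    dd if=/dev/zero of=/mnt/md0/testfile bs=1M count=20000 conv=fdatasync
    # drop the page cache so the read comes from disk, then read back
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/md0/testfile of=/dev/null bs=1M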

However, when I stripe those two 3ware devices together with Linux 
software RAID 0, the 2.6.25 kernel gives me about a 20 percent BOOST in 
WRITE performance compared to the 2.6.20.15 kernel, but about an 8 
percent DROP in READ performance.

My tests have been conducted with the in-kernel 3ware drivers, as well 
as with 3ware's latest drivers compiled for each kernel (so, in the 
latter case, I have the same 3ware firmware and driver on either 
kernel). The results are very similar either way.

Does anybody have any insights into what might be going on here? Does 
Linux software RAID need to be configured differently in 2.6.25 to NOT 
lose READ performance? Is there something that must be done to VM tuning 
with 2.6.25? Is there a known issue with 2.6.25 that perhaps has been 
resolved in 2.6.26?

Regards,
Andrew

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: Observation about RAID 0 performance in 2.6.25 kernel
       [not found] ` <a43edf1b0809080741j72c850f8n6508870ccec01ee7@mail.gmail.com>
@ 2008-09-09  1:04   ` AndrewL733
  0 siblings, 0 replies; 3+ messages in thread
From: AndrewL733 @ 2008-09-09  1:04 UTC (permalink / raw)
  To: Billy Crook, linux-raid

Billy Crook wrote:
> I would suspect the default IO scheduler changed between those two
> kernels.  On both systems,
> [bcrook@bcrook ~]$ cat /sys/block/sda/queue/scheduler
> noop anticipatory deadline [cfq]
>
> I bet you'll get closer matching results if you echo the name of the
> .20 default scheduler to /sys/block/sda/queue/scheduler on .25.
> Though, it might be worth contrasting the performance of all options
> on the .25 system.
>
> You probably already know this, but the IO scheduler determines the
> order in which requests are dispatched, and therefore how the heads
> move across the disk.
> It can greatly impact performance, so choosing the right scheduler,
> and tuning it, can potentially help a lot.
>   
Nice try, but no, both systems are running the deadline scheduler. CFQ?  
I believe that means "poor performance for all". CFQ is fair, in that it 
treats every request equally badly!

In all seriousness, CFQ is not for this kind of storage server.
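
For anyone following along, this is how I check and pin the scheduler 
(sda/sdb here stand in for whatever the 3ware units show up as on your 
system):

    cat /sys/block/sda/queue/scheduler
    # prints e.g.:  noop anticipatory [deadline] cfq
    echo deadline > /sys/block/sda/queue/scheduler
    echo deadline > /sys/block/sdb/queue/scheduler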

Andrew
> On Mon, Sep 8, 2008 at 08:00, AndrewL733 <AndrewL733@aol.com> wrote:
>   
>> I'm wondering if anybody has observed something similar to what I am seeing.
>> For the past year, my production storage systems have primarily been using
>> the 2.6.20.15 kernel (that's what we settled on a while back, and generally
>> I have been happy with it).
>>
>> About 3 months ago, I began experimenting with the 2.6.25 kernel, because I
>> wanted to use some kernel-specific features that were only introduced in
>> 2.6.23, 2.6.24 and 2.6.25.
>>
>> My production systems typically consist of servers with two 3ware 9650
>> 12-port RAID cards and 24 SATA drives, 12 drives on each card. For maximum
>> performance, we stripe together the two 12-drive "hardware RAIDs" using
>> Linux software RAID-0. My other hardware includes a very recent motherboard
>> based on the Intel 5400 chipset, with 4 Gen-2 x8 PCIe slots and an Intel
>> 5482 3.2 GHz quad-core CPU with 4 GB of RAM. In other words, it's very
>> capable hardware.
>>
>> When comparing the 2.6.20.15 kernel with the 2.6.25 kernel, I have noticed
>> that:
>>
>> For the underlying 3ware devices, all benchmarks -- dd, bonnie++, and my own
>> "torture test" that measures performance doing many random reads
>> simultaneously -- show that 2.6.25 kernel is about 10 percent faster than
>> the 2.6.20.15 kernel for both reading and writing.
>>
>> However, when I stripe together those two 3ware devices with Linux software
>> RAID 0, with the 2.6.25 kernel I get about a 20 percent BOOST in
>>  performance for WRITING compared to the 2.6.20.15 kernel, but I get about
>> an 8 percent DROP in READING performance with the 2.6.25 kernel.
>>
>> My tests have been conducted using the in-kernel 3ware drivers, as well as
>> compiling 3ware's latest drivers for each kernel (so, in the latter case, I
>> have the same 3ware firmware and driver for either kernel). The results are
>> very similar either way.
>>
>> Does anybody have any insights into what might be going on here? Does Linux
>> software RAID need to be configured differently in 2.6.25 to NOT lose READ
>> performance? Is there something that must be done to VM tuning with 2.6.25?
>> Is there a known issue with 2.6.25 that perhaps has been resolved with
>> 2.6.26?
>>
>> Regards,
>> Andrew
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>>     


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: Observation about RAID 0 performance in 2.6.25 kernel
  2008-09-08 13:00 Observation about RAID 0 performance in 2.6.25 kernel AndrewL733
       [not found] ` <a43edf1b0809080741j72c850f8n6508870ccec01ee7@mail.gmail.com>
@ 2008-09-09  9:58 ` Peter Grandi
  1 sibling, 0 replies; 3+ messages in thread
From: Peter Grandi @ 2008-09-09  9:58 UTC (permalink / raw)
  To: Linux RAID


> When comparing the 2.6.20.15 kernel with the 2.6.25 kernel,
> I have noticed that: For the underlying 3ware devices, all
> benchmarks -- dd, bonnie++, [ ... ] I get about an 8 percent
> DROP in READING performance with the 2.6.25 kernel.

It seems particularly dumb to avoid giving absolute numbers
here, yet it is sort of expected from someone thinking that
"READING performance" and benchmark results are the same thing
(especially someone who uses bonnie++ and does not give the
exact parameters used, either).

> Does anybody have any insights into what might be going on
> here? Does Linux software RAID need to be configured
> differently in 2.6.25 to NOT lose READ performance? [ ... ]

A difference of 10% plus or minus is insignificant and well
within measurement error and happenstance. Some people ascribe
significance to such variations, but that is due to a lack of
understanding, as if a complex and often poorly designed IO
subsystem that is meant to distribute IOPS across several
devices had deterministic performance on a multitasking system
with load spread over several CPUs. Variations like that can
depend on a timing bug being fixed in an unrelated driver.

In particular, as to MD read performance, it has been reported
quite a few times on this mailing list that it seems to depend
to a very large extent on the precise value of the block device
readahead, with variations of *several times*, and with the best
benchmark results achieved by laughably large values of that
parameter (laughable also because they are most likely
counterproductive in real use). Worrying about 8% is pointless
and clueless.
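
Checking and varying that parameter takes seconds (the device name and
readahead value below are just examples; measure before and after with
the same workload rather than trusting any single number):

    blockdev --getra /dev/md0         # current readahead, in 512-byte sectors
    blockdev --setra 16384 /dev/md0   # e.g. 8 MiB of readahead; then rerun the test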

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2008-09-09  9:58 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-09-08 13:00 Observation about RAID 0 performance in 2.6.25 kernel AndrewL733
     [not found] ` <a43edf1b0809080741j72c850f8n6508870ccec01ee7@mail.gmail.com>
2008-09-09  1:04   ` AndrewL733
2008-09-09  9:58 ` Peter Grandi
