* Re: Severe slowdown with LVM on RAID, alignment problem?
@ 2008-02-29 8:12 Michael Guntsche
2008-02-29 10:37 ` Peter Rabbitson
0 siblings, 1 reply; 7+ messages in thread
From: Michael Guntsche @ 2008-02-29 8:12 UTC (permalink / raw)
To: Maurice Hilarius; +Cc: linux-raid
On Fri, 29 Feb 2008 00:53:06 -0700, Maurice Hilarius <maurice@harddata.com> wrote:
> Michael Guntsche wrote:
>> ..
>> While the result for XFS is different (reading is actually faster), the
>> difference between XFS and XFS on LVM is still there.
>>
>>
> Great. At least now the figures are more realistic.
>> pvcreate was called so that the first PE starts exactly at 256K; no sunits
>> were used with mkfs.xfs for the LVM case.
>>
>> I still do not understand the read output at all.
>>
> That is certainly a puzzle.
Is it possible that my computer is just too slow to get good read results?
I wonder, since write speeds seem nearly identical.
I just tried with an ext3 FS.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec  %CP
lvm-al8g-ext3    8G 45029  27 22436  29 55034  45 192.0   3
While reading is a little bit faster, it's nowhere near the speed I get on
md0 itself.
Kind regards,
Michael
* Re: Severe slowdown with LVM on RAID, alignment problem?
2008-02-29 8:12 Severe slowdown with LVM on RAID, alignment problem? Michael Guntsche
@ 2008-02-29 10:37 ` Peter Rabbitson
2008-02-29 10:45 ` Michael Guntsche
2008-03-01 20:45 ` Bill Davidsen
0 siblings, 2 replies; 7+ messages in thread
From: Peter Rabbitson @ 2008-02-29 10:37 UTC (permalink / raw)
To: Michael Guntsche; +Cc: Maurice Hilarius, linux-raid
Michael Guntsche wrote:
>
> Is it possible that my computer is just too slow to get good read results?
unlikely
> While reading is a little bit faster it's nowhere near the speed I get on
> md0 itself.
>
I would guess that you did not set the correct read-ahead values for the LV.
If you do not specify anything it will default to 128k (256 sectors), which is
terribly small for sequential reads. On the contrary the MD device will do
some clever calculations and set its read-ahead correctly depending on the
raid level and the number of disks. Do:
blockdev --setra 65536 <your lv device>
and run the tests again. You are almost certainly going to get the results you
are after.
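A minimal sketch of the suggestion above, for readers following along. The LV path is a hypothetical example, and the actual blockdev calls are commented out since they require root and a real device; the units arithmetic is the point:

```shell
#!/bin/sh
# blockdev's read-ahead values are given in 512-byte sectors.
# The default of 256 sectors works out to 128 KiB:
echo "default read-ahead: $(( 256 * 512 / 1024 )) KiB"

# The suggested 65536 sectors is 32 MiB:
echo "suggested read-ahead: $(( 65536 * 512 / 1024 )) KiB"

# Inspect and raise the LV's read-ahead (needs root;
# /dev/mapper/vg0-lv0 is a hypothetical LV path):
# blockdev --getra /dev/mapper/vg0-lv0
# blockdev --setra 65536 /dev/mapper/vg0-lv0
```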
Peter
* Re: Severe slowdown with LVM on RAID, alignment problem?
2008-02-29 10:37 ` Peter Rabbitson
@ 2008-02-29 10:45 ` Michael Guntsche
2008-03-01 20:45 ` Bill Davidsen
1 sibling, 0 replies; 7+ messages in thread
From: Michael Guntsche @ 2008-02-29 10:45 UTC (permalink / raw)
To: Peter Rabbitson; +Cc: Maurice Hilarius, linux-raid
Hello Peter
On Fri, 29 Feb 2008 11:37:58 +0100, Peter Rabbitson <rabbit+list@rabbit.us> wrote:
> Michael Guntsche wrote:
> I would guess that you did not set the correct read-ahead values for the
> LV. If you do not specify anything it will default to 128k (256 sectors),
> which is terribly small for sequential reads. On the contrary the MD device
> will do some clever calculations and set its read-ahead correctly depending
> on the raid level and the number of disks. Do:
>
> blockdev --setra 65536 <your lv device>
>
> and run the tests again. You are almost certainly going to get the results
> you are after.
I checked the read-ahead value on md0 (3072) and set this on the LV as well.
Here is the result:
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec  %CP
lvm              8G 37251  25 27620  25 103996 49 160.0   2
I did not test it with the proper sunit/swidth values yet, but the result is
now looking much better.
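The step described above can be scripted so the LV always matches the array. The device names are assumptions, and the blockdev lines are commented out because they require root; the arithmetic shows what 3072 sectors means in bytes:

```shell
#!/bin/sh
# Copy md0's read-ahead (3072 sectors on this array) to the LV so both
# block devices read ahead the same amount:
# ra=$(blockdev --getra /dev/md0)
# blockdev --setra "$ra" /dev/mapper/vg0-lv0   # hypothetical LV path

# 3072 sectors of 512 bytes is 1.5 MiB of read-ahead:
echo "md0 read-ahead: $(( 3072 * 512 / 1024 )) KiB"
```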
I'll play around with it some more this afternoon and post my result of
what is working best for me.
In the meantime, thank you all for your quick and helpful responses.
Kind regards,
Michael
* Re: Severe slowdown with LVM on RAID, alignment problem?
2008-02-29 10:37 ` Peter Rabbitson
2008-02-29 10:45 ` Michael Guntsche
@ 2008-03-01 20:45 ` Bill Davidsen
2008-03-01 21:26 ` Michael Guntsche
1 sibling, 1 reply; 7+ messages in thread
From: Bill Davidsen @ 2008-03-01 20:45 UTC (permalink / raw)
To: Peter Rabbitson; +Cc: Michael Guntsche, Maurice Hilarius, linux-raid
Peter Rabbitson wrote:
> Michael Guntsche wrote:
>>
>> Is it possible that my computer is just too slow to get good read
>> results?
> unlikely
>
>> While reading is a little bit faster it's nowhere near the speed I
>> get on
>> md0 itself.
>>
>
> I would guess that you did not set the correct read-ahead values for
> the LV. If you do not specify anything it will default to 128k (256
> sectors), which is terribly small for sequential reads. On the
> contrary the MD device will do some clever calculations and set its
> read-ahead correctly depending on the raid level and the number of
> disks. Do:
>
> blockdev --setra 65536 <your lv device>
>
> and run the tests again. You are almost certainly going to get the
> results you are after.
I will just comment that really large readahead values may cause
significant memory usage and transfer of unused data. My observations
and some posts indicate that very large readahead and/or chunk size may
reduce random access performance. I believe you said you had 512MB RAM,
that may be a factor as well.
Also, blockdev will allow you to diddle readahead on the device,
/dev/sdX, the array /dev/mdX, and the LV /dev/mapper/NAME. The
interaction of these, and the performance results of having the exact
same amount of readahead memory used in different ways, is a fine topic
for a thesis, conference paper, magazine article, or nightmare.
Unless you are planning to use this machine mainly for running
benchmarks, I would tune it for your actual load and a bit of worst case
avoidance.
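The point about stacked layers can be made concrete with a quick survey loop. The device names are hypothetical; on a box where a path does not exist, the sketch notes that rather than failing:

```shell
#!/bin/sh
# Read-ahead exists independently at each layer of the stack:
# physical disk -> md array -> logical volume. Surveying all three
# shows which layer's setting you are actually changing.
survey() {
    for dev in "$@"; do
        if [ -b "$dev" ]; then
            # blockdev --getra reports the value in 512-byte sectors.
            printf '%-24s %6s sectors\n' "$dev" "$(blockdev --getra "$dev")"
        else
            printf '%-24s (not present)\n' "$dev"
        fi
    done
}

survey /dev/sda /dev/md0 /dev/mapper/vg0-lv0
```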
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
* Re: Severe slowdown with LVM on RAID, alignment problem?
2008-03-01 20:45 ` Bill Davidsen
@ 2008-03-01 21:26 ` Michael Guntsche
2008-03-02 20:14 ` Bill Davidsen
0 siblings, 1 reply; 7+ messages in thread
From: Michael Guntsche @ 2008-03-01 21:26 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-raid
On Mar 1, 2008, at 21:45, Bill Davidsen wrote:
>> blockdev --setra 65536 <your lv device>
>>
>> and run the tests again. You are almost certainly going to get the
>> results you are after.
>
> I will just comment that really large readahead values may cause
> significant memory usage and transfer of unused data. My observations
> and some posts indicate that very large readahead and/or chunk size
> may reduce random access performance. I believe you said you had
> 512MB RAM, that may be a factor as well.
>
I did not set such a large read-ahead. I had a look at the md0 device
which had a value of 3072 and set this on the LV device as well.
Performance really improved after this.
>
> Unless you are planning to use this machine mainly for running
> benchmarks, I would tune it for your actual load and a bit of worst
> case avoidance.
>
The last part is exactly what I am aiming at right now.
I tried to keep my changes to a bare minimum.
* Change chunk size to 256K
* Align the physical extent of the LVM to it
* Use the same parameters for mkfs.xfs that are chosen automatically
by mkfs.xfs if called on the md0 device itself.
* Set the read-ahead of the LVM block device to the same value as the
md0 device
* Change the stripe_cache_size to 2048
With these settings applied to my setup here, RAID+XFS and RAID+LVM+XFS
perform nearly identically, and that was my goal from the beginning.
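The checklist above translates roughly to the commands below. This is a hedged sketch: the disk list, volume-group and LV names, and the pvcreate metadata-padding trick are assumptions not spelled out in the thread, and the destructive commands are commented out. The sunit/swidth arithmetic at the end is the alignment check:

```shell
#!/bin/sh
# 1. Create the RAID-5 with a 256 KiB chunk (disk list is hypothetical):
# mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 \
#       /dev/sda3 /dev/sdb3 /dev/sdc3

# 2. Align LVM's first physical extent to the chunk. One common trick of
#    that era was to size the PV metadata so the data area starts at 256 KiB:
# pvcreate --metadatasize 250k /dev/md0   # rounds up to a 256 KiB data start
# vgcreate vg0 /dev/md0

# 3. Give mkfs.xfs the same stripe geometry it would auto-detect on md0:
#    su = chunk size, sw = number of data disks (3-disk RAID-5 -> 2).
# mkfs.xfs -d su=256k,sw=2 /dev/vg0/lv0

# 4. Copy md0's read-ahead to the LV:
# blockdev --setra "$(blockdev --getra /dev/md0)" /dev/vg0/lv0

# 5. Enlarge the stripe cache (in pages per device):
# echo 2048 > /sys/block/md0/md/stripe_cache_size

# Sanity check: a 256 KiB stripe unit is 512 sectors, and with two data
# disks the stripe width is 1024 sectors:
echo "sunit=$(( 256 * 1024 / 512 )) swidth=$(( 2 * 256 * 1024 / 512 ))"
```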
Now I am off to figure out what's happening during the initial
rebuild of the RAID-5; see my other mail for that.
Once again, thank you all for your valuable input and support.
Kind regards,
Michael
* Re: Severe slowdown with LVM on RAID, alignment problem?
2008-03-01 21:26 ` Michael Guntsche
@ 2008-03-02 20:14 ` Bill Davidsen
2008-03-04 19:52 ` Severe slowdown with LVM on RAID, alignment problem? - Autodetect? Janek Kozicki
0 siblings, 1 reply; 7+ messages in thread
From: Bill Davidsen @ 2008-03-02 20:14 UTC (permalink / raw)
To: Michael Guntsche; +Cc: linux-raid
Michael Guntsche wrote:
>
> On Mar 1, 2008, at 21:45, Bill Davidsen wrote:
>
>>> blockdev --setra 65536 <your lv device>
>>>
>>> and run the tests again. You are almost certainly going to get the
>>> results you are after.
>>
>> I will just comment that really large readahead values may cause
>> significant memory usage and transfer of unused data. My observations
>> and some posts indicate that very large readahead and/or chunk size
>> may reduce random access performance. I believe you said you had
>> 512MB RAM, that may be a factor as well.
>>
>
> I did not set such a large read-ahead. I had a look at the md0 device
> which had a value of 3072 and set this on the LV device as well.
> Performance really improved after this.
>
>>
>> Unless you are planning to use this machine mainly for running
>> benchmarks, I would tune it for your actual load and a bit of worst
>> case avoidance.
>>
>
> The last part is exactly what I am aiming at right now.
> I tried to keep my changes to a bare minimum.
>
> * Change chunk size to 256K
> * Align the physical extent of the LVM to it
> * Use the same parameters for mkfs.xfs that are chosen automatically
> by mkfs.xfs if called on the md0 device itself.
>
> * Set the read-ahead of the LVM block device to the same value as the
> md0 device
> * Change the stripe_cache_size to 2048
>
>
> With these settings applied to my setup here, RAID+XFS and
> RAID+LVM+XFS perform nearly identically, and that was my goal from the
> beginning.
>
> Now I am off to figure out what's happening during the initial rebuild
> of the RAID-5 but see my other mail for this.
>
> Once again, thank you all for your valuable input and support.
Thank you for reporting results; hopefully they will be useful to some
future seeker of the same info.
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
* Re: Severe slowdown with LVM on RAID, alignment problem? - Autodetect?
2008-03-02 20:14 ` Bill Davidsen
@ 2008-03-04 19:52 ` Janek Kozicki
0 siblings, 0 replies; 7+ messages in thread
From: Janek Kozicki @ 2008-03-04 19:52 UTC (permalink / raw)
To: linux-raid
Hello
I was wondering whether someone from our community, skilled enough,
could submit a patch to LVM that would enable autodetection of RAID
stripes in the underlying device?
We would no longer suffer any slowdowns, then ;-)
If LVM cannot communicate with the RAID layer (e.g. because it's too
difficult, since /dev/md0 is *only* a block device), then LVM could
do empirical autodetection by writing data to /dev/md0 (*), reading it
back from /dev/sda3 (*) and /dev/sdb3 (*), and comparing.
(*) those are just example values, of course; they would be command-line
parameters ;-)
best regards
--
Janek Kozicki |