* [linux-lvm] Re: LVM Performance effects?
@ 2006-07-13 15:56 Michael Heyse
  2006-07-13 17:43 ` Lars Ellenberg
  0 siblings, 1 reply; 2+ messages in thread
From: Michael Heyse @ 2006-07-13 15:56 UTC (permalink / raw)
  To: linux-lvm, bretm

(sorry for disrupting the thread - copied this message from the archives)

> I have set up a file server with LVM on top of RAID 5, and seem to be
> having an LVM-related performance issue.

Me too.

> raid device: 118.2 MB/s
> lvm device: 49.43 MB/s
> file system: 40.83 MB/s

Have you found an answer to that problem? I couldn't find anything
helpful in the archives. Is this a general LVM on software RAID issue?

I'm measuring a similar performance degradation on my system:

plain Software RAID-5:

# hdparm -t /dev/md3

/dev/md3:
 Timing buffered disk reads:  884 MB in  3.00 seconds = 294.49 MB/sec


LVM2 logical volume:

# hdparm -t /dev/data/temp

/dev/data/temp:
 Timing buffered disk reads:  316 MB in  3.01 seconds = 105.15 MB/sec

I tried changing the physical extent size, but that had almost no effect.
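
(For example, a different extent size is set when (re)creating the volume
group; the sizes below are just placeholders, not necessarily the exact
commands I used:

# vgcreate -s 64M data /dev/md3
# lvcreate -L 10G -n temp data
# hdparm -t /dev/data/temp

where -s selects the physical extent size.)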

Some brief system information; I'll gladly provide more if needed:

- kernel 2.6.17
- lvm2

  LVM version:     2.02.06 (2006-05-12)
  Library version: 1.02.07 (2006-05-11)
  Driver version:  4.5.0

- 2x dual-core Xeon, 3 GHz
- 6x WD360GD 36.7 GB SATA disks
- Software RAID-5 (chunk size: 256kB)

  --- Physical volume ---
  PV Name               /dev/md3
  VG Name               data
  PV Size               167.45 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              42868
  Free PE               4468
  Allocated PE          38400

  --- Volume group ---
  VG Name               data
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  11
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               167.45 GB
  PE Size               4.00 MB
  Total PE              42868
  Alloc PE / Size       38400 / 150.00 GB
  Free  PE / Size       4468 / 17.45 GB

  --- Logical volume ---
  LV Name                /dev/data/temp
  VG Name                data
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:5

Any hints are greatly appreciated.

Thanks,
Michael


* Re: [linux-lvm] Re: LVM Performance effects?
  2006-07-13 15:56 [linux-lvm] Re: LVM Performance effects? Michael Heyse
@ 2006-07-13 17:43 ` Lars Ellenberg
  0 siblings, 0 replies; 2+ messages in thread
From: Lars Ellenberg @ 2006-07-13 17:43 UTC (permalink / raw)
  To: linux-lvm

/ 2006-07-13 17:56:55 +0200
\ Michael Heyse:
> (sorry for disrupting the thread - copied this message from the archives)
> 
> > I have set up a file server with LVM on top of RAID 5, and seem to be
> > having an LVM-related performance issue.
> 
> Me too.
> 
> > raid device: 118.2 MB/s
> > lvm device: 49.43 MB/s
> > file system: 40.83 MB/s
> 
> Have you found an answer to that problem? I couldn't find anything
> helpful in the archives. Is this a general LVM on software RAID issue?

Maybe LVM (device mapper) does not propagate the read-ahead
parameters of the physical device to the upper layers.
(We had a similar problem with DRBD, which with respect to this problem is
"just another" virtual block device.)

Once we applied the following in the block device kernel code during device creation:
+
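+       /* copy the backing device's read-ahead setting into our own queue */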
+       if (q->backing_dev_info.ra_pages != b->backing_dev_info.ra_pages) {
+               INFO("Adjusting my ra_pages to backing device's (%lu -> %lu)\n",
+                    q->backing_dev_info.ra_pages,
+                    b->backing_dev_info.ra_pages);
+               q->backing_dev_info.ra_pages = b->backing_dev_info.ra_pages;
+       }
+

we suddenly got basically all the bandwidth the box could deliver.
Without that patch, on that particular storage backend, we had only
about 1/6 of the performance; other storage backends don't care at all.

You could verify this with "blockdev --getra /dev/whatever"
and try to tune it with "blockdev --setra 1024 /dev/whatever".
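
For example, with the device names from your mail (untested sketch;
1024 sectors = 512 KiB of read-ahead):

# blockdev --getra /dev/md3
# blockdev --getra /dev/data/temp
# blockdev --setra 1024 /dev/data/temp
# hdparm -t /dev/data/temp

If the logical volume reports a much smaller read-ahead than the raid
device, that would point at the propagation problem described above.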

-- 
: Lars Ellenberg                                  Tel +43-1-8178292-0  :
: LINBIT Information Technologies GmbH            Fax +43-1-8178292-82 :
: Schoenbrunner Str. 244, A-1120 Vienna/Europe   http://www.linbit.com :

