* [linux-lvm] Performance impact of LVM
From: Sander Smeenk
Date: 2006-07-27 12:02 UTC
To: linux-lvm

Hello list!

I've recently subscribed here as I have some questions about LVM,
particularly the performance impact of LVM on disk I/O.

I'm a happy LVM user; on my workstation at home I've used it for a long
time. No special setups or anything, but it's nice to be able to resize
partitions on the fly, or have a number of disks act as one huge disk...

So, when I had to reinstall all the servers for the company I work for, I
decided to use LVM for the same reasons stated above. But now I wonder:
does LVM have any impact on disk I/O? Have any tests been done on this
subject?

I couldn't really find any on the internet. Most of the things you find
are implementation issues and 'how does it work' stuff ;-)

I'm running LVM2 (2.02.06) on Debian 'sid' (unstable, but I hate that
word) using Linux kernels 2.6.17.xx.

For example, one of my servers has 4x 34 GB SCSI disks and 2x IDE disks.
One of the IDE disks has a 250 MB boot partition with the rest as an LVM
partition; the other IDE disk has one big LVM partition, and the same
goes for the 4 SCSI disks.

Then I made a scsi_vg01 with all the SCSI disks and an ide_vg01 with all
the IDE disks, and started lvcreating "partitions" inside those VGs.
That's basically how I set up LVM on all of my servers. Some servers
have different disk configurations though...

Can anyone shed any light on this approach? Are there impacts on the
performance of read/write actions? Any information is welcome.

Hope to hear from you all!

Kind regards,
Sander.
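The layout described above corresponds to LVM commands roughly like the
following. This is a sketch only: the device names, LV names and sizes
are assumptions for illustration, not details taken from the message.

  # Physical volumes on the LVM partitions (device names assumed)
  pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1   # the 4 SCSI disks
  pvcreate /dev/hda2 /dev/hdb1                       # the 2 IDE disks (hda1 being /boot)

  # One volume group per disk type
  vgcreate scsi_vg01 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  vgcreate ide_vg01 /dev/hda2 /dev/hdb1

  # "Partitions" (logical volumes) inside those VGs; names and sizes assumed
  lvcreate -n data -L 40G scsi_vg01
  lvcreate -n backup -L 60G ide_vg01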
* Re: [linux-lvm] Performance impact of LVM
From: Dieter Stüken
Date: 2006-07-27 15:30 UTC
To: LVM general discussion and development

Sander Smeenk wrote:
> I've recently subscribed here as I have some questions about LVM,
> particularly the performance impact of LVM on disk I/O.

Impact? No, I don't think so. LVM does not interfere with the I/O system
too much. For an I/O request, LVM decides where (device/sector) to find
the data. The transfer of the data itself, to or from the device, happens
as before, without LVM. I think the delay of the lookup to find the data
is negligible compared to the transfer time itself.

LVM may even improve performance indirectly if you spread your data over
several disks by using striping. But this is not guaranteed and depends
on the structure of your data and the access pattern.

There are several studies on this topic, e.g.:

  http://www.suse.com/en/whitepapers/lvm/lvm1.html#perf

... although it's a bit outdated, as it still covers LVM1.

Dieter.
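Striping, as mentioned here, is requested when the LV is created. A
minimal sketch, assuming a VG named scsi_vg01 backed by four physical
volumes:

  # Stripe the LV across 4 PVs with a 64 KB stripe size
  # (-i is the number of stripes, -I the stripe size in KB)
  lvcreate -i 4 -I 64 -n fastdata -L 50G scsi_vg01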
* Re: [linux-lvm] Performance impact of LVM
From: Sander Smeenk
Date: 2006-07-28 7:09 UTC
To: linux-lvm

Quoting Dieter Stüken (stueken@conterra.de):
> > I've recently subscribed here as I have some questions about LVM,
> > particularly the performance impact of LVM on disk I/O.
> Impact? No, I don't think so.

Okay, that's a clear start :-)

> http://www.suse.com/en/whitepapers/lvm/lvm1.html#perf
> ... although it's a bit outdated, as it still covers LVM1.

I'll read it anyway! Thanks a lot.

-- 
| If you're too open-minded, your brain will fall out.
* Re: [linux-lvm] Performance impact of LVM
From: Mark H. Wood
Date: 2006-07-28 12:54 UTC
To: LVM general discussion and development

Some uses of LVM *could* increase I/O wait time. It's easy to paste new
extents onto existing volumes, and the new extent probably won't be
contiguous with the old one. So you *could* see longer average seek
delays due to additional arm travel distance between extents. It's
strongly dependent on access patterns.

However, this effect is probably down in the noise for most systems.
The only way to know if it's a problem for you is to measure. I would
expect that, given contemporary amounts of caching on the drive, the
controller, and in the OS, you probably won't see it unless you are
driving your storage *really* hard. If you do, dump/recreate
contiguously/restore will make it go away.

-- 
Mark H. Wood, Lead System Programmer   mwood@IUPUI.Edu
Typically when a software vendor says that a product is "intuitive" he
means the exact opposite.
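One way to act on the "measure" advice is to check whether an LV has
picked up extra segments and to watch device utilisation under load. A
sketch; the LV path and sampling interval are assumptions:

  # Show the segment mapping of an LV; more than one segment
  # means its extents are no longer contiguous on disk
  lvdisplay -m /dev/scsi_vg01/data

  # Watch per-device utilisation and wait times while the workload runs
  iostat -x 5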
* Re: [linux-lvm] Performance impact of LVM
From: Michael T. Babcock
Date: 2006-07-28 14:22 UTC
To: LVM general discussion and development

Mark H. Wood wrote:
> However, this effect is probably down in the noise for most systems.
> The only way to know if it's a problem for you is to measure. I would
> expect that, given contemporary amounts of caching on the drive, the
> controller, and in the OS, you probably won't see it unless you are
> driving your storage *really* hard. If you do, dump/recreate
> contiguously/restore will make it go away.

For database partitions we always use "lvcreate -C y" when creating LVs
for this reason. For every other area of the system, however, I've
noticed almost no impact from LVM (1 or 2), except that striping is
easier to set up than with RAID0 because there's no need to repartition.

For example, I often do something like:

  lvcreate -C y -n dbdata1 -L 100G mainstore
  lvcreate -I 2 -n dbtemp -L 10G mainstore
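A note on the second command: in LVM2, -I sets the stripe size, while the
number of stripes is given with -i. If the intent is a two-way stripe for
the temporary database area (an assumption about the intent), the command
would typically look more like:

  # Two stripes, 64 KB stripe size; names and sizes follow the example above
  lvcreate -i 2 -I 64 -n dbtemp -L 10G mainstore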
* Re: [linux-lvm] Performance impact of LVM
From: Sander Smeenk
Date: 2006-07-31 7:58 UTC
To: linux-lvm

Quoting Mark H. Wood (mwood@IUPUI.Edu):
> Some uses of LVM *could* increase I/O wait time. It's easy to paste new
> extents onto existing volumes, and the new extent probably won't be
> contiguous with the old one. So you *could* see longer average seek
> delays due to additional arm travel distance between extents. It's
> strongly dependent on access patterns.

Hmm. Yeah, the setup I'm using was created and never changed thereafter
(yet), so I guess it should all still be contiguous... Thanks for the
info though. I'll see if I can do some tests when I get my new hardware
to play with. But as said before, the added latency is probably not even
measurable...

Thanks!
Sander.

-- 
| The older you get, the better you realize you were.
* Re: [linux-lvm] Performance impact of LVM
From: Lamont R. Peterson
Date: 2006-07-27 17:14 UTC
To: linux-lvm

On Thursday 27 July 2006 06:02am, Sander Smeenk wrote:
> Hello list!
>
> I've recently subscribed here as I have some questions about LVM,
> particularly the performance impact of LVM on disk I/O.
>
> [...]
>
> Then I made a scsi_vg01 with all the SCSI disks and an ide_vg01 with all
> the IDE disks, and started lvcreating "partitions" inside those VGs.
> That's basically how I set up LVM on all of my servers. Some servers
> have different disk configurations though...

Any particular reason not to include all the disks in a single VG?

Also, this setup will actually leave you more vulnerable to single disk
failures. I would *highly* recommend using RAID to aggregate your disks
together, then using LVM on top of that to make things manageable.

> Can anyone shed any light on this approach? Are there impacts on the
> performance of read/write actions? Any information is welcome.

When you read from or write to an LV, the LVM code in the kernel's block
layer decides which physical device and sector the I/O operation is sent
to. The only "extra" LVM I/O happens when you are (re)configuring LVM:
things like creating, resizing and deleting an LV require a little bit
of disk I/O, of course. Other than the small amount of overhead when
using snapshot volumes, there isn't any other impact on I/O performance.

However, I wonder whether the LVM address look-up code is better than,
equal to, or worse than that for a plain block device (e.g. a partition,
a loopback-mounted file, etc.). If there is a statistically relevant
delta there, I think it would only impact I/O latency, and even then it
couldn't be much.

When booting your system, it does have to take a moment and "vgscan" for
VGs. This is pretty fast, but it adds a second or two to your boot time.

That's all I can think of off the top of my head. HTH.

-- 
Lamont R. Peterson <peregrine@OpenBrainstem.net>
Founder [ http://blog.OpenBrainstem.net/peregrine/ ]
OpenBrainstem - Intelligent Open Source Software Engineering
[ http://www.OpenBrainstem.net/ ]
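A minimal sketch of the RAID-under-LVM arrangement recommended above,
assuming the four SCSI disks appear as /dev/sd[a-d]1 and that RAID5 is an
acceptable level (both are assumptions):

  # Build a software RAID5 array from the four SCSI partitions
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # Put LVM on top of the array instead of on the raw disks
  pvcreate /dev/md0
  vgcreate scsi_vg01 /dev/md0
  lvcreate -n data -L 80G scsi_vg01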
* Re: [linux-lvm] Performance impact of LVM
From: Sander Smeenk
Date: 2006-07-28 7:24 UTC
To: linux-lvm

Quoting Lamont R. Peterson (peregrine@openbrainstem.net):
> > Then I made a scsi_vg01 with all the SCSI disks and an ide_vg01 with all
> > the IDE disks, and started lvcreating "partitions" inside those VGs.
> Any particular reason not to include all the disks in a single VG?

Well, SCSI is usually faster than IDE, so the SCSI disks are used for
storing databases, website content, etc. The IDE disks are only used for
booting and storing local backup copies... Although with the latest
(PATA) IDE disks the performance difference has already become really
small ;)

> Also, this setup will actually leave you more vulnerable to single disk
> failures. I would *highly* recommend using RAID to aggregate your disks
> together, then using LVM on top of that to make things manageable.

Mmmh, yeah, that's true. Although it doesn't really matter much if one of
these servers fails due to disk problems; there are enough of them to
take the traffic ;) I should fix that sometime soon though ;)

> The only "extra" LVM I/O happens when you are (re)configuring LVM:
> things like creating, resizing and deleting an LV require a little bit
> of disk I/O, of course. Other than the small amount of overhead when
> using snapshot volumes, there isn't any other impact on I/O performance.

OK, really clear explanation. Luckily it matches my idea about LVM, and
hopefully with the two responses I've gotten I can convince my superiors
that LVM will not cause any performance impact.

> However, I wonder whether the LVM address look-up code is better than,
> equal to, or worse than that for a plain block device (e.g. a partition,
> a loopback-mounted file, etc.). If there is a statistically relevant
> delta there, I think it would only impact I/O latency, and even then it
> couldn't be much.

Would using bonnie++ on a 'plain block device' be a good enough way to
measure that? ;-) I think the latency is so small that it won't show ;)

> When booting your system, it does have to take a moment and "vgscan" for
> VGs. This is pretty fast, but it adds a second or two to your boot time.

Haha. That's NO problem compared to the time it takes the BIOS to detect
disks, load the SATA BIOS and scan for disks, the Adaptec BIOS and scan
for disks, the two network cards, blah ;)

> That's all I can think of off the top of my head. HTH.

TY! Really appreciated.

Kind regards,
Sander.

-- 
| It was a business doing pleasure with you!
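Such a comparison could be run by pointing bonnie++ at a filesystem on an
LV and at one on a plain partition; the mount points and test size below
are assumptions:

  # Same filesystem type on both targets, then run identical tests
  bonnie++ -d /mnt/lv_test -s 4096 -u nobody
  bonnie++ -d /mnt/raw_test -s 4096 -u nobody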