linux-lvm.redhat.com archive mirror
* [linux-lvm] advice for curing terrible snapshot performance?
@ 2010-11-12 21:52 chris (fool) mccraw
From: chris (fool) mccraw @ 2010-11-12 21:52 UTC (permalink / raw)
  To: linux-lvm

hi folks,

i'm new to linux lvm but a longtime user of LVMs on other commercial
unices.  i love some of the features, like r/w snapshots!  and indeed
snapshots are the primary reason i'm even interested in using LVM on
linux.

however, snapshots really shoot my system in the foot.  on my (12
processor, 12GB RAM, x86_64 centos 5.5) server, i have two pricey
areca hardware raid cards that give me ridiculous write performance:
i can sustain over 700MByte/sec (writing a file with dd
if=/dev/zero bs=1M, sized at twice the raid card's onboard memory,
and timing the write plus a sync afterwards) to the volumes on either
card (both backed by 7 or more fast disks).  enabling a single
snapshot reduces that speed by a factor of 10!  each additional
snapshot isn't as drastic a drop, but it still doesn't scale well:


no snapshot  = ~11sec (727MB/sec)
1 snapshot   = ~102sec (78MB/sec)
2 snapshots  = ~144sec (55MB/sec)
3 snapshots  = ~313sec (25MB/sec)
4 snapshots  = ~607sec (15MB/sec)
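(for reference, here's roughly how i'm timing these -- TARGET and
SIZE_MB below are placeholders, not my real paths; in the actual test
the file lives on the LV under test and is sized at 2x the controller
cache:)

```shell
# sketch of the timing method described above; TARGET and SIZE_MB are
# placeholders -- in the real runs TARGET sits on the snapshotted LV
# and SIZE_MB is twice the raid controller's onboard cache
TARGET=${TARGET:-/tmp/ddtest.out}
SIZE_MB=${SIZE_MB:-16}
time sh -c "dd if=/dev/zero of=\"$TARGET\" bs=1M count=$SIZE_MB && sync"
```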


i have my snapshots set up on a separate array from the master
filesystem, on a separate raid card.  i did not change the default
parameters for the setup (i.e. the snapshot chunksize), because our
typical workload is reading and writing small (<64k) files.  i can
copy non-snapshotted files from an LV on array1 to array2 at a good
clip, 307MByte/sec (including sync).  copies from the parent array to
itself (no snapshots enabled) go at about 220MByte/sec.
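(in case it helps, the setup is roughly this -- vg/lv names are made
up; the trailing PV argument is what pins the snapshot's exception
store to the second array, and the chunksize is the default i left
alone:)

```shell
# hypothetical names: volume group vg0, origin LV "data",
# /dev/sdc1 = a PV on the second raid card's array
lvcreate --snapshot --size 50G --name data_snap0 vg0/data /dev/sdc1

# the default i didn't touch could be overridden like so:
#   lvcreate --snapshot --chunksize 64k ...
```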

all of my measurements were repeated 4x and averaged--occasionally
there was one that was a good 30% faster than the other 3, but it was
always an outlier.  typically all measurements for a given scenario
were within 10% of each other.

so i guess that, as a replacement for a netapp, a setup with a few
hourly & daily snapshots, and even one weekly, isn't something people
do with stock linux LVM?  or am i just doing it wrong?
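(the kind of schedule i have in mind, as a crontab sketch -- the
rotation script here is hypothetical, just a wrapper around
lvcreate -s / lvremove:)

```shell
# crontab sketch for netapp-style snapshot rotation;
# rotate-snaps.sh is a hypothetical wrapper script
0 * * * *   /usr/local/sbin/rotate-snaps.sh hourly
30 0 * * *  /usr/local/sbin/rotate-snaps.sh daily
45 0 * * 0  /usr/local/sbin/rotate-snaps.sh weekly
```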

in searching the archives i heard about zumastor.  is that really
production-ready?  the lack of new releases in the last 2 years, and
it not being in the mainline kernel, makes me leery of it.  i think
we can live with the factor-of-10 performance degradation on a daily
basis--we can turn off all the snapshots in case we really have to
hammer the server (which has a 4Gbit uplink to a render farm, so it is
possible for us to actually write over 70MByte/sec when things are
humming, via NFS), and in general it serves SMB at closer to 400Mbit
than 4000, so all the desktop users will not notice a difference.
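(dropping the snapshots for a heavy push is quick, at least -- again,
made-up names:)

```shell
# remove all snapshots of vg0/data before hammering the server
# (lv names hypothetical)
lvremove -f vg0/data_snap0 vg0/data_snap1 vg0/data_snap2
```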

it seems that others have seen these problems:
http://www.nikhef.nl/~dennisvd/lvmcrap.html as an example.

any thoughts?

thanks in advance for your input!


Thread overview: 17+ messages
2010-11-12 21:52 [linux-lvm] advice for curing terrible snapshot performance? chris (fool) mccraw
2010-11-12 22:28 ` Joe Pruett
2010-11-12 23:30   ` chris (fool) mccraw
2010-11-12 23:36   ` Joe Pruett
2010-11-13  0:17     ` chris (fool) mccraw
2010-11-13  0:58       ` Stuart D Gathman
2010-11-15 17:52         ` chris (fool) mccraw
2010-11-15 18:04           ` Romeo Theriault
2010-11-15 18:08           ` Joe Pruett
2010-11-15 18:18             ` chris (fool) mccraw
2010-11-15 23:51           ` Stuart D. Gathman
2010-11-16  0:09             ` chris (fool) mccraw
2010-11-15 18:05       ` chris (fool) mccraw
2010-11-15 14:35 ` Romeo Theriault
2010-11-15 17:46   ` chris (fool) mccraw
2010-11-15 20:37 ` Stephane Chazelas
2010-11-15 22:57   ` Stuart D. Gathman
