* [linux-lvm] Thin snapshot caching behaviour
From: dw+linux-lvm @ 2014-06-03 15:22 UTC (permalink / raw)
To: linux-lvm
Hi there,
While playing with LVM thin provisioning, I've noticed that snapshots
seem to have different caching semantics compared to their original thin
LV.
I've hunted everywhere for documentation that describes this difference,
or even an indication of which layer it occurs at, but I can find none.
Perhaps someone here could shed some light?
Thin volume behaves like a regular drive:
# echo 1 > /proc/sys/vm/drop_caches
# dd if=/dev/vg0/tv0 of=/dev/null bs=64k count=1000
65536000 bytes (66 MB) copied, 0.946389 s, 69.2 MB/s
# dd if=/dev/vg0/tv0 of=/dev/null bs=64k count=1000
65536000 bytes (66 MB) copied, 0.00810655 s, 8.1 GB/s
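[Editor's note: the speedup in the second dd comes from the page cache. The same effect can be reproduced on any ordinary file, without LVM; this is an illustrative sketch, and dropping caches first requires root, so without it both reads may already be warm.]

```shell
# Reproduce the cache effect on a throwaway file (no LVM needed).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=64k count=1000 2>/dev/null   # ~64 MB of data
sync
# echo 1 > /proc/sys/vm/drop_caches   # (as root) start from a cold cache
time dd if="$f" of=/dev/null bs=64k 2>/dev/null          # first read
time dd if="$f" of=/dev/null bs=64k 2>/dev/null          # second read: page cache
rm -f "$f"
```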
Create activated snapshot:
# lvcreate -kn -s -n tv0s /dev/vg0/tv0
Logical volume "tv0s" created
# dd if=/dev/vg0/tv0s of=/dev/null bs=64k count=1000
65536000 bytes (66 MB) copied, 1.00061 s, 65.5 MB/s
A second dd shows little or no speedup:
# time dd if=/dev/vg0/tv0s of=/dev/null bs=64k count=1000
65536000 bytes (66 MB) copied, 0.921402 s, 71.1 MB/s
Thanks,
David
* Re: [linux-lvm] Thin snapshot caching behaviour
From: Lars Ellenberg @ 2014-06-04 0:19 UTC (permalink / raw)
To: linux-lvm
On Tue, Jun 03, 2014 at 03:22:09PM +0000, dw+linux-lvm@hmmz.org wrote:
> Hi there,
>
> While playing with LVM thin provisioning, I've noticed that snapshots
> seem to have different caching semantics compared to their original thin
> LV.
>
> I've hunted everywhere for documentation that describes this difference,
> or even an indication of which layer it occurs at, but I can find none.
> Perhaps someone here could shed some light?
>
> Thin volume behaves like a regular drive:
>
> # echo 1 > /proc/sys/vm/drop_caches
>
> # dd if=/dev/vg0/tv0 of=/dev/null bs=64k count=1000
> 65536000 bytes (66 MB) copied, 0.946389 s, 69.2 MB/s
>
> # dd if=/dev/vg0/tv0 of=/dev/null bs=64k count=1000
> 65536000 bytes (66 MB) copied, 0.00810655 s, 8.1 GB/s
>
> Create activated snapshot:
>
> # lvcreate -kn -s -n tv0s /dev/vg0/tv0
> Logical volume "tv0s" created
>
> # dd if=/dev/vg0/tv0s of=/dev/null bs=64k count=1000
> 65536000 bytes (66 MB) copied, 1.00061 s, 65.5 MB/s
>
> A second dd shows little or no speedup:
>
> # time dd if=/dev/vg0/tv0s of=/dev/null bs=64k count=1000
> 65536000 bytes (66 MB) copied, 0.921402 s, 71.1 MB/s
# echo 3 > /proc/sys/vm/drop_caches      # drop page cache, dentries, inodes
# free -m                                # baseline cache figures
# exec 77< /dev/vg0/tv0s                 # hold the snapshot device open on fd 77
# time dd if=/dev/vg0/tv0s of=/dev/null bs=64k count=1000   # cold read
# free -m                                # "cached" should have grown
# time dd if=/dev/vg0/tv0s of=/dev/null bs=64k count=1000   # should now be fast
# free -m
# exec 77<&-                             # close fd 77: last opener gone
# free -m                                # the device's cache is dropped
Background:
Linux "forgets" the buffer cache for a block device once its last opener
goes away (nobody holds it open anymore). The "main" device has some
internal references, or is mounted... either way, someone still has it
open, so its cache survives.
At least that's my best guess as to what you are seeing there.
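[Editor's note: the fd-holding trick above can be wrapped into a small helper; a sketch only, where the function name, fd 9, and the device path are illustrative, not anything from LVM itself.]

```shell
# Hold a device (or file) open while a command runs, so the kernel
# keeps its buffer cache alive between reads.
with_device_held() {
    dev=$1; shift
    exec 9< "$dev"    # extra opener: cache survives while fd 9 is open
    "$@"
    rc=$?
    exec 9<&-         # close fd 9: last opener gone, cache may be dropped
    return $rc
}

# Usage against the snapshot from the thread (assumed device name):
# with_device_held /dev/vg0/tv0s sh -c '
#     dd if=/dev/vg0/tv0s of=/dev/null bs=64k count=1000
#     dd if=/dev/vg0/tv0s of=/dev/null bs=64k count=1000  # cached this time
# '
```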
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com