From: "Richard W.M. Jones" <rjones@redhat.com>
To: Mike Snitzer <snitzer@redhat.com>
Cc: LVM general discussion and development <linux-lvm@redhat.com>,
	thornber@redhat.com, Zdenek Kabelac <zkabelac@redhat.com>
Subject: Re: [linux-lvm] Testing the new LVM cache feature
Date: Thu, 29 May 2014 21:47:20 +0100
Message-ID: <20140529204719.GD1302@redhat.com>
In-Reply-To: <20140529203410.GG1954@redhat.com>

On Thu, May 29, 2014 at 04:34:10PM -0400, Mike Snitzer wrote:
> Try using:
> dmsetup message <cache device> 0 write_promote_adjustment 0
> 
> Documentation/device-mapper/cache-policies.txt says:
> 
> Internally the mq policy maintains a promotion threshold variable.  If
> the hit count of a block not in the cache goes above this threshold it
> gets promoted to the cache.  The read, write and discard promote adjustment
> tunables allow you to tweak the promotion threshold by adding a small
> value based on the io type.  They default to 4, 8 and 1 respectively.
> If you're trying to quickly warm a new cache device you may wish to
> reduce these to encourage promotion.  Remember to switch them back to
> their defaults after the cache fills though.

What would be bad about leaving write_promote_adjustment set at 0 or 1?

Wouldn't that mean that I get a simple LRU policy?  (That's probably
what I want.)
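
(For the record, I'm assuming the warm-then-restore sequence would look
roughly like this; vg_guests-testoriginlv is just my guess at the dm
name of the cached LV, and 4, 8 and 1 are the documented defaults:

# lower the thresholds while warming the cache
dmsetup message vg_guests-testoriginlv 0 read_promote_adjustment 0
dmsetup message vg_guests-testoriginlv 0 write_promote_adjustment 0
dmsetup message vg_guests-testoriginlv 0 discard_promote_adjustment 0
# ... run the workload until the cache is warm ...
# put the documented defaults back
dmsetup message vg_guests-testoriginlv 0 read_promote_adjustment 4
dmsetup message vg_guests-testoriginlv 0 write_promote_adjustment 8
dmsetup message vg_guests-testoriginlv 0 discard_promote_adjustment 1

Please correct me if that's not the right device to send the messages to.)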

> Also, if you discard the entire cache device (e.g. using blkdiscard)
> before use you could get a big win, especially if you use:
> dmsetup message <cache device> 0 discard_promote_adjustment 0

To be clear, that means I should do:

lvcreate -L 1G -n lv_cache_meta vg_guests /dev/fast
lvcreate -L 229G -n lv_cache vg_guests /dev/fast
lvconvert --type cache-pool --poolmetadata vg_guests/lv_cache_meta vg_guests/lv_cache
blkdiscard /dev/vg_guests/lv_cache
lvconvert --type cache --cachepool vg_guests/lv_cache vg_guests/testoriginlv

Or should I do the blkdiscard earlier?
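
(As an aside, I've been assuming I can watch whether blocks actually
get promoted by looking at the cache target's status line, again
guessing at the dm name:

dmsetup status vg_guests-testoriginlv

which, if I'm reading the kernel docs right, includes read/write hit
and miss counts plus demotion and promotion counts.)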

[On the separate subject of volume groups ...]

Is there a reason why fast and slow devices need to be in the same VG?

I've talked to two other people who found this very confusing.  Neither
of them knew that you can manually place LVs on specific PVs, and it's
something of a pain to have to remember to do that every time you
create or resize an LV.  It seems it would be a lot simpler if you could
have the slow PVs in one VG and the fast PVs in another VG.
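
(By manual placement I mean appending the PV to each lvcreate/lvextend,
e.g. -- the sizes and the /dev/slow name here are just examples:

lvcreate -L 250G -n testoriginlv vg_guests /dev/slow
lvcreate -L 229G -n lv_cache vg_guests /dev/fast
lvextend -L +10G vg_guests/testoriginlv /dev/slow

and the same again every time an LV grows.)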

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
