From: Mike Snitzer <snitzer@redhat.com>
To: "Richard W.M. Jones" <rjones@redhat.com>
Cc: Heinz Mauelshagen <heinzm@redhat.com>,
Zdenek Kabelac <zkabelac@redhat.com>,
thornber@redhat.com,
LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Testing the new LVM cache feature
Date: Fri, 30 May 2014 09:38:14 -0400
Message-ID: <20140530133814.GB8830@redhat.com>
In-Reply-To: <20140530090422.GB31293@redhat.com>
On Fri, May 30 2014 at 5:04am -0400,
Richard W.M. Jones <rjones@redhat.com> wrote:
> On Thu, May 29, 2014 at 05:58:15PM -0400, Mike Snitzer wrote:
> > On Thu, May 29 2014 at 5:19pm -0400, Richard W.M. Jones <rjones@redhat.com> wrote:
> > > I'm concerned that would delete all the data on the origin LV ...
> >
> > OK, but how are you testing with fio at this point? Doesn't that
> > destroy data too?
>
> I'm testing with files. This matches my final configuration which is
> to use qcow2 files on an ext4 filesystem to store the VM disk images.
>
> I set read_promote_adjustment == write_promote_adjustment == 1 and ran
> fio 6 times, reusing the same test files.
>
> It is faster than HDD (slower layer), but still much slower than the
> SSD (fast layer). Across the fio runs it's about 5 times slower than
> the SSD, and the times don't improve at all over the runs. (It is
> more than twice as fast as the HDD though).
>
> Somehow something is not working as I expected.
Why are you setting {read,write}_promote_adjustment to 1? I asked you
to set write_promote_adjustment to 0.
Your random fio job won't hit the same blocks, and md5sum likely uses
buffered IO, so unless you set both adjustments to 0 the cache won't
promote blocks as aggressively as you're expecting.
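For reference, a minimal sketch of how to change those tunables on the
running cache device with dmsetup (the device name below is only a
placeholder; use whatever 'dmsetup ls' reports for your cached LV):

  # mq policy tunables are set at runtime via dmsetup message
  # (see Documentation/device-mapper/cache-policies.txt)
  dmsetup message vg-cachedlv 0 read_promote_adjustment 0
  dmsetup message vg-cachedlv 0 write_promote_adjustment 0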
I explained earlier in this thread that dm-cache is currently a
"hotspot cache", not a pure writeback cache like you're hoping for.
We're working to make it fit your expectations (you aren't alone in
expecting more performance!).
> Back to an earlier point. I wrote and you replied:
>
> > > What would be bad about leaving write_promote_adjustment set at 0 or 1?
> > > Wouldn't that mean that I get a simple LRU policy? (That's probably
> > > what I want.)
> >
> > Leaving them at 0 could result in cache thrashing. But given how
> > large your SSD is in relation to the origin you'd likely be OK for a
> > while (at least until your cache gets quite full).
>
> My SSD is ~200 GB and the backing origin LV is ~800 GB. It is
> unlikely the working set will ever grow > 200 GB, not least because I
> cannot run that many VMs at the same time on the cluster.
>
> So should I be concerned about cache thrashing? Specifically: If the
> cache layer gets full, then it will send the least recently used
> blocks back to the slow layer, right? (It seems obvious, but I'd like
> to check that)
Right, you should be fine. But I'll defer to Heinz on the particulars
of the cache replacement strategy provided in this case by the "mq"
(multi-queue) policy.