From: "Richard W.M. Jones" <rjones@redhat.com>
To: Mike Snitzer <snitzer@redhat.com>
Cc: Heinz Mauelshagen <heinzm@redhat.com>,
Zdenek Kabelac <zkabelac@redhat.com>,
thornber@redhat.com,
LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Testing the new LVM cache feature
Date: Fri, 30 May 2014 15:36:59 +0100 [thread overview]
Message-ID: <20140530143659.GP1302@redhat.com> (raw)
In-Reply-To: <20140530142926.GA9219@redhat.com>
On Fri, May 30, 2014 at 10:29:26AM -0400, Mike Snitzer wrote:
> sequential_threshold is only going to help the md5sum's IO get promoted
> (assuming you're having it read a large file).
Note the fio test runs on the virt.* files. I'm using md5sum in an
attempt to pull those same files into the SSD.
> > Is there a way to print the current settings?
> >
> > Could writethrough be enabled? (I'm supposed to be using writeback).
> > How do I find out?
>
> dmsetup status vg_guests-libvirt--images
Here's the dmsetup table and status output for the various objects:
$ sudo dmsetup table
vg_guests-lv_cache_cdata: 0 419430400 linear 8:33 2099200
vg_guests-lv_cache_cmeta: 0 2097152 linear 8:33 2048
vg_guests-home: 0 209715200 linear 9:127 2048
vg_guests-libvirt--images: 0 1677721600 cache 253:1 253:0 253:2 128 0 default 0
vg_guests-libvirt--images_corig: 0 1677721600 linear 9:127 2055211008
$ sudo dmsetup status vg_guests-libvirt--images
0 1677721600 cache 8 10162/262144 128 39839/3276800 1087840 821795 116320 2057235 0 39835 0 1 writeback 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 0 discard_promote_adjustment 1 read_promote_adjustment 0 write_promote_adjustment 0
$ sudo dmsetup status vg_guests-lv_cache_cdata
0 419430400 linear
$ sudo dmsetup status vg_guests-lv_cache_cmeta
0 2097152 linear
$ sudo dmsetup status vg_guests-libvirt--images_corig
0 1677721600 linear
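For what it's worth, the target-specific fields in that cache status line should follow the order given in the kernel's Documentation/device-mapper/cache.txt (metadata block size, used/total metadata blocks, cache block size, used/total cache blocks, then read hits/misses, write hits/misses, demotions, promotions, dirty). A quick sketch, with the field positions assumed from that document and the numbers copied from the output above, pulls out the hit rates:

```shell
# Field order assumed from Documentation/device-mapper/cache.txt;
# numbers copied from the dmsetup status output above.
status="0 1677721600 cache 8 10162/262144 128 39839/3276800 1087840 821795 116320 2057235 0 39835 0"
set -- $status
read_hits=$8
read_misses=$9
write_hits=${10}
write_misses=${11}
promotions=${13}
dirty=${14}
echo "read hit rate:  $(( 100 * read_hits / (read_hits + read_misses) ))%"
echo "write hit rate: $(( 100 * write_hits / (write_hits + write_misses) ))%"
echo "promotions: $promotions  dirty: $dirty"
```

which gives roughly a 56% read hit rate but only a ~5% write hit rate, consistent with reads being promoted but writes mostly missing the cache.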
> But I'm really wondering if your IO is misaligned (like my earlier email
> brought up). It _could_ be promoting 2 64K blocks from the origin for
> every 64K IO.
There's nothing obviously wrong ...
** For the SSD **
$ sudo fdisk -l /dev/sdc
Disk /dev/sdc: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3e302f2a
Device Boot Start End Blocks Id System
/dev/sdc1 2048 488397167 244197560 8e Linux LVM
The PV is placed directly on /dev/sdc1.
** For the HDD array **
$ sudo fdisk -l /dev/sd{a,b}
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B9545B67-681D-4729-A8A0-C75CB2EFFCB1
Device Start End Size Type
/dev/sda1 2048 3907029134 1.8T Linux filesystem
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EFA66BD1-E813-4826-88A2-F2BB3C2E093E
Device Start End Size Type
/dev/sdb1 2048 3907029134 1.8T Linux filesystem
$ cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb1[2] sda1[1]
1953382272 blocks super 1.2 [2/2] [UU]
unused devices: <none>
The PV is placed on /dev/md127.
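As a sanity check on the alignment question, the partition start sectors above can be checked directly against the 64 KiB cache block size (128 sectors in the dmsetup table). This is only a sketch covering the partition starts; any misalignment introduced further down the stack (md data offset, LVM extent placement) is a separate question it doesn't cover:

```shell
# Check each partition's byte offset against the 64 KiB cache block size.
# Start sectors copied from the fdisk listings above.
cache_block=65536                        # 128 sectors * 512 bytes = 64 KiB
for part in sdc1:2048 sda1:2048 sdb1:2048; do
    start=${part#*:}                     # start sector
    offset=$(( start * 512 ))            # byte offset; 2048 * 512 = 1 MiB
    if [ $(( offset % cache_block )) -eq 0 ]; then
        echo "/dev/${part%%:*}: start sector ${start} is 64K-aligned"
    else
        echo "/dev/${part%%:*}: start sector ${start} is MISALIGNED"
    fi
done
```

All three partitions start at sector 2048 (a 1 MiB offset), which is a multiple of the cache block size, so at least at the partition level nothing is misaligned.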
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
Thread overview: 37+ messages:
2014-05-22 10:18 [linux-lvm] Testing the new LVM cache feature Richard W.M. Jones
2014-05-22 14:43 ` Zdenek Kabelac
2014-05-22 15:22 ` Richard W.M. Jones
2014-05-22 15:49 ` Richard W.M. Jones
2014-05-22 18:04 ` Mike Snitzer
2014-05-22 18:13 ` Richard W.M. Jones
2014-05-29 13:52 ` Richard W.M. Jones
2014-05-29 20:34 ` Mike Snitzer
2014-05-29 20:47 ` Richard W.M. Jones
2014-05-29 21:06 ` Mike Snitzer
2014-05-29 21:19 ` Richard W.M. Jones
2014-05-29 21:58 ` Mike Snitzer
2014-05-30 9:04 ` Richard W.M. Jones
2014-05-30 10:30 ` Richard W.M. Jones
2014-05-30 13:38 ` Mike Snitzer
2014-05-30 13:40 ` Richard W.M. Jones
2014-05-30 13:42 ` Heinz Mauelshagen
2014-05-30 13:54 ` Richard W.M. Jones
2014-05-30 13:58 ` Zdenek Kabelac
2014-05-30 13:46 ` Richard W.M. Jones
2014-05-30 13:54 ` Heinz Mauelshagen
2014-05-30 14:26 ` Richard W.M. Jones
2014-05-30 14:29 ` Mike Snitzer
2014-05-30 14:36 ` Richard W.M. Jones [this message]
2014-05-30 14:44 ` Mike Snitzer
2014-05-30 14:51 ` Richard W.M. Jones
2014-05-30 14:58 ` Mike Snitzer
2014-05-30 15:28 ` Richard W.M. Jones
2014-05-30 18:16 ` Mike Snitzer
2014-05-30 20:53 ` Mike Snitzer
2014-05-30 13:55 ` Mike Snitzer
2014-05-30 14:29 ` Richard W.M. Jones
2014-05-30 14:36 ` Mike Snitzer
2014-05-30 11:53 ` Mike Snitzer
2014-05-30 11:38 ` Alasdair G Kergon
2014-05-30 11:45 ` Alasdair G Kergon
2014-05-30 12:45 ` Werner Gold