From: Mark Syms <mark@marksyms.me.uk>
To: linux-lvm@redhat.com
Subject: [linux-lvm] Checking alignment on existing volumes
Date: Tue, 27 Jan 2015 18:15:04 +0000
Message-ID: <54C7D5A8.3080805@marksyms.me.uk>
Hi,
Is it possible to check the alignment of the various pieces involved in
an LVM2-on-mdadm RAID5 stack to make sure everything sits on 4k boundaries?
I have a system with four 2 TB drives (4k physical sectors, GPT
partitioned) carrying RAID 1 and RAID 5 mdadm arrays (RAID 1 for /boot,
everything else on the RAID 5). The RAID 5 array is used as an LVM2 PV,
which in turn holds multiple LVs. The partitions backing the arrays were
created with parted, are aligned to 4k, and report as such with
"align-check optimal".
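For reference, the sort of per-drive checks I mean look roughly like
this (sda and partition 2 are just stand-ins for one member drive and
its RAID partition):

  # sector sizes the kernel reports for the drive
  cat /sys/block/sda/queue/physical_block_size   # expect 4096
  cat /sys/block/sda/queue/logical_block_size    # usually 512
  # partition start in 512-byte sectors; divisible by 8 => 4k aligned
  cat /sys/block/sda/sda2/start
  # parted's own check
  parted /dev/sda align-check optimal 2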
I get some confusing performance results when I use ioping -WWWs to test
write speed to a test volume (output at the bottom). Periodically things
block for 1-2 seconds, which makes performance quite unpredictable.
I've tried tweaking the dirty ratios and used a modified form of the
second script from this forum post
(http://ubuntuforums.org/showthread.php?t=1494846) to set the RAID
stripe cache size and read-ahead (roughly the settings sketched below),
but without much success.
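In case it matters, the modified script boils down to roughly the
following (md1 is the RAID5 device; the exact values came from the
forum script and I've been experimenting with them):

  # RAID5 stripe cache, in pages per member device (default 256)
  echo 4096 > /sys/block/md1/md/stripe_cache_size
  # read-ahead on the md device, in 512-byte sectors
  blockdev --setra 65536 /dev/md1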
Running "pvs -o +pe_start", gives
PV VG Fmt Attr PSize PFree 1st PE
/dev/md1 vol1 lvm2 a-- 5.44t 4.08t 1.50m
which, if I'm reading it right (the man page isn't much help here), says
that the first physical extent starts 1.50 MiB into the device; 1.50 MiB
is a multiple of 4k, so the PV data area should sit on a 4k boundary?
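If it helps to see exact sector offsets instead of rounded units, I
believe something like this shows the same field in 512-byte sectors
(a value divisible by 8 would mean 4k alignment), plus the chunk size
of the underlying array for comparison:

  pvs --units s -o +pe_start /dev/md1
  mdadm --detail /dev/md1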
Any suggestions for further things to try would be much appreciated.
Please reply to me directly as I'm not subscribed to the list.
Thanks,
Mark.
-----------------------------------
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=6 time=159.7 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=7 time=146.9 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=8 time=144.1 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=9 time=144.6 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=10 time=144.6 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=11 time=147.8 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=12 time=161.7 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=13 time=1.9 s
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=14 time=175.8 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=15 time=163.0 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=16 time=140.4 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=17 time=182.7 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=18 time=155.3 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=19 time=173.8 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=20 time=2.2 s
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=21 time=158.5 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=22 time=144.5 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=23 time=166.3 ms
10.0 MiB from /dev/vol1/test (device 100.0 GiB): request=24 time=147.1 ms