From: Cesare Leonardi <celeonar@gmail.com>
To: Ingo Franzki <ifranzki@linux.ibm.com>,
LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size
Date: Sat, 2 Mar 2019 02:36:33 +0100 [thread overview]
Message-ID: <30346b34-c1e1-f7ba-be4e-a37d8ce8cf03@gmail.com> (raw)
In-Reply-To: <eb3fb7e3-9946-f266-815e-4b49c997e3a4@linux.ibm.com>
Hello Ingo,
I've run several tests but was unable to trigger any filesystem
corruption. Maybe the problems you encountered are specific to
encrypted devices?
Yesterday and today I've used:
Debian unstable
kernel 4.19.20
lvm2 2.03.02
e2fsprogs 1.44.5
On 01/03/19 09:05, Ingo Franzki wrote:
> Hmm, maybe the size of the volume plays a role, as Bernd has pointed out. ext4 may use -b 4K by default on larger devices.
> Once the FS uses 4K blocks anyway you won't see the problem.
>
> Use tune2fs -l <device> after you create the file system and check if it is using 4K blocks on your 512/512 device. If so, then you won't see the problem when moved to a 4K block size device.
I confirm that tune2fs reports 4096 block size for the 1 GB ext4
filesystem I've used.
I've also verified what Bernd said: mkfs.ext4 still uses a 4096 block
size for a +512M partition, but uses 1024 for a +500M one.
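For reference, that threshold can be reproduced without touching real
disks, using file-backed images (a sketch, assuming e2fsprogs with its
stock mke2fs.conf; no root needed, since mkfs.ext4 -F works on regular
files):

```shell
# Two sparse image files on either side of the 512 MiB threshold.
truncate -s 400M small.img
truncate -s 600M large.img

# With the stock mke2fs.conf, filesystems under 512 MiB get the "small"
# profile (1 KiB blocks); larger ones get the default 4 KiB blocks.
mkfs.ext4 -q -F small.img
mkfs.ext4 -q -F large.img

tune2fs -l small.img | grep '^Block size'
tune2fs -l large.img | grep '^Block size'
```

On a default Debian setup this prints 1024 for the small image and 4096
for the large one; a customized mke2fs.conf may of course pick
differently.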
As suggested by Stuart, I also made a test using a 4k loop device and
pvmoving the LV onto it. As you expected, no data corruption.
To do it I recreated the same setup as yesterday:
/dev/mapper/vgtest-lvol0 on /dev/sdb4, a 512/512 disk, with some data on
it. Then:
# fallocate -l 10G testdisk.img
# losetup -f -L -P -b 4096 testdisk.img
# pvcreate /dev/loop0
# vgextend vgtest /dev/loop0
# pvmove /dev/sdb4 /dev/loop0
# fsck.ext4 -f /dev/mapper/vgtest-lvol0
While I was there, out of curiosity, I created an ext4 filesystem on
a <500MB LV (block size = 1024) and tried pvmoving the data from the
512/512 disk to a 512/4096 disk, then to the 4096/4096 loop device.
New partitions and a new VG were used for that.
The setup:
/dev/sdb5: 512/512
/dev/sdc2: 512/4096
/dev/loop0: 4096/4096
# blockdev -v --getss --getpbsz --getbsz /dev/sdb
get logical block (sector) size: 512
get physical block (sector) size: 512
get blocksize: 4096
# blockdev -v --getss --getpbsz --getbsz /dev/sdc
get logical block (sector) size: 512
get physical block (sector) size: 4096
get blocksize: 4096
# blockdev -v --getss --getpbsz --getbsz /dev/loop0
get logical block (sector) size: 4096
get physical block (sector) size: 4096
get blocksize: 4096
# pvcreate /dev/sdb5
# vgcreate vgtest2 /dev/sdb5
# lvcreate -L 400M vgtest2 /dev/sdb5
# mkfs.ext4 /dev/mapper/vgtest2-lvol0
# tune2fs -l /dev/mapper/vgtest2-lvol0
[...]
Block size: 1024
[...]
# mount /dev/mapper/vgtest2-lvol0 /media/test
# cp -a SOMEDATA /media/test/
# umount /media/test
# fsck.ext4 -f /dev/mapper/vgtest2-lvol0
Now I've moved data from the 512/512 to the 512/4096 disk:
# pvcreate /dev/sdc2
# vgextend vgtest2 /dev/sdc2
# pvmove /dev/sdb5 /dev/sdc2
# fsck.ext4 -f /dev/mapper/vgtest2-lvol0
No error reported.
Now I've moved data to the 4096/4096 loop device:
# pvcreate /dev/loop0
# vgextend vgtest2 /dev/loop0
# pvmove /dev/sdc2 /dev/loop0
# fsck.ext4 -f /dev/mapper/vgtest2-lvol0
Still no data corruption.
Cesare.
Thread overview: 37+ messages
2019-02-25 15:33 [linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size Ingo Franzki
2019-02-27 0:00 ` Cesare Leonardi
2019-02-27 8:49 ` Ingo Franzki
2019-02-27 14:59 ` Stuart D. Gathman
2019-02-27 17:05 ` Ingo Franzki
2019-03-02 1:37 ` L A Walsh
2019-02-28 1:31 ` Cesare Leonardi
2019-02-28 1:52 ` Stuart D. Gathman
2019-02-28 8:41 ` Ingo Franzki
2019-02-28 9:48 ` Ilia Zykov
2019-02-28 10:10 ` Ingo Franzki
2019-02-28 10:41 ` Ilia Zykov
2019-02-28 10:50 ` Ilia Zykov
2019-02-28 13:13 ` Ilia Zykov
2019-03-01 1:24 ` Cesare Leonardi
2019-03-01 2:56 ` [linux-lvm] Filesystem corruption with LVM's pvmove onto a PVwith " Bernd Eckenfels
2019-03-01 8:00 ` Ingo Franzki
2019-03-01 3:41 ` [linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with " Stuart D. Gathman
2019-03-01 7:59 ` Ingo Franzki
2019-03-01 8:05 ` Ingo Franzki
2019-03-02 1:36 ` Cesare Leonardi [this message]
2019-03-02 20:25 ` Nir Soffer
2019-03-04 22:45 ` Cesare Leonardi
2019-03-04 23:22 ` Nir Soffer
2019-03-05 7:54 ` Ingo Franzki
2019-03-04 9:12 ` Ingo Franzki
2019-03-04 22:10 ` Cesare Leonardi
2019-03-05 0:12 ` Stuart D. Gathman
2019-03-05 7:53 ` Ingo Franzki
2019-03-05 9:29 ` Ilia Zykov
2019-03-05 11:42 ` Ingo Franzki
2019-03-05 16:29 ` Nir Soffer
2019-03-05 16:36 ` David Teigland
2019-03-05 16:56 ` Stuart D. Gathman
2019-02-28 14:36 ` Ilia Zykov
2019-02-28 16:30 ` Ingo Franzki
2019-02-28 18:11 ` Ilia Zykov