public inbox for cryptsetup@lists.linux.dev
From: Marc SCHAEFER <schaefer@alphanet.ch>
To: cryptsetup@lists.linux.dev
Subject: known issue with specific kernel releases and snapshots on LVM on md RAID integrity devices
Date: Fri, 3 Oct 2025 17:01:15 +0200	[thread overview]
Message-ID: <aN/lO3XNKyrinSz5@alphanet.ch> (raw)

Hello,

kernel version (Debian): Linux virtual 6.1.0-40-amd64 #1 SMP PREEMPT_DYNAMIC
Debian 6.1.153-1 (2025-09-20) x86_64 GNU/Linux.

My setup is as follows:

/var/lib/lxc/121 is an ext4 filesystem mounted from /dev/vg1/lxc-121, an LVM
logical volume on the physical volume /dev/md127, which is an md RAID1 built on
md127_i_0 and md127_i_1, dm-integrity devices opened as:

/usr/sbin/integritysetup --integrity crc32 --integrity-bitmap-mode --allow-discards open /dev/sda3 md127_i_0
/usr/sbin/integritysetup --integrity crc32 --integrity-bitmap-mode --allow-discards open /dev/nvme0n1p3 md127_i_1
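For completeness, the rest of the stack was assembled roughly along these
lines (the LV size and mkfs options here are illustrative, not the exact
ones used):

```shell
# RAID1 over the two integrity mappings opened above
mdadm --create /dev/md127 --level=1 --raid-devices=2 \
    /dev/mapper/md127_i_0 /dev/mapper/md127_i_1

# LVM on top of the array
pvcreate /dev/md127
vgcreate vg1 /dev/md127
lvcreate -n lxc-121 -L 50G vg1      # size hypothetical
mkfs.ext4 /dev/vg1/lxc-121
mount /dev/vg1/lxc-121 /var/lib/lxc/121
```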

This setup has been running for months on two different systems and works like
a charm. Early on I also tried forcing errors on one of the physical devices,
and they were correctly detected and rewritten at the md level. Apart from
that, there have been no errors so far.

It is also quite fast.

I preferred this stacking over doing the dm-integrity within LVM itself,
since the latter does not seem to work well with snapshots.

I never had any issue: the systems create dozens of snapshots every few hours
for backup purposes and then delete them.
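A backup cycle looks roughly like this (the snapshot name and mountpoint
are taken from the log below; the snapshot size and repository path are
hypothetical):

```shell
# create a read-only view of the volume, back it up, then drop it
lvcreate -s -n 121_snapshot -L 5G vg1/lxc-121
mount -o ro /dev/vg1/121_snapshot /mnt/snapshot/121_snapshot
restic -r /backup/repo backup /mnt/snapshot/121_snapshot
umount /mnt/snapshot/121_snapshot
lvremove -f vg1/121_snapshot
```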

However, today I got integrity errors [1], with no actual device errors and
nothing unusual in the smartctl output. It looks like some sectors could be
redirected to the other RAID1 mirror, but others were apparently also corrupt
on the other device and thus could not be corrected. This led to the snapshot
device being invalidated. The only consequence is that the backup of this
snapshot did not complete (I back it up with restic and rsync to local and
remote targets).

This could be a completely minor issue, e.g. if a reboot had interrupted the
integrity CRC writes; however, there was no such event (the system is on a UPS
and has only experienced controlled reboots since it was installed).

Do you see anything peculiar with this?

As a work-around I did the following:
   - deleted all snapshots
   - allocated a LVM volume with all free blocks on the volume group
   - wrote /dev/zero to it with dd
   - sync
   - destroyed that LVM volume
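In commands, the workaround was roughly (the temporary LV name is
hypothetical):

```shell
# claim all remaining free extents, overwrite them, then free them again
lvcreate -n wipe -l 100%FREE vg1
dd if=/dev/zero of=/dev/vg1/wipe bs=1M status=progress || true  # dd ends with ENOSPC
sync
lvremove -f vg1/wipe
```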

Thank you for any idea or pointers!


[1]
2025-10-03T13:15:16.359650+02:00 virtual kernel: [612220.519671] device-mapper: integrity: dm-2: Checksum failed at sector 0xd50a6b8
2025-10-03T13:15:16.359662+02:00 virtual kernel: [612220.519703] md/raid1:md127: dm-2: rescheduling sector 223125176
2025-10-03T13:15:16.361212+02:00 virtual kernel: [612220.524315] device-mapper: integrity: dm-3: Checksum failed at sector 0xd514028
2025-10-03T13:15:16.361217+02:00 virtual kernel: [612220.524342] md/raid1:md127: dm-3: rescheduling sector 223164456
2025-10-03T13:15:16.361218+02:00 virtual kernel: [612220.524393] device-mapper: integrity: dm-3: Checksum failed at sector 0xd514030
2025-10-03T13:15:16.361219+02:00 virtual kernel: [612220.524415] md/raid1:md127: dm-3: rescheduling sector 223164464
2025-10-03T13:15:16.361219+02:00 virtual kernel: [612220.524461] device-mapper: integrity: dm-3: Checksum failed at sector 0xd514020
2025-10-03T13:15:16.361220+02:00 virtual kernel: [612220.524479] md/raid1:md127: dm-3: rescheduling sector 223164448
2025-10-03T13:15:16.361220+02:00 virtual kernel: [612220.524550] device-mapper: integrity: dm-3: Checksum failed at sector 0xd514038
2025-10-03T13:15:16.361221+02:00 virtual kernel: [612220.524570] md/raid1:md127: dm-3: rescheduling sector 223164472
2025-10-03T13:15:16.361221+02:00 virtual kernel: [612220.524604] device-mapper: integrity: dm-3: Checksum failed at sector 0xd514008
2025-10-03T13:15:16.361221+02:00 virtual kernel: [612220.524621] md/raid1:md127: dm-3: rescheduling sector 223164424
2025-10-03T13:15:16.365237+02:00 virtual kernel: [612220.524712] device-mapper: integrity: dm-3: Checksum failed at sector 0xd514040
2025-10-03T13:15:16.365246+02:00 virtual kernel: [612220.524731] md/raid1:md127: dm-3: rescheduling sector 223164480
2025-10-03T13:15:16.365247+02:00 virtual kernel: [612220.524827] device-mapper: integrity: dm-3: Checksum failed at sector 0xd51401c
2025-10-03T13:15:16.365248+02:00 virtual kernel: [612220.524842] md/raid1:md127: dm-3: rescheduling sector 223164440
2025-10-03T13:15:16.381305+02:00 virtual kernel: [612220.542185] device-mapper: integrity: dm-2: Checksum failed at sector 0xd50a6b8
2025-10-03T13:15:16.381317+02:00 virtual kernel: [612220.544723] md/raid1:md127: read error corrected (8 sectors at 223389368 on dm-2)
2025-10-03T13:15:16.381351+02:00 virtual kernel: [612220.544729] md/raid1:md127: redirecting sector 223125176 to other mirror: dm-2
2025-10-03T13:15:16.385216+02:00 virtual kernel: [612220.545002] device-mapper: integrity: dm-3: Checksum failed at sector 0xd514028
2025-10-03T13:15:16.385227+02:00 virtual kernel: [612220.545204] md/raid1:md127: redirecting sector 223164456 to other mirror: dm-2
2025-10-03T13:15:16.385228+02:00 virtual kernel: [612220.547073] md/raid1:md127: dm-2: rescheduling sector 223164456
2025-10-03T13:15:16.393219+02:00 virtual kernel: [612220.553809] md/raid1:md127: redirecting sector 223164464 to other mirror: dm-2
2025-10-03T13:15:16.393232+02:00 virtual kernel: [612220.554002] md/raid1:md127: dm-2: rescheduling sector 223164464
2025-10-03T13:15:16.401225+02:00 virtual kernel: [612220.561696] md/raid1:md127: redirecting sector 223164448 to other mirror: dm-2
2025-10-03T13:15:16.414212+02:00 virtual kernel: [612220.576446] md/raid1:md127: redirecting sector 223164472 to other mirror: dm-2
2025-10-03T13:15:16.421730+02:00 virtual kernel: [612220.584276] md/raid1:md127: redirecting sector 223164424 to other mirror: dm-2
2025-10-03T13:15:16.429237+02:00 virtual kernel: [612220.592338] md/raid1:md127: redirecting sector 223164480 to other mirror: dm-2
2025-10-03T13:15:16.437263+02:00 virtual kernel: [612220.599077] md/raid1:md127: redirecting sector 223164440 to other mirror: dm-2
2025-10-03T13:15:16.445281+02:00 virtual kernel: [612220.605304] md/raid1:md127: dm-2: unrecoverable I/O read error for block 223164456
2025-10-03T13:15:16.449231+02:00 virtual kernel: [612220.612399] md/raid1:md127: dm-2: unrecoverable I/O read error for block 223164464
2025-10-03T13:15:16.477217+02:00 virtual kernel: [612220.637038] md/raid1:md127: dm-2: unrecoverable I/O read error for block 223164448
2025-10-03T13:15:16.485263+02:00 virtual kernel: [612220.647901] md/raid1:md127: dm-2: unrecoverable I/O read error for block 223164472
2025-10-03T13:15:16.505219+02:00 virtual kernel: [612220.664740] md/raid1:md127: dm-2: unrecoverable I/O read error for block 223164424
2025-10-03T13:15:16.533219+02:00 virtual kernel: [612220.696016] md/raid1:md127: dm-2: unrecoverable I/O read error for block 223164480
2025-10-03T13:15:16.565224+02:00 virtual kernel: [612220.725286] md/raid1:md127: dm-2: unrecoverable I/O read error for block 223164440
2025-10-03T13:15:16.629223+02:00 virtual kernel: [612220.789079] device-mapper: snapshots: Invalidating snapshot: Error reading/writing.
2025-10-03T13:15:16.641509+02:00 virtual dmeventd[3523319]: WARNING: Snapshot vg1-121_snapshot changed state to: Invalid and should be removed.
2025-10-03T13:15:16.641620+02:00 virtual dmeventd[3523319]: Unmounting invalid snapshot vg1-121_snapshot from /mnt/snapshot/121_snapshot.
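To correlate the two layers in the log: dm-integrity reports the failed
sector in hex, relative to the member device, while md logs a decimal
sector relative to the array. In my log they differ by a constant 264192
sectors, which I assume is the md data offset of the members (as reported
by mdadm --examine under "Data Offset"). A quick sanity check:

```shell
# first dm-2 checksum error vs. the matching md rescheduling line
dm_sector=$(( 0xd50a6b8 ))   # 223389368, device-relative
md_sector=223125176          # array-relative
echo "difference: $(( dm_sector - md_sector )) sectors"
# prints: difference: 264192 sectors (same for the dm-3 pairs)
```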



Thread overview: 3+ messages
2025-10-03 15:01 Marc SCHAEFER [this message]
2025-10-04 18:34 ` known issue with specific kernel releases and snapshots on LVM on md RAID integrity devices Milan Broz
2025-10-05 14:59   ` Marc SCHAEFER
