* btrfs on top of LVM on top of MD RAID1 supported?
From: Nigel Kukard @ 2024-03-02 15:01 UTC
To: linux-btrfs
Hi there,
I hope everyone is doing great today!
I'm wondering if btrfs on top of LVM on top of MD RAID1 is supported?
I've managed to reproduce severe data corruption with 100% reliability
using this configuration on kernel 6.6.19.
2 x 1.92T NVMes in MD RAID1 configuration
LVM volume created on top of the MD RAID1
btrfs filesystem on the LV
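For reference, the stack is assembled roughly like this (a sketch with
example device names, not the exact commands from these hosts):

mdadm --create /dev/md/0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm --wait /dev/md/0                       # let the initial sync finish
pvcreate /dev/md/0
vgcreate lvm-raid /dev/md/0
lvcreate --size 1T --name images lvm-raid
mkfs.btrfs -L images /dev/lvm-raid/images
mount /dev/lvm-raid/images /mnt/images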
I then write about 100-200G of data. Create a snapshot. Read/write the
file and get these messages...
Mar 02 11:34:01 xxxx kernel: BTRFS error (device dm-1): bdev
/dev/mapper/lvm--raid-images errs: wr 0, rd 0, flush 0, corrupt 43, gen 0
Mar 02 11:34:01 xxxx kernel: BTRFS warning (device dm-1): csum failed
root 5 ino 274 off 12477722624 csum 0xea911494 expected csum 0xc29349a8
mirror 1
Mar 02 11:34:01 xxxx kernel: BTRFS error (device dm-1): bdev
/dev/mapper/lvm--raid-images errs: wr 0, rd 0, flush 0, corrupt 44, gen 0
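The per-device counters in those messages (wr/rd/flush/corrupt/gen) can
also be read directly; with the filesystem mounted at, say, /mnt/images:

btrfs device stats /mnt/images    # corruption_errs matches the "corrupt" count above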
It seems to be more related to larger files; a bunch of smaller files I
have didn't show any corruption until a few hours later. Snapshots and
subvolumes don't seem to affect the results in a noticeable way.
I've managed to reproduce it on 4 completely different systems. I first
thought it might be a hardware issue, but it's consistent across
completely different enterprise platforms; they all have enterprise NVMe
disks, but from different brands.
Using ext4 and comparing data after each set of writes, everything is
consistent. It's just btrfs that seems to be having an issue.
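The test loop is roughly the following (paths and sizes are
illustrative, not my exact script):

# write a large file and record a checksum
dd if=/dev/urandom of=/mnt/images/vm1/bigfile bs=1M count=102400 status=progress
sha256sum /mnt/images/vm1/bigfile > /tmp/sums

# snapshot the subvolume, rewrite part of the file, then re-read it
btrfs subvolume snapshot /mnt/images/vm1 /mnt/images/vm1-snap
dd if=/dev/urandom of=/mnt/images/vm1/bigfile bs=1M count=1024 conv=notrunc
cat /mnt/images/vm1/bigfile > /dev/null    # csum failures land in dmesg here

On ext4 the equivalent loop (minus the snapshot) verifies clean every
time; on btrfs the re-read is what trips the checksum errors above.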
When running a scrub, I get these messages...
Mar 02 11:39:38 xxxx kernel: BTRFS info (device dm-1): scrub: started on
devid 1
Mar 02 11:39:39 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 4036689920 on dev
/dev/mapper/lvm--raid-images physical 5118820352
Mar 02 11:39:39 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 4295163904 on dev
/dev/mapper/lvm--raid-images physical 5377294336
Mar 02 11:39:39 xxxx kernel: BTRFS warning (device dm-1): checksum error
at logical 4295163904 on dev /dev/mapper/lvm--raid-images, physical
5377294336, root 263, inode>
Mar 02 11:39:39 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 4307615744 on dev
/dev/mapper/lvm--raid-images physical 5389746176
Mar 02 11:39:39 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 4511760384 on dev
/dev/mapper/lvm--raid-images physical 5593890816
Mar 02 11:39:40 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 8687386624 on dev
/dev/mapper/lvm--raid-images physical 9769517056
Mar 02 11:39:40 xxxx kernel: BTRFS warning (device dm-1): checksum error
at logical 8687386624 on dev /dev/mapper/lvm--raid-images, physical
9769517056, root 267, inode>
Mar 02 11:39:40 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 8689352704 on dev
/dev/mapper/lvm--raid-images physical 9771483136
Mar 02 11:39:40 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 8690335744 on dev
/dev/mapper/lvm--raid-images physical 9772466176
Mar 02 11:39:40 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 8691974144 on dev
/dev/mapper/lvm--raid-images physical 9774104576
Mar 02 11:39:40 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 8692236288 on dev
/dev/mapper/lvm--raid-images physical 9774366720
Mar 02 11:39:40 xxxx kernel: BTRFS error (device dm-1): unable to fixup
(regular) error at logical 8692957184 on dev
/dev/mapper/lvm--raid-images physical 9775087616
Mar 02 11:39:40 xxxx kernel: BTRFS error (device dm-1): fixed up error
at logical 8723496960 on dev /dev/mapper/lvm--raid-images physical
9805627392
It gets progressively worse over time, the more writes happen: starting
at around 800 or so errors (most of them uncorrectable) and escalating
into the thousands.
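For completeness, the scrub was invoked along these lines (mount point
is an example):

btrfs scrub start -B /mnt/images    # -B: run in the foreground, print totals at the end
btrfs scrub status /mnt/images      # progress/error summary for a background run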
I'm using the default btrfs mount options, on Arch Linux.
(please kindly CC me as I'm not subscribed to the mailing list)
Kind regards
-N
* Re: btrfs on top of LVM on top of MD RAID1 supported?
From: Roman Mamedov @ 2024-03-02 15:47 UTC
To: Nigel Kukard; +Cc: linux-btrfs
On Sat, 2 Mar 2024 15:01:43 +0000
Nigel Kukard <nkukard@LBSD.net> wrote:
> I'm wondering if btrfs on top of LVM on top of MD RAID1 is supported?
Should be absolutely supported.
> I've managed to reproduce severe data corruption with 100% reliability
> using this configuration on kernel 6.6.19.
>
> 2 x 1.92T NVMes in MD RAID1 configuration
> LVM volume created on top of the MD RAID1
> btrfs filesystem on the LV
>
> I then write about 100-200G of data. Create a snapshot. Read/write the
> file and get these messages...
Has the MD RAID1 finished its initial sync after creation?
Have you tried waiting until it finishes and only then doing the writes,
to see if the corruption is still observed? (Of course that's in no way a
"workaround", just to see what might cause the bug.)
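The sync state is easy to check, e.g.:

cat /proc/mdstat            # shows "resync = ...%" while the initial sync runs
mdadm --detail /dev/md/0    # State should read "clean" (or "active") once done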
Do you know any kernel versions or series where this corruption did not happen?
I assume you're not saying it appeared in 6.6.19 compared to 6.6.18.
Can you try the 6.1 series? Or maybe also 6.8?
I made a Btrfs on top of LVM on top of RAID1 myself now, but on consumer SSDs.
Copying some files to it now, to check. It would also be helpful if you can find
a precise sequence of commands which can trigger the bug, less vague than
"copy some files to it and see".
--
With respect,
Roman
* Re: btrfs on top of LVM on top of MD RAID1 supported?
From: Nigel Kukard @ 2024-03-02 16:09 UTC
To: Roman Mamedov; +Cc: linux-btrfs
Hi there Roman,
On 3/2/24 15:47, Roman Mamedov wrote:
> On Sat, 2 Mar 2024 15:01:43 +0000
> Nigel Kukard <nkukard@LBSD.net> wrote:
>
>> I'm wondering if btrfs on top of LVM on top of MD RAID1 is supported?
> Should be absolutely supported.
>
>> I've managed to reproduce severe data corruption with 100% reliability
>> using this configuration on kernel 6.6.19.
>>
>> 2 x 1.92T NVMes in MD RAID1 configuration
>> LVM volume created on top of the MD RAID1
>> btrfs filesystem on the LV
>>
>> I then write about 100-200G of data. Create a snapshot. Read/write the
>> file and get these messages...
> Has the MD RAID1 finished its initial sync after creation?
>
> Have you tried waiting until it finishes and only then doing the writes,
> to see if the corruption is still observed? (Of course that's in no way a
> "workaround", just to see what might cause the bug.)
Sync completed on all 4 systems I tested on before creating the LVM
volume and filesystem.
>
> Do you know any kernel versions or series where this corruption did not happen?
>
> I assume you're not saying it appeared in 6.6.19 compared to 6.6.18.
Sadly, this is the first time I'm testing this specific setup. These are
production systems, so changing kernel versions is a bit tricky.
Let me try to reproduce it on another bit of hardware; I'll report back.
>
> Can you try the 6.1 series? Or maybe also 6.8?
Sure, let me see if I can set something up and reproduce it, then we can
test different kernel versions and start bisecting.
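If it reproduces reliably there, I'd expect the bisect to look roughly
like this (each step meaning: build that kernel, boot it, re-run the test):

git bisect start
git bisect bad v6.6     # series where the corruption shows up
git bisect good v6.1    # series believed clean
git bisect good         # or "git bisect bad" after each test, until it converges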
> I made a Btrfs on top of LVM on top of RAID1 myself now, but on consumer SSDs.
> Copying some files to it now, to check. It would also be helpful if you can find
> a precise sequence of commands which can trigger the bug, less vague than
> "copy some files to it and see".
Sure thing. Let me see if I can trim things down and get a simple
sequence of commands to make things easier.
-N
* Re: btrfs on top of LVM on top of MD RAID1 supported?
From: Roman Mamedov @ 2024-03-02 16:19 UTC
To: Nigel Kukard; +Cc: linux-btrfs
On Sat, 2 Mar 2024 20:47:26 +0500
Roman Mamedov <rm@romanrm.net> wrote:
> I made a Btrfs on top of LVM on top of RAID1 myself now, but on consumer SSDs.
> Copying some files to it now, to check. It would also be helpful if you can find
> a precise sequence of commands which can trigger the bug, less vague than
> "copy some files to it and see".
After some file copying, snapshotting and reading back, I did not manage to
reproduce any corruption so far, on 6.6.18.
--
With respect,
Roman
* Re: btrfs on top of LVM on top of MD RAID1 supported?
From: Nigel Kukard @ 2024-03-19 11:05 UTC
To: Roman Mamedov; +Cc: linux-btrfs
Hi there Roman,
On 3/2/24 15:47, Roman Mamedov wrote:
> On Sat, 2 Mar 2024 15:01:43 +0000
> Nigel Kukard <nkukard@LBSD.net> wrote:
>
>> I'm wondering if btrfs on top of LVM on top of MD RAID1 is supported?
> Should be absolutely supported.
>
>> I've managed to reproduce severe data corruption with 100% reliability
>> using this configuration on kernel 6.6.19.
>>
>> 2 x 1.92T NVMes in MD RAID1 configuration
>> LVM volume created on top of the MD RAID1
>> btrfs filesystem on the LV
>>
>> I then write about 100-200G of data. Create a snapshot. Read/write the
>> file and get these messages...
> Has the MD RAID1 finished its initial sync after creation?
>
> Have you tried waiting until it finishes and only then doing the writes,
> to see if the corruption is still observed? (Of course that's in no way a
> "workaround", just to see what might cause the bug.)
>
> Do you know any kernel versions or series where this corruption did not happen?
>
> I assume you're not saying it appeared in 6.6.19 compared to 6.6.18.
>
> Can you try the 6.1 series? Or maybe also 6.8?
>
> I made a Btrfs on top of LVM on top of RAID1 myself now, but on consumer SSDs.
> Copying some files to it now, to check. It would also be helpful if you can find
> a precise sequence of commands which can trigger the bug, less vague than
> "copy some files to it and see".
>
I'm now absolutely convinced this is some kind of a bug.
Over the past 2 weeks I've migrated a number of libvirtd qcow2 files
over to btrfs filesystems, and on every single system with NVMes with
LVM on top of MD I'm getting errors like the ones below.
I'm not sure if it's relevant, but the VMs run weekly TRIMs on their
disks. The host system also runs weekly TRIMs.
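On the host that's effectively the standard weekly fstrim timer (I'd
have to double-check the exact unit, but something like):

systemctl list-timers fstrim.timer    # weekly by default on Arch
fstrim -av                            # trim all mounted filesystems, verbose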
The first case I mentioned above was with new systems I installed on
brand-new disks. I'm now seeing the same issue on systems that are 1-2
years old, also with NVMes, which were running ext4 in the past.
On all systems I started from a clean LV and created the btrfs filesystem with:
mkfs.btrfs -L images /dev/lvm-raid/images
The LVM volumes were created normally with:
pvcreate /dev/md/0
vgcreate lvm-raid /dev/md/0
lvcreate --size 1T --name images lvm-raid
I'm running one subvolume folder per virtual machine. Each folder
contains 1-2 files of between 25G and 500G in size, which are my qcow2 files.
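Per VM that means something like (names are examples):

btrfs subvolume create /mnt/images/vm1
qemu-img create -f qcow2 /mnt/images/vm1/disk0.qcow2 100G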
I'm running btrfs-progs version 6.7.1-1 on all machines and kernel
6.6.18, 6.6.19 or 6.6.20. It seems to occur on all 3 of them.
Most of the NVMes are Samsungs...
Model Number: SAMSUNG MZQLB1T9HAJR-00007
Serial Number: S439NA0MA01592
Firmware Version: EDA5402Q
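That identification comes from the usual tools, e.g.:

smartctl -i /dev/nvme0    # Model Number / Serial Number / Firmware Version
nvme list                 # one-line summary per NVMe controller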
Here is the output in the logs when running: btrfs scrub <subvolume>
[1693754.276834] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 214229778432 on dev /dev/mapper/lvm--raid-images
physical 215311908864
[1693754.415563] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 214611918848 on dev /dev/mapper/lvm--raid-images
physical 215694049280
[1693754.877399] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 216889556992 on dev /dev/mapper/lvm--raid-images
physical 217971687424
[1693754.878195] BTRFS warning (device dm-1): checksum error at logical
216889556992 on dev /dev/mapper/lvm--raid-images, physical 217971687424,
root 256, inode 259, offset 4607844352, length 4096, links 1 (path: [redacted])
[1693755.503582] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 219655766016 on dev /dev/mapper/lvm--raid-images
physical 220737896448
[1693755.503628] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 219655700480 on dev /dev/mapper/lvm--raid-images
physical 220737830912
[1693755.505141] BTRFS warning (device dm-1): checksum error at logical
219655700480 on dev /dev/mapper/lvm--raid-images, physical 220737830912,
root 257, inode 260, offset 3594641408, length 4096, links 1 (path: [redacted].qcow2)
[1693755.627722] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 220037513216 on dev /dev/mapper/lvm--raid-images
physical 221119643648
[1693755.628515] BTRFS warning (device dm-1): checksum error at logical
220037513216 on dev /dev/mapper/lvm--raid-images, physical 221119643648,
root 5, inode 269, offset 7562219520, length 4096, links 1 (path: [redacted]/[redacted].qcow2)
[1693756.072457] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 222065065984 on dev /dev/mapper/lvm--raid-images
physical 223147196416
[1693760.488641] scrub_stripe_report_errors: 1 callbacks suppressed
[1693760.488645] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 236760006656 on dev /dev/mapper/lvm--raid-images
physical 238915878912
[1693760.489456] scrub_stripe_report_errors: 1 callbacks suppressed
[1693760.489487] BTRFS warning (device dm-1): checksum error at logical
236760006656 on dev /dev/mapper/lvm--raid-images, physical 238915878912,
root 256, inode 259, offset 2252308480, length 4096, links 1 (path: [redacted])
[1693760.602360] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 237096009728 on dev /dev/mapper/lvm--raid-images
physical 239251881984
[1693760.815775] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 238488584192 on dev /dev/mapper/lvm--raid-images
physical 240644456448
[1693760.816568] BTRFS warning (device dm-1): checksum error at logical
238488584192 on dev /dev/mapper/lvm--raid-images, physical 240644456448,
root 256, inode 259, offset 4523651072, length 4096, links 1 (path: [redacted])
[1693760.819925] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 238511390720 on dev /dev/mapper/lvm--raid-images
physical 240667262976
[1693760.887074] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 238796603392 on dev /dev/mapper/lvm--raid-images
physical 240952475648
[1693760.887905] BTRFS warning (device dm-1): checksum error at logical
238796603392 on dev /dev/mapper/lvm--raid-images, physical 240952475648,
root 256, inode 259, offset 4549595136, length 4096, links 1 (path: [redacted])
[1693760.913950] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 238889271296 on dev /dev/mapper/lvm--raid-images
physical 241045143552
[1693760.996105] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 239231434752 on dev /dev/mapper/lvm--raid-images
physical 241387307008
[1693760.999121] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 239241396224 on dev /dev/mapper/lvm--raid-images
physical 241397268480
[1693761.005686] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 239266430976 on dev /dev/mapper/lvm--raid-images
physical 241422303232
[1693761.011476] BTRFS error (device dm-1): unable to fixup (regular)
error at logical 239321481216 on dev /dev/mapper/lvm--raid-images
physical 241477353472
[1693771.039669] BTRFS info (device dm-1): scrub: finished on devid 1
with status: 0
* Re: btrfs on top of LVM on top of MD RAID1 supported?
From: Nigel Kukard @ 2024-03-19 13:26 UTC
To: Roman Mamedov; +Cc: linux-btrfs
Hi there,
On 3/19/24 11:05, Nigel Kukard wrote:
>
> Hi there Roman,
>
> On 3/2/24 15:47, Roman Mamedov wrote:
>> On Sat, 2 Mar 2024 15:01:43 +0000
>> Nigel Kukard <nkukard@LBSD.net> wrote:
>>
>>> I'm wondering if btrfs on top of LVM on top of MD RAID1 is supported?
>> Should be absolutely supported.
>>
>>> I've managed to reproduce severe data corruption with 100% reliability
>>> using this configuration on kernel 6.6.19.
>>>
>>> 2 x 1.92T NVMes in MD RAID1 configuration
>>> LVM volume created on top of the MD RAID1
>>> btrfs filesystem on the LV
>>>
>>> I then write about 100-200G of data. Create a snapshot. Read/write the
>>> file and get these messages...
>> Has the MD RAID1 finished its initial sync after creation?
Correct. It doesn't seem to make a difference whether it's synced or not.
I tried it on a new system about a week ago and it took 5 days to
manifest on that system.
On this system I copied more data than on the first 4 systems to try to
reproduce the issue; for a while I thought maybe all 4 systems I
originally had the problem on shared some other issue. But after 5 days
I started noticing the messages in the log.
>>
>> Have you tried waiting until it finishes and only then doing the writes,
>> to see if the corruption is still observed? (Of course that's in no way a
>> "workaround", just to see what might cause the bug.)
Yep, that doesn't seem to be the problem.
>>
>> Do you know any kernel versions or series where this corruption did not happen?
I believe, from searching my logs, that these issues only occurred after
updating from 6.1.56 to 6.6.18. That's the most I've been able to narrow
it down so far.
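For what it's worth, the log search was along these lines:

journalctl --list-boots    # boot IDs and time ranges, to line up with kernel updates
journalctl -k -g 'BTRFS .*(csum failed|unable to fixup)'

Nothing matches from the boots that ran 6.1.56; the hits only start
after the jump to 6.6.x.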
>>
>> I assume you're not saying it appeared in 6.6.19 compared to 6.6.18.
My kernel jumps were all from 6.1.56 to 6.6.18/6.6.19.
>>
>> Can you try the 6.1 series? Or maybe also 6.8?
I'm not seeing any errors in my logs from 6.1.56.
>>
>> I made a Btrfs on top of LVM on top of RAID1 myself now, but on consumer SSDs.
>> Copying some files to it now, to check. It would also be helpful if you can find
>> a precise sequence of commands which can trigger the bug, less vague than
>> "copy some files to it and see".
Yeah, after the first 4 systems, where it manifested almost immediately
(within 1-24 hours), subsequent attempts to reproduce it were
unsuccessful. I tried SSDs of a similar size (1.92T) in my lab, to no
avail; I was unable to reproduce it.
I have since provisioned multiple new servers with NVMes, and they have
all manifested the issue within 5 or so days.
I hope this helps at least a little. It seems hard to reproduce
intentionally.
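If anyone wants to try, the closest thing I can suggest as an
intentional reproducer is sustained random rewrites into a large file,
loosely mimicking the VM write pattern (a sketch, not a confirmed
trigger):

fio --name=vmish --filename=/mnt/images/vm1/stress.img --size=200G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=16 --direct=1 \
    --time_based --runtime=86400
btrfs scrub start -B /mnt/images    # afterwards, check dmesg for csum errors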
-N
>>
> I'm now absolutely convinced this is some kind of a bug.
>
> Over the past 2 weeks I've migrated a number of libvirtd qcow2 files
> over to btrfs filesystems, and on every single system with NVMes with
> LVM on top of MD I'm getting errors like the ones below.
>
> I'm not sure if it's relevant, but the VMs run weekly TRIMs on their
> disks. The host system also runs weekly TRIMs.
>
> The first case I mentioned above was with new systems I installed on
> brand-new disks. I'm now seeing the same issue on systems that are 1-2
> years old, also with NVMes, which were running ext4 in the past.
>
> On all systems I started from a clean LV and created the btrfs filesystem with:
> mkfs.btrfs -L images /dev/lvm-raid/images
>
> The LVM volumes were created normally with:
> pvcreate /dev/md/0
> vgcreate lvm-raid /dev/md/0
> lvcreate --size 1T --name images lvm-raid
>
>
> I'm running one subvolume folder per virtual machine. Each folder
> contains 1-2 files of between 25G and 500G in size, which are my qcow2
> files.
>
>
> I'm running btrfs-progs version 6.7.1-1 on all machines and kernel
> 6.6.18, 6.6.19 or 6.6.20. It seems to occur on all 3 of them.
>
>
> Most of the NVMes are Samsungs...
>
> Model Number: SAMSUNG MZQLB1T9HAJR-00007
> Serial Number: S439NA0MA01592
> Firmware Version: EDA5402Q
>
>
> Here is the output in the logs when running: btrfs scrub <subvolume>
>
> [1693754.276834] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 214229778432 on dev /dev/mapper/lvm--raid-images
> physical 215311908864
> [1693754.415563] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 214611918848 on dev /dev/mapper/lvm--raid-images
> physical 215694049280
> [1693754.877399] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 216889556992 on dev /dev/mapper/lvm--raid-images
> physical 217971687424
> [1693754.878195] BTRFS warning (device dm-1): checksum error at
> logical 216889556992 on dev /dev/mapper/lvm--raid-images, physical
> 217971687424, root 256, inode 259, offset 4607844352, length 4096, links 1 (path: [redacted])
> [1693755.503582] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 219655766016 on dev /dev/mapper/lvm--raid-images
> physical 220737896448
> [1693755.503628] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 219655700480 on dev /dev/mapper/lvm--raid-images
> physical 220737830912
> [1693755.505141] BTRFS warning (device dm-1): checksum error at
> logical 219655700480 on dev /dev/mapper/lvm--raid-images, physical
> 220737830912, root 257, inode 260, offset 3594641408, length 4096, links 1 (path: [redacted].qcow2)
> [1693755.627722] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 220037513216 on dev /dev/mapper/lvm--raid-images
> physical 221119643648
> [1693755.628515] BTRFS warning (device dm-1): checksum error at
> logical 220037513216 on dev /dev/mapper/lvm--raid-images, physical
> 221119643648, root 5, inode 269, offset 7562219520, length 4096, links 1 (path: [redacted]/[redacted].qcow2)
> [1693756.072457] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 222065065984 on dev /dev/mapper/lvm--raid-images
> physical 223147196416
> [1693760.488641] scrub_stripe_report_errors: 1 callbacks suppressed
> [1693760.488645] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 236760006656 on dev /dev/mapper/lvm--raid-images
> physical 238915878912
> [1693760.489456] scrub_stripe_report_errors: 1 callbacks suppressed
> [1693760.489487] BTRFS warning (device dm-1): checksum error at
> logical 236760006656 on dev /dev/mapper/lvm--raid-images, physical
> 238915878912, root 256, inode 259, offset 2252308480, length 4096, links 1 (path: [redacted])
> [1693760.602360] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 237096009728 on dev /dev/mapper/lvm--raid-images
> physical 239251881984
> [1693760.815775] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 238488584192 on dev /dev/mapper/lvm--raid-images
> physical 240644456448
> [1693760.816568] BTRFS warning (device dm-1): checksum error at
> logical 238488584192 on dev /dev/mapper/lvm--raid-images, physical
> 240644456448, root 256, inode 259, offset 4523651072, length 4096, links 1 (path: [redacted])
> [1693760.819925] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 238511390720 on dev /dev/mapper/lvm--raid-images
> physical 240667262976
> [1693760.887074] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 238796603392 on dev /dev/mapper/lvm--raid-images
> physical 240952475648
> [1693760.887905] BTRFS warning (device dm-1): checksum error at
> logical 238796603392 on dev /dev/mapper/lvm--raid-images, physical
> 240952475648, root 256, inode 259, offset 4549595136, length 4096, links 1 (path: [redacted])
> [1693760.913950] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 238889271296 on dev /dev/mapper/lvm--raid-images
> physical 241045143552
> [1693760.996105] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 239231434752 on dev /dev/mapper/lvm--raid-images
> physical 241387307008
> [1693760.999121] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 239241396224 on dev /dev/mapper/lvm--raid-images
> physical 241397268480
> [1693761.005686] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 239266430976 on dev /dev/mapper/lvm--raid-images
> physical 241422303232
> [1693761.011476] BTRFS error (device dm-1): unable to fixup (regular)
> error at logical 239321481216 on dev /dev/mapper/lvm--raid-images
> physical 241477353472
> [1693771.039669] BTRFS info (device dm-1): scrub: finished on devid 1
> with status: 0