* [PATCH] fix random failures in shell/integrity.sh
From: Mikulas Patocka @ 2025-08-04 14:17 UTC
To: Peter Rajnoha, Zdenek Kabelac, Heinz Mauelshagen, David Teigland
Cc: linux-lvm, lvm-devel
Hi
The test shell/integrity.sh creates raid arrays, corrupts one of the
legs, then reads the array and verifies that the corruption was
corrected. Finally, the test checks that the number of mismatches on the
corrupted leg is non-zero.
The problem is that the raid1 implementation may freely choose which leg
to read from. If it chooses to read from the non-corrupted leg, the
corruption is not detected, the number of mismatches is not incremented
and the test reports this as a failure.
Fix the test by not checking the number of integrity mismatches for
raid1.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
---
test/shell/integrity.sh | 14 --------------
1 file changed, 14 deletions(-)
Index: lvm2/test/shell/integrity.sh
===================================================================
--- lvm2.orig/test/shell/integrity.sh 2025-07-29 19:35:54.000000000 +0200
+++ lvm2/test/shell/integrity.sh 2025-08-01 15:08:02.000000000 +0200
@@ -136,10 +136,6 @@ aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
_test_fs_with_read_repair "$dev1"
-lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
-not grep 0 mismatch
-lvs -o integritymismatches $vg/$lv1 |tee mismatch
-not grep 0 mismatch
lvchange -an $vg/$lv1
lvconvert --raidintegrity n $vg/$lv1
lvremove $vg/$lv1
@@ -153,10 +149,6 @@ aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/${lv1}_rimage_2
aux wait_recalc $vg/$lv1
_test_fs_with_read_repair "$dev1" "$dev2"
-lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
-not grep 0 mismatch
-lvs -o integritymismatches $vg/$lv1 |tee mismatch
-not grep 0 mismatch
lvchange -an $vg/$lv1
lvconvert --raidintegrity n $vg/$lv1
lvremove $vg/$lv1
@@ -233,8 +225,6 @@ lvs -o integritymismatches $vg/${lv1}_ri
lvs -o integritymismatches $vg/${lv1}_rimage_1
lvs -o integritymismatches $vg/${lv1}_rimage_2
lvs -o integritymismatches $vg/${lv1}_rimage_3
-lvs -o integritymismatches $vg/$lv1 |tee mismatch
-not grep 0 mismatch
lvchange -an $vg/$lv1
lvconvert --raidintegrity n $vg/$lv1
lvremove $vg/$lv1
@@ -603,10 +593,6 @@ aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
_test_fs_with_read_repair "$dev1"
-lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
-not grep 0 mismatch
-lvs -o integritymismatches $vg/$lv1 |tee mismatch
-not grep 0 mismatch
lvchange -an $vg/$lv1
lvconvert --raidintegrity n $vg/$lv1
lvremove $vg/$lv1
* Re: [PATCH] fix random failures in shell/integrity.sh
From: John Stoffel @ 2025-08-06 21:24 UTC
To: Mikulas Patocka
Cc: Peter Rajnoha, Zdenek Kabelac, Heinz Mauelshagen, David Teigland,
linux-lvm, lvm-devel
>>>>> "Mikulas" == Mikulas Patocka <mpatocka@redhat.com> writes:
> Hi
> The test shell/integrity.sh creates raid arrays, corrupts one of the
> legs, then reads the array and verifies that the corruption was
> corrected. Finally, the test checks that the number of mismatches on
> the corrupted leg is non-zero.
> The problem is that the raid1 implementation may freely choose which leg
> to read from. If it chooses to read from the non-corrupted leg, the
> corruption is not detected, the number of mismatches is not incremented
> and the test reports this as a failure.
So wait, how is integrity supposed to work in this situation then? In
real life? I understand the test is hard; maybe do it in a loop
three times? Or is configuring the RAID1 to prefer one half over the
other the way to make this test work?
> Fix the test by not checking the number of integrity mismatches for
> raid1.
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> ---
> test/shell/integrity.sh | 14 --------------
> 1 file changed, 14 deletions(-)
> Index: lvm2/test/shell/integrity.sh
> ===================================================================
> --- lvm2.orig/test/shell/integrity.sh 2025-07-29 19:35:54.000000000 +0200
> +++ lvm2/test/shell/integrity.sh 2025-08-01 15:08:02.000000000 +0200
> @@ -136,10 +136,6 @@ aux wait_recalc $vg/${lv1}_rimage_0
> aux wait_recalc $vg/${lv1}_rimage_1
> aux wait_recalc $vg/$lv1
> _test_fs_with_read_repair "$dev1"
> -lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
> -not grep 0 mismatch
> -lvs -o integritymismatches $vg/$lv1 |tee mismatch
> -not grep 0 mismatch
> lvchange -an $vg/$lv1
> lvconvert --raidintegrity n $vg/$lv1
> lvremove $vg/$lv1
> @@ -153,10 +149,6 @@ aux wait_recalc $vg/${lv1}_rimage_1
> aux wait_recalc $vg/${lv1}_rimage_2
> aux wait_recalc $vg/$lv1
> _test_fs_with_read_repair "$dev1" "$dev2"
> -lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
> -not grep 0 mismatch
> -lvs -o integritymismatches $vg/$lv1 |tee mismatch
> -not grep 0 mismatch
> lvchange -an $vg/$lv1
> lvconvert --raidintegrity n $vg/$lv1
> lvremove $vg/$lv1
> @@ -233,8 +225,6 @@ lvs -o integritymismatches $vg/${lv1}_ri
> lvs -o integritymismatches $vg/${lv1}_rimage_1
> lvs -o integritymismatches $vg/${lv1}_rimage_2
> lvs -o integritymismatches $vg/${lv1}_rimage_3
> -lvs -o integritymismatches $vg/$lv1 |tee mismatch
> -not grep 0 mismatch
> lvchange -an $vg/$lv1
> lvconvert --raidintegrity n $vg/$lv1
> lvremove $vg/$lv1
> @@ -603,10 +593,6 @@ aux wait_recalc $vg/${lv1}_rimage_0
> aux wait_recalc $vg/${lv1}_rimage_1
> aux wait_recalc $vg/$lv1
> _test_fs_with_read_repair "$dev1"
> -lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
> -not grep 0 mismatch
> -lvs -o integritymismatches $vg/$lv1 |tee mismatch
> -not grep 0 mismatch
> lvchange -an $vg/$lv1
> lvconvert --raidintegrity n $vg/$lv1
> lvremove $vg/$lv1
* Re: [PATCH] fix random failures in shell/integrity.sh
From: matthew patton @ 2025-08-06 23:25 UTC
To: linux-lvm@lists.linux.dev; +Cc: lvm-devel@lists.linux.dev
> The problem is that the raid1 implementation may freely choose which leg
> to read from. If it chooses to read from the non-corrupted leg, the
> corruption is not detected,
which is why man invented ZFS. :)
I don't know what the probability 10^-X is of a silent mismatch between the halves of a mirror, but IMO it's a fatal weakness of pretty much every RAID scheme out there that doesn't checksum each block, be it at the "disk sector" (512/4k) or "filesystem" (4k) level.
Would it make sense for LVM, since it's a shim between the disk device and filesystems, to implement its own checksum scheme? Maybe do it at an "LVM page" notion of 32 disk sectors, followed by a couple of extra disk sectors in which the checksums for each of the preceding 32 are strung together in one packed value? /spit-ball
The *best* answer is for everyone to move to 520/528-byte sectors like serious storage vendors did 50 years ago, but I suspect that would be harder to get past the gatekeepers than drivers written in Rust.
* Re: [PATCH] fix random failures in shell/integrity.sh
From: Stuart D Gathman @ 2025-08-07 4:23 UTC
To: John Stoffel
Cc: Mikulas Patocka, Peter Rajnoha, Zdenek Kabelac, Heinz Mauelshagen,
David Teigland, linux-lvm, lvm-devel
On Wed, 6 Aug 2025, John Stoffel wrote:
>>>>>> "Mikulas" == Mikulas Patocka <mpatocka@redhat.com> writes:
>
>> The problem is that the raid1 implementation may freely choose which leg
>> to read from. If it chooses to read from the non-corrupted leg, the
>> corruption is not detected, the number of mismatches is not incremented
>> and the test reports this as a failure.
>
> So wait, how is integrity supposed to work in this situation then? In
> real life? I understand the test is hard, maybe doing it in a loop
> three times? Or configure the RAID1 to prefer one half over another
> is the way to make this test work?
Linux needs an optional parameter to the read() syscall that is a "leg index"
for the blk interface. Thus, btrfs scrub can check all legs, and this
test can check all legs. Filesystems with checksums can repair corruption
by rewriting the block after finding a leg with a correct csum.
This only needs a few bits (how many legs can there be?), so it could go in
the FLAGS argument.
* Re: [PATCH] fix random failures in shell/integrity.sh
From: Mikulas Patocka @ 2025-08-07 14:07 UTC
To: Stuart D Gathman
Cc: John Stoffel, Peter Rajnoha, Zdenek Kabelac, Heinz Mauelshagen,
David Teigland, linux-lvm, lvm-devel
On Thu, 7 Aug 2025, Stuart D Gathman wrote:
> On Wed, 6 Aug 2025, John Stoffel wrote:
>
> > > > > > > "Mikulas" == Mikulas Patocka <mpatocka@redhat.com> writes:
> >
> > > The problem is that the raid1 implementation may freely choose which leg
> > > to read from. If it chooses to read from the non-corrupted leg, the
> > > corruption is not detected, the number of mismatches is not incremented
> > > and the test reports this as a failure.
> >
> > So wait, how is integrity supposed to work in this situation then? In
> > real life? I understand the test is hard, maybe doing it in a loop
> > three times? Or configure the RAID1 to prefer one half over another
> > is the way to make this test work?
If you want to make sure that you detect (and correct) all mismatches, you
have to scrub the raid array.
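For example (a rough sketch only; $vg/$lv1 follow the test suite's
conventions, and the report columns are the standard lvs fields):

  # kick off a full scrub that also rewrites the bad blocks
  lvchange --syncaction repair $vg/$lv1

  # once it finishes, look at the counters
  lvs -o name,raid_sync_action,raid_mismatch_count,integritymismatches $vg/$lv1

A scrub reads every leg, so it hits the corrupted blocks regardless of
which leg the normal read path would have picked.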
> Linux needs an optional parameter to read() syscall that is "leg index"
> for the blk interface. Thus, btrfs scrub can check all legs, and this
> test can check all legs. Filesystems with checks can repair corruption
> by rewriting the block after finding a leg with correct csum.
>
> This only needs a few bits (how many legs can there be?), so can go in
> the FLAGS argument.
I think that adding a new bit for the read syscalls is not a workable
solution. There are so many programs using the read() syscall, and teaching
them to use this new bit is impossible.
Mikulas
* Re: [PATCH] fix random failures in shell/integrity.sh
From: Zdenek Kabelac @ 2025-08-07 14:26 UTC
To: Mikulas Patocka, Stuart D Gathman
Cc: John Stoffel, Peter Rajnoha, Heinz Mauelshagen, David Teigland,
linux-lvm, lvm-devel
On 07. 08. 25 at 16:07, Mikulas Patocka wrote:
>
>
> On Thu, 7 Aug 2025, Stuart D Gathman wrote:
>
>> On Wed, 6 Aug 2025, John Stoffel wrote:
>>
>>>>>>>> "Mikulas" == Mikulas Patocka <mpatocka@redhat.com> writes:
>>>
>>>> The problem is that the raid1 implementation may freely choose which leg
>>>> to read from. If it chooses to read from the non-corrupted leg, the
>>>> corruption is not detected, the number of mismatches is not incremented
>>>> and the test reports this as a failure.
>>>
>>> So wait, how is integrity supposed to work in this situation then? In
>>> real life? I understand the test is hard, maybe doing it in a loop
>>> three times? Or configure the RAID1 to prefer one half over another
>>> is the way to make this test work?
>
> If you want to make sure that you detect (and correct) all mismatches, you
> have to scrub the raid array.
>
>> Linux needs an optional parameter to read() syscall that is "leg index"
>> for the blk interface. Thus, btrfs scrub can check all legs, and this
>> test can check all legs. Filesystems with checks can repair corruption
>> by rewriting the block after finding a leg with correct csum.
>>
>> This only needs a few bits (how many legs can there be?), so can go in
>> the FLAGS argument.
>
> I think that adding a new bit for the read syscalls is not a workable
> solution. There are so many programs using the read() syscall, and teaching
> them to use this new bit is impossible.
I believe the idea of the test is to physically check that the proper leg is
reporting an error.
So instead of this proposed patch, we should actually enforce reading a
particular leg with lvchange --writemostly to select the preferred leg among
the other legs.
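Roughly (a sketch only; "$dev2" stands for the leg that is left
uncorrupted, following the test's naming):

  # steer reads away from the clean leg
  lvchange $vg/$lv1 --writemostly "$dev2"

  # ... corrupt "$dev1", read the fs, check integritymismatches ...

  # drop the hint again afterwards
  lvchange $vg/$lv1 --writemostly "$dev2":n

That way the read path is forced onto the corrupted leg and the mismatch
counter has to move.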
The bigger question, though, is that the user normally doesn't have a single
known-good leg - since 'reading' is spread across all legs, when one leg
goes away it could happen that the other legs also have some 'errors'
spread around...
Zdenek
* Re: [PATCH] fix random failures in shell/integrity.sh
From: John Stoffel @ 2025-08-07 14:38 UTC
To: Mikulas Patocka
Cc: Stuart D Gathman, John Stoffel, Peter Rajnoha, Zdenek Kabelac,
Heinz Mauelshagen, David Teigland, linux-lvm, lvm-devel
>>>>> "Mikulas" == Mikulas Patocka <mpatocka@redhat.com> writes:
> On Thu, 7 Aug 2025, Stuart D Gathman wrote:
>> On Wed, 6 Aug 2025, John Stoffel wrote:
>>
>> > > > > > > "Mikulas" == Mikulas Patocka <mpatocka@redhat.com> writes:
>> >
>> > > The problem is that the raid1 implementation may freely choose which leg
>> > > to read from. If it chooses to read from the non-corrupted leg, the
>> > > corruption is not detected, the number of mismatches is not incremented
>> > > and the test reports this as a failure.
>> >
>> > So wait, how is integrity supposed to work in this situation then? In
>> > real life? I understand the test is hard, maybe doing it in a loop
>> > three times? Or configure the RAID1 to prefer one half over another
>> > is the way to make this test work?
> If you want to make sure that you detect (and correct) all mismatches, you
> have to scrub the raid array.
And how do you know which level of the array is showing the errors? I
could have a RAID1 array composed of a single partition on the left
side, but then a RAID0 of two smaller disks on the right side. So how
would this read() flag know what to do?
I would assume the integrity sub-system would be reading from both
sides and comparing them to look for errors. When you find a
mis-match, how do you tell which side is wrong?
>> Linux needs an optional parameter to read() syscall that is "leg index"
>> for the blk interface. Thus, btrfs scrub can check all legs, and this
>> test can check all legs. Filesystems with checks can repair corruption
>> by rewriting the block after finding a leg with correct csum.
>>
>> This only needs a few bits (how many legs can there be?), so can go in
>> the FLAGS argument.
> I think that adding a new bit for the read syscalls is not a
> workable solition. There are so many programs using the read()
> syscall and teaching them to use this new bit is impossible.
It's also the completely wrong level for this type of support. But
maybe they can convince us with a pseudo-code example of how an
application would use this read() extension to solve the issue?
But right now, it's just for testing, and if the tests don't work
correctly without a silly workaround like this, then your tests aren't
representative of real-world use!
John
* Re: [PATCH] fix random failures in shell/integrity.sh
From: Mikulas Patocka @ 2025-08-07 14:58 UTC
To: John Stoffel
Cc: Stuart D Gathman, Peter Rajnoha, Zdenek Kabelac,
Heinz Mauelshagen, David Teigland, linux-lvm, lvm-devel
On Thu, 7 Aug 2025, John Stoffel wrote:
> >>>>> "Mikulas" == Mikulas Patocka <mpatocka@redhat.com> writes:
>
> > On Thu, 7 Aug 2025, Stuart D Gathman wrote:
>
> >> On Wed, 6 Aug 2025, John Stoffel wrote:
> >>
> >> > > > > > > "Mikulas" == Mikulas Patocka <mpatocka@redhat.com> writes:
> >> >
> >> > > The problem is that the raid1 implementation may freely choose which leg
> >> > > to read from. If it chooses to read from the non-corrupted leg, the
> >> > > corruption is not detected, the number of mismatches is not incremented
> >> > > and the test reports this as a failure.
> >> >
> >> > So wait, how is integrity supposed to work in this situation then? In
> >> > real life? I understand the test is hard, maybe doing it in a loop
> >> > three times? Or configure the RAID1 to prefer one half over another
> >> > is the way to make this test work?
>
> > If you want to make sure that you detect (and correct) all mismatches, you
> > have to scrub the raid array.
>
> And how do you know which level of the array is showing the errors? I
> could have a RAID1 array composed of a single partition on the left
> side, but then a RAID0 of two smaller disks on the right side. So how
> would this read() flag know what to do?
>
> I would assume the integrity sub-system would be reading from both
> sides and comparing them to look for errors. When you find a
> mis-match, how do you tell which side is wrong?
If you use dm-integrity on the raid legs (as is done in this test),
you know which leg is corrupted - dm-integrity will turn silent data
corruption into -EILSEQ.
So, all you have to do is initiate a scrub on the array.
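The per-leg counters then show where the bad blocks were, e.g. (sketch
only; the _rimage_N names are the usual raid sub-LV naming):

  lvs -o name,integritymismatches $vg/${lv1}_rimage_0 $vg/${lv1}_rimage_1

Only the leg whose dm-integrity layer returned -EILSEQ ends up with a
non-zero count.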
Mikulas
* Re: [PATCH] fix random failures in shell/integrity.sh
From: Stuart D Gathman @ 2025-08-07 15:31 UTC
To: John Stoffel
Cc: Mikulas Patocka, Peter Rajnoha, Zdenek Kabelac, Heinz Mauelshagen,
David Teigland, linux-lvm, lvm-devel
On Thu, 7 Aug 2025, John Stoffel wrote:
>
> And how do you know which level of the array is showing the errors? I
> could have a RAID1 array composed of a single partition on the left
> side, but then a RAID0 of two smaller disks on the right side. So how
> would this read() flag know what to do?
Modern filesystems like btrfs (and xfs for metadata) have csums on
extents. So they know which version is good. Btrfs can do its own
version of raid1, with multiple copies of each extent on different
devices. But this doesn't integrate well with LVM. (It's great for
arrays of external USB disks for archival purposes.) LVM is
more efficient as a base - especially for use with KVM virtual machines.
(You could use containers on btrfs, but I like the better security
of KVM despite higher CPU usage.)
> I would assume the integrity sub-system would be reading from both
> sides and comparing them to look for errors. When you find a
> mis-match, how do you tell which side is wrong?
Modern filesystems have csums on extents. There is an integrity layer
for dm that also does this. Both filesystems and the integrity layer
need a way to read all versions of a block/extent to see which is
correct.
* Re: [PATCH] fix random failures in shell/integrity.sh
From: Mikulas Patocka @ 2025-08-11 12:22 UTC
To: Zdenek Kabelac
Cc: Stuart D Gathman, John Stoffel, Peter Rajnoha, Heinz Mauelshagen,
David Teigland, linux-lvm, lvm-devel
Hi
On Thu, 7 Aug 2025, Zdenek Kabelac wrote:
> > I think that adding a new bit for the read syscalls is not a workable
> > solution. There are so many programs using the read() syscall, and teaching
> > them to use this new bit is impossible.
>
> I believe the idea of the test is to physically check that the proper leg is
> reporting an error.
>
> So instead of this proposed patch, we should actually enforce reading a
> particular leg with lvchange --writemostly to select the preferred leg among
> the other legs.
Here I'm sending a patch that uses "lvchange --writemostly". It doesn't
use it for raid-10, because lvm doesn't support it for raid-10.
> The bigger question, though, is that the user normally doesn't have a single
> known-good leg - since 'reading' is spread across all legs, when one leg
> goes away it could happen that the other legs also have some 'errors'
> spread around...
So, the user should periodically scrub the array, or he should use a raid
configuration that can withstand a two-disk failure (e.g. raid-1 with 3 legs
or raid-6).
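For instance (just a sketch of the shape of such a setup, not taken from
the test):

  # 3-way raid1 with integrity on every leg
  lvcreate --type raid1 -m 2 --raidintegrity y -L 1G -n $lv1 $vg "$dev1" "$dev2" "$dev3"

  # periodic scrub, e.g. from cron
  lvchange --syncaction check $vg/$lv1

With three legs, one leg can be lost completely and scattered errors on
the remaining two can still be corrected from each other.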
> Zdenek
From: Mikulas Patocka <mpatocka@redhat.com>
The test shell/integrity.sh creates raid arrays, corrupts one of the
legs, then reads the array and verifies that the corruption was
corrected. Finally, the test checks that the number of mismatches on the
corrupted leg is non-zero.
The problem is that the raid1 implementation may freely choose which leg
to read from. If it chooses to read from the non-corrupted leg, the corruption
is not detected, the number of mismatches is not incremented, and the test
reports this as a failure.
Fix this failure by marking the non-corrupted leg as "writemostly", so
that the kernel doesn't try to read it (it reads it only if it finds
corruption on the other leg).
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
---
test/shell/integrity.sh | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
Index: lvm2/test/shell/integrity.sh
===================================================================
--- lvm2.orig/test/shell/integrity.sh 2025-08-11 13:34:07.000000000 +0200
+++ lvm2/test/shell/integrity.sh 2025-08-11 13:46:43.000000000 +0200
@@ -135,6 +135,7 @@ lvs -a -o name,segtype,devices,sync_perc
aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
+lvchange $vg/$lv1 --writemostly "$dev2"
_test_fs_with_read_repair "$dev1"
lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
not grep 0 mismatch
@@ -152,6 +153,8 @@ aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/${lv1}_rimage_2
aux wait_recalc $vg/$lv1
+lvchange $vg/$lv1 --writemostly "$dev2"
+lvchange $vg/$lv1 --writemostly "$dev3"
_test_fs_with_read_repair "$dev1" "$dev2"
lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
not grep 0 mismatch
@@ -233,8 +236,6 @@ lvs -o integritymismatches $vg/${lv1}_ri
lvs -o integritymismatches $vg/${lv1}_rimage_1
lvs -o integritymismatches $vg/${lv1}_rimage_2
lvs -o integritymismatches $vg/${lv1}_rimage_3
-lvs -o integritymismatches $vg/$lv1 |tee mismatch
-not grep 0 mismatch
lvchange -an $vg/$lv1
lvconvert --raidintegrity n $vg/$lv1
lvremove $vg/$lv1
@@ -602,6 +603,7 @@ lvs -a -o name,segtype,devices,sync_perc
aux wait_recalc $vg/${lv1}_rimage_0
aux wait_recalc $vg/${lv1}_rimage_1
aux wait_recalc $vg/$lv1
+lvchange $vg/$lv1 --writemostly "$dev2"
_test_fs_with_read_repair "$dev1"
lvs -o integritymismatches $vg/${lv1}_rimage_0 |tee mismatch
not grep 0 mismatch