* Make test failed raid19check
@ 2026-01-20 18:08 David Niklas
From: David Niklas @ 2026-01-20 18:08 UTC (permalink / raw)
To: Linux RAID
[-- Attachment #1: Type: text/plain, Size: 2304 bytes --]
Hello,
Recently, mdadm-4.5 was released. I upgraded my system to the latest
version, replacing the distribution-packaged mdadm with the new build.
I was particularly interested in using raid6check; as you may recall, my
raid6 array developed some errors.
I ran the tests for raid6check and two of them failed.
I've attached the logs.
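To save digging through the trace, the failing sequence boils down to the
commands below (distilled from the attached 19raid6repair log; the test
suite sets up the loop devices itself, and it runs raid6check from its own
build directory rather than the `./raid6check` path shown here):

```shell
# Create a 4-disk raid6 over loop devices, layout "ls", 512 KiB chunks.
mdadm -CR /dev/md0 -l6 --layout=ls -n4 -c 512 \
    /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4

# Fill the array with known random data and flush it out.
dd if=/dev/urandom of=/tmp/RandFile bs=1024 count=4096
dd if=/tmp/RandFile of=/dev/md0 bs=1024 count=4096
blockdev --flushbufs /dev/md0 && sync

# Corrupt one 512 KiB chunk on /dev/loop3, behind md's back.
dd if=/dev/urandom of=/dev/loop3 bs=1024 count=512 seek=2560
blockdev --flushbufs /dev/loop3 && sync
echo 3 > /proc/sys/vm/drop_caches

# The test expects this scan to report an error; on my system it
# reports none, so the test bails out with "should detect errors".
./raid6check /dev/md0 0 0 | grep Error
```

The 19raid6auto-repair failure is the same pattern with 5 disks and
corruption spread across all of them.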
I'm running Devuan (Debian) Linux with a custom-configured kernel.
Nothing special; I just needed the newer features for my GPU.
Here's the output of the "test" command.
# ./test
Warning! Tests are performed on system level mdadm!
If you want to test local build, you need to install it first!
test: skipping tests for multipath, which is removed in upstream 6.8+ kernels
Warning! Test suite will set up systemd environment!
Use "systemctl show-environment" to show systemd environment variables
/root/working/mdadm/tests/func.sh: line 228: systemctl: command not found
Added IMSM_DEVNAME_AS_SERIAL=1 to systemd environment, use "systemctl unset-environment IMSM_DEVNAME_AS_SERIAL=1" to remove it.
/root/working/mdadm/tests/func.sh: line 228: systemctl: command not found
Added IMSM_NO_PLATFORM=1 to systemd environment, use "systemctl unset-environment IMSM_NO_PLATFORM=1" to remove it.
setenforce: SELinux is disabled
Testing on linux-6.14.11-nopreempt-AMDGPU-dav15-noxz kernel
/root/working/mdadm/tests/19raid6auto-repair... Execution time (seconds): 8 FAILED - see /var/tmp/19raid6auto-repair.log and /var/tmp/fail19raid6auto-repair.log for details
(KNOWN BROKEN TEST: always fails)
/root/working/mdadm/tests/19raid6check... Execution time (seconds): 340 succeeded
/root/working/mdadm/tests/19raid6repair... Execution time (seconds): 8 FAILED - see /var/tmp/19raid6repair.log and /var/tmp/fail19raid6repair.log for details
(KNOWN BROKEN TEST: always fails)
/root/working/mdadm/tests/19repair-does-not-destroy... Execution time (seconds): 10 succeeded
setenforce: SELinux is disabled
/root/working/mdadm/tests/func.sh: line 237: systemctl: command not found
Removed IMSM_DEVNAME_AS_SERIAL=1 from systemd environment.
/root/working/mdadm/tests/func.sh: line 237: systemctl: command not found
Removed IMSM_NO_PLATFORM=1 from systemd environment.
I think I'll start running the other tests and see what happens with them.
Any ideas on what's going wrong?
Thanks,
David
[-- Attachment #2: fail19raid6repair.log --]
[-- Type: text/x-log, Size: 2053 bytes --]
## Core-Ultra-2-x20: saving dmesg.
[4501074.465567] md/raid:md0: not clean -- starting background reconstruction
[4501074.465590] md/raid:md0: device loop4 operational as raid disk 3
[4501074.465592] md/raid:md0: device loop3 operational as raid disk 2
[4501074.465593] md/raid:md0: device loop2 operational as raid disk 1
[4501074.465594] md/raid:md0: device loop1 operational as raid disk 0
[4501074.465820] md/raid:md0: raid level 6 active with 4 out of 4 devices, algorithm 2
[4501074.465838] md0: detected capacity change from 0 to 71680
[4501074.466242] md: resync of RAID array md0
[4501075.294307] md: md0: resync done.
[4501081.959606] test (31434): drop_caches: 3
[4501082.158217] test (31434): drop_caches: 3
## Core-Ultra-2-x20: saving proc mdstat.
Personalities : [raid0] [linear] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 loop4[3] loop3[2] loop2[1] loop1[0]
35840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
## Core-Ultra-2-x20: mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Jan 20 12:51:04 2026
Raid Level : raid6
Array Size : 35840 (35.00 MiB 36.70 MB)
Used Dev Size : 17920 (17.50 MiB 18.35 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue Jan 20 12:51:05 2026
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : Core-Ultra-2-x20:0 (local to host Core-Ultra-2-x20)
UUID : 472d803a:08838b9e:c943ddb8:9f55f6f0
Events : 19
    Number   Major   Minor   RaidDevice State
       0       7        1        0      active sync   /dev/loop1
       1       7        2        1      active sync   /dev/loop2
       2       7        3        2      active sync   /dev/loop3
       3       7        4        3      active sync   /dev/loop4
[-- Attachment #3: 19raid6repair.log --]
[-- Type: text/x-log, Size: 3989 bytes --]
+ . /root/working/mdadm/tests/19raid6repair
++ number_of_disks=4
++ chunksize_in_kib=512
++ chunksize_in_b=524288
++ array_data_size_in_kib=4096
++ array_data_size_in_b=4194304
++ devs='/dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4'
++ data_offset_in_kib=1024
++ layouts='ls rs la ra parity-first ddf-zero-restart ddf-N-restart ddf-N-continue left-asymmetric-6 right-asymmetric-6 left-symmetric-6 right-symmetric-6 parity-first-6'
++ for layout in $layouts
++ for failure in "$dev3 3 3 2" "$dev3 3 2 3" "$dev3 3 2 1" "$dev3 3 2 0" "$dev4 3 3 0" "$dev4 3 3 1" "$dev4 3 3 2" "$dev1 3 0 1" "$dev1 3 0 2" "$dev1 3 0 3" "$dev2 3 1 0" "$dev2 3 1 2" "$dev2 3 1 3"
++ failure_split=($failure)
++ device_with_error=/dev/loop3
++ stripe_with_error=3
++ repair_params='3 3 2'
++ start_of_errors_in_kib=2560
++ dd if=/dev/urandom of=/tmp/RandFile bs=1024 count=4096
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0142637 s, 294 MB/s
++ mdadm -CR /dev/md0 -l6 --layout=ls -n4 -c 512 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ for args in $*
++ [[ -CR =~ /dev/ ]]
++ for args in $*
++ [[ /dev/md0 =~ /dev/ ]]
++ [[ /dev/md0 =~ md ]]
++ for args in $*
++ [[ -l6 =~ /dev/ ]]
++ for args in $*
++ [[ --layout=ls =~ /dev/ ]]
++ for args in $*
++ [[ -n4 =~ /dev/ ]]
++ for args in $*
++ [[ -c =~ /dev/ ]]
++ for args in $*
++ [[ 512 =~ /dev/ ]]
++ for args in $*
++ [[ /dev/loop1 =~ /dev/ ]]
++ [[ /dev/loop1 =~ md ]]
++ /sbin/mdadm --zero /dev/loop1
mdadm: Unrecognised md component device - /dev/loop1
++ for args in $*
++ [[ /dev/loop2 =~ /dev/ ]]
++ [[ /dev/loop2 =~ md ]]
++ /sbin/mdadm --zero /dev/loop2
mdadm: Unrecognised md component device - /dev/loop2
++ for args in $*
++ [[ /dev/loop3 =~ /dev/ ]]
++ [[ /dev/loop3 =~ md ]]
++ /sbin/mdadm --zero /dev/loop3
mdadm: Unrecognised md component device - /dev/loop3
++ for args in $*
++ [[ /dev/loop4 =~ /dev/ ]]
++ [[ /dev/loop4 =~ md ]]
++ /sbin/mdadm --zero /dev/loop4
mdadm: Unrecognised md component device - /dev/loop4
++ /sbin/mdadm -CR /dev/md0 -l6 --layout=ls -n4 -c 512 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ dd if=/tmp/RandFile of=/dev/md0 bs=1024 count=4096
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0321578 s, 130 MB/s
++ blockdev --flushbufs /dev/md0
++ sync
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_min
++ min=100
+++ cat /proc/sys/dev/raid/speed_limit_max
++ max=500
++ echo 200000
++ sleep 0.1
++ iterations=0
++ '[' 0 -le 10 ']'
+++ grep -Ec '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sync_action=1
++ (( 1 == 0 ))
++ break
++ echo 'Reshape has not started after 10 seconds'
Reshape has not started after 10 seconds
++ echo 'Waiting for grow-continue to finish'
Waiting for grow-continue to finish
++ wait_for_reshape_end
++ true
+++ grep -Ec '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sync_action=1
++ (( 1 != 0 ))
++ sleep 2
++ continue
++ true
+++ grep -Ec '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sync_action=0
++ (( 0 != 0 ))
+++ pgrep -f 'mdadm --grow --continue'
++ [[ '' != '' ]]
++ break
++ sleep 5
++ wait_for_reshape_end
++ true
+++ grep -Ec '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sync_action=0
++ (( 0 != 0 ))
+++ pgrep -f 'mdadm --grow --continue'
++ [[ '' != '' ]]
++ break
++ echo 100
++ echo 500
++ blockdev --flushbufs /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
++ sync
++ echo 3
++ cmp -s -n 4194304 /dev/md0 /tmp/RandFile
++ dd if=/dev/urandom of=/dev/loop3 bs=1024 count=512 seek=2560
512+0 records in
512+0 records out
524288 bytes (524 kB, 512 KiB) copied, 0.00223738 s, 234 MB/s
++ blockdev --flushbufs /dev/loop3
++ sync
++ echo 3
++ /raid6check /dev/md0 0 0
++ grep -qs Error
++ echo should detect errors
should detect errors
++ exit 2
[-- Attachment #4: fail19raid6auto-repair.log --]
[-- Type: text/x-log, Size: 2199 bytes --]
## Core-Ultra-2-x20: saving dmesg.
[4500726.128927] md/raid:md0: not clean -- starting background reconstruction
[4500726.128949] md/raid:md0: device loop4 operational as raid disk 4
[4500726.128951] md/raid:md0: device loop3 operational as raid disk 3
[4500726.128952] md/raid:md0: device loop2 operational as raid disk 2
[4500726.128953] md/raid:md0: device loop1 operational as raid disk 1
[4500726.128954] md/raid:md0: device loop0 operational as raid disk 0
[4500726.129347] md/raid:md0: raid level 6 active with 5 out of 5 devices, algorithm 2
[4500726.129362] md0: detected capacity change from 0 to 107520
[4500726.129513] md: resync of RAID array md0
[4500726.903794] md: md0: resync done.
[4500733.827662] test (26245): drop_caches: 3
[4500734.121635] test (26245): drop_caches: 3
## Core-Ultra-2-x20: saving proc mdstat.
Personalities : [raid0] [linear] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 loop4[4] loop3[3] loop2[2] loop1[1] loop0[0]
53760 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
unused devices: <none>
## Core-Ultra-2-x20: mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Jan 20 12:45:15 2026
Raid Level : raid6
Array Size : 53760 (52.50 MiB 55.05 MB)
Used Dev Size : 17920 (17.50 MiB 18.35 MB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Tue Jan 20 12:45:16 2026
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : Core-Ultra-2-x20:0 (local to host Core-Ultra-2-x20)
UUID : 43e4b762:7978a8e7:59af7d77:2747b753
Events : 19
    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2
       3       7        3        3      active sync   /dev/loop3
       4       7        4        4      active sync   /dev/loop4
[-- Attachment #5: 19raid6auto-repair.log --]
[-- Type: text/x-log, Size: 4563 bytes --]
+ . /root/working/mdadm/tests/19raid6auto-repair
++ number_of_disks=5
++ chunksize_in_kib=512
++ chunksize_in_b=524288
++ array_data_size_in_kib=7680
++ array_data_size_in_b=7864320
++ devs='/dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4'
++ data_offset_in_kib=1024
++ dd if=/dev/urandom of=/tmp/RandFile bs=1024 count=7680
7680+0 records in
7680+0 records out
7864320 bytes (7.9 MB, 7.5 MiB) copied, 0.0342983 s, 229 MB/s
++ layouts='ls rs la ra parity-first ddf-zero-restart ddf-N-restart ddf-N-continue left-asymmetric-6 right-asymmetric-6 left-symmetric-6 right-symmetric-6 parity-first-6'
++ for layout in $layouts
++ mdadm -CR /dev/md0 -l6 --layout=ls -n5 -c 512 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
++ rm -f /var/tmp/stderr
++ case $* in
++ case $* in
++ for args in $*
++ [[ -CR =~ /dev/ ]]
++ for args in $*
++ [[ /dev/md0 =~ /dev/ ]]
++ [[ /dev/md0 =~ md ]]
++ for args in $*
++ [[ -l6 =~ /dev/ ]]
++ for args in $*
++ [[ --layout=ls =~ /dev/ ]]
++ for args in $*
++ [[ -n5 =~ /dev/ ]]
++ for args in $*
++ [[ -c =~ /dev/ ]]
++ for args in $*
++ [[ 512 =~ /dev/ ]]
++ for args in $*
++ [[ /dev/loop0 =~ /dev/ ]]
++ [[ /dev/loop0 =~ md ]]
++ /sbin/mdadm --zero /dev/loop0
mdadm: Unrecognised md component device - /dev/loop0
++ for args in $*
++ [[ /dev/loop1 =~ /dev/ ]]
++ [[ /dev/loop1 =~ md ]]
++ /sbin/mdadm --zero /dev/loop1
mdadm: Unrecognised md component device - /dev/loop1
++ for args in $*
++ [[ /dev/loop2 =~ /dev/ ]]
++ [[ /dev/loop2 =~ md ]]
++ /sbin/mdadm --zero /dev/loop2
mdadm: Unrecognised md component device - /dev/loop2
++ for args in $*
++ [[ /dev/loop3 =~ /dev/ ]]
++ [[ /dev/loop3 =~ md ]]
++ /sbin/mdadm --zero /dev/loop3
mdadm: Unrecognised md component device - /dev/loop3
++ for args in $*
++ [[ /dev/loop4 =~ /dev/ ]]
++ [[ /dev/loop4 =~ md ]]
++ /sbin/mdadm --zero /dev/loop4
mdadm: Unrecognised md component device - /dev/loop4
++ /sbin/mdadm -CR /dev/md0 -l6 --layout=ls -n5 -c 512 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
++ rv=0
++ case $* in
++ cat /var/tmp/stderr
++ return 0
++ dd if=/tmp/RandFile of=/dev/md0 bs=1024 count=7680
7680+0 records in
7680+0 records out
7864320 bytes (7.9 MB, 7.5 MiB) copied, 0.0407638 s, 193 MB/s
++ blockdev --flushbufs /dev/md0
++ sync
++ check wait
++ case $1 in
+++ cat /proc/sys/dev/raid/speed_limit_min
++ min=100
+++ cat /proc/sys/dev/raid/speed_limit_max
++ max=500
++ echo 200000
++ sleep 0.1
++ iterations=0
++ '[' 0 -le 10 ']'
+++ grep -Ec '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sync_action=1
++ (( 1 == 0 ))
++ break
++ echo 'Reshape has not started after 10 seconds'
Reshape has not started after 10 seconds
++ echo 'Waiting for grow-continue to finish'
Waiting for grow-continue to finish
++ wait_for_reshape_end
++ true
+++ grep -Ec '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sync_action=1
++ (( 1 != 0 ))
++ sleep 2
++ continue
++ true
+++ grep -Ec '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sync_action=0
++ (( 0 != 0 ))
+++ pgrep -f 'mdadm --grow --continue'
++ [[ '' != '' ]]
++ break
++ sleep 5
++ wait_for_reshape_end
++ true
+++ grep -Ec '(resync|recovery|reshape|check|repair) *=' /proc/mdstat
++ sync_action=0
++ (( 0 != 0 ))
+++ pgrep -f 'mdadm --grow --continue'
++ [[ '' != '' ]]
++ break
++ echo 100
++ echo 500
++ blockdev --flushbufs /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
++ sync
++ echo 3
++ cmp -s -n 7864320 /dev/md0 /tmp/RandFile
++ dd if=/dev/urandom of=/dev/loop0 bs=1024 count=2560 seek=1024
2560+0 records in
2560+0 records out
2621440 bytes (2.6 MB, 2.5 MiB) copied, 0.0104878 s, 250 MB/s
++ dd if=/dev/urandom of=/dev/loop1 bs=1024 count=2560 seek=3584
2560+0 records in
2560+0 records out
2621440 bytes (2.6 MB, 2.5 MiB) copied, 0.00968777 s, 271 MB/s
++ dd if=/dev/urandom of=/dev/loop2 bs=1024 count=2560 seek=6144
2560+0 records in
2560+0 records out
2621440 bytes (2.6 MB, 2.5 MiB) copied, 0.0147139 s, 178 MB/s
++ dd if=/dev/urandom of=/dev/loop3 bs=1024 count=2560 seek=8704
2560+0 records in
2560+0 records out
2621440 bytes (2.6 MB, 2.5 MiB) copied, 0.0108583 s, 241 MB/s
++ dd if=/dev/urandom of=/dev/loop4 bs=1024 count=2560 seek=11264
2560+0 records in
2560+0 records out
2621440 bytes (2.6 MB, 2.5 MiB) copied, 0.0121655 s, 215 MB/s
++ blockdev --flushbufs /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
++ sync
++ echo 3
++ /raid6check /dev/md0 0 0
++ grep -qs Error
++ echo should detect errors
should detect errors
++ exit 2