public inbox for linux-raid@vger.kernel.org
 help / color / mirror / Atom feed
* Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
@ 2025-12-17  6:58 BugReports
  2025-12-17  7:06 ` Yu Kuai
  0 siblings, 1 reply; 17+ messages in thread
From: BugReports @ 2025-12-17  6:58 UTC (permalink / raw)
  To: linux-raid

Hi,

I hope I am reaching out to the correct mailing list and that this is
the right way to report issues with rc kernels.

I installed kernel 6.19-rc1 recently (with linux-tkg, but that should
not matter). Booting the 6.19-rc1 kernel worked fine and I could access
my md raid 1.

After that I wanted to switch back to kernel 6.18.1 and noticed the
following:

- I cannot access the raid 1 md anymore, as it no longer assembles.

- The following error messages show up when I try to assemble the raid:

mdadm: /dev/sdc1 is identified as a member of /dev/md/1, slot 0.
mdadm: /dev/sda1 is identified as a member of /dev/md/1, slot 1.
mdadm: failed to add /dev/sda1 to /dev/md/1: Invalid argument
mdadm: failed to add /dev/sdc1 to /dev/md/1: Invalid argument

- The following errors show up in dmesg:

[Di, 16. Dez 2025, 11:50:38] md: md1 stopped.
[Di, 16. Dez 2025, 11:50:38] md: sda1 does not have a valid v1.2 superblock, not importing!
[Di, 16. Dez 2025, 11:50:38] md: md_import_device returned -22
[Di, 16. Dez 2025, 11:50:38] md: sdc1 does not have a valid v1.2 superblock, not importing!
[Di, 16. Dez 2025, 11:50:38] md: md_import_device returned -22
[Di, 16. Dez 2025, 11:50:38] md: md1 stopped.

- mdadm --examine used with kernel 6.18 shows the following:

/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
           Name : gamebox:1  (local to host gamebox)
  Creation Time : Tue Nov 26 20:39:09 2024
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
     Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 9f185862:a11d8deb:db6d708e:a7cc6a91
Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Dec 15 22:40:46 2025
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : f11e2fa5 - correct
         Events : 2618
    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
           Name : gamebox:1  (local to host gamebox)
  Creation Time : Tue Nov 26 20:39:09 2024
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
     Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : fc196769:0e25b5af:dfc6cab6:639ac8f9
Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Dec 15 22:40:46 2025
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 4d0d5f31 - correct
         Events : 2618
    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
- mdadm --detail shows the following on 6.19-rc1 (I am using the 6.19
output since the array no longer assembles on 6.18.1):

/dev/md1:
           Version : 1.2
     Creation Time : Tue Nov 26 20:39:09 2024
        Raid Level : raid1
        Array Size : 1929939968 (1840.53 GiB 1976.26 GB)
     Used Dev Size : 1929939968 (1840.53 GiB 1976.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Tue Dec 16 13:14:10 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : bitmap
              Name : gamebox:1  (local to host gamebox)
              UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
            Events : 2618

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8        1        1      active sync   /dev/sda1


I didn't spot any obvious issue in the mdadm --examine output on kernel
6.18 that would point to why it thinks this is not a valid v1.2 superblock.
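In case it helps with debugging: a read-only way to compare the on-disk
bytes between a 6.18 boot and a 6.19-rc1 boot is to dump the superblock
region directly, assuming the standard v1.2 location 4 KiB (8 sectors)
into the member partition, which matches the "Super Offset : 8 sectors"
above. The sketch below demonstrates the technique on a scratch image
rather than the live device; against the real array it would be pointed
at /dev/sdc1.

```shell
# Read-only sketch: dump the raw v1.2 superblock region for comparison
# across boots. Assumption: the superblock sits 4 KiB into the member
# device. Against the real array this would be e.g.:
#   dd if=/dev/sdc1 bs=4096 skip=1 count=1 | hexdump -C > sb.hex
# Demonstrated here on a scratch image instead of the live device:
img=$(mktemp)
truncate -s 4096 "$img"                 # pad up to the 4 KiB superblock offset
printf '\xfc\x4e\x2b\xa9' >> "$img"     # md magic 0xa92b4efc, little-endian on disk
dd if="$img" bs=4096 skip=1 count=1 2>/dev/null | hexdump -C
rm -f "$img"
```

Dumping the region once per kernel and diffing the two hex files would
show exactly which bytes (if any) 6.19-rc1 changed.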

The md raid still works nicely on kernel 6.19-rc1, but I am unable to
use it on kernel 6.18 (it worked fine before booting 6.19-rc1).

Is kernel 6.19-rc1 making adjustments to the md superblock when the
array is used that are not compatible with older kernels (the array was
already created in Nov 2024)?
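If a newly introduced incompatible feature bit were the cause, it should
show up in the superblock's feature_map word. A hypothetical way to read
it raw, assuming struct mdp_superblock_1 (include/uapi/linux/raid/md_p.h)
stores the __le32 feature_map at byte offset 8 of the superblock, which
itself starts 4 KiB into the partition:

```shell
# Hypothetical raw read of the feature_map field. Assumption: the __le32
# feature_map sits at byte offset 8 of the v1.2 superblock, i.e. at
# 4096 + 8 into the partition. Against the real array:
#   dd if=/dev/sdc1 bs=1 skip=$((4096 + 8)) count=4 | hexdump -C
# Demonstrated on a scratch image with feature_map = 0x1, matching the
# --examine output above (only the internal-bitmap bit set):
img=$(mktemp)
truncate -s $((4096 + 8)) "$img"        # pad up to the feature_map offset
printf '\x01\x00\x00\x00' >> "$img"     # feature_map 0x00000001, little-endian
dd if="$img" bs=1 skip=$((4096 + 8)) count=4 2>/dev/null | hexdump -C
rm -f "$img"
```

Reading the same four bytes after booting each kernel would tell whether
6.19-rc1 set a bit that 6.18 refuses to import.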


Many thanks!



^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2025-12-19  8:22 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-17  6:58 Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once BugReports
2025-12-17  7:06 ` Yu Kuai
2025-12-17  7:13   ` BugReports
2025-12-17  7:17     ` Yu Kuai
2025-12-17  7:25       ` BugReports
2025-12-17  7:33       ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once Paul Menzel
2025-12-17  7:41         ` Yu Kuai
2025-12-17  8:02           ` Reindl Harald
2025-12-17  8:33             ` Yu Kuai
2025-12-17 13:07               ` Thorsten Leemhuis
2025-12-17 13:45                 ` Yu Kuai
2025-12-17 13:50                   ` Reindl Harald
2025-12-17 13:24   ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once Li Nan
2025-12-18 10:41     ` Bugreports61
2025-12-18 14:54       ` Li Nan
2025-12-18 16:04         ` BugReports
2025-12-19  8:22           ` Li Nan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox