public inbox for linux-raid@vger.kernel.org
 help / color / mirror / Atom feed
* Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
@ 2025-12-17  6:58 BugReports
  2025-12-17  7:06 ` Yu Kuai
  0 siblings, 1 reply; 17+ messages in thread
From: BugReports @ 2025-12-17  6:58 UTC (permalink / raw)
  To: linux-raid

Hi,

I hope I am reaching out to the correct mailing list and that this is 
the right way to report issues with rc kernels.

I installed kernel 6.19-rc1 recently (with linux-tkg, but that should 
not matter).  Booting the 6.19-rc1 kernel worked fine and I could access 
my md raid 1.

After that I wanted to switch back to kernel 6.18.1 and noticed the 
following:

- I cannot access the md raid 1 anymore, as it no longer assembles.

- The following error messages show up when I try to assemble the raid:

mdadm: /dev/sdc1 is identified as a member of /dev/md/1, slot 0.
mdadm: /dev/sda1 is identified as a member of /dev/md/1, slot 1.
mdadm: failed to add /dev/sda1 to /dev/md/1: Invalid argument
mdadm: failed to add /dev/sdc1 to /dev/md/1: Invalid argument

- The following errors show up in dmesg:

[Tue, 16 Dec 2025, 11:50:38] md: md1 stopped.
[Tue, 16 Dec 2025, 11:50:38] md: sda1 does not have a valid v1.2 superblock, not importing!
[Tue, 16 Dec 2025, 11:50:38] md: md_import_device returned -22
[Tue, 16 Dec 2025, 11:50:38] md: sdc1 does not have a valid v1.2 superblock, not importing!
[Tue, 16 Dec 2025, 11:50:38] md: md_import_device returned -22
[Tue, 16 Dec 2025, 11:50:38] md: md1 stopped.

- mdadm --examine used with kernel 6.18 shows the following:

$ cat mdadmin618.txt
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
           Name : gamebox:1  (local to host gamebox)
  Creation Time : Tue Nov 26 20:39:09 2024
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
     Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 9f185862:a11d8deb:db6d708e:a7cc6a91

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Dec 15 22:40:46 2025
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : f11e2fa5 - correct
         Events : 2618

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
           Name : gamebox:1  (local to host gamebox)
  Creation Time : Tue Nov 26 20:39:09 2024
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
     Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : fc196769:0e25b5af:dfc6cab6:639ac8f9

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Dec 15 22:40:46 2025
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 4d0d5f31 - correct
         Events : 2618

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

- mdadm --detail shows the following on 6.19-rc1 (I am using the 6.19 
output, as it no longer works on 6.18.1):

/dev/md1:
           Version : 1.2
     Creation Time : Tue Nov 26 20:39:09 2024
        Raid Level : raid1
        Array Size : 1929939968 (1840.53 GiB 1976.26 GB)
     Used Dev Size : 1929939968 (1840.53 GiB 1976.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Dec 16 13:14:10 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : gamebox:1  (local to host gamebox)
              UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
            Events : 2618

    Number   Major   Minor   RaidDevice   State
       0       8       33        0        active sync   /dev/sdc1
       1       8        1        1        active sync   /dev/sda1


I didn't spot any obvious issue in the mdadm --examine output on kernel 
6.18 that would explain why it thinks this is not a valid 1.2 superblock.

The md raid still works nicely on kernel 6.19, but I am unable to use it 
on kernel 6.18 (it worked fine before booting 6.19).

Is kernel 6.19-rc1 making adjustments to the md superblock when the 
array is used that are not compatible with older kernels (the array was 
already created in Nov 2024)?


Many thx !



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-17  6:58 Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once BugReports
@ 2025-12-17  7:06 ` Yu Kuai
  2025-12-17  7:13   ` BugReports
  2025-12-17 13:24   ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once Li Nan
  0 siblings, 2 replies; 17+ messages in thread
From: Yu Kuai @ 2025-12-17  7:06 UTC (permalink / raw)
  To: BugReports, linux-raid, linan122, xni, yukuai

Hi,

On 2025/12/17 14:58, BugReports wrote:
> Hi,
>
> i hope i am reaching out to the correct mailing list and this is the 
> way to correctly report issues with rc kernels.
>
> I installed kernel 6.19-rc 1 recently (with linux-tkg, but that should 
> not matter).  Booting the 6.19 rc1 kernel worked fine and i could 
> access my md raid 1.
>
> After that i wanted to switch back to kernel 6.18.1 and noticed the 
> following:
>
> - I can not access the raid 1 md anymore as it does not assemble anymore
>
> - The following error message shows up when i try to assemble the raid:
>
> [mdadm, dmesg, --examine and --detail output snipped; see the 
> original message above]
>
>
> I didn't spot any obvious issue in the mdadm --examine on kernel 6.18 
> pointing to why it thinks this is not a valid 1.2 superblock.
>
> The mdraid still works nicely on kernel 6.19 but i am unable to use it 
> on kernel 6.18 (worked fine before booting 6.19).
>
> Is kernel 6.19 rc1 doing adjustments on the md superblock when the md 
> is used which are not compatible with older kernels (the md was 
> created already in Nov 2024)?

I believe this is because the lbs (logical block size) is now stored in the metadata of md
arrays, while this field is not yet defined in old kernels; see details in the following set:

[PATCH v9 0/5] make logical block size configurable - linan666 <https://lore.kernel.org/linux-raid/20251103125757.1405796-1-linan666@huaweicloud.com/>

We'll have to backport the following patch into old kernels to make new arrays assemble in
old kernels.

https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com

+CC Nan, would you mind backporting the above patch into stable kernels?

>
>
> Many thx !
>
>
>
-- 
Thanks,
Kuai

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-17  7:06 ` Yu Kuai
@ 2025-12-17  7:13   ` BugReports
  2025-12-17  7:17     ` Yu Kuai
  2025-12-17 13:24   ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once Li Nan
  1 sibling, 1 reply; 17+ messages in thread
From: BugReports @ 2025-12-17  7:13 UTC (permalink / raw)
  To: yukuai, linux-raid, linan122, xni

Hi,


> ...
>
> We'll have to backport following patch into old kernels to make new
> arrays to assemble in old kernels.
>
> ....


The md array which I am talking about was not created with kernel 6.19; 
it was created sometime in 2024.

It was just used with kernel 6.19, and that broke compatibility with my 
6.18 kernel.


Br !


On 17.12.25 at 08:06, Yu Kuai wrote:
> Hi,
>
> On 2025/12/17 14:58, BugReports wrote:
>> Hi,
>>
>> i hope i am reaching out to the correct mailing list and this is the
>> way to correctly report issues with rc kernels.
>>
>> I installed kernel 6.19-rc 1 recently (with linux-tkg, but that should
>> not matter).  Booting the 6.19 rc1 kernel worked fine and i could
>> access my md raid 1.
>>
>> After that i wanted to switch back to kernel 6.18.1 and noticed the
>> following:
>>
>> - I can not access the raid 1 md anymore as it does not assemble anymore
>>
>> - The following error message shows up when i try to assemble the raid:
>>
>> [mdadm, dmesg, --examine and --detail output snipped; see the 
>> original message above]
>>
>>
>> I didn't spot any obvious issue in the mdadm --examine on kernel 6.18
>> pointing to why it thinks this is not a valid 1.2 superblock.
>>
>> The mdraid still works nicely on kernel 6.19 but i am unable to use it
>> on kernel 6.18 (worked fine before booting 6.19).
>>
>> Is kernel 6.19 rc1 doing adjustments on the md superblock when the md
>> is used which are not compatible with older kernels (the md was
>> created already in Nov 2024)?
> I believe this is because lbs is now stored in metadata of md arrays, while this field is still
> not defined in old kernels, see dtails in the following set:
>
> [PATCH v9 0/5] make logical block size configurable - linan666 <https://lore.kernel.org/linux-raid/20251103125757.1405796-1-linan666@huaweicloud.com/>
>
> We'll have to backport following patch into old kernels to make new arrays to assemble in old
> kernels.
>
> https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com
>
> +CC Nan, would you mind backport above patch into stable kernels?
>
>>
>> Many thx !
>>
>>
>>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-17  7:13   ` BugReports
@ 2025-12-17  7:17     ` Yu Kuai
  2025-12-17  7:25       ` BugReports
  2025-12-17  7:33       ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once Paul Menzel
  0 siblings, 2 replies; 17+ messages in thread
From: Yu Kuai @ 2025-12-17  7:17 UTC (permalink / raw)
  To: BugReports, linux-raid, linan122, xni, yukuai

Hi,

On 2025/12/17 15:13, BugReports wrote:
>
> ...
>
> We'll have to backport following patch into old kernels to make new 
> arrays to assemble in old
> kernels.
>
> ....
>
>
> The md array which i am talking about was not created with kernel 
> 6.19, it was created sometime in 2024.
>
> It was just used in kernel 6.19 and that broke compatibility with my 
> 6.18 kernel.

I know; I mean that any array that is created or assembled in new kernels will now have
the lbs field stored in its metadata. This field is not defined in old kernels, and that is
why the array can't be assembled in old kernels: the metadata is unknown to them.

This is what we have to do for new features, and we're planning to avoid the forward
compatibility issue with the above patch that I mentioned.

-- 
Thanks,
Kuai

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-17  7:17     ` Yu Kuai
@ 2025-12-17  7:25       ` BugReports
  2025-12-17  7:33       ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once Paul Menzel
  1 sibling, 0 replies; 17+ messages in thread
From: BugReports @ 2025-12-17  7:25 UTC (permalink / raw)
  To: yukuai, linux-raid, linan122, xni

Hi,

Many thx for the clarification; that makes a huge difference!

Br

Am 17.12.25 um 08:17 schrieb Yu Kuai:
> Hi,
>
> On 2025/12/17 15:13, BugReports wrote:
>> ...
>>
>> We'll have to backport following patch into old kernels to make new
>> arrays to assemble in old
>> kernels.
>>
>> ....
>>
>>
>> The md array which i am talking about was not created with kernel
>> 6.19, it was created sometime in 2024.
>>
>> It was just used in kernel 6.19 and that broke compatibility with my
>> 6.18 kernel.
> I know, I mean any array that is created or assembled in new kernels will now have
> lsb field stored in metadata. This field is not defined in old kernels and that's why
> array can't assembled in old kernels, due to unknown metadata.
>
> This is what we have to do for new features, and we're planning to avoid the forward
> compatibility issue with the above patch that I mentioned.
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once
  2025-12-17  7:17     ` Yu Kuai
  2025-12-17  7:25       ` BugReports
@ 2025-12-17  7:33       ` Paul Menzel
  2025-12-17  7:41         ` Yu Kuai
  1 sibling, 1 reply; 17+ messages in thread
From: Paul Menzel @ 2025-12-17  7:33 UTC (permalink / raw)
  To: Yu Kuai
  Cc: bugreports61, linux-raid, linan122, xni, regressions,
	Linus Torvalds

Dear Kuai,


On 17.12.25 at 08:17, Yu Kuai wrote:

> On 2025/12/17 15:13, BugReports wrote:
>> 
>> ...
>> 
>> We'll have to backport following patch into old kernels to make
>> new arrays to assemble in old kernels. ....
>> 
>> The md array which i am talking about was not created with kernel 
>> 6.19, it was created sometime in 2024.
>> 
>> It was just used in kernel 6.19 and that broke compatibility with
>> my 6.18 kernel.
> 
> I know, I mean any array that is created or assembled in new kernels
> will now have lsb field stored in metadata. This field is not
> defined in old kernels and that's why array can't assembled in old
> kernels, due to unknown metadata.
> 
> This is what we have to do for new features, and we're planning to
> avoid the forward compatibility issue with the above patch that I
> mentioned.

Is there really no way around it? Just testing a new kernel and being 
able to go back must be supported, in my opinion, at least across one or 
two LTS versions.


Kind regards,

Paul

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once
  2025-12-17  7:33       ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once Paul Menzel
@ 2025-12-17  7:41         ` Yu Kuai
  2025-12-17  8:02           ` Reindl Harald
  0 siblings, 1 reply; 17+ messages in thread
From: Yu Kuai @ 2025-12-17  7:41 UTC (permalink / raw)
  To: Paul Menzel
  Cc: bugreports61, linux-raid, linan122, xni, regressions,
	Linus Torvalds, yukuai

Hi,

On 2025/12/17 15:33, Paul Menzel wrote:
> Dear Kuai,
>
>
> Am 17.12.25 um 08:17 schrieb Yu Kuai:
>
>> On 2025/12/17 15:13, BugReports wrote:
>>>
>>> ...
>>>
>>> We'll have to backport following patch into old kernels to make
>>> new arrays to assemble in old kernels. ....
>>>
>>> The md array which i am talking about was not created with kernel 
>>> 6.19, it was created sometime in 2024.
>>>
>>> It was just used in kernel 6.19 and that broke compatibility with
>>> my 6.18 kernel.
>>
>> I know, I mean any array that is created or assembled in new kernels
>> will now have lsb field stored in metadata. This field is not
>> defined in old kernels and that's why array can't assembled in old
>> kernels, due to unknown metadata.
>>
>> This is what we have to do for new features, and we're planning to
>> avoid the forward compatibility issue with the above patch that I
>> mentioned.
> Is there really no way around it? Just testing a new kernel and being 
> able to go back must be supported in my opinion, at least between one 
> or two LTS versions.

As I said, the following patch should be backported to LTS kernels to avoid the problem:

https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com

>
>
> Kind regards,
>
> Paul

-- 
Thanks,
Kuai

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once
  2025-12-17  7:41         ` Yu Kuai
@ 2025-12-17  8:02           ` Reindl Harald
  2025-12-17  8:33             ` Yu Kuai
  0 siblings, 1 reply; 17+ messages in thread
From: Reindl Harald @ 2025-12-17  8:02 UTC (permalink / raw)
  To: yukuai, Paul Menzel
  Cc: bugreports61, linux-raid, linan122, xni, regressions,
	Linus Torvalds



On 17.12.25 at 08:41, Yu Kuai wrote:
>>>> We'll have to backport following patch into old kernels to make
>>>> new arrays to assemble in old kernels. ....
>>>>
>>>> The md array which i am talking about was not created with kernel
>>>> 6.19, it was created sometime in 2024.
>>>>
>>>> It was just used in kernel 6.19 and that broke compatibility with
>>>> my 6.18 kernel.
>>>
>>> I know, I mean any array that is created or assembled in new kernels
>>> will now have lsb field stored in metadata. This field is not
>>> defined in old kernels and that's why array can't assembled in old
>>> kernels, due to unknown metadata.
>>>
>>> This is what we have to do for new features, and we're planning to
>>> avoid the forward compatibility issue with the above patch that I
>>> mentioned.
>> Is there really no way around it? Just testing a new kernel and being
>> able to go back must be supported in my opinion, at least between one
>> or two LTS versions.
> 
> As I said, following patch should be backported to LTS kernels to avoid the problem.
> https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com

That's nothing you can rely on - you can write as many patches as you 
like, but if and when they get included in random binary kernels is not 
controllable.

The current situation is that somebody tests a new kernel, and 
afterwards his RAID has been irreversibly changed and can't be used with 
the previous kernel.

That's neither expectable nor acceptable.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once
  2025-12-17  8:02           ` Reindl Harald
@ 2025-12-17  8:33             ` Yu Kuai
  2025-12-17 13:07               ` Thorsten Leemhuis
  0 siblings, 1 reply; 17+ messages in thread
From: Yu Kuai @ 2025-12-17  8:33 UTC (permalink / raw)
  To: Reindl Harald, Paul Menzel
  Cc: bugreports61, linux-raid, linan122, xni, regressions,
	Linus Torvalds, yukuai

Hi,

On 2025/12/17 16:02, Reindl Harald wrote:
>
>
> On 17.12.25 at 08:41, Yu Kuai wrote:
>>>>> We'll have to backport following patch into old kernels to make
>>>>> new arrays to assemble in old kernels. ....
>>>>>
>>>>> The md array which i am talking about was not created with kernel
>>>>> 6.19, it was created sometime in 2024.
>>>>>
>>>>> It was just used in kernel 6.19 and that broke compatibility with
>>>>> my 6.18 kernel.
>>>>
>>>> I know, I mean any array that is created or assembled in new kernels
>>>> will now have lsb field stored in metadata. This field is not
>>>> defined in old kernels and that's why array can't assembled in old
>>>> kernels, due to unknown metadata.
>>>>
>>>> This is what we have to do for new features, and we're planning to
>>>> avoid the forward compatibility issue with the above patch that I
>>>> mentioned.
>>> Is there really no way around it? Just testing a new kernel and being
>>> able to go back must be supported in my opinion, at least between one
>>> or two LTS versions.
>>
>> As I said, following patch should be backported to LTS kernels to 
>> avoid the problem.
>> https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com 
>>
>
> that's nothing you can rely on - yo can write as many pachtes as you 
> will but if and when they are included in random binary kernels is not 
> controllable
>
> the current situation is somebody tests a new kernel and after that 
> his RAID got unrevertable changed and can't be used with the previous 
> kernel
>
> that's not expectable nor acceptable

I'll explain a bit more about the lbs.

There is a long-standing problem, present from day one and reported several times, that
array data can be broken when:
  - the user adds a new disk to the array;
  - some member disks have failed.

The lbs in the metadata is used to fix this problem. However, mdraid is designed to refuse
unknown metadata fields; this doesn't make much sense, but that's the fact.

Any array that is assembled or created in new kernels will have the lbs field stored in its
metadata, to prevent the data loss problem. I know we don't expect forward compatibility
issues, but I don't think this is unacceptable. We'll provide a solution, but we can't make
guarantees for arbitrary binary kernels.

-- 
Thanks,
Kuai

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once
  2025-12-17  8:33             ` Yu Kuai
@ 2025-12-17 13:07               ` Thorsten Leemhuis
  2025-12-17 13:45                 ` Yu Kuai
  0 siblings, 1 reply; 17+ messages in thread
From: Thorsten Leemhuis @ 2025-12-17 13:07 UTC (permalink / raw)
  To: Jens Axboe
  Cc: bugreports61, linux-raid, linan122, xni, regressions, yukuai,
	Linus Torvalds, Paul Menzel, Reindl Harald, Song Liu

Bringing Jens in (and Song Liu, too), as the patches that cause this
afaics went through his tree -- so he is the right point of contact in
the hierarchy.

FWIW, thread starts here:
https://lore.kernel.org/all/b3e941b0-38d1-4809-a386-34659a20415e@gmail.com/

Rough and short summary of the problem, afaiui: md raids assembled with
6.19-rc1 can no longer be assembled with 6.18; see below for details.
That, to my understanding of things, is not okay, even if it could be
fixed by backporting a patch (which is an option here).

Ciao, Thorsten

On 12/17/25 09:33, Yu Kuai wrote:
> On 2025/12/17 16:02, Reindl Harald wrote:
>> Am 17.12.25 um 08:41 schrieb Yu Kuai:
>>>>>> We'll have to backport following patch into old kernels to make
>>>>>> new arrays to assemble in old kernels. ....
>>>>>>
>>>>>> The md array which i am talking about was not created with kernel
>>>>>> 6.19, it was created sometime in 2024.
>>>>>>
>>>>>> It was just used in kernel 6.19 and that broke compatibility with
>>>>>> my 6.18 kernel.
>>>>>
>>>>> I know, I mean any array that is created or assembled in new kernels
>>>>> will now have lsb field stored in metadata. This field is not
>>>>> defined in old kernels and that's why array can't assembled in old
>>>>> kernels, due to unknown metadata.
>>>>>
>>>>> This is what we have to do for new features, and we're planning to
>>>>> avoid the forward compatibility issue with the above patch that I
>>>>> mentioned.
>>>> Is there really no way around it? Just testing a new kernel and being
>>>> able to go back must be supported in my opinion, at least between one
>>>> or two LTS versions.
>>>
>>> As I said, following patch should be backported to LTS kernels to 
>>> avoid the problem.
>>> https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com 
>>>
>>
>> that's nothing you can rely on - yo can write as many pachtes as you 
>> will but if and when they are included in random binary kernels is not 
>> controllable
>>
>> the current situation is somebody tests a new kernel and after that 
>> his RAID got unrevertable changed and can't be used with the previous 
>> kernel
>>
>> that's not expectable nor acceptable
> 
> I'll explain a bit more about the lbs.
> 
> There is a long long term problem from day one, and reported several times, that array data
> can be broken when:
>   - user add a new disk to the array;
>   - some member disks are failed;
> 
> lbs in metadata is used to fix this problem. However, mdraid is designed to refuse new metadata
> fields, this doesn't make sense but that's the fact.
> 
> Any array that is assembled or created in new kernels will have lbs filed stored in metadata, to
> prevent the data loss problem. I know we're not expecting forward compatibility issue, but I don't
> think this is not acceptable. We'll provide a solution but we can't guarantee for any binary
> kernels.
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-17  7:06 ` Yu Kuai
  2025-12-17  7:13   ` BugReports
@ 2025-12-17 13:24   ` Li Nan
  2025-12-18 10:41     ` Bugreports61
  1 sibling, 1 reply; 17+ messages in thread
From: Li Nan @ 2025-12-17 13:24 UTC (permalink / raw)
  To: yukuai, BugReports, linux-raid, xni



On 2025/12/17 15:06, Yu Kuai wrote:
> Hi,
> 
> On 2025/12/17 14:58, BugReports wrote:
>> Hi,
>>
>> i hope i am reaching out to the correct mailing list and this is the
>> way to correctly report issues with rc kernels.
>>
>> I installed kernel 6.19-rc 1 recently (with linux-tkg, but that should
>> not matter).  Booting the 6.19 rc1 kernel worked fine and i could
>> access my md raid 1.
>>
>> After that i wanted to switch back to kernel 6.18.1 and noticed the
>> following:
>>
>> - I can not access the raid 1 md anymore as it does not assemble anymore
>>
>> - The following error message shows up when i try to assemble the raid:
>>
>> mdadm: /dev/sdc1 is identified as a member of /dev/md/1, slot 0.
>> mdadm: /dev/sda1 is identified as a member of /dev/md/1, slot 1.
>> mdadm: failed to add /dev/sda1 to /dev/md/1: Invalid argument
>> mdadm: failed to add /dev/sdc1 to /dev/md/1: Invalid argument
>>
>> - The following errors show up in dmesg:
>>
>> [Di, 16. Dez 2025, 11:50:38] md: md1 stopped.
>> [Di, 16. Dez 2025, 11:50:38] md: sda1 does not have a valid v1.2 superblock, not importing!
>> [Di, 16. Dez 2025, 11:50:38] md: md_import_device returned -22
>> [Di, 16. Dez 2025, 11:50:38] md: sdc1 does not have a valid v1.2 superblock, not importing!
>> [Di, 16. Dez 2025, 11:50:38] md: md_import_device returned -22
>> [Di, 16. Dez 2025, 11:50:38] md: md1 stopped.
>>
>> - mdadm --examine used with kernel 6.18 shows the following:
>>
>> /dev/sdc1:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x1
>>      Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>            Name : gamebox:1  (local to host gamebox)
>>   Creation Time : Tue Nov 26 20:39:09 2024
>>      Raid Level : raid1
>>    Raid Devices : 2
>>  Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
>>      Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
>>     Data Offset : 264192 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=264112 sectors, after=0 sectors
>>           State : clean
>>     Device UUID : 9f185862:a11d8deb:db6d708e:a7cc6a91
>> Internal Bitmap : 8 sectors from superblock
>>     Update Time : Mon Dec 15 22:40:46 2025
>>   Bad Block Log : 512 entries available at offset 16 sectors
>>        Checksum : f11e2fa5 - correct
>>          Events : 2618
>>     Device Role : Active device 0
>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>>
>> /dev/sda1:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x1
>>      Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>            Name : gamebox:1  (local to host gamebox)
>>   Creation Time : Tue Nov 26 20:39:09 2024
>>      Raid Level : raid1
>>    Raid Devices : 2
>>  Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
>>      Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
>>     Data Offset : 264192 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=264112 sectors, after=0 sectors
>>           State : clean
>>     Device UUID : fc196769:0e25b5af:dfc6cab6:639ac8f9
>> Internal Bitmap : 8 sectors from superblock
>>     Update Time : Mon Dec 15 22:40:46 2025
>>   Bad Block Log : 512 entries available at offset 16 sectors
>>        Checksum : 4d0d5f31 - correct
>>          Events : 2618
>>     Device Role : Active device 1
>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>>
>> - mdadm --detail shows the following on 6.19-rc1 (I am using the 6.19
>> output as it does not work anymore in 6.18.1):
>>
>> /dev/md1:
>>            Version : 1.2
>>      Creation Time : Tue Nov 26 20:39:09 2024
>>         Raid Level : raid1
>>         Array Size : 1929939968 (1840.53 GiB 1976.26 GB)
>>      Used Dev Size : 1929939968 (1840.53 GiB 1976.26 GB)
>>       Raid Devices : 2
>>      Total Devices : 2
>>        Persistence : Superblock is persistent
>>      Intent Bitmap : Internal
>>        Update Time : Tue Dec 16 13:14:10 2025
>>              State : clean
>>     Active Devices : 2
>>    Working Devices : 2
>>     Failed Devices : 0
>>      Spare Devices : 0
>> Consistency Policy : bitmap
>>               Name : gamebox:1  (local to host gamebox)
>>               UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>             Events : 2618
>>
>>     Number   Major   Minor   RaidDevice State
>>        0       8      33        0      active sync   /dev/sdc1
>>        1       8       1        1      active sync   /dev/sda1
>>
>>
>> I didn't spot any obvious issue in the mdadm --examine output on
>> kernel 6.18 pointing to why it thinks this is not a valid 1.2
>> superblock.
>>
>> The md raid still works nicely on kernel 6.19, but I am unable to use
>> it on kernel 6.18 (it worked fine before booting 6.19).
>>
>> Is kernel 6.19-rc1 making adjustments to the md superblock when the md
>> is used which are not compatible with older kernels (the md was
>> created back in Nov 2024)?
> 
> I believe this is because lbs (logical block size) is now stored in the metadata of md
> arrays, while this field is still not defined in old kernels; see details in the following set:
> 
> [PATCH v9 0/5] make logical block size configurable - linan666 <https://lore.kernel.org/linux-raid/20251103125757.1405796-1-linan666@huaweicloud.com/>
> 
> We'll have to backport the following patch into old kernels so that new
> arrays can be assembled in old kernels.
> 
> https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com
> 
> +CC Nan, would you mind backporting the above patch into stable kernels?
> 

Sent to stable kernels from 6.18 down to 5.10:

https://lore.kernel.org/stable/20251217130513.2706844-1-linan666@huaweicloud.com/T/#u
https://lore.kernel.org/stable/20251217130935.2712267-1-linan666@huaweicloud.com/T/#u
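For anyone who wants to check whether a member device's v1.2 superblock already carries data in the padding area that pre-6.19 kernels reject, here is a small read-only sketch. The offsets are assumptions based on the mdp_superblock_1 layout as I recall it (superblock 8 sectors from the device/partition start, pad3 area at byte 224 within the superblock); verify against include/uapi/linux/raid/md_p.h for your kernel before relying on it:

```python
import struct

MD_SB_MAGIC = 0xa92b4efc   # md v1.x superblock magic
SB_OFFSET = 8 * 512        # v1.2: superblock lives 8 sectors into the device
PAD3_OFFSET = 224          # assumed offset of pad3[] within mdp_superblock_1
PAD3_LEN = 32              # assumed size of the pad3 padding area

def pad3_nonzero(path):
    """Return True if the v1.2 superblock on `path` has non-zero pad3
    bytes, i.e. metadata that kernels without the lbs field reject."""
    with open(path, "rb") as f:
        f.seek(SB_OFFSET)
        sb = f.read(256)
    (magic,) = struct.unpack_from("<I", sb, 0)
    if magic != MD_SB_MAGIC:
        raise ValueError("no v1.2 md superblock found at offset 4096")
    return any(sb[PAD3_OFFSET:PAD3_OFFSET + PAD3_LEN])
```

Called as e.g. pad3_nonzero('/dev/sdc1') (run as root, array stopped); a False result would mean the superblock has not yet been touched by the new field.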

-- 
Thanks,
Nan


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once
  2025-12-17 13:07               ` Thorsten Leemhuis
@ 2025-12-17 13:45                 ` Yu Kuai
  2025-12-17 13:50                   ` Reindl Harald
  0 siblings, 1 reply; 17+ messages in thread
From: Yu Kuai @ 2025-12-17 13:45 UTC (permalink / raw)
  To: Thorsten Leemhuis, Jens Axboe
  Cc: bugreports61, linux-raid, linan122, xni, regressions,
	Linus Torvalds, Paul Menzel, Reindl Harald, Song Liu, yukuai

Hi,

On 2025/12/17 21:07, Thorsten Leemhuis wrote:
> Bringing Jens in (and Song Liu, too), as the patches that cause this
> afaics went through his tree -- so he is the right point of contact in
> the hierarchy.
>
> FWIW, thread starts here:
> https://lore.kernel.org/all/b3e941b0-38d1-4809-a386-34659a20415e@gmail.com/
>
> Problem, rough and short, afaiui: mdraids assembled with 6.19-rc1 cannot
> be mounted with 6.18 any more; see below for details. To my
> understanding of things that is not okay, even if it could be fixed by
> backporting a patch (which is an option here).

AFAIK, possible options:
1) always set lbs for arrays in the new kernel, and backport the patch to
old kernels so that users can still assemble the array (current option);
2) only set lbs by default for new arrays, and leave the lbs field unset
when assembling existing arrays; in this case the data loss problem is not
fixed, so we should also print a warning and guide users to set lbs to fix
the problem, with the notice that the array will then not assemble in old
kernels;
3) revert the new feature that sets lbs for mdraids.

>
> Ciao, Thorsten
>
> On 12/17/25 09:33, Yu Kuai wrote:
>> On 2025/12/17 16:02, Reindl Harald wrote:
>>> On 17.12.25 at 08:41, Yu Kuai wrote:
>>>>>>> We'll have to backport the following patch into old kernels so
>>>>>>> that new arrays can be assembled in old kernels. ....
>>>>>>>
>>>>>>> The md array which i am talking about was not created with kernel
>>>>>>> 6.19, it was created sometime in 2024.
>>>>>>>
>>>>>>> It was just used in kernel 6.19 and that broke compatibility with
>>>>>>> my 6.18 kernel.
>>>>>> I know, I mean any array that is created or assembled in new kernels
>>>>>> will now have the lbs field stored in its metadata. This field is not
>>>>>> defined in old kernels, and that's why the array can't be assembled
>>>>>> in old kernels, due to the unknown metadata.
>>>>>>
>>>>>> This is what we have to do for new features, and we're planning to
>>>>>> avoid the forward compatibility issue with the above patch that I
>>>>>> mentioned.
>>>>> Is there really no way around it? Just testing a new kernel and being
>>>>> able to go back must be supported in my opinion, at least between one
>>>>> or two LTS versions.
>>>> As I said, following patch should be backported to LTS kernels to
>>>> avoid the problem.
>>>> https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com
>>>>
>>> that's nothing you can rely on - you can write as many patches as you
>>> want, but if and when they are included in random binary kernels is not
>>> controllable
>>>
>>> the current situation is that somebody tests a new kernel and afterwards
>>> his RAID has been irreversibly changed and can't be used with the
>>> previous kernel
>>>
>>> that's neither expectable nor acceptable
>> I'll explain a bit more about the lbs.
>>
>> There is a long-standing problem, present from day one and reported
>> several times: array data can be broken when:
>>    - the user adds a new disk to the array;
>>    - some member disks have failed.
>>
>> lbs in the metadata is used to fix this problem. However, mdraid is designed
>> to refuse unknown metadata fields; this doesn't make much sense, but that's
>> the fact.
>>
>> Any array that is assembled or created in new kernels will have the lbs
>> field stored in its metadata, to prevent the data loss problem. I know we
>> don't expect forward compatibility issues, but I don't think this is
>> unacceptable. We'll provide a solution, but we can't make guarantees for
>> arbitrary binary kernels.
>>
-- 
Thanks,
Kuai

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once
  2025-12-17 13:45                 ` Yu Kuai
@ 2025-12-17 13:50                   ` Reindl Harald
  0 siblings, 0 replies; 17+ messages in thread
From: Reindl Harald @ 2025-12-17 13:50 UTC (permalink / raw)
  To: yukuai, Thorsten Leemhuis, Jens Axboe
  Cc: bugreports61, linux-raid, linan122, xni, regressions,
	Linus Torvalds, Paul Menzel, Song Liu



On 17.12.25 at 14:45, Yu Kuai wrote:
> AFAIK, possible options:
> 1) always set lbs for arrays in the new kernel, and backport the patch to
> old kernels so that users can still assemble the array (current option);
> 2) only set lbs by default for new arrays, and leave the lbs field unset
> when assembling existing arrays; in this case the data loss problem is not
> fixed, so we should also print a warning and guide users to set lbs to fix
> the problem, with the notice that the array will then not assemble in old
> kernels;
> 3) revert the new feature that sets lbs for mdraids.
Option 2, including the guide, so that everybody can decide "I am fine, I
will never boot an older kernel again" and activate it.

The way it's done now, you burn potential future testers of new kernels
once they have been bitten.



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-17 13:24   ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once Li Nan
@ 2025-12-18 10:41     ` Bugreports61
  2025-12-18 14:54       ` Li Nan
  0 siblings, 1 reply; 17+ messages in thread
From: Bugreports61 @ 2025-12-18 10:41 UTC (permalink / raw)
  To: Li Nan, yukuai, linux-raid, xni

Hi,


Reading the threads, it was now decided that only newly created arrays
will get the lbs adjustment, and the patches to make it work on older
kernels will not happen.


How do I get my md raid1 back to a state which makes it usable again on
older kernels?

Is it safe to simply mdadm --create --assume-clean /dev/mdX /sdX /sdY
on kernel 6.18 to get the old superblock 1.2 information back without
losing data?


Many thanks!


On 17.12.25 at 14:24, Li Nan wrote:
>
>
> On 2025/12/17 15:06, Yu Kuai wrote:
>> Hi,
>>
>> On 2025/12/17 14:58, BugReports wrote:
>>> Hi,
>>>
>>> I hope I am reaching out to the correct mailing list and that this is
>>> the way to correctly report issues with rc kernels.
>>>
>>> I installed kernel 6.19-rc1 recently (with linux-tkg, but that should
>>> not matter). Booting the 6.19-rc1 kernel worked fine and I could
>>> access my md raid 1.
>>>
>>> After that I wanted to switch back to kernel 6.18.1 and noticed the
>>> following:
>>>
>>> - I can no longer access the raid 1 md, as it does not assemble
>>> anymore.
>>>
>>> - The following error messages show up when I try to assemble the raid:
>>>
>>> mdadm: /dev/sdc1 is identified as a member of /dev/md/1, slot 0.
>>> mdadm: /dev/sda1 is identified as a member of /dev/md/1, slot 1.
>>> mdadm: failed to add /dev/sda1 to /dev/md/1: Invalid argument
>>> mdadm: failed to add /dev/sdc1 to /dev/md/1: Invalid argument
>>>
>>> - The following errors show up in dmesg:
>>>
>>> [Di, 16. Dez 2025, 11:50:38] md: md1 stopped.
>>> [Di, 16. Dez 2025, 11:50:38] md: sda1 does not have a valid v1.2 superblock, not importing!
>>> [Di, 16. Dez 2025, 11:50:38] md: md_import_device returned -22
>>> [Di, 16. Dez 2025, 11:50:38] md: sdc1 does not have a valid v1.2 superblock, not importing!
>>> [Di, 16. Dez 2025, 11:50:38] md: md_import_device returned -22
>>> [Di, 16. Dez 2025, 11:50:38] md: md1 stopped.
>>>
>>> - mdadm --examine used with kernel 6.18 shows the following:
>>>
>>> /dev/sdc1:
>>>           Magic : a92b4efc
>>>         Version : 1.2
>>>     Feature Map : 0x1
>>>      Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>>            Name : gamebox:1  (local to host gamebox)
>>>   Creation Time : Tue Nov 26 20:39:09 2024
>>>      Raid Level : raid1
>>>    Raid Devices : 2
>>>  Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
>>>      Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
>>>     Data Offset : 264192 sectors
>>>    Super Offset : 8 sectors
>>>    Unused Space : before=264112 sectors, after=0 sectors
>>>           State : clean
>>>     Device UUID : 9f185862:a11d8deb:db6d708e:a7cc6a91
>>> Internal Bitmap : 8 sectors from superblock
>>>     Update Time : Mon Dec 15 22:40:46 2025
>>>   Bad Block Log : 512 entries available at offset 16 sectors
>>>        Checksum : f11e2fa5 - correct
>>>          Events : 2618
>>>     Device Role : Active device 0
>>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>>>
>>> /dev/sda1:
>>>           Magic : a92b4efc
>>>         Version : 1.2
>>>     Feature Map : 0x1
>>>      Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>>            Name : gamebox:1  (local to host gamebox)
>>>   Creation Time : Tue Nov 26 20:39:09 2024
>>>      Raid Level : raid1
>>>    Raid Devices : 2
>>>  Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
>>>      Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
>>>     Data Offset : 264192 sectors
>>>    Super Offset : 8 sectors
>>>    Unused Space : before=264112 sectors, after=0 sectors
>>>           State : clean
>>>     Device UUID : fc196769:0e25b5af:dfc6cab6:639ac8f9
>>> Internal Bitmap : 8 sectors from superblock
>>>     Update Time : Mon Dec 15 22:40:46 2025
>>>   Bad Block Log : 512 entries available at offset 16 sectors
>>>        Checksum : 4d0d5f31 - correct
>>>          Events : 2618
>>>     Device Role : Active device 1
>>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>>>
>>> - mdadm --detail shows the following on 6.19-rc1 (I am using the 6.19
>>> output as it does not work anymore in 6.18.1):
>>>
>>> /dev/md1:
>>>            Version : 1.2
>>>      Creation Time : Tue Nov 26 20:39:09 2024
>>>         Raid Level : raid1
>>>         Array Size : 1929939968 (1840.53 GiB 1976.26 GB)
>>>      Used Dev Size : 1929939968 (1840.53 GiB 1976.26 GB)
>>>       Raid Devices : 2
>>>      Total Devices : 2
>>>        Persistence : Superblock is persistent
>>>      Intent Bitmap : Internal
>>>        Update Time : Tue Dec 16 13:14:10 2025
>>>              State : clean
>>>     Active Devices : 2
>>>    Working Devices : 2
>>>     Failed Devices : 0
>>>      Spare Devices : 0
>>> Consistency Policy : bitmap
>>>               Name : gamebox:1  (local to host gamebox)
>>>               UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>>             Events : 2618
>>>
>>>     Number   Major   Minor   RaidDevice State
>>>        0       8      33        0      active sync   /dev/sdc1
>>>        1       8       1        1      active sync   /dev/sda1
>>>
>>>
>>> I didn't spot any obvious issue in the mdadm --examine output on
>>> kernel 6.18 pointing to why it thinks this is not a valid 1.2
>>> superblock.
>>>
>>> The md raid still works nicely on kernel 6.19, but I am unable to use
>>> it on kernel 6.18 (it worked fine before booting 6.19).
>>>
>>> Is kernel 6.19-rc1 making adjustments to the md superblock when the md
>>> is used which are not compatible with older kernels (the md was
>>> created back in Nov 2024)?
>>
>> I believe this is because lbs (logical block size) is now stored in
>> the metadata of md arrays, while this field is still not defined in
>> old kernels; see details in the following set:
>>
>> [PATCH v9 0/5] make logical block size configurable - linan666 
>> <https://lore.kernel.org/linux-raid/20251103125757.1405796-1-linan666@huaweicloud.com/>
>>
>> We'll have to backport the following patch into old kernels so that
>> new arrays can be assembled in old
>> kernels.
>>
>> https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com 
>>
>>
>> +CC Nan, would you mind backporting the above patch into stable kernels?
>>
>
> Sent to stable kernels from 6.18 down to 5.10:
>
> https://lore.kernel.org/stable/20251217130513.2706844-1-linan666@huaweicloud.com/T/#u 
>
> https://lore.kernel.org/stable/20251217130935.2712267-1-linan666@huaweicloud.com/T/#u 
>
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-18 10:41     ` Bugreports61
@ 2025-12-18 14:54       ` Li Nan
  2025-12-18 16:04         ` BugReports
  0 siblings, 1 reply; 17+ messages in thread
From: Li Nan @ 2025-12-18 14:54 UTC (permalink / raw)
  To: Bugreports61, Li Nan, yukuai, linux-raid, xni



On 2025/12/18 18:41, Bugreports61 wrote:
> Hi,
> 
> 
> Reading the threads, it was now decided that only newly created arrays
> will get the lbs adjustment, and the patches to make it work on older
> kernels will not happen.
> 
> 
> How do I get my md raid1 back to a state which makes it usable again on
> older kernels?
> 
> Is it safe to simply mdadm --create --assume-clean /dev/mdX /sdX /sdY
> on kernel 6.18 to get the old superblock 1.2 information back without
> losing data?
> 
> 
> Many thanks!
> 

In principle, this works, but it remains a high-risk operation. I still
recommend backporting this patch and adding the module parameter to
mitigate the risk.

https://lore.kernel.org/stable/20251217130513.2706844-1-linan666@huaweicloud.com/T/#u

This issue will be fixed upstream soon. The patch is under validation and
expected to be submitted tomorrow. However, the existing impact cannot be
undone – apologies for this.

-- 
Thanks,
Nan


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-18 14:54       ` Li Nan
@ 2025-12-18 16:04         ` BugReports
  2025-12-19  8:22           ` Li Nan
  0 siblings, 1 reply; 17+ messages in thread
From: BugReports @ 2025-12-18 16:04 UTC (permalink / raw)
  To: Li Nan, yukuai, linux-raid, xni

Hi,

OK, so sadly there is no easy way back to the original state for me.

The patch for 6.18 here: 
https://lore.kernel.org/stable/20251217130513.2706844-1-linan666@huaweicloud.com/T/#u

has the following in:

+	    memcmp(sb->pad3, sb->pad3+1, sizeof(sb->pad3) - sizeof(sb->pad3[1]))) {
+		pr_warn("Some padding is non-zero on %pg, might be a new feature\n",
+			rdev->bdev);
+		if (check_new_feature)
+			return -EINVAL;
+		pr_warn("check_new_feature is disabled, data corruption possible\n");
+	}

Data corruption (especially corruption happening silently in the
background) would be the worst case.

So is it really safe to use that patch + module option with my modified
md raid on kernel 6.18 (I can easily apply the patch to my 6.18 kernel)?

Br

On 18.12.25 at 15:54, Li Nan wrote:
>
>
> On 2025/12/18 18:41, Bugreports61 wrote:
>> Hi,
>>
>>
>> Reading the threads, it was now decided that only newly created arrays
>> will get the lbs adjustment, and the patches to make it work on older
>> kernels will not happen.
>>
>>
>> How do I get my md raid1 back to a state which makes it usable again
>> on older kernels?
>>
>> Is it safe to simply mdadm --create --assume-clean /dev/mdX /sdX
>> /sdY on kernel 6.18 to get the old superblock 1.2 information back
>> without losing data?
>>
>>
>> Many thanks!
>>
>
> In principle, this works, but it remains a high-risk operation. I still
> recommend backporting this patch and adding the module parameter to
> mitigate the
> risk.
>
> https://lore.kernel.org/stable/20251217130513.2706844-1-linan666@huaweicloud.com/T/#u 
>
>
> This issue will be fixed upstream soon. The patch is under validation and
> expected to be submitted tomorrow. However, the existing impact cannot be
> undone – apologies for this.
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
  2025-12-18 16:04         ` BugReports
@ 2025-12-19  8:22           ` Li Nan
  0 siblings, 0 replies; 17+ messages in thread
From: Li Nan @ 2025-12-19  8:22 UTC (permalink / raw)
  To: BugReports, Li Nan, yukuai, linux-raid, xni



On 2025/12/19 0:04, BugReports wrote:
> Hi,
> 
> OK, so sadly there is no easy way back to the original state for me.
> 
> The patch for 6.18 here: 
> https://lore.kernel.org/stable/20251217130513.2706844-1-linan666@huaweicloud.com/T/#u 
> 
> 
> has the following in:
> 
> +	    memcmp(sb->pad3, sb->pad3+1, sizeof(sb->pad3) - sizeof(sb->pad3[1]))) {
> +		pr_warn("Some padding is non-zero on %pg, might be a new feature\n",
> +			rdev->bdev);
> +		if (check_new_feature)
> +			return -EINVAL;
> +		pr_warn("check_new_feature is disabled, data corruption possible\n");
> +	}
> 
> Data corruption (especially corruption happening silently in the
> background) would be the worst case.
> 
> So is it really safe to use that patch + module option with my modified
> md raid on kernel 6.18 (I can easily apply the patch to my 6.18 kernel)?
> 
> Br

These dmesg messages note that using an array with new features in an old
kernel may cause data loss. If you configured the LBS in a new kernel and
then use that array in an old kernel, data loss will occur due to the LBS
change.

However, this problem does not exist if your RAID was created in an old
kernel, as the LBS will not change when rolling back.

> 
> On 18.12.25 at 15:54, Li Nan wrote:
>>
>>
>> On 2025/12/18 18:41, Bugreports61 wrote:
>>> Hi,
>>>
>>>
>>> Reading the threads, it was now decided that only newly created arrays
>>> will get the lbs adjustment, and the patches to make it work on older
>>> kernels will not happen.
>>>
>>>
>>> How do i get back my md raid1 to a state which makes it usable again on 
>>> older kernels ?
>>>
>>> Is it safe to simply mdadm --create --assume-clean /dev/mdX /sdX /sdY
>>> on kernel 6.18 to get the old superblock 1.2 information back without
>>> losing data?
>>>
>>>
>>> Many thanks!
>>>
>>
>> In principle, this works, but it remains a high-risk operation. I still
>> recommend backporting this patch and adding the module parameter to
>> mitigate the risk.
>>
>> https://lore.kernel.org/stable/20251217130513.2706844-1-linan666@huaweicloud.com/T/#u 
>>
>>
>> This issue will be fixed upstream soon. The patch is under validation and
>> expected to be submitted tomorrow. However, the existing impact cannot be
>> undone – apologies for this.
>>
> 

-- 
Thanks,
Nan


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2025-12-19  8:22 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-12-17  6:58 Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once BugReports
2025-12-17  7:06 ` Yu Kuai
2025-12-17  7:13   ` BugReports
2025-12-17  7:17     ` Yu Kuai
2025-12-17  7:25       ` BugReports
2025-12-17  7:33       ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once Paul Menzel
2025-12-17  7:41         ` Yu Kuai
2025-12-17  8:02           ` Reindl Harald
2025-12-17  8:33             ` Yu Kuai
2025-12-17 13:07               ` Thorsten Leemhuis
2025-12-17 13:45                 ` Yu Kuai
2025-12-17 13:50                   ` Reindl Harald
2025-12-17 13:24   ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once Li Nan
2025-12-18 10:41     ` Bugreports61
2025-12-18 14:54       ` Li Nan
2025-12-18 16:04         ` BugReports
2025-12-19  8:22           ` Li Nan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox