public inbox for linux-raid@vger.kernel.org
From: BugReports <bugreports61@gmail.com>
To: yukuai@fnnas.com, linux-raid@vger.kernel.org,
	linan122@huawei.com, xni@redhat.com
Subject: Re: Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once
Date: Wed, 17 Dec 2025 08:13:37 +0100	[thread overview]
Message-ID: <619a9b00-43dd-4897-8bb0-9ff29a760f52@gmail.com> (raw)
In-Reply-To: <8fd97a33-eb5a-4c88-ac8a-bfa1dd2ced61@fnnas.com>

Hi,


...

We'll have to backport the following patch into old kernels so that new
arrays can be assembled by old kernels.

....


The md array I am talking about was not created with kernel 6.19;
it was created sometime in 2024.

It was merely used under kernel 6.19, and that broke compatibility with my
6.18 kernel.


Br !


On 17.12.25 at 08:06, Yu Kuai wrote:
> Hi,
>
> On 2025/12/17 14:58, BugReports wrote:
>> Hi,
>>
>> I hope I am reaching out to the correct mailing list and that this is
>> the right way to report issues with rc kernels.
>>
>> I recently installed kernel 6.19-rc1 (with linux-tkg, but that should
>> not matter). Booting the 6.19-rc1 kernel worked fine and I could
>> access my md RAID 1.
>>
>> After that I wanted to switch back to kernel 6.18.1 and noticed the
>> following:
>>
>> - I cannot access the RAID 1 md anymore, as it no longer assembles
>>
>> - The following error message shows up when I try to assemble the raid:
>>
>> mdadm: /dev/sdc1 is identified as a member of /dev/md/1, slot 0.
>> mdadm: /dev/sda1 is identified as a member of /dev/md/1, slot 1.
>> mdadm: failed to add /dev/sda1 to /dev/md/1: Invalid argument
>> mdadm: failed to add /dev/sdc1 to /dev/md/1: Invalid argument
>>
>> - The following errors show up in dmesg:
>>
>> [Di, 16. Dez 2025, 11:50:38] md: md1 stopped.
>> [Di, 16. Dez 2025, 11:50:38] md: sda1 does not have a valid v1.2 superblock, not importing!
>> [Di, 16. Dez 2025, 11:50:38] md: md_import_device returned -22
>> [Di, 16. Dez 2025, 11:50:38] md: sdc1 does not have a valid v1.2 superblock, not importing!
>> [Di, 16. Dez 2025, 11:50:38] md: md_import_device returned -22
>> [Di, 16. Dez 2025, 11:50:38] md: md1 stopped.
>>
>> - mdadm --examine used with kernel 6.18 shows the following:
>>
>> $ cat mdadmin618.txt
>> /dev/sdc1:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x1
>>      Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>            Name : gamebox:1  (local to host gamebox)
>>   Creation Time : Tue Nov 26 20:39:09 2024
>>      Raid Level : raid1
>>    Raid Devices : 2
>>  Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
>>      Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
>>     Data Offset : 264192 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=264112 sectors, after=0 sectors
>>           State : clean
>>     Device UUID : 9f185862:a11d8deb:db6d708e:a7cc6a91
>> Internal Bitmap : 8 sectors from superblock
>>     Update Time : Mon Dec 15 22:40:46 2025
>>   Bad Block Log : 512 entries available at offset 16 sectors
>>        Checksum : f11e2fa5 - correct
>>          Events : 2618
>>     Device Role : Active device 0
>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>>
>> /dev/sda1:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x1
>>      Array UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>            Name : gamebox:1  (local to host gamebox)
>>   Creation Time : Tue Nov 26 20:39:09 2024
>>      Raid Level : raid1
>>    Raid Devices : 2
>>  Avail Dev Size : 3859879936 sectors (1840.53 GiB 1976.26 GB)
>>      Array Size : 1929939968 KiB (1840.53 GiB 1976.26 GB)
>>     Data Offset : 264192 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=264112 sectors, after=0 sectors
>>           State : clean
>>     Device UUID : fc196769:0e25b5af:dfc6cab6:639ac8f9
>> Internal Bitmap : 8 sectors from superblock
>>     Update Time : Mon Dec 15 22:40:46 2025
>>   Bad Block Log : 512 entries available at offset 16 sectors
>>        Checksum : 4d0d5f31 - correct
>>          Events : 2618
>>     Device Role : Active device 1
>>     Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>>
>> - mdadm --detail shows the following on 6.19-rc1 (I am using the 6.19
>>   output as it does not work anymore on 6.18.1):
>>
>> /dev/md1:
>>            Version : 1.2
>>      Creation Time : Tue Nov 26 20:39:09 2024
>>         Raid Level : raid1
>>         Array Size : 1929939968 (1840.53 GiB 1976.26 GB)
>>      Used Dev Size : 1929939968 (1840.53 GiB 1976.26 GB)
>>       Raid Devices : 2
>>      Total Devices : 2
>>        Persistence : Superblock is persistent
>>      Intent Bitmap : Internal
>>        Update Time : Tue Dec 16 13:14:10 2025
>>              State : clean
>>     Active Devices : 2
>>    Working Devices : 2
>>     Failed Devices : 0
>>      Spare Devices : 0
>> Consistency Policy : bitmap
>>               Name : gamebox:1  (local to host gamebox)
>>               UUID : 3b786bf1:559584b0:b9eabbe2:82bdea18
>>             Events : 2618
>>
>>     Number   Major   Minor   RaidDevice   State
>>        0       8      33         0        active sync   /dev/sdc1
>>        1       8       1         1        active sync   /dev/sda1
>>
>>
>> I didn't spot any obvious issue in the mdadm --examine output on kernel
>> 6.18 that points to why it thinks this is not a valid 1.2 superblock.
>>
>> The md RAID still works fine on kernel 6.19, but I am unable to use it
>> on kernel 6.18 (it worked fine before booting 6.19).
>>
>> Is kernel 6.19-rc1 making adjustments to the md superblock when the
>> array is used that are not compatible with older kernels (the array was
>> created back in Nov 2024)?
> I believe this is because the logical block size (lbs) is now stored in
> the metadata of md arrays, while this field is still not defined in old
> kernels; see details in the following set:
>
> [PATCH v9 0/5] make logical block size configurable - linan666 <https://lore.kernel.org/linux-raid/20251103125757.1405796-1-linan666@huaweicloud.com/>
>
> We'll have to backport the following patch into old kernels so that new
> arrays can be assembled by old kernels.
>
> https://lore.kernel.org/linux-raid/20251103125757.1405796-5-linan666@huaweicloud.com
>
> +CC Nan, would you mind backporting the above patch into stable kernels?
>
>>
>> Many thx !
>>
>>
>>

Thread overview: 17+ messages
2025-12-17  6:58 Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once BugReports
2025-12-17  7:06 ` Yu Kuai
2025-12-17  7:13   ` BugReports [this message]
2025-12-17  7:17     ` Yu Kuai
2025-12-17  7:25       ` BugReports
2025-12-17  7:33       ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19-rc1 once Paul Menzel
2025-12-17  7:41         ` Yu Kuai
2025-12-17  8:02           ` Reindl Harald
2025-12-17  8:33             ` Yu Kuai
2025-12-17 13:07               ` Thorsten Leemhuis
2025-12-17 13:45                 ` Yu Kuai
2025-12-17 13:50                   ` Reindl Harald
2025-12-17 13:24   ` Issues with md raid 1 on kernel 6.18 after booting kernel 6.19rc1 once Li Nan
2025-12-18 10:41     ` Bugreports61
2025-12-18 14:54       ` Li Nan
2025-12-18 16:04         ` BugReports
2025-12-19  8:22           ` Li Nan
