public inbox for linux-raid@vger.kernel.org
From: Franco Martelli <martellif67@gmail.com>
To: Xiao Ni <xni@redhat.com>
Cc: linux-raid@vger.kernel.org, ncroxon@redhat.com, mtkaczyk@kernel.org
Subject: Re: Unable to set group_thread_cnt using mdadm.conf
Date: Mon, 20 Oct 2025 21:22:19 +0200	[thread overview]
Message-ID: <b0db39dc-f261-4bc3-aac3-e983150ba8c7@gmail.com> (raw)
In-Reply-To: <CALTww2_0rAvqc=C0zAP7pdGT-V7-ypMV5Rc=dk4iKS4VkAVE7Q@mail.gmail.com>

On 20/10/25 at 16:40, Xiao Ni wrote:
> On Fri, Oct 17, 2025 at 10:07 PM Franco Martelli <martellif67@gmail.com> wrote:
>>
>> Hello,
>>
>> I've a RAID5 array on Debian 13:
>>
>> ~$ cat /proc/mdstat
>> Personalities : [raid6] [raid5] [raid4] [linear] [raid0] [raid1] [raid10]
>> md0 : active raid5 sda1[0] sdc1[2] sdd1[3](S) sdb1[1]
>>         1953258496 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
>>
>> unused devices: <none>
>>
>> ~# mdadm --version
>> mdadm - v4.4 - 2024-11-07 - Debian 4.4-11
>>
>> the issue is that I can't set group_thread_cnt when I use the syntax
>> described in the mdadm.conf(5) man page:
>>
>> …
>> SYSFS name=/dev/md/raid5 group_thread_cnt=4 sync_speed_max=1000000
>> SYSFS uuid=bead5eb6:31c17a27:da120ba2:7dfda40d group_thread_cnt=4 sync_speed_max=1000000
>>
>> my "mdadm.conf" is:
>>
>> ~$ grep -v '^#' /etc/mdadm/mdadm.conf
>>
>>
>> HOMEHOST <system>
>>
>> MAILADDR root
>>
>> ARRAY /dev/md/0  metadata=1.2 UUID=8bdf78b9:4cad434c:3a30392d:8463c7e0
>>      spares=1
>>
>>
>> SYSFS name=/dev/md/0 group_thread_cnt=8
>> SYSFS uuid=8bdf78b9:4cad434c:3a30392d:8463c7e0 sync_speed_max=1000000
>>
>>
>> after I make changes to the file "mdadm.conf" I rebuild the initramfs
>> image and reboot. What seems strange to me is that the other item I
>> set (sync_speed_max) is applied accordingly, while only
>> "group_thread_cnt" fails to be set (it is always 0):
>>
>> ~$ cat /sys/block/md0/md/group_thread_cnt
>> 0
>> ~$ cat /sys/block/md0/md/sync_speed_max
>> 1000000 (system)
>>
>> Why is "sync_speed_max" set while "group_thread_cnt" is not? Any clue?
> 
> Hi Franco
> 
> The sync_speed_max is not really set either, because it still shows
> "(system)" rather than "(local)". I tried in my environment, and there
> is indeed a problem when you specify a uuid in the conf file to set
> sysfs values. This is my conf:

Hi Xiao,

Yes, that's correct, I cannot set sync_speed_max; maybe the value shown
merely reflects the setting of:

~$ cat /etc/sysctl.d/md0.conf
dev.raid.speed_limit_max = 1000000

I don't know, I guess…
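For reference, the suffix the kernel appends to sync_speed_max tells you 
where the value came from: "(system)" means the global 
dev.raid.speed_limit_max sysctl is in effect, "(local)" means a 
per-array value was set via sysfs or an mdadm.conf SYSFS line. A sketch 
(speed_origin is a hypothetical helper, not part of mdadm):

```shell
# speed_origin: hypothetical helper that classifies a sync_speed_max
# reading by the suffix the md driver appends to it.
speed_origin() {
    case "$1" in
        *"(local)")  echo local ;;    # per-array override is active
        *"(system)") echo system ;;   # global sysctl default applies
        *)           echo unknown ;;
    esac
}

speed_origin "$(cat /sys/block/md0/md/sync_speed_max 2>/dev/null || echo '1000000 (system)')"
```

So a reading of "1000000 (system)" means the SYSFS line never took 
effect and the sysctl limit is what you are seeing.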

> 
> cat /etc/mdadm.conf
> ARRAY /dev/md/0  metadata=1.2 UUID=689642b7:fa1e5cf2:82c6c527:ca37716f
> SYSFS name=/dev/md/0 sync_speed_max=5000 group_thread_cnt=8
> 
> After assembling by command `mdadm -As`:
> [root@ mdadm]# cat /sys/block/md0/md/group_thread_cnt
> 8
> [root@ mdadm]# cat /sys/block/md0/md/sync_speed_max
> 5000 (local)
> 
> If I specify the uuid in the second line of the conf file, it doesn't
> work, because the uuid read from the device doesn't match the one in
> the conf file. It looks like a big-endian/little-endian problem. I'm
> looking into it.
> 
> But with your conf, the group_thread_cnt should be set successfully.
> What command do you use to assemble the array? Can you stop the array
> and reassemble it after boot?

My mdadm.conf is now:

~$ grep -v '^#' /etc/mdadm/mdadm.conf


HOMEHOST <system>

MAILADDR root

ARRAY /dev/md/0  metadata=1.2 UUID=8bdf78b9:4cad434c:3a30392d:8463c7e0
    spares=1


SYSFS name=/dev/md/0 sync_speed_max=5000000 group_thread_cnt=8


I rebuilt the initramfs image:

~# update-initramfs -uk all
update-initramfs: Generating /boot/initrd.img-6.12.48+deb13-amd64
update-initramfs: Generating /boot/initrd.img-6.12.48


The mdadm files in the initramfs are:

~# lsinitramfs /boot/initrd.img-6.12.48 |grep mdadm
etc/mdadm
etc/mdadm/mdadm.conf
etc/modprobe.d/mdadm.conf
scripts/local-block/mdadm
scripts/local-bottom/mdadm
usr/sbin/mdadm


and then I rebooted the system; group_thread_cnt is still not set:

~$ cat /sys/block/md0/md/sync_speed_max
1000000 (system)
~$ cat /sys/block/md0/md/group_thread_cnt
0
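
For what it's worth, group_thread_cnt can also be set directly at 
runtime through sysfs, though such a change does not survive a reboot. 
A sketch, written against a stand-in file so it can run unprivileged; 
on the real system the path would be 
/sys/block/md0/md/group_thread_cnt:

```shell
# Stand-in for the sysfs attribute so the snippet is safe to run
# anywhere; substitute /sys/block/md0/md/group_thread_cnt (as root)
# on the live array.
attr=${ATTR:-/tmp/group_thread_cnt}
echo 8 > "$attr"    # request 8 stripe-handling threads
cat "$attr"         # prints: 8
```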


I cannot stop the array since my root filesystem resides on it. The 
command that Debian 13 Trixie uses to assemble the array is:

~$ ps -wax|grep mdadm
     543 ?        Ss     0:00 /usr/sbin/mdadm --monitor --scan


FYI the topology of the storage on my system is:

PCI [ahci] 00:11.0 SATA controller: Advanced Micro Devices, Inc. 
[AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] (rev 40)
├scsi 0:0:0:0 ATA      ST1000DM003-1CH1 {Z1D93ZHJ}
│└sda 931.51g [8:0] Partitioned (dos)
│ └sda1 931.51g [8:1] MD raid5 (0/3) (w/ sdd1,sdc1,sdb1) in_sync 
'itek:0' {8bdf78b9-4cad-434c-3a30-392d8463c7e0}
│  └md0 1.82t [9:0] MD v1.2 raid5 (3) clean, 512k Chunk 
{8bdf78b9:-4cad-43:4c-3a30-:392d8463c7e0}
│   │               PV LVM2_member 1,82t used, 0 free 
{6HMlpl-ZLKi-4JOP-DU9i-aiZh-7IoH-Dd1Hoy}
│   └VG ld0 1,82t 0 free {BUUpNE-w0IY-Ut2I-OmiZ-err7-zXLX-F0VmjU}
│    ├dm-6 1.67t [253:6] LV data ext4 {29ed4bd3-f546-4781-988c-39f853276492}
│    ├dm-1 27.94g [253:1] LV lv1 ext4 {fa251689-956d-4a3c-b60c-0ab92ac51488}
│    ├dm-2 27.94g [253:2] LV lv2 ext4 {4baf8453-6525-4b72-aade-25de240ac4f4}
│    ├dm-3 27.94g [253:3] LV lv3 ext4 {9ad082fd-d9fb-4ddc-b3e8-a39753480319}
│    │└Mounted as /dev/mapper/ld0-lv3 @ /
│    ├dm-4 27.94g [253:4] LV lv4 ext4 {6cbd93ab-0def-4a56-8270-176c66588c45}
│    ├dm-5 27.94g [253:5] LV lv5 ext4 {665d8145-786a-461b-a4c1-7155156c40be}
│    └dm-0 9.31g [253:0] LV swap swap {f3859a53-eb70-4a7b-ac94-93433b72a723}
├scsi 1:0:0:0 ATA      ST1000DM003-1CH1 {Z1D8ZJ1H}
│└sdb 931.51g [8:16] Partitioned (dos)
│ └sdb1 931.51g [8:17] MD raid5 (1/3) (w/ sdd1,sdc1,sda1) in_sync 
'itek:0' {8bdf78b9-4cad-434c-3a30-392d8463c7e0}
│  └md0 1.82t [9:0] MD v1.2 raid5 (3) clean, 512k Chunk 
{8bdf78b9:-4cad-43:4c-3a30-:392d8463c7e0}
│                   PV LVM2_member 1,82t used, 0 free 
{6HMlpl-ZLKi-4JOP-DU9i-aiZh-7IoH-Dd1Hoy}
├scsi 2:0:0:0 ATA      ST1000DM003-1CH1 {Z1D94DF3}
│└sdc 931.51g [8:32] Partitioned (dos)
│ └sdc1 931.51g [8:33] MD raid5 (2/3) (w/ sdd1,sdb1,sda1) in_sync 
'itek:0' {8bdf78b9-4cad-434c-3a30-392d8463c7e0}
│  └md0 1.82t [9:0] MD v1.2 raid5 (3) clean, 512k Chunk 
{8bdf78b9:-4cad-43:4c-3a30-:392d8463c7e0}
│                   PV LVM2_member 1,82t used, 0 free 
{6HMlpl-ZLKi-4JOP-DU9i-aiZh-7IoH-Dd1Hoy}
├scsi 3:0:0:0 ATA      ST1000DM003-1CH1 {Z1D94DF5}
│└sdd 931.51g [8:48] Partitioned (dos)
│ └sdd1 931.51g [8:49] MD raid5 (none/3) (w/ sdc1,sdb1,sda1) spare 
'itek:0' {8bdf78b9-4cad-434c-3a30-392d8463c7e0}
│  └md0 1.82t [9:0] MD v1.2 raid5 (3) clean, 512k Chunk 
{8bdf78b9:-4cad-43:4c-3a30-:392d8463c7e0}
│                   PV LVM2_member 1,82t used, 0 free 
{6HMlpl-ZLKi-4JOP-DU9i-aiZh-7IoH-Dd1Hoy}
└scsi 4:0:0:0 ASUS     BW-16D1HT        {K9GDBBA5248}
  └sr0 1.00g [11:0] Empty/Unknown
PCI [ahci] 04:00.0 SATA controller: ASMedia Technology Inc. ASM1062 
Serial ATA Controller (rev 01)
└scsi 5:x:x:x [Empty]


Thank you again, kind regards.
-- 
Franco Martelli


Thread overview: 5+ messages
2025-10-17 14:07 Unable to set group_thread_cnt using mdadm.conf Franco Martelli
2025-10-20 14:40 ` Xiao Ni
2025-10-20 19:22   ` Franco Martelli [this message]
2025-10-21  0:33     ` Xiao Ni
  -- strict thread matches above, loose matches on Subject: below --
2025-10-14 19:53 Franco Martelli
