* Possibly wrong exit status for mdadm --misc --test
@ 2024-07-15 4:59 Justinas Naruševičius
From: Justinas Naruševičius @ 2024-07-15 4:59 UTC (permalink / raw)
To: linux-raid
Hello,
After a reboot, a raid1 array with one failed drive is reported as degraded (the failed drive is reported as removed):
> root@rico ~ # mdadm --detail /dev/md127
> /dev/md127:
> Version : 1.2
> Creation Time : Thu Feb 21 13:28:21 2019
> Raid Level : raid1
> Array Size : 57638912 (54.97 GiB 59.02 GB)
> Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
> Raid Devices : 2
> Total Devices : 1
> Persistence : Superblock is persistent
>
> Update Time : Mon Jul 15 07:25:12 2024
> State : clean, degraded
> Active Devices : 1
> Working Devices : 1
> Failed Devices : 0
> Spare Devices : 0
>
> Consistency Policy : resync
>
> Name : sabretooth:root-raid1
> UUID : 1f1f3113:0b87a325:b9ad1414:0fe55600
> Events : 323644
>
> Number Major Minor RaidDevice State
> - 0 0 0 removed
> 2 8 2 1 active sync /dev/sda2
However, testing this state with mdadm --misc --test returns 0:
> root@rico ~ # mdadm --misc --test /dev/md127
> root@rico ~ # echo $?
> 0
> root@rico ~ #
From the man page:
> if the --test option is given, then the exit status
> will be:
> 0 The array is functioning normally.
> 1 The array has at least one failed device.
> 2 The array has multiple failed devices such that it is unusable.
> 4 There was an error while trying to get information about the device.
From --help output:
> root@rico ~ # mdadm --misc --help| grep test
> --test -t : exit status 0 if ok, 1 if degrade, 2 if dead, 4 if missing
I would expect the exit code to be 1.
Can anyone confirm this is expected behaviour?
> root@rico ~ # mdadm -V
> mdadm - v4.3 - 2024-02-15
> root@rico ~ #
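In the meantime, a degraded check can be scripted by parsing the "State :" line of mdadm --detail output instead of relying on the --test exit status. A minimal sketch in Python — the parsing assumes the output format shown above, and is_degraded/check_array are illustrative names, not part of any mdadm API:

```python
import subprocess

def is_degraded(detail_output: str) -> bool:
    # Look for the "State :" line and report whether it mentions
    # "degraded"; the line format is taken from the output shown above.
    for line in detail_output.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "State":
            return "degraded" in value
    return False

def check_array(device: str) -> bool:
    # Run `mdadm --detail <device>` and inspect its State line.
    result = subprocess.run(
        ["mdadm", "--detail", device],
        capture_output=True, text=True, check=True,
    )
    return is_degraded(result.stdout)
```

For the array above, is_degraded() sees "State : clean, degraded" and returns True, so a monitoring script gets a usable signal even though --test exits 0.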
--
Regards,
Justinas Naruševičius
* Re: Possibly wrong exit status for mdadm --misc --test
From: Mariusz Tkaczyk @ 2024-07-15 14:34 UTC (permalink / raw)
To: Justinas Naruševičius; +Cc: linux-raid
On Mon, 15 Jul 2024 07:59:36 +0300
Justinas Naruševičius <contact@junaru.com> wrote:
> Hello,
>
> After a reboot, a raid1 array with one failed drive is reported as degraded
> (the failed drive is reported as removed):
>
> [...]
>
> I would expect the exit code to be 1.
>
> Can anyone confirm this is expected behaviour?
Hello,
This is old functionality, but from what I can see it only makes sense when
combined with a Manage command such as mdadm --remove. The --test option is
not meant to be used on its own, which is why it does not work for you.
Thanks,
Mariusz
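For scripting purposes, the degraded state can also be read from /proc/mdstat. A sketch under the assumption of the usual layout ("mdN : ..." followed by a "[n/m] [U_...]" status line, where "_" marks a missing or failed member):

```python
import re

def degraded_arrays(mdstat_text: str) -> list:
    # Collect md array names whose member-status string ("[UU]", "[_U]", ...)
    # contains "_", i.e. a missing or failed member. Assumes the usual
    # /proc/mdstat layout: "mdN : ..." followed by a "[n/m] [U_...]" line.
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        header = re.match(r"^(md\d+) :", line)
        if header:
            current = header.group(1)
            continue
        status = re.search(r"\[\d+/\d+\] \[([U_]+)\]", line)
        if current and status:
            if "_" in status.group(1):
                degraded.append(current)
            current = None
    return degraded
```

On kernels that expose it, the sysfs attribute /sys/block/mdX/md/degraded reports the same information directly.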