linux-raid.vger.kernel.org archive mirror
* RAID1 working correctly, error messages during boot
@ 2015-05-18 20:29 Hans Malissa
  2015-05-21 11:20 ` NeilBrown
  0 siblings, 1 reply; 3+ messages in thread
From: Hans Malissa @ 2015-05-18 20:29 UTC (permalink / raw)
  To: linux-raid

I have a software-RAID1 that seems to be working correctly:

# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdc1[1] sdb1[0]
      976629568 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun May 17 15:21:30 2015
     Raid Level : raid1
     Array Size : 976629568 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976629568 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon May 18 10:28:36 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : eprb21:0  (local to host eprb21)
           UUID : 0901fe50:444a29b6:d3caff14:e45ef9cc
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

On the other hand, when the system boots, I briefly see the following messages:

doing fast boot
Creating device nodes with udev
udevd[174]: failed to execute ‘/sbin/mdadm’ ‘/sbin/mdadm --incremental /dev/sdb1 --offroot’: No such file or directory

udevd[175]: failed to execute ‘/sbin/mdadm’ ‘/sbin/mdadm --incremental /dev/sdc1 --offroot’: No such file or directory

But otherwise the system appears to run normally. After booting, /dev/md0 seems to be working correctly.
What does it mean, and should I worry about it? What can I do about it? My system is openSUSE 12.2.
Thanks a lot,

Hans Malissa
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: RAID1 working correctly, error messages during boot
  2015-05-18 20:29 RAID1 working correctly, error messages during boot Hans Malissa
@ 2015-05-21 11:20 ` NeilBrown
  2015-05-21 11:24   ` NeilBrown
  0 siblings, 1 reply; 3+ messages in thread
From: NeilBrown @ 2015-05-21 11:20 UTC (permalink / raw)
  To: Hans Malissa; +Cc: linux-raid

[-- Attachment #1: Type: text/plain, Size: 2348 bytes --]

On Mon, 18 May 2015 14:29:23 -0600 Hans Malissa <hmalissa@me.com> wrote:

> I have a software-RAID1 that seems to be working correctly:
> 
> # cat /proc/mdstat
> Personalities : [raid1] 
> md0 : active raid1 sdc1[1] sdb1[0]
>       976629568 blocks super 1.2 [2/2] [UU]
>       
> unused devices: <none>
> 
> # mdadm --detail /dev/md0
> /dev/md0:
>         Version : 1.2
>   Creation Time : Sun May 17 15:21:30 2015
>      Raid Level : raid1
>      Array Size : 976629568 (931.39 GiB 1000.07 GB)
>   Used Dev Size : 976629568 (931.39 GiB 1000.07 GB)
>    Raid Devices : 2
>   Total Devices : 2
>     Persistence : Superblock is persistent
> 
>     Update Time : Mon May 18 10:28:36 2015
>           State : clean 
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
> 
>            Name : eprb21:0  (local to host eprb21)
>            UUID : 0901fe50:444a29b6:d3caff14:e45ef9cc
>          Events : 19
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       17        0      active sync   /dev/sdb1
>        1       8       33        1      active sync   /dev/sdc1
> 
> On the other hand, when the system boots, I briefly see the following messages:
> 
> doing fast boot
> Creating device nodes with udev
> udevd[174]: failed to execute ‘/sbin/mdadm’ ‘/sbin/mdadm --incremental /dev/sdb1 --offroot’: No such file or directory
> 
> udevd[175]: failed to execute ‘/sbin/mdadm’ ‘/sbin/mdadm --incremental /dev/sdc1 --offroot’: No such file or directory
> 
> But otherwise the system appears to run normally. After booting, /dev/md0 seems to be working correctly.
> What does it mean, and should I worry about it? What can I do about it? My system is openSUSE 12.2.
> Thanks a lot,
> 

I suspect that some udev scripts on the initrd say to run mdadm, but mdadm
isn't installed on the initrd.
/dev/md0 doesn't hold the root filesystem, does it?  mdadm is only installed
on the initrd if it is needed for root, swap, or suspend-to-disk.
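
A quick way to check, assuming a fairly standard setup, is to look at
where root and swap actually live:

# df /
# swapon -s

If neither of those shows an md device, then mdadm isn't needed in the
initrd, which would explain why it isn't there.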

If you really wanted to get rid of the messages - which are definitely
harmless - you would need to recreate the initrd either without those udev
rules files, or with mdadm.
Adding the 'md' arg to the "mkinitrd" command might be sufficient, but I
don't have a 12.2 install lying around that I can play with.
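
One way to see which of those applies is to list the initrd contents
(this assumes the usual gzip-compressed cpio initrd and openSUSE's
/boot/initrd symlink; adjust the path and the rules-file name to match
your system):

# zcat /boot/initrd | cpio -it --quiet | grep -E 'mdadm|md-raid'

If the md udev rules file shows up but sbin/mdadm does not, that is
exactly the situation that produces those "No such file or directory"
messages.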

NeilBrown

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: RAID1 working correctly, error messages during boot
  2015-05-21 11:20 ` NeilBrown
@ 2015-05-21 11:24   ` NeilBrown
  0 siblings, 0 replies; 3+ messages in thread
From: NeilBrown @ 2015-05-21 11:24 UTC (permalink / raw)
  To: Hans Malissa; +Cc: linux-raid

[-- Attachment #1: Type: text/plain, Size: 256 bytes --]

On Thu, 21 May 2015 21:20:49 +1000 NeilBrown <neilb@suse.de> wrote:

> Adding the 'md' arg to the "mkinitrd" command might be sufficient, but I
> don't have a 12.2 install lying around that I can play with.

Make that  "mkinitrd -f md".
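
For example (untested on 12.2, so treat this as a rough sketch):

# mkinitrd -f md
# zcat /boot/initrd | cpio -it --quiet | grep mdadm

After rebuilding, sbin/mdadm should appear in the listing and the udev
messages at boot should go away.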

NeilBrown

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2015-05-21 11:24 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-18 20:29 RAID1 working correctly, error messages during boot Hans Malissa
2015-05-21 11:20 ` NeilBrown
2015-05-21 11:24   ` NeilBrown

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).