From: nterry <nigel@nigelterry.net>
To: Michal Soltys <soltys@ziu.info>
Cc: linux-raid@vger.kernel.org
Subject: Re: Raid 5 Problem
Date: Sun, 14 Dec 2008 16:34:27 -0500
Message-ID: <49457BE3.4090200@nigelterry.net>
In-Reply-To: <49457719.4000009@ziu.info>

Michal Soltys wrote:
> nterry wrote:
>>
>> Great - all working. Then I rebooted and was back to square one, with
>> only 3 drives in /dev/md0 and /dev/sdb in /dev/md_d0. So I am still
>> not understanding where /dev/md_d0 is coming from, and although I know
>> how to get things working after a reboot, this is clearly not a
>> long-term solution...
>>
>
> My guess is that the udev rules of your distro are doing mdadm
> --incremental assembly and picking up sdb as part of a nonexistent
> array from long ago (a leftover from old experiments?). Or something
> else is doing so.
>
> What does mdadm -Esvv /dev/sdb show?
>
> Add
>
> DEVICE /dev/sd[bcde]1
>
> at the top of your mdadm.conf - it should stop --incremental from
> picking up sdb, assuming that's the cause of the problem.
>
> Also note that FC9 might be trying to assemble the array during the
> initramfs stage (assuming it uses one) and having problems there. I've
> never used Fedora, so it's hard for me to tell - but definitely take a
> peek there, particularly at the udev and mdadm parts of things.
>
[root@homepc ~]# mdadm -Esvv /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 0.90.02
           UUID : c57d50aa:1b3bcabd:ab04d342:6049b3f1
  Creation Time : Thu Dec 15 15:29:36 2005
     Raid Level : raid5
  Used Dev Size : 245111552 (233.76 GiB 250.99 GB)
     Array Size : 245111552 (233.76 GiB 250.99 GB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0

    Update Time : Wed Apr  5 13:43:20 2006
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 2bd59790 - correct
         Events : 1530654

         Layout : left-symmetric
     Chunk Size : 128K

      Number   Major   Minor   RaidDevice State
this     2      22        0        2      spare

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2      22        0        2      spare
[root@homepc ~]#
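So the whole-disk device /dev/sdb still carries a stale superblock from
an old two-disk array (last updated April 2006), which is presumably
what --incremental keeps latching onto. If that metadata really is dead
- and only then, since this is destructive - I gather the usual way to
clear it would be something like:

   # wipe the stale md superblock on the whole disk (destructive!)
   mdadm --zero-superblock /dev/sdb

That should touch only the whole-disk superblock; the real member
superblock lives near the end of the /dev/sdb1 partition, but it would
be worth double-checking with mdadm -E /dev/sdb1 afterwards.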

I added the DEVICE /dev/sd[bcde]1 line to mdadm.conf and that appears
to have fixed the problem - two reboots, and it worked both times.

I also note now that:
[root@homepc ~]# mdadm --examine --scan
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=50e3173e:b5d2bdb6:7db3576b:644409bb
   spares=1
[root@homepc ~]#
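So the top of my mdadm.conf now looks roughly like this (a sketch - the
ARRAY line is just the --examine --scan output above, copied in):

   # restrict scanning and --incremental to the real member partitions
   DEVICE /dev/sd[bcde]1
   # array line as reported by mdadm --examine --scan
   ARRAY /dev/md0 level=raid5 num-devices=4 UUID=50e3173e:b5d2bdb6:7db3576b:644409bb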

Frankly, I don't know enough about the workings of udev and the boot
process to get into that. However, these two files might mean something
to you:

[root@homepc ~]# cat /etc/udev/rules.d/64-md-raid.rules
# do not edit this file, it will be overwritten on update

SUBSYSTEM!="block", GOTO="md_end"
ACTION!="add|change", GOTO="md_end"

# import data from a raid member and activate it
#ENV{ID_FS_TYPE}=="linux_raid_member", IMPORT{program}="/sbin/mdadm --examine --export $tempnode", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
# import data from a raid set
KERNEL!="md*", GOTO="md_end"

ATTR{md/array_state}=="|clear|inactive", GOTO="md_end"

IMPORT{program}="/sbin/mdadm --detail --export $tempnode"
ENV{MD_NAME}=="?*", SYMLINK+="disk/by-id/md-name-$env{MD_NAME}"
ENV{MD_UUID}=="?*", SYMLINK+="disk/by-id/md-uuid-$env{MD_UUID}"

IMPORT{program}="vol_id --export $tempnode"
OPTIONS="link_priority=100"
ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}"
ENV{ID_FS_USAGE}=="filesystem|other", ENV{ID_FS_LABEL_ENC}=="?*", SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"

LABEL="md_end"
[root@homepc ~]#
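If I read that right, the --incremental rule in this file is commented
out anyway; the active part just imports array details and creates
/dev/disk/by-id symlinks for assembled arrays. Once md0 is up, something
like this should list them (a guess on my part):

   ls -l /dev/disk/by-id/md-uuid-* /dev/disk/by-id/md-name-*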

AND...

[root@homepc ~]# cat /etc/udev/rules.d/70-mdadm.rules
# This file causes block devices with Linux RAID (mdadm) signatures to
# automatically cause mdadm to be run.
# See udev(8) for syntax

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*", \
    RUN+="/sbin/mdadm -I --auto=yes $root/%k"
[root@homepc ~]#
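So this second file looks like the culprit: every block device carrying
a linux_raid* signature gets handed to mdadm -I, which is presumably how
the bare /dev/sdb, with its stale superblock, kept ending up in
/dev/md_d0. With the DEVICE line in place, repeating what the rule does
by hand should now be refused, if I understand --incremental correctly:

   # what the udev rule effectively runs for sdb ($root/%k -> /dev/sdb);
   # with the DEVICE line in mdadm.conf this should now be rejected,
   # since /dev/sdb does not match /dev/sd[bcde]1
   /sbin/mdadm -I --auto=yes /dev/sdb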

Thanks for getting me working

Nigel