Linux RAID subsystem development
From: Joe Landman <joe.landman@gmail.com>
To: Jeff Johnson <jeff.johnson@aeoncomputing.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: 20 drive raid-10, CentOS5.5, after reboot assemble fails - all drives "non-fresh"
Date: Mon, 08 Aug 2011 00:40:43 -0400
Message-ID: <4E3F68CB.4060005@gmail.com>
In-Reply-To: <4E3F66F2.3030300@aeoncomputing.com>

On 08/08/2011 12:32 AM, Jeff Johnson wrote:

> It is good that I can start the raid manually, but it isn't supposed to
> work like that. Any idea why assembling from a config file would fail?
> Here is the latest version of the config file line (made with mdadm
> --examine --scan):

Jeff,

   You might need to update the raid superblocks during the manual assemble.

   mdadm --assemble --update=summaries /dev/md3 /dev/sd[c-v]1
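
   To verify, something like the following (a sketch, assuming the same
   /dev/sd[c-v]1 names from your messages) prints each member's event
   counter; after a clean assemble they should all agree:

   # Print each member's Events counter from its superblock.
   for d in /dev/sd[c-v]1 ; do
       printf '%s: ' "$d"
       mdadm --examine "$d" | grep Events
   done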

   Also, you can simplify the line below a bit, to the following:

>
> ARRAY /dev/md3 level=raid10 num-devices=20 metadata=0.90 spares=4
> UUID=e17a29e8:ec6bce5c:f13d343c:cfba4dc4

ARRAY /dev/md3 UUID=e17a29e8:ec6bce5c:f13d343c:cfba4dc4
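
   With just the UUID, mdadm matches the member devices by their
   superblocks rather than by name, so drives moving around between
   boots won't matter. A minimal mdadm.conf sketch, assuming the
   default scan of all partitions is acceptable:

   DEVICE partitions
   ARRAY /dev/md3 UUID=e17a29e8:ec6bce5c:f13d343c:cfba4dc4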

>
> --Jeff
>
> On Sun, Aug 7, 2011 at 7:56 PM, NeilBrown <neilb@suse.de> wrote:
>
> On Sun, 07 Aug 2011 19:37:04 -0700 Jeff Johnson
> <jeff.johnson@aeoncomputing.com> wrote:
>
>  > Greetings,
>  >
>  > I have a 20 drive raid-10 that has been running well for over one year.
>  > After the most recent system boot the raid will not assemble.
>  > /var/log/messages shows that all of the drives are "non-fresh".
>
> --snip--
>
> You really don't want that 'devices=' clause in there. Device names can
> change.
> --snip--
>
>  > Events : 90
>  > Events : 90
>  > Events : 92
>  > Events : 92
>  > Events : 92
>  > Events : 92
>
> So the spares are '92' and the others are '90'. That is weird...
>
> However, you should be able to assemble the array by simply listing all
> the non-spare devices:
>
> mdadm -A /dev/md3 /dev/sd[c-v]1
>
> NeilBrown
>
>
> --
> ------------------------------
> Jeff Johnson
> Manager
> Aeon Computing
>
> jeff.johnson "at" aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x101   f: 858-412-3845
>
>
> 4905 Morena Boulevard, Suite 1313 - San Diego, CA 92117



Thread overview: 8+ messages
2011-08-08  2:37 20 drive raid-10, CentOS5.5, after reboot assemble fails - all drives "non-fresh" Jeff Johnson
2011-08-08  2:56 ` NeilBrown
2011-08-08  4:32   ` Jeff Johnson
2011-08-08  4:40     ` Joe Landman [this message]
2011-08-08  4:54       ` Jeff Johnson
2011-08-08  5:04         ` Joe Landman
2011-08-08  5:55           ` Jeff Johnson
2011-08-08 16:17             ` Joe Landman
