From: NeilBrown <neilb@suse.de>
To: vincent <hanguozhong@meganovo.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: raid10 array tend to two degraded raid10 array
Date: Mon, 23 Jul 2012 09:11:05 +1000
Message-ID: <20120723091105.5cb8dcf0@notabene.brown>
In-Reply-To: <50092079.a287440a.5002.ffff88ef@mx.google.com>

On Fri, 20 Jul 2012 17:10:14 +0800 "vincent" <hanguozhong@meganovo.com> wrote:

> Hi, everyone:
>           I am Vincent, and I am writing to ask a question about mdadm.
>           I created a raid10 array with four 160G disks using the command:
> mdadm -Cv /dev/md0 -l10 -n4 /dev/sd[abcd]
>           The version of my mdadm is 3.2.2, and the version of my kernel is
> 2.6.38.
>           While the raid10 array was resyncing, I used the following command
> to create a file system on it: mkfs.ext3 /dev/md0
>           Everything was OK. The array continued to resync, but when the
> resync reached 3.4%, there were a lot of
>           IO errors on "sda" and "sdc". There were bad blocks on sda and sdc.
>           Then I used "cat /proc/mdstat" to check the status of /dev/md0:
>  
>           Personalities : [raid10]      
>           md0 : active raid10 sdb[1] sdd[3]
>           310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [_U_U]
>  
>           unused devices: <none>
>          
>           /dev/sdc and /dev/sda had been lost.
>           Then I rebooted the system, but when I used "cat /proc/mdstat" to
> check the status of /dev/md0, I saw:
>  
>           Personalities : [raid10] 
>           md126 : active raid10 sda[0] sdc[2]
>           310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [U_U_]
>       
>           md0 : active raid10 sdb[1] sdd[3]
>           310343680 blocks super 1.2 512K chunks 2 near-copies [4/2] [_U_U]
>       
>           unused devices: <none>
>            
>           there was an array named md126, consisting of /dev/sdc and
> /dev/sda.
>           I used "mdadm --assemble --scan" to assemble the md devices. The
> output of the command was:
>                   
>           mdadm: /dev/md/0 exists - ignoring
>           md: md0 stopped.
>           mdadm: ignoring /dev/sda as it reports /dev/sdd as failed
>           mdadm: ignoring /dev/sdc as it reports /dev/sdd as failed
>           md: bind<sdd>
>           md: bind<sdb>
>           md/raid10:md0: active with 2 out of 4 devices
>           md0: detected capacity change from 0 to 317791928320
>           mdadm: /dev/md0 has been started with 2 drives (out of 4).
>           md0: unknown partition table
>           mdadm: /dev/md/0 exists - ignoring
>           md: md126 stopped.
>           md: bind<sdc>
>           md: bind<sda>
>           md/raid10:md126: active with 2 out of 4 devices
>           md126: detected capacity change from 0 to 317791928320
>           mdadm: /dev/md126 has been started with 2 drives (out of 4).
>           md126: unknown partition table
>  
>           And then I used "mdadm -E /dev/sda", "mdadm -E /dev/sdb", "mdadm
> -E /dev/sdc", "mdadm -E /dev/sdd",
>           "mdadm -D /dev/md0" and "mdadm -D /dev/md126" to check the
> detailed info of sda, sdb, sdc and sdd.
>           I found that the "Array UUID" property of all of these devices
> (sda, sdb, sdc, sdd) was the same, but the
>           "Events" and "Update Time" properties of "sda" and "sdc" were the
> same (21, Fri Jul 6 11:02:09 2012), and
>           the "Events" and "Update Time" properties of "sdb" and "sdd" were
> the same (35, Fri Jul 6 11:06:21 2012).
>  
>           Although the "Update Time" and "Events" properties of "sda" and
> "sdc" were not equal to those of "sdb" and "sdd",
>           they all had the same "Array UUID". Why did this array split into
> two degraded arrays with the same UUID?
>           Since the two arrays have the same UUID, it is difficult to
> distinguish and use them. I think this is unreasonable;
>           could you help me?
> 

Yes, this is a known problem.  Hopefully it will be fixed in the next release
of mdadm.  
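
The Events counters you already looked at show which half is stale: sda and
sdc stopped at event count 21, while sdb and sdd went on to 35, so sda/sdc
hold out-of-date data.  As a rough sketch (the field names below assume
v1.2 metadata, as used here), you can compare all members at a glance with:

  for d in /dev/sd[abcd]; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Array UUID|Events|Update Time'
  done
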
For now, just remove the faulty devices, or at least remove the metadata from
them with
  mdadm --zero-superblock /dev/sd[ac]
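
As a rough sketch of the full sequence (assuming the stale half was
assembled as /dev/md126, as in the mdstat output above, and that sda and
sdc are still trusted enough to reuse despite the earlier bad blocks;
replacing them is probably the safer choice):

  # Stop the spurious, stale array so its members are released.
  mdadm --stop /dev/md126
  # Wipe the stale superblocks so the devices no longer claim membership.
  mdadm --zero-superblock /dev/sda /dev/sdc
  # Add them back to the surviving array; md rebuilds them from sdb/sdd.
  mdadm /dev/md0 --add /dev/sda /dev/sdc
  # Watch the rebuild progress.
  cat /proc/mdstat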

NeilBrown
