* faulty array member
@ 2010-11-18 17:08 Roberto Nunnari
  2010-11-18 19:29 ` Tim Small
  2010-11-22  4:05 ` Neil Brown
  0 siblings, 2 replies; 4+ messages in thread
From: Roberto Nunnari @ 2010-11-18 17:08 UTC (permalink / raw)
  To: linux-raid

Hello.

I have a Linux file server with two 1TB SATA disks
in software RAID1.

As my drives are no longer in full health, the RAID layer
has put one array member into the faulty state.
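For reference, the faulty member can be confirmed with something like
the following (a sketch only; the device names are the ones from my setup):

# show the state of each member of md0 (sda2 is the faulty one here)
mdadm --detail /dev/md0

# the kernel log shows the read errors that led to the failure
dmesg | grep -i -e sda -e md0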

A bit about my environment:

# uname -rms
Linux 2.6.9-89.0.18.ELsmp i686

# cat /etc/redhat-release
CentOS release 4.8 (Final)


# parted /dev/sda print
Disk geometry for /dev/sda: 0.000-953869.710 megabytes
Disk label type: msdos
Minor    Start       End     Type      Filesystem  Flags
1          0.031    251.015  primary   ext3        boot
2        251.016  40248.786  primary   ext3        raid
3      40248.787  42296.132  primary   linux-swap
4      42296.133 953867.219  primary   ext3        raid

# parted /dev/sdb print
Disk geometry for /dev/sdb: 0.000-953869.710 megabytes
Disk label type: msdos
Minor    Start       End     Type      Filesystem  Flags
1          0.031  39997.771  primary   ext3        boot, raid
2      39997.771  42045.117  primary   linux-swap
3      42045.117  42296.132  primary   ext3
4      42296.133 953867.219  primary   ext3        raid


# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb4[1] sda4[0]
       933448704 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda2[2](F)
       40957568 blocks [2/1] [_U]
unused devices: <none>


Don't ask me why the two drives don't have mirror-image
partition layouts and md0 is built from sdb1+sda2; I have no idea.
It was set up that way by anaconda using kickstart during install.
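(If it helps, the RAID superblocks themselves show which array each
partition belongs to; something like the following, sketch only:)

# print each member's RAID superblock, including the array UUID it belongs to
mdadm --examine /dev/sda2 /dev/sdb1
mdadm --examine /dev/sda4 /dev/sdb4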


So, I was using debugfs:
# debugfs
debugfs 1.35 (28-Feb-2004)
debugfs:  open /dev/md0
debugfs:  testb 1736947
Block 1736947 marked in use
debugfs:  icheck 1736947
Block   Inode number
1736947 <block not found>
debugfs:  icheck 1736947 10
Block   Inode number
1736947 <block not found>
10      7

in an attempt to locate the bad disk blocks, and after that
the RAID layer put sda2 into the faulty state.
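For completeness, the usual debugfs sequence to map a suspect block to a
file name would look something like this (a sketch; 1736947 is the block
from above, and the inode number is only a placeholder):

# check whether the block is allocated
debugfs -R 'testb 1736947' /dev/md0
# map the block to an inode number (may report <block not found>, as above)
debugfs -R 'icheck 1736947' /dev/md0
# map that inode number to a path (12345 is a placeholder)
debugfs -R 'ncheck 12345' /dev/md0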


Now, as smartctl is telling me that there are errors spread
across all partitions used in both arrays, I would like to take
a full backup of at least /dev/md1 (which is still healthy).

The question is:
Is there a way, and is it safe, to put /dev/sda2 back into
/dev/md0 so that I can be sure to back up even the blocks
that are unreadable on the first array member but probably
are still readable on the failed device?
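
To make the question concrete, this is roughly what I have in mind
(a sketch only; I have not run these yet, the backup path is just an
example, and ddrescue may not even be installed on this box):

# back up the still-healthy array first (destination path is only an example)
dd if=/dev/md1 of=/backup/md1.img bs=1M conv=noerror,sync
# or, with GNU ddrescue, which retries and logs unreadable sectors
ddrescue /dev/md1 /backup/md1.img /backup/md1.log

# then try to put sda2 back into md0 (this is the step I am unsure about;
# without a write-intent bitmap it would trigger a full resync)
mdadm /dev/md0 --add /dev/sda2
cat /proc/mdstat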

Thank you for your time and help!

Best regards.
Robi
