* mdadm question
@ 2004-08-20 16:18 Andreas John
2004-08-20 22:11 ` Neil Brown
0 siblings, 1 reply; 7+ messages in thread
From: Andreas John @ 2004-08-20 16:18 UTC (permalink / raw)
To: linux-raid; +Cc: neilb
Hello!
I have a question about mdadm usage. One of my customers killed a
raid5+1 by accidentally removing an IDE bus with two disks on it
(don't ask, I know ;-))
Assuming that the data is still on the disks, I'm looking for a way to
convince Linux to mark the superblocks (of the missing disks only?) as
"not failed" so that the raid comes up again. mdadm seems to have what
I am looking for: can "mdadm --assemble --update=? --force" rewrite the
superblocks without destroying the data on them? Do I have to use
--update=super-minor, or some other --update option, on ia32?
Or is there a recovery tool for such a case (e.g. mddump?)?
If this does not work, can anyone point me to an overview of the
structure of a superblock, so that I may fix the problem with a hex
editor (my skills date back to C64 times ... :-))
I've looked at the source, but some kind of overview/scheme would be nice.
I'm scared. :)
Rgds,
Andreas
* Re: mdadm question
2004-08-20 16:18 Andreas John
@ 2004-08-20 22:11 ` Neil Brown
0 siblings, 0 replies; 7+ messages in thread
From: Neil Brown @ 2004-08-20 22:11 UTC (permalink / raw)
To: Andreas John; +Cc: linux-raid
On Friday August 20, lists@aj.net-lab.net wrote:
> Hello!
>
> I have a question about mdadm usage. One of my customers killed a
> raid5+1 by accidentally removing an IDE bus with two disks on it
> (don't ask, I know ;-))
>
> Assuming that the data is still on the disks, I'm looking for a way to
> convince Linux to mark the superblocks (of the missing disks only?) as
> "not failed" so that the raid comes up again. mdadm seems to have what
> I am looking for: can "mdadm --assemble --update=? --force" rewrite the
> superblocks without destroying the data on them? Do I have to use
> --update=super-minor, or some other --update option, on ia32?
> Or is there a recovery tool for such a case (e.g. mddump?)?
You don't need any --update. Just --assemble --force. mdadm will
then pick the best available drives to assemble a degraded array (one
drive missing). If the result seems good (e.g. fsck reports ok), then
you can add the other drive (--add) and it will be reconstructed with
consistent data.
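For what it's worth, a minimal sketch of that sequence (the array name
and the IDE partition names below are placeholders, not the customer's
real devices):

    # make sure nothing half-assembled is still holding the devices
    mdadm --stop /dev/md0

    # let mdadm pick the freshest superblocks and start the array degraded
    mdadm --assemble --force /dev/md0 /dev/hd[abcd]1

    # check the filesystem read-only before trusting the result
    fsck -n /dev/md0

    # once it looks sane, re-add the member that was left out so it resyncs
    mdadm /dev/md0 --add /dev/hdd1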
NeilBrown
>
> If this does not work, can anyone point me to an overview of the
> structure of a superblock, so that I may fix the problem with a hex
> editor (my skills date back to C64 times ... :-))
> I've looked at the source, but some kind of overview/scheme would be nice.
>
> I'm scared. :)
>
> Rgds,
> Andreas
>
* mdadm question
[not found] <1352887864.2731.1267578731820.JavaMail.root@mail1>
@ 2010-03-03 1:15 ` Robert Minvielle
2010-03-03 1:42 ` Neil Brown
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Robert Minvielle @ 2010-03-03 1:15 UTC (permalink / raw)
To: linux-raid
I am trying to set up a new raid array on a Debian box, and I have not run
into this problem before. I could not find an mdadm list, so I will ask
here. If this is totally off topic, please disregard.
I am attempting to set up a raid 6 array, but this is failing, so I am
backing down to a no-frills raid 5 array. The system is Debian5, stock
kernel, stock everything. The mdadm is the stock Debian package, pulled
with apt-get, version 2.6.7.
The machine in question has one IDE drive for Linux, and 45 SATA drives.
Debian sees all of the drives, and I have fdisked all of them with one
partition of type fd (Linux raid autodetect). fdisk -l /dev/sd[a-z]
/dev/sda[a-s] shows them all with no problems.
The issue is that when I do a
mdadm --create /dev/md0 --level=5 --raid-devices=45 /dev/sd[a-z]1 /dev/sda[a-s]1
to create a raid array with no spares, all defaults, it returns with
invalid number of raid devices.
I have searched the web to no avail. --verbose does not increase verbosity.
There are no debug switches (that I know of) to mdadm. Log files show
nothing. Leaving off --raid-devices=45 does nothing. Changing the number of
devices just for fun does nothing (45, 44, 43, 2, whatever). I am not sure
if this is a problem with this version in Debian, the number of drives that
I have, or the setup. I have done this before (with a few fewer drives)
with no problems.
Any suggestions would be appreciated.
Thanks.
* Re: mdadm question
2010-03-03 1:15 ` Robert Minvielle
@ 2010-03-03 1:42 ` Neil Brown
2010-03-03 1:45 ` Leslie Rhorer
2010-03-03 16:51 ` Bill Davidsen
2 siblings, 0 replies; 7+ messages in thread
From: Neil Brown @ 2010-03-03 1:42 UTC (permalink / raw)
To: Robert Minvielle; +Cc: linux-raid
On Tue, 2 Mar 2010 19:15:04 -0600 (CST)
Robert Minvielle <robert@lite3d.com> wrote:
>
> I am trying to set up a new raid array on a Debian box, and I have not run
> into this problem before. I could not find an mdadm list, so I will ask
> here. If this is totally off topic, please disregard.
Perfectly on-topic.
>
>
> I am attempting to set up a raid 6 array, but this is failing, so I am
> backing down to a no-frills raid 5 array. The system is Debian5, stock
> kernel, stock everything. The mdadm is the stock Debian package, pulled
> with apt-get, version 2.6.7.
> The machine in question has one IDE drive for Linux, and 45 SATA drives.
> Debian sees all of the drives, and I have fdisked all of them with one
> partition of type fd (Linux raid autodetect). fdisk -l /dev/sd[a-z]
> /dev/sda[a-s] shows them all with no problems.
Using old software on new hardware....
When using Debian, I would recommend the -testing version for new hardware....
(not that I am prepared to back up that recommendation with support).
However, I suspect Debian5 can be made to work with your setup.
>
> The issue is that when I do a
>
> mdadm --create /dev/md0 --level=5 --raid-devices=45 /dev/sd[a-z]1 /dev/sda[a-s]1
>
> to create a raid array with no spares, all defaults, it returns with
>
> invalid number of raid devices.
The default metadata layout has a maximum of 28 devices. If you want more,
add
--metadata=1.0
You won't be able to use in-kernel autodetect, but you shouldn't need to with
Debian, even at that vintage.
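A minimal sketch of the adjusted command, assuming the same 45
partitions as in the original post (1.1 or 1.2 metadata would lift the
28-device limit just as well):

    # version-1 superblocks remove the 0.90 limit of 28 component devices
    mdadm --create /dev/md0 --metadata=1.0 --level=5 --raid-devices=45 \
          /dev/sd[a-z]1 /dev/sda[a-s]1

    # confirm the array was created and is building
    cat /proc/mdstat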
>
> I have searched the web to no avail. --verbose does not increase
> verbosity. There are no debug switches (that I know of) to mdadm. Log
> files show nothing. Leaving off --raid-devices=45 does nothing. Changing
> the number of devices just for fun does nothing (45, 44, 43, 2,
> whatever). I am not sure if this is a problem with this version in
> Debian, the number of drives that I have, or the setup. I have done this
> before (with a few fewer drives) with no problems.
I'm surprised it didn't work with '2', or did you mean "42"?
NeilBrown
>
> Any suggestions would be appreciated.
>
> Thanks.
>
* RE: mdadm question
2010-03-03 1:15 ` Robert Minvielle
2010-03-03 1:42 ` Neil Brown
@ 2010-03-03 1:45 ` Leslie Rhorer
2010-03-03 16:51 ` Bill Davidsen
2 siblings, 0 replies; 7+ messages in thread
From: Leslie Rhorer @ 2010-03-03 1:45 UTC (permalink / raw)
To: 'Robert Minvielle', linux-raid
> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Robert Minvielle
> Sent: Tuesday, March 02, 2010 7:15 PM
> To: linux-raid@vger.kernel.org
> Subject: mdadm question
>
>
> I am trying to set up a new raid array on a Debian box, and I have not
> run into this problem before. I could not find an mdadm list, so I will
> ask here. If this is totally off topic, please disregard.
>
>
> I am attempting to set up a raid 6 array, but this is failing, so I am
> backing down to a no-frills raid 5 array. The system is Debian5, stock
> kernel, stock everything.
> The mdadm is the stock Debian package, pulled with apt-get, version 2.6.7.
> The machine in question has one IDE drive for Linux, and 45 SATA drives.
I think that's your problem. The default for 2.6.7 is a version 0.90
superblock. The 0.90 superblock is limited to 28 devices in the array. Use
a version 1.x superblock instead.
> Debian sees all of the drives, and I have fdisked all of them with one
> partition of type fd (Linux raid autodetect). fdisk -l /dev/sd[a-z]
> /dev/sda[a-s] shows them all with no problems.
>
> The issue is that when I do a
>
> mdadm --create /dev/md0 --level=5 --raid-devices=45 /dev/sd[a-z]1
> /dev/sda[a-s]1
Make that:
    mdadm --create /dev/md0 --level=5 --raid-devices=45 --metadata=1.x \
          /dev/sd[a-z]1 /dev/sda[a-s]1
where x is 0, 1, or 2, and I think you'll be fine. See the man page.
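As a quick sanity check afterwards (assuming the create succeeds and the
array really is /dev/md0):

    # show the metadata version and all 45 members
    mdadm --detail /dev/md0

    # watch the initial build/resync progress
    cat /proc/mdstat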
> to create a raid array with no spares, all defaults, it returns with
>
> invalid number of raid devices.
>
> I have searched the web to no avail. --verbose does not increase
> verbosity. There are no debug switches (that I know of) to mdadm. Log
> files show nothing. Leaving off --raid-devices=45 does nothing. Changing
> the number of devices just for fun does nothing (45, 44, 43, 2,
> whatever). I am not sure if this is a problem with this version in
> Debian, the number of drives that I have, or the setup. I have done this
> before (with a few fewer drives) with no problems.
>
> Any suggestions would be appreciated.
>
> Thanks.
>
* Re: mdadm question
[not found] <2127483366.2736.1267585385014.JavaMail.root@mail1>
@ 2010-03-03 3:05 ` Robert Minvielle
0 siblings, 0 replies; 7+ messages in thread
From: Robert Minvielle @ 2010-03-03 3:05 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
----- "Neil Brown" <neilb@suse.de> wrote:
> On Tue, 2 Mar 2010 19:15:04 -0600 (CST)
> Using old software on new hardware....
> When using Debian, I would recommend the -testing version for new
> hardware....
> (not that I am prepared to back-up that recommendation with support).
>
> However I suspect Debian5 should be able to be made to work with your
> setup.
Hrmm, will look into that.
>
> The default metadata layout has a maximum of 28 devices. If you want
> more,
> add
> --metadata=1.0
>
> You won't be able to use in-kernel autodetect, but you shouldn't need
> to with
> Debian, even at that vintage.
>
Argh, I see the note in the man page now; I missed it. Funny that Google
did not return that man page when I searched for things like "maximum
number of drives in mdadm array" and the like.
> I'm surprised it didn't work with '2', or did you mean "42" ?
>
Yes, sorry for the confusion.
Thanks.
* Re: mdadm question
2010-03-03 1:15 ` Robert Minvielle
2010-03-03 1:42 ` Neil Brown
2010-03-03 1:45 ` Leslie Rhorer
@ 2010-03-03 16:51 ` Bill Davidsen
2 siblings, 0 replies; 7+ messages in thread
From: Bill Davidsen @ 2010-03-03 16:51 UTC (permalink / raw)
To: Robert Minvielle; +Cc: linux-raid
Robert Minvielle wrote:
> I am trying to set up a new raid array on a Debian box, and I have not run
> into this problem before. I could not find an mdadm list, so I will ask
> here. If this is totally off topic, please disregard.
>
>
> I am attempting to set up a raid 6 array, but this is failing, so I am
> backing down to a no-frills raid 5 array. The system is Debian5, stock
> kernel, stock everything. The mdadm is the stock Debian package, pulled
> with apt-get, version 2.6.7.
> The machine in question has one IDE drive for Linux, and 45 SATA drives.
> Debian sees all of the drives, and I have fdisked all of them with one
> partition of type fd (Linux raid autodetect). fdisk -l /dev/sd[a-z]
> /dev/sda[a-s] shows them all with no problems.
>
> The issue is that when I do a
>
> mdadm --create /dev/md0 --level=5 --raid-devices=45 /dev/sd[a-z]1 /dev/sda[a-s]1
>
> to create a raid array with no spares, all defaults, it returns with
>
> invalid number of raid devices.
>
> I have searched the web to no avail. --verbose does not increase verbosity.
> There are no debug switches (that I know of) to mdadm. Log files show
> nothing. Leaving off --raid-devices=45 does nothing. Changing the number of
> devices just for fun does nothing (45, 44, 43, 2, whatever). I am not sure
> if this is a problem with this version in Debian, the number of drives that
> I have, or the setup. I have done this before (with a few fewer drives)
> with no problems.
>
> Any suggestions would be appreciated.
>
You seem to have gotten your answer on the number of drives, so now you
can go with raid-6 as desired. However, since the performance of raid-6 in
degraded mode is pretty poor and gets worse with more drives, you may
want to consider allocating at least one drive as a spare, or doing a
raid-0 over three smaller raid-5 or raid-6 arrays to speed rebuild. You
can also have shared spares to allow for fast rebuild of any of the
smaller redundant arrays.
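A rough sketch of that layered layout, purely as an illustration (the md
numbers, the 15-drive split, and the exact device globs are made up here;
hold a few drives back from the sets if you want spares):

    # three 15-drive raid-5 sets; a rebuild then only touches 15 drives
    mdadm --create /dev/md1 --metadata=1.2 --level=5 --raid-devices=15 \
          /dev/sd[a-o]1
    mdadm --create /dev/md2 --metadata=1.2 --level=5 --raid-devices=15 \
          /dev/sd[p-z]1 /dev/sda[a-d]1
    mdadm --create /dev/md3 --metadata=1.2 --level=5 --raid-devices=15 \
          /dev/sda[e-s]1

    # stripe the three redundant sets together
    mdadm --create /dev/md0 --metadata=1.2 --level=0 --raid-devices=3 \
          /dev/md1 /dev/md2 /dev/md3

    # shared spares: give the component arrays the same spare-group= in
    # mdadm.conf and run "mdadm --monitor"; a spare added to one array can
    # then move to whichever array in the group loses a disk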
The price of many flexible options is many decisions.
--
Bill Davidsen <davidsen@tmr.com>
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein