* Question regarding mdadm.conf
From: Torsten E. @ 2005-02-17 6:25 UTC
To: linux-raid
Good morning! :)
At the end of January I had some problems with a faulty hard disk, which
was part of a RAID1 array (two mirrored disks containing 4 partitions).
I replaced it, and got some really good instructions from Gordon Henderson.
Last week the same problem occurred again, and I decided to replace the
controller, too. Now it's a PCI 4-port S-ATA controller, and everything
has run fine so far.
My question now is:
the formerly used /etc/mdadm.conf had some lines like:
ARRAY /dev/md1 level=raid1 num-devices=2
   UUID=66f80d1a:621442c5:2b6eabe8:3a9f5e54
   devices=/dev/sda6,/dev/sdb6
ARRAY /dev/md0 level=raid1 num-devices=2
   UUID=06b9cdec:65448b34:3add2ff0:2180f5cd
   devices=/dev/sda5,/dev/sdb5
How do I get that UUID information, to add it to the new
/etc/mdadm.conf?
Have a nice day
Torsten
* Re: Question regarding mdadm.conf
From: Lajber Zoltan @ 2005-02-17 6:48 UTC
To: Torsten E.; +Cc: linux-raid
Hi!
On Thu, 17 Feb 2005, Torsten E. wrote:
> How do I get that UUID information, to add it to the new
> /etc/mdadm.conf?
Try this one: mdadm --detail /dev/md1 | grep UUID
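On my box the output of that looks something like this (the UUID value
here is just an example from your earlier mail, yours will differ):

  # mdadm --detail /dev/md1 | grep UUID
            UUID : 66f80d1a:621442c5:2b6eabe8:3a9f5e54

That value goes straight into the UUID= field of the ARRAY line.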
Bye,
-=Lajbi=----------------------------------------------------------------
LAJBER Zoltan Szent Istvan Egyetem, Informatika Hivatal
engineer: a mechanism for converting caffeine into designs.
* Re: Question regarding mdadm.conf
From: Michael Tokarev @ 2005-02-17 7:14 UTC
To: Lajber Zoltan; +Cc: Torsten E., linux-raid
Lajber Zoltan wrote:
> Hi!
>
> On Thu, 17 Feb 2005, Torsten E. wrote:
>
>
>> How do I get that UUID information, to add it to the new
>> /etc/mdadm.conf?
>
> Try this one: mdadm --detail /dev/md1 | grep UUID
I'd say
mdadm --detail --brief /dev/md1 | grep -v devices=
-- this will give you all the information necessary for mdadm.conf;
you can just redirect the output into that file.
Note the grep usage. Someone will disagree with me here, but
there is a reason to remove the devices= line. Without the grep,
the output from mdadm looks like this (on my system, anyway):
ARRAY /dev/md1 level=raid1 num-devices=4 UUID=11e92e45:15fcc4a0:cf62e981:a79de494
   devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
I.e., it lists all the devices which are part of the array.
The problem with this is: if, for any reason (dead drive,
adding/removing drives or controllers, etc.), the device names
change, and some /dev/sdXY ends up pointing to another device which
is part of some other RAID array, mdadm will refuse to
assemble this array, saying something along the lines of "the
UUIDs do not match, aborting". Without the "devices="
part but with the --scan option, mdadm will search all devices
on its own (based on the DEVICE line in mdadm.conf). This
is somewhat slower, as it will try to open each device in
turn, but safer, as it will find all the present components
no matter what.
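As a rough sketch of the whole thing (the /dev/sd* pattern below is just
an example, adjust it to your own disks):

  echo 'DEVICE /dev/sd*' > /etc/mdadm.conf
  mdadm --detail --brief /dev/md0 >> /etc/mdadm.conf
  mdadm --detail --brief /dev/md1 >> /etc/mdadm.conf
  # edit the file to drop the devices= lines, then:
  mdadm --assemble --scan

The last command assembles everything listed in the config, doing exactly
the device search described above.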
Someone correct me if I'm wrong... ;)
/mjt
* Re: Question regarding mdadm.conf
From: Torsten E. @ 2005-02-17 8:23 UTC
To: linux-raid
Hi Michael,
Hi Lajber,
Thanks for your hints!
Michael Tokarev scribbled on 17.02.2005 08:14:
> Lajber Zoltan wrote:
>
>> Hi!
>>
>> On Thu, 17 Feb 2005, Torsten E. wrote:
>>
>>> How do I get that UUID information, to add it to the new
>>> /etc/mdadm.conf?
>>
>> Try this one: mdadm --detail /dev/md1 | grep UUID
>
> I'd say
>
> mdadm --detail --brief /dev/md1 | grep -v devices=
That didn't work for me ... but as I (hopefully!) understood the basic
usage, I ran:
mdadm --detail --brief /dev/md* >> /tmp/mdtest
After adding some more lines (DEVICE /dev/sda*, DEVICE /dev/sdb*,
MAILADDR admin) I simply renamed it to /etc/mdadm.conf ... it's not a
pretty way, but for me it's a usable one (and so it's good again ;)).
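In case it helps anyone, the file I ended up with looks roughly like this
(the UUIDs shown are just the ones from my old config, for illustration):

  DEVICE /dev/sda*
  DEVICE /dev/sdb*
  MAILADDR admin
  ARRAY /dev/md0 level=raid1 num-devices=2 UUID=06b9cdec:65448b34:3add2ff0:2180f5cd
  ARRAY /dev/md1 level=raid1 num-devices=2 UUID=66f80d1a:621442c5:2b6eabe8:3a9f5e54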
Have a nice day!! :)
Torsten
* Bad blocks
From: Guy @ 2005-02-17 8:30 UTC
To: linux-raid
About a month ago the topic was bad blocks. I have been monitoring the bad
blocks on my disks, and I find I have had 3 new bad blocks since Jan 18,
each on a different disk. I have 17 disks, SEAGATE ST118202LC.
These bad blocks did not cause any problems with md. I believe they were
read or write errors, but re-mapped, since I have AWRE and ARRE turned
on.
I don't know what a normal rate is, but based on the last month, I would
expect about 2.29 defects per disk per year. I have had the disks in use
for about 2.5 years, and I have 81 defects. That comes to 1.905 per disk
per year. Not too far off the mark! And I have replaced disks 2-3 times in
2.5 years. The failed disks failed due to a cable problem; the disks may
have been just fine, but I took them apart for the magnets before I
realized the power cable was at fault. The cable is now repaired.
Anyway, if these had been read errors that md saw, it would have caused me
lots of problems. So, we need md to deal with bad blocks without kicking
out the disk.
I have been using this command to monitor the disks:
sginfo -G /dev/sda | grep "in grown table"
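To check all 17 at once I just loop over them, something like this
(assuming the disks are sda through sdq; adjust for your setup):

  for d in /dev/sd[a-q]; do
      sginfo -G $d | grep "in grown table"
  done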
My current bad block status (Defect list) for 17 disks:
0 entries (0 bytes) in grown table.
0 entries (0 bytes) in grown table.
8 entries (64 bytes) in grown table.
5 entries (40 bytes) in grown table.
0 entries (0 bytes) in grown table.
12 entries (96 bytes) in grown table.
20 entries (160 bytes) in grown table.
0 entries (0 bytes) in grown table.
6 entries (48 bytes) in grown table.
0 entries (0 bytes) in grown table.
6 entries (48 bytes) in grown table.
28 entries (224 bytes) in grown table.
0 entries (0 bytes) in grown table.
2 entries (16 bytes) in grown table.
3 entries (24 bytes) in grown table.
4 entries (32 bytes) in grown table.
0 entries (0 bytes) in grown table.
Does anyone know how many defects are considered too many?
Guy
* RE: Question regarding mdadm.conf
From: Guy @ 2005-02-17 8:44 UTC
To: 'Torsten E.', linux-raid
In the past, Neil has recommended using a device line like this:
DEVICE partitions
From "man mdadm.conf":
Alternatively, a device line can contain the word partitions.
This will cause mdadm to read /proc/partitions and include all
devices and partitions found therein. mdadm does not use the
names from /proc/partitions but only the major and minor device
numbers. It scans /dev to find the name that matches the numbers.
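So a minimal mdadm.conf along those lines might look like this (the UUID
is copied from earlier in this thread, just as an example):

  DEVICE partitions
  MAILADDR admin
  ARRAY /dev/md0 level=raid1 num-devices=2 UUID=06b9cdec:65448b34:3add2ff0:2180f5cd

With DEVICE partitions, mdadm finds the right devices even if the names
move around between controllers.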
Guy
* Re: Question regarding mdadm.conf
From: GrantC @ 2005-02-17 11:11 UTC
To: Guy; +Cc: 'Torsten E.', linux-raid
On Thu, 17 Feb 2005 03:44:40 -0500, you wrote:
>In the past, Neil has recommended using a device line like this:
>DEVICE partitions
>
. . .
>> mdadm --detail --brief /dev/md1 | grep -v devices=
peetoo:~$ mdadm --detail --brief /dev/md1
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=732ae5f8:e5d69879:30435be8:aafb7e94
devices=/dev/hda6,/dev/hdc6
peetoo:~$ mdadm --detail --brief /dev/md1 | grep -v " devices="
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=732ae5f8:e5d69879:30435be8:aafb7e94
Is that what was intended?
Cheers
Grant.