* mdadm rebuild
@ 2013-12-08 17:53 Hai Wu
2013-12-08 21:55 ` NeilBrown
2013-12-08 21:56 ` Peter Grandi
0 siblings, 2 replies; 9+ messages in thread
From: Hai Wu @ 2013-12-08 17:53 UTC (permalink / raw)
To: linux-raid
I am wondering whether it is possible for mdadm to auto-rebuild a failed
raid1 drive upon its replacement with a new drive? The following lines
from a Red Hat web page seem to indicate, somewhat vaguely, that it might be possible:
Previously, mdadm was not able to rebuild newly-connected drives
automatically. This update adds the array auto-rebuild feature and allows a
RAID stack to automatically rebuild newly-connected drives.
The goal is to get mdadm software raid1 to behave the same as hardware
raid1 when replacing a failed hard drive: it should automatically detect
the new drive and rebuild it back into the raid1 array.
* Re: mdadm rebuild
2013-12-08 17:53 mdadm rebuild Hai Wu
@ 2013-12-08 21:55 ` NeilBrown
[not found] ` <CAJ1=nZfUBQ0_T1s3deG+HOU7BQeayibWcW8y5h5bEjFt2tvhnA@mail.gmail.com>
[not found] ` <CAJ1=nZcGO_GTUZ9bgrYJJXHoGQnepHyLof21xQ3MFwokfaCvCg@mail.gmail.com>
2013-12-08 21:56 ` Peter Grandi
1 sibling, 2 replies; 9+ messages in thread
From: NeilBrown @ 2013-12-08 21:55 UTC (permalink / raw)
To: Hai Wu; +Cc: linux-raid
On Sun, 8 Dec 2013 11:53:00 -0600 Hai Wu <haiwu.us@gmail.com> wrote:
> I am wondering whether it is possible for mdadm to auto-rebuild a failed
> raid1 drive upon its replacement with a new drive? The following lines
> from a Red Hat web page seem to indicate, somewhat vaguely, that it might be possible:
>
Yes and no.
"yes" because it is certainly possible to arrange this,
"no" because it isn't just mdadm which does it.
When a drive is plugged in, udev notices and can run various commands to do
things with that device. You need to get udev to run "mdadm -I $devname"
when a new device is plugged in.
The udev scripts which come with mdadm will only do that for new drives which
appear to be part of an array already. You presumably want it to do that for
any new drive. The change should be quite easy.
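For illustration, a udev rule along those lines might look like the sketch below; the file name, priority number, and match keys are assumptions, not something mdadm ships:

```
# /etc/udev/rules.d/65-md-auto-spare.rules  (hypothetical file name)
# Pass every newly added whole disk to mdadm's incremental assembly mode.
SUBSYSTEM=="block", ACTION=="add", ENV{DEVTYPE}=="disk", \
  RUN+="/sbin/mdadm -I $env{DEVNAME}"
```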
Secondly, you need to tell mdadm that it is OK to add a new device as a spare
to an array. To see how to do this you need to read the documentation for
the "POLICY" line in the mdadm.conf(5) man page.
A line like:
POLICY action=force-spare
tells mdadm that any device passed to "mdadm -I" can be added to any array as
a spare. You might not want that, but you can restrict it in various ways.
POLICY path=pci-0000:00:1f.2-scsi* action=spare
says that any device attached to a particular controller can be added to any
array as long as it is already a member of the array, or appears to be blank.
There are various other directives which should allow you to describe
whatever you want.
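Combining the two, a possible mdadm.conf fragment might look like this; the domain name and the path value are examples and have to match your own controller's by-path names:

```
# /etc/mdadm.conf (fragment) -- see the POLICY section of mdadm.conf(5).
# Any device appearing on this controller may be grabbed as a spare
# for any array, even if it is blank.
POLICY domain=onboard path=pci-0000:00:1f.2-scsi* action=force-spare
```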
NeilBrown
> Previously, mdadm was not able to rebuild newly-connected drives
> automatically. This update adds the array auto-rebuild feature and allows a
> RAID stack to automatically rebuild newly-connected drives.
>
> The goal is to get mdadm software raid1 to behave the same as hardware
> raid1, when replacing failed hard drive. It should automatically detect new
> drive and rebuild the new drive into part of raid1 ..
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: mdadm rebuild
2013-12-08 17:53 mdadm rebuild Hai Wu
2013-12-08 21:55 ` NeilBrown
@ 2013-12-08 21:56 ` Peter Grandi
1 sibling, 0 replies; 9+ messages in thread
From: Peter Grandi @ 2013-12-08 21:56 UTC (permalink / raw)
To: Linux RAID
> I am wondering whether it is possible for mdadm to
> auto-rebuild a failed raid1 drive upon its replacement with a
> new drive?
MD or 'mdadm' or something else?
> The goal is to get mdadm software raid1 to behave the same as
> hardware raid1, when replacing failed hard drive.
The crucial difference between hw RAID and MD RAID is that with
hw RAID all members of a RAID set are attached to the same card,
while MD can build RAID sets that span several cards,
potentially across all host adapters.
This means that while it may be appropriate for a hw RAID card
to assume that any new drive it sees belongs to one of the RAID
sets it defines, for MD RAID it is not appropriate to take *any*
drive added to the system and add it to one of its RAID sets.
> It should automatically detect new drive and rebuild the new
> drive into part of raid1 [ ... ]
That already happens with "spare" drives, and MD itself does it.
The point with that is that "spare" drives are already marked by
'mdadm' as drives usable by MD, so MD knows that they are to be
used for its RAID sets; and they can be marked by the list of
RAID sets they are usable for.
In general there is some reluctance to have a default
behaviour that takes *any* drive added to a system and adds it
to an MD RAID set.
If you really want to do that, you could add a suitable 'udev'
rule for a script that is triggered on device insertion,
checks the device, and invokes 'mdadm' to add it as a spare. Then,
if there is a missing member in a related RAID set, MD will make
use of that spare.
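As a sketch of what such a helper script could look like (the array name, the blank-device check via blkid, and the script's invocation from udev are all illustrative assumptions):

```shell
#!/bin/sh
# Hypothetical udev helper: called with the new device node as $1.
DEV="$1"
ARRAY="/dev/md0"            # assumed RAID1 array to receive spares

# blkid exits non-zero when it finds no signature on the device,
# so only a blank drive falls through to the mdadm call below.
if blkid "$DEV" >/dev/null 2>&1; then
    echo "$DEV carries an existing signature; not touching it" >&2
    exit 0
fi

exec mdadm --manage "$ARRAY" --add "$DEV"
```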
* Re: mdadm rebuild
[not found] ` <CAJ1=nZfUBQ0_T1s3deG+HOU7BQeayibWcW8y5h5bEjFt2tvhnA@mail.gmail.com>
@ 2013-12-08 22:53 ` NeilBrown
2013-12-08 23:03 ` Hai Wu
0 siblings, 1 reply; 9+ messages in thread
From: NeilBrown @ 2013-12-08 22:53 UTC (permalink / raw)
To: hai wu; +Cc: linux-raid
On Sun, 8 Dec 2013 16:45:08 -0600 hai wu <haiwu.us@gmail.com> wrote:
> Thanks Neil. I am not sure if I understand mdadm 'spare' correctly. If
> doing as you mentioned above, the new drive will show up in the output of
> "mdadm --detail /dev/md0" with 'spare' status, while I would like the new
> drive to automatically show up as "active, sync", and to be automatically
> synced up with the one remaining good drive upon running the udev rule.
> I don't see an option like "force-include" in this case. Please let me know
> if I missed something.
Whenever md notices that an array has a spare device and a missing device it
will start rebuilding the spare and will then make it an active device.
So if a new device is added to the system, you really do want to give it to
md as a 'spare'. md will do the rest - it always has done.
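The same behaviour can be seen by adding a spare by hand, which is exactly what the incremental path automates (device names here are examples):

```shell
# With one member of /dev/md0 failed or missing, just add the new device:
mdadm --manage /dev/md0 --add /dev/sdc1
# md begins the rebuild at once; watch it in /proc/mdstat.  When the
# resync finishes, "mdadm --detail /dev/md0" shows the device promoted
# from "spare rebuilding" to "active sync".
cat /proc/mdstat
```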
NeilBrown
* Re: mdadm rebuild
2013-12-08 22:53 ` NeilBrown
@ 2013-12-08 23:03 ` Hai Wu
2013-12-08 23:17 ` NeilBrown
0 siblings, 1 reply; 9+ messages in thread
From: Hai Wu @ 2013-12-08 23:03 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
This is something I was not aware of, thanks!
In this case, do I have to worry about the new drive not being able to boot? (This is raid1 with two drives, and each drive needs to be able to boot the server by itself in case the other drive fails later.) I remember I had to do the following before for the new drive:
grub> root (hd0,0)
Is that still the case here if marking it as 'spare'?
On Dec 8, 2013, at 4:53 PM, NeilBrown <neilb@suse.de> wrote:
> On Sun, 8 Dec 2013 16:45:08 -0600 hai wu <haiwu.us@gmail.com> wrote:
>
>> Thanks Neil. I am not sure if I understand mdadm 'spare' correctly. If
>> doing as you mentioned above, the new drive will show up in the output of
>> "mdadm --detail /dev/md0" with 'spare' status, while I would like the new
>> drive to automatically show up as "active, sync", and to be automatically
>> synced up with the one remaining good drive upon running the udev rule.
>> I don't see an option like "force-include" in this case. Please let me know
>> if I missed something.
>
> Whenever md notices that an array has a spare device and a missing device it
> will start rebuilding the spare and will then make it an active device.
>
> So if a new device is added to the system, you really do want to give it to
> md as a 'spare'. md will do the rest - it always has done.
>
> NeilBrown
* Re: mdadm rebuild
2013-12-08 23:03 ` Hai Wu
@ 2013-12-08 23:17 ` NeilBrown
2013-12-08 23:56 ` Adam Goryachev
0 siblings, 1 reply; 9+ messages in thread
From: NeilBrown @ 2013-12-08 23:17 UTC (permalink / raw)
To: Hai Wu; +Cc: linux-raid
On Sun, 8 Dec 2013 17:03:22 -0600 Hai Wu <haiwu.us@gmail.com> wrote:
> This is something I was not aware of, thanks!
>
> In this case, do I have to worry about the new drive not being able to boot? (This is raid1 with two drives, and each drive needs to be able to boot the server by itself in case the other drive fails later.) I remember I had to do the following before for the new drive:
>
> grub> root (hd0,0)
>
> Is that still the case here if marking it as 'spare'?
If you want the new drive to boot and the boot sector is not covered by any
md array, then you have to update the boot sector yourself.
You can presumably get udev to run some command which will write a boot
sector out. However I'm not an expert on boot sector management so cannot
really advise you.
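For a BIOS/GRUB system, the command udev would need to run is usually just grub-install against the replacement disk; whether that alone is sufficient depends on your boot loader setup, and the device name below is an example:

```shell
# Re-write the boot sector / core image onto the new disk (example name).
grub-install /dev/sdc
```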
NeilBrown
* Re: mdadm rebuild
2013-12-08 23:17 ` NeilBrown
@ 2013-12-08 23:56 ` Adam Goryachev
2013-12-09 0:16 ` NeilBrown
0 siblings, 1 reply; 9+ messages in thread
From: Adam Goryachev @ 2013-12-08 23:56 UTC (permalink / raw)
To: NeilBrown, Hai Wu; +Cc: linux-raid
On 09/12/13 10:17, NeilBrown wrote:
> On Sun, 8 Dec 2013 17:03:22 -0600 Hai Wu <haiwu.us@gmail.com> wrote:
>
>> This is something I am not aware of, thanks!
>>
>> In this case, do I have to worry about the new drive not being able to boot? (This is raid1 with two drives, and each drive needs to be able to boot the server by itself in case the other drive fails later.) I remember I had to do the following before for the new drive:
>>
>> grub> root (hd0,0)
>>
>> Is that still the case here if marking it as 'spare'?
> If you want the new drive to boot and the boot sector is not covered by any
> md array, then you have to update the boot sector yourself.
>
> You can presumably get udev to run some command which will write a boot
> sector out. However I'm not an expert on boot sector management so cannot
> really advise you.
Hi Neil,
I like the way you phrased that :)
I just wanted to clarify one point: is there a specific MD metadata
version (I'm assuming either 1.0 or 1.2) which could be applied to a
whole-device MD array and would allow the automatic sync of the boot
sector etc.?
ie, something like:
mdadm --create /dev/md3 --level=raid1 --metadata=1.2 /dev/sda /dev/sdb
then writing the boot sector information to /dev/md3
mdadm --manage /dev/md3 --fail /dev/sdb
mdadm --manage /dev/md3 --remove /dev/sdb
mdadm --manage /dev/md3 --add /dev/sdc
Then, shutdown, remove sda and sdb
Would you expect the system to boot successfully from sdc (assuming the
BIOS will boot from it, and that the OS will "find" the correct boot and
root device)?
In the past I've always used partitions for MD, but this might be a good
reason to use whole devices if it would solve the issue of the boot
sector information.
Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au
* Re: mdadm rebuild
2013-12-08 23:56 ` Adam Goryachev
@ 2013-12-09 0:16 ` NeilBrown
0 siblings, 0 replies; 9+ messages in thread
From: NeilBrown @ 2013-12-09 0:16 UTC (permalink / raw)
To: Adam Goryachev; +Cc: Hai Wu, linux-raid
On Mon, 09 Dec 2013 10:56:32 +1100 Adam Goryachev
<mailinglists@websitemanagers.com.au> wrote:
> Hi Neil,
>
> I like the way you phrased that :)
>
> I just wanted to clarify one point, it there a specific MD metadata
> version (I'm assuming either 1.0 or 1.2) which could be applied to an
> entire device MD array which would allow the automatic sync of the boot
> sectors/etc
>
> ie, something like:
> mdadm --create /dev/md3 --level=raid1 --metadata=1.2 /dev/sda /dev/sdb
>
> then writing the boot sector information to /dev/md3
> mdadm --manage /dev/md3 --fail /dev/sdb
> mdadm --manage /dev/md3 --remove /dev/sdb
> mdadm --manage /dev/md3 --add /dev/sdc
>
> Then, shutdown, remove sda and sdb
>
> Would you expect the system to boot successfully from sdc (assuming the
> BIOS will boot from it, and that the OS will "find" the correct boot and
> root device....
>
> In the past I've always used partitions for MD, but this might be a good
> reason to use whole devices if it would solve the issue of the boot
> sector information.
>
> Regards,
> Adam
If you use --metadata=1.0 then everything from the very start of the device
to near the end is mirrored.
If having identical boot blocks on every device works (which I think it does,
but I'm no expert), then using
--level=raid1 --metadata=1.0
on whole devices (not partitions) would remove the need to worry further
about boot sectors.
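A sketch of that approach, with example device names (verify the boot loader side on your own setup before relying on it):

```shell
# RAID1 over whole disks with 1.0 metadata: the superblock sits at the
# END of each device, so the boot sector at the start lies inside the
# mirrored region.
mdadm --create /dev/md0 --level=raid1 --metadata=1.0 \
      --raid-devices=2 /dev/sda /dev/sdb
# Because the boot sector is inside the mirrored region, rebuilding a
# replacement drive copies it automatically; only the initial install
# of the boot loader still has to be done once by hand.
```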
NeilBrown
* Re: mdadm rebuild
[not found] ` <CAJ1=nZcGO_GTUZ9bgrYJJXHoGQnepHyLof21xQ3MFwokfaCvCg@mail.gmail.com>
@ 2013-12-09 22:17 ` NeilBrown
0 siblings, 0 replies; 9+ messages in thread
From: NeilBrown @ 2013-12-09 22:17 UTC (permalink / raw)
To: hai wu; +Cc: linux-raid
On Mon, 9 Dec 2013 11:16:43 -0600 hai wu <haiwu.us@gmail.com> wrote:
> One more question:
>
> How would the failed drive be handled if doing this? Would its traces be
> cleaned up automatically, so that it no longer shows up in /proc/mdstat?
When you unplug a device from the system, udev processes a "remove" event.
If you pass this to "mdadm -If" it will fail/remove that device from any
array that it is part of.
The udev scripts distributed with mdadm do this, but only for devices that
were previously detected as raid members.
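The remove side could be wired up analogously to the add rule; as before, the file name and the match keys are illustrative, not mdadm's shipped rule:

```
# /etc/udev/rules.d/65-md-auto-remove.rules  (hypothetical file name)
# On hot-unplug, fail/remove the vanished device from any md array.
SUBSYSTEM=="block", ACTION=="remove", RUN+="/sbin/mdadm -If $name"
```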
NeilBrown
end of thread, other threads:[~2013-12-09 22:17 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-12-08 17:53 mdadm rebuild Hai Wu
2013-12-08 21:55 ` NeilBrown
[not found] ` <CAJ1=nZfUBQ0_T1s3deG+HOU7BQeayibWcW8y5h5bEjFt2tvhnA@mail.gmail.com>
2013-12-08 22:53 ` NeilBrown
2013-12-08 23:03 ` Hai Wu
2013-12-08 23:17 ` NeilBrown
2013-12-08 23:56 ` Adam Goryachev
2013-12-09 0:16 ` NeilBrown
[not found] ` <CAJ1=nZcGO_GTUZ9bgrYJJXHoGQnepHyLof21xQ3MFwokfaCvCg@mail.gmail.com>
2013-12-09 22:17 ` NeilBrown
2013-12-08 21:56 ` Peter Grandi