* raid md126, md127 problem after reboot, howto fix?
@ 2015-02-08 19:03 Marc Widmer
2015-02-08 21:29 ` Wols Lists
2015-02-09 9:55 ` Sebastian Parschauer
0 siblings, 2 replies; 6+ messages in thread
From: Marc Widmer @ 2015-02-08 19:03 UTC
To: linux-raid
Hi List
I have no deep understanding of RAID beyond setting arrays up initially and
replacing disks when needed, so this problem has never happened to me before:
After a reboot I see really strange behaviour on my server. The disks are
not marked faulty, but the RAID has "fallen apart".
/proc/mdstat shows me:
md126 : active raid1 sda1[0]
10485696 blocks [2/1] [U_]
md127 : active raid1 sda2[0]
721558464 blocks [2/1] [U_]
md1 : active raid1 sdb1[1]
10485696 blocks [2/1] [_U]
md2 : active raid1 sdb2[1]
721558464 blocks [2/1] [_U]
What I would expect is something similar to:
md1 : active raid1 sdb1[1] sda1[0]
10238912 blocks [2/2] [UU]
md2 : active raid1 sdb2[1] sda2[0]
1942746048 blocks [2/2] [UU]
Currently only md1 and md2 are running. nmon shows me that only disk sdb is
active; sda is not doing anything.
I run Debian Squeeze.
I am a bit concerned about what to do, because at the moment I am running on
one disk only, and if things go wrong I will end up with a server that is
down (downtime) and possible data loss (backups aside).
Any ideas what I should do? How do I put the RAID back together, ideally on
the live system, without rebooting into rescue mode and risking a long downtime?
Any help would be greatly appreciated, as until now the only thing I have
ever had to do was resync a disk after an ordinary disk crash.
Best
marc
* Re: raid md126, md127 problem after reboot, howto fix?
2015-02-08 19:03 raid md126, md127 problem after reboot, howto fix? Marc Widmer
@ 2015-02-08 21:29 ` Wols Lists
2015-02-09 9:41 ` Sebastian Parschauer
2015-02-09 9:55 ` Sebastian Parschauer
1 sibling, 1 reply; 6+ messages in thread
From: Wols Lists @ 2015-02-08 21:29 UTC
To: Marc Widmer, linux-raid
On 08/02/15 19:03, Marc Widmer wrote:
> Hi List
>
> I have no deep understanding of RAID beyond setting arrays up initially and
> replacing disks when needed, so this problem has never happened to me before:
>
> After a reboot I see really strange behaviour on my server. The disks are
> not marked faulty, but the RAID has "fallen apart".
>
> /proc/mdstat shows me:
>
> md126 : active raid1 sda1[0]
> 10485696 blocks [2/1] [U_]
>
> md127 : active raid1 sda2[0]
> 721558464 blocks [2/1] [U_]
>
> md1 : active raid1 sdb1[1]
> 10485696 blocks [2/1] [_U]
>
> md2 : active raid1 sdb2[1]
> 721558464 blocks [2/1] [_U]
>
> What I would expect is something similar to:
> md1 : active raid1 sdb1[1] sda1[0]
> 10238912 blocks [2/2] [UU]
>
> md2 : active raid1 sdb2[1] sda2[0]
> 1942746048 blocks [2/2] [UU]
>
> Currently only md1 and md2 are running. nmon shows me that only disk sdb is
> active; sda is not doing anything.
>
> I run Debian Squeeze.
What version of mdadm are you running? 3.2.6 or thereabouts?
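You can check with:

$ mdadm --version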
>
> I am a bit concerned about what to do, because at the moment I am running on
> one disk only, and if things go wrong I will end up with a server that is
> down (downtime) and possible data loss (backups aside).
>
> Any ideas what I should do? How do I put the RAID back together, ideally on
> the live system, without rebooting into rescue mode and risking a long downtime?
>
> Any help would be greatly appreciated, as until now the only thing I have
> ever had to do was resync a disk after an ordinary disk crash.
>
The reason I ask is that this looks like a bug I hit myself - if I'm right,
it's a known problem and you need to upgrade mdadm.
Cheers,
Wol
* Re: raid md126, md127 problem after reboot, howto fix?
2015-02-08 21:29 ` Wols Lists
@ 2015-02-09 9:41 ` Sebastian Parschauer
0 siblings, 0 replies; 6+ messages in thread
From: Sebastian Parschauer @ 2015-02-09 9:41 UTC
To: Wols Lists, Marc Widmer, linux-raid
On 08.02.2015 22:29, Wols Lists wrote:
> On 08/02/15 19:03, Marc Widmer wrote:
>> Hi List
>>
>> I have no deep understanding of RAID beyond setting arrays up initially and
>> replacing disks when needed, so this problem has never happened to me before:
>>
>> After a reboot I see really strange behaviour on my server. The disks are
>> not marked faulty, but the RAID has "fallen apart".
>>
>> /proc/mdstat shows me:
>>
>> md126 : active raid1 sda1[0]
>> 10485696 blocks [2/1] [U_]
>>
>> md127 : active raid1 sda2[0]
>> 721558464 blocks [2/1] [U_]
>>
>> md1 : active raid1 sdb1[1]
>> 10485696 blocks [2/1] [_U]
>>
>> md2 : active raid1 sdb2[1]
>> 721558464 blocks [2/1] [_U]
>>
>> What I would expect is something similar to:
>> md1 : active raid1 sdb1[1] sda1[0]
>> 10238912 blocks [2/2] [UU]
>>
>> md2 : active raid1 sdb2[1] sda2[0]
>> 1942746048 blocks [2/2] [UU]
>>
>> Currently only md1 and md2 are running. nmon shows me that only disk sdb is
>> active; sda is not doing anything.
>>
>> I run Debian Squeeze.
>
> What version of mdadm are you running? 3.2.6 or thereabouts?
>>
>> I am a bit concerned about what to do, because at the moment I am running on
>> one disk only, and if things go wrong I will end up with a server that is
>> down (downtime) and possible data loss (backups aside).
>>
>> Any ideas what I should do? How do I put the RAID back together, ideally on
>> the live system, without rebooting into rescue mode and risking a long downtime?
>>
>> Any help would be greatly appreciated, as until now the only thing I have
>> ever had to do was resync a disk after an ordinary disk crash.
>>
> The reason I ask is that this looks like a bug I hit myself - if I'm right,
> it's a known problem and you need to upgrade mdadm.
Yeah, that pretty much sounds like the bad udev rules shipped with Squeeze
and its old mdadm. Deactivating the MD udev rules and assembling via the
init scripts is one way to work around this - see the sketch below.
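A rough sketch (the exact name and location of the rule file differs
between distros and versions, so check what your system actually ships):

# see which MD udev rules are installed
$ ls /lib/udev/rules.d/ | grep -i md
# disable the incremental assembly rule, e.g. by renaming it
$ mv /lib/udev/rules.d/64-md-raid.rules /lib/udev/rules.d/64-md-raid.rules.disabled
# then assemble the arrays listed in /etc/mdadm/mdadm.conf by hand
$ mdadm --assemble --scan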
I have some test VMs providing MD RAID-1 on iSCSI targets and have seen
the same issue when logging in to the targets. Deactivation of the udev
rules and manual assembly helped.
Cheers,
Sebastian
* Re: raid md126, md127 problem after reboot, howto fix?
2015-02-08 19:03 raid md126, md127 problem after reboot, howto fix? Marc Widmer
2015-02-08 21:29 ` Wols Lists
@ 2015-02-09 9:55 ` Sebastian Parschauer
2015-02-09 10:22 ` Wols Lists
1 sibling, 1 reply; 6+ messages in thread
From: Sebastian Parschauer @ 2015-02-09 9:55 UTC
To: Marc Widmer, linux-raid
On 08.02.2015 20:03, Marc Widmer wrote:
> Hi List
>
> I have no deep understanding of RAID beyond setting arrays up initially and
> replacing disks when needed, so this problem has never happened to me before:
>
> After a reboot I see really strange behaviour on my server. The disks are
> not marked faulty, but the RAID has "fallen apart".
>
> /proc/mdstat shows me:
>
> md126 : active raid1 sda1[0]
> 10485696 blocks [2/1] [U_]
>
> md127 : active raid1 sda2[0]
> 721558464 blocks [2/1] [U_]
>
> md1 : active raid1 sdb1[1]
> 10485696 blocks [2/1] [_U]
>
> md2 : active raid1 sdb2[1]
> 721558464 blocks [2/1] [_U]
>
> What I would expect is something similar to:
> md1 : active raid1 sdb1[1] sda1[0]
> 10238912 blocks [2/2] [UU]
>
> md2 : active raid1 sdb2[1] sda2[0]
> 1942746048 blocks [2/2] [UU]
>
> Currently only md1 and md2 are running. nmon shows me that only disk sdb is
> active; sda is not doing anything.
>
> I run Debian Squeeze.
>
> I am a bit concerned about what to do, because at the moment I am running on
> one disk only, and if things go wrong I will end up with a server that is
> down (downtime) and possible data loss (backups aside).
>
> Any ideas what I should do? How do I put the RAID back together, ideally on
> the live system, without rebooting into rescue mode and risking a long downtime?
Just stop all arrays which aren't in use at the moment and assemble them
manually. If the proper arrays are already running degraded, then add the
now unassociated partitions to their respective running arrays.
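To double-check which array a partition belongs to before touching
anything, compare the UUIDs, e.g.:

$ mdadm --examine /dev/sda1
$ mdadm --detail /dev/md1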
$ mdadm --stop /dev/md126
$ mdadm --stop /dev/md127
$ mdadm /dev/md1 --add /dev/sda1
$ mdadm /dev/md2 --add /dev/sda2
This could require some syncing, but then everything should be normal again.
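You can watch the rebuild progress with:

$ cat /proc/mdstat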
Cheers,
Sebastian
* Re: raid md126, md127 problem after reboot, howto fix?
2015-02-09 9:55 ` Sebastian Parschauer
@ 2015-02-09 10:22 ` Wols Lists
2015-02-09 10:33 ` Sebastian Parschauer
0 siblings, 1 reply; 6+ messages in thread
From: Wols Lists @ 2015-02-09 10:22 UTC
To: Sebastian Parschauer, Marc Widmer, linux-raid
On 09/02/15 09:55, Sebastian Parschauer wrote:
>> > Any ideas what I should do? How do I put the RAID back together, ideally on
>> > the live system, without rebooting into rescue mode and risking a long downtime?
> Just stop all arrays which aren't in use at the moment and assemble them
> manually. If the proper arrays are already running degraded, then add the
> now unassociated partitions to their respective running arrays.
>
> $ mdadm --stop /dev/md126
> $ mdadm --stop /dev/md127
> $ mdadm /dev/md1 --add /dev/sda1
> $ mdadm /dev/md2 --add /dev/sda2
>
> This could require some syncing, but then everything should be normal again.
This isn't just a Debian/Ubuntu problem - I run Gentoo.
And isn't there a re-add option? That will hopefully just require a
recovery rather than a total resync.
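Something like this, if I remember the syntax correctly:

$ mdadm /dev/md1 --re-add /dev/sda1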
Cheers,
Wol
* Re: raid md126, md127 problem after reboot, howto fix?
2015-02-09 10:22 ` Wols Lists
@ 2015-02-09 10:33 ` Sebastian Parschauer
0 siblings, 0 replies; 6+ messages in thread
From: Sebastian Parschauer @ 2015-02-09 10:33 UTC
To: Wols Lists, Marc Widmer, linux-raid
On 09.02.2015 11:22, Wols Lists wrote:
> On 09/02/15 09:55, Sebastian Parschauer wrote:
>>>> Any ideas what I should do? How do I put the RAID back together, ideally on
>>>> the live system, without rebooting into rescue mode and risking a long downtime?
>> Just stop all arrays which aren't in use at the moment and assemble them
>> manually. If the proper arrays are already running degraded, then add the
>> now unassociated partitions to their respective running arrays.
>>
>> $ mdadm --stop /dev/md126
>> $ mdadm --stop /dev/md127
>> $ mdadm /dev/md1 --add /dev/sda1
>> $ mdadm /dev/md2 --add /dev/sda2
>>
>> This could require some syncing, but then everything should be normal again.
>
> This isn't just a Debian/Ubuntu problem - I run gentoo.
Okay.
> And isn't there a re-add option? That will hopefully just require a
> recovery rather than a total resync.
Yes, sure, --re-add should be tried first. If it doesn't work, then the
--add option will definitely work.
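So roughly, as a sketch:

$ mdadm /dev/md1 --re-add /dev/sda1 || mdadm /dev/md1 --add /dev/sda1
$ mdadm /dev/md2 --re-add /dev/sda2 || mdadm /dev/md2 --add /dev/sda2

As far as I know, --re-add only avoids the full resync if the array has a
write-intent bitmap; without one, a full recovery runs anyway.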
Cheers,
Sebastian
End of thread.
Thread overview: 6+ messages
2015-02-08 19:03 raid md126, md127 problem after reboot, howto fix? Marc Widmer
2015-02-08 21:29 ` Wols Lists
2015-02-09 9:41 ` Sebastian Parschauer
2015-02-09 9:55 ` Sebastian Parschauer
2015-02-09 10:22 ` Wols Lists
2015-02-09 10:33 ` Sebastian Parschauer