* raid 1 assembled but inactive - works from a "live" distribution...
@ 2017-07-20 11:04 Georgios Petasis
2017-07-20 11:52 ` Georgios Petasis
0 siblings, 1 reply; 5+ messages in thread
From: Georgios Petasis @ 2017-07-20 11:04 UTC (permalink / raw)
To: linux-raid
Hi all,
I have a Fedora 25 system (mdadm v3.4 - 28 Jan 2016) that, out of the
blue, stopped booting. The problem is that it cannot mount some
filesystems that are software raid1 devices. There are no disk
failures. cat /proc/mdstat shows:
Personalities :
md0 : inactive sdb1[1] sda1[0]
1228797952 blocks super 1.2
md2 : inactive sdb3[2] sda3[0]
507344896 blocks super 1.2
md1 : inactive sda2[0] sdb2[2]
2047997952 blocks super 1.2
unused devices: <none>
What is strange is that it does not report them as raid1 arrays (all
of them are).
mdadm -D /dev/md0 shows:
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 4 08:50:18 2012
Raid Level : raid1
Used Dev Size : 614398840 (585.94 GiB 629.14 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 19 22:46:24 2017
State : active, Not Started
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : server.intellitech.gr:0 (local to host
server.intellitech.gr)
UUID : 6f903cce:5f6b3df4:c865924f:b05e2cd4
Events : 447211
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
I booted my PC with a live DVD of Fedora 26 (not 25, like the
installed system), which has mdadm v4.0, and everything works as
expected: I can see the 3 raid devices, mount them, and see their
contents.
So I am puzzled why mdadm 4 can use the devices while mdadm 3.4 no
longer can. How can I fix this?
(I have backups of all 3 filesystems, taken with dd from the live DVD,
where the raid devices work. The backups can be mounted.)
I am considering destroying the arrays completely and recreating them,
but I feel there must be a much simpler solution, since the arrays do
work with mdadm v4.
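A minimal sketch of what such a simpler fix might look like, assuming
the arrays are inactive only because the raid1 personality is
unavailable (illustrative commands, not steps taken from this thread):

  modprobe raid1          # load the raid1 personality, if the module exists for this kernel
  cat /proc/mdstat        # "Personalities : [raid1]" should now be listed
  mdadm --stop /dev/md0   # release the assembled-but-inactive array
  mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1   # reassemble it from its member partitions
  # or start it in place without stopping it first:
  # mdadm --run /dev/md0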
Regards,
George
* Re: raid 1 assembled but inactive - works from a "live" distribution...
2017-07-20 11:04 raid 1 assembled but inactive - works from a "live" distribution Georgios Petasis
@ 2017-07-20 11:52 ` Georgios Petasis
2017-07-20 13:42 ` Wols Lists
0 siblings, 1 reply; 5+ messages in thread
From: Georgios Petasis @ 2017-07-20 11:52 UTC (permalink / raw)
To: linux-raid
On 20/7/2017 14:04, Georgios Petasis wrote:
> Hi all,
>
> I have a Fedora 25 system (mdadm v3.4 - 28 Jan 2016) that, out of the
> blue, stopped booting. The problem is that it cannot mount some
> filesystems that are software raid1 devices. There are no disk
> failures. cat /proc/mdstat shows:
>
> Personalities :
> md0 : inactive sdb1[1] sda1[0]
> 1228797952 blocks super 1.2
>
> md2 : inactive sdb3[2] sda3[0]
> 507344896 blocks super 1.2
>
> md1 : inactive sda2[0] sdb2[2]
> 2047997952 blocks super 1.2
>
> unused devices: <none>
>
> What is strange is that it does not report them as raid1 arrays (all
> of them are).
>
> mdadm -D /dev/md0 shows:
> /dev/md0:
> Version : 1.2
> Creation Time : Sat Feb 4 08:50:18 2012
> Raid Level : raid1
> Used Dev Size : 614398840 (585.94 GiB 629.14 GB)
> Raid Devices : 2
> Total Devices : 2
> Persistence : Superblock is persistent
>
> Update Time : Wed Jul 19 22:46:24 2017
> State : active, Not Started
> Active Devices : 2
> Working Devices : 2
> Failed Devices : 0
> Spare Devices : 0
>
> Name : server.intellitech.gr:0 (local to host
> server.intellitech.gr)
> UUID : 6f903cce:5f6b3df4:c865924f:b05e2cd4
> Events : 447211
>
>     Number   Major   Minor   RaidDevice State
>        0       8        1        0      active sync   /dev/sda1
>        1       8       17        1      active sync   /dev/sdb1
>
> I booted my PC with a live DVD of Fedora 26 (not 25, like the
> installed system), which has mdadm v4.0, and everything works as
> expected: I can see the 3 raid devices, mount them, and see their
> contents.
>
> So I am puzzled why mdadm 4 can use the devices while mdadm 3.4 no
> longer can. How can I fix this?
>
> (I have backups of all 3 filesystems, taken with dd from the live DVD,
> where the raid devices work. The backups can be mounted.)
>
> I am considering destroying the arrays completely and recreating them,
> but I feel there must be a much simpler solution, since the arrays do
> work with mdadm v4.
>
> Regards,
>
> George
Another puzzling fact is that booting Fedora 25 with the previous
kernel works; with the latest kernel it does not.
I tried this because I saw "md: personality for level 1 is not loaded!"
in the logs...
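A quick, illustrative way to check whether the raid1 module is actually
available for the kernel being booted (assuming the standard Fedora
module layout; these commands are not from the thread):

  uname -r                                         # which kernel is running
  ls /lib/modules/$(uname -r)/kernel/drivers/md/   # raid1.ko (possibly compressed, e.g. raid1.ko.xz) should be listed
  modprobe -v raid1                                # try to load it; any error message points to the culprit
  grep raid1 /proc/modules                         # confirm it is now loaded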
George
* Re: raid 1 assembled but inactive - works from a "live" distribution...
2017-07-20 11:52 ` Georgios Petasis
@ 2017-07-20 13:42 ` Wols Lists
2017-07-21 14:04 ` Georgios Petasis
0 siblings, 1 reply; 5+ messages in thread
From: Wols Lists @ 2017-07-20 13:42 UTC (permalink / raw)
To: petasisg, linux-raid
On 20/07/17 12:52, Georgios Petasis wrote:
> Another puzzling fact is that booting Fedora 25 with the previous
> kernel works; with the latest kernel it does not.
> I tried this because I saw "md: personality for level 1 is not loaded!"
> in the logs...
Has the initrd or similar been changed somehow? I'll let the experts
chime in more, but this sounds like the kernel module for raid1 has
become inaccessible.
You've got superblock v1.2, which means the old trick of booting off of
one disk read-only, before assembling the raid and switching root
read/write won't work. Therefore you
(a) *MUST* have raid1 support in grub, in order to be able to read the
mirror to find the kernel, and
(b) must have raid1 support in the kernel, so the kernel can see /.
As I say, that message makes me suspect that either the grub or the
kernel raid-1 support module has disappeared ...
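One illustrative way to follow up on that suspicion on a Fedora/dracut
system (a sketch under the assumption that the failing kernel's module
tree or initramfs is at fault; not instructions given in this thread):

  ls /lib/modules/<failing-kernel>/kernel/drivers/md/          # is raid1.ko(.xz) present on disk at all?
  lsinitrd /boot/initramfs-<failing-kernel>.img | grep raid1   # is it packed into that kernel's initramfs?
  dracut --force /boot/initramfs-<failing-kernel>.img <failing-kernel>   # if not, regenerate the initramfs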
Cheers,
Wol
* Re: raid 1 assembled but inactive - works from a "live" distribution...
2017-07-20 13:42 ` Wols Lists
@ 2017-07-21 14:04 ` Georgios Petasis
0 siblings, 0 replies; 5+ messages in thread
From: Georgios Petasis @ 2017-07-21 14:04 UTC (permalink / raw)
To: Wols Lists, linux-raid
On 20/7/2017 16:42, Wols Lists wrote:
> On 20/07/17 12:52, Georgios Petasis wrote:
>> Another puzzling fact is that booting Fedora 25 with the previous
>> kernel works; with the latest kernel it does not.
>> I tried this because I saw "md: personality for level 1 is not loaded!"
>> in the logs...
> Has the initrd or similar been changed somehow? I'll let the experts
> chime in more, but this sounds like the kernel module for raid1 has
> become inaccessible.
I think the problem is a kernel update issued by Fedora. I had updated
to the newest kernel, but since this is a server running 24/7, the
problem only showed up after I rebooted. And it took me many hours to
find out that it was a kernel issue.
I have filed a bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1473308
and I am waiting to see what happens.
>
> You've got superblock v1.2, which means the old trick of booting off of
> one disk read-only, before assembling the raid and switching root
> read/write won't work. Therefore you
>
> (a) *MUST* have raid1 support in grub, in order to be able to read the
> mirror to find the kernel, and
>
> (b) must have raid1 support in the kernel, so the kernel can see /.
My configuration does not use raid for /. The /boot and / partitions
are on a plain SSD. What I have on raid-1 are /var, /home, and /free
(just scratch space where you can drop things).
What happens during a Fedora 25 boot with kernel 4.11.10-200.fc25 is:
everything is normal until the boot process tries to mount the file
systems. There it waits for about 1.5 minutes to find all three
devices, then times out and drops into emergency mode.
In there, the 3 raid devices are present, but inactive. The reason they
are inactive is that the personalities list is empty.
The raid-1 personality is certainly missing; other personalities may be
missing too, but I don't use any others on this system, so I cannot
check.
There is a clear message in the boot logs that says:
md: personality for level 1 is not loaded!
So I suspect the relevant modules are missing from this specific
kernel.
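For illustration only (not checks reported in the thread), this can be
confirmed from the emergency shell by comparing the failing kernel with
the previously working one, assuming the usual Fedora module layout:

  ls /lib/modules/                                      # list installed kernels: failing vs. previously working
  ls /lib/modules/<failing-kernel>/kernel/drivers/md/   # raid1.ko(.xz) missing here...
  ls /lib/modules/<working-kernel>/kernel/drivers/md/   # ...but present for the kernel that still boots?
  journalctl -b | grep -i personality                   # the "md: personality for level 1 is not loaded!" message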
>
> As I say, that message makes me suspect that either the grub or the
> kernel raid-1 support module has disappeared ...
I also think that the kernel module for raid-1 has disappeared...
Thanks,
George
>
> Cheers,
> Wol