* [BUG REPORT] md raid5 with write log does not start
From: Coly Li @ 2020-04-16 16:30 UTC
To: linux-raid
Hi folks,
When I try to create an md raid5 array with 4 NVMe SSDs (3 as raid array
component disks, 1 as the write log) on kernel Linux v5.6 (not Linux
v5.7-rc), I find the md raid5 array cannot start.
I use this command to create the md raid5 array with a write log:
mdadm -C /dev/md0 -l 5 -n 3 /dev/nvme{0,1,2}n1 --write-journal /dev/nvme3n1
From the terminal I get the following 2 lines of output:
mdadm: Defaulting to version 1.2 metadata
mdadm: RUN_ARRAY failed: Invalid argument
From the kernel log, I see the following dmesg lines:
[13624.897066] md/raid:md0: array cannot have both journal and bitmap
[13624.897068] md: pers->run() failed ...
[13624.897105] md: md0 stopped.
But from /proc/mdstat, it seems an inactive array is still created:
/proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : inactive nvme2n1[4](S) nvme0n1[0](S) nvme3n1[3](S)
11251818504 blocks super 1.2
unused devices: <none>
From all this information, it seems that when the raid5 cache is
initialized the bitmap information is not cleared, so the error message
shows up and raid5_run() fails.
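For reference, the rejection seems to come from a check in raid5_run()
in drivers/md/raid5.c; the following is only a sketch from my reading,
not verbatim kernel code:

	/* raid5_run(): refuse to start when both a journal and a bitmap exist */
	if (test_bit(MD_HAS_JOURNAL, &mddev->flags) && mddev->bitmap) {
		pr_notice("md/raid:%s: array cannot have both journal and bitmap\n",
			  mdname(mddev));
		return -EINVAL;
	}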
I don't have a clear idea of how to handle bitmap, journal and ppl
properly, so I am reporting the problem here first.
So far I am not sure whether this is a bug or whether I am doing
something wrong. I hope other people can reproduce the above failure too.
Thanks.
Coly Li
* Re: [BUG REPORT] md raid5 with write log does not start
From: Artur Paszkiewicz @ 2020-04-17 10:28 UTC
To: Coly Li, linux-raid
On 4/16/20 6:30 PM, Coly Li wrote:
> Hi folks,
>
> When I try to create an md raid5 array with 4 NVMe SSDs (3 as raid array
> component disks, 1 as the write log) on kernel Linux v5.6 (not Linux
> v5.7-rc), I find the md raid5 array cannot start.
>
> I use this command to create the md raid5 array with a write log:
>
> mdadm -C /dev/md0 -l 5 -n 3 /dev/nvme{0,1,2}n1 --write-journal /dev/nvme3n1
>
> From the terminal I get the following 2 lines of output:
>
> mdadm: Defaulting to version 1.2 metadata
> mdadm: RUN_ARRAY failed: Invalid argument
>
> From the kernel log, I see the following dmesg lines:
>
> [13624.897066] md/raid:md0: array cannot have both journal and bitmap
> [13624.897068] md: pers->run() failed ...
> [13624.897105] md: md0 stopped.
>
> But from /proc/mdstat, it seems an inactive array is still created:
>
> /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md127 : inactive nvme2n1[4](S) nvme0n1[0](S) nvme3n1[3](S)
> 11251818504 blocks super 1.2
>
> unused devices: <none>
>
> From all this information, it seems that when the raid5 cache is
> initialized the bitmap information is not cleared, so the error message
> shows up and raid5_run() fails.
>
> I don't have a clear idea of how to handle bitmap, journal and ppl
> properly, so I am reporting the problem here first.
>
> So far I am not sure whether this is a bug or whether I am doing
> something wrong. I hope other people can reproduce the above failure too.
Hi Coly,
It looks like the mdadm that you're using added an internal bitmap
despite creating the array with a journal. I think that was fixed some
time ago. The kernel correctly refuses to start an array with both a
bitmap and a journal (or ppl). You can assemble this array now with:
mdadm -A /dev/md0 /dev/nvme[0-3]n1 --update=no-bitmap
You can also explicitly tell mdadm not to add a bitmap when creating an
array using "--bitmap=none".
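For example (assuming the same device layout as in your original
command, untested on my side), the create command would become:

mdadm -C /dev/md0 -l 5 -n 3 /dev/nvme{0,1,2}n1 --write-journal /dev/nvme3n1 --bitmap=none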
Regards,
Artur
* Re: [BUG REPORT] md raid5 with write log does not start
From: Xiao Ni @ 2020-04-20 2:47 UTC
To: Coly Li, linux-raid
On 04/17/2020 12:30 AM, Coly Li wrote:
> Hi folks,
>
> When I try to create an md raid5 array with 4 NVMe SSDs (3 as raid array
> component disks, 1 as the write log) on kernel Linux v5.6 (not Linux
> v5.7-rc), I find the md raid5 array cannot start.
>
> I use this command to create the md raid5 array with a write log:
>
> mdadm -C /dev/md0 -l 5 -n 3 /dev/nvme{0,1,2}n1 --write-journal /dev/nvme3n1
>
> From the terminal I get the following 2 lines of output:
>
> mdadm: Defaulting to version 1.2 metadata
> mdadm: RUN_ARRAY failed: Invalid argument
>
> From the kernel log, I see the following dmesg lines:
>
> [13624.897066] md/raid:md0: array cannot have both journal and bitmap
> [13624.897068] md: pers->run() failed ...
> [13624.897105] md: md0 stopped.
>
> But from /proc/mdstat, it seems an inactive array is still created:
>
> /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md127 : inactive nvme2n1[4](S) nvme0n1[0](S) nvme3n1[3](S)
> 11251818504 blocks super 1.2
>
> unused devices: <none>
>
> From all this information, it seems that when the raid5 cache is
> initialized the bitmap information is not cleared, so the error message
> shows up and raid5_run() fails.
>
> I don't have a clear idea of how to handle bitmap, journal and ppl
> properly, so I am reporting the problem here first.
>
> So far I am not sure whether this is a bug or whether I am doing
> something wrong. I hope other people can reproduce the above failure too.
>
> Thanks.
>
> Coly Li
>
Hi Coly,
I can reproduce this. mdadm automatically creates an internal bitmap if
a member device is bigger than 100GB, and a raid5 journal device and a
bitmap can't exist at the same time. So mdadm needs to check for a
journal device when creating the array:
diff --git a/Create.c b/Create.c
index 6f84e5b..0efa19c 100644
--- a/Create.c
+++ b/Create.c
@@ -542,6 +542,7 @@ int Create(struct supertype *st, char *mddev,
 	if (!s->bitmap_file &&
 	    s->level >= 1 &&
 	    st->ss->add_internal_bitmap &&
+	    s->journaldisks == 0 &&
 	    (s->consistency_policy != CONSISTENCY_POLICY_RESYNC &&
 	     s->consistency_policy != CONSISTENCY_POLICY_PPL) &&
 	    (s->write_behind || s->size > 100*1024*1024ULL)) {
I tried this patch and it resolves the problem. What do you think about
it?
Best Regards
Xiao