* [PATCH 0/2] Check node nums for cluster raid
From: Guoqing Jiang @ 2016-05-04 8:33 UTC
To: Jes.Sorensen; +Cc: linux-raid, Guoqing Jiang
A clustered raid needs at least two nodes, so these two
patches add checks for that before creating an array and
before changing the bitmap.
Thanks,
Guoqing
Guoqing Jiang (2):
Create: check the node nums when create clustered raid
super1: don't update node nums if it is not more than 1
Create.c | 7 ++++++-
super1.c | 5 +++++
2 files changed, 11 insertions(+), 1 deletion(-)
--
2.6.2
* [PATCH 1/2] Create: check the node nums when create clustered raid
From: Guoqing Jiang @ 2016-05-04 8:33 UTC
To: Jes.Sorensen; +Cc: linux-raid, Guoqing Jiang
It doesn't make sense to create a clustered raid
with only 1 node.
Reported-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
---
Create.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/Create.c b/Create.c
index 1e4a6ee..717086b 100644
--- a/Create.c
+++ b/Create.c
@@ -114,8 +114,13 @@ int Create(struct supertype *st, char *mddev,
 	unsigned long long newsize;
 	int major_num = BITMAP_MAJOR_HI;
 
-	if (s->bitmap_file && strcmp(s->bitmap_file, "clustered") == 0)
+	if (s->bitmap_file && strcmp(s->bitmap_file, "clustered") == 0) {
 		major_num = BITMAP_MAJOR_CLUSTERED;
+		if (c->nodes <= 1) {
+			pr_err("At least 2 nodes are needed for cluster-md\n");
+			return 1;
+		}
+	}
 
 	memset(&info, 0, sizeof(info));
 	if (s->level == UnSet && st && st->ss->default_geometry)
--
2.6.2
* [PATCH 2/2] super1: don't update node nums if it is not more than 1
From: Guoqing Jiang @ 2016-05-04 8:33 UTC
To: Jes.Sorensen; +Cc: linux-raid, Guoqing Jiang
Cluster raid needs at least two nodes, so add the check
before updating the node count.
Reported-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
---
super1.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/super1.c b/super1.c
index 8d5543f..972b470 100644
--- a/super1.c
+++ b/super1.c
@@ -2394,6 +2394,11 @@ static int write_bitmap1(struct supertype *st, int fd, enum bitmap_update update
 			return -EINVAL;
 		}
 
+		if (bms->version == BITMAP_MAJOR_CLUSTERED && st->nodes <= 1) {
+			pr_err("Warning: cluster-md at least needs two nodes\n");
+			return -EINVAL;
+		}
+
 		/* Each node has an independent bitmap, it is necessary to calculate the
 		 * space is enough or not, first get how many bytes for the total bitmap */
 		bm_space_per_node = calc_bitmap_size(bms, 4096);
--
2.6.2
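The comment in the hunk above notes that every node gets its own bitmap, so the
reserved area has to hold one bitmap per node. A rough sketch of that sizing idea
(hypothetical numbers only, not mdadm's actual calculation):

#include <stdio.h>

int main(void)
{
	unsigned long long bm_space_per_node = 4096;	/* assumed bytes per node's bitmap */
	unsigned long long reserved_space = 32768;	/* assumed room before the data offset */
	int nodes = 4;

	if (bm_space_per_node * nodes > reserved_space)
		printf("not enough room for %d per-node bitmaps\n", nodes);
	else
		printf("%d per-node bitmaps fit in the reserved space\n", nodes);
	return 0;
}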
* Re: [PATCH 0/2] Check node nums for cluster raid
From: Jes Sorensen @ 2016-05-04 15:12 UTC
To: Guoqing Jiang; +Cc: linux-raid
Guoqing Jiang <gqjiang@suse.com> writes:
> A clustered raid needs at least two nodes, so these two
> patches add checks for that before creating an array and
> before changing the bitmap.
>
> Thanks,
> Guoqing
>
> Guoqing Jiang (2):
> Create: check the node nums when create clustered raid
> super1: don't update node nums if it is not more than 1
>
> Create.c | 7 ++++++-
> super1.c | 5 +++++
> 2 files changed, 11 insertions(+), 1 deletion(-)
Hi Guoqing,
I am a little confused on this one - albeit I haven't looked at it in
detail. Why should it not be possible to start a cluster with one node?
In theory you should be able to do that, and then add nodes later?
Cheers,
Jes
* Re: [PATCH 0/2] Check node nums for cluster raid
From: Doug Ledford @ 2016-05-04 15:19 UTC
To: Jes Sorensen, Guoqing Jiang; +Cc: linux-raid
On 05/04/2016 11:12 AM, Jes Sorensen wrote:
> Guoqing Jiang <gqjiang@suse.com> writes:
>> A clustered raid needs at least two nodes, so these two
>> patches add checks for that before creating an array and
>> before changing the bitmap.
>>
>> Thanks,
>> Guoqing
>>
>> Guoqing Jiang (2):
>> Create: check the node nums when create clustered raid
>> super1: don't update node nums if it is not more than 1
>>
>> Create.c | 7 ++++++-
>> super1.c | 5 +++++
>> 2 files changed, 11 insertions(+), 1 deletion(-)
>
> Hi Guoqing,
>
> I am a little confused on this one - albeit I haven't looked at it in
> detail. Why should it not be possible to start a cluster with one node?
> In theory you should be able to do that, and then add nodes later?
Not typically. A single node of a cluster is likely the odd man out, so
starting it and allowing changes to the underlying device has a high
potential of creating split brain issues. For that reason, most cluster
setups require some minimum (usually 2) for a quorum before they will
start. Otherwise, given a three node cluster, you could end up with
three separate live filesystems and the need to merge changes between
them to bring the cluster back into sync.
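As a rough sketch of the majority-quorum rule described above (illustrative C only,
not md-cluster code; have_quorum() is a made-up helper for the example):

#include <stdbool.h>
#include <stdio.h>

/* Strict majority: two disjoint partitions can never both hold quorum. */
static bool have_quorum(int configured_nodes, int reachable_nodes)
{
	return reachable_nodes > configured_nodes / 2;
}

int main(void)
{
	printf("3 nodes, 1 reachable: %d\n", have_quorum(3, 1));	/* 0: no quorum */
	printf("3 nodes, 2 reachable: %d\n", have_quorum(3, 2));	/* 1: quorum    */
	return 0;
}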
--
Doug Ledford <dledford@redhat.com>
GPG KeyID: 0E572FDD
* Re: [PATCH 0/2] Check node nums for cluster raid
From: Jes Sorensen @ 2016-05-04 15:25 UTC
To: Doug Ledford; +Cc: Guoqing Jiang, linux-raid
Doug Ledford <dledford@redhat.com> writes:
> On 05/04/2016 11:12 AM, Jes Sorensen wrote:
>> Guoqing Jiang <gqjiang@suse.com> writes:
>>> A clustered raid needs at least two nodes, so these two
>>> patches add checks for that before creating an array and
>>> before changing the bitmap.
>>>
>>> Thanks,
>>> Guoqing
>>>
>>> Guoqing Jiang (2):
>>> Create: check the node nums when create clustered raid
>>> super1: don't update node nums if it is not more than 1
>>>
>>> Create.c | 7 ++++++-
>>> super1.c | 5 +++++
>>> 2 files changed, 11 insertions(+), 1 deletion(-)
>>
>> Hi Guoqing,
>>
>> I am a little confused on this one - albeit I haven't looked at it in
>> detail. Why should it not be possible to start a cluster with one node?
>> In theory you should be able to do that, and then add nodes later?
>
> Not typically. A single node of a cluster is likely the odd man out, so
> starting it and allowing changes to the underlying device has a high
> potential of creating split brain issues. For that reason, most cluster
> setups require some minimum (usually 2) for a quorum before they will
> start. Otherwise, given a three node cluster, you could end up with
> three separate live filesystems and the need to merge changes between
> them to bring the cluster back into sync.
Valid point, but it still looks like a duplicate of the classic raid1
situation. We still allow the creation of a raid1 with just one drive;
would it not make more sense to spit out a warning here, rather than
deny it?
Cheers,
Jes
* Re: [PATCH 0/2] Check node nums for cluster raid
From: Doug Ledford @ 2016-05-04 15:28 UTC
To: Jes Sorensen; +Cc: Guoqing Jiang, linux-raid
On 05/04/2016 11:25 AM, Jes Sorensen wrote:
> Doug Ledford <dledford@redhat.com> writes:
>> On 05/04/2016 11:12 AM, Jes Sorensen wrote:
>>> Guoqing Jiang <gqjiang@suse.com> writes:
>>>> A clustered raid needs at least two nodes, so these two
>>>> patches add checks for that before creating an array and
>>>> before changing the bitmap.
>>>>
>>>> Thanks,
>>>> Guoqing
>>>>
>>>> Guoqing Jiang (2):
>>>> Create: check the node nums when create clustered raid
>>>> super1: don't update node nums if it is not more than 1
>>>>
>>>> Create.c | 7 ++++++-
>>>> super1.c | 5 +++++
>>>> 2 files changed, 11 insertions(+), 1 deletion(-)
>>>
>>> Hi Guoqing,
>>>
>>> I am a little confused on this one - albeit I haven't looked at it in
>>> detail. Why should it not be possible to start a cluster with one node?
>>> In theory you should be able to do that, and then add nodes later?
>>
>> Not typically. A single node of a cluster is likely the odd man out, so
>> starting it and allowing changes to the underlying device has a high
>> potential of creating split brain issues. For that reason, most cluster
>> setups require some minimum (usually 2) for a quorum before they will
>> start. Otherwise, given a three node cluster, you could end up with
>> three separate live filesystems and the need to merge changes between
>> them to bring the cluster back into sync.
>
> Valid point, but it still looks like a duplicate of the classic raid1
> situation. We still allow the creation of a raid1 with just one drive,
> would it not make more sense to spit out a warning here, rather than
> deny it?
Local raid1 is a little different in that if both members of a raid1 are
supposed to be present on the same machine, and that machine only sees
one of the disks, we take it on faith that the other one isn't running
around live in another machine. If it is, we can end up corrupting our
array (we rely on the events counter on one disk superseding the other
disk to know which disk is the master copy and which one needs to be
refreshed, if both disks are brought up the same number of times without
each other, then their event counters will be the same and we won't know
which one should be master). With a clustered MD filesystem, that
assumption isn't true, and so starting a device without a quorum carries
a much higher risk.
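A rough sketch of the event-counter rule mentioned above (illustrative C only,
not md kernel code; pick_master() and the enum are made up for the example):

#include <stdio.h>

enum master { FIRST_IS_MASTER, SECOND_IS_MASTER, AMBIGUOUS };

static enum master pick_master(unsigned long long events_a,
			       unsigned long long events_b)
{
	if (events_a > events_b)
		return FIRST_IS_MASTER;
	if (events_b > events_a)
		return SECOND_IS_MASTER;
	/* Equal counts after divergent use: no way to tell which copy is newer. */
	return AMBIGUOUS;
}

int main(void)
{
	printf("%d\n", pick_master(120, 118));	/* 0: first disk is the master copy   */
	printf("%d\n", pick_master(120, 120));	/* 2: ambiguous, the split-brain case */
	return 0;
}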
--
Doug Ledford <dledford@redhat.com>
GPG KeyID: 0E572FDD
* Re: [PATCH 0/2] Check node nums for cluster raid
From: Guoqing Jiang @ 2016-05-05 3:03 UTC
To: Jes Sorensen; +Cc: linux-raid
Hi Jes,
On 05/04/2016 11:12 AM, Jes Sorensen wrote:
> Guoqing Jiang <gqjiang@suse.com> writes:
>> A clustered raid needs at least two nodes, so these two
>> patches add checks for that before creating an array and
>> before changing the bitmap.
>>
>> Thanks,
>> Guoqing
>>
>> Guoqing Jiang (2):
>> Create: check the node nums when create clustered raid
>> super1: don't update node nums if it is not more than 1
>>
>> Create.c | 7 ++++++-
>> super1.c | 5 +++++
>> 2 files changed, 11 insertions(+), 1 deletion(-)
> Hi Guoqing,
>
> I am a little confused on this one - albeit I haven't looked at it in
> detail. Why should it not be possible to start a cluster with one node?
> In theory you should be able to do that, and then add nodes later?
The "nodes" means how many nodes could run with the clustered raid.
IOW, if nodes is set to 1, then we can't assemble the clustered raid in
node B after clustered raid is created in node A.
And we had provided below protection in md-cluster.c, so it doesn't make
sense to create clustered raid with "nodes = 1" since we can't use this raid
across cluster.
f (nodes < cinfo->slot_number) {
pr_err("md-cluster: Slot allotted(%d) is greater than
available slots(%d).",
cinfo->slot_number, nodes);
ret = -ERANGE;
goto err;
}
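As a worked example of that guard (illustrative C only, assuming 1-based DLM
slot numbers; check_slot() is a made-up stand-in, not md-cluster code):

#include <errno.h>
#include <stdio.h>

static int check_slot(int nodes, int slot_number)
{
	/* Mirrors the quoted check: reject slots beyond the configured node count. */
	if (nodes < slot_number) {
		fprintf(stderr, "slot %d exceeds configured nodes (%d)\n",
			slot_number, nodes);
		return -ERANGE;
	}
	return 0;
}

int main(void)
{
	/* Array created with nodes=1: node A takes slot 1, node B would take slot 2. */
	printf("node A: %d\n", check_slot(1, 1));	/* 0: allowed        */
	printf("node B: %d\n", check_slot(1, 2));	/* -ERANGE: rejected */
	return 0;
}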
Regards,
Guoqing
* Re: [PATCH 0/2] Check node nums for cluster raid
From: Jes Sorensen @ 2016-05-05 20:28 UTC
To: Guoqing Jiang; +Cc: linux-raid
Guoqing Jiang <gqjiang@suse.com> writes:
> Hi Jes,
>
> On 05/04/2016 11:12 AM, Jes Sorensen wrote:
>> Guoqing Jiang <gqjiang@suse.com> writes:
>>> A clustered raid needs at least two nodes, so these two
>>> patches add checks for that before creating an array and
>>> before changing the bitmap.
>>>
>>> Thanks,
>>> Guoqing
>>>
>>> Guoqing Jiang (2):
>>> Create: check the node nums when create clustered raid
>>> super1: don't update node nums if it is not more than 1
>>>
>>> Create.c | 7 ++++++-
>>> super1.c | 5 +++++
>>> 2 files changed, 11 insertions(+), 1 deletion(-)
>> Hi Guoqing,
>>
>> I am a little confused on this one - albeit I haven't looked at it in
>> detail. Why should it not be possible to start a cluster with one node?
>> In theory you should be able to do that, and then add nodes later?
>
> The "nodes" means how many nodes could run with the clustered raid.
> IOW, if nodes is set to 1, then we can't assemble the clustered raid in
> node B after clustered raid is created in node A.
>
> And we had provided below protection in md-cluster.c, so it doesn't make
> sense to create clustered raid with "nodes = 1" since we can't use this raid
> across cluster.
>
> f (nodes < cinfo->slot_number) {
> pr_err("md-cluster: Slot allotted(%d) is greater than
> available slots(%d).",
> cinfo->slot_number, nodes);
> ret = -ERANGE;
> goto err;
> }
OK, thanks for the explanation! I'll apply these shortly.
Cheers,
Jes