* [PATCH V3 0/3] Make dlm lock more reliable for cluster-md
@ 2018-01-22 9:12 Guoqing Jiang
From: Guoqing Jiang @ 2018-01-22 9:12 UTC (permalink / raw)
To: jes.sorensen; +Cc: linux-raid, Guoqing Jiang
V3 changes:
1. remove lock_cluster()/unlock_cluster().
---------------------------------------------------------------------
V2 changes:
1. confirm that a device change on a slave node doesn't need to take the lock.
2. remove the explicit name of the dlm library.
3. delete a global variable that was used to store the lockid.
---------------------------------------------------------------------
Currently, only a few functions which change the superblock are protected
by the dlm lock, but to avoid consistency issues we also need to protect
the functions which read the superblock.
Calling the dlm locking interface inside super1.c is clumsy, since every
function which accesses the superblock would need its own lock protection,
so we take and release the lock in mdadm.c instead.
The first patch provides dlm locking support for CREATE, MANAGE, GROW
and INCREMENTAL mode, the second patch provides the support for
ASSEMBLE mode, and the last patch cleans up the failure path in Assemble().
Guoqing Jiang (3):
mdadm: improve the dlm locking mechanism for clustered raid
Assemble: provide protection when clustered raid do assemble
Assemble: cleanup the failure path
Assemble.c | 78 ++++++++++++++++++++++++++++++++++++--------------------------
mdadm.c | 20 ++++++++++++++++
mdadm.h | 10 ++++----
super1.c | 42 ---------------------------------
util.c | 57 ++++++++++++++++++++++++++++++++++-----------
5 files changed, 113 insertions(+), 94 deletions(-)
--
2.13.6
* [PATCH V3 1/3] mdadm: improve the dlm locking mechanism for clustered raid
From: Guoqing Jiang @ 2018-01-22 9:12 UTC (permalink / raw)
To: jes.sorensen; +Cc: linux-raid, Guoqing Jiang
Previously, dlm locking only protected several functions which write
to the superblock (update_super, add_to_super and store_super), and
missed other functions such as add_internal_bitmap. The functions
which read the superblock also need to run under the lock to avoid
consistency issues.
So remove the dlm code from super1.c and move the locking into main(),
except for assemble mode, which is handled in the next commit. Since
we can identify whether an array is clustered from the conditions
checked in each mode, the change should not affect native arrays.
The existing locking code is improved as follows:
1. replace ls_unlock with ls_unlock_wait, since we should only return
once the unlock operation is complete.
2. inspired by lvm, try to use an existing lockspace first (in case
the lockspace was not released for some reason) before blindly
creating a new one.
3. retry several times instead of quitting immediately if locking
returns EAGAIN.
Note: in MANAGE mode we must not take the lock when a node only wants
to confirm a device change, otherwise a disk can never be added to the
cluster because all nodes would compete for the lock.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
---
mdadm.c | 20 ++++++++++++++++++++
mdadm.h | 10 +++++-----
super1.c | 42 ------------------------------------------
util.c | 57 +++++++++++++++++++++++++++++++++++++++++++--------------
4 files changed, 68 insertions(+), 61 deletions(-)
diff --git a/mdadm.c b/mdadm.c
index afc3015144d3..5afe4155d701 100644
--- a/mdadm.c
+++ b/mdadm.c
@@ -57,6 +57,7 @@ int main(int argc, char *argv[])
struct mddev_dev *devlist = NULL;
struct mddev_dev **devlistend = & devlist;
struct mddev_dev *dv;
+ mdu_array_info_t array;
int devs_found = 0;
char *symlinks = NULL;
int grow_continue = 0;
@@ -103,6 +104,7 @@ int main(int argc, char *argv[])
FILE *outf;
int mdfd = -1;
+ int locked = 0;
srandom(time(0) ^ getpid());
@@ -1434,6 +1436,22 @@ int main(int argc, char *argv[])
/* --scan implied --brief unless -vv */
c.brief = 1;
+ if (mode == CREATE) {
+ if (s.bitmap_file && strcmp(s.bitmap_file, "clustered") == 0) {
+ locked = cluster_get_dlmlock();
+ if (locked != 1)
+ exit(1);
+ }
+ } else if (mode == MANAGE || mode == GROW || mode == INCREMENTAL) {
+ if (!md_get_array_info(mdfd, &array) && (devmode != 'c')) {
+ if (array.state & (1 << MD_SB_CLUSTERED)) {
+ locked = cluster_get_dlmlock();
+ if (locked != 1)
+ exit(1);
+ }
+ }
+ }
+
switch(mode) {
case MANAGE:
/* readonly, add/remove, readwrite, runstop */
@@ -1739,6 +1757,8 @@ int main(int argc, char *argv[])
autodetect();
break;
}
+ if (locked)
+ cluster_release_dlmlock();
if (mdfd > 0)
close(mdfd);
exit(rv);
diff --git a/mdadm.h b/mdadm.h
index cf4721aa9e9a..61bc7afe036e 100644
--- a/mdadm.h
+++ b/mdadm.h
@@ -1604,6 +1604,7 @@ struct dlm_hooks {
dlm_lshandle_t (*create_lockspace)(const char *name,
unsigned int mode);
+ dlm_lshandle_t (*open_lockspace)(const char *name);
int (*release_lockspace)(const char *name, dlm_lshandle_t ls,
int force);
int (*ls_lock)(dlm_lshandle_t lockspace, uint32_t mode,
@@ -1612,17 +1613,16 @@ struct dlm_hooks {
uint32_t parent, void (*astaddr) (void *astarg),
void *astarg, void (*bastaddr) (void *astarg),
void *range);
- int (*ls_unlock)(dlm_lshandle_t lockspace, uint32_t lkid,
- uint32_t flags, struct dlm_lksb *lksb,
- void *astarg);
+ int (*ls_unlock_wait)(dlm_lshandle_t lockspace, uint32_t lkid,
+ uint32_t flags, struct dlm_lksb *lksb);
int (*ls_get_fd)(dlm_lshandle_t ls);
int (*dispatch)(int fd);
};
extern int get_cluster_name(char **name);
extern int dlm_funs_ready(void);
-extern int cluster_get_dlmlock(int *lockid);
-extern int cluster_release_dlmlock(int lockid);
+extern int cluster_get_dlmlock(void);
+extern int cluster_release_dlmlock(void);
extern void set_dlm_hooks(void);
#define _ROUND_UP(val, base) (((val) + (base) - 1) & ~(base - 1))
diff --git a/super1.c b/super1.c
index 7ae6dc326683..6774fbd2b53f 100644
--- a/super1.c
+++ b/super1.c
@@ -1185,20 +1185,9 @@ static int update_super1(struct supertype *st, struct mdinfo *info,
* ignored.
*/
int rv = 0;
- int lockid;
struct mdp_superblock_1 *sb = st->sb;
bitmap_super_t *bms = (bitmap_super_t*)(((char*)sb) + MAX_SB_SIZE);
- if (bms->version == BITMAP_MAJOR_CLUSTERED && dlm_funs_ready()) {
- rv = cluster_get_dlmlock(&lockid);
- if (rv) {
- pr_err("Cannot get dlmlock in %s return %d\n",
- __func__, rv);
- cluster_release_dlmlock(lockid);
- return rv;
- }
- }
-
if (strcmp(update, "homehost") == 0 &&
homehost) {
/* Note that 'homehost' is special as it is really
@@ -1551,8 +1540,6 @@ static int update_super1(struct supertype *st, struct mdinfo *info,
rv = -1;
sb->sb_csum = calc_sb_1_csum(sb);
- if (bms->version == BITMAP_MAJOR_CLUSTERED && dlm_funs_ready())
- cluster_release_dlmlock(lockid);
return rv;
}
@@ -1656,20 +1643,8 @@ static int add_to_super1(struct supertype *st, mdu_disk_info_t *dk,
struct mdp_superblock_1 *sb = st->sb;
__u16 *rp = sb->dev_roles + dk->number;
struct devinfo *di, **dip;
- bitmap_super_t *bms = (bitmap_super_t*)(((char*)sb) + MAX_SB_SIZE);
- int rv, lockid;
int dk_state;
- if (bms->version == BITMAP_MAJOR_CLUSTERED && dlm_funs_ready()) {
- rv = cluster_get_dlmlock(&lockid);
- if (rv) {
- pr_err("Cannot get dlmlock in %s return %d\n",
- __func__, rv);
- cluster_release_dlmlock(lockid);
- return rv;
- }
- }
-
dk_state = dk->state & ~(1<<MD_DISK_FAILFAST);
if ((dk_state & (1<<MD_DISK_ACTIVE)) &&
(dk_state & (1<<MD_DISK_SYNC)))/* active, sync */
@@ -1701,9 +1676,6 @@ static int add_to_super1(struct supertype *st, mdu_disk_info_t *dk,
di->next = NULL;
*dip = di;
- if (bms->version == BITMAP_MAJOR_CLUSTERED && dlm_funs_ready())
- cluster_release_dlmlock(lockid);
-
return 0;
}
@@ -1716,18 +1688,6 @@ static int store_super1(struct supertype *st, int fd)
struct align_fd afd;
int sbsize;
unsigned long long dsize;
- bitmap_super_t *bms = (bitmap_super_t*)(((char*)sb) + MAX_SB_SIZE);
- int rv, lockid;
-
- if (bms->version == BITMAP_MAJOR_CLUSTERED && dlm_funs_ready()) {
- rv = cluster_get_dlmlock(&lockid);
- if (rv) {
- pr_err("Cannot get dlmlock in %s return %d\n",
- __func__, rv);
- cluster_release_dlmlock(lockid);
- return rv;
- }
- }
if (!get_dev_size(fd, NULL, &dsize))
return 1;
@@ -1788,8 +1748,6 @@ static int store_super1(struct supertype *st, int fd)
}
}
fsync(fd);
- if (bms->version == BITMAP_MAJOR_CLUSTERED && dlm_funs_ready())
- cluster_release_dlmlock(lockid);
return 0;
}
diff --git a/util.c b/util.c
index 543ec6cf46ef..4adbbff0295e 100644
--- a/util.c
+++ b/util.c
@@ -128,11 +128,17 @@ static void dlm_ast(void *arg)
static char *cluster_name = NULL;
/* Create the lockspace, take bitmapXXX locks on all the bitmaps. */
-int cluster_get_dlmlock(int *lockid)
+int cluster_get_dlmlock(void)
{
int ret = -1;
char str[64];
int flags = LKF_NOQUEUE;
+ int retry_count = 0;
+
+ if (!dlm_funs_ready()) {
+ pr_err("Something wrong with dlm library\n");
+ return -1;
+ }
ret = get_cluster_name(&cluster_name);
if (ret) {
@@ -141,38 +147,57 @@ int cluster_get_dlmlock(int *lockid)
}
dlm_lock_res = xmalloc(sizeof(struct dlm_lock_resource));
- dlm_lock_res->ls = dlm_hooks->create_lockspace(cluster_name, O_RDWR);
+ dlm_lock_res->ls = dlm_hooks->open_lockspace(cluster_name);
if (!dlm_lock_res->ls) {
- pr_err("%s failed to create lockspace\n", cluster_name);
- return -ENOMEM;
+ dlm_lock_res->ls = dlm_hooks->create_lockspace(cluster_name, O_RDWR);
+ if (!dlm_lock_res->ls) {
+ pr_err("%s failed to create lockspace\n", cluster_name);
+ return -ENOMEM;
+ }
+ } else {
+ pr_err("opened existing %s lockspace\n", cluster_name);
}
snprintf(str, 64, "bitmap%s", cluster_name);
+retry:
ret = dlm_hooks->ls_lock(dlm_lock_res->ls, LKM_PWMODE,
&dlm_lock_res->lksb, flags, str, strlen(str),
0, dlm_ast, dlm_lock_res, NULL, NULL);
if (ret) {
pr_err("error %d when get PW mode on lock %s\n", errno, str);
+ /* let's try several times if EAGAIN happened */
+ if (dlm_lock_res->lksb.sb_status == EAGAIN && retry_count < 10) {
+ sleep(10);
+ retry_count++;
+ goto retry;
+ }
dlm_hooks->release_lockspace(cluster_name, dlm_lock_res->ls, 1);
return ret;
}
/* Wait for it to complete */
poll_for_ast(dlm_lock_res->ls);
- *lockid = dlm_lock_res->lksb.sb_lkid;
- return dlm_lock_res->lksb.sb_status;
+ if (dlm_lock_res->lksb.sb_status) {
+ pr_err("failed to lock cluster\n");
+ return -1;
+ }
+ return 1;
}
-int cluster_release_dlmlock(int lockid)
+int cluster_release_dlmlock(void)
{
int ret = -1;
if (!cluster_name)
- return -1;
+ goto out;
+
+ if (!dlm_lock_res->lksb.sb_lkid)
+ goto out;
- ret = dlm_hooks->ls_unlock(dlm_lock_res->ls, lockid, 0,
- &dlm_lock_res->lksb, dlm_lock_res);
+ ret = dlm_hooks->ls_unlock_wait(dlm_lock_res->ls,
+ dlm_lock_res->lksb.sb_lkid, 0,
+ &dlm_lock_res->lksb);
if (ret) {
pr_err("error %d happened when unlock\n", errno);
/* XXX make sure the lock is unlocked eventually */
@@ -2324,18 +2349,22 @@ void set_dlm_hooks(void)
if (!dlm_hooks->dlm_handle)
return;
+ dlm_hooks->open_lockspace =
+ dlsym(dlm_hooks->dlm_handle, "dlm_open_lockspace");
dlm_hooks->create_lockspace =
dlsym(dlm_hooks->dlm_handle, "dlm_create_lockspace");
dlm_hooks->release_lockspace =
dlsym(dlm_hooks->dlm_handle, "dlm_release_lockspace");
dlm_hooks->ls_lock = dlsym(dlm_hooks->dlm_handle, "dlm_ls_lock");
- dlm_hooks->ls_unlock = dlsym(dlm_hooks->dlm_handle, "dlm_ls_unlock");
+ dlm_hooks->ls_unlock_wait =
+ dlsym(dlm_hooks->dlm_handle, "dlm_ls_unlock_wait");
dlm_hooks->ls_get_fd = dlsym(dlm_hooks->dlm_handle, "dlm_ls_get_fd");
dlm_hooks->dispatch = dlsym(dlm_hooks->dlm_handle, "dlm_dispatch");
- if (!dlm_hooks->create_lockspace || !dlm_hooks->ls_lock ||
- !dlm_hooks->ls_unlock || !dlm_hooks->release_lockspace ||
- !dlm_hooks->ls_get_fd || !dlm_hooks->dispatch)
+ if (!dlm_hooks->open_lockspace || !dlm_hooks->create_lockspace ||
+ !dlm_hooks->ls_lock || !dlm_hooks->ls_unlock_wait ||
+ !dlm_hooks->release_lockspace || !dlm_hooks->ls_get_fd ||
+ !dlm_hooks->dispatch)
dlclose(dlm_hooks->dlm_handle);
else
is_dlm_hooks_ready = 1;
--
2.13.6
* [PATCH V3 2/3] Assemble: provide protection when clustered raid do assemble
From: Guoqing Jiang @ 2018-01-22 9:12 UTC (permalink / raw)
To: jes.sorensen; +Cc: linux-raid, Guoqing Jiang
The previous patch provides protection for the other modes (CREATE,
MANAGE, GROW and INCREMENTAL); ASSEMBLE mode also needs protection
while a clustered raid is being assembled.
However, we can only know whether the array is clustered once the
metadata is ready, so the lock is taken after select_devices(). The
metadata can be re-read during auto-assembly, so the lock is
refreshed in that case.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
---
Assemble.c | 32 +++++++++++++++++++++++++++++++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/Assemble.c b/Assemble.c
index 3c10b6cd27c2..894ff361cf30 100644
--- a/Assemble.c
+++ b/Assemble.c
@@ -1326,6 +1326,9 @@ int Assemble(struct supertype *st, char *mddev,
char chosen_name[1024];
struct map_ent *map = NULL;
struct map_ent *mp;
+ int locked = 0;
+ struct mdp_superblock_1 *sb;
+ bitmap_super_t *bms;
/*
* If any subdevs are listed, then any that don't
@@ -1356,6 +1359,12 @@ try_again:
* set of devices failed. Those are now marked as ->used==2 and
* we ignore them and try again
*/
+ if (locked)
+ /*
+ * If we come back to try_again, we need to unlock first and
+ * lock again since the metadata is re-read.
+ */
+ cluster_release_dlmlock();
if (!st && ident->st)
st = ident->st;
if (c->verbose>0)
@@ -1373,6 +1382,14 @@ try_again:
if (!st || !st->sb || !content)
return 2;
+ sb = st->sb;
+ bms = (bitmap_super_t*)(((char*)sb) + 4096);
+ if (sb && bms->version == BITMAP_MAJOR_CLUSTERED) {
+ locked = cluster_get_dlmlock();
+ if (locked != 1)
+ return 1;
+ }
+
/* We have a full set of devices - we now need to find the
* array device.
* However there is a risk that we are racing with "mdadm -I"
@@ -1399,6 +1416,8 @@ try_again:
pr_err("Found some drive for an array that is already active: %s\n",
mp->path);
pr_err("giving up.\n");
+ if (locked == 1)
+ cluster_release_dlmlock();
return 1;
}
for (dv = pre_exist->devs; dv; dv = dv->next) {
@@ -1472,6 +1491,8 @@ try_again:
st->ss->free_super(st);
if (auto_assem)
goto try_again;
+ if (locked == 1)
+ cluster_release_dlmlock();
return 1;
}
mddev = chosen_name;
@@ -1491,6 +1512,8 @@ try_again:
st->ss->free_super(st);
if (auto_assem)
goto try_again;
+ if (locked == 1)
+ cluster_release_dlmlock();
return 1;
}
/* just incase it was started but has no content */
@@ -1503,6 +1526,8 @@ try_again:
err = assemble_container_content(st, mdfd, content, c,
chosen_name, NULL);
close(mdfd);
+ if (locked == 1)
+ cluster_release_dlmlock();
return err;
}
@@ -1512,8 +1537,11 @@ try_again:
devcnt = load_devices(devices, devmap, ident, &st, devlist,
c, content, mdfd, mddev,
&most_recent, &bestcnt, &best, inargv);
- if (devcnt < 0)
+ if (devcnt < 0) {
+ if (locked == 1)
+ cluster_release_dlmlock();
return 1;
+ }
if (devcnt == 0) {
pr_err("no devices found for %s\n",
@@ -1860,6 +1888,8 @@ try_again:
close(mdfd);
/* '2' means 'OK, but not started yet' */
+ if (locked == 1)
+ cluster_release_dlmlock();
return rv == 2 ? 0 : rv;
}
--
2.13.6
* [PATCH V3 3/3] Assemble: cleanup the failure path
From: Guoqing Jiang @ 2018-01-22 9:12 UTC (permalink / raw)
To: jes.sorensen; +Cc: linux-raid, Guoqing Jiang
Several failure paths share common code before returning, so simplify
them by moving the common code to the end of the function and jumping
to it via 'goto out' when a failure happens.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
---
Assemble.c | 62 ++++++++++++++++++++++----------------------------------------
1 file changed, 22 insertions(+), 40 deletions(-)
diff --git a/Assemble.c b/Assemble.c
index 894ff361cf30..96ea2fd53146 100644
--- a/Assemble.c
+++ b/Assemble.c
@@ -1295,13 +1295,13 @@ int Assemble(struct supertype *st, char *mddev,
* START_ARRAY
*
*/
- int rv;
- int mdfd;
+ int rv = -1;
+ int mdfd = -1;
int clean;
int auto_assem = (mddev == NULL && !ident->uuid_set &&
ident->super_minor == UnSet && ident->name[0] == 0 &&
(ident->container == NULL || ident->member == NULL));
- struct devs *devices;
+ struct devs *devices = NULL;
char *devmap;
int *best = NULL; /* indexed by raid_disk */
int bestcnt = 0;
@@ -1416,9 +1416,7 @@ try_again:
pr_err("Found some drive for an array that is already active: %s\n",
mp->path);
pr_err("giving up.\n");
- if (locked == 1)
- cluster_release_dlmlock();
- return 1;
+ goto out;
}
for (dv = pre_exist->devs; dv; dv = dv->next) {
/* We want to add this device to our list,
@@ -1491,9 +1489,7 @@ try_again:
st->ss->free_super(st);
if (auto_assem)
goto try_again;
- if (locked == 1)
- cluster_release_dlmlock();
- return 1;
+ goto out;
}
mddev = chosen_name;
if (pre_exist == NULL) {
@@ -1512,9 +1508,7 @@ try_again:
st->ss->free_super(st);
if (auto_assem)
goto try_again;
- if (locked == 1)
- cluster_release_dlmlock();
- return 1;
+ goto out;
}
/* just incase it was started but has no content */
ioctl(mdfd, STOP_ARRAY, NULL);
@@ -1538,9 +1532,8 @@ try_again:
c, content, mdfd, mddev,
&most_recent, &bestcnt, &best, inargv);
if (devcnt < 0) {
- if (locked == 1)
- cluster_release_dlmlock();
- return 1;
+ mdfd = -3;
+ goto out;
}
if (devcnt == 0) {
@@ -1548,10 +1541,8 @@ try_again:
mddev);
if (st)
st->ss->free_super(st);
- close(mdfd);
- free(devices);
free(devmap);
- return 1;
+ goto out;
}
if (c->update && strcmp(c->update, "byteorder")==0)
@@ -1665,32 +1656,24 @@ try_again:
: (O_RDONLY|O_EXCL)))< 0) {
pr_err("Cannot open %s: %s\n",
devices[j].devname, strerror(errno));
- close(mdfd);
- free(devices);
- return 1;
+ goto out;
}
if (st->ss->load_super(st,fd, NULL)) {
close(fd);
pr_err("RAID superblock has disappeared from %s\n",
devices[j].devname);
- close(mdfd);
- free(devices);
- return 1;
+ goto out;
}
close(fd);
}
if (st->sb == NULL) {
pr_err("No suitable drives found for %s\n", mddev);
- close(mdfd);
- free(devices);
- return 1;
+ goto out;
}
st->ss->getinfo_super(st, content, NULL);
if (sysfs_init(content, mdfd, NULL)) {
pr_err("Unable to initialize sysfs\n");
- close(mdfd);
- free(devices);
- return 1;
+ goto out;
}
/* after reload context, store journal_clean in context */
@@ -1756,17 +1739,13 @@ try_again:
if (fd < 0) {
pr_err("Could not open %s for write - cannot Assemble array.\n",
devices[chosen_drive].devname);
- close(mdfd);
- free(devices);
- return 1;
+ goto out;
}
if (st->ss->store_super(st, fd)) {
close(fd);
pr_err("Could not re-write superblock on %s\n",
devices[chosen_drive].devname);
- close(mdfd);
- free(devices);
- return 1;
+ goto out;
}
if (c->verbose >= 0)
pr_err("Marking array %s as 'clean'\n",
@@ -1824,9 +1803,7 @@ try_again:
pr_err("Failed to restore critical section for reshape, sorry.\n");
if (c->backup_file == NULL)
cont_err("Possibly you needed to specify the --backup-file\n");
- close(mdfd);
- free(devices);
- return err;
+ goto out;
}
}
@@ -1855,6 +1832,7 @@ try_again:
ioctl(mdfd, STOP_ARRAY, NULL);
free(devices);
map_unlock(&map);
+out:
if (rv == 0) {
wait_for(chosen_name, mdfd);
close(mdfd);
@@ -1884,12 +1862,16 @@ try_again:
usecs <<= 1;
}
}
- } else
+ } else if (mdfd >= 0)
close(mdfd);
/* '2' means 'OK, but not started yet' */
if (locked == 1)
cluster_release_dlmlock();
+ if (rv == -1) {
+ free(devices);
+ return 1;
+ }
return rv == 2 ? 0 : rv;
}
--
2.13.6