* [Cluster-devel] [PATCH dlm/next 0/7] fs: dlm: locking and memory fixes
@ 2020-08-27 19:02 Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 1/7] fs: dlm: synchronize dlm before shutdown Alexander Aring
` (6 more replies)
0 siblings, 7 replies; 8+ messages in thread
From: Alexander Aring @ 2020-08-27 19:02 UTC (permalink / raw)
To: cluster-devel.redhat.com
Hi,
this patch series contains fixes for some locking and memory issues which
I found while working on a bigger fix to make dlm secure against tcpkill.
- Alex
Alexander Aring (7):
fs: dlm: synchronize dlm before shutdown
fs: dlm: make connection hash lockless
fs: dlm: fix dlm_local_addr memory leak
fs: dlm: fix configfs memory leak
fs: dlm: move free writequeue into con free
fs: dlm: handle possible othercon writequeues
fs: dlm: use free_con to free connection
fs/dlm/Kconfig | 1 +
fs/dlm/config.c | 3 ++
fs/dlm/lowcomms.c | 122 +++++++++++++++++++++-------------------------
3 files changed, 60 insertions(+), 66 deletions(-)
--
2.26.2
* [Cluster-devel] [PATCH dlm/next 1/7] fs: dlm: synchronize dlm before shutdown
2020-08-27 19:02 [Cluster-devel] [PATCH dlm/next 0/7] fs: dlm: locking and memory fixes Alexander Aring
@ 2020-08-27 19:02 ` Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 2/7] fs: dlm: make connection hash lockless Alexander Aring
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Alexander Aring @ 2020-08-27 19:02 UTC (permalink / raw)
To: cluster-devel.redhat.com
This patch moves the dlm workqueue synchronization before the shutdown
handling. The patch just flushes all pending work before starting to
shut down the connections. At least the send_workqueue should be flushed
to make sure there is no new connection handling going on, as the
dlm_allow_conn switch has been turned to false before.
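For illustration, the resulting ordering in dlm_lowcomms_stop() looks
roughly like this (a sketch of the post-patch state only, see the diff
below for the actual change):

void dlm_lowcomms_stop(void)
{
	/* stop accepting new connections first ... */
	mutex_lock(&connections_lock);
	dlm_allow_conn = 0;
	mutex_unlock(&connections_lock);

	/* ... then drain work that may still be creating or using sockets */
	if (recv_workqueue)
		flush_workqueue(recv_workqueue);
	if (send_workqueue)
		flush_workqueue(send_workqueue);

	/* only now shut down and free the existing connections */
	foreach_conn(shutdown_conn);
	work_flush();
	clean_writequeues();
	foreach_conn(free_conn);
	work_stop();
	kmem_cache_destroy(con_cache);
}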
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/lowcomms.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 5050fe05769b..ed098870ba0d 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1624,10 +1624,6 @@ static void work_flush(void)
struct hlist_node *n;
struct connection *con;
- if (recv_workqueue)
- flush_workqueue(recv_workqueue);
- if (send_workqueue)
- flush_workqueue(send_workqueue);
do {
ok = 1;
foreach_conn(stop_conn);
@@ -1659,6 +1655,12 @@ void dlm_lowcomms_stop(void)
mutex_lock(&connections_lock);
dlm_allow_conn = 0;
mutex_unlock(&connections_lock);
+
+ if (recv_workqueue)
+ flush_workqueue(recv_workqueue);
+ if (send_workqueue)
+ flush_workqueue(send_workqueue);
+
foreach_conn(shutdown_conn);
work_flush();
clean_writequeues();
--
2.26.2
* [Cluster-devel] [PATCH dlm/next 2/7] fs: dlm: make connection hash lockless
2020-08-27 19:02 [Cluster-devel] [PATCH dlm/next 0/7] fs: dlm: locking and memory fixes Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 1/7] fs: dlm: synchronize dlm before shutdown Alexander Aring
@ 2020-08-27 19:02 ` Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 3/7] fs: dlm: fix dlm_local_addr memory leak Alexander Aring
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Alexander Aring @ 2020-08-27 19:02 UTC (permalink / raw)
To: cluster-devel.redhat.com
There are some problems with the connections_lock. During my
experiments I sometimes saw circular dependencies with sock_lock.
The reason might be code paths which run nodeid2con() before
or after sock_lock is acquired.
Another issue is the missing locking in the foreach_conn() iteration.
Maybe this works fine because foreach_conn() runs in a context where
connection_hash cannot be manipulated by others anymore.
This patch changes the connection_hash to be protected by sleepable
RCU (SRCU). The hotpath function __find_con() is implemented lockless,
as it is only a reader of connection_hash, and this hopefully fixes the
circular locking dependencies. The foreach_conn() iteration still calls
sleepable functionality; that's why sleepable RCU is used here.
This patch also removes the kmem_cache functionality, as the connection
structures now need to be freed via kfree_rcu(); allocation time is not
an issue here. The dlm_allow_conn flag is no longer protected by a lock,
as it should be enough to clear it and flush the workqueues afterwards.
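As a rough sketch, the resulting update/teardown pairing for the hash
looks like this (summarized from the diff below, not additional code;
con is the connection being inserted/removed, r its hash bucket):

/* writer side: insertion stays serialized by a spinlock */
spin_lock(&connections_lock);
hlist_add_head_rcu(&con->list, &connection_hash[r]);
spin_unlock(&connections_lock);

/* teardown: unlink under the lock, defer the actual free until after
 * the SRCU grace period so lockless readers never see freed memory
 */
spin_lock(&connections_lock);
hlist_del_rcu(&con->list);
spin_unlock(&connections_lock);
kfree_rcu(con, rcu);

Readers like __find_con() and foreach_conn() then only take
srcu_read_lock()/srcu_read_unlock() around the
hlist_for_each_entry_rcu() walk.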
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/Kconfig | 1 +
fs/dlm/lowcomms.c | 86 ++++++++++++++++++++---------------------------
2 files changed, 37 insertions(+), 50 deletions(-)
diff --git a/fs/dlm/Kconfig b/fs/dlm/Kconfig
index f82a4952769d..ee92634196a8 100644
--- a/fs/dlm/Kconfig
+++ b/fs/dlm/Kconfig
@@ -4,6 +4,7 @@ menuconfig DLM
depends on INET
depends on SYSFS && CONFIGFS_FS && (IPV6 || IPV6=n)
select IP_SCTP
+ select SRCU
help
A general purpose distributed lock manager for kernel or userspace
applications.
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index ed098870ba0d..9db7126de793 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -126,6 +126,7 @@ struct connection {
struct work_struct rwork; /* Receive workqueue */
struct work_struct swork; /* Send workqueue */
wait_queue_head_t shutdown_wait; /* wait for graceful shutdown */
+ struct rcu_head rcu;
};
#define sock2con(x) ((struct connection *)(x)->sk_user_data)
@@ -167,8 +168,8 @@ static struct workqueue_struct *recv_workqueue;
static struct workqueue_struct *send_workqueue;
static struct hlist_head connection_hash[CONN_HASH_SIZE];
-static DEFINE_MUTEX(connections_lock);
-static struct kmem_cache *con_cache;
+static DEFINE_SPINLOCK(connections_lock);
+DEFINE_STATIC_SRCU(connections_srcu);
static void process_recv_sockets(struct work_struct *work);
static void process_send_sockets(struct work_struct *work);
@@ -184,15 +185,20 @@ static inline int nodeid_hash(int nodeid)
static struct connection *__find_con(int nodeid)
{
- int r;
+ int r, idx;
struct connection *con;
r = nodeid_hash(nodeid);
- hlist_for_each_entry(con, &connection_hash[r], list) {
- if (con->nodeid == nodeid)
+ idx = srcu_read_lock(&connections_srcu);
+ hlist_for_each_entry_rcu(con, &connection_hash[r], list) {
+ if (con->nodeid == nodeid) {
+ srcu_read_unlock(&connections_srcu, idx);
return con;
+ }
}
+ srcu_read_unlock(&connections_srcu, idx);
+
return NULL;
}
@@ -200,7 +206,7 @@ static struct connection *__find_con(int nodeid)
* If 'allocation' is zero then we don't attempt to create a new
* connection structure for this node.
*/
-static struct connection *__nodeid2con(int nodeid, gfp_t alloc)
+static struct connection *nodeid2con(int nodeid, gfp_t alloc)
{
struct connection *con = NULL;
int r;
@@ -209,13 +215,10 @@ static struct connection *__nodeid2con(int nodeid, gfp_t alloc)
if (con || !alloc)
return con;
- con = kmem_cache_zalloc(con_cache, alloc);
+ con = kzalloc(sizeof(*con), alloc);
if (!con)
return NULL;
- r = nodeid_hash(nodeid);
- hlist_add_head(&con->list, &connection_hash[r]);
-
con->nodeid = nodeid;
mutex_init(&con->sock_mutex);
INIT_LIST_HEAD(&con->writequeue);
@@ -233,31 +236,27 @@ static struct connection *__nodeid2con(int nodeid, gfp_t alloc)
con->rx_action = zerocon->rx_action;
}
+ r = nodeid_hash(nodeid);
+
+ spin_lock(&connections_lock);
+ hlist_add_head_rcu(&con->list, &connection_hash[r]);
+ spin_unlock(&connections_lock);
+
return con;
}
/* Loop round all connections */
static void foreach_conn(void (*conn_func)(struct connection *c))
{
- int i;
- struct hlist_node *n;
+ int i, idx;
struct connection *con;
+ idx = srcu_read_lock(&connections_srcu);
for (i = 0; i < CONN_HASH_SIZE; i++) {
- hlist_for_each_entry_safe(con, n, &connection_hash[i], list)
+ hlist_for_each_entry_rcu(con, &connection_hash[i], list)
conn_func(con);
}
-}
-
-static struct connection *nodeid2con(int nodeid, gfp_t allocation)
-{
- struct connection *con;
-
- mutex_lock(&connections_lock);
- con = __nodeid2con(nodeid, allocation);
- mutex_unlock(&connections_lock);
-
- return con;
+ srcu_read_unlock(&connections_srcu, idx);
}
static struct dlm_node_addr *find_node_addr(int nodeid)
@@ -792,12 +791,9 @@ static int accept_from_sock(struct connection *con)
struct connection *newcon;
struct connection *addcon;
- mutex_lock(&connections_lock);
if (!dlm_allow_conn) {
- mutex_unlock(&connections_lock);
return -1;
}
- mutex_unlock(&connections_lock);
mutex_lock_nested(&con->sock_mutex, 0);
@@ -847,7 +843,7 @@ static int accept_from_sock(struct connection *con)
struct connection *othercon = newcon->othercon;
if (!othercon) {
- othercon = kmem_cache_zalloc(con_cache, GFP_NOFS);
+ othercon = kzalloc(sizeof(*othercon), GFP_NOFS);
if (!othercon) {
log_print("failed to allocate incoming socket");
mutex_unlock(&newcon->sock_mutex);
@@ -1612,16 +1608,17 @@ static void free_conn(struct connection *con)
{
close_connection(con, true, true, true);
if (con->othercon)
- kmem_cache_free(con_cache, con->othercon);
- hlist_del(&con->list);
- kmem_cache_free(con_cache, con);
+ kfree_rcu(con->othercon, rcu);
+ spin_lock(&connections_lock);
+ hlist_del_rcu(&con->list);
+ spin_unlock(&connections_lock);
+ kfree_rcu(con, rcu);
}
static void work_flush(void)
{
- int ok;
+ int ok, idx;
int i;
- struct hlist_node *n;
struct connection *con;
do {
@@ -1631,9 +1628,10 @@ static void work_flush(void)
flush_workqueue(recv_workqueue);
if (send_workqueue)
flush_workqueue(send_workqueue);
+ idx = srcu_read_lock(&connections_srcu);
for (i = 0; i < CONN_HASH_SIZE && ok; i++) {
- hlist_for_each_entry_safe(con, n,
- &connection_hash[i], list) {
+ hlist_for_each_entry_rcu(con, &connection_hash[i],
+ list) {
ok &= test_bit(CF_READ_PENDING, &con->flags);
ok &= test_bit(CF_WRITE_PENDING, &con->flags);
if (con->othercon) {
@@ -1644,6 +1642,7 @@ static void work_flush(void)
}
}
}
+ srcu_read_unlock(&connections_srcu, idx);
} while (!ok);
}
@@ -1652,9 +1651,7 @@ void dlm_lowcomms_stop(void)
/* Set all the flags to prevent any
socket activity.
*/
- mutex_lock(&connections_lock);
dlm_allow_conn = 0;
- mutex_unlock(&connections_lock);
if (recv_workqueue)
flush_workqueue(recv_workqueue);
@@ -1666,8 +1663,6 @@ void dlm_lowcomms_stop(void)
clean_writequeues();
foreach_conn(free_conn);
work_stop();
-
- kmem_cache_destroy(con_cache);
}
int dlm_lowcomms_start(void)
@@ -1686,16 +1681,9 @@ int dlm_lowcomms_start(void)
goto fail;
}
- error = -ENOMEM;
- con_cache = kmem_cache_create("dlm_conn", sizeof(struct connection),
- __alignof__(struct connection), 0,
- NULL);
- if (!con_cache)
- goto fail;
-
error = work_start();
if (error)
- goto fail_destroy;
+ goto fail;
dlm_allow_conn = 1;
@@ -1714,10 +1702,8 @@ int dlm_lowcomms_start(void)
con = nodeid2con(0,0);
if (con) {
close_connection(con, false, true, true);
- kmem_cache_free(con_cache, con);
+ kfree_rcu(con, rcu);
}
-fail_destroy:
- kmem_cache_destroy(con_cache);
fail:
return error;
}
--
2.26.2
* [Cluster-devel] [PATCH dlm/next 3/7] fs: dlm: fix dlm_local_addr memory leak
2020-08-27 19:02 [Cluster-devel] [PATCH dlm/next 0/7] fs: dlm: locking and memory fixes Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 1/7] fs: dlm: synchronize dlm before shutdown Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 2/7] fs: dlm: make connection hash lockless Alexander Aring
@ 2020-08-27 19:02 ` Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 4/7] fs: dlm: fix configfs " Alexander Aring
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Alexander Aring @ 2020-08-27 19:02 UTC (permalink / raw)
To: cluster-devel.redhat.com
This patch fixes the following memory leak, detected by kmemleak after
unmounting a gfs2 filesystem which removed the last lockspace:
unreferenced object 0xffff9264f4f48f00 (size 128):
comm "mount", pid 425, jiffies 4294690253 (age 48.159s)
hex dump (first 32 bytes):
02 00 52 48 c0 a8 7a fb 00 00 00 00 00 00 00 00 ..RH..z.........
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<0000000067a34940>] kmemdup+0x18/0x40
[<00000000c935f9ab>] init_local+0x4c/0xa0
[<00000000bbd286ef>] dlm_lowcomms_start+0x28/0x160
[<00000000a86625cb>] dlm_new_lockspace+0x7e/0xb80
[<000000008df6cd63>] gdlm_mount+0x1cc/0x5de
[<00000000b67df8c7>] gfs2_lm_mount.constprop.0+0x1a3/0x1d3
[<000000006642ac5e>] gfs2_fill_super+0x717/0xba9
[<00000000d3ab7118>] get_tree_bdev+0x17f/0x280
[<000000001975926e>] gfs2_get_tree+0x21/0x90
[<00000000561ce1c4>] vfs_get_tree+0x28/0xc0
[<000000007fecaf63>] path_mount+0x434/0xc00
[<00000000636b9594>] __x64_sys_mount+0xe3/0x120
[<00000000cc478a33>] do_syscall_64+0x33/0x40
[<00000000ce9ccf01>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
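For context, the leaked buffers come from init_local(), which duplicates
every local address roughly like this (paraphrased from the existing
code, not part of this diff; details such as the loop bound may differ
slightly):

static void init_local(void)
{
	struct sockaddr_storage sas, *addr;
	int i;

	for (i = 0; i < DLM_MAX_ADDR_COUNT; i++) {
		if (dlm_our_addr(&sas, i))
			break;

		/* this kmemdup() had no matching kfree() before this patch */
		addr = kmemdup(&sas, sizeof(*addr), GFP_NOFS);
		if (!addr)
			break;
		dlm_local_addr[dlm_local_count++] = addr;
	}
}

The new deinit_local() below simply frees the dlm_local_addr entries
when lowcomms stops.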
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/lowcomms.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 9db7126de793..d0ece252a0d9 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1234,6 +1234,14 @@ static void init_local(void)
}
}
+static void deinit_local(void)
+{
+ int i;
+
+ for (i = 0; i < dlm_local_count; i++)
+ kfree(dlm_local_addr[i]);
+}
+
/* Initialise SCTP socket and bind to all interfaces */
static int sctp_listen_for_all(void)
{
@@ -1663,6 +1671,7 @@ void dlm_lowcomms_stop(void)
clean_writequeues();
foreach_conn(free_conn);
work_stop();
+ deinit_local();
}
int dlm_lowcomms_start(void)
--
2.26.2
* [Cluster-devel] [PATCH dlm/next 4/7] fs: dlm: fix configfs memory leak
2020-08-27 19:02 [Cluster-devel] [PATCH dlm/next 0/7] fs: dlm: locking and memory fixes Alexander Aring
` (2 preceding siblings ...)
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 3/7] fs: dlm: fix dlm_local_addr memory leak Alexander Aring
@ 2020-08-27 19:02 ` Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 5/7] fs: dlm: move free writequeue into con free Alexander Aring
` (2 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Alexander Aring @ 2020-08-27 19:02 UTC (permalink / raw)
To: cluster-devel.redhat.com
This patch fixes the following memory leak, detected by kmemleak after
unmounting a gfs2 filesystem which removed the last lockspace:
unreferenced object 0xffff9264f482f600 (size 192):
comm "dlm_controld", pid 325, jiffies 4294690276 (age 48.136s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 6e 6f 64 65 73 00 00 00 ........nodes...
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<00000000060481d7>] make_space+0x41/0x130
[<000000008d905d46>] configfs_mkdir+0x1a2/0x5f0
[<00000000729502cf>] vfs_mkdir+0x155/0x210
[<000000000369bcf1>] do_mkdirat+0x6d/0x110
[<00000000cc478a33>] do_syscall_64+0x33/0x40
[<00000000ce9ccf01>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
The patch just remembers the "nodes" entry pointer in struct dlm_space,
as it is created as a subdirectory when the parent "spaces" entry is
created. In drop_space() we would otherwise lose the pointer reference
to nds because of configfs_remove_default_groups(). However, as this
subdirectory is always available while "spaces" exists, it will simply
be freed when "spaces" is freed.
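For illustration, the ownership in make_space() roughly looks like this
(paraphrased from the existing code; only the sp->nds assignment is new
in this patch):

static struct config_group *make_space(struct config_group *g,
				       const char *name)
{
	struct dlm_space *sp;
	struct dlm_nodes *nds;

	sp = kzalloc(sizeof(*sp), GFP_NOFS);
	nds = kzalloc(sizeof(*nds), GFP_NOFS);
	if (!sp || !nds)
		goto fail;

	config_group_init_type_name(&sp->group, name, &space_type);
	config_group_init_type_name(&nds->ns_group, "nodes", &nodes_type);
	configfs_add_default_group(&nds->ns_group, &sp->group);

	/* ... members list init elided ... */
	sp->nds = nds;		/* remember it so release_space() can free it */
	return &sp->group;

fail:
	kfree(sp);
	kfree(nds);
	return ERR_PTR(-ENOMEM);
}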
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/config.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/dlm/config.c b/fs/dlm/config.c
index 47f0b98b707f..f33a7e4ae917 100644
--- a/fs/dlm/config.c
+++ b/fs/dlm/config.c
@@ -221,6 +221,7 @@ struct dlm_space {
struct list_head members;
struct mutex members_lock;
int members_count;
+ struct dlm_nodes *nds;
};
struct dlm_comms {
@@ -430,6 +431,7 @@ static struct config_group *make_space(struct config_group *g, const char *name)
INIT_LIST_HEAD(&sp->members);
mutex_init(&sp->members_lock);
sp->members_count = 0;
+ sp->nds = nds;
return &sp->group;
fail:
@@ -451,6 +453,7 @@ static void drop_space(struct config_group *g, struct config_item *i)
static void release_space(struct config_item *i)
{
struct dlm_space *sp = config_item_to_space(i);
+ kfree(sp->nds);
kfree(sp);
}
--
2.26.2
* [Cluster-devel] [PATCH dlm/next 5/7] fs: dlm: move free writequeue into con free
2020-08-27 19:02 [Cluster-devel] [PATCH dlm/next 0/7] fs: dlm: locking and memory fixes Alexander Aring
` (3 preceding siblings ...)
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 4/7] fs: dlm: fix configfs " Alexander Aring
@ 2020-08-27 19:02 ` Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 6/7] fs: dlm: handle possible othercon writequeues Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 7/7] fs: dlm: use free_con to free connection Alexander Aring
6 siblings, 0 replies; 8+ messages in thread
From: Alexander Aring @ 2020-08-27 19:02 UTC (permalink / raw)
To: cluster-devel.redhat.com
This patch moves the freeing of the struct connection member writequeue
into the code path where the struct connection itself is freed, instead
of doing two separate iterations over all connections.
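For reference, clean_one_writequeue() drops all queued entries of a
single connection, roughly like this (paraphrased from the existing
code, not part of this diff):

static void clean_one_writequeue(struct connection *con)
{
	struct writequeue_entry *e, *safe;

	spin_lock(&con->writequeue_lock);
	list_for_each_entry_safe(e, safe, &con->writequeue, list) {
		list_del(&e->list);
		free_entry(e);
	}
	spin_unlock(&con->writequeue_lock);
}

Calling it directly from free_conn() keeps all per-connection teardown
in one place and avoids a separate foreach_conn() pass over the hash.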
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/lowcomms.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index d0ece252a0d9..04afc7178afb 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1550,13 +1550,6 @@ static void process_send_sockets(struct work_struct *work)
send_to_sock(con);
}
-
-/* Discard all entries on the write queues */
-static void clean_writequeues(void)
-{
- foreach_conn(clean_one_writequeue);
-}
-
static void work_stop(void)
{
if (recv_workqueue)
@@ -1620,6 +1613,7 @@ static void free_conn(struct connection *con)
spin_lock(&connections_lock);
hlist_del_rcu(&con->list);
spin_unlock(&connections_lock);
+ clean_one_writequeue(con);
kfree_rcu(con, rcu);
}
@@ -1668,7 +1662,6 @@ void dlm_lowcomms_stop(void)
foreach_conn(shutdown_conn);
work_flush();
- clean_writequeues();
foreach_conn(free_conn);
work_stop();
deinit_local();
--
2.26.2
* [Cluster-devel] [PATCH dlm/next 6/7] fs: dlm: handle possible othercon writequeues
2020-08-27 19:02 [Cluster-devel] [PATCH dlm/next 0/7] fs: dlm: locking and memory fixes Alexander Aring
` (4 preceding siblings ...)
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 5/7] fs: dlm: move free writequeue into con free Alexander Aring
@ 2020-08-27 19:02 ` Alexander Aring
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 7/7] fs: dlm: use free_con to free connection Alexander Aring
6 siblings, 0 replies; 8+ messages in thread
From: Alexander Aring @ 2020-08-27 19:02 UTC (permalink / raw)
To: cluster-devel.redhat.com
This patch adds freeing of possibly remaining writequeue entries in the
othercon member of struct connection.
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/lowcomms.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 04afc7178afb..794216eb728c 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1608,11 +1608,13 @@ static void shutdown_conn(struct connection *con)
static void free_conn(struct connection *con)
{
close_connection(con, true, true, true);
- if (con->othercon)
- kfree_rcu(con->othercon, rcu);
spin_lock(&connections_lock);
hlist_del_rcu(&con->list);
spin_unlock(&connections_lock);
+ if (con->othercon) {
+ clean_one_writequeue(con->othercon);
+ kfree_rcu(con->othercon, rcu);
+ }
clean_one_writequeue(con);
kfree_rcu(con, rcu);
}
--
2.26.2
* [Cluster-devel] [PATCH dlm/next 7/7] fs: dlm: use free_con to free connection
2020-08-27 19:02 [Cluster-devel] [PATCH dlm/next 0/7] fs: dlm: locking and memory fixes Alexander Aring
` (5 preceding siblings ...)
2020-08-27 19:02 ` [Cluster-devel] [PATCH dlm/next 6/7] fs: dlm: handle possible othercon writequeues Alexander Aring
@ 2020-08-27 19:02 ` Alexander Aring
6 siblings, 0 replies; 8+ messages in thread
From: Alexander Aring @ 2020-08-27 19:02 UTC (permalink / raw)
To: cluster-devel.redhat.com
This patch uses the free_conn() functionality to free the listen
connection if listening fails. It also fixes an issue where a freed
resource was still part of the connection_hash, as hlist_del() was not
called in this case. The only difference is that free_conn() also
handles othercon, but othercon is never set for the listen connection.
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/lowcomms.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 794216eb728c..1bf1808bfa6b 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1704,10 +1704,8 @@ int dlm_lowcomms_start(void)
fail_unlisten:
dlm_allow_conn = 0;
con = nodeid2con(0,0);
- if (con) {
- close_connection(con, false, true, true);
- kfree_rcu(con, rcu);
- }
+ if (con)
+ free_conn(con);
fail:
return error;
}
--
2.26.2