From: Alexander Aring <aahringo@redhat.com>
To: teigland@redhat.com
Cc: cluster-devel@redhat.com, gfs2@lists.linux.dev, aahringo@redhat.com
Subject: [RFC dlm/next 10/10] fs: dlm: do dlm message processing in softirq context
Date: Fri, 8 Sep 2023 16:46:11 -0400
Message-ID: <20230908204611.1910601-10-aahringo@redhat.com>
In-Reply-To: <20230908204611.1910601-1-aahringo@redhat.com>
This patch changes the dlm message processing context from a workqueue to
a softirq context. This will hopefully speed up dlm message processing by
removing a number of implicit scheduling points such as cond_resched(),
whose behavior depends on the preemption model. A softirq (except on
PREEMPT_RT) can only be interrupted by other softirqs or higher-priority
contexts such as hardware interrupts.
This patch only moves the dlm message processing to the right context;
there are further ideas to improve message processing, such as using
lockless read access to data structures or enabling parallel per-node
message processing. Later patches will address those. For now this patch
reduces the number of interruptions during DLM message processing.
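The conversion pattern used below (queue_work() becomes tasklet_schedule(),
flush_workqueue() becomes a tasklet_disable()/tasklet_enable() pair) can be
sketched in isolation. This is an illustrative kernel-style fragment, not
part of the patch; my_process(), my_queue() and my_flush() are hypothetical
names:

```c
#include <linux/interrupt.h>
#include <linux/spinlock.h>

static void my_process(struct tasklet_struct *t);

/* statically initialized tasklet, analogous to DECLARE_WORK() */
static DECLARE_TASKLET(my_tasklet, my_process);
static DEFINE_SPINLOCK(my_lock);
static bool my_pending;

static void my_process(struct tasklet_struct *t)
{
	/* runs in softirq context: must not sleep, but can safely
	 * touch data protected by spin_lock_bh() elsewhere
	 */
}

static void my_queue(void)
{
	spin_lock_bh(&my_lock);
	if (!my_pending) {
		my_pending = true;
		/* replaces queue_work(); the handler runs soon
		 * in softirq context on this CPU
		 */
		tasklet_schedule(&my_tasklet);
	}
	spin_unlock_bh(&my_lock);
}

static void my_flush(void)
{
	/* tasklet_disable() waits until any running instance has
	 * completed, so disable+enable acts as a flush, replacing
	 * flush_workqueue()
	 */
	tasklet_disable(&my_tasklet);
	tasklet_enable(&my_tasklet);
}
```

Note that unlike flush_workqueue(), the disable/enable pair does not run a
tasklet that is merely scheduled but not yet executing; it only waits out a
currently running instance.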
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/lowcomms.c | 34 ++++++++++------------------------
1 file changed, 10 insertions(+), 24 deletions(-)
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 28dd74aebc84..93f7e8827201 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -183,7 +183,6 @@ static int dlm_local_count;
/* Work queues */
static struct workqueue_struct *io_workqueue;
-static struct workqueue_struct *process_workqueue;
static struct hlist_head connection_hash[CONN_HASH_SIZE];
static DEFINE_SPINLOCK(connections_lock);
@@ -199,9 +198,9 @@ static const struct dlm_proto_ops *dlm_proto_ops;
static void process_recv_sockets(struct work_struct *work);
static void process_send_sockets(struct work_struct *work);
-static void process_dlm_messages(struct work_struct *work);
+static void process_dlm_messages(struct tasklet_struct *tasklet);
-static DECLARE_WORK(process_work, process_dlm_messages);
+static DECLARE_TASKLET(process_tasklet, process_dlm_messages);
static DEFINE_SPINLOCK(processqueue_lock);
static bool process_dlm_messages_pending;
static atomic_t processqueue_count;
@@ -863,7 +862,7 @@ struct dlm_processed_nodes {
struct list_head list;
};
-static void process_dlm_messages(struct work_struct *work)
+static void process_dlm_messages(struct tasklet_struct *tasklet)
{
struct processqueue_entry *pentry;
@@ -971,7 +970,7 @@ static int receive_from_sock(struct connection *con, int buflen)
list_add_tail(&pentry->list, &processqueue);
if (!process_dlm_messages_pending) {
process_dlm_messages_pending = true;
- queue_work(process_workqueue, &process_work);
+ tasklet_schedule(&process_tasklet);
}
spin_unlock_bh(&processqueue_lock);
@@ -1511,7 +1510,8 @@ static void process_recv_sockets(struct work_struct *work)
/* CF_RECV_PENDING cleared */
break;
case DLM_IO_FLUSH:
- flush_workqueue(process_workqueue);
+ tasklet_disable(&process_tasklet);
+ tasklet_enable(&process_tasklet);
fallthrough;
case DLM_IO_RESCHED:
cond_resched();
@@ -1685,11 +1685,6 @@ static void work_stop(void)
destroy_workqueue(io_workqueue);
io_workqueue = NULL;
}
-
- if (process_workqueue) {
- destroy_workqueue(process_workqueue);
- process_workqueue = NULL;
- }
}
static int work_start(void)
@@ -1701,18 +1696,6 @@ static int work_start(void)
return -ENOMEM;
}
- /* ordered dlm message process queue,
- * should be converted to a tasklet
- */
- process_workqueue = alloc_ordered_workqueue("dlm_process",
- WQ_HIGHPRI | WQ_MEM_RECLAIM);
- if (!process_workqueue) {
- log_print("can't start dlm_process");
- destroy_workqueue(io_workqueue);
- io_workqueue = NULL;
- return -ENOMEM;
- }
-
return 0;
}
@@ -1734,7 +1717,10 @@ void dlm_lowcomms_shutdown(void)
hlist_for_each_entry_rcu(con, &connection_hash[i], list) {
shutdown_connection(con, true);
stop_connection_io(con);
- flush_workqueue(process_workqueue);
+
+ tasklet_disable(&process_tasklet);
+ tasklet_enable(&process_tasklet);
+
close_connection(con, true);
clean_one_writequeue(con);
--
2.31.1
Thread overview: 10+ messages
2023-09-08 20:46 [RFC dlm/next 01/10] fs: dlm: remove allocation parameter in msg allocation Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 02/10] fs: dlm: switch to GFP_ATOMIC in dlm allocations Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 03/10] fs: dlm: remove explicit scheduling points Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 04/10] fs: dlm: convert ls_waiters_mutex to spinlock Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 05/10] fs: dlm: convert res_lock " Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 06/10] fs: dlm: make requestqueue handling non sleepable Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 07/10] fs: dlm: ls_root_lock semaphore to rwlock Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 08/10] fs: dlm: ls_recv_active " Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 09/10] fs: dlm: convert message parsing locks to disable bh Alexander Aring
2023-09-08 20:46 ` [RFC dlm/next 10/10] fs: dlm: do dlm message processing in softirq context Alexander Aring [this message]