From: Alexander Aring <aahringo@redhat.com>
To: teigland@redhat.com
Cc: cluster-devel@redhat.com, gfs2@lists.linux.dev
Date: Fri, 8 Sep 2023 16:46:11 -0400
Message-Id: <20230908204611.1910601-10-aahringo@redhat.com>
In-Reply-To: <20230908204611.1910601-1-aahringo@redhat.com>
References: <20230908204611.1910601-1-aahringo@redhat.com>
Subject: [Cluster-devel] [RFC dlm/next 10/10] fs: dlm: do dlm message processing in softirq context

This patch changes the dlm message parsing context from a workqueue to a
softirq context. This will hopefully speed up dlm message processing by
removing a number of implicit scheduling points, such as cond_resched(),
whose behavior depends on the preemption model setting.
A softirq (except on PREEMPT_RT) can only be interrupted by other
softirqs or higher-priority contexts such as hardware interrupts. This
patch only moves the dlm message parsing to the right context; there are
further ideas to improve message parsing, such as lockless read access
to data structures or parallel per-node message processing. Later
patches will implement those improvements. For now this patch reduces
the number of interruptions during DLM message parsing.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lowcomms.c | 34 ++++++++++------------------------
 1 file changed, 10 insertions(+), 24 deletions(-)

diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 28dd74aebc84..93f7e8827201 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -183,7 +183,6 @@ static int dlm_local_count;
 
 /* Work queues */
 static struct workqueue_struct *io_workqueue;
-static struct workqueue_struct *process_workqueue;
 
 static struct hlist_head connection_hash[CONN_HASH_SIZE];
 static DEFINE_SPINLOCK(connections_lock);
@@ -199,9 +198,9 @@ static const struct dlm_proto_ops *dlm_proto_ops;
 static void process_recv_sockets(struct work_struct *work);
 static void process_send_sockets(struct work_struct *work);
-static void process_dlm_messages(struct work_struct *work);
+static void process_dlm_messages(struct tasklet_struct *tasklet);
 
-static DECLARE_WORK(process_work, process_dlm_messages);
+static DECLARE_TASKLET(process_tasklet, process_dlm_messages);
 
 static DEFINE_SPINLOCK(processqueue_lock);
 static bool process_dlm_messages_pending;
 static atomic_t processqueue_count;
@@ -863,7 +862,7 @@ struct dlm_processed_nodes {
 	struct list_head list;
 };
 
-static void process_dlm_messages(struct work_struct *work)
+static void process_dlm_messages(struct tasklet_struct *tasklet)
 {
 	struct processqueue_entry *pentry;
 
@@ -971,7 +970,7 @@ static int receive_from_sock(struct connection *con, int buflen)
 	list_add_tail(&pentry->list, &processqueue);
 	if (!process_dlm_messages_pending) {
 		process_dlm_messages_pending = true;
-		queue_work(process_workqueue, &process_work);
+		tasklet_schedule(&process_tasklet);
 	}
 	spin_unlock_bh(&processqueue_lock);
 
@@ -1511,7 +1510,8 @@ static void process_recv_sockets(struct work_struct *work)
 		/* CF_RECV_PENDING cleared */
 		break;
 	case DLM_IO_FLUSH:
-		flush_workqueue(process_workqueue);
+		tasklet_disable(&process_tasklet);
+		tasklet_enable(&process_tasklet);
 		fallthrough;
 	case DLM_IO_RESCHED:
 		cond_resched();
@@ -1685,11 +1685,6 @@ static void work_stop(void)
 		destroy_workqueue(io_workqueue);
 		io_workqueue = NULL;
 	}
-
-	if (process_workqueue) {
-		destroy_workqueue(process_workqueue);
-		process_workqueue = NULL;
-	}
 }
 
 static int work_start(void)
@@ -1701,18 +1696,6 @@ static int work_start(void)
 		return -ENOMEM;
 	}
 
-	/* ordered dlm message process queue,
-	 * should be converted to a tasklet
-	 */
-	process_workqueue = alloc_ordered_workqueue("dlm_process",
-						    WQ_HIGHPRI | WQ_MEM_RECLAIM);
-	if (!process_workqueue) {
-		log_print("can't start dlm_process");
-		destroy_workqueue(io_workqueue);
-		io_workqueue = NULL;
-		return -ENOMEM;
-	}
 
 	return 0;
 }
@@ -1734,7 +1717,10 @@ void dlm_lowcomms_shutdown(void)
 		hlist_for_each_entry_rcu(con, &connection_hash[i], list) {
 			shutdown_connection(con, true);
 			stop_connection_io(con);
-			flush_workqueue(process_workqueue);
+
+			tasklet_disable(&process_tasklet);
+			tasklet_enable(&process_tasklet);
+
 			close_connection(con, true);
 			clean_one_writequeue(con);
 
-- 
2.31.1