From: Alexander Aring <aahringo@redhat.com>
To: teigland@redhat.com
Cc: cluster-devel@redhat.com, gfs2@lists.linux.dev, stable@vger.kernel.org
Subject: [Cluster-devel] [PATCH dlm/next 6/6] dlm: slow down filling up processing queue
Date: Wed, 6 Sep 2023 10:31:53 -0400
Message-Id: <20230906143153.1353077-6-aahringo@redhat.com>
In-Reply-To: <20230906143153.1353077-1-aahringo@redhat.com>
References: <20230906143153.1353077-1-aahringo@redhat.com>

If there is a burst of messages, the receive worker fills up the
processing queue faster than the queued work can process dlm messages.
This patch slows down the receive worker so that incoming data stays in
the socket receive buffer, which in turn signals the sender to back
off. This is done with a threshold on the number of queued buffers:
once it is exceeded, we stop fetching new buffers from the socket until
all queued messages have been processed, via flush_workqueue(). In
practice this only triggers during a burst, e.g. with my dlm_bursttest
[0] testcase, which creates one million locks on module init and
unlocks them on module exit. Without this change, queuing up more and
more messages on the processqueue will eventually run the machine out
of memory.

The testcase to reproduce this issue can be found at:

[0] https://gitlab.com/netcoder/linux-public/-/blob/dlm_test_burst/fs/dlm/dlm_bursttest.c

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lowcomms.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index f7bc22e74db2..67f8dd8a05ef 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -63,6 +63,7 @@
 #include "config.h"
 
 #define DLM_SHUTDOWN_WAIT_TIMEOUT msecs_to_jiffies(5000)
+#define DLM_MAX_PROCESS_BUFFERS 24
 #define NEEDED_RMEM (4*1024*1024)
 
 struct connection {
@@ -194,6 +195,7 @@ static const struct dlm_proto_ops *dlm_proto_ops;
 #define DLM_IO_END 1
 #define DLM_IO_EOF 2
 #define DLM_IO_RESCHED 3
+#define DLM_IO_FLUSH 4
 
 static void process_recv_sockets(struct work_struct *work);
 static void process_send_sockets(struct work_struct *work);
@@ -202,6 +204,7 @@ static void process_dlm_messages(struct work_struct *work);
 static DECLARE_WORK(process_work, process_dlm_messages);
 static DEFINE_SPINLOCK(processqueue_lock);
 static bool process_dlm_messages_pending;
+static atomic_t processqueue_count;
 static LIST_HEAD(processqueue);
 
 bool dlm_lowcomms_is_running(void)
@@ -874,6 +877,7 @@ static void process_dlm_messages(struct work_struct *work)
 	}
 
 	list_del(&pentry->list);
+	atomic_dec(&processqueue_count);
 	spin_unlock(&processqueue_lock);
 
 	for (;;) {
@@ -891,6 +895,7 @@ static void process_dlm_messages(struct work_struct *work)
 		}
 
 		list_del(&pentry->list);
+		atomic_dec(&processqueue_count);
 		spin_unlock(&processqueue_lock);
 	}
 }
@@ -962,6 +967,7 @@ static int receive_from_sock(struct connection *con, int buflen)
 			con->rx_leftover);
 
 	spin_lock(&processqueue_lock);
+	ret = atomic_inc_return(&processqueue_count);
 	list_add_tail(&pentry->list, &processqueue);
 	if (!process_dlm_messages_pending) {
 		process_dlm_messages_pending = true;
@@ -969,6 +975,9 @@ static int receive_from_sock(struct connection *con, int buflen)
 	}
 	spin_unlock(&processqueue_lock);
 
+	if (ret > DLM_MAX_PROCESS_BUFFERS)
+		return DLM_IO_FLUSH;
+
 	return DLM_IO_SUCCESS;
 }
 
@@ -1503,6 +1512,9 @@ static void process_recv_sockets(struct work_struct *work)
 		wake_up(&con->shutdown_wait);
 		/* CF_RECV_PENDING cleared */
 		break;
+	case DLM_IO_FLUSH:
+		flush_workqueue(process_workqueue);
+		fallthrough;
	case DLM_IO_RESCHED:
 		cond_resched();
 		queue_work(io_workqueue, &con->rwork);
-- 
2.31.1
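
As a side note for reviewers, the backpressure idea above can be shown
with a minimal standalone userspace sketch: the producer counts queued
items with an atomic counter and, past a threshold, drains the queue
before accepting more input, analogous to flush_workqueue() above. All
identifiers below (receive(), drain(), MAX_QUEUED, ...) are hypothetical
illustration, not part of this patch or of the kernel API.

/* Minimal C11 sketch of threshold-based backpressure. */
#include <stdatomic.h>
#include <stdio.h>

#define MAX_QUEUED 24	/* plays the role of DLM_MAX_PROCESS_BUFFERS */

static atomic_int queued;

/* Consume one queued item (stand-in for processing one dlm buffer). */
static void process_one(void)
{
	atomic_fetch_sub(&queued, 1);
}

/* Drain everything, like flush_workqueue() waiting for the
 * processing work to finish before we read more from the socket. */
static void drain(void)
{
	while (atomic_load(&queued) > 0)
		process_one();
}

/* Receive path: enqueue one item, then back off if the consumer
 * has fallen too far behind. */
static int receive(void)
{
	int n = atomic_fetch_add(&queued, 1) + 1;

	if (n > MAX_QUEUED) {
		drain();	/* slow the producer down */
		return 1;	/* analogous to DLM_IO_FLUSH */
	}
	return 0;		/* analogous to DLM_IO_SUCCESS */
}

int main(void)
{
	int flushes = 0;

	/* Simulate a burst of incoming messages. */
	for (int i = 0; i < 1000; i++)
		flushes += receive();

	printf("drained the queue %d times\n", flushes);
	return 0;
}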