From: Alexander Aring <aahringo@redhat.com>
To: teigland@redhat.com
Cc: cluster-devel@redhat.com, gfs2@lists.linux.dev
Subject: [Cluster-devel] [RFC dlm/next 03/10] fs: dlm: remove explicit scheduling points
Date: Fri, 8 Sep 2023 16:46:04 -0400 [thread overview]
Message-ID: <20230908204611.1910601-3-aahringo@redhat.com> (raw)
In-Reply-To: <20230908204611.1910601-1-aahringo@redhat.com>
This patch prepares for converting some locks to spinlocks. Explicit
scheduling points cannot remain in code paths that will run with a
spinlock held, so remove them. With fewer scheduling points there is
less opportunity to yield to other tasks; if removing them turns out to
cause problems, we will need to find other solutions.
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/dlm/lock.c | 2 --
fs/dlm/recover.c | 1 -
fs/dlm/requestqueue.c | 1 -
3 files changed, 4 deletions(-)
diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 970b8499b66f..61eb285c613c 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -4979,7 +4979,6 @@ void dlm_recover_waiters_pre(struct dlm_ls *ls)
log_error(ls, "invalid lkb wait_type %d %d",
lkb->lkb_wait_type, wait_type);
}
- schedule();
}
mutex_unlock(&ls->ls_waiters_mutex);
kfree(ms_local);
@@ -5218,7 +5217,6 @@ void dlm_recover_purge(struct dlm_ls *ls)
}
unlock_rsb(r);
unhold_rsb(r);
- cond_resched();
}
up_write(&ls->ls_root_sem);
diff --git a/fs/dlm/recover.c b/fs/dlm/recover.c
index ce6dc914cb86..752002304ca9 100644
--- a/fs/dlm/recover.c
+++ b/fs/dlm/recover.c
@@ -543,7 +543,6 @@ int dlm_recover_masters(struct dlm_ls *ls, uint64_t seq)
else
error = recover_master(r, &count, seq);
unlock_rsb(r);
- cond_resched();
total++;
if (error) {
diff --git a/fs/dlm/requestqueue.c b/fs/dlm/requestqueue.c
index c05940afd063..ef7b7c8d6907 100644
--- a/fs/dlm/requestqueue.c
+++ b/fs/dlm/requestqueue.c
@@ -106,7 +106,6 @@ int dlm_process_requestqueue(struct dlm_ls *ls)
error = -EINTR;
break;
}
- schedule();
}
return error;
--
2.31.1