From: NeilBrown <neilb@suse.de>
To: Chuck Lever <chuck.lever@oracle.com>, Jeff Layton <jlayton@kernel.org>
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 08/12] SUNRPC: move task-dequeueing code into svc_recv()
Date: Mon, 31 Jul 2023 16:48:35 +1000
Message-ID: <20230731064839.7729-9-neilb@suse.de>
In-Reply-To: <20230731064839.7729-1-neilb@suse.de>

svc_recv() has become rather small, and svc_rqst_wait_and_dequeue_work()
performs two distinct tasks.

So move the "dequeue" part out of svc_rqst_wait_and_dequeue_work()
into svc_recv(). This balances the code between the two functions.

svc_rqst_wait_and_dequeue_work() is now svc_rqst_wait_for_work(), which
returns a bool indicating whether it actually waited. This result is
used to guide tracing and some statistics gathering.
Signed-off-by: NeilBrown <neilb@suse.de>
---
net/sunrpc/svc_xprt.c | 67 +++++++++++++++++++++----------------------
1 file changed, 32 insertions(+), 35 deletions(-)
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 604c486c8576..45a76313b7e1 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -722,14 +722,11 @@ rqst_should_sleep(struct svc_rqst *rqstp)
return true;
}
-static void svc_rqst_wait_and_dequeue_work(struct svc_rqst *rqstp)
+static bool svc_rqst_wait_for_work(struct svc_rqst *rqstp)
{
- struct svc_pool *pool = rqstp->rq_pool;
+ struct svc_pool *pool = rqstp->rq_pool;
bool slept = false;
- /* rq_xprt should be clear on entry */
- WARN_ON_ONCE(rqstp->rq_xprt);
-
if (rqst_should_sleep(rqstp)) {
set_current_state(TASK_IDLE);
smp_mb__before_atomic();
@@ -749,31 +746,7 @@ static void svc_rqst_wait_and_dequeue_work(struct svc_rqst *rqstp)
smp_mb__after_atomic();
}
try_to_freeze();
-
- if (kthread_should_stop())
- return;
-
- clear_bit(SP_TASK_PENDING, &pool->sp_flags);
- rqstp->rq_xprt = svc_xprt_dequeue(pool);
- if (rqstp->rq_xprt) {
- if (slept)
- trace_svc_pool_awoken(rqstp);
- else
- trace_svc_pool_polled(rqstp);
- goto out_found;
- }
-
- if (slept)
- percpu_counter_inc(&pool->sp_threads_no_work);
- return;
-out_found:
- /* Normally we will wait up to 5 seconds for any required
- * cache information to be provided.
- */
- if (!test_bit(SP_CONGESTED, &pool->sp_flags))
- rqstp->rq_chandle.thread_wait = 5*HZ;
- else
- rqstp->rq_chandle.thread_wait = 1*HZ;
+ return slept;
}
static void svc_add_new_temp_xprt(struct svc_serv *serv, struct svc_xprt *newxpt)
@@ -865,17 +838,41 @@ static void svc_handle_xprt(struct svc_rqst *rqstp, struct svc_xprt *xprt)
*/
void svc_recv(struct svc_rqst *rqstp)
{
- struct svc_xprt *xprt = NULL;
+ struct svc_pool *pool = rqstp->rq_pool;
+ bool slept;
if (!svc_alloc_arg(rqstp))
return;
- svc_rqst_wait_and_dequeue_work(rqstp);
+ slept = svc_rqst_wait_for_work(rqstp);
- xprt = rqstp->rq_xprt;
- if (xprt)
+ if (kthread_should_stop())
+ return;
+
+ clear_bit(SP_TASK_PENDING, &pool->sp_flags);
+
+ rqstp->rq_xprt = svc_xprt_dequeue(pool);
+ if (rqstp->rq_xprt) {
+ struct svc_xprt *xprt = rqstp->rq_xprt;
+
+ if (slept)
+ trace_svc_pool_awoken(rqstp);
+ else
+ trace_svc_pool_polled(rqstp);
+
+ /* Normally we will wait up to 5 seconds for any required
+ * cache information to be provided.
+ */
+ if (!test_bit(SP_CONGESTED, &pool->sp_flags))
+ rqstp->rq_chandle.thread_wait = 5 * HZ;
+ else
+ rqstp->rq_chandle.thread_wait = 1 * HZ;
svc_handle_xprt(rqstp, xprt);
-out:
+ return;
+ }
+
+ if (slept)
+ percpu_counter_inc(&pool->sp_threads_no_work);
}
EXPORT_SYMBOL_GPL(svc_recv);
--
2.40.1