Linux RCU subsystem development
From: neeraj.upadhyay@kernel.org
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com,
	rostedt@goodmis.org, paulmck@kernel.org,
	neeraj.upadhyay@kernel.org, neeraj.upadhyay@amd.com,
	boqun.feng@gmail.com, joel@joelfernandes.org, urezki@gmail.com,
	frederic@kernel.org
Subject: [PATCH rcu 07/14] rcuscale: Print detailed grace-period and barrier diagnostics
Date: Fri, 16 Aug 2024 12:32:49 +0530	[thread overview]
Message-ID: <20240816070256.60993-7-neeraj.upadhyay@kernel.org> (raw)
In-Reply-To: <20240816070209.GA60666@neeraj.linux>

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit uses the new rcu_tasks_torture_stats_print(),
rcu_tasks_trace_torture_stats_print(), and
rcu_tasks_rude_torture_stats_print() functions to provide detailed
diagnostics on grace-period, callback, and barrier state when
rcu_scale_writer() hangs.

[ paulmck: Apply kernel test robot feedback. ]

Signed-off-by: "Paul E. McKenney" <paulmck@kernel.org>
Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>
---
 kernel/rcu/rcuscale.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
index 1b9a43653d7e..c507750e94d8 100644
--- a/kernel/rcu/rcuscale.c
+++ b/kernel/rcu/rcuscale.c
@@ -298,6 +298,11 @@ static void tasks_scale_read_unlock(int idx)
 {
 }
 
+static void rcu_tasks_scale_stats(void)
+{
+	rcu_tasks_torture_stats_print(scale_type, SCALE_FLAG);
+}
+
 static struct rcu_scale_ops tasks_ops = {
 	.ptype		= RCU_TASKS_FLAVOR,
 	.init		= rcu_sync_scale_init,
@@ -310,6 +315,7 @@ static struct rcu_scale_ops tasks_ops = {
 	.sync		= synchronize_rcu_tasks,
 	.exp_sync	= synchronize_rcu_tasks,
 	.rso_gp_kthread	= get_rcu_tasks_gp_kthread,
+	.stats		= IS_ENABLED(CONFIG_TINY_RCU) ? NULL : rcu_tasks_scale_stats,
 	.name		= "tasks"
 };
 
@@ -336,6 +342,11 @@ static void tasks_rude_scale_read_unlock(int idx)
 {
 }
 
+static void rcu_tasks_rude_scale_stats(void)
+{
+	rcu_tasks_rude_torture_stats_print(scale_type, SCALE_FLAG);
+}
+
 static struct rcu_scale_ops tasks_rude_ops = {
 	.ptype		= RCU_TASKS_RUDE_FLAVOR,
 	.init		= rcu_sync_scale_init,
@@ -346,6 +357,7 @@ static struct rcu_scale_ops tasks_rude_ops = {
 	.sync		= synchronize_rcu_tasks_rude,
 	.exp_sync	= synchronize_rcu_tasks_rude,
 	.rso_gp_kthread	= get_rcu_tasks_rude_gp_kthread,
+	.stats		= IS_ENABLED(CONFIG_TINY_RCU) ? NULL : rcu_tasks_rude_scale_stats,
 	.name		= "tasks-rude"
 };
 
@@ -374,6 +386,11 @@ static void tasks_trace_scale_read_unlock(int idx)
 	rcu_read_unlock_trace();
 }
 
+static void rcu_tasks_trace_scale_stats(void)
+{
+	rcu_tasks_trace_torture_stats_print(scale_type, SCALE_FLAG);
+}
+
 static struct rcu_scale_ops tasks_tracing_ops = {
 	.ptype		= RCU_TASKS_FLAVOR,
 	.init		= rcu_sync_scale_init,
@@ -386,6 +403,7 @@ static struct rcu_scale_ops tasks_tracing_ops = {
 	.sync		= synchronize_rcu_tasks_trace,
 	.exp_sync	= synchronize_rcu_tasks_trace,
 	.rso_gp_kthread	= get_rcu_tasks_trace_gp_kthread,
+	.stats		= IS_ENABLED(CONFIG_TINY_RCU) ? NULL : rcu_tasks_trace_scale_stats,
 	.name		= "tasks-tracing"
 };
 
-- 
2.40.1



Thread overview: 15+ messages
2024-08-16  7:02 [PATCH rcu 00/14] RCU scaling tests updates for v6.12 Neeraj Upadhyay
2024-08-16  7:02 ` [PATCH rcu 01/14] refscale: Add TINY scenario neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 02/14] refscale: Optimize process_durations() neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 03/14] rcuscale: Save a few lines with whitespace-only change neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 04/14] rcuscale: Dump stacks of stalled rcu_scale_writer() instances neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 05/14] rcuscale: Dump grace-period statistics when rcu_scale_writer() stalls neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 06/14] rcu: Mark callbacks not currently participating in barrier operation neeraj.upadhyay
2024-08-16  7:02 ` neeraj.upadhyay [this message]
2024-08-16  7:02 ` [PATCH rcu 08/14] rcuscale: Provide clear error when async specified without primitives neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 09/14] rcuscale: Make all writer tasks report upon hang neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 10/14] rcuscale: Make rcu_scale_writer() tolerate repeated GFP_KERNEL failure neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 11/14] rcuscale: Use special allocator for rcu_scale_writer() neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 12/14] rcuscale: NULL out top-level pointers to heap memory neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 13/14] rcuscale: Count outstanding callbacks per-task rather than per-CPU neeraj.upadhyay
2024-08-16  7:02 ` [PATCH rcu 14/14] refscale: Constify struct ref_scale_ops neeraj.upadhyay
