From: Jason Xing <kerneljasonxing@gmail.com>
To: axboe@kernel.dk, rostedt@goodmis.org, mhiramat@kernel.org,
	mathieu.desnoyers@efficios.com, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	Jason Xing <kernelxing@tencent.com>,
	Yushan Zhou <katrinzhou@tencent.com>
Subject: [PATCH v2 1/4] relayfs: support a counter tracking whether per-cpu buffers are full
Date: Thu, 15 May 2025 14:16:40 +0800	[thread overview]
Message-ID: <20250515061643.31472-2-kerneljasonxing@gmail.com> (raw)
In-Reply-To: <20250515061643.31472-1-kerneljasonxing@gmail.com>

From: Jason Xing <kernelxing@tencent.com>

When using the relay mechanism, we often encounter the case where new
data are lost or old unconsumed data are overwritten because of a slow
reader.

Add a 'full_count' field to the per-cpu buffer structure to detect when
the above happens.  Relay has two modes: 1) non-overwrite mode and
2) overwrite mode.  A full buffer therefore means, respectively, that
1) relayfs does not intend to accept new data and simply drops it, or
2) relayfs starts over and overwrites old unread data with new data.
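
For illustration only (the names below are hypothetical and not part of
this patch), a relay client opts into overwrite mode through its
subbuf_start callback.  A minimal sketch that always allows the next
sub-buffer to start, and therefore lets new data overwrite old unread
data, could look like this:

/*
 * Hypothetical client callback: always start the next sub-buffer,
 * i.e. run the channel in overwrite mode.  With this patch applied,
 * relay_subbuf_start() bumps buf->stats.full_count whenever the buffer
 * is full at sub-buffer switch time, before this callback is invoked.
 */
static int example_subbuf_start(struct rchan_buf *buf, void *subbuf,
				void *prev_subbuf)
{
	return 1;	/* keep writing, overwriting old unread data */
}

static const struct rchan_callbacks example_relay_cbs = {
	.subbuf_start	= example_subbuf_start,
	/* create_buf_file()/remove_buf_file() omitted from this sketch */
};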

Note: for performance reasons, this counter is not protected by any
explicit lock against modification from different threads.  Writers
calling __relay_write()/relay_write() are expected to provide their own
locking and perform the write under that protection, so there is no
need to add a new fine-grained lock here.
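
To make that concrete, here is a hypothetical caller (not code from
this series): relay_write() already runs with local interrupts disabled
while it copies into the per-cpu buffer, and any extra serialization is
the caller's responsibility, e.g.:

/*
 * Hypothetical writer: the caller supplies whatever locking its context
 * requires; relay_write() itself disables local interrupts around the
 * copy, so stats.full_count is only updated from the already serialized
 * writer path and needs no dedicated lock.
 */
static DEFINE_SPINLOCK(example_trace_lock);

static void example_trace_event(struct rchan *chan, const void *event,
				size_t len)
{
	unsigned long flags;

	spin_lock_irqsave(&example_trace_lock, flags);
	relay_write(chan, event, len);
	spin_unlock_irqrestore(&example_trace_lock, flags);
}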

Reviewed-by: Yushan Zhou <katrinzhou@tencent.com>
Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
v2
1. rename the 'full' member to 'full_count'
2. revise the commit log
I still chose to keep struct rchan_buf_stats so that later patches
implementing relay_dump() can dump more statistics from the buffer,
which means more fields can/will be added.  That said, I have no strong
preference for this organization.  If you don't like it, I will remove
it in the next respin :)
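
Purely as an illustration of why grouping the counters in a struct is
convenient (the helper below is a guess for this discussion, not the
relay_dump() interface added later in this series), a dump helper could
aggregate the per-cpu counters like so:

/*
 * Illustrative only: one possible way to sum the per-cpu counters
 * introduced by this patch.  The real dump interface is defined in a
 * later patch of this series.
 */
static unsigned int example_total_full_count(struct rchan *chan)
{
	unsigned int total = 0;
	int cpu;

	for_each_possible_cpu(cpu) {
		struct rchan_buf *buf = *per_cpu_ptr(chan->buf, cpu);

		if (buf)
			total += buf->stats.full_count;
	}

	return total;
}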
---
 include/linux/relay.h | 9 +++++++++
 kernel/relay.c        | 8 +++++++-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/include/linux/relay.h b/include/linux/relay.h
index e10a0fdf4325..ce7a1b396872 100644
--- a/include/linux/relay.h
+++ b/include/linux/relay.h
@@ -28,6 +28,14 @@
  */
 #define RELAYFS_CHANNEL_VERSION		7
 
+/*
+ * Relay buffer statistics dump
+ */
+struct rchan_buf_stats
+{
+	unsigned int full_count;	/* counter for buffer full */
+};
+
 /*
  * Per-cpu relay channel buffer
  */
@@ -43,6 +51,7 @@ struct rchan_buf
 	struct irq_work wakeup_work;	/* reader wakeup */
 	struct dentry *dentry;		/* channel file dentry */
 	struct kref kref;		/* channel buffer refcount */
+	struct rchan_buf_stats stats;	/* buffer stats */
 	struct page **page_array;	/* array of current buffer pages */
 	unsigned int page_count;	/* number of current buffer pages */
 	unsigned int finalized;		/* buffer has been finalized */
diff --git a/kernel/relay.c b/kernel/relay.c
index 94f79f52d826..eb3f630f3896 100644
--- a/kernel/relay.c
+++ b/kernel/relay.c
@@ -252,8 +252,13 @@ EXPORT_SYMBOL_GPL(relay_buf_full);
 static int relay_subbuf_start(struct rchan_buf *buf, void *subbuf,
 			      void *prev_subbuf)
 {
+	int full = relay_buf_full(buf);
+
+	if (full)
+		buf->stats.full_count++;
+
 	if (!buf->chan->cb->subbuf_start)
-		return !relay_buf_full(buf);
+		return !full;
 
 	return buf->chan->cb->subbuf_start(buf, subbuf,
 					   prev_subbuf);
@@ -298,6 +303,7 @@ static void __relay_reset(struct rchan_buf *buf, unsigned int init)
 	buf->finalized = 0;
 	buf->data = buf->start;
 	buf->offset = 0;
+	buf->stats.full_count = 0;
 
 	for (i = 0; i < buf->chan->n_subbufs; i++)
 		buf->padding[i] = 0;
-- 
2.43.5


Thread overview: 17+ messages
2025-05-15  6:16 [PATCH v2 0/4] relayfs: misc changes Jason Xing
2025-05-15  6:16 ` Jason Xing [this message]
2025-05-15 18:17   ` [PATCH v2 1/4] relayfs: support a counter tracking whether per-cpu buffers are full Jens Axboe
2025-05-15  6:16 ` [PATCH v2 2/4] relayfs: introduce dump of relayfs statistics function Jason Xing
2025-05-15 18:19   ` Jens Axboe
2025-05-15 23:24     ` Jason Xing
2025-05-16  2:08   ` Masami Hiramatsu
2025-05-16  2:32     ` Jason Xing
2025-05-16  4:37   ` Christoph Hellwig
2025-05-16  5:02     ` Jason Xing
2025-05-16  5:06       ` Christoph Hellwig
2025-05-16  5:44         ` Jason Xing
2025-05-15  6:16 ` [PATCH v2 3/4] blktrace: use rbuf->stats.full as a drop indicator in relayfs Jason Xing
2025-05-15 18:19   ` Jens Axboe
2025-05-15 18:21   ` Jens Axboe
2025-05-15 23:26     ` Jason Xing
2025-05-15  6:16 ` [PATCH v2 4/4] relayfs: support a counter tracking if data is too big to write Jason Xing
