* [PATCH] ring-buffer: Have the buffer update counter be atomic
@ 2024-10-10 23:58 Steven Rostedt
From: Steven Rostedt @ 2024-10-10 23:58 UTC
  To: LKML, Linux Trace Kernel; +Cc: Masami Hiramatsu, Mathieu Desnoyers, Petr Pavlu

From: Steven Rostedt <rostedt@goodmis.org>

In order to prevent any subtle races with the buffer update counter,
change it to an atomic_t. Also, since atomic_t is 32 bits, move it in the
ring_buffer_per_cpu structure next to "current_context", which is also
only 32 bits, so the two fields share one 64-bit aligned slot instead of
leaving a padding hole before the following pointer.
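
For illustration only (not part of the patch itself): a minimal, compilable
sketch of the layout argument. The atomic_t here is a stand-in for the
kernel's (which wraps a 32-bit int), and the structs are trimmed down to
just the three members involved; the real ring_buffer_per_cpu has many more.

#include <stdio.h>

/* Stand-in for the kernel's atomic_t, which wraps a 32-bit int. */
typedef struct { int counter; } atomic_t;

struct list_head { struct list_head *next, *prev; };

/* Old ordering: the 8-byte "pages" pointer forces 4 bytes of padding
 * after the 32-bit current_context, and cnt takes another 8 bytes. */
struct old_layout {
	unsigned int		current_context;
	struct list_head	*pages;
	unsigned long		cnt;
};

/* New ordering: the 32-bit atomic_t fills the slot next to
 * current_context, so the pointer that follows is already aligned. */
struct new_layout {
	unsigned int		current_context;
	atomic_t		cnt;
	struct list_head	*pages;
};

int main(void)
{
	/* On a typical LP64 target this prints "old: 24  new: 16". */
	printf("old: %zu  new: %zu\n",
	       sizeof(struct old_layout), sizeof(struct new_layout));
	return 0;
}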

The counter is only used to detect that the buffer has been updated while
the buffer verifier check is running, so whether or not it is atomic is
not really important. But since updates to the counter are never in the
fast path, there is no cost to making it consistent.
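
Again for illustration only: a self-contained sketch of the generation
counter pattern that the verifier relies on, with C11 atomics and a
pthread mutex standing in for atomic_t and reader_lock. Writers bump the
counter under the lock whenever they change the list (mirroring the
atomic_inc(&cpu_buffer->cnt) calls in the patch below); the checker
samples it, drops the lock between steps, and restarts if the value moved.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_uint list_gen;	/* bumped on every list change */

/* A writer modifies the list under the lock and bumps the generation
 * counter, so any concurrent checker can tell the list changed. */
static void writer_modify_list(void)
{
	pthread_mutex_lock(&list_lock);
	/* ... unlink/relink pages here ... */
	atomic_fetch_add(&list_gen, 1);
	pthread_mutex_unlock(&list_lock);
}

/* The checker never holds the lock for the whole walk; if the
 * generation changed while the lock was dropped, it starts over. */
static void check_list(void)
{
	unsigned int snapshot;
	int nr_loops = 0;
	bool done = false;

again:
	if (++nr_loops > 3)
		return;		/* best-effort check, give up */

	pthread_mutex_lock(&list_lock);
	snapshot = atomic_load(&list_gen);
	/* ... validate the head of the list ... */
	pthread_mutex_unlock(&list_lock);

	while (!done) {
		pthread_mutex_lock(&list_lock);
		if (snapshot != atomic_load(&list_gen)) {
			/* The list was updated, try again. */
			pthread_mutex_unlock(&list_lock);
			goto again;
		}
		/* ... validate the next chunk, set done when finished ... */
		done = true;
		pthread_mutex_unlock(&list_lock);
	}
}

int main(void)
{
	writer_modify_list();
	check_list();
	return 0;
}

Since both the sample and the later comparison happen under the lock, a
plain integer would behave the same way in this sketch; the atomic type
mainly rules out torn or racy accesses if the counter is ever read
outside the lock.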

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Note, this is based on top of:

  https://lore.kernel.org/linux-trace-kernel/20240715145141.5528-1-petr.pavlu@suse.com/

 kernel/trace/ring_buffer.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index a6a1c26ea2e3..bbf7f68f9af2 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -481,9 +481,9 @@ struct ring_buffer_per_cpu {
 	struct buffer_data_page		*free_page;
 	unsigned long			nr_pages;
 	unsigned int			current_context;
-	struct list_head		*pages;
 	/* pages generation counter, incremented when the list changes */
-	unsigned long			cnt;
+	atomic_t			cnt;
+	struct list_head		*pages;
 	struct buffer_page		*head_page;	/* read from head */
 	struct buffer_page		*tail_page;	/* write to tail */
 	struct buffer_page		*commit_page;	/* committed pages */
@@ -1532,7 +1532,7 @@ static void rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
 	head = rb_list_head(cpu_buffer->pages);
 	if (!rb_check_links(cpu_buffer, head))
 		goto out_locked;
-	buffer_cnt = cpu_buffer->cnt;
+	buffer_cnt = atomic_read(&cpu_buffer->cnt);
 	tmp = head;
 	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
 
@@ -1540,7 +1540,7 @@ static void rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
 	while (true) {
 		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
 
-		if (buffer_cnt != cpu_buffer->cnt) {
+		if (buffer_cnt != atomic_read(&cpu_buffer->cnt)) {
 			/* The list was updated, try again. */
 			raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
 			goto again;
@@ -2585,7 +2585,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 
 	/* make sure pages points to a valid page in the ring buffer */
 	cpu_buffer->pages = next_page;
-	cpu_buffer->cnt++;
+	atomic_inc(&cpu_buffer->cnt);
 
 	/* update head page */
 	if (head_bit)
@@ -2692,7 +2692,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 			 * pointer to point to end of list
 			 */
 			head_page->prev = last_page;
-			cpu_buffer->cnt++;
+			atomic_inc(&cpu_buffer->cnt);
 			success = true;
 			break;
 		}
@@ -5347,7 +5347,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
 	rb_list_head(reader->list.next)->prev = &cpu_buffer->reader_page->list;
 	rb_inc_page(&cpu_buffer->head_page);
 
-	cpu_buffer->cnt++;
+	atomic_inc(&cpu_buffer->cnt);
 	local_inc(&cpu_buffer->pages_read);
 
 	/* Finally update the reader page to the new head */
@@ -6804,7 +6804,7 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 		cpu_buffer->pages = cpu_buffer->new_pages.next;
 		cpu_buffer->new_pages.next->prev = cpu_buffer->new_pages.prev;
 		cpu_buffer->new_pages.prev->next = cpu_buffer->new_pages.next;
-		cpu_buffer->cnt++;
+		atomic_inc(&cpu_buffer->cnt);
 
 		/* Clear the new_pages list */
 		INIT_LIST_HEAD(&cpu_buffer->new_pages);
-- 
2.45.2

