From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <20260310143602.907472935@kernel.org>
User-Agent: quilt/0.69
Date: Tue, 10 Mar 2026 10:35:17 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, Vincent Donnefort
Subject: [for-next][PATCH 02/18] ring-buffer: Store bpage pointers into subbuf_ids
References: <20260310143515.132579088@kernel.org>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

From: Vincent Donnefort

The subbuf_ids field allows pointing to a specific page of the
ring-buffer based on its ID. As a preparation for the upcoming
ring-buffer remote support, point this array to the buffer_page
instead of the buffer_data_page.
Link: https://patch.msgid.link/20260309162516.2623589-3-vdonnefort@google.com
Reviewed-by: Steven Rostedt (Google)
Signed-off-by: Vincent Donnefort
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 82b4df579670..3d2804a7e8ab 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -555,7 +555,7 @@ struct ring_buffer_per_cpu {
 	unsigned int			mapped;
 	unsigned int			user_mapped;	/* user space mapping */
 	struct mutex			mapping_lock;
-	unsigned long			*subbuf_ids;	/* ID to subbuf VA */
+	struct buffer_page		**subbuf_ids;	/* ID to subbuf VA */
 	struct trace_buffer_meta	*meta_page;
 	struct ring_buffer_cpu_meta	*ring_meta;
 
@@ -7036,7 +7036,7 @@ static void rb_free_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
 }
 
 static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
-				   unsigned long *subbuf_ids)
+				   struct buffer_page **subbuf_ids)
 {
 	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
 	unsigned int nr_subbufs = cpu_buffer->nr_pages + 1;
@@ -7045,7 +7045,7 @@ static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
 	int id = 0;
 
 	id = rb_page_id(cpu_buffer, cpu_buffer->reader_page, id);
-	subbuf_ids[id++] = (unsigned long)cpu_buffer->reader_page->page;
+	subbuf_ids[id++] = cpu_buffer->reader_page;
 	cnt++;
 
 	first_subbuf = subbuf = rb_set_head_page(cpu_buffer);
@@ -7055,7 +7055,7 @@ static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
 		if (WARN_ON(id >= nr_subbufs))
 			break;
 
-		subbuf_ids[id] = (unsigned long)subbuf->page;
+		subbuf_ids[id] = subbuf;
 		rb_inc_page(&subbuf);
 		id++;
 
@@ -7064,7 +7064,7 @@ static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
 
 	WARN_ON(cnt != nr_subbufs);
 
-	/* install subbuf ID to kern VA translation */
+	/* install subbuf ID to bpage translation */
 	cpu_buffer->subbuf_ids = subbuf_ids;
 
 	meta->meta_struct_len = sizeof(*meta);
@@ -7220,13 +7220,15 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
 	}
 
 	while (p < nr_pages) {
+		struct buffer_page *subbuf;
 		struct page *page;
 		int off = 0;
 
 		if (WARN_ON_ONCE(s >= nr_subbufs))
 			return -EINVAL;
 
-		page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
+		subbuf = cpu_buffer->subbuf_ids[s];
+		page = virt_to_page((void *)subbuf->page);
 
 		for (; off < (1 << (subbuf_order)); off++, page++) {
 			if (p >= nr_pages)
@@ -7253,7 +7255,8 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
 		    struct vm_area_struct *vma)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
-	unsigned long flags, *subbuf_ids;
+	struct buffer_page **subbuf_ids;
+	unsigned long flags;
 	int err;
 
 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
@@ -7277,7 +7280,7 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
 	if (err)
 		return err;
 
-	/* subbuf_ids include the reader while nr_pages does not */
+	/* subbuf_ids includes the reader while nr_pages does not */
 	subbuf_ids = kcalloc(cpu_buffer->nr_pages + 1, sizeof(*subbuf_ids), GFP_KERNEL);
 	if (!subbuf_ids) {
 		rb_free_meta_page(cpu_buffer);
-- 
2.51.0