From: Vincent Donnefort <vdonnefort@google.com>
Date: Tue, 6 May 2025 17:48:05 +0100
Subject: [PATCH v4 09/24] tracing: Introduce simple_ring_buffer
Message-ID: <20250506164820.515876-10-vdonnefort@google.com>
In-Reply-To: <20250506164820.515876-1-vdonnefort@google.com>
References: <20250506164820.515876-1-vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    linux-trace-kernel@vger.kernel.org, maz@kernel.org, oliver.upton@linux.dev,
    joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
    jstultz@google.com, qperret@google.com, will@kernel.org,
    kernel-team@android.com, linux-kernel@vger.kernel.org,
    Vincent Donnefort <vdonnefort@google.com>

Add a simple implementation of the kernel ring-buffer. It is intended to
be used later by ring-buffer remotes such as the pKVM hypervisor, hence
the need for a cut-down version (write only) without any dependency.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

---
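A rough illustration of how a remote writer is expected to drive this
API (not part of this patch; remote_alloc() and remote_clock() are
placeholders for whatever allocator and trace clock the remote
environment provides):

  #include <linux/simple_ring_buffer.h>

  /*
   * struct simple_rb_per_cpu and struct simple_buffer_page are public
   * only so the caller can size this allocation: one simple_rb_per_cpu
   * plus one simple_buffer_page per page described by the desc.
   */
  static int remote_rb_load(struct simple_rb_per_cpu *cpu_buffer,
                            const struct ring_buffer_desc *desc)
  {
          struct simple_buffer_page *bpages;

          bpages = remote_alloc(desc->nr_page_va * sizeof(*bpages));
          if (!bpages)
                  return -ENOMEM;

          return simple_ring_buffer_init(cpu_buffer, bpages, desc);
  }

  static void remote_rb_trace(struct simple_rb_per_cpu *cpu_buffer, u64 val)
  {
          u64 *entry;

          /* NULL while tracing is disabled or the buffer is in teardown */
          entry = simple_ring_buffer_reserve(cpu_buffer, sizeof(*entry),
                                             remote_clock());
          if (!entry)
                  return;

          *entry = val;

          simple_ring_buffer_commit(cpu_buffer);
  }

Only one writer at a time can own the buffer: reserve() moves status
from READY to WRITING and commit() releases it, so a concurrent
reserve() returns NULL while the teardown path spins until the writer
has committed.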
diff --git a/include/linux/simple_ring_buffer.h b/include/linux/simple_ring_buffer.h
new file mode 100644
index 000000000000..6cf8486d46e2
--- /dev/null
+++ b/include/linux/simple_ring_buffer.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_SIMPLE_RING_BUFFER_H
+#define _LINUX_SIMPLE_RING_BUFFER_H
+
+#include <linux/list.h>
+#include <linux/ring_buffer.h>
+#include <linux/types.h>
+
+/*
+ * Ideally these structs would stay private, but the caller needs to know how
+ * big they are to allocate the memory for simple_ring_buffer_init().
+ */
+struct simple_buffer_page {
+	struct list_head list;
+	struct buffer_data_page *page;
+	u64 entries;
+	u32 write;
+	u32 id;
+};
+
+struct simple_rb_per_cpu {
+	struct simple_buffer_page *tail_page;
+	struct simple_buffer_page *reader_page;
+	struct simple_buffer_page *head_page;
+	struct simple_buffer_page *bpages;
+	struct trace_buffer_meta *meta;
+	u32 nr_pages;
+
+#define SIMPLE_RB_UNAVAILABLE 0
+#define SIMPLE_RB_READY 1
+#define SIMPLE_RB_WRITING 2
+	u32 status;
+
+	u64 last_overrun;
+	u64 write_stamp;
+
+	struct simple_rb_cbs *cbs;
+};
+
+void *simple_ring_buffer_reserve(struct simple_rb_per_cpu *cpu_buffer, unsigned long length,
+				 u64 timestamp);
+void simple_ring_buffer_commit(struct simple_rb_per_cpu *cpu_buffer);
+void simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer);
+int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
+			    const struct ring_buffer_desc *desc);
+int simple_ring_buffer_enable_tracing(struct simple_rb_per_cpu *cpu_buffer, bool enable);
+int simple_ring_buffer_swap_reader_page(struct simple_rb_per_cpu *cpu_buffer);
+int simple_ring_buffer_reset(struct simple_rb_per_cpu *cpu_buffer);
+#endif
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 2fcc86c7fe7e..407cb05cc8a0 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -1218,4 +1218,7 @@ source "kernel/trace/rv/Kconfig"
 config TRACE_REMOTE
 	bool
 
+config SIMPLE_RING_BUFFER
+	bool
+
 endif # FTRACE
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index b8204ae93744..cece10b1f97c 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -111,4 +111,5 @@ obj-$(CONFIG_TRACEPOINT_BENCHMARK) += trace_benchmark.o
 obj-$(CONFIG_RV) += rv/
 
 obj-$(CONFIG_TRACE_REMOTE) += trace_remote.o
+obj-$(CONFIG_SIMPLE_RING_BUFFER) += simple_ring_buffer.o
 libftrace-y := ftrace.o
diff --git a/kernel/trace/simple_ring_buffer.c b/kernel/trace/simple_ring_buffer.c
new file mode 100644
index 000000000000..da9ea42b9926
--- /dev/null
+++ b/kernel/trace/simple_ring_buffer.c
@@ -0,0 +1,352 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 - Google LLC
+ * Author: Vincent Donnefort <vdonnefort@google.com>
+ */
+
+#include <asm/barrier.h>
+#include <asm/local.h>
+
+#include <linux/ring_buffer.h>
+#include <linux/simple_ring_buffer.h>
+
+#define SIMPLE_RB_LINK_HEAD 1UL
+#define SIMPLE_RB_LINK_MASK ~SIMPLE_RB_LINK_HEAD
+
+static bool simple_bpage_try_shunt_link(struct simple_buffer_page *bpage,
+					struct simple_buffer_page *dst,
+					unsigned long old_flags, unsigned long flags)
+{
+	unsigned long *ptr = (unsigned long *)(&bpage->list.next);
+	unsigned long old = (*ptr & SIMPLE_RB_LINK_MASK) | old_flags;
+	unsigned long new = (unsigned long)(&dst->list) | flags;
+
+	return cmpxchg(ptr, old, new) == old;
+}
+
+static void simple_bpage_set_link_flag(struct simple_buffer_page *bpage, unsigned long flag)
+{
+	bpage->list.next = (struct list_head *)
+		(((unsigned long)bpage->list.next & SIMPLE_RB_LINK_MASK) | flag);
+}
+
+static struct simple_buffer_page *simple_bpage_from_link(struct list_head *list)
+{
+	unsigned long ptr = (unsigned long)list & SIMPLE_RB_LINK_MASK;
+
+	return container_of((struct list_head *)ptr, struct simple_buffer_page, list);
+}
+
+static struct simple_buffer_page *simple_bpage_next_page(struct simple_buffer_page *bpage)
+{
+	return simple_bpage_from_link(bpage->list.next);
+}
+
+static bool simple_bpage_is_head(struct simple_buffer_page *bpage)
+{
+	return (unsigned long)bpage->list.prev->next & SIMPLE_RB_LINK_HEAD;
+}
+
+static void simple_bpage_reset(struct simple_buffer_page *bpage)
+{
+	bpage->write = 0;
+	bpage->entries = 0;
+
+	local_set(&bpage->page->commit, 0);
+}
+
+static void simple_bpage_init(struct simple_buffer_page *bpage, unsigned long page)
+{
+	INIT_LIST_HEAD(&bpage->list);
+	bpage->page = (struct buffer_data_page *)page;
+
+	simple_bpage_reset(bpage);
+}
+
+#define simple_rb_meta_inc(__meta, __inc)	\
+	WRITE_ONCE((__meta), (__meta + __inc))
+
+static bool simple_rb_loaded(struct simple_rb_per_cpu *cpu_buffer)
+{
+	return !!cpu_buffer->bpages;
+}
+
+int simple_ring_buffer_swap_reader_page(struct simple_rb_per_cpu *cpu_buffer)
+{
+	struct simple_buffer_page *last, *head, *reader;
+	unsigned long overrun;
+
+	if (!simple_rb_loaded(cpu_buffer))
+		return -ENODEV;
+
+	head = cpu_buffer->head_page;
+	reader = cpu_buffer->reader_page;
+
+	do {
+		/* Run after the writer to find the head */
+		while (!simple_bpage_is_head(head))
+			cpu_buffer->head_page = head = simple_bpage_next_page(head);
+
+		/* Connect the reader page around the header page */
+		reader->list.next = head->list.next;
+		reader->list.prev = head->list.prev;
+
+		/* The last page before the head */
+		last = simple_bpage_from_link(head->list.prev);
+
+		/* The reader page points to the new header page */
+		simple_bpage_set_link_flag(reader, SIMPLE_RB_LINK_HEAD);
+
+		overrun = smp_load_acquire(&cpu_buffer->meta->overrun);
+	} while (!simple_bpage_try_shunt_link(last, reader, SIMPLE_RB_LINK_HEAD, 0));
+
+	cpu_buffer->head_page = simple_bpage_from_link(reader->list.next);
+	cpu_buffer->head_page->list.prev = &reader->list;
+	cpu_buffer->reader_page = head;
+	cpu_buffer->meta->reader.lost_events = overrun - cpu_buffer->last_overrun;
+	cpu_buffer->meta->reader.id = cpu_buffer->reader_page->id;
+	cpu_buffer->last_overrun = overrun;
+
+	return 0;
+}
+
+static struct simple_buffer_page *simple_rb_move_tail(struct simple_rb_per_cpu *cpu_buffer)
+{
+	struct simple_buffer_page *tail, *new_tail;
+
+	tail = cpu_buffer->tail_page;
+	new_tail = simple_bpage_next_page(tail);
+
+	if (simple_bpage_try_shunt_link(tail, new_tail, SIMPLE_RB_LINK_HEAD, 0)) {
+		/*
+		 * Oh no! We've caught the head. There is no head anymore and
+		 * swap_reader will spin until we set the new one. Overrun must
+		 * be written first, to make sure we report the correct number
+		 * of lost events.
+		 */
+		simple_rb_meta_inc(cpu_buffer->meta->overrun, new_tail->entries);
+		simple_rb_meta_inc(meta_pages_lost(cpu_buffer->meta), 1);
+
+		smp_store_release(&new_tail->list.next,
+				  (struct list_head *)((unsigned long)new_tail->list.next |
+						       SIMPLE_RB_LINK_HEAD));
+	}
+
+	simple_bpage_reset(new_tail);
+	cpu_buffer->tail_page = new_tail;
+
+	simple_rb_meta_inc(meta_pages_touched(cpu_buffer->meta), 1);
+
+	return new_tail;
+}
+
+static unsigned long rb_event_size(unsigned long length)
+{
+	struct ring_buffer_event *event;
+
+	return length + RB_EVNT_HDR_SIZE + sizeof(event->array[0]);
+}
+
+static struct ring_buffer_event *
+rb_event_add_ts_extend(struct ring_buffer_event *event, u64 delta)
+{
+	event->type_len = RINGBUF_TYPE_TIME_EXTEND;
+	event->time_delta = delta & TS_MASK;
+	event->array[0] = delta >> TS_SHIFT;
+
+	return (struct ring_buffer_event *)((unsigned long)event + 8);
+}
+
+static struct ring_buffer_event *
+simple_rb_reserve_next(struct simple_rb_per_cpu *cpu_buffer, unsigned long length, u64 timestamp)
+{
+	unsigned long ts_ext_size = 0, event_size = rb_event_size(length);
+	struct simple_buffer_page *tail = cpu_buffer->tail_page;
+	struct ring_buffer_event *event;
+	u32 write, prev_write;
+	u64 time_delta;
+
+	time_delta = timestamp - cpu_buffer->write_stamp;
+
+	if (test_time_stamp(time_delta))
+		ts_ext_size = 8;
+
+	prev_write = tail->write;
+	write = prev_write + event_size + ts_ext_size;
+
+	if (unlikely(write > (PAGE_SIZE - BUF_PAGE_HDR_SIZE)))
+		tail = simple_rb_move_tail(cpu_buffer);
+
+	if (!tail->entries) {
+		tail->page->time_stamp = timestamp;
+		time_delta = 0;
+		ts_ext_size = 0;
+		write = event_size;
+		prev_write = 0;
+	}
+
+	tail->write = write;
+	tail->entries++;
+
+	cpu_buffer->write_stamp = timestamp;
+
+	event = (struct ring_buffer_event *)(tail->page->data + prev_write);
+	if (ts_ext_size) {
+		event = rb_event_add_ts_extend(event, time_delta);
+		time_delta = 0;
+	}
+
+	event->type_len = 0;
+	event->time_delta = time_delta;
+	event->array[0] = event_size - RB_EVNT_HDR_SIZE;
+
+	return event;
+}
+
+void *simple_ring_buffer_reserve(struct simple_rb_per_cpu *cpu_buffer, unsigned long length,
+				 u64 timestamp)
+{
+	struct ring_buffer_event *rb_event;
+
+	if (cmpxchg(&cpu_buffer->status, SIMPLE_RB_READY, SIMPLE_RB_WRITING) != SIMPLE_RB_READY)
+		return NULL;
+
+	rb_event = simple_rb_reserve_next(cpu_buffer, length, timestamp);
+
+	return &rb_event->array[1];
+}
+
+void simple_ring_buffer_commit(struct simple_rb_per_cpu *cpu_buffer)
+{
+	local_set(&cpu_buffer->tail_page->page->commit,
+		  cpu_buffer->tail_page->write);
+	simple_rb_meta_inc(cpu_buffer->meta->entries, 1);
+
+	/*
+	 * Paired with simple_rb_enable_tracing() to ensure data is
+	 * written to the ring-buffer before teardown.
+	 */
+	smp_store_release(&cpu_buffer->status, SIMPLE_RB_READY);
+}
+
+static u32 simple_rb_enable_tracing(struct simple_rb_per_cpu *cpu_buffer, bool enable)
+{
+	u32 prev_status;
+
+	if (enable)
+		return cmpxchg(&cpu_buffer->status, SIMPLE_RB_UNAVAILABLE, SIMPLE_RB_READY);
+
+	/* Wait for the buffer to be released */
+	do {
+		prev_status = cmpxchg_acquire(&cpu_buffer->status,
+					      SIMPLE_RB_READY,
+					      SIMPLE_RB_UNAVAILABLE);
+	} while (prev_status == SIMPLE_RB_WRITING);
+
+	return prev_status;
+}
+
+int simple_ring_buffer_reset(struct simple_rb_per_cpu *cpu_buffer)
+{
+	struct simple_buffer_page *bpage;
+	u32 prev_status;
+
+	if (!simple_rb_loaded(cpu_buffer))
+		return -ENODEV;
+
+	prev_status = simple_rb_enable_tracing(cpu_buffer, false);
+
+	while (!simple_bpage_is_head(cpu_buffer->head_page))
+		cpu_buffer->head_page = simple_bpage_next_page(cpu_buffer->head_page);
+
+	bpage = cpu_buffer->tail_page = cpu_buffer->head_page;
+	do {
+		simple_bpage_reset(bpage);
+		bpage = simple_bpage_next_page(bpage);
+	} while (bpage != cpu_buffer->head_page);
+
+	simple_bpage_reset(cpu_buffer->reader_page);
+
+	cpu_buffer->last_overrun = 0;
+	cpu_buffer->write_stamp = 0;
+
+	cpu_buffer->meta->reader.read = 0;
+	cpu_buffer->meta->reader.lost_events = 0;
+	cpu_buffer->meta->entries = 0;
+	cpu_buffer->meta->overrun = 0;
+	cpu_buffer->meta->read = 0;
+	meta_pages_lost(cpu_buffer->meta) = 0;
+	meta_pages_touched(cpu_buffer->meta) = 0;
+
+	if (prev_status == SIMPLE_RB_READY)
+		simple_rb_enable_tracing(cpu_buffer, true);
+
+	return 0;
+}
+
+int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
+			    const struct ring_buffer_desc *desc)
+{
+	struct simple_buffer_page *bpage = bpages;
+	int i;
+
+	/* At least one reader page and one head page */
+	if (desc->nr_page_va < 2)
+		return -EINVAL;
+
+	memset(cpu_buffer, 0, sizeof(*cpu_buffer));
+
+	cpu_buffer->bpages = bpages;
+
+	cpu_buffer->meta = (void *)desc->meta_va;
+	memset(cpu_buffer->meta, 0, sizeof(*cpu_buffer->meta));
+	cpu_buffer->meta->meta_page_size = PAGE_SIZE;
+	cpu_buffer->meta->nr_subbufs = desc->nr_page_va;
+
+	/* The reader page is not part of the ring initially */
+	simple_bpage_init(bpage, desc->page_va[0]);
+	bpage->id = 0;
+
+	cpu_buffer->nr_pages = 1;
+
+	cpu_buffer->reader_page = bpage;
+	cpu_buffer->tail_page = bpage + 1;
+	cpu_buffer->head_page = bpage + 1;
+
+	for (i = 1; i < desc->nr_page_va; i++) {
+		simple_bpage_init(++bpage, desc->page_va[i]);
+
+		bpage->list.next = &(bpage + 1)->list;
+		bpage->list.prev = &(bpage - 1)->list;
+		bpage->id = i;
+
+		cpu_buffer->nr_pages = i + 1;
+	}
+
+	/* Close the ring */
+	bpage->list.next = &cpu_buffer->tail_page->list;
+	cpu_buffer->tail_page->list.prev = &bpage->list;
+
+	/* The last init'ed page points to the head page */
+	simple_bpage_set_link_flag(bpage, SIMPLE_RB_LINK_HEAD);
+
+	return 0;
+}
+
+void simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer)
+{
+	if (!simple_rb_loaded(cpu_buffer))
+		return;
+
+	simple_rb_enable_tracing(cpu_buffer, false);
+
+	cpu_buffer->bpages = NULL;
+}
+
+int simple_ring_buffer_enable_tracing(struct simple_rb_per_cpu *cpu_buffer, bool enable)
+{
+	if (!simple_rb_loaded(cpu_buffer))
+		return -ENODEV;
+
+	simple_rb_enable_tracing(cpu_buffer, enable);
+
+	return 0;
+}
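For completeness, a sketch of the consuming side of the same API (also
not part of this patch; consume_page() is a placeholder for the actual
event decoding, which depends on how the pages are shared with the
reader):

  static int consumer_drain(struct simple_rb_per_cpu *cpu_buffer)
  {
          int ret;

          /* Take the current head page out of the ring */
          ret = simple_ring_buffer_swap_reader_page(cpu_buffer);
          if (ret)
                  return ret;

          /*
           * That page is now the reader page: meta->reader.id says which
           * page to decode and meta->reader.lost_events how many events
           * were overwritten since the previous swap.
           */
          return consume_page(cpu_buffer->meta->reader.id,
                              cpu_buffer->meta->reader.lost_events);
  }

Writing is gated by simple_ring_buffer_enable_tracing(): enabling moves
status from UNAVAILABLE to READY, while disabling spins for as long as a
writer holds the buffer, which is what makes unload and reset safe
against an in-flight reserve/commit pair.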