From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 26 Jan 2026 10:44:06 +0000
In-Reply-To: <20260126104419.1649811-1-vdonnefort@google.com>
Mime-Version: 1.0
References: <20260126104419.1649811-1-vdonnefort@google.com>
X-Mailer: git-send-email 2.52.0.457.g6b5491de43-goog
Message-ID: <20260126104419.1649811-18-vdonnefort@google.com>
Subject: [PATCH v10 17/30] tracing: load/unload page callbacks for simple_ring_buffer
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
        linux-trace-kernel@vger.kernel.org, maz@kernel.org, oliver.upton@linux.dev,
        joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
        jstultz@google.com, qperret@google.com, will@kernel.org,
        aneesh.kumar@kernel.org, kernel-team@android.com,
        linux-kernel@vger.kernel.org, Vincent Donnefort <vdonnefort@google.com>
Content-Type: text/plain; charset="UTF-8"

Add load/unload callbacks, called for each page admitted into the
ring-buffer. This will later be useful for the pKVM hypervisor, which
uses a different VA space and needs to dynamically map/unmap the
ring-buffer pages.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/include/linux/simple_ring_buffer.h b/include/linux/simple_ring_buffer.h
index f324df2f875b..ecd0e988c699 100644
--- a/include/linux/simple_ring_buffer.h
+++ b/include/linux/simple_ring_buffer.h
@@ -110,4 +110,11 @@ int simple_ring_buffer_reset(struct simple_rb_per_cpu *cpu_buffer);
  */
 int simple_ring_buffer_swap_reader_page(struct simple_rb_per_cpu *cpu_buffer);
 
+int __simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer,
+                              struct simple_buffer_page *bpages,
+                              const struct ring_buffer_desc *desc,
+                              void *(*load_page)(unsigned long va),
+                              void (*unload_page)(void *va));
+void __simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer,
+                                 void (*unload_page)(void *));
 #endif
diff --git a/kernel/trace/simple_ring_buffer.c b/kernel/trace/simple_ring_buffer.c
index 1d1f40c8c6d8..f01386975266 100644
--- a/kernel/trace/simple_ring_buffer.c
+++ b/kernel/trace/simple_ring_buffer.c
@@ -71,7 +71,7 @@ static void simple_bpage_reset(struct simple_buffer_page *bpage)
 	local_set(&bpage->page->commit, 0);
 }
 
-static void simple_bpage_init(struct simple_buffer_page *bpage, unsigned long page)
+static void simple_bpage_init(struct simple_buffer_page *bpage, void *page)
 {
 	INIT_LIST_HEAD(&bpage->link);
 	bpage->page = (struct buffer_data_page *)page;
@@ -342,10 +342,15 @@ int simple_ring_buffer_reset(struct simple_rb_per_cpu *cpu_buffer)
 }
 EXPORT_SYMBOL_GPL(simple_ring_buffer_reset);
 
-int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
-                            const struct ring_buffer_desc *desc)
+int __simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer,
+                              struct simple_buffer_page *bpages,
+                              const struct ring_buffer_desc *desc,
+                              void *(*load_page)(unsigned long va),
+                              void (*unload_page)(void *va))
 {
 	struct simple_buffer_page *bpage = bpages;
+	int ret = 0;
+	void *page;
 	int i;
 
 	/* At least 1 reader page and two pages in the ring-buffer */
@@ -354,15 +359,22 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 
 	memset(cpu_buffer, 0, sizeof(*cpu_buffer));
 
-	cpu_buffer->bpages = bpages;
+	cpu_buffer->meta = load_page(desc->meta_va);
+	if (!cpu_buffer->meta)
+		return -EINVAL;
 
-	cpu_buffer->meta = (void *)desc->meta_va;
 	memset(cpu_buffer->meta, 0, sizeof(*cpu_buffer->meta));
 	cpu_buffer->meta->meta_page_size = PAGE_SIZE;
 	cpu_buffer->meta->nr_subbufs = cpu_buffer->nr_pages;
 
 	/* The reader page is not part of the ring initially */
-	simple_bpage_init(bpage, desc->page_va[0]);
+	page = load_page(desc->page_va[0]);
+	if (!page) {
+		unload_page(cpu_buffer->meta);
+		return -EINVAL;
+	}
+
+	simple_bpage_init(bpage, page);
 	bpage->id = 0;
 	cpu_buffer->nr_pages = 1;
@@ -372,7 +384,13 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 	cpu_buffer->head_page = bpage + 1;
 
 	for (i = 1; i < desc->nr_page_va; i++) {
-		simple_bpage_init(++bpage, desc->page_va[i]);
+		page = load_page(desc->page_va[i]);
+		if (!page) {
+			ret = -EINVAL;
+			break;
+		}
+
+		simple_bpage_init(++bpage, page);
 
 		bpage->link.next = &(bpage + 1)->link;
 		bpage->link.prev = &(bpage - 1)->link;
@@ -381,6 +399,14 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 		cpu_buffer->nr_pages = i + 1;
 	}
 
+	if (ret) {
+		for (i--; i >= 0; i--)
+			unload_page((void *)desc->page_va[i]);
+		unload_page(cpu_buffer->meta);
+
+		return ret;
+	}
+
 	/* Close the ring */
 	bpage->link.next = &cpu_buffer->tail_page->link;
 	cpu_buffer->tail_page->link.prev = &bpage->link;
@@ -388,21 +414,48 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 	/* The last init'ed page points to the head page */
 	simple_bpage_set_head_link(bpage);
 
+	cpu_buffer->bpages = bpages;
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(simple_ring_buffer_init);
 
-void simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer)
+static void *__load_page(unsigned long page)
 {
+	return (void *)page;
+}
+
+static void __unload_page(void *page) { }
+
+int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
+                            const struct ring_buffer_desc *desc)
+{
+	return __simple_ring_buffer_init(cpu_buffer, bpages, desc, __load_page, __unload_page);
+}
+
+void __simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer,
+                                 void (*unload_page)(void *))
+{
+	int p;
+
 	if (!simple_rb_loaded(cpu_buffer))
 		return;
 
 	simple_rb_enable_tracing(cpu_buffer, false);
 
+	unload_page(cpu_buffer->meta);
+	for (p = 0; p < cpu_buffer->nr_pages; p++)
+		unload_page(cpu_buffer->bpages[p].page);
+
 	cpu_buffer->bpages = NULL;
 }
 EXPORT_SYMBOL_GPL(simple_ring_buffer_unload);
 
+void simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer)
+{
+	return __simple_ring_buffer_unload(cpu_buffer, __unload_page);
+}
+
 int simple_ring_buffer_enable_tracing(struct simple_rb_per_cpu *cpu_buffer, bool enable)
 {
 	if (!simple_rb_loaded(cpu_buffer))
-- 
2.52.0.457.g6b5491de43-goog
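
Not part of the patch itself: a minimal usage sketch of the new hooks. Only
__simple_ring_buffer_init()/__simple_ring_buffer_unload() and their callback
signatures come from this series; the my_map_page()/my_unmap_page() helpers
below are hypothetical stand-ins for whatever mechanism a caller living in a
different VA space (e.g. the pKVM hypervisor) would use to map and unmap the
pages described by the ring_buffer_desc.

#include <linux/simple_ring_buffer.h>

/* Hypothetical mapping helpers, NOT part of this series. */
extern void *my_map_page(unsigned long kern_va);
extern void my_unmap_page(void *va);

static void *my_load_page(unsigned long kern_va)
{
	/* Map the shared page into the local VA space; NULL on failure. */
	return my_map_page(kern_va);
}

static void my_unload_page(void *va)
{
	/* Drop the mapping created by my_load_page(). */
	my_unmap_page(va);
}

static int my_buffer_setup(struct simple_rb_per_cpu *cpu_buffer,
                           struct simple_buffer_page *bpages,
                           const struct ring_buffer_desc *desc)
{
	/* The meta page and every sub-buffer page go through my_load_page(). */
	return __simple_ring_buffer_init(cpu_buffer, bpages, desc,
					 my_load_page, my_unload_page);
}

static void my_buffer_teardown(struct simple_rb_per_cpu *cpu_buffer)
{
	/* Unloads the meta page and every loaded sub-buffer page. */
	__simple_ring_buffer_unload(cpu_buffer, my_unload_page);
}

The in-kernel simple_ring_buffer_init()/simple_ring_buffer_unload() wrappers
keep the previous behaviour by passing __load_page()/__unload_page(), which
simply reuse the kernel VA as-is.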