Date: Thu, 19 Feb 2026 15:02:54 +0000
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
	linux-trace-kernel@vger.kernel.org, maz@kernel.org, oliver.upton@linux.dev,
	joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	jstultz@google.com, qperret@google.com, will@kernel.org,
	aneesh.kumar@kernel.org, kernel-team@android.com,
	linux-kernel@vger.kernel.org, Vincent Donnefort <vdonnefort@google.com>
Message-ID: <20260219150307.14538-18-vdonnefort@google.com>
In-Reply-To: <20260219150307.14538-1-vdonnefort@google.com>
References: <20260219150307.14538-1-vdonnefort@google.com>
Subject: [PATCH v12 17/30] tracing: load/unload page callbacks for simple_ring_buffer

Add load/unload callbacks, invoked for each page admitted into the
ring-buffer. This will later be useful for the pKVM hypervisor, which
uses a different VA space and needs to dynamically map/unmap the
ring-buffer pages.

Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/include/linux/simple_ring_buffer.h b/include/linux/simple_ring_buffer.h
index 2c4c0ae336bc..21aec556293e 100644
--- a/include/linux/simple_ring_buffer.h
+++ b/include/linux/simple_ring_buffer.h
@@ -54,4 +54,12 @@ int simple_ring_buffer_reset(struct simple_rb_per_cpu *cpu_buffer);
 
 int simple_ring_buffer_swap_reader_page(struct simple_rb_per_cpu *cpu_buffer);
 
+int simple_ring_buffer_init_mm(struct simple_rb_per_cpu *cpu_buffer,
+			       struct simple_buffer_page *bpages,
+			       const struct ring_buffer_desc *desc,
+			       void *(*load_page)(unsigned long va),
+			       void (*unload_page)(void *va));
+
+void simple_ring_buffer_unload_mm(struct simple_rb_per_cpu *cpu_buffer,
+				  void (*unload_page)(void *));
 #endif
diff --git a/kernel/trace/simple_ring_buffer.c b/kernel/trace/simple_ring_buffer.c
index 15df9781411b..02af2297ae5a 100644
--- a/kernel/trace/simple_ring_buffer.c
+++ b/kernel/trace/simple_ring_buffer.c
@@ -71,7 +71,7 @@ static void simple_bpage_reset(struct simple_buffer_page *bpage)
 	local_set(&bpage->page->commit, 0);
 }
 
-static void simple_bpage_init(struct simple_buffer_page *bpage, unsigned long page)
+static void simple_bpage_init(struct simple_buffer_page *bpage, void *page)
 {
 	INIT_LIST_HEAD(&bpage->link);
 	bpage->page = (struct buffer_data_page *)page;
@@ -372,18 +372,15 @@ int simple_ring_buffer_reset(struct simple_rb_per_cpu *cpu_buffer)
 }
 EXPORT_SYMBOL_GPL(simple_ring_buffer_reset);
 
-/**
- * simple_ring_buffer_init - Init @cpu_buffer based on @desc
- * @cpu_buffer: A simple_rb_per_cpu buffer to init, allocated by the caller.
- * @bpages: Array of simple_buffer_pages, with as many elements as @desc->nr_page_va
- * @desc: A ring_buffer_desc
- *
- * Returns 0 on success or -EINVAL if the content of @desc is invalid
- */
-int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
-			    const struct ring_buffer_desc *desc)
+int simple_ring_buffer_init_mm(struct simple_rb_per_cpu *cpu_buffer,
+			       struct simple_buffer_page *bpages,
+			       const struct ring_buffer_desc *desc,
+			       void *(*load_page)(unsigned long va),
+			       void (*unload_page)(void *va))
 {
 	struct simple_buffer_page *bpage = bpages;
+	int ret = 0;
+	void *page;
 	int i;
 
 	/* At least 1 reader page and two pages in the ring-buffer */
@@ -392,15 +389,22 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 
 	memset(cpu_buffer, 0, sizeof(*cpu_buffer));
 
-	cpu_buffer->bpages = bpages;
+	cpu_buffer->meta = load_page(desc->meta_va);
+	if (!cpu_buffer->meta)
+		return -EINVAL;
 
-	cpu_buffer->meta = (void *)desc->meta_va;
 	memset(cpu_buffer->meta, 0, sizeof(*cpu_buffer->meta));
 	cpu_buffer->meta->meta_page_size = PAGE_SIZE;
 	cpu_buffer->meta->nr_subbufs = cpu_buffer->nr_pages;
 
 	/* The reader page is not part of the ring initially */
-	simple_bpage_init(bpage, desc->page_va[0]);
+	page = load_page(desc->page_va[0]);
+	if (!page) {
+		unload_page(cpu_buffer->meta);
+		return -EINVAL;
+	}
+
+	simple_bpage_init(bpage, page);
 	bpage->id = 0;
 
 	cpu_buffer->nr_pages = 1;
@@ -410,7 +414,13 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 	cpu_buffer->head_page = bpage + 1;
 
 	for (i = 1; i < desc->nr_page_va; i++) {
-		simple_bpage_init(++bpage, desc->page_va[i]);
+		page = load_page(desc->page_va[i]);
+		if (!page) {
+			ret = -EINVAL;
+			break;
+		}
+
+		simple_bpage_init(++bpage, page);
 
 		bpage->link.next = &(bpage + 1)->link;
 		bpage->link.prev = &(bpage - 1)->link;
@@ -419,6 +429,14 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 		cpu_buffer->nr_pages = i + 1;
 	}
 
+	if (ret) {
+		for (i--; i >= 0; i--)
+			unload_page(bpages[i].page);
+		unload_page(cpu_buffer->meta);
+
+		return ret;
+	}
+
 	/* Close the ring */
 	bpage->link.next = &cpu_buffer->tail_page->link;
 	cpu_buffer->tail_page->link.prev = &bpage->link;
@@ -426,23 +444,58 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 	/* The last init'ed page points to the head page */
 	simple_bpage_set_head_link(bpage);
 
+	cpu_buffer->bpages = bpages;
+
 	return 0;
 }
-EXPORT_SYMBOL_GPL(simple_ring_buffer_init);
+
+static void *__load_page(unsigned long page)
+{
+	return (void *)page;
+}
+
+static void __unload_page(void *page) { }
 
 /**
- * simple_ring_buffer_unload - Prepare @cpu_buffer for deletion
- * @cpu_buffer: A simple_rb_per_cpu that will be deleted.
+ * simple_ring_buffer_init - Init @cpu_buffer based on @desc
+ * @cpu_buffer: A simple_rb_per_cpu buffer to init, allocated by the caller.
+ * @bpages: Array of simple_buffer_pages, with as many elements as @desc->nr_page_va
+ * @desc: A ring_buffer_desc
+ *
+ * Returns 0 on success or -EINVAL if the content of @desc is invalid
  */
-void simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer)
+int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
+			    const struct ring_buffer_desc *desc)
+{
+	return simple_ring_buffer_init_mm(cpu_buffer, bpages, desc, __load_page, __unload_page);
+}
+EXPORT_SYMBOL_GPL(simple_ring_buffer_init);
+
+void simple_ring_buffer_unload_mm(struct simple_rb_per_cpu *cpu_buffer,
+				  void (*unload_page)(void *))
 {
+	int p;
+
 	if (!simple_rb_loaded(cpu_buffer))
 		return;
 
 	simple_rb_enable_tracing(cpu_buffer, false);
 
+	unload_page(cpu_buffer->meta);
+
+	for (p = 0; p < cpu_buffer->nr_pages; p++)
+		unload_page(cpu_buffer->bpages[p].page);
+
 	cpu_buffer->bpages = NULL;
 }
+
+/**
+ * simple_ring_buffer_unload - Prepare @cpu_buffer for deletion
+ * @cpu_buffer: A simple_rb_per_cpu that will be deleted.
+ */
+void simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer)
+{
+	return simple_ring_buffer_unload_mm(cpu_buffer, __unload_page);
+}
 EXPORT_SYMBOL_GPL(simple_ring_buffer_unload);
 
 /**
-- 
2.53.0.335.g19a08e0c02-goog
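
For context, a minimal sketch of how a caller running in its own VA space
(such as the pKVM hypervisor this series targets) might plug into the new
hooks. The my_map_page()/my_unmap_page() helpers are hypothetical
placeholders for whatever the environment uses to map a kernel page into
its local address space; they are not existing kernel APIs:

/*
 * Usage sketch (not part of the patch). The load callback maps each page
 * described by @desc before the ring is assembled; the unload callback
 * undoes the mapping on teardown or on a partial-init failure.
 */
#include <linux/simple_ring_buffer.h>

extern void *my_map_page(unsigned long kern_va);	/* hypothetical */
extern void my_unmap_page(void *local_va);		/* hypothetical */

static void *my_load_page(unsigned long va)
{
	/* Return the local VA the ring-buffer code should use, or NULL. */
	return my_map_page(va);
}

static void my_unload_page(void *va)
{
	my_unmap_page(va);
}

static int my_buffer_load(struct simple_rb_per_cpu *cpu_buffer,
			  struct simple_buffer_page *bpages,
			  const struct ring_buffer_desc *desc)
{
	/* Fails with -EINVAL if any page cannot be mapped. */
	return simple_ring_buffer_init_mm(cpu_buffer, bpages, desc,
					  my_load_page, my_unload_page);
}

static void my_buffer_unload(struct simple_rb_per_cpu *cpu_buffer)
{
	/* Unmaps the meta page and every sub-buffer page. */
	simple_ring_buffer_unload_mm(cpu_buffer, my_unload_page);
}

The plain simple_ring_buffer_init()/simple_ring_buffer_unload() entry
points keep their old behaviour by passing the identity __load_page() and
the no-op __unload_page().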