From: Vincent Donnefort <vdonnefort@google.com>
Date: Tue, 6 May 2025 17:48:08 +0100
Subject: [PATCH v4 12/24] tracing: load/unload page callbacks for simple_ring_buffer
Message-ID: <20250506164820.515876-13-vdonnefort@google.com>
In-Reply-To: <20250506164820.515876-1-vdonnefort@google.com>
References: <20250506164820.515876-1-vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
	linux-trace-kernel@vger.kernel.org, maz@kernel.org,
	oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	jstultz@google.com, qperret@google.com, will@kernel.org,
	kernel-team@android.com, linux-kernel@vger.kernel.org,
	Vincent Donnefort <vdonnefort@google.com>

Add load/unload callbacks, invoked for each page admitted into the
ring-buffer. This will later be useful for the pKVM hypervisor, which
uses a different VA space and needs to dynamically map/unmap the
ring-buffer pages.
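For illustration, a minimal sketch of how a backend living in its own
VA space might plug into the new entry points. The my_map_page() and
my_unmap_page() helpers below are hypothetical placeholders for
whatever mapping primitives such a backend actually provides, not part
of this series:

  #include <linux/simple_ring_buffer.h>

  /* Hypothetical mapping primitives -- placeholders, not a real API. */
  extern void *my_map_page(unsigned long va);
  extern void my_unmap_page(void *va);

  /* Map a shared page into the local VA space; return NULL on failure. */
  static void *my_load_page(unsigned long va)
  {
  	return my_map_page(va);
  }

  static void my_unload_page(void *va)
  {
  	my_unmap_page(va);
  }

  static int my_rb_load(struct simple_rb_per_cpu *cpu_buffer,
  			struct simple_buffer_page *bpages,
  			const struct ring_buffer_desc *desc)
  {
  	/* Each page is handed to my_load_page() before being linked in. */
  	return __simple_ring_buffer_init(cpu_buffer, bpages, desc,
  					 my_load_page, my_unload_page);
  }

  static void my_rb_unload(struct simple_rb_per_cpu *cpu_buffer)
  {
  	/* Tears down tracing, then unmaps the meta page and each bpage. */
  	__simple_ring_buffer_unload(cpu_buffer, my_unload_page);
  }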
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
---

diff --git a/include/linux/simple_ring_buffer.h b/include/linux/simple_ring_buffer.h
index 6cf8486d46e2..10e385d347a0 100644
--- a/include/linux/simple_ring_buffer.h
+++ b/include/linux/simple_ring_buffer.h
@@ -46,4 +46,12 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 int simple_ring_buffer_enable_tracing(struct simple_rb_per_cpu *cpu_buffer, bool enable);
 int simple_ring_buffer_swap_reader_page(struct simple_rb_per_cpu *cpu_buffer);
 int simple_ring_buffer_reset(struct simple_rb_per_cpu *cpu_buffer);
+
+int __simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer,
+			      struct simple_buffer_page *bpages,
+			      const struct ring_buffer_desc *desc,
+			      void *(*load_page)(unsigned long va),
+			      void (*unload_page)(void *va));
+void __simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer,
+				 void (*unload_page)(void *));
 #endif
diff --git a/kernel/trace/simple_ring_buffer.c b/kernel/trace/simple_ring_buffer.c
index da9ea42b9926..54c8f221f693 100644
--- a/kernel/trace/simple_ring_buffer.c
+++ b/kernel/trace/simple_ring_buffer.c
@@ -55,7 +55,7 @@ static void simple_bpage_reset(struct simple_buffer_page *bpage)
 	local_set(&bpage->page->commit, 0);
 }
 
-static void simple_bpage_init(struct simple_buffer_page *bpage, unsigned long page)
+static void simple_bpage_init(struct simple_buffer_page *bpage, void *page)
 {
 	INIT_LIST_HEAD(&bpage->list);
 	bpage->page = (struct buffer_data_page *)page;
@@ -282,10 +282,14 @@ int simple_ring_buffer_reset(struct simple_rb_per_cpu *cpu_buffer)
 	return 0;
 }
 
-int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
-			    const struct ring_buffer_desc *desc)
+int __simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
+			      const struct ring_buffer_desc *desc,
+			      void *(*load_page)(unsigned long va),
+			      void (*unload_page)(void *va))
 {
 	struct simple_buffer_page *bpage = bpages;
+	int ret = 0;
+	void *page;
 	int i;
 
 	/* At least 1 reader page and one head */
@@ -294,15 +298,22 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 
 	memset(cpu_buffer, 0, sizeof(*cpu_buffer));
 
-	cpu_buffer->bpages = bpages;
+	cpu_buffer->meta = load_page(desc->meta_va);
+	if (!cpu_buffer->meta)
+		return -EINVAL;
 
-	cpu_buffer->meta = (void *)desc->meta_va;
 	memset(cpu_buffer->meta, 0, sizeof(*cpu_buffer->meta));
 	cpu_buffer->meta->meta_page_size = PAGE_SIZE;
 	cpu_buffer->meta->nr_subbufs = cpu_buffer->nr_pages;
 
 	/* The reader page is not part of the ring initially */
-	simple_bpage_init(bpage, desc->page_va[0]);
+	page = load_page(desc->page_va[0]);
+	if (!page) {
+		unload_page(cpu_buffer->meta);
+		return -EINVAL;
+	}
+
+	simple_bpage_init(bpage, page);
 	bpage->id = 0;
 
 	cpu_buffer->nr_pages = 1;
@@ -312,7 +323,13 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 	cpu_buffer->head_page = bpage + 1;
 
 	for (i = 1; i < desc->nr_page_va; i++) {
-		simple_bpage_init(++bpage, desc->page_va[i]);
+		page = load_page(desc->page_va[i]);
+		if (!page) {
+			ret = -EINVAL;
+			break;
+		}
+
+		simple_bpage_init(++bpage, page);
 
 		bpage->list.next = &(bpage + 1)->list;
 		bpage->list.prev = &(bpage - 1)->list;
@@ -321,6 +338,14 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 		cpu_buffer->nr_pages = i + 1;
 	}
 
+	if (ret) {
+		for (i--; i >= 0; i--)
+			unload_page((void *)desc->page_va[i]);
+		unload_page(cpu_buffer->meta);
+
+		return ret;
+	}
+
 	/* Close the ring */
 	bpage->list.next = &cpu_buffer->tail_page->list;
 	cpu_buffer->tail_page->list.prev = &bpage->list;
@@ -328,19 +353,46 @@ int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_
 	/* The last init'ed page points to the head page */
 	simple_bpage_set_link_flag(bpage, SIMPLE_RB_LINK_HEAD);
 
+	cpu_buffer->bpages = bpages;
+
 	return 0;
 }
 
-void simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer)
+static void *__load_page(unsigned long page)
 {
+	return (void *)page;
+}
+
+static void __unload_page(void *page) { }
+
+int simple_ring_buffer_init(struct simple_rb_per_cpu *cpu_buffer, struct simple_buffer_page *bpages,
+			    const struct ring_buffer_desc *desc)
+{
+	return __simple_ring_buffer_init(cpu_buffer, bpages, desc, __load_page, __unload_page);
+}
+
+void __simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer,
+				 void (*unload_page)(void *))
+{
+	int p;
+
 	if (!simple_rb_loaded(cpu_buffer))
 		return;
 
 	simple_rb_enable_tracing(cpu_buffer, false);
 
+	unload_page(cpu_buffer->meta);
+	for (p = 0; p < cpu_buffer->nr_pages; p++)
+		unload_page(cpu_buffer->bpages[p].page);
+
 	cpu_buffer->bpages = 0;
 }
 
+void simple_ring_buffer_unload(struct simple_rb_per_cpu *cpu_buffer)
+{
+	return __simple_ring_buffer_unload(cpu_buffer, __unload_page);
+}
+
 int simple_ring_buffer_enable_tracing(struct simple_rb_per_cpu *cpu_buffer, bool enable)
 {
 	if (!simple_rb_loaded(cpu_buffer))
-- 
2.49.0.967.g6a0df3ecc3-goog
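
A note on the callback contract, illustrated with a hypothetical pair
of counting callbacks (not part of the patch): every successful
load_page() is balanced by exactly one unload_page() call, both when
__simple_ring_buffer_init() fails midway (the error path unwinds the
meta page and every page loaded so far) and across a normal
load/unload cycle.

  #include <linux/atomic.h>

  /* Illustrative only: identity load, as in the kernel-VA default,
   * plus a counter to observe the load/unload balance. */
  static atomic_t nr_loaded = ATOMIC_INIT(0);

  static void *counting_load_page(unsigned long va)
  {
  	atomic_inc(&nr_loaded);
  	return (void *)va;
  }

  static void counting_unload_page(void *va)
  {
  	atomic_dec(&nr_loaded);
  }

Whether init succeeds and is later followed by
__simple_ring_buffer_unload(), or fails partway through and unwinds
itself, nr_loaded is expected to return to zero once teardown
completes.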