From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 6 May 2025 17:48:14 +0100
In-Reply-To: <20250506164820.515876-1-vdonnefort@google.com>
Mime-Version:
1.0
References: <20250506164820.515876-1-vdonnefort@google.com>
Message-ID: <20250506164820.515876-19-vdonnefort@google.com>
Subject: [PATCH v4 18/24] KVM: arm64: Add trace remote for the pKVM hyp
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    linux-trace-kernel@vger.kernel.org, maz@kernel.org, oliver.upton@linux.dev,
    joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
    jstultz@google.com, qperret@google.com, will@kernel.org,
    kernel-team@android.com, linux-kernel@vger.kernel.org,
    Vincent Donnefort
Content-Type: text/plain; charset="UTF-8"

When running with KVM protected mode, the hypervisor is able to generate
events into tracefs-compatible ring buffers. Create a trace remote so the
kernel can read those buffers.

This currently doesn't provide any event support; that will come later.
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index f65eba00c1c9..f7d1d8987cce 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -87,6 +87,7 @@ config PKVM_TRACING
 	bool
 	depends on KVM
 	depends on TRACING
+	select TRACE_REMOTE
 	select SIMPLE_RING_BUFFER
 	default y
 
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 209bc76263f1..fffbbc172bcc 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -29,6 +29,8 @@ kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
 kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o
 kvm-$(CONFIG_PTDUMP_STAGE2_DEBUGFS) += ptdump.o
 
+kvm-$(CONFIG_PKVM_TRACING) += hyp_trace.o
+
 always-y := hyp_constants.h hyp-constants.s
 
 define rule_gen_hyp_constants
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 58e8119f66bd..bb800cf55cfd 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -25,6 +25,7 @@
 
 #define CREATE_TRACE_POINTS
 #include "trace_arm.h"
+#include "hyp_trace.h"
 
 #include
 #include
@@ -2355,6 +2356,9 @@ static int __init init_subsystems(void)
 
 	kvm_register_perf_callbacks(NULL);
 
+	err = hyp_trace_init();
+	if (err)
+		kvm_err("Failed to initialize Hyp tracing\n");
 out:
 	if (err)
 		hyp_cpu_pm_exit();
diff --git a/arch/arm64/kvm/hyp_trace.c b/arch/arm64/kvm/hyp_trace.c
new file mode 100644
index 000000000000..6a2f02cd6c7f
--- /dev/null
+++ b/arch/arm64/kvm/hyp_trace.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Google LLC
+ * Author: Vincent Donnefort
+ */
+
+#include
+#include
+
+#include
+#include
+
+#include "hyp_trace.h"
+
+/* Accesses to this struct within the trace_remote_callbacks are protected by the trace_remote lock */
+struct hyp_trace_buffer {
+	struct hyp_trace_desc	*desc;
+	size_t			desc_size;
+} trace_buffer;
+
+static int hyp_trace_buffer_alloc_bpages_backing(struct hyp_trace_buffer *trace_buffer, size_t size)
+{
+	int nr_bpages = (PAGE_ALIGN(size) / PAGE_SIZE) + 1;
+	size_t backing_size;
+	void *start;
+
+	backing_size = PAGE_ALIGN(sizeof(struct simple_buffer_page) * nr_bpages *
+				  num_possible_cpus());
+
+	start = alloc_pages_exact(backing_size, GFP_KERNEL_ACCOUNT);
+	if (!start)
+		return -ENOMEM;
+
+	trace_buffer->desc->bpages_backing_start = (unsigned long)start;
+	trace_buffer->desc->bpages_backing_size = backing_size;
+
+	return 0;
+}
+
+static void hyp_trace_buffer_free_bpages_backing(struct hyp_trace_buffer *trace_buffer)
+{
+	free_pages_exact((void *)trace_buffer->desc->bpages_backing_start,
+			 trace_buffer->desc->bpages_backing_size);
+}
+
+static int __load_page(unsigned long va)
+{
+	return kvm_call_hyp_nvhe(__pkvm_host_share_hyp, virt_to_pfn((void *)va), 1);
+}
+
+static void __unload_page(unsigned long va)
+{
+	WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, virt_to_pfn((void *)va), 1));
+}
+
+static void hyp_trace_buffer_unload_pages(struct hyp_trace_buffer *trace_buffer, int last_cpu)
+{
+	struct ring_buffer_desc *rb_desc;
+	int cpu, p;
+
+	for_each_ring_buffer_desc(rb_desc, cpu, &trace_buffer->desc->trace_buffer_desc) {
+		if (cpu > last_cpu)
+			break;
+
+		__unload_page(rb_desc->meta_va);
+		for (p = 0; p < rb_desc->nr_page_va; p++)
+			__unload_page(rb_desc->page_va[p]);
+	}
+}
+
+static int hyp_trace_buffer_load_pages(struct hyp_trace_buffer *trace_buffer)
+{
+	struct ring_buffer_desc *rb_desc;
+	int cpu, p, ret = 0;
+
+	for_each_ring_buffer_desc(rb_desc, cpu, &trace_buffer->desc->trace_buffer_desc) {
+		ret = __load_page(rb_desc->meta_va);
+		if (ret)
+			break;
+
+		for (p = 0; p < rb_desc->nr_page_va; p++) {
+			ret = __load_page(rb_desc->page_va[p]);
+			if (ret)
+				break;
+		}
+
+		if (ret) {
+			for (p--; p >= 0; p--)
+				__unload_page(rb_desc->page_va[p]);
+			__unload_page(rb_desc->meta_va);
+			break;
+		}
+	}
+
+	if (ret)
+		hyp_trace_buffer_unload_pages(trace_buffer, cpu - 1);
+
+	return ret;
+}
+
+static struct trace_buffer_desc *hyp_trace_load(unsigned long size, void *priv)
+{
+	struct hyp_trace_buffer *trace_buffer = priv;
+	struct hyp_trace_desc *desc;
+	size_t desc_size;
+	int ret;
+
+	if (WARN_ON(trace_buffer->desc))
+		return NULL;
+
+	desc_size = trace_buffer_desc_size(size, num_possible_cpus());
+	if (desc_size == SIZE_MAX)
+		return NULL;
+
+	/*
+	 * The hypervisor will unmap the descriptor from the host to protect the reading. Page
+	 * granularity for the allocation ensures no other useful data will be unmapped.
+	 */
+	desc_size = PAGE_ALIGN(desc_size);
+	desc = (struct hyp_trace_desc *)alloc_pages_exact(desc_size, GFP_KERNEL);
+	if (!desc)
+		return NULL;
+
+	trace_buffer->desc = desc;
+	trace_buffer->desc_size = desc_size;
+
+	ret = hyp_trace_buffer_alloc_bpages_backing(trace_buffer, size);
+	if (ret)
+		goto err_free_desc;
+
+	ret = trace_remote_alloc_buffer(&desc->trace_buffer_desc, size, cpu_possible_mask);
+	if (ret)
+		goto err_free_backing;
+
+	ret = hyp_trace_buffer_load_pages(trace_buffer);
+	if (ret)
+		goto err_free_buffer;
+
+	ret = kvm_call_hyp_nvhe(__pkvm_load_tracing, (unsigned long)desc, desc_size);
+	if (ret)
+		goto err_unload_pages;
+
+	return &desc->trace_buffer_desc;
+
+err_unload_pages:
+	hyp_trace_buffer_unload_pages(trace_buffer, INT_MAX);
+
+err_free_buffer:
+	trace_remote_free_buffer(&desc->trace_buffer_desc);
+
+err_free_backing:
+	hyp_trace_buffer_free_bpages_backing(trace_buffer);
+
+err_free_desc:
+	free_pages_exact(desc, desc_size);
+	trace_buffer->desc = NULL;
+
+	return NULL;
+}
+
+static void hyp_trace_unload(struct trace_buffer_desc *desc, void *priv)
+{
+	struct hyp_trace_buffer *trace_buffer = priv;
+
+	if (WARN_ON(desc != &trace_buffer->desc->trace_buffer_desc))
+		return;
+
+	kvm_call_hyp_nvhe(__pkvm_unload_tracing);
+	hyp_trace_buffer_unload_pages(trace_buffer, INT_MAX);
+	trace_remote_free_buffer(desc);
+	hyp_trace_buffer_free_bpages_backing(trace_buffer);
+	free_pages_exact(trace_buffer->desc, trace_buffer->desc_size);
+	trace_buffer->desc = NULL;
+}
+
+static int hyp_trace_enable_tracing(bool enable, void *priv)
+{
+	return kvm_call_hyp_nvhe(__pkvm_enable_tracing, enable);
+}
+
+static int hyp_trace_swap_reader_page(unsigned int cpu, void *priv)
+{
+	return kvm_call_hyp_nvhe(__pkvm_swap_reader_tracing, cpu);
+}
+
+static int hyp_trace_reset(unsigned int cpu, void *priv)
+{
+	return 0;
+}
+
+static int hyp_trace_enable_event(unsigned short id, bool enable, void *priv)
+{
+	return 0;
+}
+
+static struct trace_remote_callbacks trace_remote_callbacks = {
+	.load_trace_buffer	= hyp_trace_load,
+	.unload_trace_buffer	= hyp_trace_unload,
+	.enable_tracing		= hyp_trace_enable_tracing,
+	.swap_reader_page	= hyp_trace_swap_reader_page,
+	.reset			= hyp_trace_reset,
+	.enable_event		= hyp_trace_enable_event,
+};
+
+int hyp_trace_init(void)
+{
+	if (!is_protected_kvm_enabled())
+		return 0;
+
+	return trace_remote_register("hypervisor", &trace_remote_callbacks, &trace_buffer, NULL, 0);
+}
diff --git a/arch/arm64/kvm/hyp_trace.h b/arch/arm64/kvm/hyp_trace.h
new file mode 100644
index 000000000000..54d8b1f44ca5
--- /dev/null
+++ b/arch/arm64/kvm/hyp_trace.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ARM64_KVM_HYP_TRACE_H__
+#define __ARM64_KVM_HYP_TRACE_H__
+
+#ifdef CONFIG_PKVM_TRACING
+int hyp_trace_init(void);
+#else
+static inline int hyp_trace_init(void) { return 0; }
+#endif
+#endif
-- 
2.49.0.967.g6a0df3ecc3-goog