From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 3 Oct 2025 14:38:20 +0100
In-Reply-To: <20251003133825.2068970-1-vdonnefort@google.com>
References: <20251003133825.2068970-1-vdonnefort@google.com>
Message-ID: <20251003133825.2068970-24-vdonnefort@google.com>
X-Mailer: git-send-email 2.51.0.618.g983fd99d29-goog
Subject: [PATCH v7 23/28] KVM: arm64: Sync boot clock with the pKVM hyp
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
	linux-trace-kernel@vger.kernel.org, maz@kernel.org,
	oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	jstultz@google.com, qperret@google.com, will@kernel.org,
	aneesh.kumar@kernel.org, kernel-team@android.com,
	linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

Configure the pKVM hypervisor tracing clock with the kernel boot clock.

For tracing purposes, the boot clock is interesting as it doesn't stop on
suspend. However, it is corrected on a regular basis, which implies we
need to re-evaluate it every once in a while.

Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
Cc: Christopher S. Hall
Cc: Richard Cochran
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 437ac948d136..d122d79718a0 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -87,6 +87,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
 	__KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid,
+	__KVM_HOST_SMCCC_FUNC___pkvm_update_clock_tracing,
 	__KVM_HOST_SMCCC_FUNC___pkvm_load_tracing,
 	__KVM_HOST_SMCCC_FUNC___pkvm_unload_tracing,
 	__KVM_HOST_SMCCC_FUNC___pkvm_enable_tracing,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/trace.h b/arch/arm64/kvm/hyp/include/nvhe/trace.h
index 996e90c0974f..4e11dcdf049b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/trace.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/trace.h
@@ -7,6 +7,7 @@
 void *tracing_reserve_entry(unsigned long length);
 void tracing_commit_entry(void);
 
+void __pkvm_update_clock_tracing(u32 mult, u32 shift, u64 epoch_ns, u64 epoch_cyc);
 int __pkvm_load_tracing(unsigned long desc_va, size_t desc_size);
 void __pkvm_unload_tracing(void);
 int __pkvm_enable_tracing(bool enable);
@@ -15,6 +16,8 @@ int __pkvm_swap_reader_tracing(unsigned int cpu);
 static inline void *tracing_reserve_entry(unsigned long length) { return NULL; }
 static inline void tracing_commit_entry(void) { }
 
+static inline
+void __pkvm_update_clock_tracing(u32 mult, u32 shift, u64 epoch_ns, u64 epoch_cyc) { }
 static inline int __pkvm_load_tracing(unsigned long desc_va, size_t desc_size) { return -ENODEV; }
 static inline void __pkvm_unload_tracing(void) { }
 static inline int __pkvm_enable_tracing(bool enable) { return -ENODEV; }
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 02b2fdd9a8e4..36a263422e4e 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -574,6 +574,18 @@ static void handle___pkvm_teardown_vm(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = __pkvm_teardown_vm(handle);
 }
 
+static void handle___pkvm_update_clock_tracing(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u32, mult, host_ctxt, 1);
+	DECLARE_REG(u32, shift, host_ctxt, 2);
+	DECLARE_REG(u64, epoch_ns, host_ctxt, 3);
+	DECLARE_REG(u64, epoch_cyc, host_ctxt, 4);
+
+	__pkvm_update_clock_tracing(mult, shift, epoch_ns, epoch_cyc);
+
+	cpu_reg(host_ctxt, 1) = 0;
+}
+
 static void handle___pkvm_load_tracing(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(unsigned long, desc_hva, host_ctxt, 1);
@@ -642,6 +654,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_vcpu_load),
 	HANDLE_FUNC(__pkvm_vcpu_put),
 	HANDLE_FUNC(__pkvm_tlb_flush_vmid),
+	HANDLE_FUNC(__pkvm_update_clock_tracing),
 	HANDLE_FUNC(__pkvm_load_tracing),
 	HANDLE_FUNC(__pkvm_unload_tracing),
 	HANDLE_FUNC(__pkvm_enable_tracing),
diff --git a/arch/arm64/kvm/hyp/nvhe/trace.c b/arch/arm64/kvm/hyp/nvhe/trace.c
index def5cbc75722..d146ac3046de 100644
--- a/arch/arm64/kvm/hyp/nvhe/trace.c
+++ b/arch/arm64/kvm/hyp/nvhe/trace.c
@@ -255,3 +255,19 @@ int __pkvm_swap_reader_tracing(unsigned int cpu)
 
 	return ret;
 }
+
+void __pkvm_update_clock_tracing(u32 mult, u32 shift, u64 epoch_ns, u64 epoch_cyc)
+{
+	int cpu;
+
+	/* After this loop, all CPUs are observing the new bank... */
+	for (cpu = 0; cpu < hyp_nr_cpus; cpu++) {
+		struct simple_rb_per_cpu *simple_rb = per_cpu_ptr(trace_buffer.simple_rbs, cpu);
+
+		while (READ_ONCE(simple_rb->status) == SIMPLE_RB_WRITING)
+			;
+	}
+
+	/* ...we can now override the old one and swap. */
+	trace_clock_update(mult, shift, epoch_ns, epoch_cyc);
+}
diff --git a/arch/arm64/kvm/hyp_trace.c b/arch/arm64/kvm/hyp_trace.c
index 98051c3fb0c2..4f154ec743f3 100644
--- a/arch/arm64/kvm/hyp_trace.c
+++ b/arch/arm64/kvm/hyp_trace.c
@@ -5,6 +5,7 @@
  */
 
 #include 
+#include 
 #include 
 #include 
@@ -12,6 +13,121 @@
 
 #include "hyp_trace.h"
 
+/* Same 10min used by clocksource when width is more than 32-bits */
+#define CLOCK_MAX_CONVERSION_S	600
+/*
+ * Time to give for the clock init. Long enough to get a good mult/shift
+ * estimation. Short enough to not delay the tracing start too much.
+ */
+#define CLOCK_INIT_MS		100
+/*
+ * Time between clock checks. Must be small enough to catch clock deviation when
+ * it is still tiny.
+ */
+#define CLOCK_UPDATE_MS		500
+
+static struct hyp_trace_clock {
+	u64			cycles;
+	u64			cyc_overflow64;
+	u64			boot;
+	u32			mult;
+	u32			shift;
+	struct delayed_work	work;
+	struct completion	ready;
+	struct mutex		lock;
+	bool			running;
+} hyp_clock;
+
+static void __hyp_clock_work(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct hyp_trace_clock *hyp_clock;
+	struct system_time_snapshot snap;
+	u64 rate, delta_cycles;
+	u64 boot, delta_boot;
+
+	hyp_clock = container_of(dwork, struct hyp_trace_clock, work);
+
+	ktime_get_snapshot(&snap);
+	boot = ktime_to_ns(snap.boot);
+
+	delta_boot = boot - hyp_clock->boot;
+	delta_cycles = snap.cycles - hyp_clock->cycles;
+
+	/* Compare hyp clock with the kernel boot clock */
+	if (hyp_clock->mult) {
+		u64 err, cur = delta_cycles;
+
+		if (WARN_ON_ONCE(cur >= hyp_clock->cyc_overflow64)) {
+			__uint128_t tmp = (__uint128_t)cur * hyp_clock->mult;
+
+			cur = tmp >> hyp_clock->shift;
+		} else {
+			cur *= hyp_clock->mult;
+			cur >>= hyp_clock->shift;
+		}
+		cur += hyp_clock->boot;
+
+		err = abs_diff(cur, boot);
+		/* No deviation, only update epoch if necessary */
+		if (!err) {
+			if (delta_cycles >= (hyp_clock->cyc_overflow64 >> 1))
+				goto fast_forward;
+
+			goto resched;
+		}
+
+		/* Warn if the error is above tracing precision (1us) */
+		if (err > NSEC_PER_USEC)
+			pr_warn_ratelimited("hyp trace clock off by %lluus\n",
+					    err / NSEC_PER_USEC);
+	}
+
+	rate = div64_u64(delta_cycles * NSEC_PER_SEC, delta_boot);
+
+	clocks_calc_mult_shift(&hyp_clock->mult, &hyp_clock->shift,
+			       rate, NSEC_PER_SEC, CLOCK_MAX_CONVERSION_S);
+
+	/* Add a comfortable 50% margin */
+	hyp_clock->cyc_overflow64 = (U64_MAX / hyp_clock->mult) >> 1;
+
+fast_forward:
+	hyp_clock->cycles = snap.cycles;
+	hyp_clock->boot = boot;
+	kvm_call_hyp_nvhe(__pkvm_update_clock_tracing, hyp_clock->mult,
+			  hyp_clock->shift, hyp_clock->boot, hyp_clock->cycles);
+	complete(&hyp_clock->ready);
+
+resched:
+	schedule_delayed_work(&hyp_clock->work,
+			      msecs_to_jiffies(CLOCK_UPDATE_MS));
+}
+
+static void hyp_trace_clock_enable(struct hyp_trace_clock *hyp_clock, bool enable)
+{
+	struct system_time_snapshot snap;
+
+	if (hyp_clock->running == enable)
+		return;
+
+	if (!enable) {
+		cancel_delayed_work_sync(&hyp_clock->work);
+		hyp_clock->running = false;
+		return;
+	}
+
+	ktime_get_snapshot(&snap);
+
+	hyp_clock->boot = ktime_to_ns(snap.boot);
+	hyp_clock->cycles = snap.cycles;
+	hyp_clock->mult = 0;
+
+	init_completion(&hyp_clock->ready);
+	INIT_DELAYED_WORK(&hyp_clock->work, __hyp_clock_work);
+	schedule_delayed_work(&hyp_clock->work, msecs_to_jiffies(CLOCK_INIT_MS));
+	wait_for_completion(&hyp_clock->ready);
+	hyp_clock->running = true;
+}
+
 /* Access to this struct within the trace_remote_callbacks are protected by the trace_remote lock */
 static struct hyp_trace_buffer {
 	struct hyp_trace_desc		*desc;
@@ -174,6 +290,8 @@ static void hyp_trace_unload(struct trace_buffer_desc *desc, void *priv)
 
 static int hyp_trace_enable_tracing(bool enable, void *priv)
 {
+	hyp_trace_clock_enable(&hyp_clock, enable);
+
 	return kvm_call_hyp_nvhe(__pkvm_enable_tracing, enable);
 }
 
@@ -192,7 +310,22 @@ static int hyp_trace_enable_event(unsigned short id, bool enable, void *priv)
 	return 0;
 }
 
+static int hyp_trace_clock_show(struct seq_file *m, void *v)
+{
+	seq_puts(m, "[boot]\n");
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(hyp_trace_clock);
+
+static int hyp_trace_init_tracefs(struct dentry *d, void *priv)
+{
+	return tracefs_create_file("trace_clock", 0440, d, NULL, &hyp_trace_clock_fops) ?
+	       0 : -ENOMEM;
+}
+
 static struct trace_remote_callbacks trace_remote_callbacks = {
+	.init = hyp_trace_init_tracefs,
 	.load_trace_buffer = hyp_trace_load,
 	.unload_trace_buffer = hyp_trace_unload,
 	.enable_tracing = hyp_trace_enable_tracing,
-- 
2.51.0.618.g983fd99d29-goog