Date: Wed, 1 Oct 2025 11:14:23 -0700
Subject: Re: [PATCH v5 32/44] KVM: x86/pmu: Disable interception of select PMU MSRs for mediated vPMUs
From: Sean Christopherson
To: Sandipan Das
Cc: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Xin Li,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Paolo Bonzini,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	Kan Liang, Yongwei Ma, Mingwei Zhang, Xiong Zhang, Sandipan Das,
	Dapeng Mi
References: <20250806195706.1650976-1-seanjc@google.com> <20250806195706.1650976-33-seanjc@google.com>

On Fri, Sep 26, 2025, Sandipan Das wrote:
> On 8/7/2025 1:26 AM, Sean Christopherson wrote:
> > From: Dapeng Mi
> > 
> > For vCPUs with a mediated vPMU, disable interception of counter MSRs for
> > PMCs that are exposed to the guest, and for GLOBAL_CTRL and related MSRs
> > if they are fully supported according to the vCPU model, i.e. if the MSRs
> > and all bits supported by hardware exist from the guest's point of view.
> > 
> > Do NOT pass through event selector or fixed counter control MSRs, so that
> > KVM can enforce userspace-defined event filters, e.g. to prevent use of
> > AnyThread events (which is unfortunately a setting in the fixed counter
> > control MSR).
> > 
> > Defer support for nested passthrough of mediated PMU MSRs to the future,
> > as the logic for nested MSR interception is unfortunately vendor specific.

...

> >  #define MSR_AMD64_LBR_SELECT			0xc000010e
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index 4246e1d2cfcc..817ef852bdf9 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -715,18 +715,14 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
> >  	return 0;
> >  }
> >  
> > -bool kvm_need_rdpmc_intercept(struct kvm_vcpu *vcpu)
> > +bool kvm_need_perf_global_ctrl_intercept(struct kvm_vcpu *vcpu)
> >  {
> >  	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> >  
> >  	if (!kvm_vcpu_has_mediated_pmu(vcpu))
> >  		return true;
> >  
> > -	/*
> > -	 * VMware allows access to these Pseudo-PMCs even when read via RDPMC
> > -	 * in Ring3 when CR4.PCE=0.
> > -	 */
> > -	if (enable_vmware_backdoor)
> > +	if (!kvm_pmu_has_perf_global_ctrl(pmu))
> >  		return true;
> >  
> >  	/*
> > @@ -735,7 +731,22 @@ bool kvm_need_rdpmc_intercept(struct kvm_vcpu *vcpu)
> >  	 * capabilities themselves may be a subset of hardware capabilities.
> >  	 */
> >  	return pmu->nr_arch_gp_counters != kvm_host_pmu.num_counters_gp ||
> > -	       pmu->nr_arch_fixed_counters != kvm_host_pmu.num_counters_fixed ||
> > +	       pmu->nr_arch_fixed_counters != kvm_host_pmu.num_counters_fixed;
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_need_perf_global_ctrl_intercept);
> > +
> > +bool kvm_need_rdpmc_intercept(struct kvm_vcpu *vcpu)
> > +{
> > +	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> > +
> > +	/*
> > +	 * VMware allows access to these Pseudo-PMCs even when read via RDPMC
> > +	 * in Ring3 when CR4.PCE=0.
> > +	 */
> > +	if (enable_vmware_backdoor)
> > +		return true;
> > +
> > +	return kvm_need_perf_global_ctrl_intercept(vcpu) ||
> >  	       pmu->counter_bitmask[KVM_PMC_GP] != (BIT_ULL(kvm_host_pmu.bit_width_gp) - 1) ||
> >  	       pmu->counter_bitmask[KVM_PMC_FIXED] != (BIT_ULL(kvm_host_pmu.bit_width_fixed) - 1);
> >  }
> 
> There is a case for AMD processors where the global MSRs are absent in the
> guest, but the guest still uses the same number of counters as advertised by
> the host capabilities. So RDPMC interception is not necessary in all cases
> where global control is unavailable.

Hmm, I think Intel would be the same?  Ah, no, because the host will have fixed
counters, but the guest will not.  However, that's not directly related to
kvm_pmu_has_perf_global_ctrl(), so I think this would be correct?

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 4414d070c4f9..4c5b2712ee4c 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -744,16 +744,13 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	return 0;
 }
 
-bool kvm_need_perf_global_ctrl_intercept(struct kvm_vcpu *vcpu)
+static bool kvm_need_pmc_intercept(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
 	if (!kvm_vcpu_has_mediated_pmu(vcpu))
 		return true;
 
-	if (!kvm_pmu_has_perf_global_ctrl(pmu))
-		return true;
-
 	/*
 	 * Note! Check *host* PMU capabilities, not KVM's PMU capabilities, as
 	 * KVM's capabilities are constrained based on KVM support, i.e. KVM's
@@ -762,6 +759,13 @@ bool kvm_need_perf_global_ctrl_intercept(struct kvm_vcpu *vcpu)
 	return pmu->nr_arch_gp_counters != kvm_host_pmu.num_counters_gp ||
 	       pmu->nr_arch_fixed_counters != kvm_host_pmu.num_counters_fixed;
 }
+
+bool kvm_need_perf_global_ctrl_intercept(struct kvm_vcpu *vcpu)
+{
+
+	return kvm_need_pmc_intercept(vcpu) ||
+	       !kvm_pmu_has_perf_global_ctrl(vcpu_to_pmu(vcpu));
+}
 EXPORT_SYMBOL_GPL(kvm_need_perf_global_ctrl_intercept);
 
 bool kvm_need_rdpmc_intercept(struct kvm_vcpu *vcpu)
@@ -775,7 +779,7 @@ bool kvm_need_rdpmc_intercept(struct kvm_vcpu *vcpu)
 	if (enable_vmware_backdoor)
 		return true;
 
-	return kvm_need_perf_global_ctrl_intercept(vcpu) ||
+	return kvm_need_pmc_intercept(vcpu) ||
 	       pmu->counter_bitmask[KVM_PMC_GP] != (BIT_ULL(kvm_host_pmu.bit_width_gp) - 1) ||
 	       pmu->counter_bitmask[KVM_PMC_FIXED] != (BIT_ULL(kvm_host_pmu.bit_width_fixed) - 1);
 }
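
To make the intended behavior concrete, here is a minimal standalone sketch of
the split (mock types and invented counter counts, NOT the real KVM code; the
real kvm_need_rdpmc_intercept() additionally checks the counter bitmask widths
and the VMware backdoor, which are omitted here).  It shows how the two
predicates diverge for the AMD case above: GLOBAL_CTRL accesses stay
intercepted because the guest lacks the MSR, while the PMC check passes
because the guest's counter counts match the host's.

/* Standalone illustration only; mock types stand in for KVM's structures. */
#include <stdbool.h>
#include <stdio.h>

struct mock_pmu {
	bool mediated;
	bool has_global_ctrl;
	int nr_gp;		/* number of GP counters */
	int nr_fixed;		/* number of fixed counters */
};

/* Invented host capabilities, e.g. an AMD-like host with no fixed counters. */
static const struct mock_pmu host = { .nr_gp = 6, .nr_fixed = 0 };

/* Mirrors kvm_need_pmc_intercept(): guest counter counts must match host. */
static bool need_pmc_intercept(const struct mock_pmu *pmu)
{
	return !pmu->mediated ||
	       pmu->nr_gp != host.nr_gp ||
	       pmu->nr_fixed != host.nr_fixed;
}

/* Mirrors kvm_need_perf_global_ctrl_intercept(): guest must also have the MSR. */
static bool need_global_ctrl_intercept(const struct mock_pmu *pmu)
{
	return need_pmc_intercept(pmu) || !pmu->has_global_ctrl;
}

int main(void)
{
	/* AMD-like guest: no PERF_GLOBAL_CTRL, but counter counts match. */
	const struct mock_pmu guest = {
		.mediated = true,
		.has_global_ctrl = false,
		.nr_gp = 6,
		.nr_fixed = 0,
	};

	printf("intercept GLOBAL_CTRL: %d\n", need_global_ctrl_intercept(&guest));
	printf("intercept PMCs:        %d\n", need_pmc_intercept(&guest));
	return 0;
}

Compiled and run, this prints 1 for the GLOBAL_CTRL intercept and 0 for the
PMC intercept, i.e. the counters can still be passed through even though the
global control MSRs must remain intercepted.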