From: Sean Christopherson <seanjc@google.com>
To: Ravi Bangoria <ravi.bangoria@amd.com>
Cc: pbonzini@redhat.com, thomas.lendacky@amd.com, tglx@linutronix.de,
	 mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	x86@kernel.org,  hpa@zytor.com, michael.roth@amd.com,
	nikunj.dadhania@amd.com,  kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, santosh.shukla@amd.com
Subject: Re: [PATCH v2] KVM: SEV-ES: Don't intercept MSR_IA32_DEBUGCTLMSR for SEV-ES guests
Date: Tue, 21 May 2024 13:31:41 -0700
Message-ID: <Zk0ErRQt3XH7xK6O@google.com>
In-Reply-To: <b66ea07a-f57e-014c-68b4-729f893c2fbd@amd.com>

On Mon, May 20, 2024, Ravi Bangoria wrote:
> On 17-May-24 8:01 PM, Sean Christopherson wrote:
> > On Fri, May 17, 2024, Ravi Bangoria wrote:
> >> On 08-May-24 12:37 AM, Sean Christopherson wrote:
> >>> So unless I'm missing something, the only reason to ever disable LBRV would be
> >>> for performance reasons.  Indeed, the original commit more or less says as much:
> >>>
> >>>   commit 24e09cbf480a72f9c952af4ca77b159503dca44b
> >>>   Author:     Joerg Roedel <joerg.roedel@amd.com>
> >>>   AuthorDate: Wed Feb 13 18:58:47 2008 +0100
> >>>
> >>>     KVM: SVM: enable LBR virtualization
> >>>     
> >>>     This patch implements the Last Branch Record Virtualization (LBRV) feature of
> >>>     the AMD Barcelona and Phenom processors into the kvm-amd module. It will only
> >>>     be enabled if the guest enables last branch recording in the DEBUG_CTL MSR. So
> >>>     there is no increased world switch overhead when the guest doesn't use these
> >>>     MSRs.
> >>>
> >>> but what it _doesn't_ say is what the world switch overhead is when LBRV is
> >>> enabled.  If the overhead is small, e.g. 20 cycles?, then I see no reason to
> >>> keep the dynamic toggling.
> >>>
> >>> And if we ditch the dynamic toggling, then this patch is unnecessary to fix the
> >>> LBRV issue.  It _is_ necessary to actually let the guest use the LBRs, but that's
> >>> a wildly different changelog and justification.
> >>
> >> The overhead might be less for legacy LBR. But upcoming hw also supports
> >> LBR Stack Virtualization[1]. The LBR Stack has a total of 34 MSRs (two
> >> control and 16*2 stack). Also, Legacy and Stack LBR virtualization are
> >> both controlled through the same VMCB bit, so I think I still need to
> >> keep the dynamic toggling for LBR Stack virtualization.
> > 
> > Please get performance numbers so that we can make an informed decision.  I don't
> > want to carry complexity because we _think_ the overhead would be too high.
> 
> LBR Virtualization overhead for guest entry + exit roundtrip is ~450 cycles* on

Ouch.  Just to clarify, that's for LBR Stack Virtualization, correct?

Ugh, I was going to say that we could always enable "legacy" LBR virtualization,
and do the dynamic toggling iff DebugExtnCtl.LBRS=1, but they share an enabling
flag.  What a mess.

> a Genoa machine. Also, the LBR MSRs (except MSR_AMD_DBG_EXTN_CFG) are of swap
> type C, so this overhead is only for guest MSR save/restore.

Lovely.

Have I mentioned that the SEV-ES behavior of force-enabling every feature under
the sun is really, really annoying?

Anyways, I agree that we need to keep the dynamic toggling.
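
For context, the dynamic toggling in question is (roughly) today's svm_update_lbrv()
path, i.e. flip LBRV on/off based on whether the guest has set DEBUGCTLMSR_LBR.
Simplified sketch, nested details elided and exact helper names from memory:

	void svm_update_lbrv(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);
		bool current_enable_lbrv = svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK;
		bool enable_lbrv = svm_get_lbr_vmcb(svm)->save.dbgctl & DEBUGCTLMSR_LBR;

		/* (nested case elided: L1 enabling LBRV for L2 also counts) */

		if (enable_lbrv == current_enable_lbrv)
			return;

		if (enable_lbrv)
			svm_enable_lbrv(vcpu);
		else
			svm_disable_lbrv(vcpu);
	}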

But I still think we should delete the "lbrv" module param.  LBR Stack support has
a CPUID feature flag, i.e. userspace can disable LBR support via CPUID in order
to avoid the overhead on CPUs with LBR Stack.  The logic for MSR_IA32_DEBUGCTLMSR
will be bizarre, but I don't see a way around that since legacy LBR virtualization
and LBR Stack virtualization share a control.

E.g. I think we'll want to end up with something like this?

	case MSR_IA32_DEBUGCTLMSR:
		if (data & DEBUGCTL_RESERVED_BITS)
			return 1;

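		/*
		 * If the CPU supports LBR Stack but userspace hid it from the
		 * guest, log and eat the write instead of enabling LBRV, since
		 * the stack and legacy LBRs share the enable bit.
		 */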
		if (kvm_cpu_cap_has(X86_FEATURE_LBR_STACK) &&
		    !guest_cpuid_has(vcpu, X86_FEATURE_LBR_STACK)) {
			kvm_pr_unimpl_wrmsr(vcpu, ecx, data);
			break;
		}

		svm_get_lbr_vmcb(svm)->save.dbgctl = data;
		svm_update_lbrv(vcpu);
		break;
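
And I'd expect the enable side to stay shaped like today's svm_enable_lbrv(), i.e.
set LBR_CTL_ENABLE_MASK and give the guest direct access to the legacy LBR MSRs,
with this patch presumably also dropping the DEBUGCTL intercept there for SEV-ES
guests.  Rough sketch from memory, not tested:

	void svm_enable_lbrv(struct kvm_vcpu *vcpu)
	{
		struct vcpu_svm *svm = to_svm(vcpu);

		svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHFROMIP, 1, 1);
		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1);
		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);

		/*
		 * SEV-ES guests manage DEBUGCTL via the encrypted VMSA, so once
		 * LBRV is on there's nothing useful KVM can do with the intercept.
		 */
		if (sev_es_guest(vcpu->kvm))
			set_msr_interception(vcpu, svm->msrpm, MSR_IA32_DEBUGCTLMSR, 1, 1);

		/* Move the LBR msrs to the vmcb02 so that the guest can see them. */
		if (is_guest_mode(vcpu))
			svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr);
	}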
