From: Maxim Levitsky <mlevitsk@redhat.com>
To: Yosry Ahmed <yosry.ahmed@linux.dev>,
Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Jim Mattson <jmattson@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Rik van Riel <riel@surriel.com>,
Tom Lendacky <thomas.lendacky@amd.com>,
x86@kernel.org, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 06/24] KVM: SEV: Track ASID->vCPU instead of ASID->VMCB
Date: Thu, 03 Apr 2025 16:04:05 -0400
Message-ID: <03be59f070a02555596550d5764aa8b416e43b58.camel@redhat.com>
In-Reply-To: <20250326193619.3714986-7-yosry.ahmed@linux.dev>
On Wed, 2025-03-26 at 19:36 +0000, Yosry Ahmed wrote:
> SEV currently tracks the ASID to VMCB mapping for each physical CPU.
> This is required to flush the ASID when a new VMCB using the same ASID
> is run on the same CPU.
> Practically, there is a single VMCB for each
> vCPU using SEV.
Can you elaborate on this a bit? AFAIK you can't run nested with SEV,
not even plain SEV, because guest state is encrypted, so for SEV we
indeed have one VMCB per vCPU.
> Furthermore, TLB flushes on nested transitions between
> VMCB01 and VMCB02 are handled separately (see
> nested_svm_transition_tlb_flush()).
Yes, or we can say that for now both VMCBs share the same ASID, at
least until that changes later in this patch series.
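
For context, the nested transition handling is conceptually something
like the sketch below (a rough paraphrase from memory, not the literal
upstream code): since both VMCBs share an ASID, every VMCB01<->VMCB02
switch conservatively syncs the MMU and flushes the current TLB
context:

static void nested_transition_tlb_flush_sketch(struct kvm_vcpu *vcpu)
{
	/*
	 * VMCB01 and VMCB02 currently share an ASID, so be conservative
	 * and request both an MMU sync and a flush of the current
	 * context on every nested transition.
	 */
	kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
	kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
}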
>
> In preparation for generalizing the tracking and making the tracking
> more expensive, start tracking the ASID to vCPU mapping instead. This
> will allow for the tracking to be moved to a cheaper code path when
> vCPUs are switched.
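
Side note for anyone following along: I read the "cheaper code path"
as updating the per-CPU table from the vCPU load path instead of on
every VMRUN, along the lines of the hypothetical sketch below (the
function name is mine, not from this series):

/* Hypothetical illustration only; names are not from this series. */
static void sev_track_asid_on_load(struct vcpu_svm *svm, int cpu)
{
	struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
	unsigned int asid = sev_get_asid(svm->vcpu.kvm);

	/*
	 * A different vCPU last used this ASID on this CPU, so take
	 * over the slot and flush the ASID before the next VMRUN.
	 */
	if (sd->sev_vcpus[asid] != &svm->vcpu) {
		sd->sev_vcpus[asid] = &svm->vcpu;
		vmcb_set_flush_asid(svm->vmcb);
		vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
	}
}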
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
> arch/x86/kvm/svm/sev.c | 12 ++++++------
> arch/x86/kvm/svm/svm.c | 2 +-
> arch/x86/kvm/svm/svm.h | 4 ++--
> 3 files changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index d613f81addf1c..ddb4d5b211ed7 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -240,7 +240,7 @@ static void sev_asid_free(struct kvm_sev_info *sev)
>
> for_each_possible_cpu(cpu) {
> sd = per_cpu_ptr(&svm_data, cpu);
> - sd->sev_vmcbs[sev->asid] = NULL;
> + sd->sev_vcpus[sev->asid] = NULL;
> }
>
> mutex_unlock(&sev_bitmap_lock);
> @@ -3081,8 +3081,8 @@ int sev_cpu_init(struct svm_cpu_data *sd)
> if (!sev_enabled)
> return 0;
>
> - sd->sev_vmcbs = kcalloc(nr_asids, sizeof(void *), GFP_KERNEL);
> - if (!sd->sev_vmcbs)
> + sd->sev_vcpus = kcalloc(nr_asids, sizeof(void *), GFP_KERNEL);
> + if (!sd->sev_vcpus)
> return -ENOMEM;
>
> return 0;
> @@ -3471,14 +3471,14 @@ int pre_sev_run(struct vcpu_svm *svm, int cpu)
> /*
> * Flush guest TLB:
> *
> - * 1) when different VMCB for the same ASID is to be run on the same host CPU.
> + * 1) when different vCPU for the same ASID is to be run on the same host CPU.
> * 2) or this VMCB was executed on different host CPU in previous VMRUNs.
> */
> - if (sd->sev_vmcbs[asid] == svm->vmcb &&
> + if (sd->sev_vcpus[asid] == &svm->vcpu &&
> svm->vcpu.arch.last_vmentry_cpu == cpu)
> return 0;
>
> - sd->sev_vmcbs[asid] = svm->vmcb;
> + sd->sev_vcpus[asid] = &svm->vcpu;
> vmcb_set_flush_asid(svm->vmcb);
> vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
> return 0;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 18bfc3d3f9ba1..1156ca97fd798 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -694,7 +694,7 @@ static void svm_cpu_uninit(int cpu)
> if (!sd->save_area)
> return;
>
> - kfree(sd->sev_vmcbs);
> + kfree(sd->sev_vcpus);
> __free_page(__sme_pa_to_page(sd->save_area_pa));
> sd->save_area_pa = 0;
> sd->save_area = NULL;
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 843a29a6d150e..4ea6c61c3b048 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -340,8 +340,8 @@ struct svm_cpu_data {
>
> struct vmcb *current_vmcb;
>
> - /* index = sev_asid, value = vmcb pointer */
> - struct vmcb **sev_vmcbs;
> + /* index = sev_asid, value = vcpu pointer */
> + struct kvm_vcpu **sev_vcpus;
> };
>
> DECLARE_PER_CPU(struct svm_cpu_data, svm_data);
The code itself looks OK, so
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Best regards,
Maxim Levitsky