* [PATCH] KVM: SEV: Mark nested locking of vcpu->lock
@ 2022-04-04 19:46 Peter Gonda
  2022-04-04 20:35 ` Sean Christopherson
  0 siblings, 1 reply; 3+ messages in thread
From: Peter Gonda @ 2022-04-04 19:46 UTC (permalink / raw)
  To: kvm
  Cc: Peter Gonda, John Sperbeck, David Rientjes, Paolo Bonzini,
	Sean Christopherson, linux-kernel

svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
source and target vcpu->locks. Mark the nested subclasses to avoid false
positives from lockdep.

Fixes: b56639318bb2b ("KVM: SEV: Add support for SEV intra host migration")
Reported-by: John Sperbeck <jsperbeck@google.com>
Suggested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---

Tested by running sev_migrate_tests with lockdep enabled. Before this
change lockdep warns in sev_lock_vcpus_for_migration(); with it applied
we get no warnings.
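
For context, an illustrative sketch of the false positive being silenced
(not part of the diff below; vcpu_a and vcpu_b are placeholder names):
all vcpu->mutex instances are initialized from the same call site, so
lockdep puts them in a single class, and taking a second one while
already holding one is reported as possible recursive locking unless
each acquisition is annotated with its own subclass:

	/* With vcpu_a->mutex already held, the plain form warns ... */
	if (mutex_lock_killable(&vcpu_b->mutex))
		return -EINTR;

	/* ... while a distinct subclass keeps lockdep quiet. */
	if (mutex_lock_killable_nested(&vcpu_b->mutex, 1))
		return -EINTR;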

---
 arch/x86/kvm/svm/sev.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 75fa6dd268f0..8f77421c1c4b 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1591,15 +1591,16 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
 	atomic_set_release(&src_sev->migration_in_progress, 0);
 }
 
-
-static int sev_lock_vcpus_for_migration(struct kvm *kvm)
+static int sev_lock_vcpus_for_migration(struct kvm *kvm, unsigned int *subclass)
 {
 	struct kvm_vcpu *vcpu;
 	unsigned long i, j;
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (mutex_lock_killable(&vcpu->mutex))
+		if (mutex_lock_killable_nested(&vcpu->mutex, *subclass))
 			goto out_unlock;
+
+		++(*subclass);
 	}
 
 	return 0;
@@ -1717,6 +1718,7 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 	struct kvm *source_kvm;
 	bool charged = false;
 	int ret;
+	unsigned int vcpu_mutex_subclass = 0;
 
 	source_kvm_file = fget(source_fd);
 	if (!file_is_kvm(source_kvm_file)) {
@@ -1745,10 +1747,10 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 		charged = true;
 	}
 
-	ret = sev_lock_vcpus_for_migration(kvm);
+	ret = sev_lock_vcpus_for_migration(kvm, &vcpu_mutex_subclass);
 	if (ret)
 		goto out_dst_cgroup;
-	ret = sev_lock_vcpus_for_migration(source_kvm);
+	ret = sev_lock_vcpus_for_migration(source_kvm, &vcpu_mutex_subclass);
 	if (ret)
 		goto out_dst_vcpu;
 
-- 
2.35.1.1094.g7c7d902a7c-goog



* Re: [PATCH] KVM: SEV: Mark nested locking of vcpu->lock
  2022-04-04 19:46 [PATCH] KVM: SEV: Mark nested locking of vcpu->lock Peter Gonda
@ 2022-04-04 20:35 ` Sean Christopherson
  2022-04-04 21:51   ` Peter Gonda
  0 siblings, 1 reply; 3+ messages in thread
From: Sean Christopherson @ 2022-04-04 20:35 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, John Sperbeck, David Rientjes, Paolo Bonzini, linux-kernel

On Mon, Apr 04, 2022, Peter Gonda wrote:
> svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
> source and target vcpu->locks. Mark the nested subclasses to avoid false
> positives from lockdep.
> 
> Fixes: b56639318bb2b ("KVM: SEV: Add support for SEV intra host migration")
> Reported-by: John Sperbeck <jsperbeck@google.com>
> Suggested-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Peter Gonda <pgonda@google.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
> 
> Tested by running sev_migrate_tests with lockdep enabled. Before this
> change lockdep warns in sev_lock_vcpus_for_migration(); with it applied
> we get no warnings.
> 
> ---
>  arch/x86/kvm/svm/sev.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 75fa6dd268f0..8f77421c1c4b 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1591,15 +1591,16 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
>  	atomic_set_release(&src_sev->migration_in_progress, 0);
>  }
>  
> -
> -static int sev_lock_vcpus_for_migration(struct kvm *kvm)
> +static int sev_lock_vcpus_for_migration(struct kvm *kvm, unsigned int *subclass)
>  {
>  	struct kvm_vcpu *vcpu;
>  	unsigned long i, j;
>  
>  	kvm_for_each_vcpu(i, vcpu, kvm) {
> -		if (mutex_lock_killable(&vcpu->mutex))
> +		if (mutex_lock_killable_nested(&vcpu->mutex, *subclass))
>  			goto out_unlock;
> +
> +		++(*subclass);

This is rather gross, and I'm guessing it adds extra work for the non-lockdep
case, assuming the compiler isn't so clever that it can figure out that the result
is never used.  Not that this is a hot path...
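
(For reference, and going by include/linux/mutex.h as I read it -- treat
this as a paraphrase rather than a quote: without CONFIG_DEBUG_LOCK_ALLOC
the _nested variants drop their subclass argument entirely,

	# define mutex_lock_killable_nested(lock, subclass) \
		mutex_lock_killable(lock)

so a subclass derived from the loop index costs nothing when lockdep is
off, whereas the ++(*subclass) update through the pointer generally can't
be elided.)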

Does each lock actually need a separate subclass?  If so, why don't the other
paths that lock all vCPUs complain?

If differentiating the two VMs is sufficient, then we can pass in SINGLE_DEPTH_NESTING
for the second round of locks.  If a per-vCPU subclass is required, we can use the
vCPU index and assign evens to one and odds to the other, e.g. this should work and
compiles to a nop when LOCKDEP is disabled (compile tested only).  It's still gross,
but we could pretty it up, e.g. add defines for the 0/1 param.

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 75fa6dd268f0..9be35902b809 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1591,14 +1591,13 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
        atomic_set_release(&src_sev->migration_in_progress, 0);
 }

-
-static int sev_lock_vcpus_for_migration(struct kvm *kvm)
+static int sev_lock_vcpus_for_migration(struct kvm *kvm, int mod)
 {
        struct kvm_vcpu *vcpu;
        unsigned long i, j;

        kvm_for_each_vcpu(i, vcpu, kvm) {
-               if (mutex_lock_killable(&vcpu->mutex))
+               if (mutex_lock_killable_nested(&vcpu->mutex, i * 2 + mod))
                        goto out_unlock;
        }

@@ -1745,10 +1744,10 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
                charged = true;
        }

-       ret = sev_lock_vcpus_for_migration(kvm);
+       ret = sev_lock_vcpus_for_migration(kvm, 0);
        if (ret)
                goto out_dst_cgroup;
-       ret = sev_lock_vcpus_for_migration(source_kvm);
+       ret = sev_lock_vcpus_for_migration(source_kvm, 1);
        if (ret)
                goto out_dst_vcpu;

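(Purely illustrative, nothing here is settled in this thread: one way to
pretty up the 0/1 param above would be named roles, e.g.

	/* Hypothetical names; 0/1 match the dst/src order in the diff above. */
	enum sev_migration_role {
		SEV_MIGRATION_TARGET = 0,
		SEV_MIGRATION_SOURCE = 1,
		SEV_NR_MIGRATION_ROLES,
	};

with sev_lock_vcpus_for_migration() taking the enum and the call sites
passing SEV_MIGRATION_TARGET / SEV_MIGRATION_SOURCE instead of a bare
0 or 1.)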



* Re: [PATCH] KVM: SEV: Mark nested locking of vcpu->lock
  2022-04-04 20:35 ` Sean Christopherson
@ 2022-04-04 21:51   ` Peter Gonda
  0 siblings, 0 replies; 3+ messages in thread
From: Peter Gonda @ 2022-04-04 21:51 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm list, John Sperbeck, David Rientjes, Paolo Bonzini, LKML

>
> This is rather gross, and I'm guessing it adds extra work for the non-lockdep
> case, assuming the compiler isn't so clever that it can figure out that the result
> is never used.  Not that this is a hot path...
>
> Does each lock actually need a separate subclass?  If so, why don't the other
> paths that lock all vCPUs complain?
>
> If differentiating the two VMs is sufficient, then we can pass in SINGLE_DEPTH_NESTING
> for the second round of locks.  If a per-vCPU subclass is required, we can use the
> vCPU index and assign evens to one and odds to the other, e.g. this should work and
> compiles to a nop when LOCKDEP is disabled (compile tested only).  It's still gross,
> but we could pretty it up, e.g. add defines for the 0/1 param.

I checked and the per-vCPU subclassing is required. If I only use
SINGLE_DEPTH_NESTING on the second VM's vCPUs I still see the
warning.

This odds and evens approach seems much better. I'll update to use
that in the V2 unless there is a better idea.

