linux-arm-kernel.lists.infradead.org archive mirror
From: Marc Zyngier <maz@kernel.org>
To: Maxim Levitsky <mlevitsk@redhat.com>
Cc: kvm@vger.kernel.org, linux-riscv@lists.infradead.org,
	Kunkun Jiang <jiangkunkun@huawei.com>,
	Waiman Long <longman@redhat.com>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Boqun Feng <boqun.feng@gmail.com>, Borislav Petkov <bp@alien8.de>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Anup Patel <anup@brainfault.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Alexandre Ghiti <alex@ghiti.fr>,
	Alexander Potapenko <glider@google.com>,
	Oliver Upton <oliver.upton@linux.dev>,
	Andre Przywara <andre.przywara@arm.com>,
	x86@kernel.org, Joey Gouly <joey.gouly@arm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	kvm-riscv@lists.infradead.org,
	Atish Patra <atishp@atishpatra.org>,
	Ingo Molnar <mingo@redhat.com>,
	Jing Zhang <jingzhangos@google.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	kvmarm@lists.linux.dev, Will Deacon <will@kernel.org>,
	Keisuke Nishimura <keisuke.nishimura@inria.fr>,
	Sebastian Ott <sebott@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Shusen Li <lishusen2@huawei.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Sean Christopherson <seanjc@google.com>,
	Zenghui Yu <yuzenghui@huawei.com>
Subject: Re: [PATCH v4 2/5] arm64: KVM: use mutex_trylock_nest_lock when locking all vCPUs
Date: Thu, 01 May 2025 09:24:11 +0100	[thread overview]
Message-ID: <864iy4ivro.wl-maz@kernel.org> (raw)
In-Reply-To: <20250430203013.366479-3-mlevitsk@redhat.com>

nit: in keeping with the existing arm64 patches, please write the
subject as "KVM: arm64: Use ..."

On Wed, 30 Apr 2025 21:30:10 +0100,
Maxim Levitsky <mlevitsk@redhat.com> wrote:

[...]

> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 68fec8c95fee..d31f42a71bdc 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1914,49 +1914,6 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
>  	}
>  }
>  
> -/* unlocks vcpus from @vcpu_lock_idx and smaller */
> -static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
> -{
> -	struct kvm_vcpu *tmp_vcpu;
> -
> -	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
> -		tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
> -		mutex_unlock(&tmp_vcpu->mutex);
> -	}
> -}
> -
> -void unlock_all_vcpus(struct kvm *kvm)
> -{
> -	lockdep_assert_held(&kvm->lock);

Note this assertion...

> -
> -	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
> -}
> -
> -/* Returns true if all vcpus were locked, false otherwise */
> -bool lock_all_vcpus(struct kvm *kvm)
> -{
> -	struct kvm_vcpu *tmp_vcpu;
> -	unsigned long c;
> -
> -	lockdep_assert_held(&kvm->lock);

and this one...

> -
> -	/*
> -	 * Any time a vcpu is in an ioctl (including running), the
> -	 * core KVM code tries to grab the vcpu->mutex.
> -	 *
> -	 * By grabbing the vcpu->mutex of all VCPUs we ensure that no
> -	 * other VCPUs can fiddle with the state while we access it.
> -	 */
> -	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
> -		if (!mutex_trylock(&tmp_vcpu->mutex)) {
> -			unlock_vcpus(kvm, c - 1);
> -			return false;
> -		}
> -	}
> -
> -	return true;
> -}
> -
>  static unsigned long nvhe_percpu_size(void)
>  {
>  	return (unsigned long)CHOOSE_NVHE_SYM(__per_cpu_end) -

[...]

> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 69782df3617f..834f08dfa24c 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1368,6 +1368,40 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
>  	return 0;
>  }
>  
> +/*
> + * Try to lock all of the VM's vCPUs.
> + * Assumes that the kvm->lock is held.

Assuming is not enough. These assertions have caught a number of bugs,
and I'm not prepared to drop them.
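
As a sketch of what I mean (and nothing more), the comment above would
become an actual assertion at the top of the helper:

	lockdep_assert_held(&kvm->lock);

whether that lives here or in arch-level wrappers (see below) is a
separate discussion.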

> + */
> +int kvm_trylock_all_vcpus(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu;
> +	unsigned long i, j;
> +
> +	kvm_for_each_vcpu(i, vcpu, kvm)
> +		if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
> +			goto out_unlock;
> +	return 0;
> +
> +out_unlock:
> +	kvm_for_each_vcpu(j, vcpu, kvm) {
> +		if (i == j)
> +			break;
> +		mutex_unlock(&vcpu->mutex);
> +	}
> +	return -EINTR;
> +}
> +EXPORT_SYMBOL_GPL(kvm_trylock_all_vcpus);
> +
> +void kvm_unlock_all_vcpus(struct kvm *kvm)
> +{
> +	struct kvm_vcpu *vcpu;
> +	unsigned long i;
> +
> +	kvm_for_each_vcpu(i, vcpu, kvm)
> +		mutex_unlock(&vcpu->mutex);
> +}
> +EXPORT_SYMBOL_GPL(kvm_unlock_all_vcpus);

I don't mind you not including the assertions in these helpers, but
then the existing primitives have to stay and call into the new stuff.
From a patch-volume point of view, that would also be far preferable
and would help with managing backports.

I'd also expect the introduction of these new helpers to be done in
its own patch, so that we don't get cross-architecture dependencies if
something needs to be backported for one reason or another.
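
Something along these lines (completely untested, just a sketch to
illustrate the shape of it) would keep the arm64 entry points and
their assertions while deferring to the new generic helpers:

	/* arch/arm64/kvm/arm.c -- sketch only */
	bool lock_all_vcpus(struct kvm *kvm)
	{
		lockdep_assert_held(&kvm->lock);

		return kvm_trylock_all_vcpus(kvm) == 0;
	}

	void unlock_all_vcpus(struct kvm *kvm)
	{
		lockdep_assert_held(&kvm->lock);

		kvm_unlock_all_vcpus(kvm);
	}

and the arm64 callers stay untouched.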

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


Thread overview: 14+ messages
2025-04-30 20:30 [PATCH v4 0/5] KVM: lockdep improvements Maxim Levitsky
2025-04-30 20:30 ` [PATCH v4 1/5] locking/mutex: implement mutex_trylock_nested Maxim Levitsky
2025-04-30 20:30 ` [PATCH v4 2/5] arm64: KVM: use mutex_trylock_nest_lock when locking all vCPUs Maxim Levitsky
2025-05-01  8:24   ` Marc Zyngier [this message]
2025-05-01 11:15     ` Peter Zijlstra
2025-05-01 12:44       ` Marc Zyngier
2025-05-01 13:41         ` Peter Zijlstra
2025-05-01 13:55           ` Marc Zyngier
2025-05-02 20:58     ` Sean Christopherson
2025-04-30 20:30 ` [PATCH v4 3/5] RISC-V: KVM: switch to kvm_trylock/unlock_all_vcpus Maxim Levitsky
2025-04-30 20:30 ` [PATCH v4 4/5] locking/mutex: implement mutex_lock_killable_nest_lock Maxim Levitsky
2025-04-30 20:30 ` [PATCH v4 5/5] x86: KVM: SEV: implement kvm_lock_all_vcpus and use it Maxim Levitsky
2025-05-02 20:57   ` Sean Christopherson
2025-05-03 10:08     ` Peter Zijlstra
