public inbox for kvm@vger.kernel.org
* Question about lock_all_vcpus
@ 2025-02-06 20:08 Maxim Levitsky
  2025-02-10 12:56 ` Paolo Bonzini
  2025-02-10 15:57 ` Marc Zyngier
  0 siblings, 2 replies; 4+ messages in thread
From: Maxim Levitsky @ 2025-02-06 20:08 UTC (permalink / raw)
  To: kvmarm; +Cc: kvm

Hi!

KVM on ARM has this function, and it seems to be only used in a couple of places, mostly for
initialization.

We recently noticed a CI failure roughly like this:

[  328.171264] BUG: MAX_LOCK_DEPTH too low!
[  328.175227] turning off the locking correctness validator.
[  328.180726] Please attach the output of /proc/lock_stat to the bug report
[  328.187531] depth: 48  max: 48!
[  328.190678] 48 locks held by qemu-kvm/11664:
[  328.194957]  #0: ffff800086de5ba0 (&kvm->lock){+.+.}-{3:3}, at: kvm_ioctl_create_device+0x174/0x5b0
[  328.204048]  #1: ffff0800e78800b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.212521]  #2: ffff07ffeee51e98 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.220991]  #3: ffff0800dc7d80b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.229463]  #4: ffff07ffe0c980b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.237934]  #5: ffff0800a3883c78 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.246405]  #6: ffff07fffbe480b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0


..
..
..
..


As far as I can see, MAX_LOCK_DEPTH is currently 48, while the number of vCPUs can easily be in the hundreds.

Do you think it's possible to get rid of lock_all_vcpus, or do you know of any efforts to do so, to avoid
this problem? If that's not possible, maybe we could exclude lock_all_vcpus from the lockdep validator?

AFAIK, on x86 most of the similar cases where lock_all_vcpus could be used are handled by
assuming and enforcing that userspace calls these functions before the first vCPU is created and/or run,
so the need for such locking doesn't exist.

Recently x86 got a lot of cleanups to enforce this, for example enforcing that userspace can't change
CPUID after a vCPU has run.
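
Just to illustrate the pattern I mean (a hypothetical sketch, not any
particular KVM ioctl): the VM-wide setting is simply refused once a vCPU
exists, so no vCPU mutexes ever need to be taken:

	mutex_lock(&kvm->lock);
	if (kvm->created_vcpus) {
		/* Too late, vCPUs already exist: refuse the ioctl instead
		 * of taking every vcpu->mutex. */
		r = -EINVAL;
		goto out;
	}
	/* ... apply the VM-wide configuration here ... */
out:
	mutex_unlock(&kvm->lock);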

Best regards,
	Maxim Levitsky



* Re: Question about lock_all_vcpus
  2025-02-06 20:08 Question about lock_all_vcpus Maxim Levitsky
@ 2025-02-10 12:56 ` Paolo Bonzini
  2025-02-10 15:57 ` Marc Zyngier
  1 sibling, 0 replies; 4+ messages in thread
From: Paolo Bonzini @ 2025-02-10 12:56 UTC (permalink / raw)
  To: Maxim Levitsky, kvmarm; +Cc: kvm, anup@brainfault.org

On 2/6/25 21:08, Maxim Levitsky wrote:
> Do you think it's possible to get rid of lock_all_vcpus, or do you
> know of any efforts to do so, to avoid this problem? If that's not
> possible, maybe we could exclude lock_all_vcpus from the lockdep validator?
> 
> AFAIK, on x86 most of the similar cases where lock_all_vcpus could
> be used are handled by assuming and enforcing that userspace calls
> these functions before the first vCPU is created and/or run, so the
> need for such locking doesn't exist.

The way x86 handles something like lock_all_vcpus() is in function
sev_lock_vcpus_for_migration(), where all vCPUs from the same VM are
collapsed into a single lock key.

This works because you know that multiple vCPU mutexes are only nested
while kvm->lock is held as well.  Since that's the case also for ARM's
lock_all_vcpus(), perhaps sev_lock_vcpus_for_migration() and 
sev_unlock_vcpus_for_migration() could be moved to virt/kvm/kvm_main.c 
(and renamed to kvm_{lock,unlock}_all_vcpus_nested(); with another 
function that lacks the _nested suffix and hardcodes that argument to 0).
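
Roughly the following shape, as an untested sketch (the subclass bump and
the CONFIG_PROVE_LOCKING games mirror what sev.c already does, so that
lockdep only ever tracks the first vcpu->mutex and the held-lock depth
stays small; the name and the exact subclass handling are illustrative):

int kvm_lock_all_vcpus_nested(struct kvm *kvm, unsigned int subclass)
{
	struct kvm_vcpu *vcpu;
	unsigned long i, j;

	lockdep_assert_held(&kvm->lock);

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (mutex_lock_killable_nested(&vcpu->mutex, subclass))
			goto out_unlock;

#ifdef CONFIG_PROVE_LOCKING
		if (!i)
			/*
			 * Switch to a different subclass for the remaining
			 * vCPUs so they don't collide with the first one,
			 * which stays tracked by lockdep (assumes callers
			 * only pass small subclass values).
			 */
			subclass++;
		else
			/*
			 * Drop every other vcpu->mutex from lockdep's
			 * bookkeeping right away, so the held-lock depth
			 * never grows with the number of vCPUs.
			 */
			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
#endif
	}
	return 0;

out_unlock:
	kvm_for_each_vcpu(j, vcpu, kvm) {
		if (j == i)
			break;
#ifdef CONFIG_PROVE_LOCKING
		/* Re-register with lockdep what was released above. */
		if (j)
			mutex_acquire(&vcpu->mutex.dep_map, subclass, 0, _THIS_IP_);
#endif
		mutex_unlock(&vcpu->mutex);
	}
	return -EINTR;
}

The unlock side would be symmetric: mutex_acquire() everything but the
first vcpu->mutex back into lockdep before mutex_unlock().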

RISC-V also has a copy of lock_all_vcpus() and it also has the kvm->lock 
around it thanks to kvm_ioctl_create_device(); so it can use the same 
generic function, too.

Paolo

> Recently x86 got a lot of cleanups to enforce this, for example
> enforcing that userspace can't change CPUID after a vCPU has run.
> 
> Best regards, Maxim Levitsky
> 
> 



* Re: Question about lock_all_vcpus
  2025-02-06 20:08 Question about lock_all_vcpus Maxim Levitsky
  2025-02-10 12:56 ` Paolo Bonzini
@ 2025-02-10 15:57 ` Marc Zyngier
  2025-02-10 23:52   ` Maxim Levitsky
  1 sibling, 1 reply; 4+ messages in thread
From: Marc Zyngier @ 2025-02-10 15:57 UTC (permalink / raw)
  To: Maxim Levitsky; +Cc: kvmarm, kvm

On Thu, 06 Feb 2025 20:08:10 +0000,
Maxim Levitsky <mlevitsk@redhat.com> wrote:
> 
> Hi!
> 
> KVM on ARM has this function, and it seems to be only used in a couple of places, mostly for
> initialization.
> 
> We recently noticed a CI failure roughly like this:

Did you only recently notice this because you only recently started
testing with lockdep? As far as I remember, this has been there
forever.

> 
> [  328.171264] BUG: MAX_LOCK_DEPTH too low!
> [  328.175227] turning off the locking correctness validator.
> [  328.180726] Please attach the output of /proc/lock_stat to the bug report
> [  328.187531] depth: 48  max: 48!
> [  328.190678] 48 locks held by qemu-kvm/11664:
> [  328.194957]  #0: ffff800086de5ba0 (&kvm->lock){+.+.}-{3:3}, at: kvm_ioctl_create_device+0x174/0x5b0
> [  328.204048]  #1: ffff0800e78800b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.212521]  #2: ffff07ffeee51e98 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.220991]  #3: ffff0800dc7d80b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.229463]  #4: ffff07ffe0c980b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.237934]  #5: ffff0800a3883c78 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> [  328.246405]  #6: ffff07fffbe480b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> 
> 
> ..
> ..
> ..
> ..
> 
> 
> As far as I can see, MAX_LOCK_DEPTH is currently 48, while the
> number of vCPUs can easily be in the hundreds.

512 exactly. Both of which are pretty arbitrary limits.

> 
> Do you think it's possible to get rid of lock_all_vcpus, or do you
> know of any efforts to do so, to avoid this problem? If that's not
> possible, maybe we could exclude lock_all_vcpus from the lockdep validator?

I'd be very wary of excluding any form of locking from being checked
by lockdep, and I'd rather we bump MAX_LOCK_DEPTH up if KVM is enabled
on arm64. It's not like anyone is going to run that in production
anyway. task_struct may not be happy about that, though.
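
(Back of the envelope, and only a rough estimate: with lockdep enabled,
task_struct embeds held_locks[MAX_LOCK_DEPTH], and struct held_lock is
somewhere around 40-60 bytes depending on config, so going from 48 to
512+ entries would add on the order of 20-30KB to every task_struct.)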

The alternative is a full stop_machine(), and I don't think that will
fly either.

> AFAIK, on x86 most of the similar cases where lock_all_vcpus could
> be used are handled by assuming and enforcing that userspace calls
> these functions before the first vCPU is created and/or run, so the
> need for such locking doesn't exist.

This assertion doesn't hold on arm64, as this ordering requirement
doesn't exist. We already have a bunch of established VMMs doing
things in random orders (QEMU being the #1 offender), and the sad
reality of the Linux ABI means this needs to be supported forever.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: Question about lock_all_vcpus
  2025-02-10 15:57 ` Marc Zyngier
@ 2025-02-10 23:52   ` Maxim Levitsky
  0 siblings, 0 replies; 4+ messages in thread
From: Maxim Levitsky @ 2025-02-10 23:52 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, kvm

On Mon, 2025-02-10 at 15:57 +0000, Marc Zyngier wrote:
> On Thu, 06 Feb 2025 20:08:10 +0000,
> Maxim Levitsky <mlevitsk@redhat.com> wrote:
> > Hi!
> > 
> > KVM on ARM has this function, and it seems to be only used in a couple of places, mostly for
> > initialization.
> > 
> > We recently noticed a CI failure roughly like this:
> 
> Did you only recently notice this because you only recently started
> testing with lockdep? As far as I remember, this has been there
> forever.

Hi,

I also think that this is something old; I guess our CI started testing
aarch64 kernels with debug flags enabled, or something like that.

> 
> > [  328.171264] BUG: MAX_LOCK_DEPTH too low!
> > [  328.175227] turning off the locking correctness validator.
> > [  328.180726] Please attach the output of /proc/lock_stat to the bug report
> > [  328.187531] depth: 48  max: 48!
> > [  328.190678] 48 locks held by qemu-kvm/11664:
> > [  328.194957]  #0: ffff800086de5ba0 (&kvm->lock){+.+.}-{3:3}, at: kvm_ioctl_create_device+0x174/0x5b0
> > [  328.204048]  #1: ffff0800e78800b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> > [  328.212521]  #2: ffff07ffeee51e98 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> > [  328.220991]  #3: ffff0800dc7d80b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> > [  328.229463]  #4: ffff07ffe0c980b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> > [  328.237934]  #5: ffff0800a3883c78 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> > [  328.246405]  #6: ffff07fffbe480b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
> > 
> > 
> > ..
> > ..
> > ..
> > ..
> > 
> > 
> > As far as I can see, MAX_LOCK_DEPTH is currently 48, while the
> > number of vCPUs can easily be in the hundreds.
> 
> 512 exactly. Both of which are pretty arbitrary limits.
> 
> > Do you think it's possible to get rid of lock_all_vcpus, or do you
> > know of any efforts to do so, to avoid this problem? If that's not
> > possible, maybe we could exclude lock_all_vcpus from the lockdep validator?
> 
> I'd be very wary of excluding any form of locking from being checked
> by lockdep, and I'd rather we bump MAX_LOCK_DEPTH up if KVM is enabled
> on arm64. It's not like anyone is going to run that in production
> anyway. task_struct may not be happy about that, though.
> 
> The alternative is a full stop_machine(), and I don't think that will
> fly either.
> 
> > AFAIK, on x86 most of the similar cases where lock_all_vcpus could
> > be used are handled by assuming and enforcing that userspace calls
> > these functions before the first vCPU is created and/or run, so the
> > need for such locking doesn't exist.
> 
> This assertion doesn't hold on arm64, as this ordering requirement
> doesn't exist. We already have a bunch of established VMMs doing
> things in random orders (QEMU being the #1 offender), and the sad
> reality of the Linux ABI means this needs to be supported forever.

Understood.

Best regards,
	Maxim Levitsky

> 
> Thanks,
> 
> 	M.
> 



