From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH 1/2] KVM: x86: fix usage of kvm_lock in set_nx_huge_pages()
Date: Fri, 24 Jan 2025 12:11:24 -0800
Message-ID: <Z5Pz7Ga5UGt88zDc@google.com>
In-Reply-To: <20250124191109.205955-2-pbonzini@redhat.com>
On Fri, Jan 24, 2025, Paolo Bonzini wrote:
> Protect the whole function with kvm_lock() so that all accesses to
> nx_hugepage_mitigation_hard_disabled are under the lock; but drop it
> when calling out to the MMU to avoid complex circular locking
> situations such as the following:
...
> To break the deadlock, release kvm_lock while taking kvm->slots_lock, which
> breaks the chain:
Heh, except it's all kinds of broken. IMO, biting the bullet and converting to
an SRCU-protected list is going to be far less work in the long run.
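For reference, a conversion along those lines might look roughly like the sketch below. This is hypothetical, not code from any posted patch: `vm_list_srcu` is an assumed new srcu_struct, and the sketch only works if VM teardown frees the structure after an SRCU grace period (e.g. via call_srcu() or deferred work), since calling synchronize_srcu() from the final put would deadlock inside the read-side section.

	/* Hypothetical sketch of an SRCU-protected vm_list walk.  Writers
	 * would still take kvm_lock and use list_add_rcu()/list_del_rcu();
	 * freeing a VM must wait for an SRCU grace period, because a
	 * synchronize_srcu() reached from kvm_put_kvm() would deadlock here.
	 */
	int idx = srcu_read_lock(&vm_list_srcu);

	list_for_each_entry_rcu(kvm, &vm_list, vm_list) {
		if (!kvm_get_kvm_safe(kvm))
			continue;

		/* Unlike plain RCU, sleeping is legal in an SRCU read-side
		 * section, so taking slots_lock here is fine. */
		mutex_lock(&kvm->slots_lock);
		kvm_mmu_zap_all_fast(kvm);
		mutex_unlock(&kvm->slots_lock);

		kvm_put_kvm(kvm);
	}
	srcu_read_unlock(&vm_list_srcu, idx);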
> @@ -7143,16 +7141,19 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
> if (new_val != old_val) {
> struct kvm *kvm;
>
> - mutex_lock(&kvm_lock);
> -
> list_for_each_entry(kvm, &vm_list, vm_list) {
This is unsafe, as vm_list can be modified while kvm_lock is dropped. And
using list_for_each_entry_safe() doesn't help, because the _next_ entry could
already have been freed by the time the lock is reacquired.
> + kvm_get_kvm(kvm);
This needs to be:
	if (!kvm_get_kvm_safe(kvm))
		continue;
because the last reference to the VM could already have been put.
> + mutex_unlock(&kvm_lock);
> +
> mutex_lock(&kvm->slots_lock);
> kvm_mmu_zap_all_fast(kvm);
> mutex_unlock(&kvm->slots_lock);
>
> vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
See my bug report on this being a NULL pointer deref.
> +
> + mutex_lock(&kvm_lock);
> + kvm_put_kvm(kvm);
The order is backwards: kvm_put_kvm() needs to be called before acquiring
kvm_lock. If the last reference is put, kvm_put_kvm() => kvm_destroy_vm() will
deadlock on kvm_lock, which the destroy path takes to remove the VM from vm_list.
> }
> - mutex_unlock(&kvm_lock);
> }