From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Will Deacon <will@kernel.org>, Michal Luczaj <mhal@rbox.co>,
Sean Christopherson <seanjc@google.com>,
Alexander Potapenko <glider@google.com>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>
Subject: [PATCH 4/6] Revert "KVM: Fix vcpu_array[0] races"
Date: Wed, 9 Oct 2024 08:04:53 -0700
Message-ID: <20241009150455.1057573-5-seanjc@google.com>
In-Reply-To: <20241009150455.1057573-1-seanjc@google.com>
Now that KVM loads from vcpu_array if and only if the target index is
valid with respect to online_vcpus, i.e. now that it is safe to erase a
not-fully-onlined vCPU entry, revert to storing into vcpu_array before
success is guaranteed.
If xa_store() fails, which _should_ be impossible, then putting the vCPU's
reference to 'struct kvm' results in a refcounting bug, as the vCPU's fd
has already been installed and now owns that reference.
This was found by inspection, but forcing the xa_store() to fail
confirms the problem:
| Unable to handle kernel paging request at virtual address ffff800080ecd960
| Call trace:
| _raw_spin_lock_irq+0x2c/0x70
| kvm_irqfd_release+0x24/0xa0
| kvm_vm_release+0x1c/0x38
| __fput+0x88/0x2ec
| ____fput+0x10/0x1c
| task_work_run+0xb0/0xd4
| do_exit+0x210/0x854
| do_group_exit+0x70/0x98
| get_signal+0x6b0/0x73c
| do_signal+0xa4/0x11e8
| do_notify_resume+0x60/0x12c
| el0_svc+0x64/0x68
| el0t_64_sync_handler+0x84/0xfc
| el0t_64_sync+0x190/0x194
| Code: b9000909 d503201f 2a1f03e1 52800028 (88e17c08)
Practically speaking, this is a non-issue as xa_store() can't fail, absent
a nasty kernel bug. But the code is visually jarring and technically
broken.
This reverts commit afb2acb2e3a32e4d56f7fbd819769b98ed1b7520.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Michal Luczaj <mhal@rbox.co>
Cc: Alexander Potapenko <glider@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
virt/kvm/kvm_main.c | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fca9f74e9544..f081839521ef 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4283,7 +4283,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
}
vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
- r = xa_reserve(&kvm->vcpu_array, vcpu->vcpu_idx, GFP_KERNEL_ACCOUNT);
+ r = xa_insert(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, GFP_KERNEL_ACCOUNT);
+ BUG_ON(r == -EBUSY);
if (r)
goto unlock_vcpu_destroy;
@@ -4298,12 +4299,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
kvm_get_kvm(kvm);
r = create_vcpu_fd(vcpu);
if (r < 0)
- goto kvm_put_xa_release;
-
- if (KVM_BUG_ON(xa_store(&kvm->vcpu_array, vcpu->vcpu_idx, vcpu, 0), kvm)) {
- r = -EINVAL;
- goto kvm_put_xa_release;
- }
+ goto kvm_put_xa_erase;
/*
* Pairs with smp_rmb() in kvm_get_vcpu. Store the vcpu
@@ -4318,10 +4314,10 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, unsigned long id)
kvm_create_vcpu_debugfs(vcpu);
return r;
-kvm_put_xa_release:
+kvm_put_xa_erase:
mutex_unlock(&vcpu->mutex);
kvm_put_kvm_no_destroy(kvm);
- xa_release(&kvm->vcpu_array, vcpu->vcpu_idx);
+ xa_erase(&kvm->vcpu_array, vcpu->vcpu_idx);
unlock_vcpu_destroy:
mutex_unlock(&kvm->lock);
kvm_dirty_ring_free(&vcpu->dirty_ring);
--
2.47.0.rc0.187.ge670bccf7e-goog