From: Sean Christopherson <seanjc@google.com>
To: Michal Luczaj <mhal@rbox.co>
Cc: pbonzini@redhat.com, dwmw2@infradead.org, kvm@vger.kernel.org,
paul@xen.org
Subject: Re: [PATCH 1/2] KVM: x86: Fix deadlock in kvm_vm_ioctl_set_msr_filter()
Date: Thu, 5 Jan 2023 22:23:12 +0000
Message-ID: <Y7dN0Negds7XUbvI@google.com>
In-Reply-To: <c33180be-a5cc-64b1-f2e5-6a1a5dd0d996@rbox.co>
On Thu, Jan 05, 2023, Michal Luczaj wrote:
> On 1/3/23 18:17, Sean Christopherson wrote:
> > On Thu, Dec 29, 2022, Michal Luczaj wrote:
> >> Move synchronize_srcu(&kvm->srcu) out of kvm->lock critical section.
> >
> > This needs a much more descriptive changelog, and an update to
> > Documentation/virt/kvm/locking.rst to define the ordering requirements between
> > kvm->srcu and kvm->lock. And IIUC, there is no deadlock in the current code
> > base, so this really should be a prep patch that's sent along with the Xen series[*]
> > that wants to take kvm->srcu outside of kvm->lock.
> >
> > [*] https://lore.kernel.org/all/20221222203021.1944101-2-mhal@rbox.co
>
> I'd be happy to provide a more descriptive changelog, but right now I'm a
> bit confused. I'd be really grateful for some clarifications:
>
> I'm not sure how to understand "no deadlock in the current code base". I've
> run selftests[1] under the up-to-date mainline/master and I do see the
> deadlocks. Is there a branch where kvm_xen_set_evtchn() is not taking
> kvm->lock while inside kvm->srcu?
Ah, no, I'm the one who's confused; I saw an earlier patch touch SRCU stuff and
assumed it introduced the deadlock.  Actually, it's the KVM Xen code that's confused.
This comment in kvm_xen_set_evtchn() is a tragicomedy. It explicitly calls out
the exact case that would be problematic (Xen hypercall), but commit 2fd6df2f2b47
("KVM: x86/xen: intercept EVTCHNOP_send from guests") ran right past that.
        /*
         * For the irqfd workqueue, using the main kvm->lock mutex is
         * fine since this function is invoked from kvm_set_irq() with
         * no other lock held, no srcu. In future if it will be called
         * directly from a vCPU thread (e.g. on hypercall for an IPI)
         * then it may need to switch to using a leaf-node mutex for
         * serializing the shared_info mapping.
         */
        mutex_lock(&kvm->lock);
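To spell out the deadlock now that the hypercall path is a reality (simplified
call chains, not the exact code):

        /* vCPU thread: KVM_RUN, guest makes a Xen hypercall */
        srcu_read_lock(&kvm->srcu);           /* held for the duration of KVM_RUN */
          kvm_xen_hypercall()
            kvm_xen_set_evtchn()
              mutex_lock(&kvm->lock);         /* blocks behind the ioctl thread */

        /* ioctl thread: e.g. KVM_X86_SET_MSR_FILTER */
        mutex_lock(&kvm->lock);
          synchronize_srcu(&kvm->srcu);       /* waits forever on the vCPU's SRCU
                                                 read-side critical section */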
> Also, is there a consensus as to the lock ordering? IOW, is the state of
> virt/kvm/locking.rst up to date, regardless of the discussion going on[2]?
I'm not convinced that allowing kvm->lock to be taken while holding kvm->srcu is
a good idea. Requiring kvm->lock to be dropped before doing synchronize_srcu()
isn't problematic, and arguably it's a good rule since holding kvm->lock for
longer than necessary is undesirable. What I don't like is taking kvm->lock inside
kvm->srcu. It's not documented, but in pretty much every other case except Xen,
sleepable locks are taken outside of kvm->srcu, e.g. vcpu->mutex, slots_lock, and
quite often kvm->lock itself.
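I.e. the implicit rule everywhere else is (sketch of the common pattern, not any
specific function):

        mutex_lock(&kvm->lock);               /* sleepable lock first... */
        idx = srcu_read_lock(&kvm->srcu);     /* ...then the SRCU reader */
        ...
        srcu_read_unlock(&kvm->srcu, idx);
        mutex_unlock(&kvm->lock);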
Ha! Case in point. The aforementioned Xen code blatantly violates KVM's locking
rules:
- kvm->lock is taken outside vcpu->mutex
In the kvm_xen_hypercall() case, vcpu->mutex is held (KVM_RUN) when kvm_xen_set_evtchn()
is called, i.e. kvm_xen_set_evtchn() ends up taking kvm->lock inside vcpu->mutex.  It
doesn't cause explosions
because KVM x86 only takes vcpu->mutex inside kvm->lock for SEV, and no one runs
Xen+SEV guests, but the Xen code is still a trainwreck waiting to happen.
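Roughly (simplified, intermediate helpers elided):

        kvm_vcpu_ioctl(KVM_RUN)
          mutex_lock(&vcpu->mutex);           /* vcpu->mutex taken first */
            ...
              kvm_xen_set_evtchn()
                mutex_lock(&kvm->lock);       /* kvm->lock inside vcpu->mutex,
                                                 inverted vs. the documented order */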
In other words, I'm fine with this patch for optimization purposes, but I don't
think we should call it a bug fix.  Commit 2fd6df2f2b47 ("KVM: x86/xen: intercept
EVTCHNOP_send from guests") is what's actually broken and needs fixing.
Thread overview: 36+ messages
2022-12-22 20:30 [RFC PATCH 0/2] Use-after-free in kvm_xen_eventfd_update() Michal Luczaj
2022-12-22 20:30 ` [RFC PATCH 1/2] KVM: x86/xen: Fix use-after-free " Michal Luczaj
2022-12-24 8:52 ` Paolo Bonzini
2022-12-24 11:14 ` Michal Luczaj
2022-12-27 11:11 ` Paolo Bonzini
2022-12-28 0:21 ` Michal Luczaj
2022-12-28 9:32 ` David Woodhouse
2022-12-28 9:39 ` Paolo Bonzini
2022-12-28 9:54 ` David Woodhouse
2022-12-28 11:58 ` Paolo Bonzini
2022-12-28 12:35 ` David Woodhouse
2022-12-28 13:14 ` Paolo Bonzini
2022-12-29 2:12 ` Michal Luczaj
2022-12-29 21:03 ` Paolo Bonzini
2022-12-29 21:17 ` [PATCH 0/2] Fix deadlocks in kvm_vm_ioctl_set_msr_filter() and Michal Luczaj
2022-12-29 21:17 ` [PATCH 1/2] KVM: x86: Fix deadlock in kvm_vm_ioctl_set_msr_filter() Michal Luczaj
2023-01-03 17:17 ` Sean Christopherson
2023-01-03 17:28 ` Sean Christopherson
2023-01-05 19:32 ` Michal Luczaj
2023-01-05 22:23 ` Sean Christopherson [this message]
2023-01-05 23:02 ` Paolo Bonzini
2023-01-05 23:07 ` Sean Christopherson
2023-01-10 12:55 ` David Woodhouse
2023-01-10 14:10 ` Paolo Bonzini
2023-01-10 15:27 ` David Woodhouse
2023-01-10 19:17 ` David Woodhouse
2023-01-10 19:37 ` Sean Christopherson
2023-01-10 19:46 ` David Woodhouse
2023-01-11 8:49 ` David Woodhouse
2023-01-11 22:49 ` Paolo Bonzini
2023-01-06 10:06 ` David Woodhouse
2023-01-07 0:06 ` Michal Luczaj
2023-01-05 22:46 ` Sean Christopherson
2022-12-29 21:17 ` [PATCH 2/2] KVM: x86: Fix deadlock in kvm_vm_ioctl_set_pmu_event_filter() Michal Luczaj
2022-12-22 20:30 ` [RFC PATCH 2/2] KVM: x86/xen: Simplify eventfd IOCTLs Michal Luczaj
2022-12-24 8:54 ` Paolo Bonzini