From: Paolo Bonzini <pbonzini@redhat.com>
To: Linu Cherian <linuc.decode@gmail.com>, rkrcmar@redhat.com
Cc: kvm@vger.kernel.org, linu.cherian@cavium.com,
sunil.goutham@cavium.com, Geethasowjanya.Akula@cavium.com,
Paul McKenney <paulmck@linux.vnet.ibm.com>
Subject: Re: Possible race condition during lock_count increment in srcu_read_lock
Date: Mon, 29 May 2017 17:05:47 +0200 [thread overview]
Message-ID: <2519bca6-a0d6-1ce3-2dce-f3c9fecb21a0@redhat.com> (raw)
In-Reply-To: <20170529143057.GA12683@virtx40>
Adding Paul McKenney...
On 29/05/2017 16:30, Linu Cherian wrote:
> In KVM irq injection path, irqfd_wakeup in interrupt context
> calls srcu_read_lock(&kvm->irq_srcu). A worker thread, irqfd_inject
> in process context also calls srcu_read_lock inside kvm_set_irq.
>
> This can lead to a race condition while incrementing
> lock_count (via __this_cpu_inc), since no atomic instructions are
> used.
>
> Replacing __this_cpu_inc with this_cpu_inc (the atomic version) works
> around the problem, which supports this analysis.
Paul, how would you feel about using this_cpu_inc in
__srcu_read_lock and __srcu_read_unlock? Many architectures already
provide an efficient this_cpu_inc, and any architecture that has LL/SC
could provide one easily.
Thanks,
Paolo
> Possible solution
> -----------------
> One way is to avoid the srcu_read_lock/unlock usage in irq context
> altogether. On arm/arm64,
>
> if (kvm_arch_set_irq_inatomic(&irq, kvm, KVM_USERSPACE_IRQ_SOURCE_ID, 1,
> 			      false) == -EWOULDBLOCK)
>
> in irqfd_wakeup is always true, so schedule_work can be called
> directly:
>
> @@ -195,6 +195,11 @@ int __attribute__((weak)) kvm_arch_set_irq_inatomic(
>  	int idx;
>
>  	if (flags & POLLIN) {
> +		if (kvm_arch_set_irq_will_block_always()) {
> +			schedule_work(&irqfd->inject);
> +			goto skiplock;
> +		}
> +
>  		idx = srcu_read_lock(&kvm->irq_srcu);
>  		do {
>  			seq = read_seqcount_begin(&irqfd->irq_entry_sc);
> @@ -208,6 +213,7 @@ int __attribute__((weak)) kvm_arch_set_irq_inatomic(
>  		srcu_read_unlock(&kvm->irq_srcu, idx);
>  	}
>
> +skiplock:
>  	if (flags & POLLHUP) {
>  		/* The eventfd is closing, detach from KVM */
>  		unsigned long flags;
> This works without producing any warnings as well.
>
> Is a patch welcome in that direction?
>
> Appreciate your feedback on this.
>
>
>
Thread overview: 8+ messages
2017-05-29 14:30 Possible race condition during lock_count increment in srcu_read_lock Linu Cherian
2017-05-29 15:05 ` Paolo Bonzini [this message]
2017-05-29 15:50 ` Paul E. McKenney
2017-05-29 16:38 ` Paolo Bonzini
2017-05-29 20:14 ` Paul E. McKenney
2017-05-30 13:49 ` Paolo Bonzini
2017-05-30 16:19 ` Paul E. McKenney
2017-05-31 13:03 ` Linu Cherian