public inbox for linux-kernel@vger.kernel.org
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	x86@kernel.org
Cc: boris.ostrovsky@oracle.com, hpa@zytor.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, stable@vger.kernel.org,
	Waiman Long <longman@redhat.com>,
	peterz@infradead.org
Subject: Re: [PATCH 2/2] xen: make xen_qlock_wait() nestable
Date: Mon, 1 Oct 2018 09:38:11 +0200	[thread overview]
Message-ID: <8ae7807d-9f0d-e6f2-ef0d-9dce56d06165@suse.com> (raw)
In-Reply-To: <20181001071641.19282-3-jgross@suse.com>

Correcting Waiman's mail address

On 01/10/2018 09:16, Juergen Gross wrote:
> xen_qlock_wait() isn't safe against nested calls caused by interrupts. A
> xen_qlock_kick() meant for an outer nesting level can be lost if a deeper
> nesting level was active right before the call of xen_poll_irq():
> 
> CPU 1:                                   CPU 2:
> spin_lock(lock1)
>                                          spin_lock(lock1)
>                                          -> xen_qlock_wait()
>                                             -> xen_clear_irq_pending()
>                                             Interrupt happens
> spin_unlock(lock1)
> -> xen_qlock_kick(CPU 2)
> spin_lock_irqsave(lock2)
>                                          spin_lock_irqsave(lock2)
>                                          -> xen_qlock_wait()
>                                             -> xen_clear_irq_pending()
>                                                clears kick for lock1
>                                             -> xen_poll_irq()
> spin_unlock_irqrestore(lock2)
> -> xen_qlock_kick(CPU 2)
>                                             wakes up
>                                          spin_unlock_irqrestore(lock2)
>                                          IRET
>                                            resumes in xen_qlock_wait()
>                                            -> xen_poll_irq()
>                                            never wakes up
> 
> The solution is to disable interrupts in xen_qlock_wait() and not to
> poll for the irq when xen_qlock_wait() is called in NMI context.
> 
> Cc: stable@vger.kernel.org
> Cc: longman@redhat.com
> Cc: peterz@infradead.org
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/xen/spinlock.c | 24 ++++++++++--------------
>  1 file changed, 10 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index cd210a4ba7b1..e8d880e98057 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -39,29 +39,25 @@ static void xen_qlock_kick(int cpu)
>   */
>  static void xen_qlock_wait(u8 *byte, u8 val)
>  {
> +	unsigned long flags;
>  	int irq = __this_cpu_read(lock_kicker_irq);
>  
>  	/* If kicker interrupts not initialized yet, just spin */
> -	if (irq == -1)
> +	if (irq == -1 || in_nmi())
>  		return;
>  
> -	/* If irq pending already clear it and return. */
> +	/* Guard against reentry. */
> +	local_irq_save(flags);
> +
> +	/* If irq pending already clear it. */
>  	if (xen_test_irq_pending(irq)) {
>  		xen_clear_irq_pending(irq);
> -		return;
> +	} else if (READ_ONCE(*byte) == val) {
> +		/* Block until irq becomes pending (or a spurious wakeup) */
> +		xen_poll_irq(irq);
>  	}
>  
> -	if (READ_ONCE(*byte) != val)
> -		return;
> -
> -	/*
> -	 * If an interrupt happens here, it will leave the wakeup irq
> -	 * pending, which will cause xen_poll_irq() to return
> -	 * immediately.
> -	 */
> -
> -	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
> -	xen_poll_irq(irq);
> +	local_irq_restore(flags);
>  }
>  
>  static irqreturn_t dummy_handler(int irq, void *dev_id)
> 

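For readability, this is roughly how xen_qlock_wait() ends up looking with
the patch applied (a consolidated sketch reconstructed from the hunk above;
code outside the hunk's context lines is not shown):

static void xen_qlock_wait(u8 *byte, u8 val)
{
	unsigned long flags;
	int irq = __this_cpu_read(lock_kicker_irq);

	/* If kicker interrupts not initialized yet, just spin */
	if (irq == -1 || in_nmi())
		return;

	/* Guard against reentry. */
	local_irq_save(flags);

	/* If irq pending already clear it. */
	if (xen_test_irq_pending(irq)) {
		xen_clear_irq_pending(irq);
	} else if (READ_ONCE(*byte) == val) {
		/* Block until irq becomes pending (or a spurious wakeup) */
		xen_poll_irq(irq);
	}

	local_irq_restore(flags);
}

With interrupts disabled across the pending-check and poll, an interrupt
handler can no longer consume the kick meant for an outer nesting level; the
only remaining way to nest is an NMI, which the in_nmi() check handles by
falling back to plain spinning.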


Thread overview: 18+ messages
2018-10-01  7:16 [PATCH 0/2] xen: fix two issues in Xen pv qspinlock handling Juergen Gross
2018-10-01  7:16 ` [PATCH 1/2] xen: fix race in xen_qlock_wait() Juergen Gross
2018-10-01  7:37   ` Juergen Gross
2018-10-01  8:54   ` [Xen-devel] " Jan Beulich
2018-10-01  7:16 ` [PATCH 2/2] xen: make xen_qlock_wait() nestable Juergen Gross
2018-10-01  7:38   ` Juergen Gross [this message]
2018-10-01  8:57   ` [Xen-devel] " Jan Beulich
     [not found]   ` <5BB1E18802000078001ED127@suse.com>
2018-10-01  9:03     ` Juergen Gross
2018-10-01  9:18       ` Jan Beulich
2018-10-10 11:53   ` David Woodhouse
2018-10-10 12:30     ` Thomas Gleixner
2018-10-10 12:44       ` David Woodhouse
2018-10-10 12:47         ` Thomas Gleixner
2018-10-10 13:38           ` Juergen Gross
2018-10-10 13:53             ` David Woodhouse
2018-10-01  7:37 ` [PATCH 0/2] xen: fix two issues in Xen pv qspinlock handling Juergen Gross
2018-10-09 14:40 ` David Woodhouse
2018-10-09 14:52   ` Juergen Gross

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=8ae7807d-9f0d-e6f2-ef0d-9dce56d06165@suse.com \
    --to=jgross@suse.com \
    --cc=boris.ostrovsky@oracle.com \
    --cc=bp@alien8.de \
    --cc=hpa@zytor.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=longman@redhat.com \
    --cc=mingo@redhat.com \
    --cc=peterz@infradead.org \
    --cc=stable@vger.kernel.org \
    --cc=tglx@linutronix.de \
    --cc=x86@kernel.org \
    --cc=xen-devel@lists.xenproject.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
