xen-devel.lists.xenproject.org archive mirror
From: George Dunlap <george.dunlap@eu.citrix.com>
To: Julien Grall <julien.grall@linaro.org>,
	Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [PATCH v1] xen/arm: Fix deadlock in gic_set_guest_irq()
Date: Wed, 5 Feb 2014 14:38:48 +0000	[thread overview]
Message-ID: <52F24CF8.5070500@eu.citrix.com> (raw)
In-Reply-To: <52F10CD4.1050000@linaro.org>

On 02/04/2014 03:52 PM, Julien Grall wrote:
> Forgot to add George.
>
> On 02/03/2014 05:51 PM, Julien Grall wrote:
>> (+ Xen ARM maintainers)
>>
>> Hello Oleksandr,
>>
>> Thanks for the patch. For next time, can you add the Xen ARM maintainers
>> in cc? With the amount of mail in the mailing list, your mail could be
>> lost easily. :)
>>
>> On 02/03/2014 05:33 PM, Oleksandr Tyshchenko wrote:
>>> The possible deadlock scenario is explained below:
>>>
>>> non interrupt context:    interrupt context       interrupt context
>>>                            (CPU0):                 (CPU1):
>>> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>>>    |                         |                       |
>>>    vgic_disable_irqs()       ...                     ...
>>>      |                         |                       |
>>>      gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>>>      |  ...                      |                       |
>>>      |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>>>      |  ...                        ...                     ...
>>>      |  ... <----------------.---- spin_lock_irqsave(...)  ...
>>>      |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>>>      |  ...                  . .       Oops! The lock has already been taken.
>>>      |  spin_unlock(...)     . .
>>>      |  ...                  . .
>>>      gic_irq_disable()       . .
>>>         ...                  . .
>>>         spin_lock(...)       . .
>>>         ...                  . .
>>>         ... <----------------. .
>>>         ... <------------------.
>>>         ...
>>>         spin_unlock(...)
>>>
>>> Since gic_remove_from_queues() and gic_irq_disable() are called from
>>> non-interrupt context and acquire the same lock as gic_set_guest_irq(),
>>> which is called from interrupt context, we must disable interrupts in
>>> these functions to avoid possible deadlocks.
>>>
>>> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
>> Acked-by: Julien Grall <julien.grall@linaro.org>
>>
>> I think this patch should have a release exception for Xen 4.4. It fixes
>> a race condition in the interrupt management.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>>
>>> ---
>>>   xen/arch/arm/gic.c |   10 ++++++----
>>>   1 file changed, 6 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>>> index c44a4d0..7d83b0c 100644
>>> --- a/xen/arch/arm/gic.c
>>> +++ b/xen/arch/arm/gic.c
>>> @@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
>>>   static void gic_irq_disable(struct irq_desc *desc)
>>>   {
>>>       int irq = desc->irq;
>>> +    unsigned long flags;
>>>   
>>> -    spin_lock(&desc->lock);
>>> +    spin_lock_irqsave(&desc->lock, flags);
>>>       spin_lock(&gic.lock);
>>>       /* Disable routing */
>>>       GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>>>       desc->status |= IRQ_DISABLED;
>>>       spin_unlock(&gic.lock);
>>> -    spin_unlock(&desc->lock);
>>> +    spin_unlock_irqrestore(&desc->lock, flags);
>>>   }
>>>   
>>>   static unsigned int gic_irq_startup(struct irq_desc *desc)
>>> @@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
>>>   void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>>>   {
>>>       struct pending_irq *p = irq_to_pending(v, virtual_irq);
>>> +    unsigned long flags;
>>>   
>>> -    spin_lock(&gic.lock);
>>> +    spin_lock_irqsave(&gic.lock, flags);
>>>       if ( !list_empty(&p->lr_queue) )
>>>           list_del_init(&p->lr_queue);
>>> -    spin_unlock(&gic.lock);
>>> +    spin_unlock_irqrestore(&gic.lock, flags);
>>>   }
>>>   
>>>   void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>>>
>>
>


Thread overview: 8+ messages
2014-02-03 17:33 [PATCH v1] xen/arm: Fix deadlock in gic_set_guest_irq() Oleksandr Tyshchenko
2014-02-03 17:51 ` Julien Grall
2014-02-03 17:59   ` Oleksandr Tyshchenko
2014-02-04 15:52   ` Julien Grall
2014-02-05 14:38     ` George Dunlap [this message]
2014-02-04 15:21 ` Stefano Stabellini
2014-02-04 15:36   ` Oleksandr Tyshchenko
2014-02-06 13:07 ` Ian Campbell
