From: Julien Grall <julien.grall@linaro.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: patches@linaro.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [PATCH 2/5] xen/arm: Keep count of inflight interrupts
Date: Tue, 25 Jun 2013 18:46:57 +0100	[thread overview]
Message-ID: <51C9D791.8020407@linaro.org> (raw)
In-Reply-To: <alpine.DEB.2.02.1306251747261.4782@kaball.uk.xensource.com>

On 06/25/2013 05:58 PM, Stefano Stabellini wrote:

> On Tue, 25 Jun 2013, Ian Campbell wrote:
>> On Tue, 2013-06-25 at 00:04 +0100, Julien Grall wrote:
>>> From: Stefano Stabellini <stefano.stabellini@citrix.com>
>>>
>>> For the guest's timers (both virtual and physical), Xen will inject a
>>> virtual interrupt. Linux handles the timer interrupt as follows:
>>
>> We should be wary of developing things based on the way Linux happens
>> to do things. On x86 we have several timer modes, which can be
>> selected based upon guest behaviour (described in
>> docs/man/xl.cfg.pod.5). Do we need to do something similar here?
>>
>>>   1) Receive the interrupt and ack it
>>>   2) Handle the current event timer
>>>   3) Set the next event timer
>>>   4) EOI the interrupt
>>>
>>> It's unlikely possible to reinject another interrupt before
>>
>> I can't parse this sentence. "unlikely to be possible" perhaps? but I'm
>> not sure if that is what you meant.
>>
>>> the previous one is EOIed, because the next deadline is shorter than
>>> the time it takes to run the code up to the EOI.
>>
>> If we get into this situation once, is there any danger that we will
>> get into it repeatedly and overflow the count?
>>
>> Overall I'm not convinced this is the right approach to get the
>> behaviour we want. Isn't this interrupt level-triggered, with the level
>> being determined by a comparison of two registers? IOW can't we
>> determine whether to retrigger the interrupt or not by examining the
>> state of our emulated versions of those registers? A generic mechanism
>> to call back into the appropriate emulator on EOI plus a little bit of
>> logic in the vtimer code is all it ought to take.
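
For reference, the level Ian mentions is architecturally defined: the
virtual timer output is asserted while CNTV_CTL.ENABLE is set,
CNTV_CTL.IMASK is clear and the virtual count has passed CNTV_CVAL. An
EOI callback could recompute it along these lines; this is only a
sketch, and the struct layout here is an assumption, not the actual
emulator code:

    /* Sketch: would the vtimer line still be asserted after this EOI?
     * "t" stands for wherever the emulated CNTV_CTL/CNTV_CVAL live;
     * the guest's virtual count is the physical count minus CNTVOFF. */
    static int vtimer_level(struct vtimer *t)
    {
        uint64_t vcnt = READ_CP64(CNTPCT) - t->offset;

        return (t->ctl & CNTx_CTL_ENABLE) &&
               !(t->ctl & CNTx_CTL_MASK) &&
               vcnt >= t->cval;
    }

If this still returns true on EOI, the interrupt should be re-injected
rather than the LR freed.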
> 
> AFAICT this is what could happen:
> 
> - vtimer fires
> - xen masks the vtimer
> - xen adds the vtimer to the LR for the guest
> - xen EOIs the vtimer
> - the guest receives the vtimer interrupt
> - the guest sets the next deadline in the past
> - the guest enables the vtimer
> ## an unexpected vtimer interrupt is received by Xen but the current
> ## one is still being serviced
> - the guest EOIs the vtimer
> 
> As a result the guest loses an interrupt.
> Julien, is that correct?

Yes.
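
The vgic.c side of the patch is not quoted here, but the idea is that
an injection which finds the vIRQ already inflight bumps the count
instead of being dropped. Roughly (a sketch only; the branch below is
the existing early-return in vgic_vcpu_inject_irq, and presumably only
the atomic_inc is new):

    /* The vIRQ is already inflight: record one more occurrence and
     * let the maintenance handler re-inject it on EOI for as long as
     * the count stays above zero. */
    if ( !list_empty(&n->inflight) )
    {
        atomic_inc(&n->inflight_cnt);
        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
        return;
    }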

> If that is the case, this can only happen once, right?  In that case
> rather than an atomic_t we could just have a bit in the status field I
> proposed before. It should be enough for us to keep track of the case
> when the irq is supposed to stay high even after the guest EOIs it. (Of
> course that means that we need to re-inject it into the guest).


For the timer, yes. I wonder what happens if Xen receives an SGI (for
instance the SCHEDULE IPI) twice: must Xen re-inject it multiple times?
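
Stefano's single-bit alternative would look roughly like this in the
maintenance handler (a sketch; GIC_IRQ_GUEST_PENDING is a made-up name
for a bit in the status field he proposed, not something in the tree
today):

    /* On EOI: if the vIRQ was raised again while it was inflight,
     * re-inject it once instead of freeing the LR. Unlike a counter,
     * this forgets how many times an SGI was coalesced. */
    if ( test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
    {
        gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
        spin_unlock_irq(&gic.lock);
        i++;
        continue;
    }

For an idempotent SCHEDULE IPI, delivering it once more is presumably
enough, but that is exactly the question above.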

>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@citrix.com>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>> ---
>>>  xen/arch/arm/gic.c           |   35 +++++++++++++++++++++++------------
>>>  xen/arch/arm/vgic.c          |    1 +
>>>  xen/include/asm-arm/domain.h |    2 ++
>>>  3 files changed, 26 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>>> index 0fee3f2..21575df 100644
>>> --- a/xen/arch/arm/gic.c
>>> +++ b/xen/arch/arm/gic.c
>>> @@ -817,7 +817,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>>>  
>>>      while ((i = find_next_bit((const long unsigned int *) &eisr,
>>>                                64, i)) < 64) {
>>> -        struct pending_irq *p;
>>> +        struct pending_irq *p, *n;
>>>          int cpu, eoi;
>>>  
>>>          cpu = -1;
>>> @@ -826,13 +826,32 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>>>          spin_lock_irq(&gic.lock);
>>>          lr = GICH[GICH_LR + i];
>>>          virq = lr & GICH_LR_VIRTUAL_MASK;
>>> +
>>> +        p = irq_to_pending(v, virq);
>>> +        if ( p->desc != NULL ) {
>>> +            p->desc->status &= ~IRQ_INPROGRESS;
> 
> Now that I am thinking about this, shouldn't this be protected by taking
> the desc->lock?

Right. I don't think desc->lock is enough if we want to protect against
release_irq.
If release_irq is called, it will wait until IRQ_INPROGRESS is
cleared. How about moving the clearing of IRQ_INPROGRESS to the end of
the block and adding a dmb() before it?
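
Something along these lines (only a sketch of the ordering, relative
to the code quoted in this patch):

    /* Tear the LR down first, then clear IRQ_INPROGRESS, with a
     * barrier in between so release_irq() cannot observe the flag
     * clear while the LR is still live. */
    GICH[GICH_LR + i] = 0;
    clear_bit(i, &this_cpu(lr_mask));

    dmb();
    p->desc->status &= ~IRQ_INPROGRESS;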

>>> +            /* Assume only one pcpu needs to EOI the irq */
>>> +            cpu = p->desc->arch.eoi_cpu;
>>> +            eoi = 1;
>>> +            pirq = p->desc->irq;
>>> +        }
>>> +        if ( !atomic_dec_and_test(&p->inflight_cnt) )
>>> +        {
>>> +            /* Physical IRQ can't be reinjected */
>>> +            WARN_ON(p->desc != NULL);
>>> +            gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>>> +            spin_unlock_irq(&gic.lock);
>>> +            i++;
>>> +            continue;
>>> +        }
>>> +
>>>          GICH[GICH_LR + i] = 0;
>>>          clear_bit(i, &this_cpu(lr_mask));
>>>  
>>>          if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>>> -            p = list_entry(v->arch.vgic.lr_pending.next, typeof(*p), lr_queue);
>>> -            gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>>> -            list_del_init(&p->lr_queue);
>>> +            n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>>> +            gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>>> +            list_del_init(&n->lr_queue);
>>>              set_bit(i, &this_cpu(lr_mask));
>>>          } else {
>>>              gic_inject_irq_stop();
>>> @@ -840,14 +859,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>>>          spin_unlock_irq(&gic.lock);
>>>  
>>>          spin_lock_irq(&v->arch.vgic.lock);
>>> -        p = irq_to_pending(v, virq);
>>> -        if ( p->desc != NULL ) {
>>> -            p->desc->status &= ~IRQ_INPROGRESS;
>>> -            /* Assume only one pcpu needs to EOI the irq */
>>> -            cpu = p->desc->arch.eoi_cpu;
>>> -            eoi = 1;
>>> -            pirq = p->desc->irq;
>>> -        }
>>>          list_del_init(&p->inflight);
>>>          spin_unlock_irq(&v->arch.vgic.lock);
>>>  
