From: Julien Grall <julien.grall@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano.Stabellini@eu.citrix.com, patches@linaro.org,
	xen-devel@lists.xen.org
Subject: Re: [PATCH 5/5] xen/arm: Only enable physical IRQs when the guest asks
Date: Wed, 26 Jun 2013 14:03:44 +0100	[thread overview]
Message-ID: <51CAE6B0.8000205@linaro.org> (raw)
In-Reply-To: <1372244150.7337.35.camel@zakaz.uk.xensource.com>

On 06/26/2013 11:55 AM, Ian Campbell wrote:

> On Tue, 2013-06-25 at 18:38 +0100, Julien Grall wrote:
>>>> @@ -719,11 +731,18 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>>>>      unsigned long flags;
>>>>      int retval;
>>>>      bool_t level;
>>>> +    struct pending_irq *p;
>>>> +    /* XXX: handler other VCPU than 0 */
>>>
>>> That should be something like "XXX: handle VCPUs other than 0".
>>>
>>> This only matters if we can route SGIs or PPIs to the guest though I
>>> think, since they are the only banked interrupts? For SPIs we actually
>>> want to actively avoid doing this multiple times, don't we?
>>
>>
>> Yes. Here the VCPU is only used to retrieve the struct pending_irq.
> 
> Which is per-CPU for PPIs and SGIs. Do we not care about PPIs here?

I don't see a reason to route physical PPIs. Is it possible to have a
device (other than the timer and the GIC) that uses PPIs?
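
To make the first point concrete: the VCPU argument is only there so we
can pick the right pending_irq array, i.e. the banked per-VCPU one for
SGIs/PPIs and the shared per-domain one for SPIs. Roughly like this (a
simplified sketch, not the exact vgic code; the field names are
approximate):

    static struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
    {
        /* SGIs (0-15) and PPIs (16-31) are banked, so per-VCPU state. */
        if ( irq < 32 )
            return &v->arch.vgic.pending_irqs[irq];
        /* SPIs are shared by the whole domain. */
        return &v->domain->arch.vgic.pending_irqs[irq - 32];
    }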

>>>
>>> For the banked interrupts I think we just need a loop here, or for
>>> p->desc to not be part of the pending_irq struct but actually part of
>>> some separate per-domain data structure, since it would be very weird to
>>> have a domain where the PPIs differed between CPUs. (I'm not sure if
>>> that is allowed by the hardware, I bet it is, but it would be a
>>> pathological case IMHO...).
>>
>>> I think a per-domain irq_desc * array is probably the right answer,
>>> unless someone can convincingly argue that PPI routing differing between
>>> VCPUs in a guest is a useful thing...
>>
>>
>> So far I haven't seen PPIs on devices other than the arch timers and
>> the GIC. I don't know if it's possible, but pending_irq structs are
>> also banked for PPIs, so it's not an issue.
>>
>> The issue is how we link the physical PPI to the virtual PPI. Is it a
>> 1:1 mapping? How does Xen handle a PPI when it comes in on a VCPU which
>> doesn't handle it (for instance a domU)?
> 
> How do you mean?


My sentence wasn't clear.

As for the arch timer, which uses PPIs: routing these interrupts
requires some code in Xen to support the device, mainly to save/restore
the context when the VCPU is moved.
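
Concretely, something along these lines would be needed on the VCPU
context switch path (just a sketch; the struct, accessor and constant
names below are invented for illustration, not the actual vtimer code):

    /* Sketch only: per-VCPU virtual timer state and the hooks the context
     * switch path would call.  Names are invented for illustration. */
    struct vtimer_state {
        uint32_t ctl;
        uint64_t cval;
    };

    static void virt_timer_save(struct vcpu *v)
    {
        struct vtimer_state *t = &v->arch.virt_timer;

        t->ctl = read_cntv_ctl();
        t->cval = read_cntv_cval();
        /* Mask the timer so its physical PPI doesn't fire while the
         * VCPU is descheduled. */
        write_cntv_ctl(t->ctl & ~CNTV_CTL_ENABLE);
    }

    static void virt_timer_restore(struct vcpu *v)
    {
        struct vtimer_state *t = &v->arch.virt_timer;

        write_cntv_cval(t->cval);
        write_cntv_ctl(t->ctl);
    }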

Can we assume that Xen will never route PPIs to a guest?
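
If we can't assume that, then your per-domain array is probably the way
to go. A rough sketch of what I understood (structure, field and helper
names are made up here):

    /* Rough sketch of a per-domain mapping: one irq_desc pointer per banked
     * interrupt, shared by all VCPUs of the domain.  Names are made up. */
    #define NR_BANKED_IRQS 32   /* 16 SGIs + 16 PPIs */

    struct arch_domain_banked_irqs {
        /* Physical irq_desc backing each banked virtual IRQ, or NULL if
         * the virtual IRQ has no physical counterpart. */
        struct irq_desc *desc[NR_BANKED_IRQS];
    };

    static struct irq_desc *banked_virq_to_desc(struct domain *d, unsigned int virq)
    {
        ASSERT(virq < NR_BANKED_IRQS);
        return d->arch.banked_irqs.desc[virq];
    }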

-- 
Julien

Thread overview: 46+ messages
2013-06-24 23:04 [PATCH 0/5] Fix multiple issues with the interrupts on ARM Julien Grall
2013-06-24 23:04 ` [PATCH 1/5] xen/arm: Physical IRQ is not always equal to virtual IRQ Julien Grall
2013-06-25 13:16   ` Stefano Stabellini
2013-06-25 15:21     ` Julien Grall
2013-06-25 16:06       ` Ian Campbell
2013-06-24 23:04 ` [PATCH 2/5] xen/arm: Keep count of inflight interrupts Julien Grall
2013-06-25 16:12   ` Ian Campbell
2013-06-25 16:58     ` Stefano Stabellini
2013-06-25 17:46       ` Julien Grall
2013-06-25 18:38         ` Stefano Stabellini
2013-06-26 10:59           ` Ian Campbell
2013-06-26 11:10             ` Stefano Stabellini
2013-06-26 11:16               ` Ian Campbell
2013-06-26 10:58       ` Ian Campbell
2013-06-26 11:08         ` Stefano Stabellini
2013-06-26 11:15           ` Ian Campbell
2013-06-26 11:23             ` Stefano Stabellini
2013-06-26 11:41               ` Ian Campbell
2013-06-26 11:50                 ` Stefano Stabellini
2013-06-26 11:57                   ` Ian Campbell
2013-06-26 14:02                     ` Stefano Stabellini
2013-06-24 23:04 ` [PATCH 3/5] xen/arm: Don't reinject the IRQ if it's already in LRs Julien Grall
2013-06-25 13:24   ` Stefano Stabellini
2013-06-25 13:55     ` Julien Grall
2013-06-25 16:36       ` Stefano Stabellini
2013-06-25 16:46         ` Ian Campbell
2013-06-25 17:05           ` Stefano Stabellini
2013-06-26 10:53             ` Ian Campbell
2013-06-26 11:19               ` Stefano Stabellini
2013-06-25 16:48         ` Julien Grall
2013-06-25 16:59           ` Stefano Stabellini
2013-06-25 16:14     ` Ian Campbell
2013-06-24 23:04 ` [PATCH 4/5] xen/arm: Rename gic_irq_{startup, shutdown} to gic_irq_{mask, unmask} Julien Grall
2013-06-24 23:04 ` [PATCH 5/5] xen/arm: Only enable physical IRQs when the guest asks Julien Grall
2013-06-25 16:19   ` Stefano Stabellini
2013-06-25 16:55     ` Julien Grall
2013-06-25 17:07       ` Stefano Stabellini
2013-12-02 17:26     ` Ian Campbell
2013-12-02 17:37       ` Stefano Stabellini
2013-06-25 16:28   ` Ian Campbell
2013-06-25 17:38     ` Julien Grall
2013-06-25 18:27       ` Stefano Stabellini
2013-06-26 10:55       ` Ian Campbell
2013-06-26 13:03         ` Julien Grall [this message]
2013-07-31 13:08 ` [PATCH 0/5] Fix multiple issues with the interrupts on ARM Andrii Anisov
2013-07-31 14:00   ` Julien Grall
