* TARGET_CPUS in assign_irq_vector
@ 2008-09-09 22:30 Jeremy Fitzhardinge
From: Jeremy Fitzhardinge @ 2008-09-09 22:30 UTC (permalink / raw)
To: Yinghai Lu, Eric W. Biederman, Ingo Molnar; +Cc: Linux Kernel Mailing List
On x86-64, the genapic implementations of target_cpus return
cpu_online_map. What if more cpus have yet to come online? Shouldn't
it be cpu_possible_map?
Though I'm probably confused. I don't really know what the intent of
target_cpus and vector_allocation_domain is.
Thanks,
J
* Re: TARGET_CPUS in assign_irq_vector
From: Yinghai Lu @ 2008-09-09 22:54 UTC (permalink / raw)
To: Jeremy Fitzhardinge
Cc: Eric W. Biederman, Ingo Molnar, Linux Kernel Mailing List
On Tue, Sep 9, 2008 at 3:30 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> On x86-64, the genapic implementations of target_cpus return
> cpu_online_map. What if more cpus have yet to come online? Shouldn't
> it be cpu_possible_map?
>
> Though I'm probably confused. I don't really know what the intent of
> target_cpus and vector_allocation_domain is.
>
In logical (flat) mode, vector_allocation_domain returns all online
cpus, i.e. up to the 8 cpus that mode can address.
In phys_flat mode, vector_allocation_domain returns a cpumask with only
the one target cpu set.
target_cpus is the set of cpus that could be used to take the vector
and process that irq, so at the very least they should be online.
YH
* Re: TARGET_CPUS in assign_irq_vector
From: Jeremy Fitzhardinge @ 2008-09-09 23:46 UTC (permalink / raw)
To: Yinghai Lu; +Cc: Eric W. Biederman, Ingo Molnar, Linux Kernel Mailing List
Yinghai Lu wrote:
> target_cpus is the set of cpus that could be used to take the vector
> and process that irq, so at the very least they should be online.
>
Would it be wrong to make it cpu_possible_map?
J
* Re: TARGET_CPUS in assign_irq_vector
From: Yinghai Lu @ 2008-09-10 0:26 UTC (permalink / raw)
To: Jeremy Fitzhardinge
Cc: Eric W. Biederman, Ingo Molnar, Linux Kernel Mailing List
On Tue, Sep 9, 2008 at 4:46 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> Yinghai Lu wrote:
>> target_cpus is the set of cpus that could be used to take the vector
>> and process that irq, so at the very least they should be online.
>>
>
> Would it be wrong to make it cpu_possible_map?
>
it is wrong
YH
* Re: TARGET_CPUS in assign_irq_vector
From: Jeremy Fitzhardinge @ 2008-09-10 0:50 UTC (permalink / raw)
To: Yinghai Lu; +Cc: Eric W. Biederman, Ingo Molnar, Linux Kernel Mailing List
Yinghai Lu wrote:
> On Tue, Sep 9, 2008 at 4:46 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>
>> Yinghai Lu wrote:
>>
>>> target_cpus is the set of cpus that could be used to take the vector
>>> and process that irq, so at the very least they should be online.
>>>
>>>
>> Would it be wrong to make it cpu_possible_map?
>>
>>
> it is wrong
What happens if you online a new cpu and migrate the irq to it? Does it
get allocated a new vector?
I'm using create_irq() as a general irq and vector allocation mechanism
for Xen interrupts. I'd like to be able to allocate a vector across all
possible cpus so I can bind Xen event channels to vectors. Should I: 1)
add a create_irq_cpus() which takes a cpu mask rather than defaulting to
TARGET_CPUS, 2) modify struct genapic to insert my own target_cpus(),
3) give up because the idea is fundamentally ill-conceived, or 4)
something else?
Thanks,
J
* Re: TARGET_CPUS in assign_irq_vector
From: Yinghai Lu @ 2008-09-10 1:24 UTC (permalink / raw)
To: Jeremy Fitzhardinge
Cc: Eric W. Biederman, Ingo Molnar, Linux Kernel Mailing List
On Tue, Sep 9, 2008 at 5:50 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> Yinghai Lu wrote:
>> On Tue, Sep 9, 2008 at 4:46 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
>>
>>> Yinghai Lu wrote:
>>>
>>>> target_cpus is the set of cpus that could be used to take the vector
>>>> and process that irq, so at the very least they should be online.
>>>>
>>>>
>>> Would it be wrong to make it cpu_possible_map?
>>>
>>>
>> it is wrong
>
> What happens if you online a new cpu and migrate the irq to it? Does it
> get allocated a new vector?
For phys_flat mode: it will get a new vector on the new cpu.
>
> I'm using create_irq() as a general irq and vector allocation mechanism
> for Xen interrupts. I'd like to be able to allocate a vector across all
> possible cpus so I can bind Xen event channels to vectors. Should I: 1)
> add a create_irq_cpus() which takes a cpu mask rather than defaulting to
> TARGET_CPUS, 2) modify struct genapic to insert my own target_cpus(),
> 3) give up because the idea is fundamentally ill-conceived, or 4)
> something else?
It seems __assign_irq_vector needs to be reworked a little bit, so that
the allocation domain is passed in:

typedef cpumask_t (*vector_allocation_domain_t)(int cpu);

static int __assign_irq_vector(int irq, cpumask_t mask,
			       vector_allocation_domain_t p)
...
and then you could have your own:

static cpumask_t vec_domain_alloc(int cpu)
{
	/* allow the vector to be allocated on any possible cpu */
	cpumask_t domain = cpu_possible_map;

	return domain;
}

static int assign_irq_vector_all(int irq)
{
	cpumask_t mask = cpu_possible_map;
	unsigned long flags;
	int err;

	spin_lock_irqsave(&vector_lock, flags);
	err = __assign_irq_vector(irq, mask, vec_domain_alloc);
	spin_unlock_irqrestore(&vector_lock, flags);

	return err;
}
YH
* Re: TARGET_CPUS in assign_irq_vector
From: Eric W. Biederman @ 2008-09-10 6:45 UTC (permalink / raw)
To: Jeremy Fitzhardinge; +Cc: Yinghai Lu, Ingo Molnar, Linux Kernel Mailing List
Jeremy Fitzhardinge <jeremy@goop.org> writes:
> 3) give up because the idea is fundamentally ill-conceived, or 4)
> something else?
Yes.
When working with event channels you should not have any
truck with vectors and you should not call the architecture
specific do_IRQ().
Eric
* Re: TARGET_CPUS in assign_irq_vector
From: Jeremy Fitzhardinge @ 2008-09-10 19:44 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: Yinghai Lu, Ingo Molnar, Linux Kernel Mailing List
Eric W. Biederman wrote:
> Jeremy Fitzhardinge <jeremy@goop.org> writes:
>
>
>> 3) give up because the idea is fundamentally ill-conceived, or 4)
>> something else?
>>
>
> Yes.
>
> When working with event channels you should not have any
> truck with vectors and you should not call the architecture
> specific do_IRQ().
Hm. That would work OK for fully paravirtualized domains, which have no
direct access to real hardware in any form (well, there's pci
passthrough, but interrupts are all thoroughly massaged into event channels).
But for dom0, the kernel handles interrupts in a weird hybrid mode. The
interrupts themselves are delivered via event channels rather than via a
local apic, but the IO APIC is still under the kernel's control, and is
responsible for poking (Xen-allocated) vectors into it. This only
applies to physical irq event channels; there's no need to have vectors
for purely software event channels like interdomain, IPI and timers.
This is further complicated by the fact that the dom0 kernel parses the
ACPI and MPTABLES to find out about IO APICs, so the existing APIC
subsystem is already involved. I need to work out how I'd hook all
this together with a minimum of mess.
J
* Re: TARGET_CPUS in assign_irq_vector
From: Eric W. Biederman @ 2008-09-10 20:07 UTC (permalink / raw)
To: Jeremy Fitzhardinge; +Cc: Yinghai Lu, Ingo Molnar, Linux Kernel Mailing List
Jeremy Fitzhardinge <jeremy@goop.org> writes:
> Hm. That would work OK for fully paravirtualized domains, which have no
> direct access to real hardware in any form (well, there's pci
> passthrough, but interrupts are all thoroughly massaged into event channels).
>
> But for dom0, the kernel handles interrupts in a weird hybrid mode. The
> interrupts themselves are delivered via event channels rather than via a
> local apic, but the IO APIC is still under the kernel's control, and is
> responsible for poking (Xen-allocated) vectors into it. This only
> applies to physical irq event channels; there's no need to have vectors
> for purely software event channels like interdomain, IPI and timers.
> This is further complicated by the fact that the dom0 kernel parses the
> ACPI and MPTABLES to find out about IO APICs, so the existing APIC
> subsystem is already involved. I need to work out how I'd hook all
> this together with a minimum of mess.
In that case, having the event channel tell you which cpu and which
vector were received is sufficient. Then you can call into do_IRQ()
with that information, unless ack_irq() and friends are different
enough at the local apic level to cause a challenge.
For the rest of the event channel interrupts you simply want to
dispatch the irq directly.
Eric