public inbox for linux-kernel@vger.kernel.org
From: Jeremy Fitzhardinge <jeremy@goop.org>
To: Ingo Molnar <mingo@elte.hu>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Yinghai Lu <yhlu.kernel@gmail.com>
Subject: Re: Should irq_chip->mask disable percpu interrupts to all cpus, or just to this cpu?
Date: Sat, 27 Sep 2008 21:58:34 -0700
Message-ID: <48DF0EFA.1010904@goop.org>
In-Reply-To: <20080927194424.GG18619@elte.hu>

Ingo Molnar wrote:
> * Eric W. Biederman <ebiederm@xmission.com> wrote:
>
>> Jeremy Fitzhardinge <jeremy@goop.org> writes:
>>
>>> I found handle_percpu_irq(), which addresses my concerns.  It doesn't
>>> attempt to mask the interrupt, takes no locks, and doesn't set or test
>>> IRQ_INPROGRESS in desc->status, so it will scale perfectly across
>>> multiple cpus.  It makes no changes to the desc structure, so there
>>> isn't even any cacheline bouncing.
>>>
>> kstat_irqs, though, is arguably part of the irq structure, and
>> kstat_irqs is a major pain in my book.
>>
>> And even for a rare event you take a cacheline read.
>> I don't think we are quite there yet, but we really want to allocate
>> irq_desc on the right NUMA node in a multi-socket system, to reduce
>> the cache miss times.
>>
>
> note that we already do _almost_ that in tip/irq/sparseirq: dyn_array[]
> will extend itself in a NUMA-aware fashion. (normal device irq_desc
> entries will be allocated via kmalloc)
>
> what would be needed is to deallocate/reallocate irq_desc when the IRQ
> affinity is changed (i.e. when a device is migrated to a specific NUMA
> node).
>
>> Is it a big deal?  Probably not.  But I think it would be a bad idea
>> to increasingly use infrastructure that will make it hard to optimize
>> the code.
>>
>> Especially since the common case in high-performance drivers is going
>> to be individually routable irq sources: one queue per cpu and one
>> irq per queue.  That sounds like the same case you have.
>>
>
> agreed - the kstat_irqs cacheline bounce would show up in Xen
> benchmarks, I'm sure.
>
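
For context, the handle_percpu_irq() flow under discussion is short.  A
rough paraphrase of its shape in kernels of this era (not verbatim from
kernel/irq/chip.c):

/* No desc->lock is taken and IRQ_INPROGRESS is never set or tested,
 * so simultaneous invocations on different CPUs never serialize
 * against each other. */
void handle_percpu_irq(unsigned int irq, struct irq_desc *desc)
{
	kstat_this_cpu.irqs[irq]++;	/* the kstat_irqs accounting
					   Eric mentions */

	if (desc->chip->ack)
		desc->chip->ack(irq);

	handle_IRQ_event(irq, desc->action);

	if (desc->chip->eoi)
		desc->chip->eoi(irq);
}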
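
And Ingo's deallocate/reallocate idea would amount to something like the
following hypothetical sketch; move_irq_desc() is an illustrative name,
and only kmalloc_node() is an existing interface here:

/* Hypothetical: reallocate an irq_desc on the NUMA node the IRQ has
 * just been made affine to, so its hot cachelines become node-local. */
struct irq_desc *move_irq_desc(struct irq_desc *old, int node)
{
	struct irq_desc *desc;

	desc = kmalloc_node(sizeof(*desc), GFP_KERNEL, node);
	if (!desc)
		return old;	/* keep the old descriptor on failure */

	memcpy(desc, old, sizeof(*desc));
	spin_lock_init(&desc->lock);
	/* ...swap the new descriptor into the irq lookup structure, then
	 * free the old one once no CPU can still reference it. */
	return desc;
}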

I've put that approach aside anyway, since I couldn't get it to work
after a day of fiddling and I didn't want to waste too much time on it.
I've just restricted myself to avoiding the normal interrupt delivery
path and going directly from event channel to irq to desc->handler.

    J

Thread overview: 8+ messages
2008-09-23 20:02 Should irq_chip->mask disable percpu interrupts to all cpus, or just to this cpu? Jeremy Fitzhardinge
2008-09-24  8:45 ` Ingo Molnar
2008-09-24  9:54   ` Eric W. Biederman
2008-09-24 10:18     ` Ingo Molnar
2008-09-24 18:33     ` Jeremy Fitzhardinge
2008-09-24 19:34       ` Eric W. Biederman
2008-09-27 19:44         ` Ingo Molnar
2008-09-28  4:58           ` Jeremy Fitzhardinge [this message]
