From: Andrew Morton <akpm@linux-foundation.org>
To: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>, Thomas Gleixner <tglx@linutronix.de>,
"H. Peter Anvin" <hpa@zytor.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] x86: sparse_irq need spin_lock in alloc
Date: Wed, 20 Aug 2008 21:03:36 -0700
Message-ID: <20080820210336.3e6ffd6d.akpm@linux-foundation.org>
In-Reply-To: <1219290385-4976-1-git-send-email-yhlu.kernel@gmail.com>
On Wed, 20 Aug 2008 20:46:25 -0700 Yinghai Lu <yhlu.kernel@gmail.com> wrote:
> according to Suresh Siddha, we should have spin_lock around it
>
> Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
>
> ---
> arch/x86/kernel/io_apic.c | 6 ++++++
> kernel/irq/handle.c | 7 +++++++
> 2 files changed, 13 insertions(+)
>
> Index: linux-2.6/kernel/irq/handle.c
> ===================================================================
> --- linux-2.6.orig/kernel/irq/handle.c
> +++ linux-2.6/kernel/irq/handle.c
> @@ -166,6 +166,9 @@ struct irq_desc *irq_to_desc(unsigned in
> }
> return NULL;
> }
> +
> +static DEFINE_SPINLOCK(sparse_irq_lock);
> +
> struct irq_desc *irq_to_desc_alloc(unsigned int irq)
> {
> struct irq_desc *desc, *desc_pri;
> @@ -182,6 +185,7 @@ struct irq_desc *irq_to_desc_alloc(unsig
> count++;
> }
>
> + spin_lock(&sparse_irq_lock);
> /*
> * we run out of pre-allocate ones, allocate more
> */
> @@ -223,6 +227,9 @@ struct irq_desc *irq_to_desc_alloc(unsig
> else
> sparse_irqs = desc;
> desc->irq = irq;
> +
> + spin_unlock(&sparse_irq_lock);
> +
> printk(KERN_DEBUG "found new irq_desc for irq %d\n", desc->irq);
> #ifdef CONFIG_HAVE_SPARSE_IRQ_DEBUG
> {
> Index: linux-2.6/arch/x86/kernel/io_apic.c
> ===================================================================
> --- linux-2.6.orig/arch/x86/kernel/io_apic.c
> +++ linux-2.6/arch/x86/kernel/io_apic.c
> @@ -210,6 +210,8 @@ static struct irq_cfg *irq_cfg(unsigned
> return NULL;
> }
>
> +static DEFINE_SPINLOCK(irq_cfg_lock);
> +
> static struct irq_cfg *irq_cfg_alloc(unsigned int irq)
> {
> struct irq_cfg *cfg, *cfg_pri;
> @@ -226,6 +228,7 @@ static struct irq_cfg *irq_cfg_alloc(uns
> count++;
> }
>
> + spin_lock(&irq_cfg_lock);
> if (!irq_cfgx_free) {
> unsigned long phys;
> unsigned long total_bytes;
> @@ -263,6 +266,9 @@ static struct irq_cfg *irq_cfg_alloc(uns
> else
> irq_cfgx = cfg;
> cfg->irq = irq;
> +
> + spin_unlock(&irq_cfg_lock);
> +
> printk(KERN_DEBUG "found new irq_cfg for irq %d\n", cfg->irq);
> #ifdef CONFIG_HAVE_SPARSE_IRQ_DEBUG
> {
Each of these locks can be made local to the function in which they are
used (and hence they should be made local).
It would be nice to add a comment explaining what they are protecting,
unless that is obvious (I didn't look).
Thread overview: 4+ messages
2008-08-21 3:46 [PATCH] x86: sparse_irq need spin_lock in alloc Yinghai Lu
2008-08-21 4:03 ` Andrew Morton [this message]
2008-08-21 8:58 ` Ingo Molnar
2008-08-21 10:27 ` Ingo Molnar