From: Jiang Liu <jiang.liu@linux.intel.com>
To: Thomas Gleixner <tglx@linutronix.de>,
Joe Lawrence <joe.lawrence@stratus.com>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
x86@kernel.org, Jiang Liu <jiang.liu@linux.intel.com>
Cc: Jeremiah Mahler <jmmahler@gmail.com>,
Borislav Petkov <bp@alien8.de>,
andy.shevchenko@gmail.com, Guenter Roeck <linux@roeck-us.net>,
linux-kernel@vger.kernel.org
Subject: [Bugfix v2 3/5] x86/irq: Fix a race window in x86_vector_free_irqs()
Date: Wed, 23 Dec 2015 22:13:28 +0800 [thread overview]
Message-ID: <1450880014-11741-3-git-send-email-jiang.liu@linux.intel.com> (raw)
In-Reply-To: <1450880014-11741-1-git-send-email-jiang.liu@linux.intel.com>

There's a race condition between

x86_vector_free_irqs()
{
	free_apic_chip_data(irq_data->chip_data);
	xxxxx	// irq_data->chip_data has been freed, but the
		// pointer hasn't been reset yet
	irq_domain_reset_irq_data(irq_data);
}

and

smp_irq_move_cleanup_interrupt()
{
	raw_spin_lock(&vector_lock);
	data = apic_chip_data(irq_desc_get_irq_data(desc));
	access data->xxxx	// may access freed memory
	raw_spin_unlock(&vector_lock);
}

which may cause smp_irq_move_cleanup_interrupt() to access freed memory.

Fix it by guarding all the memory free code in x86_vector_free_irqs() with
vector_lock, so that the chip_data is freed and the pointer is reset
atomically with respect to the cleanup interrupt.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
arch/x86/kernel/apic/vector.c | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index b32c6ef7b4b0..f648fce39d5e 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -228,23 +228,16 @@ static int assign_irq_vector_policy(int irq, int node,
 static void clear_irq_vector(int irq, struct apic_chip_data *data)
 {
 	struct irq_desc *desc;
-	unsigned long flags;
-	int cpu, vector;
-
-	raw_spin_lock_irqsave(&vector_lock, flags);
-	BUG_ON(!data->cfg.vector);
+	int cpu, vector = data->cfg.vector;
 
-	vector = data->cfg.vector;
+	BUG_ON(!vector);
 	for_each_cpu_and(cpu, data->domain, cpu_online_mask)
 		per_cpu(vector_irq, cpu)[vector] = VECTOR_UNUSED;
-
 	data->cfg.vector = 0;
 	cpumask_clear(data->domain);
 
-	if (likely(!data->move_in_progress)) {
-		raw_spin_unlock_irqrestore(&vector_lock, flags);
+	if (likely(!data->move_in_progress))
 		return;
-	}
 
 	desc = irq_to_desc(irq);
 	for_each_cpu_and(cpu, data->old_domain, cpu_online_mask) {
@@ -257,7 +250,7 @@ static void clear_irq_vector(int irq, struct apic_chip_data *data)
 		}
 	}
 	data->move_in_progress = 0;
-	raw_spin_unlock_irqrestore(&vector_lock, flags);
+	cpumask_clear(data->old_domain);
 }
 
 void init_irq_alloc_info(struct irq_alloc_info *info,
@@ -279,18 +272,21 @@ static void x86_vector_free_irqs(struct irq_domain *domain,
 			       unsigned int virq, unsigned int nr_irqs)
 {
 	struct irq_data *irq_data;
+	unsigned long flags;
 	int i;
 
 	for (i = 0; i < nr_irqs; i++) {
 		irq_data = irq_domain_get_irq_data(x86_vector_domain, virq + i);
 		if (irq_data && irq_data->chip_data) {
+			raw_spin_lock_irqsave(&vector_lock, flags);
 			clear_irq_vector(virq + i, irq_data->chip_data);
 			free_apic_chip_data(irq_data->chip_data);
+			irq_domain_reset_irq_data(irq_data);
+			raw_spin_unlock_irqrestore(&vector_lock, flags);
 #ifdef CONFIG_X86_IO_APIC
 			if (virq + i < nr_legacy_irqs())
 				legacy_irq_data[virq + i] = NULL;
 #endif
-			irq_domain_reset_irq_data(irq_data);
 		}
 	}
 }
 
--
1.7.10.4