linux-kernel.vger.kernel.org archive mirror
* [PATCH] x86: Add check for number of available vectors before CPU down [v3]
@ 2013-12-20 15:50 Prarit Bhargava
  2013-12-20 22:56 ` Andi Kleen
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Prarit Bhargava @ 2013-12-20 15:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Prarit Bhargava, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	x86, Michel Lespinasse, Andi Kleen, Seiji Aguchi, Yang Zhang,
	Paul Gortmaker, janet.morgan, tony.luck, ruiv.wang, gong.chen

I tested across various systems with both the linux.git and linux-next.git
trees and do not see any false positives.  The patch behaves as expected on
a system with 590 IRQs allocated at boot time; the system refuses to down cpu
62.

P.

----8<----

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=64791

When a cpu is downed on a system, the irqs on the cpu are assigned to other cpus.  It is possible, however, that when a cpu is downed there aren't enough free vectors on the remaining cpus to account for the vectors from the cpu that is being downed.

This results in an interesting "overflow" condition where irqs are "assigned" to a CPU but are not handled.

For example, when downing cpus on a 1-64 logical processor system:

<snip>
[  232.021745] smpboot: CPU 61 is now offline
[  238.480275] smpboot: CPU 62 is now offline
[  245.991080] ------------[ cut here ]------------
[  245.996270] WARNING: CPU: 0 PID: 0 at net/sched/sch_generic.c:264 dev_watchdog+0x246/0x250()
[  246.005688] NETDEV WATCHDOG: p786p1 (ixgbe): transmit queue 0 timed out
[  246.013070] Modules linked in: lockd sunrpc iTCO_wdt iTCO_vendor_support sb_edac ixgbe microcode e1000e pcspkr joydev edac_core lpc_ich ioatdma ptp mdio mfd_core i2c_i801 dca pps_core i2c_core wmi acpi_cpufreq isci libsas scsi_transport_sas
[  246.037633] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.12.0+ #14
[  246.044451] Hardware name: Intel Corporation S4600LH ........../SVRBD-ROW_T, BIOS SE5C600.86B.01.08.0003.022620131521 02/26/2013
[  246.057371]  0000000000000009 ffff88081fa03d40 ffffffff8164fbf6 ffff88081fa0ee48
[  246.065728]  ffff88081fa03d90 ffff88081fa03d80 ffffffff81054ecc ffff88081fa13040
[  246.074073]  0000000000000000 ffff88200cce0000 0000000000000040 0000000000000000
[  246.082430] Call Trace:
[  246.085174]  <IRQ>  [<ffffffff8164fbf6>] dump_stack+0x46/0x58
[  246.091633]  [<ffffffff81054ecc>] warn_slowpath_common+0x8c/0xc0
[  246.098352]  [<ffffffff81054fb6>] warn_slowpath_fmt+0x46/0x50
[  246.104786]  [<ffffffff815710d6>] dev_watchdog+0x246/0x250
[  246.110923]  [<ffffffff81570e90>] ? dev_deactivate_queue.constprop.31+0x80/0x80
[  246.119097]  [<ffffffff8106092a>] call_timer_fn+0x3a/0x110
[  246.125224]  [<ffffffff8106280f>] ? update_process_times+0x6f/0x80
[  246.132137]  [<ffffffff81570e90>] ? dev_deactivate_queue.constprop.31+0x80/0x80
[  246.140308]  [<ffffffff81061db0>] run_timer_softirq+0x1f0/0x2a0
[  246.146933]  [<ffffffff81059a80>] __do_softirq+0xe0/0x220
[  246.152976]  [<ffffffff8165fedc>] call_softirq+0x1c/0x30
[  246.158920]  [<ffffffff810045f5>] do_softirq+0x55/0x90
[  246.164670]  [<ffffffff81059d35>] irq_exit+0xa5/0xb0
[  246.170227]  [<ffffffff8166062a>] smp_apic_timer_interrupt+0x4a/0x60
[  246.177324]  [<ffffffff8165f40a>] apic_timer_interrupt+0x6a/0x70
[  246.184041]  <EOI>  [<ffffffff81505a1b>] ? cpuidle_enter_state+0x5b/0xe0
[  246.191559]  [<ffffffff81505a17>] ? cpuidle_enter_state+0x57/0xe0
[  246.198374]  [<ffffffff81505b5d>] cpuidle_idle_call+0xbd/0x200
[  246.204900]  [<ffffffff8100b7ae>] arch_cpu_idle+0xe/0x30
[  246.210846]  [<ffffffff810a47b0>] cpu_startup_entry+0xd0/0x250
[  246.217371]  [<ffffffff81646b47>] rest_init+0x77/0x80
[  246.223028]  [<ffffffff81d09e8e>] start_kernel+0x3ee/0x3fb
[  246.229165]  [<ffffffff81d0989f>] ? repair_env_string+0x5e/0x5e
[  246.235787]  [<ffffffff81d095a5>] x86_64_start_reservations+0x2a/0x2c
[  246.242990]  [<ffffffff81d0969f>] x86_64_start_kernel+0xf8/0xfc
[  246.249610] ---[ end trace fb74fdef54d79039 ]---
[  246.254807] ixgbe 0000:c2:00.0 p786p1: initiating reset due to tx timeout
[  246.262489] ixgbe 0000:c2:00.0 p786p1: Reset adapter
Last login: Mon Nov 11 08:35:14 from 10.18.17.119
[root@(none) ~]# [  246.792676] ixgbe 0000:c2:00.0 p786p1: detected SFP+: 5
[  249.231598] ixgbe 0000:c2:00.0 p786p1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[  246.792676] ixgbe 0000:c2:00.0 p786p1: detected SFP+: 5
[  249.231598] ixgbe 0000:c2:00.0 p786p1: NIC Link is Up 10 Gbps, Flow Control: RX/TX

(last lines keep repeating.  ixgbe driver is dead until module reload.)

If the downed cpu has more vectors than are free on the remaining cpus on the
system, it is possible that some vectors are "orphaned" even though they are
assigned to a cpu.  In this case, since the ixgbe driver had a watchdog, the
watchdog fired and notified that something was wrong.

This patch adds a function, check_vectors(), which compares the number of
vectors in use on the CPU going down against the number of free vectors
available on the remaining CPUs.  If there aren't enough free vectors for the
CPU to go down, an error is returned and propagated back to userspace.

v2: Do not need to look at percpu irqs
v3: Need to check affinity to prevent counting of MSIs in IOAPIC Lowest
    Priority Mode

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Michel Lespinasse <walken@google.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Seiji Aguchi <seiji.aguchi@hds.com>
Cc: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: janet.morgan@intel.com
Cc: tony.luck@intel.com
Cc: ruiv.wang@gmail.com
Cc: gong.chen@linux.intel.com
---
 arch/x86/include/asm/irq.h |    1 +
 arch/x86/kernel/irq.c      |   44 ++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/smpboot.c  |    6 ++++++
 3 files changed, 51 insertions(+)

diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h
index 0ea10f27..dfd7372 100644
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -25,6 +25,7 @@ extern void irq_ctx_init(int cpu);
 
 #ifdef CONFIG_HOTPLUG_CPU
 #include <linux/cpumask.h>
+extern int check_vectors(void);
 extern void fixup_irqs(void);
 extern void irq_force_complete_move(int);
 #endif
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 22d0687..483f111 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -262,6 +262,50 @@ __visible void smp_trace_x86_platform_ipi(struct pt_regs *regs)
 EXPORT_SYMBOL_GPL(vector_used_by_percpu_irq);
 
 #ifdef CONFIG_HOTPLUG_CPU
+/*
+ * This cpu is going to be removed and its vectors migrated to the remaining
+ * online cpus.  Check to see if there are enough vectors in the remaining cpus.
+ */
+int check_vectors(void)
+{
+	int irq, cpu;
+	unsigned int vector, this_count, count;
+	struct irq_desc *desc;
+	struct irq_data *data;
+	struct cpumask *affinity;
+
+	this_count = 0;
+	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
+		irq = __this_cpu_read(vector_irq[vector]);
+		if (irq >= 0) {
+			desc = irq_to_desc(irq);
+			data = irq_desc_get_irq_data(desc);
+			affinity = data->affinity;
+			if (irq_has_action(irq) || !irqd_is_per_cpu(data) ||
+			    !cpumask_subset(affinity, cpu_online_mask))
+				this_count++;
+		}
+	}
+
+	count = 0;
+	for_each_online_cpu(cpu) {
+		if (cpu == smp_processor_id())
+			continue;
+		for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS;
+		     vector++) {
+			if (per_cpu(vector_irq, cpu)[vector] < 0)
+				count++;
+		}
+	}
+
+	if (count < this_count) {
+		pr_warn("CPU %d disable failed: CPU has %u vectors assigned and there are only %u available.\n",
+			smp_processor_id(), this_count, count);
+		return -ERANGE;
+	}
+	return 0;
+}
+
 /* A cpu has been removed from cpu_online_mask.  Reset irq affinities. */
 void fixup_irqs(void)
 {
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 85dc05a..7ac6654 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1312,6 +1312,12 @@ void cpu_disable_common(void)
 
 int native_cpu_disable(void)
 {
+	int ret;
+
+	ret = check_vectors();
+	if (ret)
+		return ret;
+
 	clear_local_APIC();
 
 	cpu_disable_common();
-- 
1.7.9.3


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] x86: Add check for number of available vectors before CPU down [v3]
  2013-12-20 15:50 [PATCH] x86: Add check for number of available vectors before CPU down [v3] Prarit Bhargava
@ 2013-12-20 22:56 ` Andi Kleen
  2013-12-20 23:06   ` Luck, Tony
  2013-12-20 23:39 ` [tip:x86/urgent] x86: Add check for number of available vectors before CPU down tip-bot for Prarit Bhargava
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Andi Kleen @ 2013-12-20 22:56 UTC (permalink / raw)
  To: Prarit Bhargava
  Cc: linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Michel Lespinasse, Seiji Aguchi, Yang Zhang, Paul Gortmaker,
	janet.morgan, tony.luck, ruiv.wang, gong.chen

On Fri, Dec 20, 2013 at 10:50:09AM -0500, Prarit Bhargava wrote:
> I tested across various systems with both the linux.git and linux-next.git
> trees and do not see any false positives.  The patch behaves as expected on
> a system with 590 IRQs allocated at boot time; the system refuses to down cpu
> 62.

I read the patch and it looks good to me.

I haven't double checked, but I'm assuming the hot plug locks are held
while you are doing this.

Reviewed-by: Andi Kleen <ak@linux.intel.com>

-Andi

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: [PATCH] x86: Add check for number of available vectors before CPU down [v3]
  2013-12-20 22:56 ` Andi Kleen
@ 2013-12-20 23:06   ` Luck, Tony
  2013-12-21  0:08     ` H. Peter Anvin
  0 siblings, 1 reply; 8+ messages in thread
From: Luck, Tony @ 2013-12-20 23:06 UTC (permalink / raw)
  To: Andi Kleen, Prarit Bhargava
  Cc: linux-kernel@vger.kernel.org, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, x86@kernel.org, Michel Lespinasse, Seiji Aguchi,
	Zhang, Yang Z, Paul Gortmaker, Morgan, Janet, ruiv.wang@gmail.com,
	gong.chen@linux.intel.com

> I haven't double checked, but I'm assuming the hot plug locks are held
> while you are doing this.

I dug into that when looking at v2 - the whole thing is under "stop_machine()"
so locking was not an issue.

Reviewed-by: Tony Luck <tony.luck@intel.com>

-Tony

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [tip:x86/urgent] x86: Add check for number of available vectors before CPU down
  2013-12-20 15:50 [PATCH] x86: Add check for number of available vectors before CPU down [v3] Prarit Bhargava
  2013-12-20 22:56 ` Andi Kleen
@ 2013-12-20 23:39 ` tip-bot for Prarit Bhargava
  2013-12-20 23:39 ` [tip:x86/urgent] x86, irq: Change check_vectors() to check_irq_vectors_for_cpu_disable() tip-bot for H. Peter Anvin
  2013-12-23  7:12 ` [PATCH] x86: Add check for number of available vectors before CPU down [v3] Chen, Gong
  3 siblings, 0 replies; 8+ messages in thread
From: tip-bot for Prarit Bhargava @ 2013-12-20 23:39 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, seiji.aguchi, tony.luck, gong.chen,
	yang.z.zhang, ruiv.wang, janet.morgan, ak, tglx, hpa, prarit,
	walken, paul.gortmaker

Commit-ID:  5fd782a0553cf9572bd38cb877ee6fbf070ef651
Gitweb:     http://git.kernel.org/tip/5fd782a0553cf9572bd38cb877ee6fbf070ef651
Author:     Prarit Bhargava <prarit@redhat.com>
AuthorDate: Fri, 20 Dec 2013 10:50:09 -0500
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Fri, 20 Dec 2013 15:24:04 -0800

x86: Add check for number of available vectors before CPU down

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=64791

When a cpu is downed on a system, the irqs on the cpu are assigned to
other cpus.  It is possible, however, that when a cpu is downed there
aren't enough free vectors on the remaining cpus to account for the
vectors from the cpu that is being downed.

This results in an interesting "overflow" condition where irqs are
"assigned" to a CPU but are not handled.

For example, when downing cpus on a 1-64 logical processor system:

<snip>
[  232.021745] smpboot: CPU 61 is now offline
[  238.480275] smpboot: CPU 62 is now offline
[  245.991080] ------------[ cut here ]------------
[  245.996270] WARNING: CPU: 0 PID: 0 at net/sched/sch_generic.c:264 dev_watchdog+0x246/0x250()
[  246.005688] NETDEV WATCHDOG: p786p1 (ixgbe): transmit queue 0 timed out
[  246.013070] Modules linked in: lockd sunrpc iTCO_wdt iTCO_vendor_support sb_edac ixgbe microcode e1000e pcspkr joydev edac_core lpc_ich ioatdma ptp mdio mfd_core i2c_i801 dca pps_core i2c_core wmi acpi_cpufreq isci libsas scsi_transport_sas
[  246.037633] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.12.0+ #14
[  246.044451] Hardware name: Intel Corporation S4600LH ........../SVRBD-ROW_T, BIOS SE5C600.86B.01.08.0003.022620131521 02/26/2013
[  246.057371]  0000000000000009 ffff88081fa03d40 ffffffff8164fbf6 ffff88081fa0ee48
[  246.065728]  ffff88081fa03d90 ffff88081fa03d80 ffffffff81054ecc ffff88081fa13040
[  246.074073]  0000000000000000 ffff88200cce0000 0000000000000040 0000000000000000
[  246.082430] Call Trace:
[  246.085174]  <IRQ>  [<ffffffff8164fbf6>] dump_stack+0x46/0x58
[  246.091633]  [<ffffffff81054ecc>] warn_slowpath_common+0x8c/0xc0
[  246.098352]  [<ffffffff81054fb6>] warn_slowpath_fmt+0x46/0x50
[  246.104786]  [<ffffffff815710d6>] dev_watchdog+0x246/0x250
[  246.110923]  [<ffffffff81570e90>] ? dev_deactivate_queue.constprop.31+0x80/0x80
[  246.119097]  [<ffffffff8106092a>] call_timer_fn+0x3a/0x110
[  246.125224]  [<ffffffff8106280f>] ? update_process_times+0x6f/0x80
[  246.132137]  [<ffffffff81570e90>] ? dev_deactivate_queue.constprop.31+0x80/0x80
[  246.140308]  [<ffffffff81061db0>] run_timer_softirq+0x1f0/0x2a0
[  246.146933]  [<ffffffff81059a80>] __do_softirq+0xe0/0x220
[  246.152976]  [<ffffffff8165fedc>] call_softirq+0x1c/0x30
[  246.158920]  [<ffffffff810045f5>] do_softirq+0x55/0x90
[  246.164670]  [<ffffffff81059d35>] irq_exit+0xa5/0xb0
[  246.170227]  [<ffffffff8166062a>] smp_apic_timer_interrupt+0x4a/0x60
[  246.177324]  [<ffffffff8165f40a>] apic_timer_interrupt+0x6a/0x70
[  246.184041]  <EOI>  [<ffffffff81505a1b>] ? cpuidle_enter_state+0x5b/0xe0
[  246.191559]  [<ffffffff81505a17>] ? cpuidle_enter_state+0x57/0xe0
[  246.198374]  [<ffffffff81505b5d>] cpuidle_idle_call+0xbd/0x200
[  246.204900]  [<ffffffff8100b7ae>] arch_cpu_idle+0xe/0x30
[  246.210846]  [<ffffffff810a47b0>] cpu_startup_entry+0xd0/0x250
[  246.217371]  [<ffffffff81646b47>] rest_init+0x77/0x80
[  246.223028]  [<ffffffff81d09e8e>] start_kernel+0x3ee/0x3fb
[  246.229165]  [<ffffffff81d0989f>] ? repair_env_string+0x5e/0x5e
[  246.235787]  [<ffffffff81d095a5>] x86_64_start_reservations+0x2a/0x2c
[  246.242990]  [<ffffffff81d0969f>] x86_64_start_kernel+0xf8/0xfc
[  246.249610] ---[ end trace fb74fdef54d79039 ]---
[  246.254807] ixgbe 0000:c2:00.0 p786p1: initiating reset due to tx timeout
[  246.262489] ixgbe 0000:c2:00.0 p786p1: Reset adapter
Last login: Mon Nov 11 08:35:14 from 10.18.17.119
[root@(none) ~]# [  246.792676] ixgbe 0000:c2:00.0 p786p1: detected SFP+: 5
[  249.231598] ixgbe 0000:c2:00.0 p786p1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[  246.792676] ixgbe 0000:c2:00.0 p786p1: detected SFP+: 5
[  249.231598] ixgbe 0000:c2:00.0 p786p1: NIC Link is Up 10 Gbps, Flow Control: RX/TX

(last lines keep repeating.  ixgbe driver is dead until module reload.)

If the downed cpu has more vectors than are free on the remaining cpus on the
system, it is possible that some vectors are "orphaned" even though they are
assigned to a cpu.  In this case, since the ixgbe driver had a watchdog, the
watchdog fired and notified that something was wrong.

This patch adds a function, check_vectors(), which compares the number of
vectors in use on the CPU going down against the number of free vectors
available on the remaining CPUs.  If there aren't enough free vectors for the
CPU to go down, an error is returned and propagated back to userspace.

v2: Do not need to look at percpu irqs
v3: Need to check affinity to prevent counting of MSIs in IOAPIC Lowest
    Priority Mode

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1387554609-9823-1-git-send-email-prarit@redhat.com
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Seiji Aguchi <seiji.aguchi@hds.com>
Cc: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Janet Morgan <janet.morgan@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Ruiv Wang <ruiv.wang@gmail.com>
Cc: Gong Chen <gong.chen@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # see note below
[ hpa: I'm tagging this for -stable, as it is a serious failure on
  these large systems, but this is definitely a policy call for the
  -stable maintainers. ]
---
 arch/x86/include/asm/irq.h |  1 +
 arch/x86/kernel/irq.c      | 44 ++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/smpboot.c  |  6 ++++++
 3 files changed, 51 insertions(+)

diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h
index 0ea10f27..dfd7372 100644
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -25,6 +25,7 @@ extern void irq_ctx_init(int cpu);
 
 #ifdef CONFIG_HOTPLUG_CPU
 #include <linux/cpumask.h>
+extern int check_vectors(void);
 extern void fixup_irqs(void);
 extern void irq_force_complete_move(int);
 #endif
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 22d0687..483f111 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -262,6 +262,50 @@ __visible void smp_trace_x86_platform_ipi(struct pt_regs *regs)
 EXPORT_SYMBOL_GPL(vector_used_by_percpu_irq);
 
 #ifdef CONFIG_HOTPLUG_CPU
+/*
+ * This cpu is going to be removed and its vectors migrated to the remaining
+ * online cpus.  Check to see if there are enough vectors in the remaining cpus.
+ */
+int check_vectors(void)
+{
+	int irq, cpu;
+	unsigned int vector, this_count, count;
+	struct irq_desc *desc;
+	struct irq_data *data;
+	struct cpumask *affinity;
+
+	this_count = 0;
+	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
+		irq = __this_cpu_read(vector_irq[vector]);
+		if (irq >= 0) {
+			desc = irq_to_desc(irq);
+			data = irq_desc_get_irq_data(desc);
+			affinity = data->affinity;
+			if (irq_has_action(irq) || !irqd_is_per_cpu(data) ||
+			    !cpumask_subset(affinity, cpu_online_mask))
+				this_count++;
+		}
+	}
+
+	count = 0;
+	for_each_online_cpu(cpu) {
+		if (cpu == smp_processor_id())
+			continue;
+		for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS;
+		     vector++) {
+			if (per_cpu(vector_irq, cpu)[vector] < 0)
+				count++;
+		}
+	}
+
+	if (count < this_count) {
+		pr_warn("CPU %d disable failed: CPU has %u vectors assigned and there are only %u available.\n",
+			smp_processor_id(), this_count, count);
+		return -ERANGE;
+	}
+	return 0;
+}
+
 /* A cpu has been removed from cpu_online_mask.  Reset irq affinities. */
 void fixup_irqs(void)
 {
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 85dc05a..7ac6654 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1312,6 +1312,12 @@ void cpu_disable_common(void)
 
 int native_cpu_disable(void)
 {
+	int ret;
+
+	ret = check_vectors();
+	if (ret)
+		return ret;
+
 	clear_local_APIC();
 
 	cpu_disable_common();

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [tip:x86/urgent] x86, irq: Change check_vectors() to check_irq_vectors_for_cpu_disable()
  2013-12-20 15:50 [PATCH] x86: Add check for number of available vectors before CPU down [v3] Prarit Bhargava
  2013-12-20 22:56 ` Andi Kleen
  2013-12-20 23:39 ` [tip:x86/urgent] x86: Add check for number of available vectors before CPU down tip-bot for Prarit Bhargava
@ 2013-12-20 23:39 ` tip-bot for H. Peter Anvin
  2013-12-23  7:12 ` [PATCH] x86: Add check for number of available vectors before CPU down [v3] Chen, Gong
  3 siblings, 0 replies; 8+ messages in thread
From: tip-bot for H. Peter Anvin @ 2013-12-20 23:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, tglx, prarit, hpa

Commit-ID:  9d69bb2cd96221b35bcabb3c1ffe84160bcf1b9f
Gitweb:     http://git.kernel.org/tip/9d69bb2cd96221b35bcabb3c1ffe84160bcf1b9f
Author:     H. Peter Anvin <hpa@linux.intel.com>
AuthorDate: Fri, 20 Dec 2013 15:34:22 -0800
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Fri, 20 Dec 2013 15:34:22 -0800

x86, irq: Change check_vectors() to check_irq_vectors_for_cpu_disable()

In the context of smpboot.c especially, it is not at all obvious what
"check_vectors" actually means.  Since this function is only called in
one place, without arguments, we can afford to be a bit more verbose
with the function name and make it clear at a glance what the function
does.

Cc: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1387554609-9823-1-git-send-email-prarit@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/include/asm/irq.h | 2 +-
 arch/x86/kernel/irq.c      | 2 +-
 arch/x86/kernel/smpboot.c  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h
index dfd7372..cb6cfcd 100644
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -25,7 +25,7 @@ extern void irq_ctx_init(int cpu);
 
 #ifdef CONFIG_HOTPLUG_CPU
 #include <linux/cpumask.h>
-extern int check_vectors(void);
+extern int check_irq_vectors_for_cpu_disable(void);
 extern void fixup_irqs(void);
 extern void irq_force_complete_move(int);
 #endif
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 483f111..7d40698 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -266,7 +266,7 @@ EXPORT_SYMBOL_GPL(vector_used_by_percpu_irq);
  * This cpu is going to be removed and its vectors migrated to the remaining
  * online cpus.  Check to see if there are enough vectors in the remaining cpus.
  */
-int check_vectors(void)
+int check_irq_vectors_for_cpu_disable(void)
 {
 	int irq, cpu;
 	unsigned int vector, this_count, count;
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 7ac6654..391ea52 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1314,7 +1314,7 @@ int native_cpu_disable(void)
 {
 	int ret;
 
-	ret = check_vectors();
+	ret = check_irq_vectors_for_cpu_disable();
 	if (ret)
 		return ret;
 

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] x86: Add check for number of available vectors before CPU down [v3]
  2013-12-20 23:06   ` Luck, Tony
@ 2013-12-21  0:08     ` H. Peter Anvin
  0 siblings, 0 replies; 8+ messages in thread
From: H. Peter Anvin @ 2013-12-21  0:08 UTC (permalink / raw)
  To: Luck, Tony, Andi Kleen, Prarit Bhargava
  Cc: linux-kernel@vger.kernel.org, Thomas Gleixner, Ingo Molnar,
	x86@kernel.org, Michel Lespinasse, Seiji Aguchi, Zhang, Yang Z,
	Paul Gortmaker, Morgan, Janet, ruiv.wang@gmail.com,
	gong.chen@linux.intel.com

On 12/20/2013 03:06 PM, Luck, Tony wrote:
>> I haven't double checked, but I'm assuming the hot plug locks are held
>> while you are doing this.
> 
> I dug into that when looking at v2 - the whole thing is under "stop_machine()"
> so locking was not an issue.
> 
> Reviewed-by: Tony Luck <tony.luck@intel.com>
> 

Awesome, thank you!  I have queued this up for Linus already as a fix
for 3.13, as it seems important enough.

	-hpa



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] x86: Add check for number of available vectors before CPU down [v3]
  2013-12-20 15:50 [PATCH] x86: Add check for number of available vectors before CPU down [v3] Prarit Bhargava
                   ` (2 preceding siblings ...)
  2013-12-20 23:39 ` [tip:x86/urgent] x86, irq: Change check_vectors() to check_irq_vectors_for_cpu_disable() tip-bot for H. Peter Anvin
@ 2013-12-23  7:12 ` Chen, Gong
  2013-12-23 12:32   ` Prarit Bhargava
  3 siblings, 1 reply; 8+ messages in thread
From: Chen, Gong @ 2013-12-23  7:12 UTC (permalink / raw)
  To: Prarit Bhargava
  Cc: linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Michel Lespinasse, Andi Kleen, Seiji Aguchi, Yang Zhang,
	Paul Gortmaker, janet.morgan, tony.luck, ruiv.wang

[-- Attachment #1: Type: text/plain, Size: 1429 bytes --]

On Fri, Dec 20, 2013 at 10:50:09AM -0500, Prarit Bhargava wrote:
> +int check_vectors(void)
> +{
> +	int irq, cpu;
> +	unsigned int vector, this_count, count;
> +	struct irq_desc *desc;
> +	struct irq_data *data;
> +	struct cpumask *affinity;
> +
> +	this_count = 0;
> +	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
> +		irq = __this_cpu_read(vector_irq[vector]);
> +		if (irq >= 0) {
> +			desc = irq_to_desc(irq);
> +			data = irq_desc_get_irq_data(desc);
> +			affinity = data->affinity;
> +			if (irq_has_action(irq) || !irqd_is_per_cpu(data) ||
This line looks weird.  Why will the counter be increased once the irq has
an action?  Do you mean

if (irq_has_action(irq) && !irqd_is_per_cpu(data) &&
    !cpumask_subset(affinity, cpu_online_mask))


> +			    !cpumask_subset(affinity, cpu_online_mask))
It looks like cpu_online_mask isn't updated until cpu_disable_common()
is called, but your check_vectors() is called before that.

[...]
>  {
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 85dc05a..7ac6654 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1312,6 +1312,12 @@ void cpu_disable_common(void)
>  
>  int native_cpu_disable(void)
>  {
> +	int ret;
> +
> +	ret = check_vectors();
> +	if (ret)
> +		return ret;
> +
>  	clear_local_APIC();
>  
>  	cpu_disable_common();
> -- 
> 1.7.9.3
> 

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] x86: Add check for number of available vectors before CPU down [v3]
  2013-12-23  7:12 ` [PATCH] x86: Add check for number of available vectors before CPU down [v3] Chen, Gong
@ 2013-12-23 12:32   ` Prarit Bhargava
  0 siblings, 0 replies; 8+ messages in thread
From: Prarit Bhargava @ 2013-12-23 12:32 UTC (permalink / raw)
  To: linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Michel Lespinasse, Andi Kleen, Seiji Aguchi, Yang Zhang,
	Paul Gortmaker, janet.morgan, tony.luck, ruiv.wang



On 12/23/2013 02:12 AM, Chen, Gong wrote:
> On Fri, Dec 20, 2013 at 10:50:09AM -0500, Prarit Bhargava wrote:
>> +int check_vectors(void)
>> +{
>> +	int irq, cpu;
>> +	unsigned int vector, this_count, count;
>> +	struct irq_desc *desc;
>> +	struct irq_data *data;
>> +	struct cpumask *affinity;
>> +
>> +	this_count = 0;
>> +	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
>> +		irq = __this_cpu_read(vector_irq[vector]);
>> +		if (irq >= 0) {
>> +			desc = irq_to_desc(irq);
>> +			data = irq_desc_get_irq_data(desc);
>> +			affinity = data->affinity;
>> +			if (irq_has_action(irq) || !irqd_is_per_cpu(data) ||
> This line looks weird.  Why will the counter be increased once the irq has
> an action?  Do you mean
> 
> if (irq_has_action(irq) && !irqd_is_per_cpu(data) &&
>     !cpumask_subset(affinity, cpu_online_mask))
> 
> 

Yes, you're absolutely right.  I mixed up the ands and ors on this line of
code.  I will send out a patch against tip shortly.

P.

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2013-12-23 12:32 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-12-20 15:50 [PATCH] x86: Add check for number of available vectors before CPU down [v3] Prarit Bhargava
2013-12-20 22:56 ` Andi Kleen
2013-12-20 23:06   ` Luck, Tony
2013-12-21  0:08     ` H. Peter Anvin
2013-12-20 23:39 ` [tip:x86/urgent] x86: Add check for number of available vectors before CPU down tip-bot for Prarit Bhargava
2013-12-20 23:39 ` [tip:x86/urgent] x86, irq: Change check_vectors() to check_irq_vectors_for_cpu_disable() tip-bot for H. Peter Anvin
2013-12-23  7:12 ` [PATCH] x86: Add check for number of available vectors before CPU down [v3] Chen, Gong
2013-12-23 12:32   ` Prarit Bhargava
