From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753284AbdFVQzh (ORCPT );
	Thu, 22 Jun 2017 12:55:37 -0400
Received: from terminus.zytor.com ([65.50.211.136]:57475 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751156AbdFVQzf (ORCPT );
	Thu, 22 Jun 2017 12:55:35 -0400
Date: Thu, 22 Jun 2017 09:50:56 -0700
From: tip-bot for Thomas Gleixner
Message-ID: 
Cc: marc.zyngier@arm.com, mpe@ellerman.id.au, linux-kernel@vger.kernel.org,
    keith.busch@intel.com, hch@lst.de, hpa@zytor.com, peterz@infradead.org,
    mingo@kernel.org, tglx@linutronix.de, axboe@kernel.dk
Reply-To: mingo@kernel.org, tglx@linutronix.de, axboe@kernel.dk,
    peterz@infradead.org, keith.busch@intel.com, hch@lst.de, hpa@zytor.com,
    mpe@ellerman.id.au, marc.zyngier@arm.com, linux-kernel@vger.kernel.org
In-Reply-To: <20170619235444.774068557@linutronix.de>
References: <20170619235444.774068557@linutronix.de>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:irq/core] x86/irq: Cleanup pending irq move in fixup_irqs()
Git-Commit-ID: 8e7b632237df8b17526411d1d98f838580bb6aa3
X-Mailer: tip-git-log-daemon
Robot-ID: 
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  8e7b632237df8b17526411d1d98f838580bb6aa3
Gitweb:     http://git.kernel.org/tip/8e7b632237df8b17526411d1d98f838580bb6aa3
Author:     Thomas Gleixner
AuthorDate: Tue, 20 Jun 2017 01:37:20 +0200
Committer:  Thomas Gleixner
CommitDate: Thu, 22 Jun 2017 18:21:13 +0200

x86/irq: Cleanup pending irq move in fixup_irqs()

If a CPU goes offline, its interrupts are migrated away, but a possibly
pending interrupt move, which has not yet been made effective, is kept
pending even if the outgoing CPU is the sole target of the pending
affinity mask. Worse, the pending affinity mask is discarded even if it
contains a valid subset of the online CPUs.

Use the newly introduced irq_fixup_move_pending() helper to:

 - Discard a pending move when the outgoing CPU is the only target in the
   pending mask.

 - Use the pending mask instead of the affinity mask to find a valid
   target for the CPU if the pending mask intersects with the online CPUs.

Signed-off-by: Thomas Gleixner
Cc: Jens Axboe
Cc: Marc Zyngier
Cc: Michael Ellerman
Cc: Keith Busch
Cc: Peter Zijlstra
Cc: Christoph Hellwig
Link: http://lkml.kernel.org/r/20170619235444.774068557@linutronix.de
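[Editor's note] The target-selection logic described above can be condensed
into a standalone sketch. This is a minimal illustration under simplified
assumptions, not kernel code: CPU masks are modeled as plain bitmasks
rather than struct cpumask, and pick_fixup_target() is an invented name
mirroring, but not reproducing, the kernel flow.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long cpumask_bits; /* bit n set => CPU n is in the mask */

struct irq_state {
	cpumask_bits affinity;     /* currently effective affinity */
	cpumask_bits pending;      /* requested but not yet applied affinity */
	bool         move_pending; /* a set_affinity change is still in flight */
};

/*
 * Mirror of the fixup_irqs() flow after the patch:
 *  1. If a move is pending and its mask still intersects the online CPUs,
 *     reuse the pending mask so the last affinity change is not lost;
 *     otherwise discard the stale pending move.
 *  2. If the chosen mask contains no online CPU, break affinity and fall
 *     back to all online CPUs.
 */
static cpumask_bits pick_fixup_target(struct irq_state *st,
				      cpumask_bits online, bool *broke)
{
	cpumask_bits target = st->affinity;

	if (st->move_pending && (st->pending & online))
		target = st->pending;     /* keep the latest requested mask */
	else
		st->move_pending = false; /* discard a stale pending move */

	*broke = !(target & online);      /* no online CPU left in the mask */
	if (*broke)
		target = online;          /* fall back to all online CPUs */
	return target;
}

int main(void)
{
	cpumask_bits online = 0x5; /* CPU1 going offline: CPU0 and CPU2 stay */
	bool broke;

	/* Pending move targets CPU2: the pending mask is reused. */
	struct irq_state a = { .affinity = 0x2, .pending = 0x4,
			       .move_pending = true };
	printf("target=%#lx broke=%d\n",
	       pick_fixup_target(&a, online, &broke), broke);

	/* Pending move targets only the outgoing CPU1: it is discarded and,
	 * since the affinity also points at CPU1, affinity is broken. */
	struct irq_state b = { .affinity = 0x2, .pending = 0x2,
			       .move_pending = true };
	printf("target=%#lx broke=%d\n",
	       pick_fixup_target(&b, online, &broke), broke);
	return 0;
}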
---
 arch/x86/kernel/irq.c | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index f34fe74..9696007d 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -440,9 +440,9 @@ void fixup_irqs(void)
 	int ret;
 
 	for_each_irq_desc(irq, desc) {
+		const struct cpumask *affinity;
 		int break_affinity = 0;
 		int set_affinity = 1;
-		const struct cpumask *affinity;
 
 		if (!desc)
 			continue;
@@ -454,19 +454,36 @@ void fixup_irqs(void)
 		data = irq_desc_get_irq_data(desc);
 		affinity = irq_data_get_affinity_mask(data);
+
 		if (!irq_has_action(irq) || irqd_is_per_cpu(data) ||
 		    cpumask_subset(affinity, cpu_online_mask)) {
+			irq_fixup_move_pending(desc, false);
 			raw_spin_unlock(&desc->lock);
 			continue;
 		}
 
 		/*
-		 * Complete the irq move. This cpu is going down and for
-		 * non intr-remapping case, we can't wait till this interrupt
-		 * arrives at this cpu before completing the irq move.
+		 * Complete a possibly pending irq move cleanup. If this
+		 * interrupt was moved in hard irq context, then the
+		 * vectors need to be cleaned up. It can't wait until this
+		 * interrupt actually happens and this CPU was involved.
 		 */
 		irq_force_complete_move(desc);
 
+		/*
+		 * If there is a setaffinity pending, then try to reuse the
+		 * pending mask, so the last change of the affinity does
+		 * not get lost. If there is no move pending or the pending
+		 * mask does not contain any online CPU, use the current
+		 * affinity mask.
+		 */
+		if (irq_fixup_move_pending(desc, true))
+			affinity = desc->pending_mask;
+
+		/*
+		 * If the mask does not contain any online CPU, break
+		 * affinity and use cpu_online_mask as fall back.
+		 */
 		if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
 			break_affinity = 1;
 			affinity = cpu_online_mask;
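[Editor's note] The cpumask_any_and() test in the last hunk detects an empty
intersection: it returns the index of some CPU present in both masks, or a
value >= nr_cpu_ids when none exists. A minimal sketch of that idiom follows;
the bitmask modeling, the NR_CPU_IDS constant, and the helper name
my_any_and() are assumptions of this illustration, not the kernel API.

#include <stdio.h>

#define NR_CPU_IDS 8U /* assumed CPU count for this sketch */

/* Return the lowest CPU set in both masks, or NR_CPU_IDS if none. */
static unsigned int my_any_and(unsigned long a, unsigned long b)
{
	unsigned long both = a & b;
	unsigned int cpu;

	for (cpu = 0; cpu < NR_CPU_IDS; cpu++)
		if (both & (1UL << cpu))
			return cpu;
	return NR_CPU_IDS;
}

int main(void)
{
	unsigned long affinity = 0x2; /* CPU1 only */
	unsigned long online   = 0x5; /* CPU0 and CPU2 */

	/* Empty intersection => the "break affinity" path in fixup_irqs(). */
	if (my_any_and(affinity, online) >= NR_CPU_IDS)
		printf("no online CPU in mask: fall back to cpu_online_mask\n");
	return 0;
}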