From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail.linuxfoundation.org ([140.211.169.12]:34630 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1034556AbdAIP07 (ORCPT );
	Mon, 9 Jan 2017 10:26:59 -0500
Subject: Patch "genirq/affinity: Fix node generation from cpumask" has been
	added to the 4.9-stable tree
To: gpiccoli@linux.vnet.ibm.com, gabriel@krisman.be,
	gregkh@linuxfoundation.org, gwshan@linux.vnet.ibm.com, hch@lst.de,
	tglx@linutronix.de
Cc: ,
From: <gregkh@linuxfoundation.org>
Date: Mon, 09 Jan 2017 16:26:36 +0100
Message-ID: <1483975596117107@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID: 

This is a note to let you know that I've just added the patch titled

    genirq/affinity: Fix node generation from cpumask

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     genirq-affinity-fix-node-generation-from-cpumask.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From c0af52437254fda8b0cdbaae5a9b6d9327f1fcd5 Mon Sep 17 00:00:00 2001
From: "Guilherme G. Piccoli" <gpiccoli@linux.vnet.ibm.com>
Date: Wed, 14 Dec 2016 16:01:12 -0200
Subject: genirq/affinity: Fix node generation from cpumask

From: "Guilherme G. Piccoli" <gpiccoli@linux.vnet.ibm.com>

commit c0af52437254fda8b0cdbaae5a9b6d9327f1fcd5 upstream.

Commit 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading
infrastructure") introduced a better IRQ spreading mechanism, taking
into account the available NUMA nodes in the machine.

The problem is that the algorithm that retrieves the nodemask iterates
"linearly" over the number of online nodes, while some architectures,
such as PowerPC, present a non-linear node distribution in the nodemask.
In that case the algorithm produces a wrong node count, and therefore a
bad/incomplete IRQ affinity distribution.

For example, this problem was found on a machine with 128 CPUs and two
nodes, namely nodes 0 and 8 (instead of 0 and 1 as in a linear
distribution). This led to a wrong affinity distribution, which in turn
led to a bad mq allocation for the nvme driver.

Finally, we take the opportunity to fix a comment regarding the
affinity distribution when we have _more_ nodes than vectors.
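To make the failure mode concrete, here is a minimal userspace sketch of
the miscounting (not kernel code; the array and helper below are
hypothetical stand-ins, modeled on the two-node PowerPC machine above):

/* sketch.c - hypothetical illustration, not the kernel source */
#include <stdio.h>

/* The affected machine: two online nodes with sparse IDs 0 and 8. */
static const int online_nodes[] = { 0, 8 };
#define NR_ONLINE (int)(sizeof(online_nodes) / sizeof(online_nodes[0]))

/*
 * Stand-in for "does cpumask_of_node(n) intersect the affinity mask?".
 * An absent node has an empty cpumask, so it never intersects.
 */
static int node_has_cpus(int n)
{
	int i;

	for (i = 0; i < NR_ONLINE; i++)
		if (online_nodes[i] == n)
			return 1;
	return 0;
}

int main(void)
{
	int n, i, nodes;

	/* Old loop: visits node IDs 0 and 1 only; node 8 is never seen. */
	for (n = 0, nodes = 0; n < NR_ONLINE; n++)
		if (node_has_cpus(n))
			nodes++;
	printf("linear walk: %d node(s)\n", nodes);	/* prints 1 - wrong */

	/* Fixed pattern, like for_each_online_node(): walk the real IDs. */
	for (i = 0, nodes = 0; i < NR_ONLINE; i++)
		if (node_has_cpus(online_nodes[i]))
			nodes++;
	printf("online walk: %d node(s)\n", nodes);	/* prints 2 - correct */

	return 0;
}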
Fixes: 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading infrastructure")
Reported-by: Gabriel Krisman Bertazi <gabriel@krisman.be>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Gabriel Krisman Bertazi <gabriel@krisman.be>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Cc: linux-pci@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: hch@lst.de
Link: http://lkml.kernel.org/r/1481738472-2671-1-git-send-email-gpiccoli@linux.vnet.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 kernel/irq/affinity.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -37,10 +37,10 @@ static void irq_spread_init_one(struct c
 
 static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
 {
-	int n, nodes;
+	int n, nodes = 0;
 
 	/* Calculate the number of nodes in the supplied affinity mask */
-	for (n = 0, nodes = 0; n < num_online_nodes(); n++) {
+	for_each_online_node(n) {
 		if (cpumask_intersects(mask, cpumask_of_node(n))) {
 			node_set(n, *nodemsk);
 			nodes++;
@@ -81,7 +81,7 @@ struct cpumask *irq_create_affinity_mask
 	nodes = get_nodes_in_cpumask(affinity, &nodemsk);
 
 	/*
-	 * If the number of nodes in the mask is less than or equal the
+	 * If the number of nodes in the mask is greater than or equal the
 	 * number of vectors we just spread the vectors across the nodes.
 	 */
 	if (nvec <= nodes) {


Patches currently in stable-queue which might be from gpiccoli@linux.vnet.ibm.com are

queue-4.9/genirq-affinity-fix-node-generation-from-cpumask.patch
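A note on the comment fix in the second hunk: the test is "if (nvec <=
nodes)", i.e. the one-node-per-vector path is taken when the node count
is greater than or equal to the vector count, so the old comment had the
inequality backwards. A hedged userspace sketch (hypothetical helper,
not the 4.9 source) of how the corrected node count steers that
decision:

/* decision.c - hypothetical illustration, not the kernel source */
#include <stdio.h>

static void spread_decision(int nvec, int nodes)
{
	if (nvec <= nodes) {
		/* At least as many nodes as vectors: one node per vector. */
		printf("nvec=%d nodes=%d: one node per vector\n", nvec, nodes);
	} else {
		/* More vectors than nodes: divide vectors among the nodes. */
		printf("nvec=%d nodes=%d: about %d vector(s) per node\n",
		       nvec, nodes, nvec / nodes);
	}
}

int main(void)
{
	/* Before the fix the two-node box was miscounted as one node, */
	spread_decision(2, 1);	/* so both vectors landed on "one" node. */

	/* After the fix both nodes 0 and 8 are counted. */
	spread_decision(2, 2);	/* one vector per node, as intended */

	return 0;
}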