* Patch "mlx4: Fix tx ring affinity_mask creation" has been added to the 4.0-stable tree
@ 2015-05-08 11:57 gregkh
0 siblings, 0 replies; only message in thread
From: gregkh @ 2015-05-08 11:57 UTC (permalink / raw)
To: bpoirier, davem, gregkh, idos; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
mlx4: Fix tx ring affinity_mask creation
to the 4.0-stable tree, which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
mlx4-fix-tx-ring-affinity_mask-creation.patch
and it can be found in the queue-4.0 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Fri May 8 13:16:04 CEST 2015
From: Benjamin Poirier <bpoirier@suse.de>
Date: Tue, 28 Apr 2015 14:49:29 -0700
Subject: mlx4: Fix tx ring affinity_mask creation
From: Benjamin Poirier <bpoirier@suse.de>
[ Upstream commit 42eab005a5dd5d7ea2b0328aecc4d6cc0c23c9c2 ]
By default, the number of tx queues is limited by the number of online cpus
in mlx4_en_get_profile(). However, this limit no longer holds after the
ethtool .set_channels method has been called. In that situation, the driver
may access invalid bits of certain cpumask variables when queue_index >=
nr_cpu_ids.
Signed-off-by: Benjamin Poirier <bpoirier@suse.de>
Acked-by: Ido Shamay <idos@mellanox.com>
Fixes: d03a68f ("net/mlx4_en: Configure the XPS queue mapping on driver load")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/net/ethernet/mellanox/mlx4/en_tx.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
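[ Note: a sketch of the failure mode described in the commit message,
  using hypothetical numbers for illustration. With nr_cpu_ids == 4,
  only bits 0..3 of a struct cpumask are valid. After e.g.
  "ethtool -L eth0 tx 16" raises num_tx_rings_p_up past nr_cpu_ids, a
  ring can be created with queue_index == 10, and the pre-patch code
  below then touches bit 10 of two cpumasks: ]

	/* Pre-patch logic, annotated (sketch, not part of the patch): */
	if (queue_index < priv->num_tx_rings_p_up &&  /* 10 < 16: true */
	    cpu_online(queue_index))    /* reads bit 10 of cpu_online_mask,
					 * past the nr_cpu_ids == 4 limit */
		cpumask_set_cpu(queue_index,  /* may set invalid bit 10 */
				&ring->affinity_mask);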
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -143,8 +143,10 @@ int mlx4_en_create_tx_ring(struct mlx4_e
 	ring->hwtstamp_tx_type = priv->hwtstamp_config.tx_type;
 	ring->queue_index = queue_index;
 
-	if (queue_index < priv->num_tx_rings_p_up && cpu_online(queue_index))
-		cpumask_set_cpu(queue_index, &ring->affinity_mask);
+	if (queue_index < priv->num_tx_rings_p_up)
+		cpumask_set_cpu_local_first(queue_index,
+					    priv->mdev->dev->numa_node,
+					    &ring->affinity_mask);
 
 	*pring = ring;
 	return 0;
@@ -213,7 +215,7 @@ int mlx4_en_activate_tx_ring(struct mlx4
 	err = mlx4_qp_to_ready(mdev->dev, &ring->wqres.mtt, &ring->context,
 			       &ring->qp, &ring->qp_state);
 
-	if (!user_prio && cpu_online(ring->queue_index))
+	if (!cpumask_empty(&ring->affinity_mask))
 		netif_set_xps_queue(priv->dev, &ring->affinity_mask,
 				    ring->queue_index);
 
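[ Note: how the two hunks fit together, annotated; a sketch of the
  post-patch logic, not part of the patch itself: ]

	/* Ring creation: instead of using queue_index directly as a CPU
	 * number, pick a real online CPU for the ring, preferring the
	 * device's NUMA node; cpumask_set_cpu_local_first() wraps the
	 * index onto the set of online CPUs, so the mask never has a
	 * bit >= nr_cpu_ids set. */
	if (queue_index < priv->num_tx_rings_p_up)
		cpumask_set_cpu_local_first(queue_index,
					    priv->mdev->dev->numa_node,
					    &ring->affinity_mask);

	/* Ring activation: program XPS only when a CPU was recorded at
	 * creation time, rather than re-testing cpu_online() with a
	 * possibly out-of-range queue_index. */
	if (!cpumask_empty(&ring->affinity_mask))
		netif_set_xps_queue(priv->dev, &ring->affinity_mask,
				    ring->queue_index);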
Patches currently in stable-queue which might be from bpoirier@suse.de are
queue-4.0/mlx4-fix-tx-ring-affinity_mask-creation.patch