From: <gregkh@linuxfoundation.org>
To: clsoto@linux.vnet.ibm.com, davem@davemloft.net,
gregkh@linuxfoundation.org, matanb@mellanox.com
Cc: <stable@vger.kernel.org>, <stable-commits@vger.kernel.org>
Subject: Patch "net/mlx4_core: Capping number of requested MSIXs to MAX_MSIX" has been added to the 4.2-stable tree
Date: Wed, 30 Sep 2015 05:32:05 +0200
Message-ID: <144358392542151@kroah.com>
This is a note to let you know that I've just added the patch titled
net/mlx4_core: Capping number of requested MSIXs to MAX_MSIX
to the 4.2-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
net-mlx4_core-capping-number-of-requested-msixs-to-max_msix.patch
and it can be found in the queue-4.2 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Wed Sep 30 05:25:07 CEST 2015
From: Carol L Soto <clsoto@linux.vnet.ibm.com>
Date: Thu, 27 Aug 2015 14:43:25 -0500
Subject: net/mlx4_core: Capping number of requested MSIXs to MAX_MSIX
From: Carol L Soto <clsoto@linux.vnet.ibm.com>
[ Upstream commit 9293267a3e2a7a2555d8ddc8f9301525e5b03b1b ]
We currently manage IRQs in pool_bm, which is a bit field of
MAX_MSIX bits, so more than MAX_MSIX interrupts cannot be
tracked in pool_bm.
Fix this by capping the number of requested MSI-X vectors to
MAX_MSIX.
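For context, the cap is needed because the driver tracks its MSI-X vectors
in a pool bitmap (pool_bm) of MAX_MSIX bits, so any request above that
count has no bits left to represent it. Below is a minimal standalone
sketch of the capping pattern only; the MAX_MSIX value and the
cap_msix_request() helper are placeholders for illustration, not the
driver's actual code:

#include <stdbool.h>
#include <stdio.h>

#define MAX_MSIX 128	/* illustrative value; the real driver defines its own */

/*
 * Sketch of the capping pattern: a pool bitmap with MAX_MSIX bits can only
 * track MAX_MSIX vectors, so larger requests are clamped and the ports are
 * marked as sharing the remaining vectors.
 */
static int cap_msix_request(int nreq, bool *shared_ports)
{
	if (nreq > MAX_MSIX) {
		nreq = MAX_MSIX;
		*shared_ports = true;
	}
	return nreq;
}

int main(void)
{
	bool shared_ports = false;
	int nreq = cap_msix_request(200, &shared_ports);

	printf("requesting %d vectors, shared_ports=%d\n", nreq, shared_ports);
	return 0;
}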
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Carol L Soto <clsoto@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/net/ethernet/mellanox/mlx4/main.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -2654,9 +2654,14 @@ static void mlx4_enable_msi_x(struct mlx
 
 	if (msi_x) {
 		int nreq = dev->caps.num_ports * num_online_cpus() + 1;
+		bool shared_ports = false;
 
 		nreq = min_t(int, dev->caps.num_eqs - dev->caps.reserved_eqs,
 			     nreq);
+		if (nreq > MAX_MSIX) {
+			nreq = MAX_MSIX;
+			shared_ports = true;
+		}
 
 		entries = kcalloc(nreq, sizeof *entries, GFP_KERNEL);
 		if (!entries)
@@ -2679,6 +2684,9 @@ static void mlx4_enable_msi_x(struct mlx
 		bitmap_zero(priv->eq_table.eq[MLX4_EQ_ASYNC].actv_ports.ports,
 			    dev->caps.num_ports);
 
+		if (MLX4_IS_LEGACY_EQ_MODE(dev->caps))
+			shared_ports = true;
+
 		for (i = 0; i < dev->caps.num_comp_vectors + 1; i++) {
 			if (i == MLX4_EQ_ASYNC)
 				continue;
@@ -2686,7 +2694,7 @@ static void mlx4_enable_msi_x(struct mlx
 			priv->eq_table.eq[i].irq =
 				entries[i + 1 - !!(i > MLX4_EQ_ASYNC)].vector;
 
-			if (MLX4_IS_LEGACY_EQ_MODE(dev->caps)) {
+			if (shared_ports) {
 				bitmap_fill(priv->eq_table.eq[i].actv_ports.ports,
 					    dev->caps.num_ports);
 				/* We don't set affinity hint when there
Patches currently in stable-queue which might be from clsoto@linux.vnet.ibm.com are
queue-4.2/net-mlx4_core-capping-number-of-requested-msixs-to-max_msix.patch