* Patch "net/mlx4_en: Schedule napi when RX buffers allocation fails" has been added to the 3.19-stable tree
From: gregkh @ 2015-05-08 11:09 UTC
To: idos, amirv, davem, gregkh; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
net/mlx4_en: Schedule napi when RX buffers allocation fails
to the 3.19-stable tree, which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
net-mlx4_en-schedule-napi-when-rx-buffers-allocation-fails.patch
and it can be found in the queue-3.19 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Fri May 8 13:04:43 CEST 2015
From: Ido Shamay <idos@mellanox.com>
Date: Thu, 30 Apr 2015 17:32:46 +0300
Subject: net/mlx4_en: Schedule napi when RX buffers allocation fails
From: Ido Shamay <idos@mellanox.com>
[ Upstream commit 07841f9d94c11afe00c0498cf242edf4075729f4 ]
When the system is out of memory, refilling of RX buffers fails while
the driver continues to pass received packets to the kernel stack.
At some point, when all RX buffers are depleted, the driver may go to
sleep and never recover, even when memory for new RX buffers is once
again available. This is because the hardware no longer owns any valid
descriptors, so no interrupt will be generated to bring the driver back
to work in napi context. Fix it by scheduling the napi poll function
from the service_task delayed workqueue for as long as the allocations
fail.
Signed-off-by: Ido Shamay <idos@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/mellanox/mlx4/en_netdev.c |    1
 drivers/net/ethernet/mellanox/mlx4/en_rx.c     |   26 +++++++++++++++++++++++--
 drivers/net/ethernet/mellanox/mlx4/mlx4_en.h   |    1
 3 files changed, 26 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1467,6 +1467,7 @@ static void mlx4_en_service_task(struct
 		if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS)
 			mlx4_en_ptp_overflow_check(mdev);
 
+		mlx4_en_recover_from_oom(priv);
 		queue_delayed_work(mdev->workqueue, &priv->service_task,
 				   SERVICE_TASK_DELAY);
 	}
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -240,6 +240,12 @@ static int mlx4_en_prepare_rx_desc(struc
 	return mlx4_en_alloc_frags(priv, rx_desc, frags, ring->page_alloc, gfp);
 }
 
+static inline bool mlx4_en_is_ring_empty(struct mlx4_en_rx_ring *ring)
+{
+	BUG_ON((u32)(ring->prod - ring->cons) > ring->actual_size);
+	return ring->prod == ring->cons;
+}
+
 static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring)
 {
 	*ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff);
@@ -311,8 +317,7 @@ static void mlx4_en_free_rx_buf(struct m
 	       ring->cons, ring->prod);
 
 	/* Unmap and free Rx buffers */
-	BUG_ON((u32) (ring->prod - ring->cons) > ring->actual_size);
-	while (ring->cons != ring->prod) {
+	while (!mlx4_en_is_ring_empty(ring)) {
 		index = ring->cons & ring->size_mask;
 		en_dbg(DRV, priv, "Processing descriptor:%d\n", index);
 		mlx4_en_free_rx_desc(priv, ring, index);
@@ -487,6 +492,23 @@ err_allocator:
 	return err;
 }
 
+/* We recover from out of memory by scheduling our napi poll
+ * function (mlx4_en_process_cq), which tries to allocate
+ * all missing RX buffers (call to mlx4_en_refill_rx_buffers).
+ */
+void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
+{
+	int ring;
+
+	if (!priv->port_up)
+		return;
+
+	for (ring = 0; ring < priv->rx_ring_num; ring++) {
+		if (mlx4_en_is_ring_empty(priv->rx_ring[ring]))
+			napi_reschedule(&priv->rx_cq[ring]->napi);
+	}
+}
+
 void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
 			     struct mlx4_en_rx_ring **pring,
 			     u32 size, u16 stride)
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -788,6 +788,7 @@ int mlx4_en_activate_tx_ring(struct mlx4
 void mlx4_en_deactivate_tx_ring(struct mlx4_en_priv *priv,
 				struct mlx4_en_tx_ring *ring);
 void mlx4_en_set_num_rx_rings(struct mlx4_en_dev *mdev);
+void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv);
 int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
 			   struct mlx4_en_rx_ring **pring,
 			   u32 size, u16 stride, int node);
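A small, self-contained illustration may help readers who are not
building the driver. The sketch below (plain C with hypothetical
types, not the mlx4 code itself) models the empty-ring test the patch
introduces: prod and cons are free-running unsigned counters, the ring
is empty exactly when they are equal, and unsigned subtraction keeps
the sanity check valid across counter wrap-around. In the driver,
mlx4_en_recover_from_oom() applies this test to every RX ring and
calls napi_reschedule() to kick the poll function, since an empty ring
means the hardware can no longer raise an RX interrupt on its own.

  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical stand-in for struct mlx4_en_rx_ring. */
  struct rx_ring {
      uint32_t prod;        /* descriptors posted (producer counter) */
      uint32_t cons;        /* descriptors consumed (consumer counter) */
      uint32_t actual_size; /* ring capacity */
  };

  /* Models the logic of mlx4_en_is_ring_empty(). */
  static bool ring_empty(const struct rx_ring *ring)
  {
      /* Unsigned subtraction is safe across counter wrap-around. */
      assert((uint32_t)(ring->prod - ring->cons) <= ring->actual_size);
      return ring->prod == ring->cons;
  }

  int main(void)
  {
      struct rx_ring ring = { .prod = 5, .cons = 5, .actual_size = 1024 };

      /* prod == cons: the hardware owns no valid descriptors, so no
       * RX interrupt can arrive; this is the condition under which
       * the driver reschedules napi from the periodic service task.
       */
      printf("empty: %d\n", ring_empty(&ring));

      ring.prod += 4; /* four buffers successfully refilled */
      printf("empty: %d\n", ring_empty(&ring));
      return 0;
  }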
Patches currently in stable-queue which might be from idos@mellanox.com are
queue-3.19/mlx4-fix-tx-ring-affinity_mask-creation.patch
queue-3.19/net-mlx4_en-schedule-napi-when-rx-buffers-allocation-fails.patch