* [PATCH net-next v4 1/3] net/mlx4: Track RX allocation failures in a stat
2024-05-09 20:50 [PATCH net-next v4 0/3] mlx4: Add support for netdev-genl API Joe Damato
@ 2024-05-09 20:50 ` Joe Damato
2024-05-12 8:17 ` Tariq Toukan
2024-05-09 20:50 ` [PATCH net-next v4 2/3] net/mlx4: link NAPI instances to queues and IRQs Joe Damato
2024-05-09 20:50 ` [PATCH net-next v4 3/3] net/mlx4: support per-queue statistics via netlink Joe Damato
2 siblings, 1 reply; 6+ messages in thread
From: Joe Damato @ 2024-05-09 20:50 UTC (permalink / raw)
To: linux-kernel, netdev
Cc: mkarsten, nalramli, Joe Damato, Tariq Toukan, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni,
open list:MELLANOX MLX4 core VPI driver
mlx4_en_alloc_frags currently returns -ENOMEM when mlx4_alloc_page
fails but does not increment a stat field when this occurs.
A new field called alloc_fail has been added to struct mlx4_en_rx_ring
and is incremented in mlx4_en_alloc_frags when mlx4_alloc_page fails and
-ENOMEM is returned.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
drivers/net/ethernet/mellanox/mlx4/en_rx.c | 4 +++-
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 1 +
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 8328df8645d5..15c57e9517e9 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -82,8 +82,10 @@ static int mlx4_en_alloc_frags(struct mlx4_en_priv *priv,
for (i = 0; i < priv->num_frags; i++, frags++) {
if (!frags->page) {
- if (mlx4_alloc_page(priv, frags, gfp))
+ if (mlx4_alloc_page(priv, frags, gfp)) {
+ ring->alloc_fail++;
return -ENOMEM;
+ }
ring->rx_alloc_pages++;
}
rx_desc->data[i].addr = cpu_to_be64(frags->dma +
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index efe3f97b874f..cd70df22724b 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -355,6 +355,7 @@ struct mlx4_en_rx_ring {
unsigned long xdp_tx;
unsigned long xdp_tx_full;
unsigned long dropped;
+ unsigned long alloc_fail;
int hwtstamp_rx_filter;
cpumask_var_t affinity_mask;
struct xdp_rxq_info xdp_rxq;
--
2.25.1
* Re: [PATCH net-next v4 1/3] net/mlx4: Track RX allocation failures in a stat
2024-05-09 20:50 ` [PATCH net-next v4 1/3] net/mlx4: Track RX allocation failures in a stat Joe Damato
@ 2024-05-12 8:17 ` Tariq Toukan
2024-05-12 18:37 ` Joe Damato
0 siblings, 1 reply; 6+ messages in thread
From: Tariq Toukan @ 2024-05-12 8:17 UTC (permalink / raw)
To: Joe Damato, linux-kernel, netdev
Cc: mkarsten, nalramli, Tariq Toukan, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni,
open list:MELLANOX MLX4 core VPI driver, Tariq Toukan
On 09/05/2024 23:50, Joe Damato wrote:
> mlx4_en_alloc_frags currently returns -ENOMEM when mlx4_alloc_page
> fails but does not increment a stat field when this occurs.
>
> A new field called alloc_fail has been added to struct mlx4_en_rx_ring
> and is incremented in mlx4_en_alloc_frags when mlx4_alloc_page fails and
> -ENOMEM is returned.
>
> Signed-off-by: Joe Damato <jdamato@fastly.com>
> Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
> ---
> drivers/net/ethernet/mellanox/mlx4/en_rx.c | 4 +++-
> drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 1 +
> 2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> index 8328df8645d5..15c57e9517e9 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -82,8 +82,10 @@ static int mlx4_en_alloc_frags(struct mlx4_en_priv *priv,
>
> for (i = 0; i < priv->num_frags; i++, frags++) {
> if (!frags->page) {
> - if (mlx4_alloc_page(priv, frags, gfp))
> + if (mlx4_alloc_page(priv, frags, gfp)) {
> + ring->alloc_fail++;
> return -ENOMEM;
> + }
> ring->rx_alloc_pages++;
> }
> rx_desc->data[i].addr = cpu_to_be64(frags->dma +
> diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> index efe3f97b874f..cd70df22724b 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> @@ -355,6 +355,7 @@ struct mlx4_en_rx_ring {
> unsigned long xdp_tx;
> unsigned long xdp_tx_full;
> unsigned long dropped;
> + unsigned long alloc_fail;
> int hwtstamp_rx_filter;
> cpumask_var_t affinity_mask;
> struct xdp_rxq_info xdp_rxq;
Counter should be reset in mlx4_en_clear_stats().
BTW, there are existing counters that are missing there already.
We should add them as well, not related to your series though...
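Something like this should be enough for the new counter (untested sketch
only; it assumes the existing per-ring RX reset loop in
mlx4_en_clear_stats(), and the neighbouring fields shown are illustrative):
	/* mlx4_en_clear_stats(): reset alloc_fail with the other RX ring stats */
	for (i = 0; i < priv->rx_ring_num; i++) {
		priv->rx_ring[i]->bytes = 0;
		priv->rx_ring[i]->packets = 0;
		priv->rx_ring[i]->alloc_fail = 0;	/* new counter from this patch */
	}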
* Re: [PATCH net-next v4 1/3] net/mlx4: Track RX allocation failures in a stat
2024-05-12 8:17 ` Tariq Toukan
@ 2024-05-12 18:37 ` Joe Damato
0 siblings, 0 replies; 6+ messages in thread
From: Joe Damato @ 2024-05-12 18:37 UTC (permalink / raw)
To: Tariq Toukan
Cc: linux-kernel, netdev, mkarsten, nalramli, Tariq Toukan,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
open list:MELLANOX MLX4 core VPI driver
On Sun, May 12, 2024 at 11:17:09AM +0300, Tariq Toukan wrote:
>
>
> On 09/05/2024 23:50, Joe Damato wrote:
> > mlx4_en_alloc_frags currently returns -ENOMEM when mlx4_alloc_page
> > fails but does not increment a stat field when this occurs.
> >
> > A new field called alloc_fail has been added to struct mlx4_en_rx_ring
> > and is incremented in mlx4_en_alloc_frags when mlx4_alloc_page fails and
> > -ENOMEM is returned.
> >
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
> > ---
> > drivers/net/ethernet/mellanox/mlx4/en_rx.c | 4 +++-
> > drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 1 +
> > 2 files changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > index 8328df8645d5..15c57e9517e9 100644
> > --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> > @@ -82,8 +82,10 @@ static int mlx4_en_alloc_frags(struct mlx4_en_priv *priv,
> > for (i = 0; i < priv->num_frags; i++, frags++) {
> > if (!frags->page) {
> > - if (mlx4_alloc_page(priv, frags, gfp))
> > + if (mlx4_alloc_page(priv, frags, gfp)) {
> > + ring->alloc_fail++;
> > return -ENOMEM;
> > + }
> > ring->rx_alloc_pages++;
> > }
> > rx_desc->data[i].addr = cpu_to_be64(frags->dma +
> > diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> > index efe3f97b874f..cd70df22724b 100644
> > --- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> > +++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
> > @@ -355,6 +355,7 @@ struct mlx4_en_rx_ring {
> > unsigned long xdp_tx;
> > unsigned long xdp_tx_full;
> > unsigned long dropped;
> > + unsigned long alloc_fail;
> > int hwtstamp_rx_filter;
> > cpumask_var_t affinity_mask;
> > struct xdp_rxq_info xdp_rxq;
>
> Counter should be reset in mlx4_en_clear_stats().
OK, thanks. I'll add that to the v5, alongside any other feedback that
comes in within the next ~24 hours or so.
> BTW, there are existing counters that are missing there already.
> We should add them as well, not related to your series though...
Yeah, I see what you mean about the other counters. I think those could be
sent later as a separate 'Fixes' patch?
* [PATCH net-next v4 2/3] net/mlx4: link NAPI instances to queues and IRQs
2024-05-09 20:50 [PATCH net-next v4 0/3] mlx4: Add support for netdev-genl API Joe Damato
2024-05-09 20:50 ` [PATCH net-next v4 1/3] net/mlx4: Track RX allocation failures in a stat Joe Damato
@ 2024-05-09 20:50 ` Joe Damato
2024-05-09 20:50 ` [PATCH net-next v4 3/3] net/mlx4: support per-queue statistics via netlink Joe Damato
2 siblings, 0 replies; 6+ messages in thread
From: Joe Damato @ 2024-05-09 20:50 UTC (permalink / raw)
To: linux-kernel, netdev
Cc: mkarsten, nalramli, Joe Damato, Jakub Kicinski, Tariq Toukan,
David S. Miller, Eric Dumazet, Paolo Abeni,
open list:MELLANOX MLX4 core VPI driver
Make mlx4 compatible with the newly added netlink queue GET APIs.
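For example, with the NAPI instances linked, the queue<->NAPI<->IRQ mapping
should become visible through the netdev genl family. A query along these
lines with the in-tree YNL CLI (invocation and ifindex shown for
illustration only) is expected to report a napi-id per RX/TX queue, and
napi-get should report the IRQ for each NAPI instance:
  $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
        --dump queue-get --json='{"ifindex": 2}'
  $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
        --dump napi-get --json='{"ifindex": 2}'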
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
Acked-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/mellanox/mlx4/en_cq.c | 14 ++++++++++++++
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 1 +
2 files changed, 15 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
index 1184ac5751e1..461cc2c79c71 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
@@ -126,6 +126,7 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
cq_idx = cq_idx % priv->rx_ring_num;
rx_cq = priv->rx_cq[cq_idx];
cq->vector = rx_cq->vector;
+ irq = mlx4_eq_get_irq(mdev->dev, cq->vector);
}
if (cq->type == RX)
@@ -142,18 +143,23 @@ int mlx4_en_activate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq,
if (err)
goto free_eq;
+ cq->cq_idx = cq_idx;
cq->mcq.event = mlx4_en_cq_event;
switch (cq->type) {
case TX:
cq->mcq.comp = mlx4_en_tx_irq;
netif_napi_add_tx(cq->dev, &cq->napi, mlx4_en_poll_tx_cq);
+ netif_napi_set_irq(&cq->napi, irq);
napi_enable(&cq->napi);
+ netif_queue_set_napi(cq->dev, cq_idx, NETDEV_QUEUE_TYPE_TX, &cq->napi);
break;
case RX:
cq->mcq.comp = mlx4_en_rx_irq;
netif_napi_add(cq->dev, &cq->napi, mlx4_en_poll_rx_cq);
+ netif_napi_set_irq(&cq->napi, irq);
napi_enable(&cq->napi);
+ netif_queue_set_napi(cq->dev, cq_idx, NETDEV_QUEUE_TYPE_RX, &cq->napi);
break;
case TX_XDP:
/* nothing regarding napi, it's shared with rx ring */
@@ -189,6 +195,14 @@ void mlx4_en_destroy_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq **pcq)
void mlx4_en_deactivate_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq *cq)
{
if (cq->type != TX_XDP) {
+ enum netdev_queue_type qtype;
+
+ if (cq->type == RX)
+ qtype = NETDEV_QUEUE_TYPE_RX;
+ else
+ qtype = NETDEV_QUEUE_TYPE_TX;
+
+ netif_queue_set_napi(cq->dev, cq->cq_idx, qtype, NULL);
napi_disable(&cq->napi);
netif_napi_del(&cq->napi);
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index cd70df22724b..28b70dcc652e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -380,6 +380,7 @@ struct mlx4_en_cq {
#define MLX4_EN_OPCODE_ERROR 0x1e
const struct cpumask *aff_mask;
+ int cq_idx;
};
struct mlx4_en_port_profile {
--
2.25.1
* [PATCH net-next v4 3/3] net/mlx4: support per-queue statistics via netlink
2024-05-09 20:50 [PATCH net-next v4 0/3] mlx4: Add support for netdev-genl API Joe Damato
2024-05-09 20:50 ` [PATCH net-next v4 1/3] net/mlx4: Track RX allocation failures in a stat Joe Damato
2024-05-09 20:50 ` [PATCH net-next v4 2/3] net/mlx4: link NAPI instances to queues and IRQs Joe Damato
@ 2024-05-09 20:50 ` Joe Damato
2 siblings, 0 replies; 6+ messages in thread
From: Joe Damato @ 2024-05-09 20:50 UTC (permalink / raw)
To: linux-kernel, netdev
Cc: mkarsten, nalramli, Joe Damato, Tariq Toukan, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni,
open list:MELLANOX MLX4 core VPI driver
Make mlx4 compatible with the newly added netlink queue stats API.
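With this in place, per-queue counters (including the alloc_fail counter
added earlier in the series) should be readable via the netdev genl qstats
interface, for example with the in-tree YNL CLI (invocation shown for
illustration only):
  $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
        --dump qstats-get --json='{"scope": "queue"}'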
Signed-off-by: Joe Damato <jdamato@fastly.com>
Tested-by: Martin Karsten <mkarsten@uwaterloo.ca>
---
.../net/ethernet/mellanox/mlx4/en_netdev.c | 73 +++++++++++++++++++
1 file changed, 73 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 4c089cfa027a..fd79e957b5d8 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -43,6 +43,7 @@
#include <net/vxlan.h>
#include <net/devlink.h>
#include <net/rps.h>
+#include <net/netdev_queues.h>
#include <linux/mlx4/driver.h>
#include <linux/mlx4/device.h>
@@ -3099,6 +3100,77 @@ void mlx4_en_set_stats_bitmap(struct mlx4_dev *dev,
last_i += NUM_PHY_STATS;
}
+static void mlx4_get_queue_stats_rx(struct net_device *dev, int i,
+ struct netdev_queue_stats_rx *stats)
+{
+ struct mlx4_en_priv *priv = netdev_priv(dev);
+ const struct mlx4_en_rx_ring *ring;
+
+ spin_lock_bh(&priv->stats_lock);
+
+ if (!priv->port_up || mlx4_is_master(priv->mdev->dev))
+ goto out_unlock;
+
+ ring = priv->rx_ring[i];
+ stats->packets = READ_ONCE(ring->packets);
+ stats->bytes = READ_ONCE(ring->bytes);
+ stats->alloc_fail = READ_ONCE(ring->alloc_fail);
+
+out_unlock:
+ spin_unlock_bh(&priv->stats_lock);
+}
+
+static void mlx4_get_queue_stats_tx(struct net_device *dev, int i,
+ struct netdev_queue_stats_tx *stats)
+{
+ struct mlx4_en_priv *priv = netdev_priv(dev);
+ const struct mlx4_en_tx_ring *ring;
+
+ spin_lock_bh(&priv->stats_lock);
+
+ if (!priv->port_up || mlx4_is_master(priv->mdev->dev))
+ goto out_unlock;
+
+ ring = priv->tx_ring[TX][i];
+ stats->packets = READ_ONCE(ring->packets);
+ stats->bytes = READ_ONCE(ring->bytes);
+
+out_unlock:
+ spin_unlock_bh(&priv->stats_lock);
+}
+
+static void mlx4_get_base_stats(struct net_device *dev,
+ struct netdev_queue_stats_rx *rx,
+ struct netdev_queue_stats_tx *tx)
+{
+ struct mlx4_en_priv *priv = netdev_priv(dev);
+
+ spin_lock_bh(&priv->stats_lock);
+
+ if (!priv->port_up || mlx4_is_master(priv->mdev->dev))
+ goto out_unlock;
+
+ if (priv->rx_ring_num) {
+ rx->packets = 0;
+ rx->bytes = 0;
+ rx->alloc_fail = 0;
+ }
+
+ if (priv->tx_ring_num[TX]) {
+ tx->packets = 0;
+ tx->bytes = 0;
+ }
+
+out_unlock:
+ spin_unlock_bh(&priv->stats_lock);
+}
+
+static const struct netdev_stat_ops mlx4_stat_ops = {
+ .get_queue_stats_rx = mlx4_get_queue_stats_rx,
+ .get_queue_stats_tx = mlx4_get_queue_stats_tx,
+ .get_base_stats = mlx4_get_base_stats,
+};
+
int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
struct mlx4_en_port_profile *prof)
{
@@ -3262,6 +3334,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
netif_set_real_num_tx_queues(dev, priv->tx_ring_num[TX]);
netif_set_real_num_rx_queues(dev, priv->rx_ring_num);
+ dev->stat_ops = &mlx4_stat_ops;
dev->ethtool_ops = &mlx4_en_ethtool_ops;
/*
--
2.25.1