* [net-next V2 01/15] net/mlx5: Use shared code for checking lag is supported
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Roi Dayan
From: Roi Dayan <roid@nvidia.com>
Move the shared function that checks whether lag is supported into the
lag header file.
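The two call sites previously open-coded equivalent but differently written
forms of the same check (dev.c tested num_lag_ports <= 1, lag.c tested
num_lag_ports < 2); after this patch both use one static inline in lag/lag.h,
shown here for reference (same body as in the diff below):

  static inline bool mlx5_lag_is_supported(struct mlx5_core_dev *dev)
  {
          /* lag needs the vport group manager and lag master capabilities
           * and a lag port count in the range 2..MLX5_MAX_PORTS
           */
          if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
              !MLX5_CAP_GEN(dev, lag_master) ||
              MLX5_CAP_GEN(dev, num_lag_ports) < 2 ||
              MLX5_CAP_GEN(dev, num_lag_ports) > MLX5_MAX_PORTS)
                  return false;
          return true;
  }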
Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/dev.c | 6 ++----
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c | 10 ----------
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h | 12 ++++++++++--
3 files changed, 12 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
index edb06fb9bbc5..7909f378dc93 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
@@ -36,6 +36,7 @@
#include <linux/mlx5/vport.h>
#include "mlx5_core.h"
#include "devlink.h"
+#include "lag/lag.h"
/* intf dev list mutex */
static DEFINE_MUTEX(mlx5_intf_mutex);
@@ -587,10 +588,7 @@ static int next_phys_dev_lag(struct device *dev, const void *data)
if (!mdev)
return 0;
- if (!MLX5_CAP_GEN(mdev, vport_group_manager) ||
- !MLX5_CAP_GEN(mdev, lag_master) ||
- (MLX5_CAP_GEN(mdev, num_lag_ports) > MLX5_MAX_PORTS ||
- MLX5_CAP_GEN(mdev, num_lag_ports) <= 1))
+ if (!mlx5_lag_is_supported(mdev))
return 0;
return _next_phys_dev(mdev, data);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
index f0a074b2fcdf..900a18883f28 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
@@ -1268,16 +1268,6 @@ void mlx5_lag_remove_mdev(struct mlx5_core_dev *dev)
mlx5_ldev_put(ldev);
}
-bool mlx5_lag_is_supported(struct mlx5_core_dev *dev)
-{
- if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
- !MLX5_CAP_GEN(dev, lag_master) ||
- MLX5_CAP_GEN(dev, num_lag_ports) < 2 ||
- MLX5_CAP_GEN(dev, num_lag_ports) > MLX5_MAX_PORTS)
- return false;
- return true;
-}
-
void mlx5_lag_add_mdev(struct mlx5_core_dev *dev)
{
int err;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
index a061b1873e27..481e92f39fe6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
@@ -74,8 +74,6 @@ struct mlx5_lag {
struct lag_mpesw lag_mpesw;
};
-bool mlx5_lag_is_supported(struct mlx5_core_dev *dev);
-
static inline struct mlx5_lag *
mlx5_lag_dev(struct mlx5_core_dev *dev)
{
@@ -115,4 +113,14 @@ void mlx5_lag_remove_devices(struct mlx5_lag *ldev);
int mlx5_deactivate_lag(struct mlx5_lag *ldev);
void mlx5_lag_add_devices(struct mlx5_lag *ldev);
+static inline bool mlx5_lag_is_supported(struct mlx5_core_dev *dev)
+{
+ if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
+ !MLX5_CAP_GEN(dev, lag_master) ||
+ MLX5_CAP_GEN(dev, num_lag_ports) < 2 ||
+ MLX5_CAP_GEN(dev, num_lag_ports) > MLX5_MAX_PORTS)
+ return false;
+ return true;
+}
+
#endif /* __MLX5_LAG_H__ */
--
2.41.0
* Re: [net-next V2 01/15] net/mlx5: Use shared code for checking lag is supported
From: patchwork-bot+netdevbpf @ 2023-07-28 20:50 UTC (permalink / raw)
To: Saeed Mahameed
Cc: davem, kuba, pabeni, edumazet, saeedm, netdev, tariqt, roid
Hello:
This series was applied to netdev/net-next.git (main)
by Saeed Mahameed <saeedm@nvidia.com>:
On Thu, 27 Jul 2023 11:39:00 -0700 you wrote:
> From: Roi Dayan <roid@nvidia.com>
>
> Move shared function to check lag is supported to lag header file.
>
> Signed-off-by: Roi Dayan <roid@nvidia.com>
> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
>
> [...]
Here is the summary with links:
- [net-next,V2,01/15] net/mlx5: Use shared code for checking lag is supported
https://git.kernel.org/netdev/net-next/c/02ceda65f014
- [net-next,V2,02/15] net/mlx5: Devcom, Infrastructure changes
https://git.kernel.org/netdev/net-next/c/88d162b47981
- [net-next,V2,03/15] net/mlx5e: E-Switch, Register devcom device with switch id key
https://git.kernel.org/netdev/net-next/c/1161d22ded07
- [net-next,V2,04/15] net/mlx5e: E-Switch, Allow devcom initialization on more vports
https://git.kernel.org/netdev/net-next/c/e2bb7984719b
- [net-next,V2,05/15] net/mlx5: Re-organize mlx5_cmd struct
https://git.kernel.org/netdev/net-next/c/58db72869a9f
- [net-next,V2,06/15] net/mlx5: Remove redundant cmdif revision check
https://git.kernel.org/netdev/net-next/c/0714ec9ea1f2
- [net-next,V2,07/15] net/mlx5: split mlx5_cmd_init() to probe and reload routines
https://git.kernel.org/netdev/net-next/c/06cd555f73ca
- [net-next,V2,08/15] net/mlx5: Allocate command stats with xarray
https://git.kernel.org/netdev/net-next/c/b90ebfc018b0
- [net-next,V2,09/15] net/mlx5e: Remove duplicate code for user flow
https://git.kernel.org/netdev/net-next/c/9ec85cc9c90e
- [net-next,V2,10/15] net/mlx5e: Make flow classification filters static
https://git.kernel.org/netdev/net-next/c/b9335a757232
- [net-next,V2,11/15] net/mlx5: Don't check vport->enabled in port ops
https://git.kernel.org/netdev/net-next/c/550449d8e389
- [net-next,V2,12/15] net/mlx5: Remove pointless devlink_rate checks
https://git.kernel.org/netdev/net-next/c/3e82a9cf579e
- [net-next,V2,13/15] net/mlx5: Make mlx5_esw_offloads_rep_load/unload() static
https://git.kernel.org/netdev/net-next/c/b71863876f84
- [net-next,V2,14/15] net/mlx5: Make mlx5_eswitch_load/unload_vport() static
https://git.kernel.org/netdev/net-next/c/329980d05d8c
- [net-next,V2,15/15] net/mlx5: Give esw_offloads_load/unload_rep() "mlx5_" prefix
https://git.kernel.org/netdev/net-next/c/9eca8bb8da43
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* [net-next V2 02/15] net/mlx5: Devcom, Infrastructure changes
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Roi Dayan, Eli Cohen,
Shay Drory
From: Roi Dayan <roid@nvidia.com>
Update the devcom infrastructure to be more generic, with no
dependency on the max supported ports definition or on a device guid,
and more encapsulated, so that callers no longer need to pass the
registered devcom component id on every event call.
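As a rough caller-side before/after sketch (function and macro names are
taken from the diff below; handle_peer() is only a placeholder for the
per-peer work done at the real call sites):

  /* before: component id on every call, index-based iteration */
  struct mlx5_devcom *devcom = esw->dev->priv.devcom;
  struct mlx5_eswitch *peer_esw;
  int i;

  if (mlx5_devcom_for_each_peer_begin(devcom, MLX5_DEVCOM_ESW_OFFLOADS)) {
          mlx5_devcom_for_each_peer_entry(devcom, MLX5_DEVCOM_ESW_OFFLOADS,
                                          peer_esw, i)
                  handle_peer(peer_esw);  /* placeholder, not a real helper */
          mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
  }

  /* after: the registered component handle (e.g. esw->devcom) is enough,
   * and iteration uses a list cursor instead of a port index
   */
  struct mlx5_devcom_comp_dev *pos;

  if (mlx5_devcom_for_each_peer_begin(esw->devcom)) {
          mlx5_devcom_for_each_peer_entry(esw->devcom, peer_esw, pos)
                  handle_peer(peer_esw);  /* placeholder, not a real helper */
          mlx5_devcom_for_each_peer_end(esw->devcom);
  }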
Signed-off-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
.../net/ethernet/mellanox/mlx5/core/en_rep.c | 21 +-
.../net/ethernet/mellanox/mlx5/core/en_tc.c | 36 +-
.../ethernet/mellanox/mlx5/core/esw/bridge.c | 22 +-
.../mellanox/mlx5/core/esw/bridge_mcast.c | 17 +-
.../net/ethernet/mellanox/mlx5/core/eswitch.h | 3 +
.../mellanox/mlx5/core/eswitch_offloads.c | 47 +-
.../net/ethernet/mellanox/mlx5/core/lag/lag.c | 2 +-
.../ethernet/mellanox/mlx5/core/lib/devcom.c | 448 ++++++++++--------
.../ethernet/mellanox/mlx5/core/lib/devcom.h | 74 ++-
.../net/ethernet/mellanox/mlx5/core/main.c | 12 +-
include/linux/mlx5/driver.h | 4 +-
11 files changed, 359 insertions(+), 327 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index 152b62138450..ca4f57f5064f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -399,15 +399,13 @@ static void mlx5e_sqs2vport_stop(struct mlx5_eswitch *esw,
}
static int mlx5e_sqs2vport_add_peers_rules(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep,
- struct mlx5_devcom *devcom,
struct mlx5e_rep_sq *rep_sq, int i)
{
- struct mlx5_eswitch *peer_esw = NULL;
struct mlx5_flow_handle *flow_rule;
- int tmp;
+ struct mlx5_devcom_comp_dev *tmp;
+ struct mlx5_eswitch *peer_esw;
- mlx5_devcom_for_each_peer_entry(devcom, MLX5_DEVCOM_ESW_OFFLOADS,
- peer_esw, tmp) {
+ mlx5_devcom_for_each_peer_entry(esw->devcom, peer_esw, tmp) {
u16 peer_rule_idx = MLX5_CAP_GEN(peer_esw->dev, vhca_id);
struct mlx5e_rep_sq_peer *sq_peer;
int err;
@@ -443,7 +441,6 @@ static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
struct mlx5_flow_handle *flow_rule;
struct mlx5e_rep_priv *rpriv;
struct mlx5e_rep_sq *rep_sq;
- struct mlx5_devcom *devcom;
bool devcom_locked = false;
int err;
int i;
@@ -451,10 +448,10 @@ static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
if (esw->mode != MLX5_ESWITCH_OFFLOADS)
return 0;
- devcom = esw->dev->priv.devcom;
rpriv = mlx5e_rep_to_rep_priv(rep);
- if (mlx5_devcom_comp_is_ready(devcom, MLX5_DEVCOM_ESW_OFFLOADS) &&
- mlx5_devcom_for_each_peer_begin(devcom, MLX5_DEVCOM_ESW_OFFLOADS))
+
+ if (mlx5_devcom_comp_is_ready(esw->devcom) &&
+ mlx5_devcom_for_each_peer_begin(esw->devcom))
devcom_locked = true;
for (i = 0; i < sqns_num; i++) {
@@ -477,7 +474,7 @@ static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
xa_init(&rep_sq->sq_peer);
if (devcom_locked) {
- err = mlx5e_sqs2vport_add_peers_rules(esw, rep, devcom, rep_sq, i);
+ err = mlx5e_sqs2vport_add_peers_rules(esw, rep, rep_sq, i);
if (err) {
mlx5_eswitch_del_send_to_vport_rule(rep_sq->send_to_vport_rule);
xa_destroy(&rep_sq->sq_peer);
@@ -490,7 +487,7 @@ static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
}
if (devcom_locked)
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ mlx5_devcom_for_each_peer_end(esw->devcom);
return 0;
@@ -498,7 +495,7 @@ static int mlx5e_sqs2vport_start(struct mlx5_eswitch *esw,
mlx5e_sqs2vport_stop(esw, rep);
if (devcom_locked)
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ mlx5_devcom_for_each_peer_end(esw->devcom);
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 8d0a3f69693e..22bc88620653 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -1668,11 +1668,10 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
{
struct mlx5e_priv *out_priv, *route_priv;
struct mlx5_core_dev *route_mdev;
- struct mlx5_devcom *devcom;
+ struct mlx5_devcom_comp_dev *pos;
struct mlx5_eswitch *esw;
u16 vhca_id;
int err;
- int i;
out_priv = netdev_priv(out_dev);
esw = out_priv->mdev->priv.eswitch;
@@ -1688,10 +1687,8 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
return err;
rcu_read_lock();
- devcom = out_priv->mdev->priv.devcom;
err = -ENODEV;
- mlx5_devcom_for_each_peer_entry_rcu(devcom, MLX5_DEVCOM_ESW_OFFLOADS,
- esw, i) {
+ mlx5_devcom_for_each_peer_entry_rcu(esw->devcom, esw, pos) {
err = mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
if (!err)
break;
@@ -2031,15 +2028,15 @@ static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow)
{
if (mlx5e_is_eswitch_flow(flow)) {
- struct mlx5_devcom *devcom = flow->priv->mdev->priv.devcom;
+ struct mlx5_devcom_comp_dev *devcom = flow->priv->mdev->priv.eswitch->devcom;
- if (!mlx5_devcom_for_each_peer_begin(devcom, MLX5_DEVCOM_ESW_OFFLOADS)) {
+ if (!mlx5_devcom_for_each_peer_begin(devcom)) {
mlx5e_tc_del_fdb_flow(priv, flow);
return;
}
mlx5e_tc_del_fdb_peers_flow(flow);
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ mlx5_devcom_for_each_peer_end(devcom);
mlx5e_tc_del_fdb_flow(priv, flow);
} else {
mlx5e_tc_del_nic_flow(priv, flow);
@@ -4216,8 +4213,7 @@ static bool is_peer_flow_needed(struct mlx5e_tc_flow *flow)
flow_flag_test(flow, INGRESS);
bool act_is_encap = !!(attr->action &
MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT);
- bool esw_paired = mlx5_devcom_comp_is_ready(esw_attr->in_mdev->priv.devcom,
- MLX5_DEVCOM_ESW_OFFLOADS);
+ bool esw_paired = mlx5_devcom_comp_is_ready(esw_attr->in_mdev->priv.eswitch->devcom);
if (!esw_paired)
return false;
@@ -4471,14 +4467,13 @@ mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
struct net_device *filter_dev,
struct mlx5e_tc_flow **__flow)
{
- struct mlx5_devcom *devcom = priv->mdev->priv.devcom;
+ struct mlx5_devcom_comp_dev *devcom = priv->mdev->priv.eswitch->devcom, *pos;
struct mlx5e_rep_priv *rpriv = priv->ppriv;
struct mlx5_eswitch_rep *in_rep = rpriv->rep;
struct mlx5_core_dev *in_mdev = priv->mdev;
struct mlx5_eswitch *peer_esw;
struct mlx5e_tc_flow *flow;
int err;
- int i;
flow = __mlx5e_add_fdb_flow(priv, f, flow_flags, filter_dev, in_rep,
in_mdev);
@@ -4490,27 +4485,25 @@ mlx5e_add_fdb_flow(struct mlx5e_priv *priv,
return 0;
}
- if (!mlx5_devcom_for_each_peer_begin(devcom, MLX5_DEVCOM_ESW_OFFLOADS)) {
+ if (!mlx5_devcom_for_each_peer_begin(devcom)) {
err = -ENODEV;
goto clean_flow;
}
- mlx5_devcom_for_each_peer_entry(devcom,
- MLX5_DEVCOM_ESW_OFFLOADS,
- peer_esw, i) {
+ mlx5_devcom_for_each_peer_entry(devcom, peer_esw, pos) {
err = mlx5e_tc_add_fdb_peer_flow(f, flow, flow_flags, peer_esw);
if (err)
goto peer_clean;
}
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ mlx5_devcom_for_each_peer_end(devcom);
*__flow = flow;
return 0;
peer_clean:
mlx5e_tc_del_fdb_peers_flow(flow);
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ mlx5_devcom_for_each_peer_end(devcom);
clean_flow:
mlx5e_tc_del_fdb_flow(priv, flow);
return err;
@@ -4728,7 +4721,7 @@ int mlx5e_tc_fill_action_stats(struct mlx5e_priv *priv,
int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
struct flow_cls_offload *f, unsigned long flags)
{
- struct mlx5_devcom *devcom = priv->mdev->priv.devcom;
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct rhashtable *tc_ht = get_tc_ht(priv, flags);
struct mlx5e_tc_flow *flow;
struct mlx5_fc *counter;
@@ -4764,7 +4757,7 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
/* Under multipath it's possible for one rule to be currently
* un-offloaded while the other rule is offloaded.
*/
- if (!mlx5_devcom_for_each_peer_begin(devcom, MLX5_DEVCOM_ESW_OFFLOADS))
+ if (esw && !mlx5_devcom_for_each_peer_begin(esw->devcom))
goto out;
if (flow_flag_test(flow, DUP)) {
@@ -4795,7 +4788,8 @@ int mlx5e_stats_flower(struct net_device *dev, struct mlx5e_priv *priv,
}
no_peer_counter:
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ if (esw)
+ mlx5_devcom_for_each_peer_end(esw->devcom);
out:
flow_stats_update(&f->stats, bytes, packets, 0, lastuse,
FLOW_ACTION_HW_STATS_DELAYED);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
index f4fe1daa4afd..e36294b7ade2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
@@ -652,30 +652,30 @@ mlx5_esw_bridge_ingress_flow_peer_create(u16 vport_num, u16 esw_owner_vhca_id,
struct mlx5_esw_bridge_vlan *vlan, u32 counter_id,
struct mlx5_esw_bridge *bridge)
{
- struct mlx5_devcom *devcom = bridge->br_offloads->esw->dev->priv.devcom;
+ struct mlx5_devcom_comp_dev *devcom = bridge->br_offloads->esw->devcom, *pos;
struct mlx5_eswitch *tmp, *peer_esw = NULL;
static struct mlx5_flow_handle *handle;
- int i;
- if (!mlx5_devcom_for_each_peer_begin(devcom, MLX5_DEVCOM_ESW_OFFLOADS))
+ if (!mlx5_devcom_for_each_peer_begin(devcom))
return ERR_PTR(-ENODEV);
- mlx5_devcom_for_each_peer_entry(devcom,
- MLX5_DEVCOM_ESW_OFFLOADS,
- tmp, i) {
+ mlx5_devcom_for_each_peer_entry(devcom, tmp, pos) {
if (mlx5_esw_is_owner(tmp, vport_num, esw_owner_vhca_id)) {
peer_esw = tmp;
break;
}
}
+
if (!peer_esw) {
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
- return ERR_PTR(-ENODEV);
+ handle = ERR_PTR(-ENODEV);
+ goto out;
}
handle = mlx5_esw_bridge_ingress_flow_with_esw_create(vport_num, addr, vlan, counter_id,
bridge, peer_esw);
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+
+out:
+ mlx5_devcom_for_each_peer_end(devcom);
return handle;
}
@@ -1391,8 +1391,8 @@ mlx5_esw_bridge_fdb_entry_init(struct net_device *dev, u16 vport_num, u16 esw_ow
mlx5_fc_id(counter), bridge);
if (IS_ERR(handle)) {
err = PTR_ERR(handle);
- esw_warn(esw->dev, "Failed to create ingress flow(vport=%u,err=%d)\n",
- vport_num, err);
+ esw_warn(esw->dev, "Failed to create ingress flow(vport=%u,err=%d,peer=%d)\n",
+ vport_num, err, peer);
goto err_ingress_flow_create;
}
entry->ingress_handle = handle;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
index 2455f8b93c1e..7a01714b3780 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
@@ -539,30 +539,29 @@ mlx5_esw_bridge_mcast_filter_flow_create(struct mlx5_esw_bridge_port *port)
static struct mlx5_flow_handle *
mlx5_esw_bridge_mcast_filter_flow_peer_create(struct mlx5_esw_bridge_port *port)
{
- struct mlx5_devcom *devcom = port->bridge->br_offloads->esw->dev->priv.devcom;
+ struct mlx5_devcom_comp_dev *devcom = port->bridge->br_offloads->esw->devcom, *pos;
struct mlx5_eswitch *tmp, *peer_esw = NULL;
static struct mlx5_flow_handle *handle;
- int i;
- if (!mlx5_devcom_for_each_peer_begin(devcom, MLX5_DEVCOM_ESW_OFFLOADS))
+ if (!mlx5_devcom_for_each_peer_begin(devcom))
return ERR_PTR(-ENODEV);
- mlx5_devcom_for_each_peer_entry(devcom,
- MLX5_DEVCOM_ESW_OFFLOADS,
- tmp, i) {
+ mlx5_devcom_for_each_peer_entry(devcom, tmp, pos) {
if (mlx5_esw_is_owner(tmp, port->vport_num, port->esw_owner_vhca_id)) {
peer_esw = tmp;
break;
}
}
+
if (!peer_esw) {
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
- return ERR_PTR(-ENODEV);
+ handle = ERR_PTR(-ENODEV);
+ goto out;
}
handle = mlx5_esw_bridge_mcast_flow_with_esw_create(port, peer_esw);
- mlx5_devcom_for_each_peer_end(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+out:
+ mlx5_devcom_for_each_peer_end(devcom);
return handle;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index ae0dc8a3060d..6d9378b0bce5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -354,6 +354,7 @@ struct mlx5_eswitch {
} params;
struct blocking_notifier_head n_head;
struct xarray paired;
+ struct mlx5_devcom_comp_dev *devcom;
};
void esw_offloads_disable(struct mlx5_eswitch *esw);
@@ -383,6 +384,7 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw);
void mlx5_eswitch_disable(struct mlx5_eswitch *esw);
void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw);
void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw);
+bool mlx5_esw_offloads_devcom_is_ready(struct mlx5_eswitch *esw);
int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
u16 vport, const u8 *mac);
int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
@@ -818,6 +820,7 @@ static inline void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool cle
static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw) {}
static inline void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw) {}
static inline void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw) {}
+static inline bool mlx5_esw_offloads_devcom_is_ready(struct mlx5_eswitch *esw) { return false; }
static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev) { return false; }
static inline
int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, u16 vport, int link_state) { return 0; }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index bdfe609cc9ec..11cce630c1b8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2811,7 +2811,6 @@ static int mlx5_esw_offloads_devcom_event(int event,
void *event_data)
{
struct mlx5_eswitch *esw = my_data;
- struct mlx5_devcom *devcom = esw->dev->priv.devcom;
struct mlx5_eswitch *peer_esw = event_data;
u16 esw_i, peer_esw_i;
bool esw_paired;
@@ -2833,6 +2832,7 @@ static int mlx5_esw_offloads_devcom_event(int event,
err = mlx5_esw_offloads_set_ns_peer(esw, peer_esw, true);
if (err)
goto err_out;
+
err = mlx5_esw_offloads_pair(esw, peer_esw);
if (err)
goto err_peer;
@@ -2851,7 +2851,7 @@ static int mlx5_esw_offloads_devcom_event(int event,
esw->num_peers++;
peer_esw->num_peers++;
- mlx5_devcom_comp_set_ready(devcom, MLX5_DEVCOM_ESW_OFFLOADS, true);
+ mlx5_devcom_comp_set_ready(esw->devcom, true);
break;
case ESW_OFFLOADS_DEVCOM_UNPAIR:
@@ -2861,7 +2861,7 @@ static int mlx5_esw_offloads_devcom_event(int event,
peer_esw->num_peers--;
esw->num_peers--;
if (!esw->num_peers && !peer_esw->num_peers)
- mlx5_devcom_comp_set_ready(devcom, MLX5_DEVCOM_ESW_OFFLOADS, false);
+ mlx5_devcom_comp_set_ready(esw->devcom, false);
xa_erase(&peer_esw->paired, esw_i);
xa_erase(&esw->paired, peer_esw_i);
mlx5_esw_offloads_unpair(peer_esw, esw);
@@ -2888,7 +2888,7 @@ static int mlx5_esw_offloads_devcom_event(int event,
void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw)
{
- struct mlx5_devcom *devcom = esw->dev->priv.devcom;
+ u64 guid;
int i;
for (i = 0; i < MLX5_MAX_PORTS; i++)
@@ -2902,34 +2902,41 @@ void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw)
return;
xa_init(&esw->paired);
- mlx5_devcom_register_component(devcom,
- MLX5_DEVCOM_ESW_OFFLOADS,
- mlx5_esw_offloads_devcom_event,
- esw);
+ guid = mlx5_query_nic_system_image_guid(esw->dev);
esw->num_peers = 0;
- mlx5_devcom_send_event(devcom,
- MLX5_DEVCOM_ESW_OFFLOADS,
+ esw->devcom = mlx5_devcom_register_component(esw->dev->priv.devc,
+ MLX5_DEVCOM_ESW_OFFLOADS,
+ guid,
+ mlx5_esw_offloads_devcom_event,
+ esw);
+ if (IS_ERR_OR_NULL(esw->devcom))
+ return;
+
+ mlx5_devcom_send_event(esw->devcom,
ESW_OFFLOADS_DEVCOM_PAIR,
- ESW_OFFLOADS_DEVCOM_UNPAIR, esw);
+ ESW_OFFLOADS_DEVCOM_UNPAIR,
+ esw);
}
void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
{
- struct mlx5_devcom *devcom = esw->dev->priv.devcom;
-
- if (!MLX5_CAP_ESW(esw->dev, merged_eswitch))
+ if (IS_ERR_OR_NULL(esw->devcom))
return;
- if (!mlx5_lag_is_supported(esw->dev))
- return;
-
- mlx5_devcom_send_event(devcom, MLX5_DEVCOM_ESW_OFFLOADS,
+ mlx5_devcom_send_event(esw->devcom,
+ ESW_OFFLOADS_DEVCOM_UNPAIR,
ESW_OFFLOADS_DEVCOM_UNPAIR,
- ESW_OFFLOADS_DEVCOM_UNPAIR, esw);
+ esw);
- mlx5_devcom_unregister_component(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
+ mlx5_devcom_unregister_component(esw->devcom);
xa_destroy(&esw->paired);
+ esw->devcom = NULL;
+}
+
+bool mlx5_esw_offloads_devcom_is_ready(struct mlx5_eswitch *esw)
+{
+ return mlx5_devcom_comp_is_ready(esw->devcom);
}
bool mlx5_esw_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
index 900a18883f28..af3fac090b82 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
@@ -835,7 +835,7 @@ static bool mlx5_shared_fdb_supported(struct mlx5_lag *ldev)
dev = ldev->pf[MLX5_LAG_P1].dev;
if (is_mdev_switchdev_mode(dev) &&
mlx5_eswitch_vport_match_metadata_enabled(dev->priv.eswitch) &&
- mlx5_devcom_comp_is_ready(dev->priv.devcom, MLX5_DEVCOM_ESW_OFFLOADS) &&
+ mlx5_esw_offloads_devcom_is_ready(dev->priv.eswitch) &&
MLX5_CAP_ESW(dev, esw_shared_ingress_acl) &&
mlx5_eswitch_get_npeers(dev->priv.eswitch) == MLX5_CAP_GEN(dev, num_lag_ports) - 1)
return true;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
index 78c94b22bdc0..feb62d952643 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
@@ -2,214 +2,273 @@
/* Copyright (c) 2018 Mellanox Technologies */
#include <linux/mlx5/vport.h>
+#include <linux/list.h>
#include "lib/devcom.h"
#include "mlx5_core.h"
-static LIST_HEAD(devcom_list);
+static LIST_HEAD(devcom_dev_list);
+static LIST_HEAD(devcom_comp_list);
+/* protect device list */
+static DEFINE_MUTEX(dev_list_lock);
+/* protect component list */
+static DEFINE_MUTEX(comp_list_lock);
-#define devcom_for_each_component(priv, comp, iter) \
- for (iter = 0; \
- comp = &(priv)->components[iter], iter < MLX5_DEVCOM_NUM_COMPONENTS; \
- iter++)
+#define devcom_for_each_component(iter) \
+ list_for_each_entry(iter, &devcom_comp_list, comp_list)
-struct mlx5_devcom_component {
- struct {
- void __rcu *data;
- } device[MLX5_DEVCOM_PORTS_SUPPORTED];
+struct mlx5_devcom_dev {
+ struct list_head list;
+ struct mlx5_core_dev *dev;
+ struct kref ref;
+};
+struct mlx5_devcom_comp {
+ struct list_head comp_list;
+ enum mlx5_devcom_component id;
+ u64 key;
+ struct list_head comp_dev_list_head;
mlx5_devcom_event_handler_t handler;
- struct rw_semaphore sem;
+ struct kref ref;
bool ready;
+ struct rw_semaphore sem;
};
-struct mlx5_devcom_list {
+struct mlx5_devcom_comp_dev {
struct list_head list;
-
- struct mlx5_devcom_component components[MLX5_DEVCOM_NUM_COMPONENTS];
- struct mlx5_core_dev *devs[MLX5_DEVCOM_PORTS_SUPPORTED];
+ struct mlx5_devcom_comp *comp;
+ struct mlx5_devcom_dev *devc;
+ void __rcu *data;
};
-struct mlx5_devcom {
- struct mlx5_devcom_list *priv;
- int idx;
-};
-
-static struct mlx5_devcom_list *mlx5_devcom_list_alloc(void)
+static bool devcom_dev_exists(struct mlx5_core_dev *dev)
{
- struct mlx5_devcom_component *comp;
- struct mlx5_devcom_list *priv;
- int i;
-
- priv = kzalloc(sizeof(*priv), GFP_KERNEL);
- if (!priv)
- return NULL;
+ struct mlx5_devcom_dev *iter;
- devcom_for_each_component(priv, comp, i)
- init_rwsem(&comp->sem);
+ list_for_each_entry(iter, &devcom_dev_list, list)
+ if (iter->dev == dev)
+ return true;
- return priv;
+ return false;
}
-static struct mlx5_devcom *mlx5_devcom_alloc(struct mlx5_devcom_list *priv,
- u8 idx)
+static struct mlx5_devcom_dev *
+mlx5_devcom_dev_alloc(struct mlx5_core_dev *dev)
{
- struct mlx5_devcom *devcom;
+ struct mlx5_devcom_dev *devc;
- devcom = kzalloc(sizeof(*devcom), GFP_KERNEL);
- if (!devcom)
+ devc = kzalloc(sizeof(*devc), GFP_KERNEL);
+ if (!devc)
return NULL;
- devcom->priv = priv;
- devcom->idx = idx;
- return devcom;
+ devc->dev = dev;
+ kref_init(&devc->ref);
+ return devc;
}
-/* Must be called with intf_mutex held */
-struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
+struct mlx5_devcom_dev *
+mlx5_devcom_register_device(struct mlx5_core_dev *dev)
{
- struct mlx5_devcom_list *priv = NULL, *iter;
- struct mlx5_devcom *devcom = NULL;
- bool new_priv = false;
- u64 sguid0, sguid1;
- int idx, i;
-
- if (!mlx5_core_is_pf(dev))
- return NULL;
- if (MLX5_CAP_GEN(dev, num_lag_ports) > MLX5_DEVCOM_PORTS_SUPPORTED)
- return NULL;
-
- mlx5_dev_list_lock();
- sguid0 = mlx5_query_nic_system_image_guid(dev);
- list_for_each_entry(iter, &devcom_list, list) {
- /* There is at least one device in iter */
- struct mlx5_core_dev *tmp_dev;
-
- idx = -1;
- for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++) {
- if (iter->devs[i])
- tmp_dev = iter->devs[i];
- else
- idx = i;
- }
-
- if (idx == -1)
- continue;
-
- sguid1 = mlx5_query_nic_system_image_guid(tmp_dev);
- if (sguid0 != sguid1)
- continue;
-
- priv = iter;
- break;
- }
+ struct mlx5_devcom_dev *devc;
- if (!priv) {
- priv = mlx5_devcom_list_alloc();
- if (!priv) {
- devcom = ERR_PTR(-ENOMEM);
- goto out;
- }
+ mutex_lock(&dev_list_lock);
- idx = 0;
- new_priv = true;
+ if (devcom_dev_exists(dev)) {
+ devc = ERR_PTR(-EEXIST);
+ goto out;
}
- priv->devs[idx] = dev;
- devcom = mlx5_devcom_alloc(priv, idx);
- if (!devcom) {
- if (new_priv)
- kfree(priv);
- devcom = ERR_PTR(-ENOMEM);
+ devc = mlx5_devcom_dev_alloc(dev);
+ if (!devc) {
+ devc = ERR_PTR(-ENOMEM);
goto out;
}
- if (new_priv)
- list_add(&priv->list, &devcom_list);
+ list_add_tail(&devc->list, &devcom_dev_list);
out:
- mlx5_dev_list_unlock();
- return devcom;
+ mutex_unlock(&dev_list_lock);
+ return devc;
}
-/* Must be called with intf_mutex held */
-void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom)
+static void
+mlx5_devcom_dev_release(struct kref *ref)
{
- struct mlx5_devcom_list *priv;
- int i;
+ struct mlx5_devcom_dev *devc = container_of(ref, struct mlx5_devcom_dev, ref);
- if (IS_ERR_OR_NULL(devcom))
- return;
+ mutex_lock(&dev_list_lock);
+ list_del(&devc->list);
+ mutex_unlock(&dev_list_lock);
+ kfree(devc);
+}
- mlx5_dev_list_lock();
- priv = devcom->priv;
- priv->devs[devcom->idx] = NULL;
+void mlx5_devcom_unregister_device(struct mlx5_devcom_dev *devc)
+{
+ if (!IS_ERR_OR_NULL(devc))
+ kref_put(&devc->ref, mlx5_devcom_dev_release);
+}
- kfree(devcom);
+static struct mlx5_devcom_comp *
+mlx5_devcom_comp_alloc(u64 id, u64 key, mlx5_devcom_event_handler_t handler)
+{
+ struct mlx5_devcom_comp *comp;
- for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++)
- if (priv->devs[i])
- break;
+ comp = kzalloc(sizeof(*comp), GFP_KERNEL);
+ if (!comp)
+ return ERR_PTR(-ENOMEM);
- if (i != MLX5_DEVCOM_PORTS_SUPPORTED)
- goto out;
+ comp->id = id;
+ comp->key = key;
+ comp->handler = handler;
+ init_rwsem(&comp->sem);
+ kref_init(&comp->ref);
+ INIT_LIST_HEAD(&comp->comp_dev_list_head);
- list_del(&priv->list);
- kfree(priv);
-out:
- mlx5_dev_list_unlock();
+ return comp;
}
-void mlx5_devcom_register_component(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id,
- mlx5_devcom_event_handler_t handler,
- void *data)
+static void
+mlx5_devcom_comp_release(struct kref *ref)
{
- struct mlx5_devcom_component *comp;
+ struct mlx5_devcom_comp *comp = container_of(ref, struct mlx5_devcom_comp, ref);
- if (IS_ERR_OR_NULL(devcom))
- return;
+ mutex_lock(&comp_list_lock);
+ list_del(&comp->comp_list);
+ mutex_unlock(&comp_list_lock);
+ kfree(comp);
+}
+
+static struct mlx5_devcom_comp_dev *
+devcom_alloc_comp_dev(struct mlx5_devcom_dev *devc,
+ struct mlx5_devcom_comp *comp,
+ void *data)
+{
+ struct mlx5_devcom_comp_dev *devcom;
- WARN_ON(!data);
+ devcom = kzalloc(sizeof(*devcom), GFP_KERNEL);
+ if (!devcom)
+ return ERR_PTR(-ENOMEM);
+
+ kref_get(&devc->ref);
+ devcom->devc = devc;
+ devcom->comp = comp;
+ rcu_assign_pointer(devcom->data, data);
- comp = &devcom->priv->components[id];
down_write(&comp->sem);
- comp->handler = handler;
- rcu_assign_pointer(comp->device[devcom->idx].data, data);
+ list_add_tail(&devcom->list, &comp->comp_dev_list_head);
up_write(&comp->sem);
+
+ return devcom;
}
-void mlx5_devcom_unregister_component(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id)
+static void
+devcom_free_comp_dev(struct mlx5_devcom_comp_dev *devcom)
{
- struct mlx5_devcom_component *comp;
-
- if (IS_ERR_OR_NULL(devcom))
- return;
+ struct mlx5_devcom_comp *comp = devcom->comp;
- comp = &devcom->priv->components[id];
down_write(&comp->sem);
- RCU_INIT_POINTER(comp->device[devcom->idx].data, NULL);
+ list_del(&devcom->list);
up_write(&comp->sem);
- synchronize_rcu();
+
+ kref_put(&devcom->devc->ref, mlx5_devcom_dev_release);
+ kfree(devcom);
+ kref_put(&comp->ref, mlx5_devcom_comp_release);
}
-int mlx5_devcom_send_event(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id,
+static bool
+devcom_component_equal(struct mlx5_devcom_comp *devcom,
+ enum mlx5_devcom_component id,
+ u64 key)
+{
+ return devcom->id == id && devcom->key == key;
+}
+
+static struct mlx5_devcom_comp *
+devcom_component_get(struct mlx5_devcom_dev *devc,
+ enum mlx5_devcom_component id,
+ u64 key,
+ mlx5_devcom_event_handler_t handler)
+{
+ struct mlx5_devcom_comp *comp;
+
+ devcom_for_each_component(comp) {
+ if (devcom_component_equal(comp, id, key)) {
+ if (handler == comp->handler) {
+ kref_get(&comp->ref);
+ return comp;
+ }
+
+ mlx5_core_err(devc->dev,
+ "Cannot register existing devcom component with different handler\n");
+ return ERR_PTR(-EINVAL);
+ }
+ }
+
+ return NULL;
+}
+
+struct mlx5_devcom_comp_dev *
+mlx5_devcom_register_component(struct mlx5_devcom_dev *devc,
+ enum mlx5_devcom_component id,
+ u64 key,
+ mlx5_devcom_event_handler_t handler,
+ void *data)
+{
+ struct mlx5_devcom_comp_dev *devcom;
+ struct mlx5_devcom_comp *comp;
+
+ if (IS_ERR_OR_NULL(devc))
+ return NULL;
+
+ mutex_lock(&comp_list_lock);
+ comp = devcom_component_get(devc, id, key, handler);
+ if (IS_ERR(comp)) {
+ devcom = ERR_PTR(-EINVAL);
+ goto out_unlock;
+ }
+
+ if (!comp) {
+ comp = mlx5_devcom_comp_alloc(id, key, handler);
+ if (IS_ERR(comp)) {
+ devcom = ERR_CAST(comp);
+ goto out_unlock;
+ }
+ list_add_tail(&comp->comp_list, &devcom_comp_list);
+ }
+ mutex_unlock(&comp_list_lock);
+
+ devcom = devcom_alloc_comp_dev(devc, comp, data);
+ if (IS_ERR(devcom))
+ kref_put(&comp->ref, mlx5_devcom_comp_release);
+
+ return devcom;
+
+out_unlock:
+ mutex_unlock(&comp_list_lock);
+ return devcom;
+}
+
+void mlx5_devcom_unregister_component(struct mlx5_devcom_comp_dev *devcom)
+{
+ if (!IS_ERR_OR_NULL(devcom))
+ devcom_free_comp_dev(devcom);
+}
+
+int mlx5_devcom_send_event(struct mlx5_devcom_comp_dev *devcom,
int event, int rollback_event,
void *event_data)
{
- struct mlx5_devcom_component *comp;
- int err = -ENODEV, i;
+ struct mlx5_devcom_comp *comp = devcom->comp;
+ struct mlx5_devcom_comp_dev *pos;
+ int err = 0;
+ void *data;
if (IS_ERR_OR_NULL(devcom))
- return err;
+ return -ENODEV;
- comp = &devcom->priv->components[id];
down_write(&comp->sem);
- for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++) {
- void *data = rcu_dereference_protected(comp->device[i].data,
- lockdep_is_held(&comp->sem));
+ list_for_each_entry(pos, &comp->comp_dev_list_head, list) {
+ data = rcu_dereference_protected(pos->data, lockdep_is_held(&comp->sem));
- if (i != devcom->idx && data) {
+ if (pos != devcom && data) {
err = comp->handler(event, data, event_data);
if (err)
goto rollback;
@@ -220,48 +279,43 @@ int mlx5_devcom_send_event(struct mlx5_devcom *devcom,
return 0;
rollback:
- while (i--) {
- void *data = rcu_dereference_protected(comp->device[i].data,
- lockdep_is_held(&comp->sem));
+ if (list_entry_is_head(pos, &comp->comp_dev_list_head, list))
+ goto out;
+ pos = list_prev_entry(pos, list);
+ list_for_each_entry_from_reverse(pos, &comp->comp_dev_list_head, list) {
+ data = rcu_dereference_protected(pos->data, lockdep_is_held(&comp->sem));
- if (i != devcom->idx && data)
+ if (pos != devcom && data)
comp->handler(rollback_event, data, event_data);
}
-
+out:
up_write(&comp->sem);
return err;
}
-void mlx5_devcom_comp_set_ready(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id,
- bool ready)
+void mlx5_devcom_comp_set_ready(struct mlx5_devcom_comp_dev *devcom, bool ready)
{
- struct mlx5_devcom_component *comp;
-
- comp = &devcom->priv->components[id];
- WARN_ON(!rwsem_is_locked(&comp->sem));
+ WARN_ON(!rwsem_is_locked(&devcom->comp->sem));
- WRITE_ONCE(comp->ready, ready);
+ WRITE_ONCE(devcom->comp->ready, ready);
}
-bool mlx5_devcom_comp_is_ready(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id)
+bool mlx5_devcom_comp_is_ready(struct mlx5_devcom_comp_dev *devcom)
{
if (IS_ERR_OR_NULL(devcom))
return false;
- return READ_ONCE(devcom->priv->components[id].ready);
+ return READ_ONCE(devcom->comp->ready);
}
-bool mlx5_devcom_for_each_peer_begin(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id)
+bool mlx5_devcom_for_each_peer_begin(struct mlx5_devcom_comp_dev *devcom)
{
- struct mlx5_devcom_component *comp;
+ struct mlx5_devcom_comp *comp;
if (IS_ERR_OR_NULL(devcom))
return false;
- comp = &devcom->priv->components[id];
+ comp = devcom->comp;
down_read(&comp->sem);
if (!READ_ONCE(comp->ready)) {
up_read(&comp->sem);
@@ -271,74 +325,60 @@ bool mlx5_devcom_for_each_peer_begin(struct mlx5_devcom *devcom,
return true;
}
-void mlx5_devcom_for_each_peer_end(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id)
+void mlx5_devcom_for_each_peer_end(struct mlx5_devcom_comp_dev *devcom)
{
- struct mlx5_devcom_component *comp = &devcom->priv->components[id];
-
- up_read(&comp->sem);
+ up_read(&devcom->comp->sem);
}
-void *mlx5_devcom_get_next_peer_data(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id,
- int *i)
+void *mlx5_devcom_get_next_peer_data(struct mlx5_devcom_comp_dev *devcom,
+ struct mlx5_devcom_comp_dev **pos)
{
- struct mlx5_devcom_component *comp;
- void *ret;
- int idx;
+ struct mlx5_devcom_comp *comp = devcom->comp;
+ struct mlx5_devcom_comp_dev *tmp;
+ void *data;
- comp = &devcom->priv->components[id];
+ tmp = list_prepare_entry(*pos, &comp->comp_dev_list_head, list);
- if (*i == MLX5_DEVCOM_PORTS_SUPPORTED)
- return NULL;
- for (idx = *i; idx < MLX5_DEVCOM_PORTS_SUPPORTED; idx++) {
- if (idx != devcom->idx) {
- ret = rcu_dereference_protected(comp->device[idx].data,
- lockdep_is_held(&comp->sem));
- if (ret)
+ list_for_each_entry_continue(tmp, &comp->comp_dev_list_head, list) {
+ if (tmp != devcom) {
+ data = rcu_dereference_protected(tmp->data, lockdep_is_held(&comp->sem));
+ if (data)
break;
}
}
- if (idx == MLX5_DEVCOM_PORTS_SUPPORTED) {
- *i = idx;
+ if (list_entry_is_head(tmp, &comp->comp_dev_list_head, list))
return NULL;
- }
- *i = idx + 1;
- return ret;
+ *pos = tmp;
+ return data;
}
-void *mlx5_devcom_get_next_peer_data_rcu(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id,
- int *i)
+void *mlx5_devcom_get_next_peer_data_rcu(struct mlx5_devcom_comp_dev *devcom,
+ struct mlx5_devcom_comp_dev **pos)
{
- struct mlx5_devcom_component *comp;
- void *ret;
- int idx;
+ struct mlx5_devcom_comp *comp = devcom->comp;
+ struct mlx5_devcom_comp_dev *tmp;
+ void *data;
- comp = &devcom->priv->components[id];
+ tmp = list_prepare_entry(*pos, &comp->comp_dev_list_head, list);
- if (*i == MLX5_DEVCOM_PORTS_SUPPORTED)
- return NULL;
- for (idx = *i; idx < MLX5_DEVCOM_PORTS_SUPPORTED; idx++) {
- if (idx != devcom->idx) {
+ list_for_each_entry_continue(tmp, &comp->comp_dev_list_head, list) {
+ if (tmp != devcom) {
/* This can change concurrently, however 'data' pointer will remain
* valid for the duration of RCU read section.
*/
if (!READ_ONCE(comp->ready))
return NULL;
- ret = rcu_dereference(comp->device[idx].data);
- if (ret)
+ data = rcu_dereference(tmp->data);
+ if (data)
break;
}
}
- if (idx == MLX5_DEVCOM_PORTS_SUPPORTED) {
- *i = idx;
+ if (list_entry_is_head(tmp, &comp->comp_dev_list_head, list))
return NULL;
- }
- *i = idx + 1;
- return ret;
+ *pos = tmp;
+ return data;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
index d953a01b8eaa..8389ac0af708 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
@@ -6,11 +6,8 @@
#include <linux/mlx5/driver.h>
-#define MLX5_DEVCOM_PORTS_SUPPORTED 4
-
-enum mlx5_devcom_components {
+enum mlx5_devcom_component {
MLX5_DEVCOM_ESW_OFFLOADS,
-
MLX5_DEVCOM_NUM_COMPONENTS,
};
@@ -18,45 +15,40 @@ typedef int (*mlx5_devcom_event_handler_t)(int event,
void *my_data,
void *event_data);
-struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev);
-void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom);
+struct mlx5_devcom_dev *mlx5_devcom_register_device(struct mlx5_core_dev *dev);
+void mlx5_devcom_unregister_device(struct mlx5_devcom_dev *devc);
-void mlx5_devcom_register_component(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id,
- mlx5_devcom_event_handler_t handler,
- void *data);
-void mlx5_devcom_unregister_component(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id);
+struct mlx5_devcom_comp_dev *
+mlx5_devcom_register_component(struct mlx5_devcom_dev *devc,
+ enum mlx5_devcom_component id,
+ u64 key,
+ mlx5_devcom_event_handler_t handler,
+ void *data);
+void mlx5_devcom_unregister_component(struct mlx5_devcom_comp_dev *devcom);
-int mlx5_devcom_send_event(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id,
+int mlx5_devcom_send_event(struct mlx5_devcom_comp_dev *devcom,
int event, int rollback_event,
void *event_data);
-void mlx5_devcom_comp_set_ready(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id,
- bool ready);
-bool mlx5_devcom_comp_is_ready(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id);
-
-bool mlx5_devcom_for_each_peer_begin(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id);
-void mlx5_devcom_for_each_peer_end(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id);
-void *mlx5_devcom_get_next_peer_data(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id, int *i);
-
-#define mlx5_devcom_for_each_peer_entry(devcom, id, data, i) \
- for (i = 0, data = mlx5_devcom_get_next_peer_data(devcom, id, &i); \
- data; \
- data = mlx5_devcom_get_next_peer_data(devcom, id, &i))
-
-void *mlx5_devcom_get_next_peer_data_rcu(struct mlx5_devcom *devcom,
- enum mlx5_devcom_components id, int *i);
-
-#define mlx5_devcom_for_each_peer_entry_rcu(devcom, id, data, i) \
- for (i = 0, data = mlx5_devcom_get_next_peer_data_rcu(devcom, id, &i); \
- data; \
- data = mlx5_devcom_get_next_peer_data_rcu(devcom, id, &i))
-
-#endif
+void mlx5_devcom_comp_set_ready(struct mlx5_devcom_comp_dev *devcom, bool ready);
+bool mlx5_devcom_comp_is_ready(struct mlx5_devcom_comp_dev *devcom);
+
+bool mlx5_devcom_for_each_peer_begin(struct mlx5_devcom_comp_dev *devcom);
+void mlx5_devcom_for_each_peer_end(struct mlx5_devcom_comp_dev *devcom);
+void *mlx5_devcom_get_next_peer_data(struct mlx5_devcom_comp_dev *devcom,
+ struct mlx5_devcom_comp_dev **pos);
+
+#define mlx5_devcom_for_each_peer_entry(devcom, data, pos) \
+ for (pos = NULL, data = mlx5_devcom_get_next_peer_data(devcom, &pos); \
+ data; \
+ data = mlx5_devcom_get_next_peer_data(devcom, &pos))
+
+void *mlx5_devcom_get_next_peer_data_rcu(struct mlx5_devcom_comp_dev *devcom,
+ struct mlx5_devcom_comp_dev **pos);
+
+#define mlx5_devcom_for_each_peer_entry_rcu(devcom, data, pos) \
+ for (pos = NULL, data = mlx5_devcom_get_next_peer_data_rcu(devcom, &pos); \
+ data; \
+ data = mlx5_devcom_get_next_peer_data_rcu(devcom, &pos))
+
+#endif /* __LIB_MLX5_DEVCOM_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 88dbea6631d5..a79c9b8286a7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -951,10 +951,10 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
{
int err;
- dev->priv.devcom = mlx5_devcom_register_device(dev);
- if (IS_ERR(dev->priv.devcom))
- mlx5_core_err(dev, "failed to register with devcom (0x%p)\n",
- dev->priv.devcom);
+ dev->priv.devc = mlx5_devcom_register_device(dev);
+ if (IS_ERR(dev->priv.devc))
+ mlx5_core_warn(dev, "failed to register devcom device %ld\n",
+ PTR_ERR(dev->priv.devc));
err = mlx5_query_board_id(dev);
if (err) {
@@ -1089,7 +1089,7 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
err_irq_cleanup:
mlx5_irq_table_cleanup(dev);
err_devcom:
- mlx5_devcom_unregister_device(dev->priv.devcom);
+ mlx5_devcom_unregister_device(dev->priv.devc);
return err;
}
@@ -1118,7 +1118,7 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
mlx5_events_cleanup(dev);
mlx5_eq_table_cleanup(dev);
mlx5_irq_table_cleanup(dev);
- mlx5_devcom_unregister_device(dev->priv.devcom);
+ mlx5_devcom_unregister_device(dev->priv.devc);
}
static int mlx5_function_enable(struct mlx5_core_dev *dev, bool boot, u64 timeout)
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 25d0528f9219..56dd3dfe2304 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -501,7 +501,7 @@ struct mlx5_events;
struct mlx5_mpfs;
struct mlx5_eswitch;
struct mlx5_lag;
-struct mlx5_devcom;
+struct mlx5_devcom_dev;
struct mlx5_fw_reset;
struct mlx5_eq_table;
struct mlx5_irq_table;
@@ -618,7 +618,7 @@ struct mlx5_priv {
struct mlx5_core_sriov sriov;
struct mlx5_lag *lag;
u32 flags;
- struct mlx5_devcom *devcom;
+ struct mlx5_devcom_dev *devc;
struct mlx5_fw_reset *fw_reset;
struct mlx5_core_roce roce;
struct mlx5_fc_stats fc_stats;
--
2.41.0
* [net-next V2 03/15] net/mlx5e: E-Switch, Register devcom device with switch id key
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Roi Dayan, Shay Drory
From: Roi Dayan <roid@nvidia.com>
Register devcom devices with the switch id instead of the guid.
The devcom interface is used to sync between ports on the same
eswitch, e.g. adding miss rules between the ports.
New eswitch devices could have the same guid but different switch
ids, so it is more correct to group by switch id, which identifies
whether the ports belong to the same eswitch.
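A minimal sketch of how the grouping key is now derived (taken from the
en_tc.c hunk below; the registration itself happens inside
mlx5_esw_offloads_devcom_init(), which this patch extends to take the key):

  struct netdev_phys_item_id ppid;
  u64 key;
  int err;

  /* the switch id reported via the port parent id becomes the devcom
   * grouping key, replacing the NIC system image guid
   */
  err = dev_get_port_parent_id(priv->netdev, &ppid, false);
  if (!err) {
          memcpy(&key, &ppid.id, sizeof(key));
          mlx5_esw_offloads_devcom_init(esw, key);
  }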
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 9 +++++++--
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 4 ++--
.../net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 7 ++-----
3 files changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index 22bc88620653..507825a1abc8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -5194,11 +5194,12 @@ void mlx5e_tc_ht_cleanup(struct rhashtable *tc_ht)
int mlx5e_tc_esw_init(struct mlx5_rep_uplink_priv *uplink_priv)
{
const size_t sz_enc_opts = sizeof(struct tunnel_match_enc_opts);
+ struct netdev_phys_item_id ppid;
struct mlx5e_rep_priv *rpriv;
struct mapping_ctx *mapping;
struct mlx5_eswitch *esw;
struct mlx5e_priv *priv;
- u64 mapping_id;
+ u64 mapping_id, key;
int err = 0;
rpriv = container_of(uplink_priv, struct mlx5e_rep_priv, uplink_priv);
@@ -5252,7 +5253,11 @@ int mlx5e_tc_esw_init(struct mlx5_rep_uplink_priv *uplink_priv)
goto err_action_counter;
}
- mlx5_esw_offloads_devcom_init(esw);
+ err = dev_get_port_parent_id(priv->netdev, &ppid, false);
+ if (!err) {
+ memcpy(&key, &ppid.id, sizeof(key));
+ mlx5_esw_offloads_devcom_init(esw, key);
+ }
return 0;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 6d9378b0bce5..9b5a1651b877 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -382,7 +382,7 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs);
void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf);
void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw);
void mlx5_eswitch_disable(struct mlx5_eswitch *esw);
-void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw);
+void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw, u64 key);
void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw);
bool mlx5_esw_offloads_devcom_is_ready(struct mlx5_eswitch *esw);
int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
@@ -818,7 +818,7 @@ static inline void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) {}
static inline int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs) { return 0; }
static inline void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf) {}
static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw) {}
-static inline void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw) {}
+static inline void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw, u64 key) {}
static inline void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw) {}
static inline bool mlx5_esw_offloads_devcom_is_ready(struct mlx5_eswitch *esw) { return false; }
static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev) { return false; }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 11cce630c1b8..7d100cd4afab 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2886,9 +2886,8 @@ static int mlx5_esw_offloads_devcom_event(int event,
return err;
}
-void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw)
+void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw, u64 key)
{
- u64 guid;
int i;
for (i = 0; i < MLX5_MAX_PORTS; i++)
@@ -2902,12 +2901,10 @@ void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw)
return;
xa_init(&esw->paired);
- guid = mlx5_query_nic_system_image_guid(esw->dev);
-
esw->num_peers = 0;
esw->devcom = mlx5_devcom_register_component(esw->dev->priv.devc,
MLX5_DEVCOM_ESW_OFFLOADS,
- guid,
+ key,
mlx5_esw_offloads_devcom_event,
esw);
if (IS_ERR_OR_NULL(esw->devcom))
--
2.41.0
* [net-next V2 04/15] net/mlx5e: E-Switch, Allow devcom initialization on more vports
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Roi Dayan, Shay Drory
From: Roi Dayan <roid@nvidia.com>
New features could use the devcom interface without necessarily
using the lag feature. For vport managers and the ECPF, however,
keep checking for lag support.
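The resulting gate at the top of mlx5_esw_offloads_devcom_init() then reads
as follows (a sketch with the same logic as the one-hunk diff below):

  if (!MLX5_CAP_ESW(esw->dev, merged_eswitch))
          return;

  /* only vport managers and the ECPF eswitch manager still require
   * lag support before joining the devcom component
   */
  if ((MLX5_VPORT_MANAGER(esw->dev) || mlx5_core_is_ecpf_esw_manager(esw->dev)) &&
      !mlx5_lag_is_supported(esw->dev))
          return;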
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 7d100cd4afab..b4b8cb788573 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2897,7 +2897,8 @@ void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw, u64 key)
if (!MLX5_CAP_ESW(esw->dev, merged_eswitch))
return;
- if (!mlx5_lag_is_supported(esw->dev))
+ if ((MLX5_VPORT_MANAGER(esw->dev) || mlx5_core_is_ecpf_esw_manager(esw->dev)) &&
+ !mlx5_lag_is_supported(esw->dev))
return;
xa_init(&esw->paired);
--
2.41.0
* [net-next V2 05/15] net/mlx5: Re-organize mlx5_cmd struct
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Shay Drory, Moshe Shemesh
From: Shay Drory <shayd@nvidia.com>
A downstream patch will split mlx5_cmd_init() into probe and reload
routines. As a preparation, reorganize the mlx5_cmd struct so that
the fields used by the reload routine are grouped in a new nested
struct.
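The include/linux/mlx5/driver.h hunk is not quoted in this excerpt; inferred
from the cmd->vars.* accesses in the cmd.c hunk below, the reshaped layout is
roughly the following (a sketch, not the verbatim upstream definition):

  /* field names and types inferred from the cmd.c accesses below */
  struct mlx5_cmd_vars {                  /* state the reload path reuses */
          u8                      log_sz;
          u8                      log_stride;
          int                     max_reg_cmds;
          unsigned long           bitmask;
          struct semaphore        sem;
          struct semaphore        pages_sem;
          struct semaphore        throttle_sem;
  };

  struct mlx5_cmd {
          struct mlx5_cmd_vars    vars;
          /* ... probe-only fields remain directly in mlx5_cmd ... */
  };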
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 94 +++++++++----------
.../net/ethernet/mellanox/mlx5/core/debugfs.c | 4 +-
include/linux/mlx5/driver.h | 21 +++--
3 files changed, 60 insertions(+), 59 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index d532883b42d7..f175af528fe0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -162,18 +162,18 @@ static int cmd_alloc_index(struct mlx5_cmd *cmd)
int ret;
spin_lock_irqsave(&cmd->alloc_lock, flags);
- ret = find_first_bit(&cmd->bitmask, cmd->max_reg_cmds);
- if (ret < cmd->max_reg_cmds)
- clear_bit(ret, &cmd->bitmask);
+ ret = find_first_bit(&cmd->vars.bitmask, cmd->vars.max_reg_cmds);
+ if (ret < cmd->vars.max_reg_cmds)
+ clear_bit(ret, &cmd->vars.bitmask);
spin_unlock_irqrestore(&cmd->alloc_lock, flags);
- return ret < cmd->max_reg_cmds ? ret : -ENOMEM;
+ return ret < cmd->vars.max_reg_cmds ? ret : -ENOMEM;
}
static void cmd_free_index(struct mlx5_cmd *cmd, int idx)
{
lockdep_assert_held(&cmd->alloc_lock);
- set_bit(idx, &cmd->bitmask);
+ set_bit(idx, &cmd->vars.bitmask);
}
static void cmd_ent_get(struct mlx5_cmd_work_ent *ent)
@@ -192,7 +192,7 @@ static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
if (ent->idx >= 0) {
cmd_free_index(cmd, ent->idx);
- up(ent->page_queue ? &cmd->pages_sem : &cmd->sem);
+ up(ent->page_queue ? &cmd->vars.pages_sem : &cmd->vars.sem);
}
cmd_free_ent(ent);
@@ -202,7 +202,7 @@ static void cmd_ent_put(struct mlx5_cmd_work_ent *ent)
static struct mlx5_cmd_layout *get_inst(struct mlx5_cmd *cmd, int idx)
{
- return cmd->cmd_buf + (idx << cmd->log_stride);
+ return cmd->cmd_buf + (idx << cmd->vars.log_stride);
}
static int mlx5_calc_cmd_blocks(struct mlx5_cmd_msg *msg)
@@ -974,7 +974,7 @@ static void cmd_work_handler(struct work_struct *work)
cb_timeout = msecs_to_jiffies(mlx5_tout_ms(dev, CMD));
complete(&ent->handling);
- sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
+ sem = ent->page_queue ? &cmd->vars.pages_sem : &cmd->vars.sem;
down(sem);
if (!ent->page_queue) {
alloc_ret = cmd_alloc_index(cmd);
@@ -994,9 +994,9 @@ static void cmd_work_handler(struct work_struct *work)
}
ent->idx = alloc_ret;
} else {
- ent->idx = cmd->max_reg_cmds;
+ ent->idx = cmd->vars.max_reg_cmds;
spin_lock_irqsave(&cmd->alloc_lock, flags);
- clear_bit(ent->idx, &cmd->bitmask);
+ clear_bit(ent->idx, &cmd->vars.bitmask);
spin_unlock_irqrestore(&cmd->alloc_lock, flags);
}
@@ -1572,15 +1572,15 @@ void mlx5_cmd_allowed_opcode(struct mlx5_core_dev *dev, u16 opcode)
struct mlx5_cmd *cmd = &dev->cmd;
int i;
- for (i = 0; i < cmd->max_reg_cmds; i++)
- down(&cmd->sem);
- down(&cmd->pages_sem);
+ for (i = 0; i < cmd->vars.max_reg_cmds; i++)
+ down(&cmd->vars.sem);
+ down(&cmd->vars.pages_sem);
cmd->allowed_opcode = opcode;
- up(&cmd->pages_sem);
- for (i = 0; i < cmd->max_reg_cmds; i++)
- up(&cmd->sem);
+ up(&cmd->vars.pages_sem);
+ for (i = 0; i < cmd->vars.max_reg_cmds; i++)
+ up(&cmd->vars.sem);
}
static void mlx5_cmd_change_mod(struct mlx5_core_dev *dev, int mode)
@@ -1588,15 +1588,15 @@ static void mlx5_cmd_change_mod(struct mlx5_core_dev *dev, int mode)
struct mlx5_cmd *cmd = &dev->cmd;
int i;
- for (i = 0; i < cmd->max_reg_cmds; i++)
- down(&cmd->sem);
- down(&cmd->pages_sem);
+ for (i = 0; i < cmd->vars.max_reg_cmds; i++)
+ down(&cmd->vars.sem);
+ down(&cmd->vars.pages_sem);
cmd->mode = mode;
- up(&cmd->pages_sem);
- for (i = 0; i < cmd->max_reg_cmds; i++)
- up(&cmd->sem);
+ up(&cmd->vars.pages_sem);
+ for (i = 0; i < cmd->vars.max_reg_cmds; i++)
+ up(&cmd->vars.sem);
}
static int cmd_comp_notifier(struct notifier_block *nb,
@@ -1655,7 +1655,7 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
/* there can be at most 32 command queues */
vector = vec & 0xffffffff;
- for (i = 0; i < (1 << cmd->log_sz); i++) {
+ for (i = 0; i < (1 << cmd->vars.log_sz); i++) {
if (test_bit(i, &vector)) {
ent = cmd->ent_arr[i];
@@ -1744,7 +1744,7 @@ static void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev)
/* wait for pending handlers to complete */
mlx5_eq_synchronize_cmd_irq(dev);
spin_lock_irqsave(&dev->cmd.alloc_lock, flags);
- vector = ~dev->cmd.bitmask & ((1ul << (1 << dev->cmd.log_sz)) - 1);
+ vector = ~dev->cmd.vars.bitmask & ((1ul << (1 << dev->cmd.vars.log_sz)) - 1);
if (!vector)
goto no_trig;
@@ -1753,14 +1753,14 @@ static void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev)
* to guarantee pending commands will not get freed in the meanwhile.
* For that reason, it also has to be done inside the alloc_lock.
*/
- for_each_set_bit(i, &bitmask, (1 << cmd->log_sz))
+ for_each_set_bit(i, &bitmask, (1 << cmd->vars.log_sz))
cmd_ent_get(cmd->ent_arr[i]);
vector |= MLX5_TRIGGERED_CMD_COMP;
spin_unlock_irqrestore(&dev->cmd.alloc_lock, flags);
mlx5_core_dbg(dev, "vector 0x%llx\n", vector);
mlx5_cmd_comp_handler(dev, vector, true);
- for_each_set_bit(i, &bitmask, (1 << cmd->log_sz))
+ for_each_set_bit(i, &bitmask, (1 << cmd->vars.log_sz))
cmd_ent_put(cmd->ent_arr[i]);
return;
@@ -1773,22 +1773,22 @@ void mlx5_cmd_flush(struct mlx5_core_dev *dev)
struct mlx5_cmd *cmd = &dev->cmd;
int i;
- for (i = 0; i < cmd->max_reg_cmds; i++) {
- while (down_trylock(&cmd->sem)) {
+ for (i = 0; i < cmd->vars.max_reg_cmds; i++) {
+ while (down_trylock(&cmd->vars.sem)) {
mlx5_cmd_trigger_completions(dev);
cond_resched();
}
}
- while (down_trylock(&cmd->pages_sem)) {
+ while (down_trylock(&cmd->vars.pages_sem)) {
mlx5_cmd_trigger_completions(dev);
cond_resched();
}
/* Unlock cmdif */
- up(&cmd->pages_sem);
- for (i = 0; i < cmd->max_reg_cmds; i++)
- up(&cmd->sem);
+ up(&cmd->vars.pages_sem);
+ for (i = 0; i < cmd->vars.max_reg_cmds; i++)
+ up(&cmd->vars.sem);
}
static struct mlx5_cmd_msg *alloc_msg(struct mlx5_core_dev *dev, int in_size,
@@ -1858,7 +1858,7 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out,
/* atomic context may not sleep */
if (callback)
return -EINVAL;
- down(&dev->cmd.throttle_sem);
+ down(&dev->cmd.vars.throttle_sem);
}
pages_queue = is_manage_pages(in);
@@ -1903,7 +1903,7 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out,
free_msg(dev, inb);
out_up:
if (throttle_op)
- up(&dev->cmd.throttle_sem);
+ up(&dev->cmd.vars.throttle_sem);
return err;
}
@@ -2213,16 +2213,16 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
goto err_free_pool;
cmd_l = ioread32be(&dev->iseg->cmdq_addr_l_sz) & 0xff;
- cmd->log_sz = cmd_l >> 4 & 0xf;
- cmd->log_stride = cmd_l & 0xf;
- if (1 << cmd->log_sz > MLX5_MAX_COMMANDS) {
+ cmd->vars.log_sz = cmd_l >> 4 & 0xf;
+ cmd->vars.log_stride = cmd_l & 0xf;
+ if (1 << cmd->vars.log_sz > MLX5_MAX_COMMANDS) {
mlx5_core_err(dev, "firmware reports too many outstanding commands %d\n",
- 1 << cmd->log_sz);
+ 1 << cmd->vars.log_sz);
err = -EINVAL;
goto err_free_page;
}
- if (cmd->log_sz + cmd->log_stride > MLX5_ADAPTER_PAGE_SHIFT) {
+ if (cmd->vars.log_sz + cmd->vars.log_stride > MLX5_ADAPTER_PAGE_SHIFT) {
mlx5_core_err(dev, "command queue size overflow\n");
err = -EINVAL;
goto err_free_page;
@@ -2230,13 +2230,13 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
cmd->state = MLX5_CMDIF_STATE_DOWN;
cmd->checksum_disabled = 1;
- cmd->max_reg_cmds = (1 << cmd->log_sz) - 1;
- cmd->bitmask = (1UL << cmd->max_reg_cmds) - 1;
+ cmd->vars.max_reg_cmds = (1 << cmd->vars.log_sz) - 1;
+ cmd->vars.bitmask = (1UL << cmd->vars.max_reg_cmds) - 1;
- cmd->cmdif_rev = ioread32be(&dev->iseg->cmdif_rev_fw_sub) >> 16;
- if (cmd->cmdif_rev > CMD_IF_REV) {
+ cmd->vars.cmdif_rev = ioread32be(&dev->iseg->cmdif_rev_fw_sub) >> 16;
+ if (cmd->vars.cmdif_rev > CMD_IF_REV) {
mlx5_core_err(dev, "driver does not support command interface version. driver %d, firmware %d\n",
- CMD_IF_REV, cmd->cmdif_rev);
+ CMD_IF_REV, cmd->vars.cmdif_rev);
err = -EOPNOTSUPP;
goto err_free_page;
}
@@ -2246,9 +2246,9 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
for (i = 0; i < MLX5_CMD_OP_MAX; i++)
spin_lock_init(&cmd->stats[i].lock);
- sema_init(&cmd->sem, cmd->max_reg_cmds);
- sema_init(&cmd->pages_sem, 1);
- sema_init(&cmd->throttle_sem, DIV_ROUND_UP(cmd->max_reg_cmds, 2));
+ sema_init(&cmd->vars.sem, cmd->vars.max_reg_cmds);
+ sema_init(&cmd->vars.pages_sem, 1);
+ sema_init(&cmd->vars.throttle_sem, DIV_ROUND_UP(cmd->vars.max_reg_cmds, 2));
cmd_h = (u32)((u64)(cmd->dma) >> 32);
cmd_l = (u32)(cmd->dma);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
index 2138f28a2931..9a826fb3ca38 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
@@ -176,8 +176,8 @@ static ssize_t slots_read(struct file *filp, char __user *buf, size_t count,
int ret;
cmd = filp->private_data;
- weight = bitmap_weight(&cmd->bitmask, cmd->max_reg_cmds);
- field = cmd->max_reg_cmds - weight;
+ weight = bitmap_weight(&cmd->vars.bitmask, cmd->vars.max_reg_cmds);
+ field = cmd->vars.max_reg_cmds - weight;
ret = snprintf(tbuf, sizeof(tbuf), "%d\n", field);
return simple_read_from_buffer(buf, count, pos, tbuf, ret);
}
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 56dd3dfe2304..39c5f4087c39 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -287,18 +287,23 @@ struct mlx5_cmd_stats {
struct mlx5_cmd {
struct mlx5_nb nb;
+ /* members that need to be queried or reinitialized on each reload */
+ struct {
+ u16 cmdif_rev;
+ u8 log_sz;
+ u8 log_stride;
+ int max_reg_cmds;
+ unsigned long bitmask;
+ struct semaphore sem;
+ struct semaphore pages_sem;
+ struct semaphore throttle_sem;
+ } vars;
enum mlx5_cmdif_state state;
void *cmd_alloc_buf;
dma_addr_t alloc_dma;
int alloc_size;
void *cmd_buf;
dma_addr_t dma;
- u16 cmdif_rev;
- u8 log_sz;
- u8 log_stride;
- int max_reg_cmds;
- int events;
- u32 __iomem *vector;
/* protect command queue allocations
*/
@@ -308,12 +313,8 @@ struct mlx5_cmd {
*/
spinlock_t token_lock;
u8 token;
- unsigned long bitmask;
char wq_name[MLX5_CMD_WQ_MAX_NAME];
struct workqueue_struct *wq;
- struct semaphore sem;
- struct semaphore pages_sem;
- struct semaphore throttle_sem;
int mode;
u16 allowed_opcode;
struct mlx5_cmd_work_ent *ent_arr[MLX5_MAX_COMMANDS];
--
2.41.0
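For orientation, a standalone userspace sketch (not driver code) of how the command-queue geometry now grouped under cmd->vars is derived from the low byte of cmdq_addr_l_sz, following the hunks above. The 0x46 input is a hypothetical example value, and MLX5_ADAPTER_PAGE_SHIFT is taken to be 12 (4K adapter pages):

#include <stdio.h>

int main(void)
{
	unsigned int cmd_l = 0x46;                         /* hypothetical low byte of cmdq_addr_l_sz */
	unsigned int log_sz = cmd_l >> 4 & 0xf;            /* 4 -> 1 << 4 = 16 command slots */
	unsigned int log_stride = cmd_l & 0xf;             /* 6 -> 64-byte stride between entries */
	int max_reg_cmds = (1 << log_sz) - 1;              /* 15; the last slot is kept for the page queue */
	unsigned long bitmask = (1UL << max_reg_cmds) - 1; /* 0x7fff free-slot mask */

	/* the overflow check from the hunk above: log_sz + log_stride (10 here)
	 * must not exceed MLX5_ADAPTER_PAGE_SHIFT
	 */
	printf("slots=%u stride=%u max_reg_cmds=%d bitmask=%#lx\n",
	       1 << log_sz, 1 << log_stride, max_reg_cmds, bitmask);
	return 0;
}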
* [net-next V2 06/15] net/mlx5: Remove redundant cmdif revision check
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (4 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 05/15] net/mlx5: Re-organize mlx5_cmd struct Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 07/15] net/mlx5: split mlx5_cmd_init() to probe and reload routines Saeed Mahameed
` (8 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Shay Drory, Moshe Shemesh
From: Shay Drory <shayd@nvidia.com>
mlx5 checks the cmdif revision twice for no reason.
Remove the latter check.
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 15 +++------------
1 file changed, 3 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index f175af528fe0..9ced943ebd0d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -2191,16 +2191,15 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
int align = roundup_pow_of_two(size);
struct mlx5_cmd *cmd = &dev->cmd;
u32 cmd_h, cmd_l;
- u16 cmd_if_rev;
int err;
int i;
memset(cmd, 0, sizeof(*cmd));
- cmd_if_rev = cmdif_rev(dev);
- if (cmd_if_rev != CMD_IF_REV) {
+ cmd->vars.cmdif_rev = cmdif_rev(dev);
+ if (cmd->vars.cmdif_rev != CMD_IF_REV) {
mlx5_core_err(dev,
"Driver cmdif rev(%d) differs from firmware's(%d)\n",
- CMD_IF_REV, cmd_if_rev);
+ CMD_IF_REV, cmd->vars.cmdif_rev);
return -EINVAL;
}
@@ -2233,14 +2232,6 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
cmd->vars.max_reg_cmds = (1 << cmd->vars.log_sz) - 1;
cmd->vars.bitmask = (1UL << cmd->vars.max_reg_cmds) - 1;
- cmd->vars.cmdif_rev = ioread32be(&dev->iseg->cmdif_rev_fw_sub) >> 16;
- if (cmd->vars.cmdif_rev > CMD_IF_REV) {
- mlx5_core_err(dev, "driver does not support command interface version. driver %d, firmware %d\n",
- CMD_IF_REV, cmd->vars.cmdif_rev);
- err = -EOPNOTSUPP;
- goto err_free_page;
- }
-
spin_lock_init(&cmd->alloc_lock);
spin_lock_init(&cmd->token_lock);
for (i = 0; i < MLX5_CMD_OP_MAX; i++)
--
2.41.0
* [net-next V2 07/15] net/mlx5: split mlx5_cmd_init() to probe and reload routines
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (5 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 06/15] net/mlx5: Remove redundant cmdif revision check Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 08/15] net/mlx5: Allocate command stats with xarray Saeed Mahameed
` (7 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Shay Drory, Moshe Shemesh
From: Shay Drory <shayd@nvidia.com>
There is no need to destroy and re-allocate the cmd SW structs on every
reload; doing so is needlessly time consuming.
Hence, split mlx5_cmd_init() into probe and reload routines.
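A minimal userspace sketch (stub bodies only, not the driver code) of the lifecycle this split produces; the function names and their probe/reload placement follow the hunks below:

#include <stdio.h>

static int  mlx5_cmd_init(void)    { puts("probe:  alloc dma pool, cmd page, msg cache, workqueue"); return 0; }
static void mlx5_cmd_cleanup(void) { puts("remove: destroy workqueue, msg cache, cmd page, dma pool"); }
static int  mlx5_cmd_enable(void)  { puts("reload: read cmdif rev and queue size, init semaphores, write cmdq address"); return 0; }
static void mlx5_cmd_disable(void) { puts("reload: remove debugfs command files, flush workqueue"); }

int main(void)
{
	mlx5_cmd_init();              /* once, from mlx5_mdev_init() */
	for (int i = 0; i < 2; i++) { /* each function enable/disable cycle */
		mlx5_cmd_enable();
		mlx5_cmd_disable();
	}
	mlx5_cmd_cleanup();           /* once, from mlx5_mdev_uninit() */
	return 0;
}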
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 121 ++++++++++--------
.../net/ethernet/mellanox/mlx5/core/main.c | 15 ++-
.../ethernet/mellanox/mlx5/core/mlx5_core.h | 2 +
3 files changed, 82 insertions(+), 56 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index 9ced943ebd0d..45edd5a110c8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -1548,7 +1548,6 @@ static void clean_debug_files(struct mlx5_core_dev *dev)
if (!mlx5_debugfs_root)
return;
- mlx5_cmdif_debugfs_cleanup(dev);
debugfs_remove_recursive(dbg->dbg_root);
}
@@ -1563,8 +1562,6 @@ static void create_debugfs_files(struct mlx5_core_dev *dev)
debugfs_create_file("out_len", 0600, dbg->dbg_root, dev, &olfops);
debugfs_create_u8("status", 0600, dbg->dbg_root, &dbg->status);
debugfs_create_file("run", 0200, dbg->dbg_root, dev, &fops);
-
- mlx5_cmdif_debugfs_init(dev);
}
void mlx5_cmd_allowed_opcode(struct mlx5_core_dev *dev, u16 opcode)
@@ -2190,19 +2187,10 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
int size = sizeof(struct mlx5_cmd_prot_block);
int align = roundup_pow_of_two(size);
struct mlx5_cmd *cmd = &dev->cmd;
- u32 cmd_h, cmd_l;
+ u32 cmd_l;
int err;
int i;
- memset(cmd, 0, sizeof(*cmd));
- cmd->vars.cmdif_rev = cmdif_rev(dev);
- if (cmd->vars.cmdif_rev != CMD_IF_REV) {
- mlx5_core_err(dev,
- "Driver cmdif rev(%d) differs from firmware's(%d)\n",
- CMD_IF_REV, cmd->vars.cmdif_rev);
- return -EINVAL;
- }
-
cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);
if (!cmd->pool)
return -ENOMEM;
@@ -2211,43 +2199,93 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
if (err)
goto err_free_pool;
+ cmd_l = (u32)(cmd->dma);
+ if (cmd_l & 0xfff) {
+ mlx5_core_err(dev, "invalid command queue address\n");
+ err = -ENOMEM;
+ goto err_cmd_page;
+ }
+ cmd->checksum_disabled = 1;
+
+ spin_lock_init(&cmd->alloc_lock);
+ spin_lock_init(&cmd->token_lock);
+ for (i = 0; i < MLX5_CMD_OP_MAX; i++)
+ spin_lock_init(&cmd->stats[i].lock);
+
+ create_msg_cache(dev);
+
+ set_wqname(dev);
+ cmd->wq = create_singlethread_workqueue(cmd->wq_name);
+ if (!cmd->wq) {
+ mlx5_core_err(dev, "failed to create command workqueue\n");
+ err = -ENOMEM;
+ goto err_cache;
+ }
+
+ mlx5_cmdif_debugfs_init(dev);
+
+ return 0;
+
+err_cache:
+ destroy_msg_cache(dev);
+err_cmd_page:
+ free_cmd_page(dev, cmd);
+err_free_pool:
+ dma_pool_destroy(cmd->pool);
+ return err;
+}
+
+void mlx5_cmd_cleanup(struct mlx5_core_dev *dev)
+{
+ struct mlx5_cmd *cmd = &dev->cmd;
+
+ mlx5_cmdif_debugfs_cleanup(dev);
+ destroy_workqueue(cmd->wq);
+ destroy_msg_cache(dev);
+ free_cmd_page(dev, cmd);
+ dma_pool_destroy(cmd->pool);
+}
+
+int mlx5_cmd_enable(struct mlx5_core_dev *dev)
+{
+ struct mlx5_cmd *cmd = &dev->cmd;
+ u32 cmd_h, cmd_l;
+
+ memset(&cmd->vars, 0, sizeof(cmd->vars));
+ cmd->vars.cmdif_rev = cmdif_rev(dev);
+ if (cmd->vars.cmdif_rev != CMD_IF_REV) {
+ mlx5_core_err(dev,
+ "Driver cmdif rev(%d) differs from firmware's(%d)\n",
+ CMD_IF_REV, cmd->vars.cmdif_rev);
+ return -EINVAL;
+ }
+
cmd_l = ioread32be(&dev->iseg->cmdq_addr_l_sz) & 0xff;
cmd->vars.log_sz = cmd_l >> 4 & 0xf;
cmd->vars.log_stride = cmd_l & 0xf;
if (1 << cmd->vars.log_sz > MLX5_MAX_COMMANDS) {
mlx5_core_err(dev, "firmware reports too many outstanding commands %d\n",
1 << cmd->vars.log_sz);
- err = -EINVAL;
- goto err_free_page;
+ return -EINVAL;
}
if (cmd->vars.log_sz + cmd->vars.log_stride > MLX5_ADAPTER_PAGE_SHIFT) {
mlx5_core_err(dev, "command queue size overflow\n");
- err = -EINVAL;
- goto err_free_page;
+ return -EINVAL;
}
cmd->state = MLX5_CMDIF_STATE_DOWN;
- cmd->checksum_disabled = 1;
cmd->vars.max_reg_cmds = (1 << cmd->vars.log_sz) - 1;
cmd->vars.bitmask = (1UL << cmd->vars.max_reg_cmds) - 1;
- spin_lock_init(&cmd->alloc_lock);
- spin_lock_init(&cmd->token_lock);
- for (i = 0; i < MLX5_CMD_OP_MAX; i++)
- spin_lock_init(&cmd->stats[i].lock);
-
sema_init(&cmd->vars.sem, cmd->vars.max_reg_cmds);
sema_init(&cmd->vars.pages_sem, 1);
sema_init(&cmd->vars.throttle_sem, DIV_ROUND_UP(cmd->vars.max_reg_cmds, 2));
cmd_h = (u32)((u64)(cmd->dma) >> 32);
cmd_l = (u32)(cmd->dma);
- if (cmd_l & 0xfff) {
- mlx5_core_err(dev, "invalid command queue address\n");
- err = -ENOMEM;
- goto err_free_page;
- }
+ if (WARN_ON(cmd_l & 0xfff))
+ return -EINVAL;
iowrite32be(cmd_h, &dev->iseg->cmdq_addr_h);
iowrite32be(cmd_l, &dev->iseg->cmdq_addr_l_sz);
@@ -2260,40 +2298,17 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
cmd->mode = CMD_MODE_POLLING;
cmd->allowed_opcode = CMD_ALLOWED_OPCODE_ALL;
- create_msg_cache(dev);
-
- set_wqname(dev);
- cmd->wq = create_singlethread_workqueue(cmd->wq_name);
- if (!cmd->wq) {
- mlx5_core_err(dev, "failed to create command workqueue\n");
- err = -ENOMEM;
- goto err_cache;
- }
-
create_debugfs_files(dev);
return 0;
-
-err_cache:
- destroy_msg_cache(dev);
-
-err_free_page:
- free_cmd_page(dev, cmd);
-
-err_free_pool:
- dma_pool_destroy(cmd->pool);
- return err;
}
-void mlx5_cmd_cleanup(struct mlx5_core_dev *dev)
+void mlx5_cmd_disable(struct mlx5_core_dev *dev)
{
struct mlx5_cmd *cmd = &dev->cmd;
clean_debug_files(dev);
- destroy_workqueue(cmd->wq);
- destroy_msg_cache(dev);
- free_cmd_page(dev, cmd);
- dma_pool_destroy(cmd->pool);
+ flush_workqueue(cmd->wq);
}
void mlx5_cmd_set_state(struct mlx5_core_dev *dev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index a79c9b8286a7..740d4476c413 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1142,7 +1142,7 @@ static int mlx5_function_enable(struct mlx5_core_dev *dev, bool boot, u64 timeou
return err;
}
- err = mlx5_cmd_init(dev);
+ err = mlx5_cmd_enable(dev);
if (err) {
mlx5_core_err(dev, "Failed initializing command interface, aborting\n");
return err;
@@ -1196,7 +1196,7 @@ static int mlx5_function_enable(struct mlx5_core_dev *dev, bool boot, u64 timeou
mlx5_stop_health_poll(dev, boot);
err_cmd_cleanup:
mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
- mlx5_cmd_cleanup(dev);
+ mlx5_cmd_disable(dev);
return err;
}
@@ -1207,7 +1207,7 @@ static void mlx5_function_disable(struct mlx5_core_dev *dev, bool boot)
mlx5_core_disable_hca(dev, 0);
mlx5_stop_health_poll(dev, boot);
mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
- mlx5_cmd_cleanup(dev);
+ mlx5_cmd_disable(dev);
}
static int mlx5_function_open(struct mlx5_core_dev *dev)
@@ -1796,6 +1796,12 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
debugfs_create_file("vhca_id", 0400, priv->dbg.dbg_root, dev, &vhca_id_fops);
INIT_LIST_HEAD(&priv->traps);
+ err = mlx5_cmd_init(dev);
+ if (err) {
+ mlx5_core_err(dev, "Failed initializing cmdif SW structs, aborting\n");
+ goto err_cmd_init;
+ }
+
err = mlx5_tout_init(dev);
if (err) {
mlx5_core_err(dev, "Failed initializing timeouts, aborting\n");
@@ -1841,6 +1847,8 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
err_health_init:
mlx5_tout_cleanup(dev);
err_timeout_init:
+ mlx5_cmd_cleanup(dev);
+err_cmd_init:
debugfs_remove(dev->priv.dbg.dbg_root);
mutex_destroy(&priv->pgdir_mutex);
mutex_destroy(&priv->alloc_mutex);
@@ -1863,6 +1871,7 @@ void mlx5_mdev_uninit(struct mlx5_core_dev *dev)
mlx5_pagealloc_cleanup(dev);
mlx5_health_cleanup(dev);
mlx5_tout_cleanup(dev);
+ mlx5_cmd_cleanup(dev);
debugfs_remove_recursive(dev->priv.dbg.dbg_root);
mutex_destroy(&priv->pgdir_mutex);
mutex_destroy(&priv->alloc_mutex);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index c4be257c043d..43b0144121ca 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -178,6 +178,8 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev);
int mlx5_query_board_id(struct mlx5_core_dev *dev);
int mlx5_cmd_init(struct mlx5_core_dev *dev);
void mlx5_cmd_cleanup(struct mlx5_core_dev *dev);
+int mlx5_cmd_enable(struct mlx5_core_dev *dev);
+void mlx5_cmd_disable(struct mlx5_core_dev *dev);
void mlx5_cmd_set_state(struct mlx5_core_dev *dev,
enum mlx5_cmdif_state cmdif_state);
int mlx5_cmd_init_hca(struct mlx5_core_dev *dev, uint32_t *sw_owner_id);
--
2.41.0
* [net-next V2 08/15] net/mlx5: Allocate command stats with xarray
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (6 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 07/15] net/mlx5: split mlx5_cmd_init() to probe and reload routines Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 09/15] net/mlx5e: Remove duplicate code for user flow Saeed Mahameed
` (6 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Shay Drory, Moshe Shemesh
From: Shay Drory <shayd@nvidia.com>
The command stats array has more than 2K entries, which amounts to
~180KB. That is far more than actually needed, as only ~190 entries
are ever used.
Therefore, replace the array with an xarray.
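As a rough sanity check on those numbers, a small standalone calculation; the per-entry size is an assumption (roughly one stats entry with its counters, spinlock and debugfs pointers), and only the ratio matters:

#include <stdio.h>

int main(void)
{
	const unsigned int nr_opcodes = 2048; /* order of magnitude of MLX5_CMD_OP_MAX */
	const unsigned int used = 190;        /* opcodes with a known command string */
	const unsigned int entry_sz = 90;     /* assumed bytes per stats entry */

	printf("static array : ~%u KB\n", nr_opcodes * entry_sz / 1024); /* ~180 KB */
	printf("xarray, used : ~%u KB plus xarray node overhead\n", used * entry_sz / 1024);
	return 0;
}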
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 15 +++++-----
.../net/ethernet/mellanox/mlx5/core/debugfs.c | 30 ++++++++++++++++++-
include/linux/mlx5/driver.h | 2 +-
3 files changed, 37 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index 45edd5a110c8..afb348579577 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -1225,8 +1225,8 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
goto out_free;
ds = ent->ts2 - ent->ts1;
- if (ent->op < MLX5_CMD_OP_MAX) {
- stats = &cmd->stats[ent->op];
+ stats = xa_load(&cmd->stats, ent->op);
+ if (stats) {
spin_lock_irq(&stats->lock);
stats->sum += ds;
++stats->n;
@@ -1695,8 +1695,8 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force
if (ent->callback) {
ds = ent->ts2 - ent->ts1;
- if (ent->op < MLX5_CMD_OP_MAX) {
- stats = &cmd->stats[ent->op];
+ stats = xa_load(&cmd->stats, ent->op);
+ if (stats) {
spin_lock_irqsave(&stats->lock, flags);
stats->sum += ds;
++stats->n;
@@ -1923,7 +1923,9 @@ static void cmd_status_log(struct mlx5_core_dev *dev, u16 opcode, u8 status,
if (!err || !(strcmp(namep, "unknown command opcode")))
return;
- stats = &dev->cmd.stats[opcode];
+ stats = xa_load(&dev->cmd.stats, opcode);
+ if (!stats)
+ return;
spin_lock_irq(&stats->lock);
stats->failed++;
if (err < 0)
@@ -2189,7 +2191,6 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
struct mlx5_cmd *cmd = &dev->cmd;
u32 cmd_l;
int err;
- int i;
cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);
if (!cmd->pool)
@@ -2209,8 +2210,6 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
spin_lock_init(&cmd->alloc_lock);
spin_lock_init(&cmd->token_lock);
- for (i = 0; i < MLX5_CMD_OP_MAX; i++)
- spin_lock_init(&cmd->stats[i].lock);
create_msg_cache(dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
index 9a826fb3ca38..09652dc89115 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
@@ -188,6 +188,24 @@ static const struct file_operations slots_fops = {
.read = slots_read,
};
+static struct mlx5_cmd_stats *
+mlx5_cmdif_alloc_stats(struct xarray *stats_xa, int opcode)
+{
+ struct mlx5_cmd_stats *stats = kzalloc(sizeof(*stats), GFP_KERNEL);
+ int err;
+
+ if (!stats)
+ return NULL;
+
+ err = xa_insert(stats_xa, opcode, stats, GFP_KERNEL);
+ if (err) {
+ kfree(stats);
+ return NULL;
+ }
+ spin_lock_init(&stats->lock);
+ return stats;
+}
+
void mlx5_cmdif_debugfs_init(struct mlx5_core_dev *dev)
{
struct mlx5_cmd_stats *stats;
@@ -200,10 +218,14 @@ void mlx5_cmdif_debugfs_init(struct mlx5_core_dev *dev)
debugfs_create_file("slots_inuse", 0400, *cmd, &dev->cmd, &slots_fops);
+ xa_init(&dev->cmd.stats);
+
for (i = 0; i < MLX5_CMD_OP_MAX; i++) {
- stats = &dev->cmd.stats[i];
namep = mlx5_command_str(i);
if (strcmp(namep, "unknown command opcode")) {
+ stats = mlx5_cmdif_alloc_stats(&dev->cmd.stats, i);
+ if (!stats)
+ continue;
stats->root = debugfs_create_dir(namep, *cmd);
debugfs_create_file("average", 0400, stats->root, stats,
@@ -224,7 +246,13 @@ void mlx5_cmdif_debugfs_init(struct mlx5_core_dev *dev)
void mlx5_cmdif_debugfs_cleanup(struct mlx5_core_dev *dev)
{
+ struct mlx5_cmd_stats *stats;
+ unsigned long i;
+
debugfs_remove_recursive(dev->priv.dbg.cmdif_debugfs);
+ xa_for_each(&dev->cmd.stats, i, stats)
+ kfree(stats);
+ xa_destroy(&dev->cmd.stats);
}
void mlx5_cq_debugfs_init(struct mlx5_core_dev *dev)
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 39c5f4087c39..f21703fb75fd 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -322,7 +322,7 @@ struct mlx5_cmd {
struct mlx5_cmd_debug dbg;
struct cmd_msg_cache cache[MLX5_NUM_COMMAND_CACHES];
int checksum_disabled;
- struct mlx5_cmd_stats stats[MLX5_CMD_OP_MAX];
+ struct xarray stats;
};
struct mlx5_cmd_mailbox {
--
2.41.0
* [net-next V2 09/15] net/mlx5e: Remove duplicate code for user flow
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (7 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 08/15] net/mlx5: Allocate command stats with xarray Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 10/15] net/mlx5e: Make flow classification filters static Saeed Mahameed
` (5 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Parav Pandit, Roi Dayan
From: Parav Pandit <parav@nvidia.com>
Flow table and priority detection are the same for IP user flows and other L4
flows. Hence, use the same code for all these flow types.
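The change relies on plain C case fall-through so that all six flow types share one body; a minimal standalone sketch of the idiom (hypothetical values, not the driver's actual table/priority math):

#include <stdio.h>

enum flow { TCP_V4, UDP_V4, TCP_V6, UDP_V6, IP_USER, IPV6_USER };

static int l3_l4_slot(enum flow type, int num_tuples, int max_tuples)
{
	switch (type) {
	case TCP_V4:
	case UDP_V4:
	case TCP_V6:
	case UDP_V6:
	case IP_USER:   /* previously carried its own copy of the body below */
	case IPV6_USER:
		return max_tuples - num_tuples; /* one shared computation for all six */
	default:
		return -1;
	}
}

int main(void)
{
	printf("%d\n", l3_l4_slot(IP_USER, 3, 7)); /* 4, same as for TCP_V4 etc. */
	return 0;
}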
Signed-off-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
index aac32e505c14..08eb186615c8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
@@ -96,10 +96,6 @@ static struct mlx5e_ethtool_table *get_flow_table(struct mlx5e_priv *priv,
case UDP_V4_FLOW:
case TCP_V6_FLOW:
case UDP_V6_FLOW:
- max_tuples = ETHTOOL_NUM_L3_L4_FTS;
- prio = MLX5E_ETHTOOL_L3_L4_PRIO + (max_tuples - num_tuples);
- eth_ft = ðtool->l3_l4_ft[prio];
- break;
case IP_USER_FLOW:
case IPV6_USER_FLOW:
max_tuples = ETHTOOL_NUM_L3_L4_FTS;
--
2.41.0
* [net-next V2 10/15] net/mlx5e: Make flow classification filters static
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (8 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 09/15] net/mlx5e: Remove duplicate code for user flow Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 11/15] net/mlx5: Don't check vport->enabled in port ops Saeed Mahameed
` (4 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Parav Pandit, Roi Dayan
From: Parav Pandit <parav@nvidia.com>
The get and set flow classification filter functions are used in a single file only.
Hence, make them static.
Signed-off-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en.h | 3 ---
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c | 6 +++---
2 files changed, 3 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index b1807bfb815f..0f8f70b91485 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -1167,9 +1167,6 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv,
int mlx5e_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, u8 *hfunc);
int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir, const u8 *key,
const u8 hfunc);
-int mlx5e_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info,
- u32 *rule_locs);
-int mlx5e_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd);
u32 mlx5e_ethtool_get_rxfh_key_size(struct mlx5e_priv *priv);
u32 mlx5e_ethtool_get_rxfh_indir_size(struct mlx5e_priv *priv);
int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
index 27861b68ced5..04195a673a6b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -2163,8 +2163,8 @@ static u32 mlx5e_get_priv_flags(struct net_device *netdev)
return priv->channels.params.pflags;
}
-int mlx5e_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info,
- u32 *rule_locs)
+static int mlx5e_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info,
+ u32 *rule_locs)
{
struct mlx5e_priv *priv = netdev_priv(dev);
@@ -2181,7 +2181,7 @@ int mlx5e_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info,
return mlx5e_ethtool_get_rxnfc(priv, info, rule_locs);
}
-int mlx5e_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
+static int mlx5e_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
{
struct mlx5e_priv *priv = netdev_priv(dev);
--
2.41.0
* [net-next V2 11/15] net/mlx5: Don't check vport->enabled in port ops
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (9 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 10/15] net/mlx5e: Make flow classification filters static Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 12/15] net/mlx5: Remove pointless devlink_rate checks Saeed Mahameed
` (3 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Jiri Pirko, Shay Drory
From: Jiri Pirko <jiri@nvidia.com>
vport->enabled is always set for a vport for which a devlink port is
registered; therefore, the checks in the ops are pointless.
Remove them.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
.../mellanox/mlx5/core/eswitch_offloads.c | 28 ++++---------------
1 file changed, 6 insertions(+), 22 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index b4b8cb788573..c192000e3614 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -4125,7 +4125,6 @@ int mlx5_devlink_port_fn_migratable_get(struct devlink_port *port, bool *is_enab
{
struct mlx5_eswitch *esw;
struct mlx5_vport *vport;
- int err = -EOPNOTSUPP;
esw = mlx5_devlink_eswitch_get(port->devlink);
if (IS_ERR(esw))
@@ -4133,7 +4132,7 @@ int mlx5_devlink_port_fn_migratable_get(struct devlink_port *port, bool *is_enab
if (!MLX5_CAP_GEN(esw->dev, migration)) {
NL_SET_ERR_MSG_MOD(extack, "Device doesn't support migration");
- return err;
+ return -EOPNOTSUPP;
}
vport = mlx5_devlink_port_fn_get_vport(port, esw);
@@ -4143,12 +4142,9 @@ int mlx5_devlink_port_fn_migratable_get(struct devlink_port *port, bool *is_enab
}
mutex_lock(&esw->state_lock);
- if (vport->enabled) {
- *is_enabled = vport->info.mig_enabled;
- err = 0;
- }
+ *is_enabled = vport->info.mig_enabled;
mutex_unlock(&esw->state_lock);
- return err;
+ return 0;
}
int mlx5_devlink_port_fn_migratable_set(struct devlink_port *port, bool enable,
@@ -4177,10 +4173,6 @@ int mlx5_devlink_port_fn_migratable_set(struct devlink_port *port, bool enable,
}
mutex_lock(&esw->state_lock);
- if (!vport->enabled) {
- NL_SET_ERR_MSG_MOD(extack, "Eswitch vport is disabled");
- goto out;
- }
if (vport->info.mig_enabled == enable) {
err = 0;
@@ -4224,7 +4216,6 @@ int mlx5_devlink_port_fn_roce_get(struct devlink_port *port, bool *is_enabled,
{
struct mlx5_eswitch *esw;
struct mlx5_vport *vport;
- int err = -EOPNOTSUPP;
esw = mlx5_devlink_eswitch_get(port->devlink);
if (IS_ERR(esw))
@@ -4237,12 +4228,9 @@ int mlx5_devlink_port_fn_roce_get(struct devlink_port *port, bool *is_enabled,
}
mutex_lock(&esw->state_lock);
- if (vport->enabled) {
- *is_enabled = vport->info.roce_enabled;
- err = 0;
- }
+ *is_enabled = vport->info.roce_enabled;
mutex_unlock(&esw->state_lock);
- return err;
+ return 0;
}
int mlx5_devlink_port_fn_roce_set(struct devlink_port *port, bool enable,
@@ -4251,10 +4239,10 @@ int mlx5_devlink_port_fn_roce_set(struct devlink_port *port, bool enable,
int query_out_sz = MLX5_ST_SZ_BYTES(query_hca_cap_out);
struct mlx5_eswitch *esw;
struct mlx5_vport *vport;
- int err = -EOPNOTSUPP;
void *query_ctx;
void *hca_caps;
u16 vport_num;
+ int err;
esw = mlx5_devlink_eswitch_get(port->devlink);
if (IS_ERR(esw))
@@ -4268,10 +4256,6 @@ int mlx5_devlink_port_fn_roce_set(struct devlink_port *port, bool enable,
vport_num = vport->vport;
mutex_lock(&esw->state_lock);
- if (!vport->enabled) {
- NL_SET_ERR_MSG_MOD(extack, "Eswitch vport is disabled");
- goto out;
- }
if (vport->info.roce_enabled == enable) {
err = 0;
--
2.41.0
* [net-next V2 12/15] net/mlx5: Remove pointless devlink_rate checks
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (10 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 11/15] net/mlx5: Don't check vport->enabled in port ops Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 13/15] net/mlx5: Make mlx5_esw_offloads_rep_load/unload() static Saeed Mahameed
` (2 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Jiri Pirko, Shay Drory
From: Jiri Pirko <jiri@nvidia.com>
It is guaranteed that the devlink rate leaf is created during the init paths,
so there is no need to check for it during cleanup. Remove the checks.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
.../ethernet/mellanox/mlx5/core/esw/devlink_port.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
index af779c700278..433541ac36a7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/devlink_port.c
@@ -132,10 +132,8 @@ void mlx5_esw_offloads_devlink_port_unregister(struct mlx5_eswitch *esw, u16 vpo
if (IS_ERR(vport))
return;
- if (vport->dl_port->devlink_rate) {
- mlx5_esw_qos_vport_update_group(esw, vport, NULL, NULL);
- devl_rate_leaf_destroy(vport->dl_port);
- }
+ mlx5_esw_qos_vport_update_group(esw, vport, NULL, NULL);
+ devl_rate_leaf_destroy(vport->dl_port);
devl_port_unregister(vport->dl_port);
mlx5_esw_dl_port_free(vport->dl_port);
@@ -211,10 +209,8 @@ void mlx5_esw_devlink_sf_port_unregister(struct mlx5_eswitch *esw, u16 vport_num
if (IS_ERR(vport))
return;
- if (vport->dl_port->devlink_rate) {
- mlx5_esw_qos_vport_update_group(esw, vport, NULL, NULL);
- devl_rate_leaf_destroy(vport->dl_port);
- }
+ mlx5_esw_qos_vport_update_group(esw, vport, NULL, NULL);
+ devl_rate_leaf_destroy(vport->dl_port);
devl_port_unregister(vport->dl_port);
vport->dl_port = NULL;
--
2.41.0
* [net-next V2 13/15] net/mlx5: Make mlx5_esw_offloads_rep_load/unload() static
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (11 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 12/15] net/mlx5: Remove pointless devlink_rate checks Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 14/15] net/mlx5: Make mlx5_eswitch_load/unload_vport() static Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 15/15] net/mlx5: Give esw_offloads_load/unload_rep() "mlx5_" prefix Saeed Mahameed
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Jiri Pirko, Shay Drory
From: Jiri Pirko <jiri@nvidia.com>
The mlx5_esw_offloads_rep_load/unload() functions are not used
outside of eswitch_offloads.c. Make them static.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 3 ---
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 4 ++--
2 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 9b5a1651b877..6f360a15aa62 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -730,9 +730,6 @@ void mlx5_esw_set_spec_source_port(struct mlx5_eswitch *esw,
int esw_offloads_load_rep(struct mlx5_eswitch *esw, u16 vport_num);
void esw_offloads_unload_rep(struct mlx5_eswitch *esw, u16 vport_num);
-int mlx5_esw_offloads_rep_load(struct mlx5_eswitch *esw, u16 vport_num);
-void mlx5_esw_offloads_rep_unload(struct mlx5_eswitch *esw, u16 vport_num);
-
int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
enum mlx5_eswitch_vport_event enabled_events);
void mlx5_eswitch_unload_vport(struct mlx5_eswitch *esw, u16 vport_num);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index c192000e3614..b4df57962b2b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2391,7 +2391,7 @@ static void __unload_reps_all_vport(struct mlx5_eswitch *esw, u8 rep_type)
__esw_offloads_unload_rep(esw, rep, rep_type);
}
-int mlx5_esw_offloads_rep_load(struct mlx5_eswitch *esw, u16 vport_num)
+static int mlx5_esw_offloads_rep_load(struct mlx5_eswitch *esw, u16 vport_num)
{
struct mlx5_eswitch_rep *rep;
int rep_type;
@@ -2415,7 +2415,7 @@ int mlx5_esw_offloads_rep_load(struct mlx5_eswitch *esw, u16 vport_num)
return err;
}
-void mlx5_esw_offloads_rep_unload(struct mlx5_eswitch *esw, u16 vport_num)
+static void mlx5_esw_offloads_rep_unload(struct mlx5_eswitch *esw, u16 vport_num)
{
struct mlx5_eswitch_rep *rep;
int rep_type;
--
2.41.0
* [net-next V2 14/15] net/mlx5: Make mlx5_eswitch_load/unload_vport() static
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (12 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 13/15] net/mlx5: Make mlx5_esw_offloads_rep_load/unload() static Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
2023-07-27 18:39 ` [net-next V2 15/15] net/mlx5: Give esw_offloads_load/unload_rep() "mlx5_" prefix Saeed Mahameed
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Jiri Pirko, Shay Drory
From: Jiri Pirko <jiri@nvidia.com>
The mlx5_eswitch_load/unload_vport() functions are not used
outside of eswitch.c. Make them static.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 7 +++----
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 4 ----
2 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 243c455f1029..4b13269f49e0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1068,9 +1068,8 @@ static void mlx5_eswitch_clear_ec_vf_vports_info(struct mlx5_eswitch *esw)
}
}
-/* Public E-Switch API */
-int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
- enum mlx5_eswitch_vport_event enabled_events)
+static int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
+ enum mlx5_eswitch_vport_event enabled_events)
{
int err;
@@ -1089,7 +1088,7 @@ int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
return err;
}
-void mlx5_eswitch_unload_vport(struct mlx5_eswitch *esw, u16 vport_num)
+static void mlx5_eswitch_unload_vport(struct mlx5_eswitch *esw, u16 vport_num)
{
esw_offloads_unload_rep(esw, vport_num);
mlx5_esw_vport_disable(esw, vport_num);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 6f360a15aa62..837bdd03557b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -730,10 +730,6 @@ void mlx5_esw_set_spec_source_port(struct mlx5_eswitch *esw,
int esw_offloads_load_rep(struct mlx5_eswitch *esw, u16 vport_num);
void esw_offloads_unload_rep(struct mlx5_eswitch *esw, u16 vport_num);
-int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
- enum mlx5_eswitch_vport_event enabled_events);
-void mlx5_eswitch_unload_vport(struct mlx5_eswitch *esw, u16 vport_num);
-
int mlx5_eswitch_load_vf_vports(struct mlx5_eswitch *esw, u16 num_vfs,
enum mlx5_eswitch_vport_event enabled_events);
void mlx5_eswitch_unload_vf_vports(struct mlx5_eswitch *esw, u16 num_vfs);
--
2.41.0
* [net-next V2 15/15] net/mlx5: Give esw_offloads_load/unload_rep() "mlx5_" prefix
2023-07-27 18:38 [pull request][net-next V2 00/15] mlx5 updates 2023-07-24 Saeed Mahameed
` (13 preceding siblings ...)
2023-07-27 18:39 ` [net-next V2 14/15] net/mlx5: Make mlx5_eswitch_load/unload_vport() static Saeed Mahameed
@ 2023-07-27 18:39 ` Saeed Mahameed
14 siblings, 0 replies; 17+ messages in thread
From: Saeed Mahameed @ 2023-07-27 18:39 UTC (permalink / raw)
To: David S. Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev, Tariq Toukan, Jiri Pirko, Shay Drory
From: Jiri Pirko <jiri@nvidia.com>
As esw_offloads_load/unload_rep() are used outside of eswitch_offloads.c,
it is nicer for them to have the "mlx5_" prefix. Add it.
Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c | 4 ++--
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h | 4 ++--
.../net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 10 +++++-----
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 4b13269f49e0..4a7a13169a90 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -1077,7 +1077,7 @@ static int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
if (err)
return err;
- err = esw_offloads_load_rep(esw, vport_num);
+ err = mlx5_esw_offloads_load_rep(esw, vport_num);
if (err)
goto err_rep;
@@ -1090,7 +1090,7 @@ static int mlx5_eswitch_load_vport(struct mlx5_eswitch *esw, u16 vport_num,
static void mlx5_eswitch_unload_vport(struct mlx5_eswitch *esw, u16 vport_num)
{
- esw_offloads_unload_rep(esw, vport_num);
+ mlx5_esw_offloads_unload_rep(esw, vport_num);
mlx5_esw_vport_disable(esw, vport_num);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 837bdd03557b..2944c9207487 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -727,8 +727,8 @@ void mlx5_esw_set_spec_source_port(struct mlx5_eswitch *esw,
u16 vport,
struct mlx5_flow_spec *spec);
-int esw_offloads_load_rep(struct mlx5_eswitch *esw, u16 vport_num);
-void esw_offloads_unload_rep(struct mlx5_eswitch *esw, u16 vport_num);
+int mlx5_esw_offloads_load_rep(struct mlx5_eswitch *esw, u16 vport_num);
+void mlx5_esw_offloads_unload_rep(struct mlx5_eswitch *esw, u16 vport_num);
int mlx5_eswitch_load_vf_vports(struct mlx5_eswitch *esw, u16 num_vfs,
enum mlx5_eswitch_vport_event enabled_events);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index b4df57962b2b..f83667b4339e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -2425,7 +2425,7 @@ static void mlx5_esw_offloads_rep_unload(struct mlx5_eswitch *esw, u16 vport_num
__esw_offloads_unload_rep(esw, rep, rep_type);
}
-int esw_offloads_load_rep(struct mlx5_eswitch *esw, u16 vport_num)
+int mlx5_esw_offloads_load_rep(struct mlx5_eswitch *esw, u16 vport_num)
{
int err;
@@ -2449,7 +2449,7 @@ int esw_offloads_load_rep(struct mlx5_eswitch *esw, u16 vport_num)
return err;
}
-void esw_offloads_unload_rep(struct mlx5_eswitch *esw, u16 vport_num)
+void mlx5_esw_offloads_unload_rep(struct mlx5_eswitch *esw, u16 vport_num)
{
if (esw->mode != MLX5_ESWITCH_OFFLOADS)
return;
@@ -3361,7 +3361,7 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
vport->info.link_state = MLX5_VPORT_ADMIN_STATE_DOWN;
/* Uplink vport rep must load first. */
- err = esw_offloads_load_rep(esw, MLX5_VPORT_UPLINK);
+ err = mlx5_esw_offloads_load_rep(esw, MLX5_VPORT_UPLINK);
if (err)
goto err_uplink;
@@ -3372,7 +3372,7 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
return 0;
err_vports:
- esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK);
+ mlx5_esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK);
err_uplink:
esw_offloads_steering_cleanup(esw);
err_steering_init:
@@ -3410,7 +3410,7 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,
void esw_offloads_disable(struct mlx5_eswitch *esw)
{
mlx5_eswitch_disable_pf_vf_vports(esw);
- esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK);
+ mlx5_esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK);
esw_set_passing_vport_metadata(esw, false);
esw_offloads_steering_cleanup(esw);
mapping_destroy(esw->offloads.reg_c0_obj_pool);
--
2.41.0