From: Jacob Keller <jacob.e.keller@intel.com>
To: Saeed Mahameed <saeed@kernel.org>,
"David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Eric Dumazet <edumazet@google.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>, <netdev@vger.kernel.org>,
Tariq Toukan <tariqt@nvidia.com>, Wei Zhang <weizhang@nvidia.com>,
Moshe Shemesh <moshe@nvidia.com>, Shay Drory <shayd@nvidia.com>
Subject: Re: [net-next V2 01/15] net/mlx5: Parallelize vhca event handling
Date: Thu, 12 Oct 2023 14:13:52 -0700
Message-ID: <16b9f9c1-2df9-4242-b610-d7c20ac5a7e5@intel.com>
In-Reply-To: <20231012192750.124945-2-saeed@kernel.org>
On 10/12/2023 12:27 PM, Saeed Mahameed wrote:
> From: Wei Zhang <weizhang@nvidia.com>
>
> At present, the mlx5 driver has a general
> purpose event handler which handles not only
> vhca events but also many other events. This
> creates a serious bottleneck, because the
> event handler is implemented as a single
> threaded workqueue and all events are forced
> to be handled serially, even when an
> application tries to create multiple SFs
> simultaneously.
>
> Introduce a dedicated vhca event handler
> which allows SFs to be created in parallel.
>
> Signed-off-by: Wei Zhang <weizhang@nvidia.com>
> Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
> Reviewed-by: Shay Drory <shayd@nvidia.com>
> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
> ---
> .../net/ethernet/mellanox/mlx5/core/events.c | 5 --
> .../ethernet/mellanox/mlx5/core/mlx5_core.h | 3 +-
> .../mellanox/mlx5/core/sf/vhca_event.c | 57 ++++++++++++++++++-
> include/linux/mlx5/driver.h | 1 +
> 4 files changed, 57 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c
> index 3ec892d51f57..d91ea53eb394 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/events.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c
> @@ -441,8 +441,3 @@ int mlx5_blocking_notifier_call_chain(struct mlx5_core_dev *dev, unsigned int ev
>
> return blocking_notifier_call_chain(&events->sw_nh, event, data);
> }
> -
> -void mlx5_events_work_enqueue(struct mlx5_core_dev *dev, struct work_struct *work)
> -{
> - queue_work(dev->priv.events->wq, work);
> -}
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
> index 124352459c23..94f809f52f27 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
> @@ -143,6 +143,8 @@ enum mlx5_semaphore_space_address {
>
> #define MLX5_DEFAULT_PROF 2
> #define MLX5_SF_PROF 3
> +#define MLX5_NUM_FW_CMD_THREADS 8
> +#define MLX5_DEV_MAX_WQS MLX5_NUM_FW_CMD_THREADS
>
> static inline int mlx5_flexible_inlen(struct mlx5_core_dev *dev, size_t fixed,
> size_t item_size, size_t num_items,
> @@ -331,7 +333,6 @@ int mlx5_vport_set_other_func_cap(struct mlx5_core_dev *dev, const void *hca_cap
> #define mlx5_vport_get_other_func_general_cap(dev, vport, out) \
> mlx5_vport_get_other_func_cap(dev, vport, out, MLX5_CAP_GENERAL)
>
> -void mlx5_events_work_enqueue(struct mlx5_core_dev *dev, struct work_struct *work);
> static inline u32 mlx5_sriov_get_vf_total_msix(struct pci_dev *pdev)
> {
> struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c
> index d908fba968f0..c6fd729de8b2 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c
> @@ -21,6 +21,15 @@ struct mlx5_vhca_event_work {
> struct mlx5_vhca_state_event event;
> };
>
> +struct mlx5_vhca_event_handler {
> + struct workqueue_struct *wq;
> +};
> +
> +struct mlx5_vhca_events {
> + struct mlx5_core_dev *dev;
> + struct mlx5_vhca_event_handler handler[MLX5_DEV_MAX_WQS];
> +};
> +
> int mlx5_cmd_query_vhca_state(struct mlx5_core_dev *dev, u16 function_id, u32 *out, u32 outlen)
> {
> u32 in[MLX5_ST_SZ_DW(query_vhca_state_in)] = {};
> @@ -99,6 +108,12 @@ static void mlx5_vhca_state_work_handler(struct work_struct *_work)
> kfree(work);
> }
>
> +static void
> +mlx5_vhca_events_work_enqueue(struct mlx5_core_dev *dev, int idx, struct work_struct *work)
> +{
> + queue_work(dev->priv.vhca_events->handler[idx].wq, work);
> +}
I guess you need separate single-threaded workqueues because the
sequence of vhca work items for a given index needs to be serialized,
while work for different vhca indices does not need to serialize
against each other? Makes sense.
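
For the archives, here is a minimal sketch of the dispatch pattern I
understand this to implement. The index helper, the function_id field
access, and the assumption that each handler[i].wq is allocated with
alloc_ordered_workqueue() (so each queue is single-threaded) are my
guesses, not taken from this patch:

static int mlx5_vhca_event_work_index(u16 function_id)
{
	/* Assumed steering: any stable mapping works, as long as a
	 * given function_id always selects the same queue.
	 */
	return function_id % MLX5_DEV_MAX_WQS;
}

static void mlx5_vhca_event_dispatch(struct mlx5_core_dev *dev,
				     struct mlx5_vhca_event_work *work)
{
	/* Same function_id -> same single-threaded queue, so events
	 * for one SF stay ordered, while different SFs proceed in
	 * parallel across the MLX5_DEV_MAX_WQS queues.
	 */
	int idx = mlx5_vhca_event_work_index(work->event.function_id);

	mlx5_vhca_events_work_enqueue(dev, idx, &work->work);
}

If that is the intent, the design looks right to me.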
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>