From: Leon Romanovsky <leon@kernel.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Jianbo Liu <jianbol@nvidia.com>,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	netdev@vger.kernel.org, Paolo Abeni <pabeni@redhat.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Tariq Toukan <tariqt@nvidia.com>
Subject: [PATCH rdma-next 0/3] Delay mlx5_ib internal resources allocations
Date: Mon,  3 Jun 2024 13:26:36 +0300
Message-ID: <cover.1717409369.git.leon@kernel.org>

From: Leon Romanovsky <leonro@nvidia.com>

Internal mlx5_ib resources are currently created at mlx5_ib module load
time. This behavior is suboptimal because it consumes resources that are
not needed when SFs (sub-functions) are created. This patch series delays
the creation of the mlx5_ib internal resources until they are actually
used.
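
The general shape of such a change is on-demand creation, typically
serialized by a lock, so that the first caller which needs the object
pays the allocation cost and later callers find it already present.
Below is a minimal kernel-style sketch of that pattern; the names
(struct lazy_res, lazy_res_get(), create_hw_obj()) are illustrative
assumptions only and are not the actual mlx5_ib symbols or code:

	/*
	 * Illustrative sketch only: create an object lazily on first use
	 * instead of at module load.  Hypothetical names, not mlx5_ib code.
	 */
	#include <linux/mutex.h>
	#include <linux/types.h>

	struct lazy_res {
		struct mutex lock;	/* serializes first-time creation */
		void *hw_obj;		/* stand-in for the expensive HW/FW object */
	};

	static void lazy_res_init(struct lazy_res *r)
	{
		mutex_init(&r->lock);
		r->hw_obj = NULL;	/* nothing allocated at load time */
	}

	static int create_hw_obj(struct lazy_res *r)
	{
		/* In a real driver this would issue the costly FW commands. */
		r->hw_obj = (void *)0x1;
		return 0;
	}

	/* Called from the first consumer, e.g. first reg_mr or create_qp. */
	static int lazy_res_get(struct lazy_res *r)
	{
		int err = 0;

		mutex_lock(&r->lock);
		if (!r->hw_obj)
			err = create_hw_obj(r);
		mutex_unlock(&r->lock);

		return err;
	}

The benefit, as described above, is that devices which never reach the
trigger path (for example SFs that never register an MR or create a
QP/SRQ) never allocate these objects at all.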

Thanks

Jianbo Liu (3):
  net/mlx5: Reimplement write combining test
  IB/mlx5: Create UMR QP just before first reg_mr occurs
  IB/mlx5: Allocate resources just before first QP/SRQ is created

 drivers/infiniband/hw/mlx5/main.c             | 171 ++++---
 drivers/infiniband/hw/mlx5/mem.c              | 198 --------
 drivers/infiniband/hw/mlx5/mlx5_ib.h          |   9 +-
 drivers/infiniband/hw/mlx5/mr.c               |   9 +
 drivers/infiniband/hw/mlx5/qp.c               |  20 +-
 drivers/infiniband/hw/mlx5/srq.c              |   4 +
 drivers/infiniband/hw/mlx5/umr.c              |  55 ++-
 drivers/infiniband/hw/mlx5/umr.h              |   3 +
 .../net/ethernet/mellanox/mlx5/core/Makefile  |   2 +-
 .../net/ethernet/mellanox/mlx5/core/main.c    |   2 +
 drivers/net/ethernet/mellanox/mlx5/core/wc.c  | 434 ++++++++++++++++++
 include/linux/mlx5/driver.h                   |  11 +
 12 files changed, 627 insertions(+), 291 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/wc.c

-- 
2.45.1


Thread overview: 10+ messages
2024-06-03 10:26 Leon Romanovsky [this message]
2024-06-03 10:26 ` [PATCH mlx5-next 1/3] net/mlx5: Reimplement write combining test Leon Romanovsky
2024-06-03 10:26 ` [PATCH rdma-next 2/3] IB/mlx5: Create UMR QP just before first reg_mr occurs Leon Romanovsky
2024-06-07 17:30   ` Jason Gunthorpe
2024-06-13 18:06     ` Leon Romanovsky
2024-06-13 19:12       ` Zhu Yanjun
2024-06-13 19:24         ` Leon Romanovsky
2024-06-17 14:24       ` Jason Gunthorpe
2024-06-03 10:26 ` [PATCH rdma-next 3/3] IB/mlx5: Allocate resources just before first QP/SRQ is created Leon Romanovsky
2024-06-16 15:38 ` (subset) [PATCH rdma-next 0/3] Delay mlx5_ib internal resources allocations Leon Romanovsky

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=cover.1717409369.git.leon@kernel.org \
    --to=leon@kernel.org \
    --cc=edumazet@google.com \
    --cc=jgg@nvidia.com \
    --cc=jianbol@nvidia.com \
    --cc=kuba@kernel.org \
    --cc=leonro@nvidia.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-rdma@vger.kernel.org \
    --cc=netdev@vger.kernel.org \
    --cc=pabeni@redhat.com \
    --cc=saeedm@nvidia.com \
    --cc=tariqt@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see the mirroring instructions for how to clone
and mirror all data and code used for this inbox, as well as the URLs
for its NNTP newsgroup(s).