From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
Bing Zhao <bingz@nvidia.com>, Ori Kam <orika@nvidia.com>,
Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>
Cc: <dev@dpdk.org>, Raslan Darawsheh <rasland@nvidia.com>
Subject: [PATCH 0/5] net/mlx5: add BlueField socket direct support
Date: Mon, 2 Mar 2026 12:34:38 +0100
Message-ID: <20260302113443.16648-1-dsosnowski@nvidia.com>

The goal of this patchset is to prepare the probing logic in the mlx5
networking PMD for supporting BlueField DPUs with Socket Direct.
In such a setup, the BlueField DPU is connected through PCI
to 2 different CPUs on the host.
Each host CPU sees 2 PFs.
Each PF is connected to one of the physical ports.
+--------+ +--------+
|CPU 0 | |CPU 1 |
| | | |
| pf0 | | pf0 |
| | | |
| pf1 | | pf1 |
| | | |
+---+----+ +-+------+
| |
| |
| |
+----+ +-----+
| |
| |
| |
+---+-----------+----+
|BF3 DPU |
| |
| pf0hpf pf1hpf |
| |
| pf2hpf pf3hpf |
| |
| p0 p1 |
+------+------+------+
| phy0 | | phy1 |
+------+ +------+
On the BlueField DPU's ARM side, Linux netdevs map to PFs/ports as follows:
- p0 and p1 to physical ports 0 and 1 respectively,
- pf0hpf and pf2hpf to CPU0 pf0 and CPU1 pf0 respectively,
- pf1hpf and pf3hpf to CPU0 pf1 and CPU1 pf1 respectively.
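The mapping above can be summarized as a small table (an illustrative
sketch only; the netdev names follow the listing above, and the tuple
layout is an assumption made for this example, not PMD code):

```python
# (kind, host CPU, index): uplinks carry the physical port index,
# host-PF representors carry the PF index as seen by that CPU.
NETDEV_MAP = {
    "p0":     ("uplink", None, 0),     # physical port 0
    "p1":     ("uplink", None, 1),     # physical port 1
    "pf0hpf": ("host-pf", "CPU0", 0),  # CPU0 pf0
    "pf2hpf": ("host-pf", "CPU1", 0),  # CPU1 pf0
    "pf1hpf": ("host-pf", "CPU0", 1),  # CPU0 pf1
    "pf3hpf": ("host-pf", "CPU1", 1),  # CPU1 pf1
}
```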
There are several possible ways to use such a setup:
- A separate E-Switch (embedded switch) for each CPU PF to
  physical port connection.
- A shared E-Switch for related CPU PFs:
  - For example, both pf0hpf and pf2hpf are in the same E-Switch domain.
- A multiport E-Switch:
  - All host PFs and physical ports are in the same E-Switch domain.
When a DPDK application is run on the BlueField ARM cores, it should be
possible for the application to probe all the relevant representors
(corresponding to the available netdevs).
Using testpmd syntax, users will be able to do the following:
# Probe both physical ports
port attach 03:00.0,dv_flow_en=2,representor=pf0-1
# Probe host PF 0 from CPU 0
# (VF representor index -1 is a special encoding for the host PF)
port attach 03:00.0,dv_flow_en=2,representor=pf0vf65535
# or with explicit controller index
port attach 03:00.0,dv_flow_en=2,representor=c1pf0vf65535
# Probe host PF 0 from CPU 1
port attach 03:00.0,dv_flow_en=2,representor=pf2vf65535
# or with explicit controller index
port attach 03:00.0,dv_flow_en=2,representor=c2pf2vf65535
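The encoding used above can be sketched with a small parser (an
illustrative sketch only, not mlx5 PMD code; the function name and
returned fields are made up for this example, and range forms such as
pf0-1 are not handled):

```python
import re

# VF index 65535 is -1 as an unsigned 16-bit value: the special
# encoding for the host PF itself, as described above.
HOST_PF_VF_INDEX = 65535

def parse_representor(spec):
    """Parse a representor spec like 'c1pf0vf65535' into its parts.

    'c<N>' is an optional controller index, 'pf<N>' the PF index,
    'vf<N>' an optional VF index.
    """
    m = re.fullmatch(r"(?:c(\d+))?pf(\d+)(?:vf(\d+))?", spec)
    if m is None:
        raise ValueError(f"bad representor spec: {spec}")
    controller = int(m.group(1)) if m.group(1) else None
    pf = int(m.group(2))
    vf = int(m.group(3)) if m.group(3) else None
    return {
        "controller": controller,
        "pf": pf,
        "vf": vf,
        "host_pf": vf == HOST_PF_VF_INDEX,
    }
```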
Patches overview:
- Patches 1 and 2 - Fix the bond detection logic.
  Previously, the mlx5 PMD relied on "bond" appearing in the IB device
  name, which is not always the case. Moved to sysfs checks for bonding devices.
- Patch 3 - Add calculation of the number of physical ports and host PFs.
  This information will be used to determine
  how the DPDK port name is generated, instead of relying on
  a specific setup type.
- Patch 4 - Change the "representor to IB port" matching logic to directly
  compare ethdev devargs values against IB port info.
  Add optional matching on the controller index.
- Patch 5 - Make DPDK port name generation dynamic and dependent on the
  types/number of ports, instead of a specific setup type.
  This allows more generic probing, independent of the setup topology.
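The idea behind the sysfs-based bond check in patches 1 and 2 can be
sketched as follows (an illustration of the concept only, not the PMD
code; a real check would look under /sys/class/net, here mocked with a
temp dir):

```python
import os
import tempfile

def is_bond_netdev(sys_class_net, ifname):
    """A bond master exposes a 'bonding' directory in sysfs."""
    return os.path.isdir(os.path.join(sys_class_net, ifname, "bonding"))

# Mock sysfs tree: bond0 is a bond master, eth0 is a plain netdev.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "bond0", "bonding"))
os.makedirs(os.path.join(root, "eth0"))
```

Unlike matching on "bond" in the IB device name, this test works
regardless of how the device happens to be named.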
Dariusz Sosnowski (5):
common/mlx5: fix bond check
net/mlx5: fix bond check
net/mlx5: calculate number of uplinks and host PFs
net/mlx5: compare representors explicitly
net/mlx5: build port name dynamically
drivers/common/mlx5/linux/mlx5_common_os.c | 86 ++++-
drivers/common/mlx5/linux/mlx5_common_os.h | 9 +
drivers/net/mlx5/linux/mlx5_os.c | 356 ++++++++++++++-------
drivers/net/mlx5/mlx5.h | 2 +
4 files changed, 338 insertions(+), 115 deletions(-)
--
2.47.3