public inbox for netdev@vger.kernel.org
From: Leon Romanovsky <leon@kernel.org>
To: Jakub Kicinski <kuba@kernel.org>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
	Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>, Jason Gunthorpe <jgg@ziepe.ca>,
	linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Maher Sanalla <msanalla@nvidia.com>
Subject: Re: [PATCH rdma-next 0/6] Add support for TLP emulation
Date: Mon, 2 Mar 2026 16:06:52 +0200	[thread overview]
Message-ID: <20260302140652.GQ12611@unreal> (raw)
In-Reply-To: <20260226173434.62c82688@kernel.org>

On Thu, Feb 26, 2026 at 05:34:34PM -0800, Jakub Kicinski wrote:
> On Wed, 25 Feb 2026 16:19:30 +0200 Leon Romanovsky wrote:
> > This series adds support for Transaction Layer Packet (TLP) emulation
> > response gateway regions, enabling userspace device emulation software
> > to write TLP responses directly to lower layers without kernel driver
> > involvement.
> > 
> > Currently, the mlx5 driver exposes VirtIO emulation access regions via
> > the MLX5_IB_METHOD_VAR_OBJ_ALLOC ioctl. This series extends that
> > ioctl to also support allocating TLP response gateway channels for
> > PCI device emulation use cases.
> 
> Why is this an RDMA thing if it's a PCIe feature intended for VirtIO?

This is the result of a long path of evolution.

Early on, we had VDPA emulation implemented entirely within the RDMA
stack. The idea was to build something similar to a tun/tap pair, where
a native RDMA QP could be connected to RDMA QPs carrying WQEs formatted
in the VirtIO layout. With some QEMU-side handling, this produced a
virtio-net device.

Later, this model was adapted for a DPU configuration. In that setup,
the DPU's RDMA block held the native QPs, while the x86 host exposed the
VirtIO-formatted QPs, still with QEMU involved. The DPU controlled the
x86-side "tun/tap" through RDMA-linked operations on the associated
objects.

Next, the DPU evolved to instantiate a full VirtIO PCI function on its
own, removing the need for x86 to run QEMU. The DPU continued to manage
the tun/tap via RDMA operations, with some extensions to cover PCI-
related details.

Eventually, the DPU gained general-purpose programmable co-processors
capable of executing various RDMA and non-RDMA operations. As a result,
the RDMA subsystem also became responsible for loading programs onto
these co-processors and managing them within RDMA context and PD
security constraints.

Now we have reached a stage where these co-processors can manage a much
larger portion of the PCI-side behavior, including delegating some
responsibilities back to the host CPU. This produces an odd situation
where a privileged RDMA user can:

- Claim an "emulation" PCI function
- Load a co-processor program associated with that PCI function
- Use RDMA-mediated queues and security controls to interact with the
  co-processor program
- Use the co-processor and related mechanisms to capture and respond to
  TLPs directed to that PCI function
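
For illustration only — every identifier below is made up and is not
the actual uAPI — the sequence above looks roughly like:

```
/* Pseudocode sketch; all names are hypothetical placeholders. */
emu_fn = claim_emulation_function(pci_id);        /* privileged RDMA op   */
prog   = load_coprocessor_program(emu_fn, image); /* bound to that fn     */
qp     = create_control_queue(pd, prog);          /* PD-scoped security   */
gw     = map_tlp_response_gateway(emu_fn);        /* what this series adds */

while ((tlp = capture_tlp(qp)) != NULL)
        write_tlp_response(gw, handle(tlp));      /* bypasses the kernel  */
```

The point of the sketch is only to show why the pieces interlock: the
gateway mapping is meaningless without the RDMA context, PD, and
co-processor objects that precede it.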

There are many tightly coupled components in this design, but the TLP
handling cannot be separated from the RDMA-related logic that enables
it.

Thanks


Thread overview: 14+ messages
2026-02-25 14:19 [PATCH rdma-next 0/6] Add support for TLP emulation Leon Romanovsky
2026-02-25 14:19 ` [PATCH mlx5-next 1/6] net/mlx5: Add TLP emulation device capabilities Leon Romanovsky
2026-02-25 14:19 ` [PATCH mlx5-next 2/6] net/mlx5: Expose TLP emulation capabilities Leon Romanovsky
2026-02-25 14:19 ` [PATCH rdma-next 3/6] RDMA/mlx5: Refactor VAR table to use region abstraction Leon Romanovsky
2026-02-25 14:19 ` [PATCH rdma-next 4/6] RDMA/mlx5: Add TLP VAR region support and infrastructure Leon Romanovsky
2026-02-25 14:19 ` [PATCH rdma-next 5/6] RDMA/mlx5: Add support for TLP VAR allocation Leon Romanovsky
2026-02-25 14:19 ` [PATCH rdma-next 6/6] RDMA/mlx5: Add VAR object query method for cross-process sharing Leon Romanovsky
2026-02-25 14:48 ` [PATCH rdma-next 0/6] Add support for TLP emulation Leon Romanovsky
2026-02-27  1:34 ` Jakub Kicinski
2026-03-02 14:06   ` Leon Romanovsky [this message]
2026-02-27 21:37 ` Keith Busch
2026-03-02 14:04   ` Jason Gunthorpe
2026-03-05 10:34 ` (subset) " Leon Romanovsky
2026-03-05 10:44 ` Leon Romanovsky
