From: Dave Jiang <dave.jiang@intel.com>
To: Koichiro Den <den@valinux.co.jp>,
Frank.Li@nxp.com, ntb@lists.linux.dev, linux-pci@vger.kernel.org,
dmaengine@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: mani@kernel.org, kwilczynski@kernel.org, kishon@kernel.org,
bhelgaas@google.com, corbet@lwn.net, geert+renesas@glider.be,
magnus.damm@gmail.com, robh@kernel.org, krzk+dt@kernel.org,
conor+dt@kernel.org, vkoul@kernel.org, joro@8bytes.org,
will@kernel.org, robin.murphy@arm.com, jdmason@kudzu.us,
allenbh@gmail.com, andrew+netdev@lunn.ch, davem@davemloft.net,
edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
Basavaraj.Natikar@amd.com, Shyam-sundar.S-k@amd.com,
kurt.schwemmer@microsemi.com, logang@deltatee.com,
jingoohan1@gmail.com, lpieralisi@kernel.org,
utkarsh02t@gmail.com, jbrunet@baylibre.com, dlemoal@kernel.org,
arnd@arndb.de, elfring@users.sourceforge.net
Subject: Re: [RFC PATCH v3 35/35] Documentation: driver-api: ntb: Document remote eDMA transport backend
Date: Tue, 6 Jan 2026 14:09:38 -0700 [thread overview]
Message-ID: <77ae1b02-ff32-4694-9b34-bc49c85c6c82@intel.com> (raw)
In-Reply-To: <20251217151609.3162665-36-den@valinux.co.jp>
On 12/17/25 8:16 AM, Koichiro Den wrote:
> Add a description of the ntb_transport backend architecture and the new
> remote eDMA backed mode introduced by CONFIG_NTB_TRANSPORT_EDMA and the
> use_remote_edma module parameter.
>
> Signed-off-by: Koichiro Den <den@valinux.co.jp>
> ---
> Documentation/driver-api/ntb.rst | 58 ++++++++++++++++++++++++++++++++
> 1 file changed, 58 insertions(+)
>
> diff --git a/Documentation/driver-api/ntb.rst b/Documentation/driver-api/ntb.rst
> index a49c41383779..eb7b889d17c4 100644
> --- a/Documentation/driver-api/ntb.rst
> +++ b/Documentation/driver-api/ntb.rst
> @@ -132,6 +132,64 @@ Transport queue pair. Network data is copied between socket buffers and the
> Transport queue pair buffer. The Transport client may be used for other things
> besides Netdev, however no other applications have yet been written.
>
> +Transport backends
> +~~~~~~~~~~~~~~~~~~
> +
> +The ``ntb_transport`` core driver implements a generic "queue pair"
> +abstraction on top of the memory windows exported by the NTB hardware. Each
> +queue pair has a TX and an RX ring and is used by client drivers such as
> +``ntb_netdev`` to exchange variable-sized payloads with the peer.
> +
> +There are currently two ways for ``ntb_transport`` to move payload data
> +between the local system memory and the peer:
> +
> +* The default backend copies data between the caller buffers and the TX/RX
> + rings in the memory windows using ``memcpy()`` on the local CPU or, when
> + the ``use_dma`` module parameter is set, a local DMA engine via the
> + standard dmaengine ``DMA_MEMCPY`` interface.
> +
> +* When ``CONFIG_NTB_TRANSPORT_EDMA`` is enabled in the kernel configuration
> + and the ``use_remote_edma`` module parameter is set at run time, a second
> + backend uses a DesignWare eDMA engine that resides on the endpoint side
I would say "embedded DMA device" instead of naming the DesignWare eDMA engine specifically, to keep the transport description generic. But do provide a reference or link to the DesignWare eDMA engine as an example.
> + of the NTB. In this mode the endpoint driver exposes a dedicated peer
> + memory window that contains the eDMA register block together with a small
> + control structure and per-channel linked-list rings only for read
> + channels. The host ioremaps this window and configures a dmaengine
> + device. The endpoint uses its local eDMA write channels for its TX
> + transfer, while the host side uses the remote eDMA read channels for its
> + TX transfer.
Can you provide some more text on the data flow from one host to the other for eDMA vs. host-based DMA in the current transport? I.e., currently for a transmit, user data gets copied into an skbuff by the network stack, and then the local host copies it into the ring buffer on the remote host via DMA write (or CPU). The remote host then copies out of the ring buffer entry into a kernel skbuff and back to user space on the receiver side. How does this work now with eDMA?

Also, can the mechanism used by eDMA be achieved with a host DMA setup, or is the eDMA mechanism specifically tied to the DW hardware design?

It would also be nice to move the ASCII data flow diagram from the cover letter into the documentation so we don't lose it.
DJ
> +
> +The ``ntb_transport`` core routes queue pair operations (enqueue,
> +completion polling, link bring-up/teardown, etc.) through a small
> +backend-ops structure so that both implementations can coexist in the same
> +module without affecting the public queue pair API used by clients. From a
> +client driver's point of view (for example ``ntb_netdev``) the queue pair
> +interface is the same regardless of which backend is active.
> +
> +When ``use_remote_edma`` is not set, ``ntb_transport`` continues to use
> +the shared-memory backend, exactly as it behaved before the optional
> +parameter was introduced. Existing configurations that do not select the
> +eDMA backend therefore see no behavioural change.
> +
> +In the remote eDMA mode, host-to-endpoint notifications are delivered via
> +a dedicated DMA read channel located at the endpoint. In both the default
> +backend mode and the remote eDMA mode, endpoint-to-host notifications are
> +backed by native MSI support on the DW EPC, even when ``use_msi=0``.
> +Because of this, the ``use_msi`` module parameter has no effect when
> +``use_remote_edma=1`` on the host.
> +
> +At a high level, enabling the remote eDMA transport backend requires:
> +
> +* building the kernel with ``CONFIG_NTB_TRANSPORT`` and
> + ``CONFIG_NTB_TRANSPORT_EDMA`` enabled,
> +* configuring the NTB endpoint so that it exposes a memory window containing
> + the eDMA register block, descriptor rings and control structure expected by
> + the helper driver, and
> +* loading ``ntb_transport`` on the host with ``use_remote_edma=1`` so that
> + the eDMA-backed backend is selected instead of the default shared-memory
> + backend.
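Might be worth closing this section with a concrete invocation so readers don't have to guess the syntax. A sketch only, using the module and parameter names as documented above (endpoint-side setup and hardware specifics omitted):

```shell
# Sketch: assumes a kernel built with CONFIG_NTB_TRANSPORT and
# CONFIG_NTB_TRANSPORT_EDMA, and an NTB endpoint already configured to
# expose the eDMA memory window described above.
modprobe ntb_transport use_remote_edma=1

# Confirm the parameter took effect on the host:
cat /sys/module/ntb_transport/parameters/use_remote_edma
```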
> +
> NTB Ping Pong Test Client (ntb\_pingpong)
> -----------------------------------------
>