From: Frank Li <Frank.li@nxp.com>
To: Koichiro Den <den@valinux.co.jp>
Cc: dave.jiang@intel.com, cassel@kernel.org, mani@kernel.org,
kwilczynski@kernel.org, kishon@kernel.org, bhelgaas@google.com,
geert+renesas@glider.be, robh@kernel.org, vkoul@kernel.org,
jdmason@kudzu.us, allenbh@gmail.com, jingoohan1@gmail.com,
lpieralisi@kernel.org, linux-pci@vger.kernel.org,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-renesas-soc@vger.kernel.org, devicetree@vger.kernel.org,
dmaengine@vger.kernel.org, iommu@lists.linux.dev,
ntb@lists.linux.dev, netdev@vger.kernel.org,
linux-kselftest@vger.kernel.org, arnd@arndb.de,
gregkh@linuxfoundation.org, joro@8bytes.org, will@kernel.org,
robin.murphy@arm.com, magnus.damm@gmail.com, krzk+dt@kernel.org,
conor+dt@kernel.org, corbet@lwn.net, skhan@linuxfoundation.org,
andriy.shevchenko@linux.intel.com, jbrunet@baylibre.com,
utkarsh02t@gmail.com
Subject: Re: [RFC PATCH v4 16/38] NTB: ntb_transport: Move TX memory window setup into setup_qp_mw()
Date: Mon, 19 Jan 2026 15:36:47 -0500 [thread overview]
Message-ID: <aW6V36kWrXE3X017@lizhi-Precision-Tower-5810> (raw)
In-Reply-To: <20260118135440.1958279-17-den@valinux.co.jp>
On Sun, Jan 18, 2026 at 10:54:18PM +0900, Koichiro Den wrote:
> Historically both TX and RX have assumed the same per-QP MW slice
> (tx_max_entry == remote rx_max_entry), even though the two are calculated
> separately in different places (before and after the link-up negotiation
> point). This has been safe because nt->link_is_up is never set to true
> unless the pre-determined qp_count values match on both sides, and
> qp_count is typically limited to nt->mw_count, which should be carefully
> configured by the admin.
>
> However, setup_qp_mw() can actually split an MW and handle multiple QPs
> in one MW properly, so qp_count need not be limited by nt->mw_count. Once
> that limitation is relaxed, the pre-determined qp_count can differ
> between the host side and the endpoint, and link-up negotiation can
> easily fail.
>
> Move the TX MW configuration (per-QP offset and size) into
> ntb_transport_setup_qp_mw() so that both RX and TX layout decisions are
> centralized in a single helper. ntb_transport_init_queue() now deals
> only with per-QP software state, not with MW layout.
>
> This keeps the previous behavior, while preparing for relaxing the
> qp_count limitation and improving readability.
>
> No functional change is intended.
>
> Signed-off-by: Koichiro Den <den@valinux.co.jp>
> ---
> drivers/ntb/ntb_transport.c | 76 ++++++++++++++++---------------------
> 1 file changed, 32 insertions(+), 44 deletions(-)
>
> diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
> index d5a544bf8fd6..57a21f2daac6 100644
> --- a/drivers/ntb/ntb_transport.c
> +++ b/drivers/ntb/ntb_transport.c
> @@ -569,7 +569,10 @@ static int ntb_transport_setup_qp_mw(struct ntb_transport_ctx *nt,
> struct ntb_transport_mw *mw;
> struct ntb_dev *ndev = nt->ndev;
> struct ntb_queue_entry *entry;
> - unsigned int rx_size, num_qps_mw;
> + phys_addr_t mw_base;
> + resource_size_t mw_size;
> + unsigned int rx_size, tx_size, num_qps_mw;
> + u64 qp_offset;
> unsigned int mw_num, mw_count, qp_count;
> unsigned int i;
> int node;
> @@ -588,13 +591,38 @@ static int ntb_transport_setup_qp_mw(struct ntb_transport_ctx *nt,
> else
> num_qps_mw = qp_count / mw_count;
>
> - rx_size = (unsigned int)mw->xlat_size / num_qps_mw;
> - qp->rx_buff = mw->virt_addr + rx_size * (qp_num / mw_count);
> - rx_size -= sizeof(struct ntb_rx_info);
> + mw_base = nt->mw_vec[mw_num].phys_addr;
> + mw_size = nt->mw_vec[mw_num].phys_size;
> +
> + if (mw_size > mw->xlat_size)
> + mw_size = mw->xlat_size;
The old code did not have this check, so adding it here looks like a
functional change, which contradicts the "No functional change is
intended" note above.

Frank
> + if (max_mw_size && mw_size > max_mw_size)
> + mw_size = max_mw_size;
> +
> + tx_size = (unsigned int)mw_size / num_qps_mw;
> + qp_offset = tx_size * (qp_num / mw_count);
> +
> + qp->rx_buff = mw->virt_addr + qp_offset;
> +
> + qp->tx_mw_size = tx_size;
> + qp->tx_mw = nt->mw_vec[mw_num].vbase + qp_offset;
> + if (!qp->tx_mw)
> + return -EINVAL;
> +
> + qp->tx_mw_phys = mw_base + qp_offset;
> + if (!qp->tx_mw_phys)
> + return -EINVAL;
>
> + rx_size = tx_size;
> + rx_size -= sizeof(struct ntb_rx_info);
> qp->remote_rx_info = qp->rx_buff + rx_size;
>
> + tx_size -= sizeof(struct ntb_rx_info);
> + qp->rx_info = qp->tx_mw + tx_size;
> +
> /* Due to housekeeping, there must be atleast 2 buffs */
> + qp->tx_max_frame = min(transport_mtu, tx_size / 2);
> + qp->tx_max_entry = tx_size / qp->tx_max_frame;
> qp->rx_max_frame = min(transport_mtu, rx_size / 2);
> qp->rx_max_entry = rx_size / qp->rx_max_frame;
> qp->rx_index = 0;
> @@ -1132,16 +1160,6 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt,
> unsigned int qp_num)
> {
> struct ntb_transport_qp *qp;
> - phys_addr_t mw_base;
> - resource_size_t mw_size;
> - unsigned int num_qps_mw, tx_size;
> - unsigned int mw_num, mw_count, qp_count;
> - u64 qp_offset;
> -
> - mw_count = nt->mw_count;
> - qp_count = nt->qp_count;
> -
> - mw_num = QP_TO_MW(nt, qp_num);
>
> qp = &nt->qp_vec[qp_num];
> qp->qp_num = qp_num;
> @@ -1151,36 +1169,6 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt,
> qp->event_handler = NULL;
> ntb_qp_link_context_reset(qp);
>
> - if (mw_num < qp_count % mw_count)
> - num_qps_mw = qp_count / mw_count + 1;
> - else
> - num_qps_mw = qp_count / mw_count;
> -
> - mw_base = nt->mw_vec[mw_num].phys_addr;
> - mw_size = nt->mw_vec[mw_num].phys_size;
> -
> - if (max_mw_size && mw_size > max_mw_size)
> - mw_size = max_mw_size;
> -
> - tx_size = (unsigned int)mw_size / num_qps_mw;
> - qp_offset = tx_size * (qp_num / mw_count);
> -
> - qp->tx_mw_size = tx_size;
> - qp->tx_mw = nt->mw_vec[mw_num].vbase + qp_offset;
> - if (!qp->tx_mw)
> - return -EINVAL;
> -
> - qp->tx_mw_phys = mw_base + qp_offset;
> - if (!qp->tx_mw_phys)
> - return -EINVAL;
> -
> - tx_size -= sizeof(struct ntb_rx_info);
> - qp->rx_info = qp->tx_mw + tx_size;
> -
> - /* Due to housekeeping, there must be atleast 2 buffs */
> - qp->tx_max_frame = min(transport_mtu, tx_size / 2);
> - qp->tx_max_entry = tx_size / qp->tx_max_frame;
> -
> if (nt->debugfs_node_dir) {
> char debugfs_name[8];
>
> --
> 2.51.0
>
Thread overview: 68+ messages (latest: 2026-01-19 20:37 UTC)
2026-01-18 13:54 [RFC PATCH v4 00/38] NTB transport backed by PCI EP embedded DMA Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 01/38] dmaengine: dw-edma: Export helper to get integrated register window Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 02/38] dmaengine: dw-edma: Add per-channel interrupt routing control Koichiro Den
2026-01-18 17:03 ` Frank Li
2026-01-19 14:26 ` Koichiro Den
2026-01-21 16:02 ` Vinod Koul
2026-01-22 2:44 ` Koichiro Den
2026-01-23 15:44 ` Frank Li
2026-01-18 13:54 ` [RFC PATCH v4 03/38] dmaengine: dw-edma: Poll completion when local IRQ handling is disabled Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 04/38] dmaengine: dw-edma: Add notify-only channels support Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 05/38] dmaengine: dw-edma: Add a helper to query linked-list region Koichiro Den
2026-01-18 17:05 ` Frank Li
2026-01-21 1:38 ` Koichiro Den
2026-01-21 8:41 ` Koichiro Den
2026-01-21 15:24 ` Frank Li
2026-01-22 1:19 ` Koichiro Den
2026-01-22 1:54 ` Frank Li
2026-01-18 13:54 ` [RFC PATCH v4 06/38] NTB: epf: Add mwN_offset support and config region versioning Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 07/38] NTB: epf: Reserve a subset of MSI vectors for non-NTB users Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 08/38] NTB: epf: Provide db_vector_count/db_vector_mask callbacks Koichiro Den
2026-01-19 20:03 ` Frank Li
2026-01-21 1:41 ` Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 09/38] NTB: core: Add mw_set_trans_ranges() for subrange programming Koichiro Den
2026-01-19 20:07 ` Frank Li
2026-01-18 13:54 ` [RFC PATCH v4 10/38] NTB: core: Add .get_private_data() to ntb_dev_ops Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 11/38] NTB: core: Add .get_dma_dev() " Koichiro Den
2026-01-19 20:09 ` Frank Li
2026-01-21 1:44 ` Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 12/38] NTB: core: Add driver_override support for NTB devices Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 13/38] PCI: endpoint: pci-epf-vntb: Support BAR subrange mappings for MWs Koichiro Den
2026-01-19 20:26 ` Frank Li
2026-01-21 2:08 ` Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 14/38] PCI: endpoint: pci-epf-vntb: Implement .get_private_data() callback Koichiro Den
2026-01-19 20:27 ` Frank Li
2026-01-18 13:54 ` [RFC PATCH v4 15/38] PCI: endpoint: pci-epf-vntb: Implement .get_dma_dev() Koichiro Den
2026-01-19 20:30 ` Frank Li
2026-01-22 14:58 ` Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 16/38] NTB: ntb_transport: Move TX memory window setup into setup_qp_mw() Koichiro Den
2026-01-19 20:36 ` Frank Li [this message]
2026-01-21 2:15 ` Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 17/38] NTB: ntb_transport: Dynamically determine qp count Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 18/38] NTB: ntb_transport: Use ntb_get_dma_dev() Koichiro Den
2026-01-19 20:38 ` Frank Li
2026-01-18 13:54 ` [RFC PATCH v4 19/38] NTB: ntb_transport: Rename ntb_transport.c to ntb_transport_core.c Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 20/38] NTB: ntb_transport: Move internal types to ntb_transport_internal.h Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 21/38] NTB: ntb_transport: Export common helpers for modularization Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 22/38] NTB: ntb_transport: Split core library and default NTB client Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 23/38] NTB: ntb_transport: Add transport backend infrastructure Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 24/38] NTB: ntb_transport: Run ntb_set_mw() before link-up negotiation Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 25/38] NTB: hw: Add remote eDMA backend registry and DesignWare backend Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 26/38] NTB: ntb_transport: Add remote embedded-DMA transport client Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 27/38] ntb_netdev: Multi-queue support Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 28/38] iommu: ipmmu-vmsa: Add PCIe ch0 to devices_allowlist Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 29/38] iommu: ipmmu-vmsa: Add support for reserved regions Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 30/38] arm64: dts: renesas: Add Spider RC/EP DTs for NTB with remote DW PCIe eDMA Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 31/38] NTB: epf: Add per-SoC quirk to cap MRRS for DWC eDMA (128B for R-Car) Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 32/38] NTB: epf: Add an additional memory window (MW2) barno mapping on Renesas R-Car Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 33/38] Documentation: PCI: endpoint: pci-epf-vntb: Update and add mwN_offset usage Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 34/38] Documentation: driver-api: ntb: Document remote embedded-DMA transport Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 35/38] PCI: endpoint: pci-epf-test: Add pci_epf_test_next_free_bar() helper Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 36/38] PCI: endpoint: pci-epf-test: Add remote eDMA-backed mode Koichiro Den
2026-01-19 20:47 ` Frank Li
2026-01-22 14:54 ` Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 37/38] misc: pci_endpoint_test: Add remote eDMA transfer test mode Koichiro Den
2026-01-18 13:54 ` [RFC PATCH v4 38/38] selftests: pci_endpoint: Add remote eDMA transfer coverage Koichiro Den
2026-01-20 18:30 ` [RFC PATCH v4 00/38] NTB transport backed by PCI EP embedded DMA Dave Jiang
2026-01-20 18:47 ` Dave Jiang
2026-01-21 2:40 ` Koichiro Den