* [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID
@ 2023-11-14 8:39 Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 1/4] dmaengine: ti: k3-udma-glue: Add function to parse channel " Siddharth Vadapalli
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Siddharth Vadapalli @ 2023-11-14 8:39 UTC (permalink / raw)
To: peter.ujfalusi, vkoul
Cc: dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr,
s-vadapalli
The existing APIs for requesting TX and RX DMA channels rely on parsing
a device-tree node to obtain the Channel/Thread IDs from their names.
However, the thread IDs may be known by other means, such as firmware
running on a remote core informing Linux of the TX/RX DMA channel IDs
it has allocated. Add APIs to support such use cases.
Additionally, since the name of the device for the remote RX channel is
being set purely on the basis of the RX channel ID itself, it can result
in duplicate names when multiple flows are used on the same channel.
Avoid name duplication by including the flow in the name.
Series is based on linux-next tagged next-20231114.
RFC Series:
https://lore.kernel.org/r/20231111121555.2656760-1-s-vadapalli@ti.com/
Changes since RFC Series:
- Rebased patches 1, 2 and 3 on linux-next tagged next-20231114.
- Added patch 4 to the series.
Regards,
Siddharth.
Siddharth Vadapalli (4):
dmaengine: ti: k3-udma-glue: Add function to parse channel by ID
dmaengine: ti: k3-udma-glue: Add function to request TX channel by ID
dmaengine: ti: k3-udma-glue: Add function to request RX channel by ID
dmaengine: ti: k3-udma-glue: Update name for remote RX channel device
drivers/dma/ti/k3-udma-glue.c | 306 ++++++++++++++++++++++---------
include/linux/dma/k3-udma-glue.h | 8 +
2 files changed, 228 insertions(+), 86 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/4] dmaengine: ti: k3-udma-glue: Add function to parse channel by ID
2023-11-14 8:39 [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Siddharth Vadapalli
@ 2023-11-14 8:39 ` Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 2/4] dmaengine: ti: k3-udma-glue: Add function to request TX " Siddharth Vadapalli
` (3 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Siddharth Vadapalli @ 2023-11-14 8:39 UTC (permalink / raw)
To: peter.ujfalusi, vkoul
Cc: dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr,
s-vadapalli
The existing helper function of_k3_udma_glue_parse() fetches the DMA
channel thread ID from the device-tree node, making it necessary to have
a device-tree node with the channel thread IDs populated. However, when
the thread ID is known by alternate means (for example, firmware running
on a remote core sharing the thread IDs), there is no equivalent of the
existing of_k3_udma_glue_parse() function.
Add the of_k3_udma_glue_parse_chn_by_id() helper function, which accepts
the thread ID as an argument, making a device-tree node unnecessary for
obtaining the thread ID.
Since of_k3_udma_glue_parse() and of_k3_udma_glue_parse_chn_by_id()
share a lot of common code, factor it out into a new function,
of_k3_udma_glue_parse_chn_common().
Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
---
drivers/dma/ti/k3-udma-glue.c | 81 +++++++++++++++++++++++------------
1 file changed, 54 insertions(+), 27 deletions(-)
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index c278d5facf7d..9979785d30aa 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -111,6 +111,35 @@ static int of_k3_udma_glue_parse(struct device_node *udmax_np,
return 0;
}
+static int of_k3_udma_glue_parse_chn_common(struct k3_udma_glue_common *common, u32 thread_id,
+ bool tx_chn)
+{
+ if (tx_chn && !(thread_id & K3_PSIL_DST_THREAD_ID_OFFSET))
+ return -EINVAL;
+
+ if (!tx_chn && (thread_id & K3_PSIL_DST_THREAD_ID_OFFSET))
+ return -EINVAL;
+
+ /* get psil endpoint config */
+ common->ep_config = psil_get_ep_config(thread_id);
+ if (IS_ERR(common->ep_config)) {
+ dev_err(common->dev,
+ "No configuration for psi-l thread 0x%04x\n",
+ thread_id);
+ return PTR_ERR(common->ep_config);
+ }
+
+ common->epib = common->ep_config->needs_epib;
+ common->psdata_size = common->ep_config->psd_size;
+
+ if (tx_chn)
+ common->dst_thread = thread_id;
+ else
+ common->src_thread = thread_id;
+
+ return 0;
+}
+
static int of_k3_udma_glue_parse_chn(struct device_node *chn_np,
const char *name, struct k3_udma_glue_common *common,
bool tx_chn)
@@ -153,38 +182,36 @@ static int of_k3_udma_glue_parse_chn(struct device_node *chn_np,
common->atype_asel = dma_spec.args[1];
}
- if (tx_chn && !(thread_id & K3_PSIL_DST_THREAD_ID_OFFSET)) {
- ret = -EINVAL;
+ ret = of_k3_udma_glue_parse_chn_common(common, thread_id, tx_chn);
+ if (ret)
goto out_put_spec;
- }
-
- if (!tx_chn && (thread_id & K3_PSIL_DST_THREAD_ID_OFFSET)) {
- ret = -EINVAL;
- goto out_put_spec;
- }
-
- /* get psil endpoint config */
- common->ep_config = psil_get_ep_config(thread_id);
- if (IS_ERR(common->ep_config)) {
- dev_err(common->dev,
- "No configuration for psi-l thread 0x%04x\n",
- thread_id);
- ret = PTR_ERR(common->ep_config);
- goto out_put_spec;
- }
-
- common->epib = common->ep_config->needs_epib;
- common->psdata_size = common->ep_config->psd_size;
-
- if (tx_chn)
- common->dst_thread = thread_id;
- else
- common->src_thread = thread_id;
out_put_spec:
of_node_put(dma_spec.np);
return ret;
-};
+}
+
+static int
+of_k3_udma_glue_parse_chn_by_id(struct device_node *udmax_np, struct k3_udma_glue_common *common,
+				bool tx_chn, u32 thread_id)
+{
+	int ret;
+
+	if (unlikely(!udmax_np))
+		return -EINVAL;
+
+	/*
+	 * The caller owns the reference on udmax_np; unlike
+	 * of_k3_udma_glue_parse_chn(), no of_node_put() is needed here.
+	 */
+	ret = of_k3_udma_glue_parse(udmax_np, common);
+	if (ret)
+		return ret;
+
+	return of_k3_udma_glue_parse_chn_common(common, thread_id, tx_chn);
+}
static void k3_udma_glue_dump_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
{
--
2.34.1
* [PATCH 2/4] dmaengine: ti: k3-udma-glue: Add function to request TX channel by ID
2023-11-14 8:39 [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 1/4] dmaengine: ti: k3-udma-glue: Add function to parse channel " Siddharth Vadapalli
@ 2023-11-14 8:39 ` Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 3/4] dmaengine: ti: k3-udma-glue: Add function to request RX " Siddharth Vadapalli
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Siddharth Vadapalli @ 2023-11-14 8:39 UTC (permalink / raw)
To: peter.ujfalusi, vkoul
Cc: dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr,
s-vadapalli
The existing function k3_udma_glue_request_tx_chn() supports requesting
a TX DMA channel by its name. Add support for requesting a TX channel by
its thread ID in the form of a new function,
k3_udma_glue_request_tx_chn_by_id(), and export it.
Since the implementation of k3_udma_glue_request_tx_chn_by_id() reuses
most of the code in k3_udma_glue_request_tx_chn(), move the common code
into a new function, k3_udma_glue_request_tx_chn_common().
Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
---
drivers/dma/ti/k3-udma-glue.c | 101 +++++++++++++++++++++++--------
include/linux/dma/k3-udma-glue.h | 4 ++
2 files changed, 79 insertions(+), 26 deletions(-)
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index 9979785d30aa..d3f04d446c4e 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -278,29 +278,13 @@ static int k3_udma_glue_cfg_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
return tisci_rm->tisci_udmap_ops->tx_ch_cfg(tisci_rm->tisci, &req);
}
-struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
- const char *name, struct k3_udma_glue_tx_channel_cfg *cfg)
+static int
+k3_udma_glue_request_tx_chn_common(struct device *dev,
+ struct k3_udma_glue_tx_channel *tx_chn,
+ struct k3_udma_glue_tx_channel_cfg *cfg)
{
- struct k3_udma_glue_tx_channel *tx_chn;
int ret;
- tx_chn = devm_kzalloc(dev, sizeof(*tx_chn), GFP_KERNEL);
- if (!tx_chn)
- return ERR_PTR(-ENOMEM);
-
- tx_chn->common.dev = dev;
- tx_chn->common.swdata_size = cfg->swdata_size;
- tx_chn->tx_pause_on_err = cfg->tx_pause_on_err;
- tx_chn->tx_filt_einfo = cfg->tx_filt_einfo;
- tx_chn->tx_filt_pswords = cfg->tx_filt_pswords;
- tx_chn->tx_supr_tdpkt = cfg->tx_supr_tdpkt;
-
- /* parse of udmap channel */
- ret = of_k3_udma_glue_parse_chn(dev->of_node, name,
- &tx_chn->common, true);
- if (ret)
- goto err;
-
tx_chn->common.hdesc_size = cppi5_hdesc_calc_size(tx_chn->common.epib,
tx_chn->common.psdata_size,
tx_chn->common.swdata_size);
@@ -316,7 +300,7 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
if (IS_ERR(tx_chn->udma_tchanx)) {
ret = PTR_ERR(tx_chn->udma_tchanx);
dev_err(dev, "UDMAX tchanx get err %d\n", ret);
- goto err;
+ return ret;
}
tx_chn->udma_tchan_id = xudma_tchan_get_id(tx_chn->udma_tchanx);
@@ -329,7 +313,7 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
dev_err(dev, "Channel Device registration failed %d\n", ret);
put_device(&tx_chn->common.chan_dev);
tx_chn->common.chan_dev.parent = NULL;
- goto err;
+ return ret;
}
if (xudma_is_pktdma(tx_chn->common.udmax)) {
@@ -353,7 +337,7 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
&tx_chn->ringtxcq);
if (ret) {
dev_err(dev, "Failed to get TX/TXCQ rings %d\n", ret);
- goto err;
+ return ret;
}
/* Set the dma_dev for the rings to be configured */
@@ -369,13 +353,13 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
ret = k3_ringacc_ring_cfg(tx_chn->ringtx, &cfg->tx_cfg);
if (ret) {
dev_err(dev, "Failed to cfg ringtx %d\n", ret);
- goto err;
+ return ret;
}
ret = k3_ringacc_ring_cfg(tx_chn->ringtxcq, &cfg->txcq_cfg);
if (ret) {
dev_err(dev, "Failed to cfg ringtx %d\n", ret);
- goto err;
+ return ret;
}
/* request and cfg psi-l */
@@ -386,11 +370,42 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
ret = k3_udma_glue_cfg_tx_chn(tx_chn);
if (ret) {
dev_err(dev, "Failed to cfg tchan %d\n", ret);
- goto err;
+ return ret;
}
k3_udma_glue_dump_tx_chn(tx_chn);
+ return 0;
+}
+
+struct k3_udma_glue_tx_channel *
+k3_udma_glue_request_tx_chn(struct device *dev, const char *name,
+ struct k3_udma_glue_tx_channel_cfg *cfg)
+{
+ struct k3_udma_glue_tx_channel *tx_chn;
+ int ret;
+
+ tx_chn = devm_kzalloc(dev, sizeof(*tx_chn), GFP_KERNEL);
+ if (!tx_chn)
+ return ERR_PTR(-ENOMEM);
+
+ tx_chn->common.dev = dev;
+ tx_chn->common.swdata_size = cfg->swdata_size;
+ tx_chn->tx_pause_on_err = cfg->tx_pause_on_err;
+ tx_chn->tx_filt_einfo = cfg->tx_filt_einfo;
+ tx_chn->tx_filt_pswords = cfg->tx_filt_pswords;
+ tx_chn->tx_supr_tdpkt = cfg->tx_supr_tdpkt;
+
+ /* parse of udmap channel */
+ ret = of_k3_udma_glue_parse_chn(dev->of_node, name,
+ &tx_chn->common, true);
+ if (ret)
+ goto err;
+
+ ret = k3_udma_glue_request_tx_chn_common(dev, tx_chn, cfg);
+ if (ret)
+ goto err;
+
return tx_chn;
err:
@@ -399,6 +414,40 @@ struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
}
EXPORT_SYMBOL_GPL(k3_udma_glue_request_tx_chn);
+struct k3_udma_glue_tx_channel *
+k3_udma_glue_request_tx_chn_by_id(struct device *dev, struct k3_udma_glue_tx_channel_cfg *cfg,
+ struct device_node *udmax_np, u32 thread_id)
+{
+ struct k3_udma_glue_tx_channel *tx_chn;
+ int ret;
+
+ tx_chn = devm_kzalloc(dev, sizeof(*tx_chn), GFP_KERNEL);
+ if (!tx_chn)
+ return ERR_PTR(-ENOMEM);
+
+ tx_chn->common.dev = dev;
+ tx_chn->common.swdata_size = cfg->swdata_size;
+ tx_chn->tx_pause_on_err = cfg->tx_pause_on_err;
+ tx_chn->tx_filt_einfo = cfg->tx_filt_einfo;
+ tx_chn->tx_filt_pswords = cfg->tx_filt_pswords;
+ tx_chn->tx_supr_tdpkt = cfg->tx_supr_tdpkt;
+
+ ret = of_k3_udma_glue_parse_chn_by_id(udmax_np, &tx_chn->common, true, thread_id);
+ if (ret)
+ goto err;
+
+ ret = k3_udma_glue_request_tx_chn_common(dev, tx_chn, cfg);
+ if (ret)
+ goto err;
+
+ return tx_chn;
+
+err:
+ k3_udma_glue_release_tx_chn(tx_chn);
+ return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(k3_udma_glue_request_tx_chn_by_id);
+
void k3_udma_glue_release_tx_chn(struct k3_udma_glue_tx_channel *tx_chn)
{
if (tx_chn->psil_paired) {
diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
index e443be4d3b4b..6205d84430ca 100644
--- a/include/linux/dma/k3-udma-glue.h
+++ b/include/linux/dma/k3-udma-glue.h
@@ -26,6 +26,10 @@ struct k3_udma_glue_tx_channel;
struct k3_udma_glue_tx_channel *k3_udma_glue_request_tx_chn(struct device *dev,
const char *name, struct k3_udma_glue_tx_channel_cfg *cfg);
+struct k3_udma_glue_tx_channel *
+k3_udma_glue_request_tx_chn_by_id(struct device *dev, struct k3_udma_glue_tx_channel_cfg *cfg,
+ struct device_node *udmax_np, u32 thread_id);
+
void k3_udma_glue_release_tx_chn(struct k3_udma_glue_tx_channel *tx_chn);
int k3_udma_glue_push_tx_chn(struct k3_udma_glue_tx_channel *tx_chn,
struct cppi5_host_desc_t *desc_tx,
--
2.34.1
* [PATCH 3/4] dmaengine: ti: k3-udma-glue: Add function to request RX channel by ID
2023-11-14 8:39 [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 1/4] dmaengine: ti: k3-udma-glue: Add function to parse channel " Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 2/4] dmaengine: ti: k3-udma-glue: Add function to request TX " Siddharth Vadapalli
@ 2023-11-14 8:39 ` Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 4/4] dmaengine: ti: k3-udma-glue: Update name for remote RX channel device Siddharth Vadapalli
2023-11-15 19:59 ` [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Péter Ujfalusi
4 siblings, 0 replies; 10+ messages in thread
From: Siddharth Vadapalli @ 2023-11-14 8:39 UTC (permalink / raw)
To: peter.ujfalusi, vkoul
Cc: dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr,
s-vadapalli
The existing function k3_udma_glue_request_remote_rx_chn() supports
requesting an RX DMA channel and flow by the name of the RX DMA channel.
Add support for requesting an RX channel by its thread ID in the form of
a new function, k3_udma_glue_request_remote_rx_chn_by_id(), and export
it.
Since the implementation of k3_udma_glue_request_remote_rx_chn_by_id()
reuses most of the code in k3_udma_glue_request_remote_rx_chn(), move
the common code into a new function,
k3_udma_glue_request_remote_rx_chn_common().
Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
---
drivers/dma/ti/k3-udma-glue.c | 140 ++++++++++++++++++++++---------
include/linux/dma/k3-udma-glue.h | 4 +
2 files changed, 103 insertions(+), 41 deletions(-)
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index d3f04d446c4e..167fe77de71e 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -1076,52 +1076,21 @@ k3_udma_glue_request_rx_chn_priv(struct device *dev, const char *name,
return ERR_PTR(ret);
}
-static struct k3_udma_glue_rx_channel *
-k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
- struct k3_udma_glue_rx_channel_cfg *cfg)
+static int
+k3_udma_glue_request_remote_rx_chn_common(struct k3_udma_glue_rx_channel *rx_chn,
+ struct k3_udma_glue_rx_channel_cfg *cfg,
+ struct device *dev)
{
- struct k3_udma_glue_rx_channel *rx_chn;
int ret, i;
- if (cfg->flow_id_num <= 0 ||
- cfg->flow_id_use_rxchan_id ||
- cfg->def_flow_cfg ||
- cfg->flow_id_base < 0)
- return ERR_PTR(-EINVAL);
-
- /*
- * Remote RX channel is under control of Remote CPU core, so
- * Linux can only request and manipulate by dedicated RX flows
- */
-
- rx_chn = devm_kzalloc(dev, sizeof(*rx_chn), GFP_KERNEL);
- if (!rx_chn)
- return ERR_PTR(-ENOMEM);
-
- rx_chn->common.dev = dev;
- rx_chn->common.swdata_size = cfg->swdata_size;
- rx_chn->remote = true;
- rx_chn->udma_rchan_id = -1;
- rx_chn->flow_num = cfg->flow_id_num;
- rx_chn->flow_id_base = cfg->flow_id_base;
- rx_chn->psil_paired = false;
-
- /* parse of udmap channel */
- ret = of_k3_udma_glue_parse_chn(dev->of_node, name,
- &rx_chn->common, false);
- if (ret)
- goto err;
-
rx_chn->common.hdesc_size = cppi5_hdesc_calc_size(rx_chn->common.epib,
- rx_chn->common.psdata_size,
- rx_chn->common.swdata_size);
+ rx_chn->common.psdata_size,
+ rx_chn->common.swdata_size);
rx_chn->flows = devm_kcalloc(dev, rx_chn->flow_num,
sizeof(*rx_chn->flows), GFP_KERNEL);
- if (!rx_chn->flows) {
- ret = -ENOMEM;
- goto err;
- }
+ if (!rx_chn->flows)
+ return -ENOMEM;
rx_chn->common.chan_dev.class = &k3_udma_glue_devclass;
rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax);
@@ -1132,7 +1101,7 @@ k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
dev_err(dev, "Channel Device registration failed %d\n", ret);
put_device(&rx_chn->common.chan_dev);
rx_chn->common.chan_dev.parent = NULL;
- goto err;
+ return ret;
}
if (xudma_is_pktdma(rx_chn->common.udmax)) {
@@ -1144,19 +1113,108 @@ k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
ret = k3_udma_glue_allocate_rx_flows(rx_chn, cfg);
if (ret)
- goto err;
+ return ret;
for (i = 0; i < rx_chn->flow_num; i++)
rx_chn->flows[i].udma_rflow_id = rx_chn->flow_id_base + i;
k3_udma_glue_dump_rx_chn(rx_chn);
+ return 0;
+}
+
+static struct k3_udma_glue_rx_channel *
+k3_udma_glue_request_remote_rx_chn(struct device *dev, const char *name,
+ struct k3_udma_glue_rx_channel_cfg *cfg)
+{
+ struct k3_udma_glue_rx_channel *rx_chn;
+ int ret;
+
+ if (cfg->flow_id_num <= 0 ||
+ cfg->flow_id_use_rxchan_id ||
+ cfg->def_flow_cfg ||
+ cfg->flow_id_base < 0)
+ return ERR_PTR(-EINVAL);
+
+ /*
+ * Remote RX channel is under control of Remote CPU core, so
+ * Linux can only request and manipulate by dedicated RX flows
+ */
+
+ rx_chn = devm_kzalloc(dev, sizeof(*rx_chn), GFP_KERNEL);
+ if (!rx_chn)
+ return ERR_PTR(-ENOMEM);
+
+ rx_chn->common.dev = dev;
+ rx_chn->common.swdata_size = cfg->swdata_size;
+ rx_chn->remote = true;
+ rx_chn->udma_rchan_id = -1;
+ rx_chn->flow_num = cfg->flow_id_num;
+ rx_chn->flow_id_base = cfg->flow_id_base;
+ rx_chn->psil_paired = false;
+
+ /* parse of udmap channel */
+ ret = of_k3_udma_glue_parse_chn(dev->of_node, name,
+ &rx_chn->common, false);
+ if (ret)
+ goto err;
+
+ ret = k3_udma_glue_request_remote_rx_chn_common(rx_chn, cfg, dev);
+ if (ret)
+ goto err;
+
+ return rx_chn;
+
+err:
+ k3_udma_glue_release_rx_chn(rx_chn);
+ return ERR_PTR(ret);
+}
+
+struct k3_udma_glue_rx_channel *
+k3_udma_glue_request_remote_rx_chn_by_id(struct device *dev, struct device_node *udmax_np,
+ struct k3_udma_glue_rx_channel_cfg *cfg, u32 thread_id)
+{
+ struct k3_udma_glue_rx_channel *rx_chn;
+ int ret;
+
+ if (cfg->flow_id_num <= 0 ||
+ cfg->flow_id_use_rxchan_id ||
+ cfg->def_flow_cfg ||
+ cfg->flow_id_base < 0)
+ return ERR_PTR(-EINVAL);
+
+ /*
+ * Remote RX channel is under control of Remote CPU core, so
+ * Linux can only request and manipulate by dedicated RX flows
+ */
+
+ rx_chn = devm_kzalloc(dev, sizeof(*rx_chn), GFP_KERNEL);
+ if (!rx_chn)
+ return ERR_PTR(-ENOMEM);
+
+ rx_chn->common.dev = dev;
+ rx_chn->common.swdata_size = cfg->swdata_size;
+ rx_chn->remote = true;
+ rx_chn->udma_rchan_id = -1;
+ rx_chn->flow_num = cfg->flow_id_num;
+ rx_chn->flow_id_base = cfg->flow_id_base;
+ rx_chn->psil_paired = false;
+
+ ret = of_k3_udma_glue_parse_chn_by_id(udmax_np, &rx_chn->common, false, thread_id);
+ if (ret)
+ goto err;
+
+ ret = k3_udma_glue_request_remote_rx_chn_common(rx_chn, cfg, dev);
+ if (ret)
+ goto err;
+
return rx_chn;
err:
k3_udma_glue_release_rx_chn(rx_chn);
return ERR_PTR(ret);
}
+EXPORT_SYMBOL_GPL(k3_udma_glue_request_remote_rx_chn_by_id);
struct k3_udma_glue_rx_channel *
k3_udma_glue_request_rx_chn(struct device *dev, const char *name,
diff --git a/include/linux/dma/k3-udma-glue.h b/include/linux/dma/k3-udma-glue.h
index 6205d84430ca..a81d1b8f889c 100644
--- a/include/linux/dma/k3-udma-glue.h
+++ b/include/linux/dma/k3-udma-glue.h
@@ -108,6 +108,10 @@ struct k3_udma_glue_rx_channel_cfg {
struct k3_udma_glue_rx_channel;
+struct k3_udma_glue_rx_channel *
+k3_udma_glue_request_remote_rx_chn_by_id(struct device *dev, struct device_node *udmax_np,
+ struct k3_udma_glue_rx_channel_cfg *cfg, u32 thread_id);
+
struct k3_udma_glue_rx_channel *k3_udma_glue_request_rx_chn(
struct device *dev,
const char *name,
--
2.34.1
* [PATCH 4/4] dmaengine: ti: k3-udma-glue: Update name for remote RX channel device
2023-11-14 8:39 [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Siddharth Vadapalli
` (2 preceding siblings ...)
2023-11-14 8:39 ` [PATCH 3/4] dmaengine: ti: k3-udma-glue: Add function to request RX " Siddharth Vadapalli
@ 2023-11-14 8:39 ` Siddharth Vadapalli
2023-11-15 19:59 ` [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Péter Ujfalusi
4 siblings, 0 replies; 10+ messages in thread
From: Siddharth Vadapalli @ 2023-11-14 8:39 UTC (permalink / raw)
To: peter.ujfalusi, vkoul
Cc: dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr,
s-vadapalli
A single RX Channel can have multiple flows, and it is possible for a
single device to request multiple flows on the same RX Channel. Since
the existing implementation names the device using only the RX Channel's
source thread, requesting different flows on the same RX Channel results
in duplicate device names. Avoid the duplication by also including the
RX flow in the name.
Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
---
drivers/dma/ti/k3-udma-glue.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/ti/k3-udma-glue.c b/drivers/dma/ti/k3-udma-glue.c
index 167fe77de71e..9415f03757bb 100644
--- a/drivers/dma/ti/k3-udma-glue.c
+++ b/drivers/dma/ti/k3-udma-glue.c
@@ -1094,8 +1094,8 @@ k3_udma_glue_request_remote_rx_chn_common(struct k3_udma_glue_rx_channel *rx_chn
rx_chn->common.chan_dev.class = &k3_udma_glue_devclass;
rx_chn->common.chan_dev.parent = xudma_get_device(rx_chn->common.udmax);
- dev_set_name(&rx_chn->common.chan_dev, "rchan_remote-0x%04x",
- rx_chn->common.src_thread);
+ dev_set_name(&rx_chn->common.chan_dev, "rchan_remote-0x%04x-0x%02x",
+ rx_chn->common.src_thread, rx_chn->flow_id_base);
ret = device_register(&rx_chn->common.chan_dev);
if (ret) {
dev_err(dev, "Channel Device registration failed %d\n", ret);
--
2.34.1
* Re: [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID
2023-11-14 8:39 [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Siddharth Vadapalli
` (3 preceding siblings ...)
2023-11-14 8:39 ` [PATCH 4/4] dmaengine: ti: k3-udma-glue: Update name for remote RX channel device Siddharth Vadapalli
@ 2023-11-15 19:59 ` Péter Ujfalusi
2023-11-17 5:55 ` Siddharth Vadapalli
4 siblings, 1 reply; 10+ messages in thread
From: Péter Ujfalusi @ 2023-11-15 19:59 UTC (permalink / raw)
To: Siddharth Vadapalli, vkoul
Cc: dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr
On 14/11/2023 10:39, Siddharth Vadapalli wrote:
> The existing APIs for requesting TX and RX DMA channels rely on parsing
> a device-tree node to obtain the Channel/Thread IDs from their names.
Yes, since it is a DMA device and it is using the standard DMA mapping.
It is by design that the standard DMAengine and the custom glue layer
(which should have been a temporary solution) use the same standard DMA
binding, to make sure that we do not deviate from the standard and can
eventually move the glue users to DMAengine (which would need core
changes).
> However, it is possible to know the thread IDs by alternative means such
> as being informed by Firmware on a remote core regarding the allocated
> TX/RX DMA channel IDs. Thus, add APIs to support such use cases.
I see, so the TISCI resource manager is going to manage the
channels/flows for some peripherals?
What is the API and parameters to get these channels?
I would really like to follow a standard binding, since what happens if
the firmware starts to provision channels/flows for DMAengine users?
That is not so simple to hack around.
My initial take is that this can be implemented via the existing DMA
crossbar support. It has been created exactly for this sort of purpose.
I'm sure you need to provide some sort of parameters to TISCI to get the
channel/rflow provisioned for the requesting host, right?
The crossbar implements the binding with the given set of parameters,
does the needed 'black magic' to get the information needed for the
target DMA, crafts the binding for it, and gets the channel.
If you take a look at the drivers/dma/ti/dma-crossbar.c, it implements
two types of crossbars.
For DMAengine, it would be relatively simple to write a new one for
TISCI. The glue layer might need a bit more work as it does not rely on
the core, but I would not think it is that complicated to extend it to
handle a crossbar binding.
The benefit is that none of the clients need to know how the channel is
looked up; they just request an RX channel and, depending on the
binding, either get it directly from the DMA or get the translation via
the crossbar to be able to fetch the channel.
Can you check if this would be doable?
For reference:
Documentation/devicetree/bindings/dma/dma-router.yaml
Documentation/devicetree/bindings/dma/ti-dma-crossbar.txt
drivers/dma/ti/dma-crossbar.c
> Additionally, since the name of the device for the remote RX channel is
> being set purely on the basis of the RX channel ID itself, it can result
> in duplicate names when multiple flows are used on the same channel.
> Avoid name duplication by including the flow in the name.
Make sense.
> Series is based on linux-next tagged next-20231114.
>
> RFC Series:
> https://lore.kernel.org/r/20231111121555.2656760-1-s-vadapalli@ti.com/
>
> Changes since RFC Series:
> - Rebased patches 1, 2 and 3 on linux-next tagged next-20231114.
> - Added patch 4 to the series.
>
> Regards,
> Siddharth.
>
> Siddharth Vadapalli (4):
> dmaengine: ti: k3-udma-glue: Add function to parse channel by ID
> dmaengine: ti: k3-udma-glue: Add function to request TX channel by ID
> dmaengine: ti: k3-udma-glue: Add function to request RX channel by ID
> dmaengine: ti: k3-udma-glue: Update name for remote RX channel device
>
> drivers/dma/ti/k3-udma-glue.c | 306 ++++++++++++++++++++++---------
> include/linux/dma/k3-udma-glue.h | 8 +
> 2 files changed, 228 insertions(+), 86 deletions(-)
>
--
Péter
* Re: [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID
2023-11-15 19:59 ` [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Péter Ujfalusi
@ 2023-11-17 5:55 ` Siddharth Vadapalli
2023-11-22 15:22 ` Péter Ujfalusi
0 siblings, 1 reply; 10+ messages in thread
From: Siddharth Vadapalli @ 2023-11-17 5:55 UTC (permalink / raw)
To: Péter Ujfalusi
Cc: vkoul, dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr,
s-vadapalli
Hello Péter,
On 16/11/23 01:29, Péter Ujfalusi wrote:
>
>
> On 14/11/2023 10:39, Siddharth Vadapalli wrote:
>> The existing APIs for requesting TX and RX DMA channels rely on parsing
>> a device-tree node to obtain the Channel/Thread IDs from their names.
>
> Yes, since it is a DMA device and it is using the standard DMA mapping.
> It is by design that the standard DMAengine and the custom glue layer
> (which should have been a temporary solution) uses the same standard DMA
> binding to make sure that we are not going to deviate from the standard
> and be able to move the glue users to DMAengine (which would need core
> changes).
>
>> However, it is possible to know the thread IDs by alternative means such
>> as being informed by Firmware on a remote core regarding the allocated
>> TX/RX DMA channel IDs. Thus, add APIs to support such use cases.
>
> I see, so the TISCI res manager is going to managed the channels/flows
> for some peripherals?
>
> What is the API and parameters to get these channels?
>
> I would really like to follow a standard binding since what will happen
> if the firmware will start to provision channels/flows for DMAengine
> users? It is not that simple to hack that around.
Please consider the following use-case for which the APIs are being added by
this series. I apologize for not explaining the idea behind the APIs in more
detail earlier.
Firmware running on a remote core is in control of a peripheral (a CPSW
Ethernet Switch, for example) and shares the peripheral across software
running on different cores. The control path between the Firmware and
the Clients on the various cores is via RPMsg, while the data path used
by the Clients is the DMA channels. In the example where Clients send
data to the shared peripheral over DMA, the Clients send RPMsg-based
requests to the Firmware to obtain the allocated thread IDs. The
Firmware allocates the thread IDs by making a request to the TISCI
Resource Manager and then shares them with the Clients.
In such use cases, the Linux Client is probed by RPMsg endpoint
discovery over the RPMsg bus, so there is no device-tree node
corresponding to the Client device. The Client learns the DMA channel
IDs as well as the RX flow details from the Firmware. Knowing these
details, the Client can request the configuration of the TX and RX
channels/flows using the DMA APIs which this series adds.
Please let me know in case of any suggestions for an implementation
which would address the above use case.
>
> My initial take is that this can be implemented via the existing DMA
> crossbar support. It has been created exactly for this sort of purpose.
> I'm sure you need to provide some sort of parameters to TISCI to get the
> channel/rflow provisioned for the host requesting, right?
> The crossbar implements the binding with the given set of parameters,
> does the needed 'black magic' to get the information needed for the
> target DMA and crafts the binding for it and get's the channel.
>
> If you take a look at the drivers/dma/ti/dma-crossbar.c, it implements
> two types of crossbars.
>
> For DMAengine, it would be relatively simple to write a new one for
> tisci, The glue layer might needs a bit more work as it is not relying
> on core, but I would not think that it is that much complicated to
> extend it to be able to handle a crossbar binding.
> The benefit is that none of the clients should not need to know about
> the way the channel is looked up, they just request for an RX channel
> and depending on the binding they will get it directly from the DMA or
> get the translation via the crossbar to be able to fetch the channel.
>
> Can you check if this would be doable?
>
> For reference:
> Documentation/devicetree/bindings/dma/dma-router.yaml
> Documentation/devicetree/bindings/dma/ti-dma-crossbar.txt
> drivers/dma/ti/dma-crossbar.c
>
>> Additionally, since the name of the device for the remote RX channel is
>> being set purely on the basis of the RX channel ID itself, it can result
>> in duplicate names when multiple flows are used on the same channel.
>> Avoid name duplication by including the flow in the name.
>
> Make sense.
May I post that patch separately in that case?
>> Series is based on linux-next tagged next-20231114.
>>
>> RFC Series:
>> https://lore.kernel.org/r/20231111121555.2656760-1-s-vadapalli@ti.com/
>>
>> Changes since RFC Series:
>> - Rebased patches 1, 2 and 3 on linux-next tagged next-20231114.
>> - Added patch 4 to the series.
>>
>> Regards,
>> Siddharth.
>>
>> Siddharth Vadapalli (4):
>> dmaengine: ti: k3-udma-glue: Add function to parse channel by ID
>> dmaengine: ti: k3-udma-glue: Add function to request TX channel by ID
>> dmaengine: ti: k3-udma-glue: Add function to request RX channel by ID
>> dmaengine: ti: k3-udma-glue: Update name for remote RX channel device
>>
>> drivers/dma/ti/k3-udma-glue.c | 306 ++++++++++++++++++++++---------
>> include/linux/dma/k3-udma-glue.h | 8 +
>> 2 files changed, 228 insertions(+), 86 deletions(-)
>>
>
--
Regards,
Siddharth.
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID
2023-11-17 5:55 ` Siddharth Vadapalli
@ 2023-11-22 15:22 ` Péter Ujfalusi
2023-12-04 8:21 ` Siddharth Vadapalli
0 siblings, 1 reply; 10+ messages in thread
From: Péter Ujfalusi @ 2023-11-22 15:22 UTC (permalink / raw)
To: Siddharth Vadapalli
Cc: vkoul, dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr
Hi Siddharth,
On 17/11/2023 07:55, Siddharth Vadapalli wrote:
>> I would really like to follow a standard binding, since what will happen
>> if the firmware starts to provision channels/flows for DMAengine
>> users? It is not that simple to hack around.
>
> Please consider the following use-case for which the APIs are being added by
> this series. I apologize for not explaining the idea behind the APIs in more
> detail earlier.
>
> Firmware running on a remote core is in control of a peripheral (CPSW Ethernet
> Switch for example) and shares the peripheral across software running on
> different cores. The control path between the Firmware and the Clients on
> various cores is via RPMsg, while the data path used by the Clients is the DMA
> Channels. In the example where Clients send data to the shared peripheral over
> DMA, the Clients send RPMsg-based requests to the Firmware to obtain the
> allocated thread IDs. Firmware allocates the thread IDs by making a request to
> TISCI Resource Manager followed by sharing the thread IDs to the Clients.
>
> In such use cases, the Linux Client is probed by RPMsg endpoint discovery over
> the RPMsg bus. Therefore, there is no device-tree corresponding to the Client
> device. The Client knows the DMA Channel IDs as well as the RX Flow details from
> the Firmware. Knowing these details, the Client can request the configuration of
> the TX and RX Channels/Flows by using the DMA APIs which this series adds.
I see, so the CPSW will be probed in a similar way as USB peripherals
for example? The CPSW does not have a DT entry at all? Is this correct?
> Please let me know in case of any suggestions for an implementation which shall
> address the above use-case.
How does the driver know how to request a DMA resource from the remote
core? How does that scale with different SoCs and even with changes in the
firmware?
You are right, this is in a grey area. As the DMA channel is
controlled by the remote processor, it lends a thread to clients on
other cores (like Linux) via RPMsg.
Well, it is similar to how non-DT probing works, in a way.
This CPSW type is not yet supported in mainline, right?
--
Péter
* Re: [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID
2023-11-22 15:22 ` Péter Ujfalusi
@ 2023-12-04 8:21 ` Siddharth Vadapalli
2023-12-11 9:00 ` Siddharth Vadapalli
0 siblings, 1 reply; 10+ messages in thread
From: Siddharth Vadapalli @ 2023-12-04 8:21 UTC (permalink / raw)
To: Péter Ujfalusi
Cc: vkoul, dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr,
s-vadapalli
Hello Péter,
On 22/11/23 20:52, Péter Ujfalusi wrote:
> Hi Siddharth,
>
> On 17/11/2023 07:55, Siddharth Vadapalli wrote:
>>> I would really like to follow a standard binding since what will happen
>>> if the firmware will start to provision channels/flows for DMAengine
>>> users? It is not that simple to hack that around.
>>
>> Please consider the following use-case for which the APIs are being added by
>> this series. I apologize for not explaining the idea behind the APIs in more
>> detail earlier.
>>
>> Firmware running on a remote core is in control of a peripheral (CPSW Ethernet
>> Switch for example) and shares the peripheral across software running on
>> different cores. The control path between the Firmware and the Clients on
>> various cores is via RPMsg, while the data path used by the Clients is the DMA
>> Channels. In the example where Clients send data to the shared peripheral over
>> DMA, the Clients send RPMsg-based requests to the Firmware to obtain the
>> allocated thread IDs. Firmware allocates the thread IDs by making a request to
>> TISCI Resource Manager followed by sharing the thread IDs to the Clients.
>>
>> In such use cases, the Linux Client is probed by RPMsg endpoint discovery over
>> the RPMsg bus. Therefore, there is no device-tree corresponding to the Client
>> device. The Client knows the DMA Channel IDs as well as the RX Flow details from
>> the Firmware. Knowing these details, the Client can request the configuration of
>> the TX and RX Channels/Flows by using the DMA APIs which this series adds.
>
> I see, so the CPSW will be probed in a similar way as USB peripherals
> for example? The CPSW does not have a DT entry at all? Is this correct?
I apologize for the delayed response. Yes, the CPSW instance which shall be under
the control of the Firmware running on the remote core will not have a DT entry. The
Linux Client driver shall be probed when the Firmware announces its endpoint over
the RPMsg bus, an endpoint which the Client driver registers for with the RPMsg framework.
>
>> Please let me know in case of any suggestions for an implementation which shall
>> address the above use-case.
>
> How does the driver know how to request a DMA resource from the remote
> core? How does that scale with different SoCs and even with changes in the
> firmware?
After getting probed, the Client driver communicates with the Firmware via RPMsg,
requesting details of the allocated resources, including the TX Channels and RX
Flows. Knowing these parameters, the Client driver can use the newly added DMA
APIs to request TX Channels and RX Flows by ID. The only dependency here is that
the Client driver needs to know which DMA instance to request these resources
from. That information is hard-coded in the driver's data in the form of the
compatible string used for the DMA instance, thereby allowing the Client driver to
get a reference to the DMA controller node using the of_find_compatible_node() API.
Since all the resource allocation information comes from the Firmware, the
device-specific details will be hard-coded in the Firmware, while the Client
driver can be used across all K3 SoCs which have the same DMA APIs.
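A rough sketch of that flow follows. The glue-layer function name follows this series, but the compatible string, the `fw_resp` structure, and the surrounding variables are illustrative placeholders, and most error handling is elided:

```c
/* Sketch only: fw_resp is a hypothetical decoded RPMsg response, and the
 * PKTDMA compatible string would come from the driver's match data. */
struct device_node *dma_node;
struct k3_udma_glue_tx_channel *tx_chn;

/* Thread IDs for the allocated channels arrive from the Firmware over RPMsg. */
u32 tx_thread_id = fw_resp.tx_thread_id;

/* Locate the DMA instance by its compatible, hard-coded in driver data. */
dma_node = of_find_compatible_node(NULL, NULL, "ti,am64-dmss-pktdma");
if (!dma_node)
	return -ENODEV;

/* Request the TX channel directly by thread ID instead of by DT name. */
tx_chn = k3_udma_glue_request_tx_chn_for_thread_id(dev, &tx_cfg,
						   dma_node, tx_thread_id);
of_node_put(dma_node);
if (IS_ERR(tx_chn))
	return PTR_ERR(tx_chn);
```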
>
> You are right, this is in a grey area. The DMA channel as it is
> controlled by the remote processor, it lends a thread to clients on
> other cores (like Linux) via RPMsg.
> Well, it is similar to how non DT is working in a way.
>
> This CPSW type is not yet supported mainline, right?
Yes, it is not yet supported in mainline. This series is a dependency for
upstreaming the Client driver.
--
Regards,
Siddharth.
* Re: [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID
2023-12-04 8:21 ` Siddharth Vadapalli
@ 2023-12-11 9:00 ` Siddharth Vadapalli
0 siblings, 0 replies; 10+ messages in thread
From: Siddharth Vadapalli @ 2023-12-11 9:00 UTC (permalink / raw)
To: Péter Ujfalusi
Cc: vkoul, dmaengine, linux-kernel, linux-arm-kernel, srk, vigneshr,
s-vadapalli
If there are no concerns, may I post the v2 of this series, rebasing it on the
latest linux-next tree with minor code cleanup and reordering of the patches?
On 04/12/23 13:51, Siddharth Vadapalli wrote:
> Hello Péter,
>
> On 22/11/23 20:52, Péter Ujfalusi wrote:
>> Hi Siddharth,
>>
>> On 17/11/2023 07:55, Siddharth Vadapalli wrote:
>>>> I would really like to follow a standard binding since what will happen
>>>> if the firmware will start to provision channels/flows for DMAengine
>>>> users? It is not that simple to hack that around.
>>>
>>> Please consider the following use-case for which the APIs are being added by
>>> this series. I apologize for not explaining the idea behind the APIs in more
>>> detail earlier.
>>>
>>> Firmware running on a remote core is in control of a peripheral (CPSW Ethernet
>>> Switch for example) and shares the peripheral across software running on
>>> different cores. The control path between the Firmware and the Clients on
>>> various cores is via RPMsg, while the data path used by the Clients is the DMA
>>> Channels. In the example where Clients send data to the shared peripheral over
>>> DMA, the Clients send RPMsg-based requests to the Firmware to obtain the
>>> allocated thread IDs. Firmware allocates the thread IDs by making a request to
>>> TISCI Resource Manager followed by sharing the thread IDs to the Clients.
>>>
>>> In such use cases, the Linux Client is probed by RPMsg endpoint discovery over
>>> the RPMsg bus. Therefore, there is no device-tree corresponding to the Client
>>> device. The Client knows the DMA Channel IDs as well as the RX Flow details from
>>> the Firmware. Knowing these details, the Client can request the configuration of
>>> the TX and RX Channels/Flows by using the DMA APIs which this series adds.
>>
>> I see, so the CPSW will be probed in a similar way as USB peripherals
>> for example? The CPSW does not have a DT entry at all? Is this correct?
>
> I apologize for the delayed response. Yes, the CPSW instance which shall be in
> control of Firmware running on the remote core will not have a DT entry. The
> Linux Client driver shall be probed when the Firmware announces its endpoint
> over the RPMsg bus, which the Client driver shall register with the RPMsg framework.
>
>>
>>> Please let me know in case of any suggestions for an implementation which shall
>>> address the above use-case.
>>
>> How does the driver know how to request a DMA resource from the remote
>> core? How does that scale with different SoCs and even with changes in the
>> firmware?
>
> After getting probed, the Client driver communicates with Firmware via RPMsg,
> requesting details of the allocated resources including the TX Channels and RX
> Flows. Knowing these parameters, the Client driver can use the newly added DMA
> APIs to request TX Channel and RX Flows by IDs. The only dependency here is that
> the Client driver needs to know which DMA instance to request these resources
> from. That information is hard coded in the driver's data in the form of the
> compatible used for the DMA instance, thereby allowing the Client driver to get
> a reference to the DMA controller node using the of_find_compatible_node() API.
>
> Since all the resource allocation information comes from Firmware, the
> device-specific details will be hard coded in the Firmware while the Client
> driver can be used across all K3 SoCs which have the same DMA APIs.
>
>>
>> You are right, this is in a grey area. The DMA channel as it is
>> controlled by the remote processor, it lends a thread to clients on
>> other cores (like Linux) via RPMsg.
>> Well, it is similar to how non DT is working in a way.
>>
>> This CPSW type is not yet supported mainline, right?
>
> Yes, it is not yet supported in mainline. This series is a dependency for
> upstreaming the Client driver.
>
--
Regards,
Siddharth.
end of thread
Thread overview: 10+ messages
2023-11-14 8:39 [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 1/4] dmaengine: ti: k3-udma-glue: Add function to parse channel " Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 2/4] dmaengine: ti: k3-udma-glue: Add function to request TX " Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 3/4] dmaengine: ti: k3-udma-glue: Add function to request RX " Siddharth Vadapalli
2023-11-14 8:39 ` [PATCH 4/4] dmaengine: ti: k3-udma-glue: Update name for remote RX channel device Siddharth Vadapalli
2023-11-15 19:59 ` [PATCH 0/4] Add APIs to request TX/RX DMA channels by ID Péter Ujfalusi
2023-11-17 5:55 ` Siddharth Vadapalli
2023-11-22 15:22 ` Péter Ujfalusi
2023-12-04 8:21 ` Siddharth Vadapalli
2023-12-11 9:00 ` Siddharth Vadapalli