On Apr 15, Simon Horman wrote:
> On Tue, Apr 14, 2026 at 08:50:52AM +0200, Lorenzo Bianconi wrote:
> > Similar to airoha_qdma_cleanup_rx_queue(), reset DMA TX descriptors in
> > airoha_qdma_cleanup_tx_queue routine. Moreover, reset TX_DMA_IDX to
> > TX_CPU_IDX to notify the NIC the QDMA TX ring is empty.
> >
> > Fixes: 23020f0493270 ("net: airoha: Introduce ethernet support for EN7581 SoC")
> > Signed-off-by: Lorenzo Bianconi
> > ---
> > Changes in v2:
> > - Move q->ndesc initialization at end of airoha_qdma_init_tx routine in
> >   order to avoid any possible NULL pointer dereference in
> >   airoha_qdma_cleanup_tx_queue()
>
> This seems to be a separate issue.
> If so, I think it should be split out into a separate patch.
>
> > - Check if q->tx_list is empty in airoha_qdma_cleanup_tx_queue()
> > - Link to v1: https://lore.kernel.org/r/20260410-airoha_qdma_cleanup_tx_queue-fix-net-v1-1-b7171c8f1e78@kernel.org
>
> I think it was covered in the review Jakub forwarded for v1. But FTR,
> Sashiko has some feedback on this patch in the form of an existing bug
> (that should almost certainly be handled separately from this patch).

Hi Simon,

I took a look at Sashiko's report [0], but this issue is not introduced
by this patch and, even if the suggested approach would be better, I
believe the hw is capable of managing out-of-order TX descriptors. So I
think this patch is fine as it is, agree?
[0] https://sashiko.dev/#/patchset/20260414-airoha_qdma_cleanup_tx_queue-fix-net-v2-1-875de57cc022%40kernel.org

> > ---
> >  drivers/net/ethernet/airoha/airoha_eth.c | 41 ++++++++++++++++++++++++++------
> >  1 file changed, 34 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/airoha/airoha_eth.c b/drivers/net/ethernet/airoha/airoha_eth.c
> > index 9e995094c32a..3c1a2bc68c42 100644
> > --- a/drivers/net/ethernet/airoha/airoha_eth.c
> > +++ b/drivers/net/ethernet/airoha/airoha_eth.c
> > @@ -966,27 +966,27 @@ static int airoha_qdma_init_tx_queue(struct airoha_queue *q,
> >  	dma_addr_t dma_addr;
> >
> >  	spin_lock_init(&q->lock);
> > -	q->ndesc = size;
> >  	q->qdma = qdma;
> >  	q->free_thr = 1 + MAX_SKB_FRAGS;
> >  	INIT_LIST_HEAD(&q->tx_list);
> >
> > -	q->entry = devm_kzalloc(eth->dev, q->ndesc * sizeof(*q->entry),
> > +	q->entry = devm_kzalloc(eth->dev, size * sizeof(*q->entry),
> >  				GFP_KERNEL);
> >  	if (!q->entry)
> >  		return -ENOMEM;
> >
> > -	q->desc = dmam_alloc_coherent(eth->dev, q->ndesc * sizeof(*q->desc),
> > +	q->desc = dmam_alloc_coherent(eth->dev, size * sizeof(*q->desc),
> >  				      &dma_addr, GFP_KERNEL);
> >  	if (!q->desc)
> >  		return -ENOMEM;
> >
> > -	for (i = 0; i < q->ndesc; i++) {
> > +	for (i = 0; i < size; i++) {
> >  		u32 val = FIELD_PREP(QDMA_DESC_DONE_MASK, 1);
> >
> >  		list_add_tail(&q->entry[i].list, &q->tx_list);
> >  		WRITE_ONCE(q->desc[i].ctrl, cpu_to_le32(val));
> >  	}
> > +	q->ndesc = size;
> >
> >  	/* xmit ring drop default setting */
> >  	airoha_qdma_set(qdma, REG_TX_RING_BLOCKING(qid),
> > @@ -1051,13 +1051,17 @@ static int airoha_qdma_init_tx(struct airoha_qdma *qdma)
> >
> >  static void airoha_qdma_cleanup_tx_queue(struct airoha_queue *q)
> >  {
> > -	struct airoha_eth *eth = q->qdma->eth;
> > -	int i;
> > +	struct airoha_qdma *qdma = q->qdma;
> > +	struct airoha_eth *eth = qdma->eth;
> > +	int i, qid = q - &qdma->q_tx[0];
> > +	struct airoha_queue_entry *e;
> > +	u16 index = 0;
> >
> >  	spin_lock_bh(&q->lock);
> >  	for (i = 0; i < q->ndesc; i++) {
> > -		struct airoha_queue_entry *e = &q->entry[i];
>
> super nit: In v2 e is always used within a block (here and in the hunk
> below). So I would lean towards declaring e in the blocks where it is
> used.
>
> No need to repost just for this!

I can fix it if I need to repost.

Regards,
Lorenzo

> > +		struct airoha_qdma_desc *desc = &q->desc[i];
> >
> > +		e = &q->entry[i];
> >  		if (!e->dma_addr)
> >  			continue;
> >
> > @@ -1067,8 +1071,31 @@ static void airoha_qdma_cleanup_tx_queue(struct airoha_queue *q)
> >  		e->dma_addr = 0;
> >  		e->skb = NULL;
> >  		list_add_tail(&e->list, &q->tx_list);
> > +
> > +		/* Reset DMA descriptor */
> > +		WRITE_ONCE(desc->ctrl, 0);
> > +		WRITE_ONCE(desc->addr, 0);
> > +		WRITE_ONCE(desc->data, 0);
> > +		WRITE_ONCE(desc->msg0, 0);
> > +		WRITE_ONCE(desc->msg1, 0);
> > +		WRITE_ONCE(desc->msg2, 0);
> > +
> >  		q->queued--;
> >  	}
> > +
> > +	if (!list_empty(&q->tx_list)) {
> > +		e = list_first_entry(&q->tx_list, struct airoha_queue_entry,
> > +				     list);
> > +		index = e - q->entry;
> > +	}
> > +	/* Set TX_DMA_IDX to TX_CPU_IDX to notify the hw the QDMA TX ring is
> > +	 * empty.
> > +	 */
> > +	airoha_qdma_rmw(qdma, REG_TX_CPU_IDX(qid), TX_RING_CPU_IDX_MASK,
> > +			FIELD_PREP(TX_RING_CPU_IDX_MASK, index));
> > +	airoha_qdma_rmw(qdma, REG_TX_DMA_IDX(qid), TX_RING_DMA_IDX_MASK,
> > +			FIELD_PREP(TX_RING_DMA_IDX_MASK, index));
> > +
> >  	spin_unlock_bh(&q->lock);
> >  }
> >
> >
> > ---
> > base-commit: 2cd7e6971fc2787408ceef17906ea152791448cf
> > change-id: 20260410-airoha_qdma_cleanup_tx_queue-fix-net-93375f5ee80f
> >
> > Best regards,
> > --
> > Lorenzo Bianconi
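FWIW, the index handling in the last hunk can be modelled in isolation.
Below is a minimal user-space sketch of the idea (hypothetical,
simplified names: struct queue, cleanup_tx_queue and an int-based free
list stand in for the driver's airoha_queue, airoha_qdma_cleanup_tx_queue
and list_head machinery; a plain uint array stands in for the DMA
descriptors): return every in-flight entry to the free list, zero its
descriptor, and park both ring indices on the first free entry so the
(simulated) hardware sees an empty ring.

```c
/* Hypothetical user-space model, not the driver code. */
#define NDESC 8

struct entry {
	int queued;	/* stands in for e->dma_addr != 0 */
	int next_free;	/* stands in for the q->tx_list linkage */
};

struct queue {
	struct entry entry[NDESC];
	unsigned int desc[NDESC];	/* stands in for the DMA descriptors */
	int free_head;			/* first free entry, -1 if none */
	int free_tail;
	unsigned int cpu_idx, dma_idx;	/* model TX_CPU_IDX / TX_DMA_IDX */
	int queued;
};

/* Append entry i to the free list, like list_add_tail(&e->list, &q->tx_list) */
static void free_list_add_tail(struct queue *q, int i)
{
	q->entry[i].next_free = -1;
	if (q->free_tail >= 0)
		q->entry[q->free_tail].next_free = i;
	else
		q->free_head = i;
	q->free_tail = i;
}

static void cleanup_tx_queue(struct queue *q)
{
	unsigned int index = 0;
	int i;

	for (i = 0; i < NDESC; i++) {
		if (!q->entry[i].queued)
			continue;
		q->entry[i].queued = 0;
		free_list_add_tail(q, i);
		q->desc[i] = 0;		/* reset the "DMA" descriptor */
		q->queued--;
	}
	if (q->free_head >= 0)
		index = (unsigned int)q->free_head;
	/* DMA index == CPU index tells the hw the ring is empty */
	q->cpu_idx = index;
	q->dma_idx = index;
}
```

With entries 2..4 in flight, cleanup leaves queued == 0 and both indices
pointing at entry 2, which mirrors what the two airoha_qdma_rmw() calls
on REG_TX_CPU_IDX/REG_TX_DMA_IDX do in the patch.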