public inbox for linux-kernel@vger.kernel.org
From: Subbaraya Sundeep <sbhatta@marvell.com>
To: Paolo Abeni <pabeni@redhat.com>
Cc: <linux-kernel@vger.kernel.org>, <lcherian@marvell.com>,
	<andrew+netdev@lunn.ch>, <davem@davemloft.net>, <kuba@kernel.org>,
	<gakula@marvell.com>, <edumazet@google.com>,
	<sgoutham@marvell.com>, <bbhushan2@marvell.com>,
	<netdev@vger.kernel.org>
Subject: Re: [net-next,v4,4/4] octeontx2-pf: cn20k: Use unified Halo context
Date: Fri, 3 Apr 2026 11:31:28 +0530	[thread overview]
Message-ID: <20260403060128.GA1485819@kernel-ep2> (raw)
In-Reply-To: <20260402112011.216286-1-pabeni@redhat.com>

Hi Paolo,

On 2026-04-02 at 16:50:11, Paolo Abeni (pabeni@redhat.com) wrote:
> This is an AI-generated review of your patch. The human sending this
> email has considered the AI review valid, or at least plausible.
> 
> For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
> ---
> octeontx2-pf: cn20k: Use unified Halo context
> 
> This commit replaces the separate aura and pool contexts with the unified
> Halo context available in CN20K hardware for managing network device pools
> and buffer allocation.
> 
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c
> 
> [ ... ]
> 
> > @@ -516,85 +507,8 @@ int cn20k_tc_alloc_entry(struct otx2_nic *nic,
> >
> >  	return 0;
> >  }
> > -
> > -static int cn20k_aura_aq_init(struct otx2_nic *pfvf, int aura_id,
> > -			      int pool_id, int numptrs)
> > -{
> > -	struct npa_cn20k_aq_enq_req *aq;
> > -	struct otx2_pool *pool;
> > -	u8 bpid_idx;
> > -	int err;
> > -
> > -	pool = &pfvf->qset.pool[pool_id];
> > -
> > -	/* Allocate memory for HW to update Aura count.
> > -	 * Alloc one cache line, so that it fits all FC_STYPE modes.
> > -	 */
> > -	if (!pool->fc_addr) {
> > -		err = qmem_alloc(pfvf->dev, &pool->fc_addr, 1, OTX2_ALIGN);
> > -		if (err)
> > -			return err;
> > -	}
> > -
> > -	/* Initialize this aura's context via AF */
> > -	aq = otx2_mbox_alloc_msg_npa_cn20k_aq_enq(&pfvf->mbox);
> > -	if (!aq) {
> > -		/* Shared mbox memory buffer is full, flush it and retry */
> > -		err = otx2_sync_mbox_msg(&pfvf->mbox);
> > -		if (err)
> > -			return err;
> > -		aq = otx2_mbox_alloc_msg_npa_cn20k_aq_enq(&pfvf->mbox);
> > -		if (!aq)
> > -			return -ENOMEM;
> > -	}
> > -
> > -	aq->aura_id = aura_id;
> > -
> > -	/* Will be filled by AF with correct pool context address */
> > -	aq->aura.pool_addr = pool_id;
> > -	aq->aura.pool_caching = 1;
> > -	aq->aura.shift = ilog2(numptrs) - 8;
> > -	aq->aura.count = numptrs;
> > -	aq->aura.limit = numptrs;
> > -	aq->aura.avg_level = 255;
> > -	aq->aura.ena = 1;
> > -	aq->aura.fc_ena = 1;
> > -	aq->aura.fc_addr = pool->fc_addr->iova;
> > -	aq->aura.fc_hyst_bits = 0; /* Store count on all updates */
> >
> > -	/* Enable backpressure for RQ aura */
> > -	if (aura_id < pfvf->hw.rqpool_cnt && !is_otx2_lbkvf(pfvf->pdev)) {
> > -		aq->aura.bp_ena = 0;
> > -		/* If NIX1 LF is attached then specify NIX1_RX.
> > -		 *
> > -		 * Below NPA_AURA_S[BP_ENA] is set according to the
> > -		 * NPA_BPINTF_E enumeration given as:
> > -		 * 0x0 + a*0x1 where 'a' is 0 for NIX0_RX and 1 for NIX1_RX so
> > -		 * NIX0_RX is 0x0 + 0*0x1 = 0
> > -		 * NIX1_RX is 0x0 + 1*0x1 = 1
> > -		 * But in HRM it is given that
> > -		 * "NPA_AURA_S[BP_ENA](w1[33:32]) - Enable aura backpressure to
> > -		 * NIX-RX based on [BP] level. One bit per NIX-RX; index
> > -		 * enumerated by NPA_BPINTF_E."
> > -		 */
> > -		if (pfvf->nix_blkaddr == BLKADDR_NIX1)
> > -			aq->aura.bp_ena = 1;
> > -
> > -		bpid_idx = cn20k_aura_bpid_idx(pfvf, aura_id);
> > -		aq->aura.bpid = pfvf->bpid[bpid_idx];
> > -
> > -		/* Set backpressure level for RQ's Aura */
> > -		aq->aura.bp = RQ_BP_LVL_AURA;
> > -	}
> > -
> > -	/* Fill AQ info */
> > -	aq->ctype = NPA_AQ_CTYPE_AURA;
> > -	aq->op = NPA_AQ_INSTOP_INIT;
> > -
> > -	return 0;
> > -}
> > -
> > -static int cn20k_pool_aq_init(struct otx2_nic *pfvf, u16 pool_id,
> > +static int cn20k_halo_aq_init(struct otx2_nic *pfvf, u16 pool_id,
> >  			      int stack_pages, int numptrs, int buf_size,
> >  			      int type)
> >  {
> > @@ -610,36 +524,57 @@ struct page_pool_params pp_params = { 0 };
> >  	if (err)
> >  		return err;
> >
> > +	/* Allocate memory for HW to update Aura count.
> > +	 * Alloc one cache line, so that it fits all FC_STYPE modes.
> > +	 */
> > +	if (!pool->fc_addr) {
> > +		err = qmem_alloc(pfvf->dev, &pool->fc_addr, 1, OTX2_ALIGN);
> > +		if (err) {
> > +			qmem_free(pfvf->dev, pool->stack);
> > +			return err;
> > +		}
> > +	}
> > +
> >  	pool->rbsize = buf_size;
> >
> > -	/* Initialize this pool's context via AF */
> > +	/* Initialize this aura's context via AF */
> >  	aq = otx2_mbox_alloc_msg_npa_cn20k_aq_enq(&pfvf->mbox);
> >  	if (!aq) {
> >  		/* Shared mbox memory buffer is full, flush it and retry */
> >  		err = otx2_sync_mbox_msg(&pfvf->mbox);
> > -		if (err) {
> > -			qmem_free(pfvf->dev, pool->stack);
> > -			return err;
> > -		}
> > +		if (err)
> > +			goto free_mem;
> >  		aq = otx2_mbox_alloc_msg_npa_cn20k_aq_enq(&pfvf->mbox);
> >  		if (!aq) {
> > -			qmem_free(pfvf->dev, pool->stack);
> > -			return -ENOMEM;
> > +			err = -ENOMEM;
> > +			goto free_mem;
> >  		}
> >  	}
> >
> >  	aq->aura_id = pool_id;
> > -	aq->pool.stack_base = pool->stack->iova;
> > -	aq->pool.stack_caching = 1;
> > -	aq->pool.ena = 1;
> > -	aq->pool.buf_size = buf_size / 128;
> > -	aq->pool.stack_max_pages = stack_pages;
> > -	aq->pool.shift = ilog2(numptrs) - 8;
> > -	aq->pool.ptr_start = 0;
> > -	aq->pool.ptr_end = ~0ULL;
> >
> > +	aq->halo.stack_base = pool->stack->iova;
> > +	aq->halo.stack_caching = 1;
> > +	aq->halo.ena = 1;
> > +	aq->halo.buf_size = buf_size / 128;
> > +	aq->halo.stack_max_pages = stack_pages;
> > +	aq->halo.shift = ilog2(numptrs) - 8;
> > +	aq->halo.ptr_start = 0;
> > +	aq->halo.ptr_end = ~0ULL;
> > +
> > +	aq->halo.avg_level = 255;
> > +	aq->halo.fc_ena = 1;
> > +	aq->halo.fc_addr = pool->fc_addr->iova;
> > +	aq->halo.fc_hyst_bits = 0; /* Store count on all updates */
> > +
> > +	if (pfvf->npa_dpc_valid) {
> > +		aq->halo.op_dpc_ena = 1;
> > +		aq->halo.op_dpc_set = pfvf->npa_dpc;
> > +	}
> > +	aq->halo.unified_ctx = 1;
>                     ^^^^
> 
> Does this code properly configure backpressure for receive queues? The old
> cn20k_aura_aq_init configured NPA-level backpressure by setting bp_ena,
> bpid, and bp fields in the aura context. This caused the hardware to signal
> backpressure to NIX-RX when receive buffers filled to 85% (RQ_BP_LVL_AURA):

For now we want backpressure to be disabled, since we do not have a setup
to test it at the moment. I will add a comment saying backpressure is
disabled in the Halo context, and revisit it later?
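When we do revisit it, the wiring should mirror the old aura path using the
Halo fields you point out (bp_ena_0, bpid_0, bp_0). As a standalone sketch of
the intended settings; note the struct layout, bit widths, and the threshold
constant below are illustrative stand-ins, not the real NPA_CN20K_HALO_S or
RQ_BP_LVL_AURA definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in only: the real NPA_CN20K_HALO_S layout is
 * hardware-defined. Field names follow the review (bp_ena_0, bpid_0,
 * bp_0); the bit widths here are made up for the sketch. */
struct halo_bp_sketch {
	uint64_t bp_ena_0 : 1;  /* enable backpressure toward NIX-RX */
	uint64_t bpid_0   : 12; /* backpressure ID for this aura */
	uint64_t bp_0     : 8;  /* BP threshold level */
};

/* Placeholder for the driver's real RQ_BP_LVL_AURA constant. */
#define SKETCH_RQ_BP_LVL_AURA 38

/* Mirror of what the old cn20k_aura_aq_init did for RQ auras. */
static void halo_set_rq_bp(struct halo_bp_sketch *h, uint16_t bpid)
{
	h->bp_ena_0 = 1;
	h->bpid_0 = bpid;
	h->bp_0 = SKETCH_RQ_BP_LVL_AURA;
}
```

The real change would of course set these on aq->halo inside
cn20k_halo_aq_init, guarded by the same rqpool_cnt / lbkvf check as before.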
> 
>     /* Enable backpressure for RQ aura */
>     if (aura_id < pfvf->hw.rqpool_cnt && !is_otx2_lbkvf(pfvf->pdev)) {
>         aq->aura.bp_ena = 0;
>         if (pfvf->nix_blkaddr == BLKADDR_NIX1)
>             aq->aura.bp_ena = 1;
> 
>         bpid_idx = cn20k_aura_bpid_idx(pfvf, aura_id);
>         aq->aura.bpid = pfvf->bpid[bpid_idx];
> 
>         /* Set backpressure level for RQ's Aura */
>         aq->aura.bp = RQ_BP_LVL_AURA;
>     }
> 
> The new cn20k_halo_aq_init does not set the equivalent Halo context fields
> (bp_ena_0, bpid_0, bp_0). Looking at struct npa_cn20k_halo_s in the AF
> headers, these fields exist in the Halo context.
> 
> The AF side rvu_npa_aq_enq_inst() does not auto-configure backpressure
> during INIT operations. The AF teardown code rvu_npa_halo_hwctx_disable()
> explicitly clears bp_ena_0 through bp_ena_7, confirming these fields are
> expected to be set during initialization.
> 
> Without NPA-level backpressure configuration, wouldn't CN20K devices fail
> to trigger backpressure when aura buffers fill up, leading to packet drops
> under heavy traffic load?
> 
> >
> >  	/* Fill AQ info */
> > -	aq->ctype = NPA_AQ_CTYPE_POOL;
> > +	aq->ctype = NPA_AQ_CTYPE_HALO;
> >  	aq->op = NPA_AQ_INSTOP_INIT;
> 
> [ ... ]
> 
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
> > @@ -1651,6 +1651,9 @@ int otx2_init_hw_resources(struct otx2_nic *pf)
> >  	if (!is_otx2_lbkvf(pf->pdev))
> >  		otx2_nix_config_bp(pf, true);
> >
> > +	if (is_cn20k(pf->pdev))
> > +		cn20k_npa_alloc_dpc(pf);
> > +
> >  	/* Init Auras and pools used by NIX RQ, for free buffer ptrs */
> >  	err = otx2_rq_aura_pool_init(pf);
> >  	if (err) {
> > @@ -1726,6 +1729,8 @@
> >  	otx2_ctx_disable(mbox, NPA_AQ_CTYPE_AURA, true);
> >  	otx2_aura_pool_free(pf);
> >  err_free_nix_lf:
> > +	if (pf->npa_dpc_valid)
> > +		cn20k_npa_free_dpc(pf);
> 
> Is there a potential double-free of the NPA DPC counter via the
> rvu_rep_rsrc_init error path?
> 
> cn20k_npa_free_dpc() does not clear nic->npa_dpc_valid after successfully
> freeing the DPC counter. If otx2_init_hw_resources() fails after
> cn20k_npa_alloc_dpc() succeeds (e.g., otx2_rq_aura_pool_init() returns an
> error), the error cleanup at err_free_nix_lf calls cn20k_npa_free_dpc(pf)
> which frees the counter but leaves npa_dpc_valid=true.
> 
> The caller rvu_rep_rsrc_init() then calls otx2_free_hw_resources(priv) at
> its err_free_rsrc label:
> 
> rvu_rep_rsrc_init()
>     otx2_init_hw_resources(priv)
>         cn20k_npa_alloc_dpc(pf) [succeeds]
>         otx2_rq_aura_pool_init(pf) [fails]
>         goto err_free_nix_lf
>         cn20k_npa_free_dpc(pf) [frees counter, npa_dpc_valid still true]
>     goto err_free_rsrc
>     otx2_free_hw_resources(priv)
>         cn20k_npa_free_dpc(pf) [frees same counter again]
> 
> Looking at cn20k_npa_free_dpc() in cn20k.c:
> 
> int cn20k_npa_free_dpc(struct otx2_nic *nic)
> {
>     struct npa_cn20k_dpc_free_req *req;
>     int err;
> 
>     if (!nic->npa_dpc_valid)
>         return 0;
> 
>     mutex_lock(&nic->mbox.lock);
> 
>     req = otx2_mbox_alloc_msg_npa_cn20k_dpc_free(&nic->mbox);
>     if (!req) {
>         mutex_unlock(&nic->mbox.lock);
>         return -ENOMEM;
>     }
> 
>     req->cntr_id = nic->npa_dpc;
> 
>     err = otx2_sync_mbox_msg(&nic->mbox);
> 
>     mutex_unlock(&nic->mbox.lock);
> 
>     return err;
> }
> 
> Since npa_dpc_valid is still true, the second call passes the guard check
> and sends another mbox message to free the same counter ID. Shouldn't
> cn20k_npa_free_dpc() clear nic->npa_dpc_valid after the successful
> otx2_sync_mbox_msg() call to prevent this double-free?
>
Nice catch. Will fix this.
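The fix will be along the lines of clearing npa_dpc_valid once the free
request succeeds, so the second call from otx2_free_hw_resources() becomes a
no-op. A small user-space model of the idempotent-free pattern (names are
stand-ins, not the driver's real structures):

```c
#include <assert.h>
#include <stdbool.h>

/* User-space model of the intended fix: make the free path idempotent
 * by clearing the validity flag after the counter is released. */
struct nic_model {
	bool npa_dpc_valid;
	int npa_dpc;
	int free_calls; /* counts how many "real" free requests were sent */
};

static int npa_free_dpc_model(struct nic_model *nic)
{
	if (!nic->npa_dpc_valid)
		return 0; /* already freed: later calls are no-ops */

	nic->free_calls++;          /* stands in for the mbox free message */
	nic->npa_dpc_valid = false; /* the actual fix: clear on success */
	return 0;
}
```

In cn20k_npa_free_dpc() itself this amounts to setting
nic->npa_dpc_valid = false after a successful otx2_sync_mbox_msg().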

Thanks,
Sundeep
 
> >  	mutex_lock(&mbox->lock);
> >  	free_req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
> >  	if (free_req) {
> 
> [ ... ]
> 


Thread overview: 7+ messages
2026-03-31  7:08 [net-next PATCH v4 0/4] octeontx2: CN20K NPA Halo context support Subbaraya Sundeep
2026-03-31  7:08 ` [net-next PATCH v4 1/4] octeontx2-af: npa: cn20k: Add NPA Halo support Subbaraya Sundeep
2026-03-31  7:08 ` [net-next PATCH v4 2/4] octeontx2-af: npa: cn20k: Add DPC support Subbaraya Sundeep
2026-03-31  7:08 ` [net-next PATCH v4 3/4] octeontx2-af: npa: cn20k: Add debugfs for Halo Subbaraya Sundeep
2026-03-31  7:08 ` [net-next PATCH v4 4/4] octeontx2-pf: cn20k: Use unified Halo context Subbaraya Sundeep
2026-04-02 11:20   ` [net-next,v4,4/4] " Paolo Abeni
2026-04-03  6:01     ` Subbaraya Sundeep [this message]
