public inbox for netdev@vger.kernel.org
From: Simon Horman <horms@kernel.org>
To: sbhatta@marvell.com
Cc: Simon Horman <horms@kernel.org>,
	kuba@kernel.org, gakula@marvell.com, bbhushan2@marvell.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	davem@davemloft.net, lcherian@marvell.com, andrew+netdev@lunn.ch,
	edumazet@google.com, pabeni@redhat.com, sgoutham@marvell.com
Subject: Re: [net-next PATCH v2 2/4] octeontx2-af: npa: cn20k: Add DPC support
Date: Fri, 20 Mar 2026 16:50:17 +0000	[thread overview]
Message-ID: <20260320165017.96244-1-horms@kernel.org> (raw)
In-Reply-To: <1773920826-16985-3-git-send-email-sbhatta@marvell.com>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.

For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
octeontx2-af: npa: cn20k: Add DPC support

CN20k introduces 32 diagnostic and performance counters that are shared
across all NPA LFs. This patch adds new mbox messages npa_dpc_alloc/free
to allocate and map counters to LFs, and ensures LF to DPC counter
mappings are cleared at LF free/teardown.

> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npa.c b/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npa.c
> index c963f43dc7b0..1def2504872f 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npa.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/cn20k/npa.c

[ ... ]

> @@ -46,3 +48,117 @@ int rvu_npa_halo_hwctx_disable(struct npa_aq_enq_req *req)
>
>  	return 0;
>  }
> +
> +int npa_cn20k_dpc_alloc(struct rvu *rvu, struct npa_cn20k_dpc_alloc_req *req,
> +			struct npa_cn20k_dpc_alloc_rsp *rsp)
> +{
> +	struct rvu_hwinfo *hw = rvu->hw;
> +	u16 pcifunc = req->hdr.pcifunc;
> +	int cntr, lf, blkaddr, ridx;
> +	struct rvu_block *block;
> +	struct rvu_pfvf *pfvf;
> +	u64 val, lfmask;
> +
> +	pfvf = rvu_get_pfvf(rvu, pcifunc);
> +
> +	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPA, 0);
> +	if (!pfvf->npalf || blkaddr < 0)
> +		return NPA_AF_ERR_AF_LF_INVALID;
> +
> +	block = &hw->block[blkaddr];
> +	lf = rvu_get_lf(rvu, block, pcifunc, 0);
> +	if (lf < 0)
> +		return NPA_AF_ERR_AF_LF_INVALID;
> +
> +	/* allocate a new counter */
> +	cntr = rvu_alloc_rsrc(&rvu->npa_dpc);
                ^^^^

Are rvu_alloc_rsrc() and rvu_free_rsrc() calls synchronized here?

These functions use non-atomic find_first_zero_bit() + __set_bit() and
__clear_bit() operations on the bitmap. Looking at other callers like
rvu_mbox_handler_attach_resources() in rvu.c:

int rvu_mbox_handler_attach_resources(struct rvu *rvu,
                                      struct rsrc_attach *attach,
                                      struct msg_rsp *rsp)
{
    ...
    mutex_lock(&rvu->rsrc_lock);

    for (q_cnt = 0; q_cnt < attach->npa_msixoff; q_cnt++) {
        msix = rvu_alloc_rsrc(&pfvf->msix);
        ...
    }
    mutex_unlock(&rvu->rsrc_lock);
    ...
}

Similarly, rvu_mbox_handler_nix_bandprof_alloc() and
rvu_mbox_handler_mcs_alloc_resources() also acquire rvu->rsrc_lock
before calling rvu_alloc_rsrc(). rvu_mbox_handler_npc_mcam_alloc_counter()
uses mcam->lock.

Since npa_cn20k_dpc_alloc() can be invoked by the mbox handler
rvu_mbox_handler_npa_cn20k_dpc_alloc() concurrently from different
PF/VF workqueue items in __rvu_mbox_handler(), two threads could race:

Thread 1                                 Thread 2
npa_cn20k_dpc_alloc()                    npa_cn20k_dpc_alloc()
  find_first_zero_bit(npa_dpc) = 0         find_first_zero_bit(npa_dpc) = 0
  __set_bit(0, npa_dpc)                    __set_bit(0, npa_dpc)
  return cntr = 0                          return cntr = 0

Both threads get counter 0, leading to duplicate allocation. The same
race exists in npa_cn20k_dpc_free() with rvu_free_rsrc().

Should these calls be protected with mutex_lock(&rvu->rsrc_lock), as is
done in the other resource allocation paths?
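To illustrate, here is a small userspace model of the suggested fix
(hypothetical names dpc_alloc()/dpc_free() standing in for the driver
paths; in the driver itself the change would be taking rvu->rsrc_lock
around the rvu_alloc_rsrc()/rvu_free_rsrc() calls):

```c
#include <pthread.h>

#define NPA_DPC_MAX 32

/* Userspace model of rvu->npa_dpc: one bit per counter. */
static unsigned long dpc_bmap;
static pthread_mutex_t rsrc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models rvu_alloc_rsrc(): the find-first-zero and set-bit steps
 * are only safe together because both happen under the lock.
 */
static int dpc_alloc(void)
{
	int cntr;

	pthread_mutex_lock(&rsrc_lock);
	for (cntr = 0; cntr < NPA_DPC_MAX; cntr++) {
		if (!(dpc_bmap & (1UL << cntr))) {
			dpc_bmap |= 1UL << cntr;
			pthread_mutex_unlock(&rsrc_lock);
			return cntr;
		}
	}
	pthread_mutex_unlock(&rsrc_lock);
	return -1;		/* no free counter */
}

/* Models rvu_free_rsrc() under the same lock. */
static void dpc_free(int cntr)
{
	pthread_mutex_lock(&rsrc_lock);
	dpc_bmap &= ~(1UL << cntr);
	pthread_mutex_unlock(&rsrc_lock);
}
```

With both paths serialized on one lock, the zero-bit lookup and the
set-bit update are atomic with respect to each other, so two concurrent
mbox handlers can no longer both observe counter 0 as free.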

> +	if (cntr < 0)
> +		return cntr;
> +	rsp->cntr_id = cntr;
> +
> +	/* DPC counter config */
> +	rvu_write64(rvu, blkaddr, NPA_AF_DPCX_CFG(cntr), req->dpc_conf);
> +
> +	/* 0 to 63 lfs -> idx 0, 64 - 127 lfs -> idx 1 */
> +	ridx = lf >> 6;
> +	lfmask = BIT_ULL(ridx ? lf - NPA_DPC_LFS_PER_REG : lf);
> +
> +	ridx = 2 * cntr + ridx;
> +	/* Give permission for LF access */
> +	val = rvu_read64(rvu, blkaddr, NPA_AF_DPC_PERMITX(ridx));
> +	val |= lfmask;
> +	rvu_write64(rvu, blkaddr, NPA_AF_DPC_PERMITX(ridx), val);
> +
> +	return 0;
> +}
> +
> +int rvu_mbox_handler_npa_cn20k_dpc_alloc(struct rvu *rvu,
> +					 struct npa_cn20k_dpc_alloc_req *req,
> +					 struct npa_cn20k_dpc_alloc_rsp *rsp)
> +{
> +	return npa_cn20k_dpc_alloc(rvu, req, rsp);
> +}
> +
> +int npa_cn20k_dpc_free(struct rvu *rvu, struct npa_cn20k_dpc_free_req *req)
> +{
> +	struct rvu_hwinfo *hw = rvu->hw;
> +	u16 pcifunc = req->hdr.pcifunc;
> +	int cntr, lf, blkaddr, ridx;
> +	struct rvu_block *block;
> +	struct rvu_pfvf *pfvf;
> +	u64 val, lfmask;
> +
> +	pfvf = rvu_get_pfvf(rvu, pcifunc);
> +
> +	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPA, 0);
> +	if (!pfvf->npalf || blkaddr < 0)
> +		return NPA_AF_ERR_AF_LF_INVALID;
> +
> +	block = &hw->block[blkaddr];
> +	lf = rvu_get_lf(rvu, block, pcifunc, 0);
> +	if (lf < 0)
> +		return NPA_AF_ERR_AF_LF_INVALID;
> +
> +	if (req->cntr_id >= NPA_DPC_MAX)
> +		return NPA_AF_ERR_PARAM;
> +
> +	/* 0 to 63 lfs -> idx 0, 64 - 127 lfs -> idx 1 */
> +	ridx = lf >> 6;
> +	lfmask = BIT_ULL(ridx ? lf - NPA_DPC_LFS_PER_REG : lf);
> +	cntr = req->cntr_id;
> +
> +	ridx = 2 * cntr + ridx;
> +
> +	val = rvu_read64(rvu, blkaddr, NPA_AF_DPC_PERMITX(ridx));
> +	/* Check if the counter is allotted to this LF */
> +	if (!(val & lfmask))
> +		return 0;
> +
> +	/* Revert permission */
> +	val &= ~lfmask;
> +	rvu_write64(rvu, blkaddr, NPA_AF_DPC_PERMITX(ridx), val);
> +
> +	/* Free this counter */
> +	rvu_free_rsrc(&rvu->npa_dpc, req->cntr_id);
> +
> +	return 0;
> +}

[ ... ]

Thread overview: 9+ messages
2026-03-19 11:47 [net-next PATCH v2 0/4] octeontx2: CN20K NPA Halo context support Subbaraya Sundeep
2026-03-19 11:47 ` [net-next PATCH v2 1/4] octeontx2-af: npa: cn20k: Add NPA Halo support Subbaraya Sundeep
2026-03-20 16:52   ` Simon Horman
2026-03-19 11:47 ` [net-next PATCH v2 2/4] octeontx2-af: npa: cn20k: Add DPC support Subbaraya Sundeep
2026-03-20 16:50   ` Simon Horman [this message]
2026-03-19 11:47 ` [net-next PATCH v2 3/4] octeontx2-af: npa: cn20k: Add debugfs for Halo Subbaraya Sundeep
2026-03-19 11:47 ` [net-next PATCH v2 4/4] octeontx2-pf: cn20k: Use unified Halo context Subbaraya Sundeep
2026-03-20 16:50   ` Simon Horman
2026-03-26  7:00     ` Subbaraya Sundeep
