From: Jakub Kicinski <kuba@kernel.org>
To: anthony.l.nguyen@intel.com
Cc: Jakub Kicinski <kuba@kernel.org>,
	edumazet@google.com, davem@davemloft.net,
	aleksandr.loktionov@intel.com, netdev@vger.kernel.org,
	Samuel.salin@intel.com, pabeni@redhat.com,
	madhu.chittim@intel.com, andrew+netdev@lunn.ch,
	milena.olech@intel.com, joshua.a.hay@intel.com
Subject: Re: [net-next,03/10] idpf: move queue resources to idpf_q_vec_rsrc structure
Date: Sat, 17 Jan 2026 18:27:58 -0800
Message-ID: <20260118022758.1106078-1-kuba@kernel.org>
In-Reply-To: <20260115234749.2365504-4-anthony.l.nguyen@intel.com>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.

For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
idpf: move queue resources to idpf_q_vec_rsrc structure

This patch moves TX and RX queue resources from struct idpf_vport to
the newly introduced idpf_q_vec_rsrc structure as part of a refactoring
effort.

> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 48557c67dda8..25c15d2990d6 100644

[ ... ]

> @@ -438,9 +440,10 @@ static void idpf_rx_buf_rel_all(struct idpf_rx_queue *rxq)
>   *
>   * Free a specific rx queue resources
>   */
> -static void idpf_rx_desc_rel(struct idpf_rx_queue *rxq, struct device *dev,
> -			     u32 model)
> +static void idpf_rx_desc_rel(struct idpf_rx_queue *rxq, u32 model)
>  {
> +	struct device *dev;
> +
>  	if (!rxq)
>  		return;
>

[ ... ]

>  	if (!rxq->desc_ring)
>  		return;
>
> +	dev = &rxq->q_vector->vport->adapter->pdev->dev;
                 ^^^^^^^^^

Is this safe when q_vector has not been initialized yet? Looking at
idpf_vport_open(), the call sequence is:

  idpf_vport_queues_alloc()   -> allocates desc_ring
  idpf_vport_queue_ids_init() -> can fail, goes to queues_rel
  idpf_vport_intr_init()      -> sets q_vector via
                                 idpf_vport_intr_map_vector_to_qs()

If idpf_vport_queue_ids_init() fails, the error path jumps to the
queues_rel label, which calls idpf_vport_queues_rel() ->
idpf_rx_desc_rel_all() -> idpf_rx_desc_rel(). At that point desc_ring
has been allocated but q_vector is still NULL, causing a NULL pointer
dereference here.

The old code received dev as a parameter, derived by the caller from
vport->adapter->pdev->dev, which is valid regardless of whether the
vectors have been mapped yet.

>  	dmam_free_coherent(dev, rxq->size, rxq->desc_ring, rxq->dma);
>  	rxq->desc_ring = NULL;
>  }
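
For illustration only, one way to harden the helper is sketched below.
This is untested, and it assumes no other always-valid back-pointer to
the adapter is reachable from the rx queue:

static void idpf_rx_desc_rel(struct idpf_rx_queue *rxq, u32 model)
{
	struct device *dev;

	if (!rxq)
		return;

	/* [ buffer release elided, as in the hunk above ] */

	if (!rxq->desc_ring)
		return;

	/* q_vector is only assigned in idpf_vport_intr_map_vector_to_qs(),
	 * so bail out if the queue was never mapped.  The ring appears to
	 * be devres-managed (it is freed with dmam_free_coherent()), so
	 * skipping the explicit free defers the release to device teardown
	 * rather than leaking it.
	 */
	if (!rxq->q_vector)
		return;

	dev = &rxq->q_vector->vport->adapter->pdev->dev;
	dmam_free_coherent(dev, rxq->size, rxq->desc_ring, rxq->dma);
	rxq->desc_ring = NULL;
}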

> @@ -467,9 +471,10 @@ static void idpf_rx_desc_rel(struct idpf_rx_queue *rxq, struct device *dev,
>   * @bufq: buffer queue to clean the resources from
>   * @dev: device to free DMA memory
>   */
> -static void idpf_rx_desc_rel_bufq(struct idpf_buf_queue *bufq,
> -				  struct device *dev)
> +static void idpf_rx_desc_rel_bufq(struct idpf_buf_queue *bufq)
>  {
> +	struct device *dev;
> +
>  	if (!bufq)
>  		return;
>

[ ... ]

>  	if (!bufq->split_buf)
>  		return;
>
> +	dev = &bufq->q_vector->vport->adapter->pdev->dev;
                 ^^^^^^^^^

The same concern applies here: bufq->q_vector may still be NULL if this
cleanup path is reached before idpf_vport_intr_init() has run. One
alternative is sketched after the hunk below.

>  	dma_free_coherent(dev, bufq->size, bufq->split_buf, bufq->dma);
>  	bufq->split_buf = NULL;
>  }
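
Here the buffer is freed with plain dma_free_coherent(), so silently
skipping the free on a NULL q_vector would leak. One alternative,
sketched below and untested, is to keep the pre-patch calling
convention for this helper, since its callers still hold the vport and
therefore an always-valid struct device:

static void idpf_rx_desc_rel_bufq(struct idpf_buf_queue *bufq,
				  struct device *dev)
{
	if (!bufq)
		return;

	/* [ buffer release elided, as in the hunk above ] */

	if (!bufq->split_buf)
		return;

	/* dev is &vport->adapter->pdev->dev, taken in the caller; it is
	 * valid even before idpf_vport_intr_init() has mapped q_vector.
	 */
	dma_free_coherent(dev, bufq->size, bufq->split_buf, bufq->dma);
	bufq->split_buf = NULL;
}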

