public inbox for netdev@vger.kernel.org
From: Jakub Kicinski <kuba@kernel.org>
To: devnexen@gmail.com
Cc: Jakub Kicinski <kuba@kernel.org>,
	horatiu.vultur@microchip.com, UNGLinuxDriver@microchip.com,
	andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
	pabeni@redhat.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH] net: lan966x: fix page_pool error handling in lan966x_fdma_rx_alloc_page_pool()
Date: Fri,  3 Apr 2026 15:26:35 -0700	[thread overview]
Message-ID: <20260403222634.2287124-2-kuba@kernel.org> (raw)
In-Reply-To: <20260402172823.83467-1-devnexen@gmail.com>

This is an AI-generated review of your patch. The human sending this
email says "Let's make sure the callers can handle the error, too."
---
net: lan966x: fix page_pool error handling in lan966x_fdma_rx_alloc_page_pool()

This commit adds an IS_ERR check after page_pool_create() to return early
on failure. This prevents passing an error pointer to
xdp_rxq_info_reg_mem_model() which would cause a kernel oops when
dereferenced.

> diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
> index 7b6369e43451d..34bbcae2f0685 100644
> --- a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
> +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
> @@ -92,6 +92,9 @@ static int lan966x_fdma_rx_alloc_page_pool(struct lan966x_rx *rx)
>  
>  	rx->page_pool = page_pool_create(&pp_params);
>  
> +	if (unlikely(IS_ERR(rx->page_pool)))
> +		return PTR_ERR(rx->page_pool);
> +

This isn't a bug in the patch itself, but because the patch now propagates
errors from page_pool_create(), the existing error recovery paths become more
reachable, and they deserve a closer look.

If lan966x_fdma_rx_alloc_page_pool() succeeds here, it creates a new page pool
and updates port->xdp_rxq memory models to point to it. If the subsequent
fdma_alloc_coherent() call in lan966x_fdma_rx_alloc() fails:

lan966x_fdma.c:lan966x_fdma_rx_alloc() {
	if (lan966x_fdma_rx_alloc_page_pool(rx))
		return PTR_ERR(rx->page_pool);

	err = fdma_alloc_coherent(lan966x->dev, fdma);
	if (err)
		return err;
    ...
}

Does this return an error without destroying the newly created page pool?
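If so, one option might be to destroy the pool inside
lan966x_fdma_rx_alloc() itself, so every caller is covered. A rough,
untested sketch (note it still leaves the xdp_rxq memory models pointing at
the destroyed pool, which would need separate unwinding):

	err = fdma_alloc_coherent(lan966x->dev, fdma);
	if (err) {
		page_pool_destroy(rx->page_pool);
		rx->page_pool = NULL;
		return err;
	}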

During an MTU change in lan966x_fdma_reload(), this failure triggers a jump
to the restore: label, which reverts rx->page_pool to the old pool:

lan966x_fdma.c:lan966x_fdma_reload() {
    ...
	err = lan966x_fdma_rx_alloc(&lan966x->rx);
	if (err)
		goto restore;
    ...
restore:
	lan966x->rx.page_pool = page_pool;
    ...
}

Does this irrevocably leak the newly created page pool and leave the xdp_rxq
memory models pointing to the leaked pool?
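If it does, the restore path could destroy the replacement pool before
reverting. An untested sketch, assuming rx.page_pool still holds the new
pool when restore: is reached:

restore:
	if (lan966x->rx.page_pool != page_pool &&
	    !IS_ERR_OR_NULL(lan966x->rx.page_pool))
		page_pool_destroy(lan966x->rx.page_pool);
	lan966x->rx.page_pool = page_pool;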

Similarly, in lan966x_fdma_init(), if lan966x_fdma_tx_alloc() fails:

lan966x_fdma.c:lan966x_fdma_init() {
    ...
	err = lan966x_fdma_rx_alloc(&lan966x->rx);
	if (err)
		return err;

	err = lan966x_fdma_tx_alloc(&lan966x->tx);
	if (err) {
		fdma_free_coherent(lan966x->dev, &lan966x->rx.fdma);
		return err;
	}
    ...
}

Does the error handler explicitly free rx.fdma via fdma_free_coherent() but
forget to destroy rx.page_pool, resulting in a permanent memory leak during
probe failure?
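If so, the tx allocation failure path presumably needs a matching
page_pool_destroy(), something along these lines (untested sketch):

	err = lan966x_fdma_tx_alloc(&lan966x->tx);
	if (err) {
		fdma_free_coherent(lan966x->dev, &lan966x->rx.fdma);
		page_pool_destroy(lan966x->rx.page_pool);
		return err;
	}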

Additionally, the error recovery path in lan966x_fdma_reload() appears to
resume hardware DMA into freed pages.

When lan966x_fdma_reload() runs, it first stops the hardware and calls
lan966x_fdma_rx_free_pages(), which returns all RX ring pages back to the
old page pool via page_pool_put_full_page(). If lan966x_fdma_rx_alloc()
subsequently fails, execution jumps to the restore: label:

lan966x_fdma.c:lan966x_fdma_reload() {
    ...
	lan966x_fdma_rx_free_pages(&lan966x->rx);
    ...
	err = lan966x_fdma_rx_alloc(&lan966x->rx);
	if (err)
		goto restore;
    ...
restore:
	lan966x->rx.page_pool = page_pool;
	memcpy(&lan966x->rx.fdma, &fdma_rx_old, sizeof(struct fdma));
	lan966x_fdma_rx_start(&lan966x->rx);
    ...
}

The restore block copies back the old FDMA descriptors, which point to the
pages that were just freed, and calls lan966x_fdma_rx_start() to re-enable
the hardware RX channel.

Since it never re-allocates or re-maps pages for these restored descriptors,
will the hardware actively resume DMAing incoming network packets into
physical addresses that the page pool considers free? If the page pool
returns these pages to the buddy allocator, could this overwrite memory
owned by unrelated kernel subsystems?
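If that is the case, the restore path would need to repopulate the restored
descriptors before re-enabling the channel. An untested sketch, where
lan966x_fdma_rx_refill() is a hypothetical helper that re-allocates and maps
a page from the old pool for each restored DCB:

restore:
	lan966x->rx.page_pool = page_pool;
	memcpy(&lan966x->rx.fdma, &fdma_rx_old, sizeof(struct fdma));
	/* hypothetical: refill the ring before restarting DMA */
	lan966x_fdma_rx_refill(&lan966x->rx);
	lan966x_fdma_rx_start(&lan966x->rx);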

>  	for (int i = 0; i < lan966x->num_phys_ports; ++i) {
>  		struct lan966x_port *port;
>


Thread overview: 3+ messages
2026-04-02 17:28 [PATCH] net: lan966x: fix page_pool error handling in lan966x_fdma_rx_alloc_page_pool() David Carlier
2026-04-02 20:59 ` Joe Damato
2026-04-03 22:26 ` Jakub Kicinski [this message]
