From: Jakub Kicinski <kuba@kernel.org>
To: github@szelinsky.de
Cc: Jakub Kicinski <kuba@kernel.org>,
	o.rempel@pengutronix.de, kory.maincent@bootlin.com,
	andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
	pabeni@redhat.com, krzk@kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-leds@vger.kernel.org
Subject: Re: [PATCH net-next v5 1/2] net: pse-pd: add devm_pse_poll_helper()
Date: Mon,  4 May 2026 18:57:57 -0700	[thread overview]
Message-ID: <20260505015757.831372-1-kuba@kernel.org>
In-Reply-To: <20260429213224.1747410-2-github@szelinsky.de>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
    net: pse-pd: add devm_pse_poll_helper()

This patch extracts the common event handling loop from pse_isr() into a
shared function and adds a poll-based alternative for PSE controllers
that lack interrupt support.

> diff --git a/drivers/net/pse-pd/pse_core.c b/drivers/net/pse-pd/pse_core.c
> index 87aa4f4e9724..b7ffec0c942c 100644
> --- a/drivers/net/pse-pd/pse_core.c
> +++ b/drivers/net/pse-pd/pse_core.c
[ ... ]
> @@ -1118,6 +1125,8 @@ void pse_controller_unregister(struct pse_controller_dev *pcdev)
>  	pse_release_pis(pcdev);
>  	if (pcdev->irq)
>  		disable_irq(pcdev->irq);
> +	if (pcdev->polling)
> +		cancel_delayed_work_sync(&pcdev->poll_work);

Because pse_release_pis() executes kfree(pcdev->pi) before the delayed work
is cancelled, can pse_poll_worker() run concurrently and access the freed
pcdev->pi array inside pse_handle_events() before the cancellation completes?
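One possible ordering (a sketch only, not tested) would quiesce both event
sources before the PI array is freed:

	if (pcdev->polling)
		cancel_delayed_work_sync(&pcdev->poll_work);
	if (pcdev->irq)
		disable_irq(pcdev->irq);
	pse_release_pis(pcdev);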

[ ... ]
> @@ -1356,6 +1403,54 @@ int devm_pse_irq_helper(struct pse_controller_dev *pcdev, int irq,
>  }
>  EXPORT_SYMBOL_GPL(devm_pse_irq_helper);
>  
> +/**
> + * devm_pse_poll_helper - Register poll-based PSE event notifier
> + * @pcdev: a pointer to the PSE controller device
> + * @d: PSE event description (uses same pse_irq_desc as IRQ path)
[ ... ]
> +int devm_pse_poll_helper(struct pse_controller_dev *pcdev,
> +			 const struct pse_irq_desc *d)
> +{
> +	struct device *dev = pcdev->dev;
> +
> +	if (!d || !d->map_event || !d->name)
> +		return -EINVAL;
> +
> +	pcdev->poll_desc = *d;
> +	pcdev->poll_notifs = devm_kcalloc(dev, pcdev->nr_lines,
> +					  sizeof(*pcdev->poll_notifs),
> +					  GFP_KERNEL);
> +	if (!pcdev->poll_notifs)
> +		return -ENOMEM;
> +
> +	pcdev->poll_notifs_mask = devm_bitmap_zalloc(dev, pcdev->nr_lines,
> +						     GFP_KERNEL);
> +	if (!pcdev->poll_notifs_mask)
> +		return -ENOMEM;
> +
> +	if (!pcdev->poll_interval_ms)
> +		pcdev->poll_interval_ms = PSE_DEFAULT_POLL_INTERVAL_MS;
> +
> +	INIT_DELAYED_WORK(&pcdev->poll_work, pse_poll_worker);
> +	pcdev->polling = true;
> +
> +	queue_delayed_work(system_freezable_wq, &pcdev->poll_work,
> +			   msecs_to_jiffies(pcdev->poll_interval_ms));

Since memory is allocated with devm_ helpers but no devm cleanup action is
registered to cancel the delayed work, what happens if a driver's probe
function encounters an error after calling this helper? Will devres free the
pcdev structure while the system workqueue still has the delayed work armed,
leading to a use-after-free when the timer expires?
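One way to close that window (sketch only; the helper name below is
illustrative, not from the patch) is a devm action registered before the
work is first queued, so devres cancels the work before it frees the
poll buffers:

	static void pse_poll_cancel(void *data)
	{
		struct pse_controller_dev *pcdev = data;

		cancel_delayed_work_sync(&pcdev->poll_work);
	}

and in devm_pse_poll_helper(), after INIT_DELAYED_WORK():

	ret = devm_add_action_or_reset(dev, pse_poll_cancel, pcdev);
	if (ret)
		return ret;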

The commit message mentions that drivers typically set up the helper before
registering the controller. If queue_delayed_work() schedules the worker
to run before devm_pse_controller_register() completes, will
pse_poll_worker() attempt to acquire pcdev->lock and push events into
pcdev->ntf_fifo before they are initialized?
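If that ordering is indeed possible, one option (again only a sketch,
assuming pcdev->lock and pcdev->ntf_fifo are initialized inside
pse_controller_register()) is to make devm_pse_poll_helper() pure setup
and arm the work from the registration path instead:

	/* in pse_controller_register(), after lock/fifo init */
	if (pcdev->polling)
		queue_delayed_work(system_freezable_wq, &pcdev->poll_work,
				   msecs_to_jiffies(pcdev->poll_interval_ms));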

> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(devm_pse_poll_helper);

Thread overview: 15+ messages
2026-04-10 12:44 [PATCH net-next v4 0/2] net: pse-pd: add poll path and LED trigger support Carlo Szelinsky
2026-04-10 12:44 ` [PATCH net-next v4 1/2] net: pse-pd: add devm_pse_poll_helper() Carlo Szelinsky
2026-04-13 22:50   ` Jakub Kicinski
2026-04-14 14:05     ` Kory Maincent
2026-04-14 14:11       ` Kory Maincent
2026-04-10 12:44 ` [PATCH net-next v4 2/2] net: pse-pd: add LED trigger support via notification path Carlo Szelinsky
2026-04-13 22:51   ` Jakub Kicinski
2026-04-13 22:53   ` Jakub Kicinski
2026-04-29 21:32 ` [PATCH net-next v5 0/2] net: pse-pd: add poll path and LED trigger support Carlo Szelinsky
2026-04-29 21:32   ` [PATCH net-next v5 1/2] net: pse-pd: add devm_pse_poll_helper() Carlo Szelinsky
2026-05-05  1:57     ` Jakub Kicinski [this message]
2026-05-16 10:17       ` Carlo Szelinsky
2026-04-29 21:32   ` [PATCH net-next v5 2/2] net: pse-pd: add LED trigger support via notification path Carlo Szelinsky
2026-05-05  1:57     ` Jakub Kicinski
2026-05-16 10:17       ` Carlo Szelinsky
