public inbox for linux-kernel@vger.kernel.org
From: Jakub Kicinski <kuba@kernel.org>
To: grzegorz.nitka@intel.com
Cc: Jakub Kicinski <kuba@kernel.org>,
	netdev@vger.kernel.org, ivecera@redhat.com,
	vadim.fedorenko@linux.dev, jiri@resnulli.us, edumazet@google.com,
	przemyslaw.kitszel@intel.com, richardcochran@gmail.com,
	donald.hunter@gmail.com, linux-kernel@vger.kernel.org,
	arkadiusz.kubalewski@intel.com, andrew+netdev@lunn.ch,
	intel-wired-lan@lists.osuosl.org, horms@kernel.org,
	Prathosh.Satish@microchip.com, anthony.l.nguyen@intel.com,
	pabeni@redhat.com, davem@davemloft.net
Subject: Re: [PATCH v7 net-next 5/8] ice: introduce TXC DPLL device and TX ref clock pin framework for E825
Date: Sat,  2 May 2026 10:33:11 -0700	[thread overview]
Message-ID: <20260502173312.3849706-1-kuba@kernel.org> (raw)
In-Reply-To: <20260430094238.987976-6-grzegorz.nitka@intel.com>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
ice: introduce TXC DPLL device and TX ref clock pin framework for E825

This patch adds a new TXC DPLL device of DPLL_TYPE_GENERIC for E825
devices, together with a TXCLK pin type and two TX reference pins
(EXT_EREF0 and a fwnode-described SYNCE pin). Pin state callbacks and
TXC lock status are placeholders at this stage.

> diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
> index 62f75701d652..d839b50187ba 100644
> --- a/drivers/net/ethernet/intel/ice/ice_dpll.c
> +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c

[ ... ]

> @@ -3199,19 +3276,40 @@ static bool ice_dpll_is_fwnode_pin(struct ice_dpll_pin *pin)
>  	return !IS_ERR_OR_NULL(pin->fwnode);
>  }
>  
> +static bool ice_dpll_fwnode_eq(const struct fwnode_handle *a,
> +			       const struct fwnode_handle *b)
> +{
> +	return a && b && a == b;
> +}
> +
>  static void ice_dpll_pin_notify_work(struct work_struct *work)
>  {
>  	struct ice_dpll_pin_work *w = container_of(work,
>  						   struct ice_dpll_pin_work,
>  						   work);
>  	struct ice_dpll_pin *pin, *parent = w->pin;
> +	bool is_tx_synce_parent = false;
>  	struct ice_pf *pf = parent->pf;
> +	bool is_rclk_parent = false;
>  	int ret;
>  
>  	wait_for_completion(&pf->dplls.dpll_init);
>  	if (!test_bit(ICE_FLAG_DPLL, pf->flags))
>  		goto out; /* DPLL initialization failed */
>  
> +	/* Decide which parent we are handling, defensively checking FWNs */

[ ... ]

> +		} else if (is_tx_synce_parent) {
> +			/* Register TX-CLK SYNCE pin directly to TXC DPLL */
> +			pin = &pf->dplls.txclks[E825_EXT_SYNCE_PIN_IDX];
> +			ret = dpll_pin_register(pf->dplls.txc.dpll, pin->pin,
> +						&ice_dpll_txclk_ops, pin);
> +			if (ret) {
> +				dev_err(ice_pf_to_dev(pf),
> +					"TX SYNCE pin register failed: %pe\n",
> +					ERR_PTR(ret));
> +				goto drop_parent_ref;
> +			}
>  		}
>  		break;

[ ... ]

> @@ -3401,6 +3524,19 @@ ice_dpll_deinit_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
>  	destroy_workqueue(pf->dplls.wq);
>  }
>  
> +static int ice_dpll_deinit_txclk_pins(struct ice_pf *pf)
> +{
> +	struct ice_dpll_pin *synce_pin = &pf->dplls.txclks[E825_EXT_SYNCE_PIN_IDX];
> +	struct ice_dpll *dt = &pf->dplls.txc;
> +
> +	ice_dpll_unregister_pins(dt->dpll, pf->dplls.txclks,
> +				 &ice_dpll_txclk_ops,
> +				 ARRAY_SIZE(pf->dplls.txclks));
> +	ice_dpll_release_pins(&pf->dplls.txclks[E825_EXT_EREF_PIN_IDX], 1);
> +	ice_dpll_deinit_fwnode_pin(synce_pin);
> +	return 0;
> +}

Is there a teardown race between this function and a concurrent
ice_dpll_pin_notify_work() that is about to register the SYNCE pin on
the TXC DPLL?

The work's registration path runs:

	parent->pin = fwnode_dpll_pin_find(parent->fwnode,
					   &parent->tracker);
	...
	} else if (is_tx_synce_parent) {
		pin = &pf->dplls.txclks[E825_EXT_SYNCE_PIN_IDX];
		ret = dpll_pin_register(pf->dplls.txc.dpll, pin->pin,
					&ice_dpll_txclk_ops, pin);

while ice_dpll_deinit_txclk_pins() first scans pf->dplls.txclks[] with
ice_dpll_unregister_pins(), which skips any slot where pins[i].pin is
still NULL:

	for (i = 0; i < count; i++) {
		if (pins[i].hidden)
			continue;
		if (IS_ERR_OR_NULL(pins[i].pin))
			continue;
		dpll_pin_unregister(dpll, pins[i].pin, ops, &pins[i]);
	}

Only afterward does it call ice_dpll_deinit_fwnode_pin(synce_pin), which
unregisters the notifier and flushes pf->dplls.wq.

If an external fwnode DPLL provider emits a DPLL_PIN_CREATED for
clk_ref_synce between the two steps, can this sequence occur?

  deinit thread                    notifier work
  -------------                    -------------
  ice_dpll_unregister_pins(txclks)
     sees txclks[1].pin == NULL
     skips SYNCE slot
                                   parent->pin = fwnode_dpll_pin_find()
                                   dpll_pin_register(txc->dpll,
                                       pin->pin, &ops, pin)
                                     -> adds to txc->dpll->pin_refs
                                        and pin->dpll_refs
  ice_dpll_release_pins(EREF0, 1)
  ice_dpll_deinit_fwnode_pin(synce_pin)
     unregister_dpll_notifier()
     flush_workqueue()   /* no work pending */
     dpll_pin_put(synce_pin->pin, ...)

After this, dpll_pin_put() only drops the pin refcount; it does not call
dpll_pin_unregister(). So the TXC dpll's pin_refs still contains the
SYNCE entry when ice_dpll_deinit_dpll(pf, &pf->dplls.txc, false) later
runs dpll_device_unregister() and dpll_device_put(). Does this trip
WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)) in __dpll_device_put() and then
free the dpll_device while pin->dpll_refs still references it, leaving a
use-after-free on any subsequent walk of pin->dpll_refs?

Would it be safer to call ice_dpll_deinit_fwnode_pin(synce_pin) first so
the notifier is torn down and pf->dplls.wq drained before txclks[] is
scanned, or to hold pf->dplls.lock across both the deinit scan and the
work's parent->pin = ...; dpll_pin_register(...) sequence?

A similar structural race exists for the rclk path via
dpll_pin_on_pin_register(), but this patch introduces a new instance of
it in the direct-on-TXC SYNCE registration. The ordering in
ice_dpll_deinit_txclk_pins() appears unchanged by later patches in the
series, so the race seems to persist beyond this commit.


Thread overview: 20+ messages
2026-04-30  9:42 [PATCH v7 net-next 0/8] dpll/ice: Add generic DPLL type and full TX reference clock control for E825 Grzegorz Nitka
2026-04-30  9:42 ` [PATCH v7 net-next 1/8] dpll: add generic DPLL type Grzegorz Nitka
2026-04-30 11:49   ` [Intel-wired-lan] " Loktionov, Aleksandr
2026-04-30  9:42 ` [PATCH v7 net-next 2/8] dpll: allow registering FW-identified pin with a different DPLL Grzegorz Nitka
2026-05-02 17:27   ` Jakub Kicinski
2026-04-30  9:42 ` [PATCH v7 net-next 3/8] dpll: extend pin notifier and netlink events with notification source ID Grzegorz Nitka
2026-05-02 17:29   ` Jakub Kicinski
2026-05-02 17:31   ` Jakub Kicinski
2026-04-30  9:42 ` [PATCH v7 net-next 4/8] dpll: zl3073x: allow SyncE_Ref pin state change Grzegorz Nitka
2026-05-02 17:33   ` Jakub Kicinski
2026-04-30  9:42 ` [PATCH v7 net-next 5/8] ice: introduce TXC DPLL device and TX ref clock pin framework for E825 Grzegorz Nitka
2026-04-30 11:46   ` [Intel-wired-lan] " Loktionov, Aleksandr
2026-05-02 17:33   ` Jakub Kicinski [this message]
2026-04-30  9:42 ` [PATCH v7 net-next 6/8] ice: implement CPI support for E825C Grzegorz Nitka
2026-05-02 17:33   ` Jakub Kicinski
2026-04-30  9:42 ` [PATCH v7 net-next 7/8] ice: add Tx reference clock index handling to AN restart command Grzegorz Nitka
2026-04-30 11:29   ` [Intel-wired-lan] " Loktionov, Aleksandr
2026-04-30  9:42 ` [PATCH v7 net-next 8/8] ice: implement E825 TX ref clock control and TXC hardware sync status Grzegorz Nitka
2026-04-30 11:33   ` [Intel-wired-lan] " Loktionov, Aleksandr
2026-04-30 11:37   ` Loktionov, Aleksandr
