public inbox for netdev@vger.kernel.org
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: <anthony.l.nguyen@intel.com>, <horms@kernel.org>,
	<edumazet@google.com>, <sdf@fomichev.me>,
	<aleksandr.loktionov@intel.com>, <ast@kernel.org>,
	<john.fastabend@gmail.com>, <bpf@vger.kernel.org>,
	<netdev@vger.kernel.org>, <davem@davemloft.net>,
	<andrew+netdev@lunn.ch>, <hawk@kernel.org>,
	<nxne.cnse.osdt.itp.upstreaming@intel.com>,
	<magnus.karlsson@intel.com>, <pabeni@redhat.com>,
	<jacob.e.keller@intel.com>, <daniel@iogearbox.net>,
	<maciej.fijalkowski@intel.com>
Subject: Re: [net-next,3/5] ice: migrate to netdev ops lock
Date: Mon, 16 Mar 2026 16:10:49 +0100	[thread overview]
Message-ID: <4537e162-6610-4c3f-b8d5-e62415322282@intel.com> (raw)
In-Reply-To: <20260313014628.1509116-1-kuba@kernel.org>

From: Jakub Kicinski <kuba@kernel.org>
Date: Thu, 12 Mar 2026 18:46:28 -0700

> This is an AI-generated review of your patch. The human sending this
> email has considered the AI review valid, or at least plausible.
> 
> For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
> ---
> ice: migrate to netdev ops lock
> 
> This commit migrates the ice driver to use netdev ops locking. Queue
> management ops unconditionally enable netdev locking, so the driver now
> uses _locked counterparts of NAPI configuration functions to avoid
> deadlocks.
> 
>> diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
>> index bd77f1c001ee..08157dc04ed6 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
>> @@ -273,14 +273,13 @@ void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi)
>>  /**
>>   * ice_dcb_ena_dis_vsi - disable certain VSIs for DCB config/reconfig
>>   * @pf: pointer to the PF instance
>>   * @ena: true to enable VSIs, false to disable
>> - * @locked: true if caller holds RTNL lock, false otherwise
>>   *
>>   * Before a new DCB configuration can be applied, VSIs of type PF, SWITCHDEV
>>   * and CHNL need to be brought down. Following completion of DCB configuration
>>   * the VSIs that were downed need to be brought up again. This helper function
>>   * does both.
>>   */
>> -static void ice_dcb_ena_dis_vsi(struct ice_pf *pf, bool ena, bool locked)
>> +static void ice_dcb_ena_dis_vsi(struct ice_pf *pf, bool ena)
>>  {
>>  	int i;
>>
>> @@ -294,9 +293,9 @@ static void ice_dcb_ena_dis_vsi(struct ice_pf *pf, bool ena, bool locked)
>>  		case ICE_VSI_CHNL:
>>  		case ICE_VSI_PF:
>>  			if (ena)
>> -				ice_ena_vsi(vsi, locked);
>> +				ice_ena_vsi(vsi, true);
>>  			else
>> -				ice_dis_vsi(vsi, locked);
>> +				ice_dis_vsi_locked(vsi);
> 
> Does ice_dcb_ena_dis_vsi() need to acquire the netdev lock before calling
> ice_dis_vsi_locked() and ice_ena_vsi()?
> 
> Looking at the disable path, ice_dis_vsi_locked() calls ice_vsi_close()
> which goes through ice_down() -> ice_napi_disable_all() ->
> napi_disable_locked(), and that function has netdev_assert_locked(n->dev) in
> net/core/dev.c. Similarly, ice_vsi_close() calls
> ice_vsi_clear_napi_queues_locked() -> netif_napi_set_irq_locked() which
> checks netdev_assert_locked_or_invisible().
> 
> The enable path calls ice_ena_vsi() -> ice_open_internal() ->
> ice_vsi_open(), and that now calls ice_vsi_set_napi_queues_locked() ->
> netif_napi_set_irq_locked() (requires netdev lock) and ice_up_complete() ->
> ice_napi_enable_all() -> napi_enable_locked().
> 
> The old code called ice_dis_vsi(vsi, locked) and ice_ena_vsi(vsi, locked)
> where the old ice_dis_vsi() only dealt with rtnl_lock. The new code switches
> to _locked variants that require the netdev lock but does not acquire it.
> 
> Callers ice_pf_dcb_cfg() and ice_dcb_process_lldp_set_mib_change() hold
> rtnl_lock but not the netdev lock.

Doesn't the DCB core take the netdev lock just like netdev_ops does? Or
does it only take the RTNL?

> 
> [ ... ]
> 
>> diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
>> index e7308e381e2f..a8e2db6dc05a 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_main.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_main.c
> 
> [ ... ]
> 
>> @@ -7563,7 +7570,7 @@ int ice_vsi_open(struct ice_vsi *vsi)
>>  		if (err)
>>  			goto err_set_qs;
>>
>> -		ice_vsi_set_napi_queues(vsi);
>> +		ice_vsi_set_napi_queues_locked(vsi);
>>  	}
>>
>>  	err = ice_up_complete(vsi);
> 
> Does ice_ena_vsi() need to take the netdev lock before calling
> ice_open_internal()?
> 
> ice_vsi_open() now calls ice_vsi_set_napi_queues_locked() which requires the
> netdev lock via netif_napi_set_irq_locked() ->
> netdev_assert_locked_or_invisible(). It also calls ice_up_complete() ->
> ice_napi_enable_all() -> napi_enable_locked().
> 
> However, ice_ena_vsi() in ice_lib.c calls ice_open_internal() ->
> ice_vsi_open() while holding only rtnl_lock, not the netdev lock. This path
> is reached during PF reset rebuild:
> 
> ice_rebuild()
>     ice_vsi_rebuild_by_type()
>         ice_ena_vsi(vsi, false)
>             ice_open_internal()
>                 ice_vsi_open()
> 
> The netdev lock is NOT held at this point. ice_vsi_rebuild() took and
> released it earlier in ice_vsi_rebuild_by_type().
> 
> Additionally, if ice_up_complete() fails, the error path goes through
> ice_down() -> ice_napi_disable_all() -> napi_disable_locked(), which has an
> explicit netdev_assert_locked(n->dev) check.

This one... Looks fair. I need to adjust ice_ena_vsi() to take the
netdev lock instead of RTNL.
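
Something like the below, perhaps (untested sketch from memory -- the
context lines in ice_lib.c may not apply cleanly, and whether the RTNL
must still be held around ice_open_internal() for other reasons needs
checking):

```
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ int ice_ena_vsi(struct ice_vsi *vsi, bool locked)
 		if (netif_running(vsi->netdev)) {
-			if (!locked)
-				rtnl_lock();
-
+			netdev_lock(vsi->netdev);
 			err = ice_open_internal(vsi->netdev);
-
-			if (!locked)
-				rtnl_unlock();
+			netdev_unlock(vsi->netdev);
 		}
```

The `locked` parameter would then mean "netdev lock already held" rather
than "RTNL already held", so the remaining callers need auditing too.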

Thanks,
Olek


Thread overview: 14+ messages
2026-03-10 22:06 [PATCH net-next 0/5][pull request] ice: add support for devmem/io_uring Rx and Tx Tony Nguyen
2026-03-10 22:06 ` [PATCH net-next 1/5] libeth: pass Rx queue index to PP when creating a fill queue Tony Nguyen
2026-03-10 22:06 ` [PATCH net-next 2/5] libeth: handle creating pools with unreadable buffers Tony Nguyen
2026-03-10 22:06 ` [PATCH net-next 3/5] ice: migrate to netdev ops lock Tony Nguyen
2026-03-13  1:46   ` [net-next,3/5] " Jakub Kicinski
2026-03-16 15:10     ` Alexander Lobakin [this message]
2026-03-16 23:16       ` Jakub Kicinski
2026-03-17 15:30         ` Alexander Lobakin
2026-03-17 15:50           ` Jakub Kicinski
2026-03-17 16:15             ` Tony Nguyen
2026-03-17 17:29               ` Jakub Kicinski
2026-03-17 17:57                 ` Tony Nguyen
2026-03-10 22:06 ` [PATCH net-next 4/5] ice: implement Rx queue management ops Tony Nguyen
2026-03-10 22:06 ` [PATCH net-next 5/5] ice: add support for transmitting unreadable frags Tony Nguyen
