Linux-HyperV List
From: sashiko-bot@kernel.org
To: "Long Li" <longli@microsoft.com>
Cc: linux-hyperv@vger.kernel.org
Subject: Re: [PATCH net-next v10 1/6] net: mana: Create separate EQs for each vPort
Date: Sat, 16 May 2026 04:05:32 +0000	[thread overview]
Message-ID: <20260516040532.7EEF5C19425@smtp.kernel.org> (raw)
In-Reply-To: <20260515040508.491748-2-longli@microsoft.com>

Thank you for your contribution! Sashiko AI review found 2 potential issues to consider:
- [Critical] Missing cross-port validation in `mana_ib_create_qp_rss()` allows userspace to trigger NULL pointer dereferences and corrupt other ports' queues.
- [High] Use of boolean flag `channel_changing` as an ad-hoc lock in `mana_set_channels()` violates networking subsystem guidelines.
--

commit ed2a523d0427d1b949add95925cad26c8b468ff8
Author: Long Li <longli@microsoft.com>
Date:   Fri May 15 04:05:03 2026 +0000

    net: mana: Create separate EQs for each vPort

This commit separates EQs for each vPort, removing the shared EQ mechanism
from the MANA driver to prepare for dedicated MSI-X vector assignment.
RSS QPs are updated to take a vport reference, ensuring EQs outlive QP
consumers.

> diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
> index 0fbcf449c134b..108ec4c5ce51c 100644
> --- a/drivers/infiniband/hw/mana/qp.c
> +++ b/drivers/infiniband/hw/mana/qp.c
[ ... ]
> @@ -155,6 +156,18 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
>  
>  	qp->port = port;
>  
> +	/* Take a reference on the vport to ensure EQs outlive this QP.
> +	 * The vport must already be configured by a raw QP.
> +	 */
> +	mutex_lock(&mana_pd->vport_mutex);
> +	if (!mana_pd->vport_use_count) {
> +		mutex_unlock(&mana_pd->vport_mutex);
> +		ret = -EINVAL;
> +		goto fail;
> +	}
> +	mana_pd->vport_use_count++;
> +	mutex_unlock(&mana_pd->vport_mutex);
> +

Does this code need to verify that the requested port matches
mana_pd->vport_port?

While mana_ib_cfg_vport() correctly rejects cross-port PD sharing for
raw QPs, an RSS QP creation could specify a different port (e.g., Port B)
using the same PD bound to Port A.

If Port B is unconfigured, could fetching its mpc result in a NULL
pointer dereference when accessing mpc->eqs below?

If Port B is active, could this lead to erroneously destroying Port B's EQs
or underflowing its vport_use_count when the PD drops its references during
teardown?

>  	for (i = 0; i < ind_tbl_size; i++) {
>  		struct mana_obj_spec wq_spec = {};
[ ... ]
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
> index 04350973e19e3..4633acc976f0a 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
[ ... ]
> @@ -462,10 +467,22 @@ static int mana_set_channels(struct net_device *ndev,
>  	unsigned int old_count = apc->num_queues;
>  	int err;
>  
> +	/* Set channel_changing to block RDMA from grabbing the vport
> +	 * during the detach/attach window. mana_cfg_vport() checks
> +	 * this flag under vport_mutex and returns -EBUSY if set.
> +	 */
> +	mutex_lock(&apc->vport_mutex);
> +	if (!apc->port_is_up && apc->vport_use_count) {
> +		mutex_unlock(&apc->vport_mutex);
> +		return -EBUSY;
> +	}
> +	apc->channel_changing = true;
> +	mutex_unlock(&apc->vport_mutex);
> +

Is channel_changing acting as an ad-hoc lock here?

The networking subsystem guidelines discourage using boolean flags set on
entry and cleared on exit to guard code sections against concurrent access.

Since mana_detach() and mana_attach() execute while this flag is set,
specifically to prevent mana_cfg_vport() from proceeding, could this be
replaced with a standard locking primitive so that lockdep can see it?
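One possible shape for such a replacement, sketched here as a user-space pthread analog with hypothetical `toy_*` names (a kernel version would use a `struct mutex` with `mutex_trylock()`, and would get lockdep coverage for free): the channel-change path holds a dedicated lock for the whole detach/attach window, and the vport-configuration path try-locks it instead of testing a flag.

```c
#include <errno.h>
#include <pthread.h>

/* Lock covering the whole channel-reconfiguration window; replaces
 * the channel_changing boolean. */
static pthread_mutex_t toy_reconfig_lock = PTHREAD_MUTEX_INITIALIZER;

/* Channel-change path: take the lock across the detach/attach window. */
static void toy_begin_channel_change(void)
{
	pthread_mutex_lock(&toy_reconfig_lock);
}

static void toy_end_channel_change(void)
{
	pthread_mutex_unlock(&toy_reconfig_lock);
}

/* Vport-configuration path: fail fast instead of blocking, matching
 * the -EBUSY behavior the patch implements with the flag. */
static int toy_cfg_vport(void)
{
	if (pthread_mutex_trylock(&toy_reconfig_lock) != 0)
		return -EBUSY; /* a channel change is in flight */
	/* ... configure the vport ... */
	pthread_mutex_unlock(&toy_reconfig_lock);
	return 0;
}
```

The observable behavior is the same (-EBUSY while a change is in flight), but the exclusion is expressed with a real lock rather than a flag checked under a different mutex.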

>  	err = mana_pre_alloc_rxbufs(apc, ndev->mtu, new_count);
>  	if (err) {
>  		netdev_err(ndev, "Insufficient memory for new allocations");
> -		return err;
> +		goto clear_flag;
>  	}

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260515040508.491748-1-longli@microsoft.com?part=1

Thread overview: 10+ messages
2026-05-15  4:05 [PATCH net-next v10 0/6] net: mana: Per-vPort EQ and MSI-X interrupt management Long Li
2026-05-15  4:05 ` [PATCH net-next v10 1/6] net: mana: Create separate EQs for each vPort Long Li
2026-05-16  4:05   ` sashiko-bot [this message]
2026-05-15  4:05 ` [PATCH net-next v10 2/6] net: mana: Query device capabilities and configure MSI-X sharing for EQs Long Li
2026-05-16  4:05   ` sashiko-bot
2026-05-15  4:05 ` [PATCH net-next v10 3/6] net: mana: Introduce GIC context with refcounting for interrupt management Long Li
2026-05-15  4:05 ` [PATCH net-next v10 4/6] net: mana: Use GIC functions to allocate global EQs Long Li
2026-05-15  4:05 ` [PATCH net-next v10 5/6] net: mana: Allocate interrupt context for each EQ when creating vPort Long Li
2026-05-16  4:05   ` sashiko-bot
2026-05-15  4:05 ` [PATCH net-next v10 6/6] RDMA/mana_ib: Allocate interrupt contexts on EQs Long Li
