From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
To: Kamil Maziarz <kamil.maziarz@intel.com>
Cc: Rafal Rogalski <rafalx.rogalski@intel.com>,
intel-wired-lan@lists.osuosl.org
Subject: Re: [Intel-wired-lan] [PATCH net v1] ice: Ignore RDMA message on setup tc qdisc
Date: Thu, 12 Jan 2023 08:11:33 +0100 [thread overview]
Message-ID: <Y7+ypc+uffjS52Js@localhost.localdomain> (raw)
In-Reply-To: <20230111142029.318092-1-kamil.maziarz@intel.com>
On Wed, Jan 11, 2023 at 03:20:29PM +0100, Kamil Maziarz wrote:
> From: Rafal Rogalski <rafalx.rogalski@intel.com>
>
> Commit f70b9d5f4426 ("ice: check for a leaf node presence") prevents
> removal of VSI with leaf nodes. This is an expectation of driver action
> induced by FW requirements. However, this caused RDMA scheduler config
> removal to fail every time a qdisc was added or deleted.
>
> Fix this by ignoring errors from RDMA configuration removal when qdisc are
> being reconfigured.
>
> Fixes: ff7e93219442 ("ice: Fix failure to re-add LAN/RDMA Tx queues")
> Signed-off-by: Rafal Rogalski <rafalx.rogalski@intel.com>
> Signed-off-by: Kamil Maziarz <kamil.maziarz@intel.com>
> ---
Thanks for the changes. It is good practice to include a changelog here.
Something like:
v1 --> v2:
- check for -EBUSY etc.
---
Is your patch rebased on dev-queue? I am pretty sure that
ice_vsi_rebuild looks different on the current dev-queue; please check
that.
> drivers/net/ethernet/intel/ice/ice.h | 1 +
> drivers/net/ethernet/intel/ice/ice_lib.c | 2 +-
> drivers/net/ethernet/intel/ice/ice_main.c | 3 +++
> 3 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
> index 2f0b604abc5e..b572d07bc126 100644
> --- a/drivers/net/ethernet/intel/ice/ice.h
> +++ b/drivers/net/ethernet/intel/ice/ice.h
> @@ -306,6 +306,7 @@ enum ice_pf_state {
> ICE_PHY_INIT_COMPLETE,
> ICE_FD_VF_FLUSH_CTX, /* set at FD Rx IRQ or timeout */
> ICE_AUX_ERR_PENDING,
> + ICE_SETUP_TC_QDISC,
> ICE_STATE_NBITS /* must be last */
> };
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> index 22bcb414546a..0ee3acbea108 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> @@ -3448,7 +3448,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
>
> ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
> ret = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
> - if (ret)
> + if (ret && !(test_bit(ICE_SETUP_TC_QDISC, pf->state) && ret == -EBUSY))
It is fine for me in the current state, but do we really need to do this
check only when setting a TC qdisc? Doesn't the same thing happen when
the number of queues is changed from ethtool? (Maybe it does not,
because that change is blocked while RDMA is active.) Maybe the whole
ice_setup_tc_mqprio_qdisc() should return -EBUSY or -EINVAL if RDMA is
active? Is it valid to add a new qdisc while RDMA is active?
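To make it concrete, I mean something like an early bail-out at the top of ice_setup_tc_mqprio_qdisc(). Untested sketch; ice_is_rdma_active() is a made-up placeholder for whatever condition actually fits, not an existing ice helper:

```c
/* Untested sketch -- ice_is_rdma_active() is a placeholder, not a real
 * ice helper. The idea is to reject the qdisc setup up front instead of
 * special-casing the -EBUSY from ice_rm_vsi_rdma_cfg() later on.
 */
static int
ice_setup_tc_mqprio_qdisc(struct net_device *netdev, void *type_data)
{
	struct ice_netdev_priv *np = netdev_priv(netdev);
	struct ice_pf *pf = np->vsi->back;

	if (ice_is_rdma_active(pf))
		return -EBUSY;

	/* ... existing setup path unchanged ... */
}
```

That way the ICE_SETUP_TC_QDISC state bit and the check in ice_vsi_rebuild() would not be needed at all, if rejecting the qdisc while RDMA is active is acceptable.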
> dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
> vsi->vsi_num, ret);
> ice_vsi_free_q_vectors(vsi);
> diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
> index a9a7f8b52140..5ff137645f08 100644
> --- a/drivers/net/ethernet/intel/ice/ice_main.c
> +++ b/drivers/net/ethernet/intel/ice/ice_main.c
> @@ -8706,6 +8706,7 @@ static int ice_setup_tc_mqprio_qdisc(struct net_device *netdev, void *type_data)
> cur_txq = vsi->num_txq;
> cur_rxq = vsi->num_rxq;
>
> + set_bit(ICE_SETUP_TC_QDISC, pf->state);
> /* proceed with rebuild main VSI using correct number of queues */
> ret = ice_vsi_rebuild(vsi, false);
> if (ret) {
> @@ -8716,9 +8717,11 @@ static int ice_setup_tc_mqprio_qdisc(struct net_device *netdev, void *type_data)
> clear_bit(ICE_RESET_FAILED, pf->state);
> if (ice_vsi_rebuild(vsi, false)) {
> dev_err(dev, "Rebuild of main VSI failed again\n");
> + clear_bit(ICE_SETUP_TC_QDISC, pf->state);
> return ret;
> }
> }
> + clear_bit(ICE_SETUP_TC_QDISC, pf->state);
>
> vsi->all_numtc = num_tcf;
> vsi->all_enatc = ena_tc_qdisc;
> --
> 2.31.1
>
> _______________________________________________
> Intel-wired-lan mailing list
> Intel-wired-lan@osuosl.org
> https://lists.osuosl.org/mailman/listinfo/intel-wired-lan
Thread overview: 6+ messages
2023-01-11 14:20 [Intel-wired-lan] [PATCH net v1] ice: Ignore RDMA message on setup tc qdisc Kamil Maziarz
2023-01-12 7:11 ` Michal Swiatkowski [this message]
2023-01-12 13:54 ` Rogalski, RafalX
2023-01-13 6:30 ` Michal Swiatkowski
2023-01-18 18:01 ` Tony Nguyen
2023-01-23 11:43 ` Rogalski, RafalX