From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: intel-wired-lan@lists.osuosl.org,
Michal Schmidt <mschmidt@redhat.com>,
Jakub Kicinski <kuba@kernel.org>, Jiri Pirko <jiri@resnulli.us>
Cc: netdev@vger.kernel.org, Simon Horman <horms@kernel.org>,
Tony Nguyen <anthony.l.nguyen@intel.com>,
Michal Swiatkowski <michal.swiatkowski@linux.intel.com>,
bruce.richardson@intel.com,
Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
padraig.j.connolly@intel.com, ananth.s@intel.com,
timothy.miskell@intel.com,
Jacob Keller <jacob.e.keller@intel.com>,
Lukasz Czapnik <lukasz.czapnik@intel.com>,
Aleksandr Loktionov <aleksandr.loktionov@intel.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Paolo Abeni <pabeni@redhat.com>,
Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
Przemek Kitszel <przemyslaw.kitszel@intel.com>,
Wojciech Drewek <wojciech.drewek@intel.com>,
Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Subject: [PATCH iwl-next v1 04/15] ice: add VF queue ena/dis helper functions
Date: Fri, 8 May 2026 14:41:57 +0200
Message-ID: <20260508124208.11622-5-przemyslaw.kitszel@intel.com>
In-Reply-To: <20260508124208.11622-1-przemyslaw.kitszel@intel.com>
From: Brett Creeley <brett.creeley@intel.com>
Add three new functions:
- ice_vf_vsi_dis_single_rxq()
- ice_vf_vsi_ena_single_rxq()
- ice_vf_vsi_ena_single_txq()
(ice_vf_vsi_dis_single_txq() was introduced earlier.)

These functions wrap the operations needed to enable or disable a single
Tx/Rx VF queue, which are:
- check whether the queue is already in the desired state
- perform the enable/disable operation
- bookkeeping of the queue's state

A future commit will use them from another call site.
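For illustration only, here is a minimal userspace sketch of that
check/act/bookkeep pattern; the types and names in it are invented
stand-ins, not ice driver code:

/*
 * Minimal, self-contained sketch of the check/act/bookkeep pattern;
 * all types and names below are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define MOCK_MAX_QS 16

struct mock_vf {
	bool rxq_ena[MOCK_MAX_QS];	/* stand-in for the rxq_ena bitmap */
};

/* stand-in for the hardware ring control call; always succeeds here */
static int mock_ctrl_rx_ring(int q_id, bool enable)
{
	printf("%s Rx ring %d\n", enable ? "enable" : "disable", q_id);
	return 0;
}

/* same shape as ice_vf_vsi_ena_single_rxq(): skip, act, then bookkeep */
static int mock_ena_single_rxq(struct mock_vf *vf, int q_id)
{
	int err;

	if (vf->rxq_ena[q_id])		/* already enabled: nothing to do */
		return 0;

	err = mock_ctrl_rx_ring(q_id, true);
	if (err)
		return err;

	vf->rxq_ena[q_id] = true;	/* remember the new state */
	return 0;
}

int main(void)
{
	struct mock_vf vf = { 0 };

	mock_ena_single_rxq(&vf, 3);	/* enables the ring */
	mock_ena_single_rxq(&vf, 3);	/* no-op the second time */
	return 0;
}

In the driver itself the state lives in the vf->rxq_ena/txq_ena bitmaps
and the hardware call is ice_vsi_ctrl_one_rx_ring(), as the diff below
shows.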
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
Reviewed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
drivers/net/ethernet/intel/ice/virt/queues.c | 111 ++++++++++++++-----
1 file changed, 84 insertions(+), 27 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/virt/queues.c b/drivers/net/ethernet/intel/ice/virt/queues.c
index 6e4ec681fd07..28adc24197b8 100644
--- a/drivers/net/ethernet/intel/ice/virt/queues.c
+++ b/drivers/net/ethernet/intel/ice/virt/queues.c
@@ -224,6 +224,57 @@ void ice_vf_ena_rxq_interrupt(struct ice_vsi *vsi, u32 q_idx)
wr32(hw, QINT_RQCTL(pfq), reg | QINT_RQCTL_CAUSE_ENA_M);
}
+/**
+ * ice_vf_vsi_ena_single_rxq - enable single Rx queue based on relative q_id
+ * @vf: VF to enable queue for
+ * @vsi: VSI for the VF
+ * @q_id: VSI relative (0-based) queue ID
+ *
+ * Enable the Rx queue passed in.
+ *
+ * Return: 0 on success or negative on error.
+ */
+static int ice_vf_vsi_ena_single_rxq(struct ice_vf *vf, struct ice_vsi *vsi,
+ u16 q_id)
+{
+ int err;
+
+ if (test_bit(q_id, vf->rxq_ena))
+ return 0;
+
+ err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_id, true);
+ if (err) {
+ dev_err(ice_pf_to_dev(vsi->back), "Failed to enable Rx ring %d on VSI %d\n",
+ q_id, vsi->vsi_num);
+ return err;
+ }
+
+ ice_vf_ena_rxq_interrupt(vsi, q_id);
+ set_bit(q_id, vf->rxq_ena);
+
+ return 0;
+}
+
+/**
+ * ice_vf_vsi_ena_single_txq - enable single Tx queue based on relative q_id
+ * @vf: VF to enable queue for
+ * @vsi: VSI for the VF
+ * @q_id: VSI relative (0-based) queue ID
+ *
+ * Enable the Tx queue's interrupt. Note that the Tx queue(s) should have
+ * already been configured/enabled in VIRTCHNL_OP_CONFIG_QUEUES so this
+ * function only enables the interrupt associated with the q_id.
+ */
+static void ice_vf_vsi_ena_single_txq(struct ice_vf *vf, struct ice_vsi *vsi,
+ u16 q_id)
+{
+ if (test_bit(q_id, vf->txq_ena))
+ return;
+
+ ice_vf_ena_txq_interrupt(vsi, q_id);
+ set_bit(q_id, vf->txq_ena);
+}
+
/**
* ice_vc_ena_qs_msg
* @vf: pointer to the VF info
@@ -272,34 +323,20 @@ int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)
goto error_param;
}
- /* Skip queue if enabled */
- if (test_bit(vf_q_id, vf->rxq_ena))
- continue;
-
- if (ice_vsi_ctrl_one_rx_ring(vsi, true, vf_q_id, true)) {
- dev_err(ice_pf_to_dev(vsi->back), "Failed to enable Rx ring %d on VSI %d\n",
- vf_q_id, vsi->vsi_num);
+ if (ice_vf_vsi_ena_single_rxq(vf, vsi, vf_q_id)) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
goto error_param;
}
-
- ice_vf_ena_rxq_interrupt(vsi, vf_q_id);
- set_bit(vf_q_id, vf->rxq_ena);
}
q_map = vqs->tx_queues;
for_each_set_bit(vf_q_id, &q_map, ICE_MAX_RSS_QS_PER_VF) {
if (!ice_vc_isvalid_q_id(vsi, vf_q_id)) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
goto error_param;
}
- /* Skip queue if enabled */
- if (test_bit(vf_q_id, vf->txq_ena))
- continue;
-
- ice_vf_ena_txq_interrupt(vsi, vf_q_id);
- set_bit(vf_q_id, vf->txq_ena);
+ ice_vf_vsi_ena_single_txq(vf, vsi, vf_q_id);
}
/* Set flag to indicate that queues are enabled */
@@ -351,6 +388,36 @@ int ice_vf_vsi_dis_single_txq(struct ice_vf *vf, struct ice_vsi *vsi, u16 q_id)
return 0;
}
+/**
+ * ice_vf_vsi_dis_single_rxq - disable single Rx queue based on relative q_id
+ * @vf: VF to disable queue for
+ * @vsi: VSI for the VF
+ * @q_id: VSI relative (0-based) queue ID
+ *
+ * Attempt to disable the Rx queue passed in. If the Rx queue was successfully
+ * disabled, then clear the q_id bit in the enabled queues bitmap.
+ */
+static int ice_vf_vsi_dis_single_rxq(struct ice_vf *vf, struct ice_vsi *vsi,
+ u16 q_id)
+{
+ int err;
+
+ if (!test_bit(q_id, vf->rxq_ena))
+ return 0;
+
+ err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_id, true);
+ if (err) {
+ dev_err(ice_pf_to_dev(vsi->back), "Failed to stop Rx ring %d on VSI %d\n",
+ q_id, vsi->vsi_num);
+ return err;
+ }
+
+ /* Clear enabled queues flag */
+ clear_bit(q_id, vf->rxq_ena);
+
+ return 0;
+}
+
/**
* ice_vc_dis_qs_msg
* @vf: pointer to the VF info
@@ -415,20 +482,10 @@ int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
goto error_param;
}
- /* Skip queue if not enabled */
- if (!test_bit(vf_q_id, vf->rxq_ena))
- continue;
-
- if (ice_vsi_ctrl_one_rx_ring(vsi, false, vf_q_id,
- true)) {
- dev_err(ice_pf_to_dev(vsi->back), "Failed to stop Rx ring %d on VSI %d\n",
- vf_q_id, vsi->vsi_num);
+ if (ice_vf_vsi_dis_single_rxq(vf, vsi, vf_q_id)) {
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
goto error_param;
}
-
- /* Clear enabled queues flag */
- clear_bit(vf_q_id, vf->rxq_ena);
}
}
--
2.39.3