From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: intel-wired-lan@lists.osuosl.org,
Michal Schmidt <mschmidt@redhat.com>,
Jakub Kicinski <kuba@kernel.org>, Jiri Pirko <jiri@resnulli.us>
Cc: netdev@vger.kernel.org, Simon Horman <horms@kernel.org>,
Tony Nguyen <anthony.l.nguyen@intel.com>,
Michal Swiatkowski <michal.swiatkowski@linux.intel.com>,
bruce.richardson@intel.com,
Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
padraig.j.connolly@intel.com, ananth.s@intel.com,
timothy.miskell@intel.com,
Jacob Keller <jacob.e.keller@intel.com>,
Lukasz Czapnik <lukasz.czapnik@intel.com>,
Aleksandr Loktionov <aleksandr.loktionov@intel.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Paolo Abeni <pabeni@redhat.com>,
Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
Przemek Kitszel <przemyslaw.kitszel@intel.com>,
Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Subject: [PATCH iwl-next v1 10/15] iavf: increase max number of queues to 256
Date: Fri, 8 May 2026 14:42:03 +0200
Message-ID: <20260508124208.11622-11-przemyslaw.kitszel@intel.com>
In-Reply-To: <20260508124208.11622-1-przemyslaw.kitszel@intel.com>

Increase the maximum number of queues that the driver will handle to 256.
Keep the legacy limit in the virtchnl handling of iavf_map_queues(), as
the "old" v1 opcodes accept only up to 16 queues.

Reviewed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
---
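Note on the ring_mask conversion below: the field grows from a plain u32
to a 256-bit bitmap, so BIT() arithmetic is replaced by the bitmap
helpers. An illustrative before/after sketch (not part of the diff; it
uses only identifiers from this patch and helpers from <linux/bitmap.h>):

	/* before: plain 32-bit mask, inherently limited to 32 queues */
	u32 ring_mask = 0;
	ring_mask |= BIT(r_idx);		/* mark ring pair r_idx */

	/* after: bitmap wide enough for 256 queues */
	DECLARE_BITMAP(ring_mask, IAVF_MAX_REQ_QUEUES);
	set_bit(r_idx, ring_mask);		/* mark ring pair r_idx */

	/* legacy v1 message: carry only the low 16 bits of the bitmap */
	u16 legacy_map = bitmap_read(ring_mask, 0, IAVF_MAX_REQ_QUEUES_VCV1);
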
drivers/net/ethernet/intel/iavf/iavf.h | 4 +++-
drivers/net/ethernet/intel/iavf/iavf_main.c | 4 ++--
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c | 15 ++++++++-------
3 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
index a0c42f2357fb..569686d34ff4 100644
--- a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -87,8 +87,10 @@ struct iavf_vsi {
#define IAVF_TX_DESC(R, i) (&(((struct iavf_tx_desc *)((R)->desc))[i]))
#define IAVF_TX_CTXTDESC(R, i) \
(&(((struct iavf_tx_context_desc *)((R)->desc))[i]))
+
/* for "old" virtchnl opcodes that accept up to 16 queues */
#define IAVF_MAX_REQ_QUEUES_VCV1 16
+#define IAVF_MAX_REQ_QUEUES 256
#define IAVF_HKEY_ARRAY_SIZE ((IAVF_VFQF_HKEY_MAX_INDEX + 1) * 4)
#define IAVF_HLUT_ARRAY_SIZE ((IAVF_VFQF_HLUT_MAX_INDEX + 1) * 4)
@@ -108,7 +110,7 @@ struct iavf_q_vector {
struct napi_struct napi;
struct iavf_ring_container rx;
struct iavf_ring_container tx;
- u32 ring_mask;
+ DECLARE_BITMAP(ring_mask, IAVF_MAX_REQ_QUEUES);
u8 itr_countdown; /* when 0 should adjust adaptive ITR */
u8 num_ringpairs; /* total number of ring pairs in vector */
u16 v_idx; /* index in the vsi->q_vector array. */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 8149b01ae24a..abc0fe070ee7 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -439,7 +439,7 @@ iavf_map_vector_to_rxq(struct iavf_adapter *adapter, int v_idx, int r_idx)
q_vector->rx.count++;
q_vector->rx.next_update = jiffies + 1;
q_vector->rx.target_itr = ITR_TO_REG(rx_ring->itr_setting);
- q_vector->ring_mask |= BIT(r_idx);
+ set_bit(r_idx, q_vector->ring_mask);
wr32(hw, IAVF_VFINT_ITRN1(IAVF_RX_ITR, q_vector->reg_idx),
q_vector->rx.current_itr >> 1);
q_vector->rx.current_itr = q_vector->rx.target_itr;
@@ -5362,7 +5362,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_set_master(pdev);
netdev = alloc_etherdev_mq(sizeof(struct iavf_adapter),
- IAVF_MAX_REQ_QUEUES_VCV1);
+ IAVF_MAX_REQ_QUEUES);
if (!netdev) {
err = -ENOMEM;
goto err_alloc_etherdev;
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index d3b5398b6130..9102bc4bddb0 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -260,19 +260,19 @@ int iavf_send_vf_ptp_caps_msg(struct iavf_adapter *adapter)
**/
static void iavf_validate_num_queues(struct iavf_adapter *adapter)
{
- if (adapter->vf_res->num_queue_pairs > IAVF_MAX_REQ_QUEUES_VCV1) {
+ if (adapter->vf_res->num_queue_pairs > IAVF_MAX_REQ_QUEUES) {
struct virtchnl_vsi_resource *vsi_res;
int i;
dev_info(&adapter->pdev->dev, "Received %d queues, but can only have a max of %d\n",
adapter->vf_res->num_queue_pairs,
- IAVF_MAX_REQ_QUEUES_VCV1);
+ IAVF_MAX_REQ_QUEUES);
dev_info(&adapter->pdev->dev, "Fixing by reducing queues to %d\n",
- IAVF_MAX_REQ_QUEUES_VCV1);
- adapter->vf_res->num_queue_pairs = IAVF_MAX_REQ_QUEUES_VCV1;
+ IAVF_MAX_REQ_QUEUES);
+ adapter->vf_res->num_queue_pairs = IAVF_MAX_REQ_QUEUES;
for (i = 0; i < adapter->vf_res->num_vsis; i++) {
vsi_res = &adapter->vf_res->vsi_res[i];
- vsi_res->num_queue_pairs = IAVF_MAX_REQ_QUEUES_VCV1;
+ vsi_res->num_queue_pairs = IAVF_MAX_REQ_QUEUES;
}
}
}
@@ -554,8 +554,9 @@ void iavf_map_queues(struct iavf_adapter *adapter)
vecmap->vsi_id = adapter->vsi_res->vsi_id;
vecmap->vector_id = v_idx + NONQ_VECS;
- vecmap->txq_map = q_vector->ring_mask;
- vecmap->rxq_map = q_vector->ring_mask;
+ vecmap->txq_map = bitmap_read(q_vector->ring_mask, 0,
+ IAVF_MAX_REQ_QUEUES_VCV1);
+ vecmap->rxq_map = vecmap->txq_map;
vecmap->rxitr_idx = IAVF_RX_ITR;
vecmap->txitr_idx = IAVF_TX_ITR;
}
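
For context on the iavf_map_queues() hunk above: bitmap_read(map, start,
nbits) returns the requested bits as an unsigned long, so reading only
the first IAVF_MAX_REQ_QUEUES_VCV1 (16) bits keeps the value within the
16-bit txq_map/rxq_map fields of struct virtchnl_vector_map (assuming
the upstream virtchnl layout). Queues beyond the first 16 are expected
to be mapped via the new opcodes introduced later in this series.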
--
2.39.3