From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: "Loktionov, Aleksandr" <aleksandr.loktionov@intel.com>,
"intel-wired-lan@lists.osuosl.org"
<intel-wired-lan@lists.osuosl.org>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"Schmidt, Michal" <mschmidt@redhat.com>,
Jakub Kicinski <kuba@kernel.org>, Jiri Pirko <jiri@resnulli.us>,
Simon Horman <horms@kernel.org>,
"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
Michal Swiatkowski <michal.swiatkowski@linux.intel.com>,
"Richardson, Bruce" <bruce.richardson@intel.com>,
"Medvedkin, Vladimir" <vladimir.medvedkin@intel.com>,
"Connolly, Padraig J" <padraig.j.connolly@intel.com>,
"S, Ananth" <ananth.s@intel.com>,
"Miskell, Timothy" <timothy.miskell@intel.com>,
"Keller, Jacob E" <jacob.e.keller@intel.com>,
"Czapnik, Lukasz" <lukasz.czapnik@intel.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Paolo Abeni <pabeni@redhat.com>,
"Saeed Mahameed" <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>
Subject: Re: [PATCH iwl-next v1 12/15] ice: introduce handling of virtchnl LARGE VF opcodes
Date: Mon, 11 May 2026 11:39:05 +0200 [thread overview]
Message-ID: <f1b79865-7390-4512-aeb1-3aeac2c176d9@intel.com> (raw)
In-Reply-To: <IA3PR11MB8986A9CD58E2977B0E79B8D3E53D2@IA3PR11MB8986.namprd11.prod.outlook.com>
On 5/8/26 18:55, Loktionov, Aleksandr wrote:
>
>
>> -----Original Message-----
>>
>> From: Brett Creeley <brett.creeley@intel.com>
>>
>> With the new virtchnl offload/capability, VFs are able to make use of
>> more than 16 queues. But the old opcodes were designed with a max of
>> 16 queues, so new ones were added (by the iavf/virtchnl commit of this
>> series): VIRTCHNL_OP_GET_MAX_RSS_QREGION, VIRTCHNL_OP_ENABLE_QUEUES_V2,
>> VIRTCHNL_OP_DISABLE_QUEUES_V2, VIRTCHNL_OP_MAP_QUEUE_VECTOR.
>>
>> If a VF wishes to request more than 16 queues, it should first make
>> sure that the PF supports the VIRTCHNL_VF_LARGE_NUM_QPAIRS capability.
>>
>> Co-developed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
>> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
>> Co-developed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> # msglen val
>> Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
>> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
>> ---
>> drivers/net/ethernet/intel/ice/ice_vf_lib.h | 1 +
>> drivers/net/ethernet/intel/ice/virt/queues.h | 3 +
>> .../net/ethernet/intel/ice/virt/allowlist.c | 8 +
>> drivers/net/ethernet/intel/ice/virt/queues.c | 324 ++++++++++++++++++
>> 4 files changed, 336 insertions(+)
>>
>> diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
>> b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
>> index 1b56f7150eb7..5411eaa1761c 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
>> +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
>> @@ -125,6 +125,7 @@ struct ice_vf_ops {
>> void (*clear_reset_trigger)(struct ice_vf *vf);
>> void (*irq_close)(struct ice_vf *vf);
>> void (*post_vsi_rebuild)(struct ice_vf *vf);
>
> ...
>
>> +/**
>> + * ice_vc_map_q_vector_msg - message handling for VIRTCHNL_OP_MAP_QUEUE_VECTOR
>> + * @vf: source of the request
>> + * @msg: message to handle
>> + * @msglen: length of @msg
>> + *
>> + * Return: 0 on success or negative on error
>> + */
>> +int ice_vc_map_q_vector_msg(struct ice_vf *vf, u8 *msg, u16 msglen)
>> +{
>> + enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
>> + struct virtchnl_queue_vector_maps *qv_maps;
>> + struct ice_vsi *vsi;
>> +
>> + qv_maps = (struct virtchnl_queue_vector_maps *)msg;
>> +
>> + if (!ice_vc_validate_qv_maps(vf, qv_maps, msglen)) {
>> + v_ret = VIRTCHNL_STATUS_ERR_PARAM;
>> + goto error_param;
>> + }
>> +
>> + for (int i = 0; i < qv_maps->num_qv_maps; i++) {
>> +		struct virtchnl_queue_vector *qv_map = &qv_maps->qv_maps[i];
>> + struct ice_q_vector *q_vector;
>> + u16 vector_id;
>> + int vsi_q_id;
>> +
>> + vsi = ice_get_vf_vsi(vf);
>> + vsi_q_id = qv_map->queue_id;
>> + vector_id = qv_map->vector_id;
>> +
>> + if (!vsi) {
>> + v_ret = VIRTCHNL_STATUS_ERR_PARAM;
>> + goto error_param;
>> + }
>> +
>> + q_vector = vf->vf_ops->get_q_vector(vsi, vector_id);
>> +
>> + if (!q_vector) {
>> + v_ret = VIRTCHNL_STATUS_ERR_PARAM;
>> + goto error_param;
>> + }
>> +
>> + if (!ice_vc_isvalid_q_id(vsi, vsi_q_id))
> This function is declared as returning a linux errno, not the enum.
> And in this case there is no reply to the VF (the goto error_param path
> is skipped), couldn't it lead to a VF stall?
good catch, will fix it
it wouldn't be a VF stall, but an ugly "PF not replied to our request" msg
>
>> + return VIRTCHNL_STATUS_ERR_PARAM;
>> +
>> + if (qv_map->queue_type == VIRTCHNL_QUEUE_TYPE_RX)
>> + ice_cfg_rxq_interrupt(vsi, vsi_q_id,
>> + q_vector->vf_reg_idx,
>> + qv_map->itr_idx);
>> + else if (qv_map->queue_type == VIRTCHNL_QUEUE_TYPE_TX)
>> + ice_cfg_txq_interrupt(vsi, vsi_q_id,
>> + q_vector->vf_reg_idx,
>> + qv_map->itr_idx);
>> + }
>> +
>> +error_param:
>> + return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_MAP_QUEUE_VECTOR,
>> + v_ret, NULL, 0);
>> +}
>> --
>> 2.39.3
>
Thread overview: 28+ messages
2026-05-08 12:41 [PATCH iwl-next v1 00/15] devlink, mlx5, iavf, ice: XLVF for iavf Przemek Kitszel
2026-05-08 12:41 ` [PATCH iwl-next v1 01/15] devlink, mlx5: add init/fini ops for shared devlink Przemek Kitszel
2026-05-11 11:36 ` Jiri Pirko
2026-05-11 13:26 ` Przemek Kitszel
2026-05-08 12:41 ` [PATCH iwl-next v1 02/15] ice: use shared devlink to store ice_adapters instead of custom xarray Przemek Kitszel
2026-05-08 12:41 ` [PATCH iwl-next v1 03/15] ice: simplify ice_vc_dis_qs_msg() a little Przemek Kitszel
2026-05-08 13:31 ` Loktionov, Aleksandr
2026-05-08 12:41 ` [PATCH iwl-next v1 04/15] ice: add VF queue ena/dis helper functions Przemek Kitszel
2026-05-08 13:37 ` Loktionov, Aleksandr
2026-05-11 9:33 ` Przemek Kitszel
2026-05-08 12:41 ` [PATCH iwl-next v1 05/15] ice: add helpers for Global RSS LUT alloc, free, vsi_update Przemek Kitszel
2026-05-08 13:38 ` Loktionov, Aleksandr
2026-05-08 12:41 ` [PATCH iwl-next v1 06/15] ice: rename ICE_MAX_RSS_QS_PER_VF to ICE_MAX_QS_PER_VF_VCV1 Przemek Kitszel
2026-05-08 12:42 ` [PATCH iwl-next v1 07/15] ice: bump to 256qs for VF Przemek Kitszel
2026-05-08 12:42 ` [PATCH iwl-next v1 08/15] iavf: extend iavf_configure_queues() to support more queues Przemek Kitszel
2026-05-08 12:42 ` [PATCH iwl-next v1 09/15] iavf: temporary rename of IAVF_MAX_REQ_QUEUES to IAVF_MAX_REQ_QUEUES_VCV1 Przemek Kitszel
2026-05-08 12:42 ` [PATCH iwl-next v1 10/15] iavf: increase max number of queues to 256 Przemek Kitszel
2026-05-08 16:49 ` Loktionov, Aleksandr
2026-05-11 9:37 ` Przemek Kitszel
2026-05-08 12:42 ` [PATCH iwl-next v1 11/15] iavf: use new opcodes to request more than 16 queues Przemek Kitszel
2026-05-08 12:42 ` [PATCH iwl-next v1 12/15] ice: introduce handling of virtchnl LARGE VF opcodes Przemek Kitszel
2026-05-08 16:55 ` Loktionov, Aleksandr
2026-05-11 9:39 ` Przemek Kitszel [this message]
2026-05-08 12:42 ` [PATCH iwl-next v1 13/15] devlink: give user option to allocate resources Przemek Kitszel
2026-05-08 12:42 ` [PATCH iwl-next v1 14/15] ice: represent RSS LUTs as devlink resources Przemek Kitszel
2026-05-08 17:03 ` Loktionov, Aleksandr
2026-05-11 9:41 ` Przemek Kitszel
2026-05-08 12:42 ` [PATCH iwl-next v1 15/15] ice: support up to 256 VF queues Przemek Kitszel