From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: intel-wired-lan@lists.osuosl.org, Michal Schmidt, Jakub Kicinski,
	Jiri Pirko
Cc: netdev@vger.kernel.org, Simon Horman, Tony Nguyen, Michal Swiatkowski,
	bruce.richardson@intel.com, Vladimir Medvedkin, padraig.j.connolly@intel.com,
	ananth.s@intel.com, timothy.miskell@intel.com, Jacob Keller,
	Lukasz Czapnik, Aleksandr Loktionov, Andrew Lunn,
Miller" , Eric Dumazet , Paolo Abeni , Saeed Mahameed , Leon Romanovsky , Tariq Toukan , Mark Bloch , Przemek Kitszel Subject: [PATCH iwl-next v1 12/15] ice: introduce handling of virtchnl LARGE VF opcodes Date: Fri, 8 May 2026 14:42:05 +0200 Message-Id: <20260508124208.11622-13-przemyslaw.kitszel@intel.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20260508124208.11622-1-przemyslaw.kitszel@intel.com> References: <20260508124208.11622-1-przemyslaw.kitszel@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: Brett Creeley With new virtchnl offload/capability VFs are able to make use of more than 16 queues. But to old opcodes were designed with a max of 16 queues, so new ones were added (by iavf/virtchnl commit of this series): VIRTCHNL_OP_GET_MAX_RSS_QREGION, VIRTCHNL_OP_ENABLE_QUEUES_V2, VIRTCHNL_OP_DISABLE_QUEUES_V2, VIRTCHNL_OP_MAP_QUEUE_VECTOR. If a VF wishes to request >16 queues it should first make sure that the PF supports the VIRTCHNL_VF_LARGE_NUM_QPAIRS capability. Co-developed-by: Przemek Kitszel Signed-off-by: Przemek Kitszel Co-developed-by: Aleksandr Loktionov # msglen val Signed-off-by: Aleksandr Loktionov Signed-off-by: Brett Creeley --- drivers/net/ethernet/intel/ice/ice_vf_lib.h | 1 + drivers/net/ethernet/intel/ice/virt/queues.h | 3 + .../net/ethernet/intel/ice/virt/allowlist.c | 8 + drivers/net/ethernet/intel/ice/virt/queues.c | 324 ++++++++++++++++++ 4 files changed, 336 insertions(+) diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h index 1b56f7150eb7..5411eaa1761c 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h @@ -125,6 +125,7 @@ struct ice_vf_ops { void (*clear_reset_trigger)(struct ice_vf *vf); void (*irq_close)(struct ice_vf *vf); void (*post_vsi_rebuild)(struct ice_vf *vf); + struct ice_q_vector *(*get_q_vector)(struct ice_vsi *vsi, u16 vec_id); }; /* Virtchnl/SR-IOV config info */ diff --git a/drivers/net/ethernet/intel/ice/virt/queues.h b/drivers/net/ethernet/intel/ice/virt/queues.h index c4a792cecea1..223f609dd4f3 100644 --- a/drivers/net/ethernet/intel/ice/virt/queues.h +++ b/drivers/net/ethernet/intel/ice/virt/queues.h @@ -16,5 +16,8 @@ int ice_vc_cfg_q_bw(struct ice_vf *vf, u8 *msg); int ice_vc_cfg_q_quanta(struct ice_vf *vf, u8 *msg); int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg); int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg); +int ice_vc_ena_qs_v2_msg(struct ice_vf *vf, u8 *msg, u16 msglen); +int ice_vc_dis_qs_v2_msg(struct ice_vf *vf, u8 *msg, u16 msglen); +int ice_vc_map_q_vector_msg(struct ice_vf *vf, u8 *msg, u16 msglen); #endif /* _ICE_VIRT_QUEUES_H_ */ diff --git a/drivers/net/ethernet/intel/ice/virt/allowlist.c b/drivers/net/ethernet/intel/ice/virt/allowlist.c index a07efec19c45..ef769b843c6f 100644 --- a/drivers/net/ethernet/intel/ice/virt/allowlist.c +++ b/drivers/net/ethernet/intel/ice/virt/allowlist.c @@ -95,6 +95,13 @@ static const u32 tc_allowlist_opcodes[] = { VIRTCHNL_OP_CONFIG_QUANTA, }; +static const u32 large_num_qpairs_allowlist_opcodes[] = { + VIRTCHNL_OP_GET_MAX_RSS_QREGION, + VIRTCHNL_OP_ENABLE_QUEUES_V2, + VIRTCHNL_OP_DISABLE_QUEUES_V2, + VIRTCHNL_OP_MAP_QUEUE_VECTOR, +}; + struct allowlist_opcode_info { const u32 *opcodes; size_t size; @@ -117,6 +124,7 @@ static const struct allowlist_opcode_info allowlist_opcodes[] = { ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN_V2, vlan_v2_allowlist_opcodes), 
 	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_QOS, tc_allowlist_opcodes),
 	ALLOW_ITEM(VIRTCHNL_VF_CAP_PTP, ptp_allowlist_opcodes),
+	ALLOW_ITEM(VIRTCHNL_VF_LARGE_NUM_QPAIRS, large_num_qpairs_allowlist_opcodes),
 };
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/virt/queues.c b/drivers/net/ethernet/intel/ice/virt/queues.c
index 1d9f69026d1b..b99f18a25024 100644
--- a/drivers/net/ethernet/intel/ice/virt/queues.c
+++ b/drivers/net/ethernet/intel/ice/virt/queues.c
@@ -1021,3 +1021,330 @@ int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg)
 				     v_ret, (u8 *)vfres, sizeof(*vfres));
 }
 
+static bool ice_vc_supported_queue_type(s32 queue_type)
+{
+	return queue_type == VIRTCHNL_QUEUE_TYPE_RX ||
+	       queue_type == VIRTCHNL_QUEUE_TYPE_TX;
+}
+
+/**
+ * ice_vc_validate_qs_v2_msg - validate all qs_msg parameters
+ * @vf: VF the message was received from
+ * @qs_msg: contents of the message from the VF
+ * @msglen: length of @qs_msg
+ *
+ * Used to validate both the VIRTCHNL_OP_ENABLE_QUEUES_V2 and
+ * VIRTCHNL_OP_DISABLE_QUEUES_V2 messages. This should always be called before
+ * attempting to enable and/or disable queues on behalf of a VF in response to
+ * the previously mentioned opcodes.
+ *
+ * Return: If all checks succeed, then return true. Otherwise return
+ * false, indicating to the caller that the qs_msg is invalid.
+ */
+static bool ice_vc_validate_qs_v2_msg(struct ice_vf *vf,
+				      struct virtchnl_del_ena_dis_queues *qs_msg,
+				      u16 msglen)
+{
+	if (msglen < virtchnl_struct_size(qs_msg, chunks, 0))
+		return false;
+
+	if (msglen < virtchnl_struct_size(qs_msg, chunks, qs_msg->num_chunks))
+		return false;
+
+	if (!qs_msg->num_chunks)
+		return false;
+
+	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))
+		return false;
+
+	if (!ice_vc_isvalid_vsi_id(vf, qs_msg->vport_id))
+		return false;
+
+	for (int i = 0; i < qs_msg->num_chunks; i++) {
+		u32 max_queue_in_chunk;
+
+		if (!ice_vc_supported_queue_type(qs_msg->chunks[i].type))
+			return false;
+
+		if (!qs_msg->chunks[i].num_queues)
+			return false;
+
+		max_queue_in_chunk = qs_msg->chunks[i].start_queue_id +
+				     qs_msg->chunks[i].num_queues;
+		if (max_queue_in_chunk > vf->num_vf_qs)
+			return false;
+	}
+
+	return true;
+}
+
+#define ice_for_each_q_in_chunk(chunk, q_id) \
+	for ((q_id) = (chunk)->start_queue_id; \
+	     (q_id) < (chunk)->start_queue_id + (chunk)->num_queues; \
+	     (q_id)++)
+
+static int
+ice_vc_ena_rxq_chunk(struct ice_vf *vf, struct virtchnl_queue_chunk *chunk)
+{
+	struct ice_vsi *vsi;
+	u32 vf_qid;
+
+	ice_for_each_q_in_chunk(chunk, vf_qid) {
+		int err;
+
+		vsi = ice_get_vf_vsi(vf);
+		err = ice_vf_vsi_ena_single_rxq(vf, vsi, vf_qid);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int
+ice_vc_ena_txq_chunk(struct ice_vf *vf, struct virtchnl_queue_chunk *chunk)
+{
+	struct ice_vsi *vsi;
+	u32 vf_qid;
+
+	ice_for_each_q_in_chunk(chunk, vf_qid) {
+		vsi = ice_get_vf_vsi(vf);
+		ice_vf_vsi_ena_single_txq(vf, vsi, vf_qid);
+	}
+
+	return 0;
+}
+
+/**
+ * ice_vc_ena_qs_v2_msg - message handling for VIRTCHNL_OP_ENABLE_QUEUES_V2
+ * @vf: source of the request
+ * @msg: message to handle
+ * @msglen: length of @msg
+ *
+ * Return: 0 on success or negative on error.
+ */
+int ice_vc_ena_qs_v2_msg(struct ice_vf *vf, u8 *msg, u16 msglen)
+{
+	struct virtchnl_del_ena_dis_queues *ena_qs_msg =
+		(struct virtchnl_del_ena_dis_queues *)msg;
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
+
+	if (!ice_vc_validate_qs_v2_msg(vf, ena_qs_msg, msglen)) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto error_param;
+	}
+
+	for (int i = 0; i < ena_qs_msg->num_chunks; i++) {
+		struct virtchnl_queue_chunk *chunk = &ena_qs_msg->chunks[i];
+
+		if (chunk->type == VIRTCHNL_QUEUE_TYPE_RX &&
+		    ice_vc_ena_rxq_chunk(vf, chunk))
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		else if (chunk->type == VIRTCHNL_QUEUE_TYPE_TX &&
+			 ice_vc_ena_txq_chunk(vf, chunk))
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+
+		if (v_ret != VIRTCHNL_STATUS_SUCCESS)
+			goto error_param;
+	}
+
+	set_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);
+
+error_param:
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_QUEUES_V2,
+				     v_ret, NULL, 0);
+}
+
+static int
+ice_vc_dis_rxq_chunk(struct ice_vf *vf, struct virtchnl_queue_chunk *chunk)
+{
+	struct ice_vsi *vsi;
+	u32 vf_qid;
+
+	ice_for_each_q_in_chunk(chunk, vf_qid) {
+		int err;
+
+		vsi = ice_get_vf_vsi(vf);
+		err = ice_vf_vsi_dis_single_rxq(vf, vsi, vf_qid);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int
+ice_vc_dis_txq_chunk(struct ice_vf *vf, struct virtchnl_queue_chunk *chunk)
+{
+	struct ice_vsi *vsi;
+	u32 vf_qid;
+
+	ice_for_each_q_in_chunk(chunk, vf_qid) {
+		int err;
+
+		vsi = ice_get_vf_vsi(vf);
+		err = ice_vf_vsi_dis_single_txq(vf, vsi, vf_qid);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+/**
+ * ice_vc_dis_qs_v2_msg - message handling for VIRTCHNL_OP_DISABLE_QUEUES_V2
+ * @vf: source of the request
+ * @msg: message to handle
+ * @msglen: length of @msg
+ *
+ * Return: 0 on success or negative on error.
+ */
+int ice_vc_dis_qs_v2_msg(struct ice_vf *vf, u8 *msg, u16 msglen)
+{
+	struct virtchnl_del_ena_dis_queues *dis_qs_msg =
+		(struct virtchnl_del_ena_dis_queues *)msg;
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
+
+	if (!ice_vc_validate_qs_v2_msg(vf, dis_qs_msg, msglen)) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto error_param;
+	}
+
+	for (int i = 0; i < dis_qs_msg->num_chunks; i++) {
+		struct virtchnl_queue_chunk *chunk = &dis_qs_msg->chunks[i];
+
+		if (chunk->type == VIRTCHNL_QUEUE_TYPE_RX &&
+		    ice_vc_dis_rxq_chunk(vf, chunk))
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		else if (chunk->type == VIRTCHNL_QUEUE_TYPE_TX &&
+			 ice_vc_dis_txq_chunk(vf, chunk))
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+
+		if (v_ret != VIRTCHNL_STATUS_SUCCESS)
+			goto error_param;
+	}
+
+	if (ice_vf_has_no_qs_ena(vf))
+		clear_bit(ICE_VF_STATE_QS_ENA, vf->vf_states);
+
+error_param:
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_QUEUES_V2,
+				     v_ret, NULL, 0);
+}
+
+/**
+ * ice_vc_validate_qv_maps - validate parameters sent in the qv_maps structure
+ * @vf: VF the message was received from
+ * @qv_maps: contents of the message from the VF
+ * @msglen: length of @qv_maps
+ *
+ * Used to validate VIRTCHNL_OP_MAP_QUEUE_VECTOR messages. This should always
+ * be called before attempting to map interrupts to queues.
+ *
+ * Return: true if parameters are valid, false otherwise.
+ */
+static bool ice_vc_validate_qv_maps(struct ice_vf *vf,
+				    struct virtchnl_queue_vector_maps *qv_maps,
+				    u16 msglen)
+{
+	struct ice_vsi *vsi;
+	int total_vectors;
+
+	vsi = vf->pf->vsi[vf->lan_vsi_idx];
+	if (!vsi)
+		return false;
+
+	if (msglen < virtchnl_struct_size(qv_maps, qv_maps, 0))
+		return false;
+
+	if (msglen < virtchnl_struct_size(qv_maps, qv_maps, qv_maps->num_qv_maps))
+		return false;
+
+	if (!qv_maps->num_qv_maps)
+		return false;
+
+	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))
+		return false;
+
+	if (!ice_vc_isvalid_vsi_id(vf, qv_maps->vport_id))
+		return false;
+
+	total_vectors = vsi->num_q_vectors + ICE_NONQ_VECS_VF;
+
+	for (int i = 0; i < qv_maps->num_qv_maps; i++) {
+		if (!ice_vc_supported_queue_type(qv_maps->qv_maps[i].queue_type))
+			return false;
+
+		if (qv_maps->qv_maps[i].queue_id >= vf->num_vf_qs)
+			return false;
+
+		if (qv_maps->qv_maps[i].vector_id >= total_vectors ||
+		    qv_maps->qv_maps[i].vector_id < ICE_NONQ_VECS_VF)
+			return false;
+	}
+
+	return true;
+}
+
+/**
+ * ice_vc_map_q_vector_msg - message handling for VIRTCHNL_OP_MAP_QUEUE_VECTOR
+ * @vf: source of the request
+ * @msg: message to handle
+ * @msglen: length of @msg
+ *
+ * Return: 0 on success or negative on error.
+ */
+int ice_vc_map_q_vector_msg(struct ice_vf *vf, u8 *msg, u16 msglen)
+{
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
+	struct virtchnl_queue_vector_maps *qv_maps;
+	struct ice_vsi *vsi;
+
+	qv_maps = (struct virtchnl_queue_vector_maps *)msg;
+
+	if (!ice_vc_validate_qv_maps(vf, qv_maps, msglen)) {
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+		goto error_param;
+	}
+
+	for (int i = 0; i < qv_maps->num_qv_maps; i++) {
+		struct virtchnl_queue_vector *qv_map = &qv_maps->qv_maps[i];
+		struct ice_q_vector *q_vector;
+		u16 vector_id;
+		int vsi_q_id;
+
+		vsi = ice_get_vf_vsi(vf);
+		vsi_q_id = qv_map->queue_id;
+		vector_id = qv_map->vector_id;
+
+		if (!vsi) {
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+			goto error_param;
+		}
+
+		q_vector = vf->vf_ops->get_q_vector(vsi, vector_id);
+
+		if (!q_vector) {
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+			goto error_param;
+		}
+
+		if (!ice_vc_isvalid_q_id(vsi, vsi_q_id)) {
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+			goto error_param;
+		}
+
+		if (qv_map->queue_type == VIRTCHNL_QUEUE_TYPE_RX)
+			ice_cfg_rxq_interrupt(vsi, vsi_q_id,
+					      q_vector->vf_reg_idx,
+					      qv_map->itr_idx);
+		else if (qv_map->queue_type == VIRTCHNL_QUEUE_TYPE_TX)
+			ice_cfg_txq_interrupt(vsi, vsi_q_id,
+					      q_vector->vf_reg_idx,
+					      qv_map->itr_idx);
+	}
+
+error_param:
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_MAP_QUEUE_VECTOR,
+				     v_ret, NULL, 0);
+}
-- 
2.39.3
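
For readers following the VF side of this exchange, below is a minimal
sketch of how a VF driver might negotiate the capability and build a
VIRTCHNL_OP_ENABLE_QUEUES_V2 message, as described in the commit message.
The struct layout mirrors how this patch accesses the message (vport_id,
num_chunks, a flexible chunks[] array with type/start_queue_id/num_queues);
the numeric flag/opcode values and the vf_send_to_pf() mailbox helper are
illustrative assumptions, not definitions taken from this patch.

/* Hedged VF-side sketch; not part of the patch above. */
#include <stdint.h>
#include <stdlib.h>

#define VIRTCHNL_VF_LARGE_NUM_QPAIRS	0x00400000	/* assumed flag value */
#define VIRTCHNL_OP_ENABLE_QUEUES_V2	107		/* assumed opcode value */
#define VIRTCHNL_QUEUE_TYPE_TX		0
#define VIRTCHNL_QUEUE_TYPE_RX		1

struct virtchnl_queue_chunk {
	int32_t type;			/* VIRTCHNL_QUEUE_TYPE_* */
	uint16_t start_queue_id;
	uint16_t num_queues;
};

struct virtchnl_del_ena_dis_queues {
	uint16_t vport_id;
	uint16_t num_chunks;
	struct virtchnl_queue_chunk chunks[];	/* assumed flexible layout */
};

/* hypothetical VF mailbox send helper; returns 0 on success */
int vf_send_to_pf(uint32_t opcode, void *msg, uint16_t msglen);

int vf_enable_queues_v2(uint32_t pf_caps, uint16_t vport_id,
			uint16_t num_queues)
{
	struct virtchnl_del_ena_dis_queues *msg;
	size_t len;
	int err;

	/* PF only allowlists the _V2 opcodes behind this capability */
	if (!(pf_caps & VIRTCHNL_VF_LARGE_NUM_QPAIRS))
		return -1;

	/* one contiguous chunk per queue type, queue ids 0..num_queues-1 */
	len = sizeof(*msg) + 2 * sizeof(msg->chunks[0]);
	msg = calloc(1, len);
	if (!msg)
		return -1;

	msg->vport_id = vport_id;
	msg->num_chunks = 2;
	msg->chunks[0] = (struct virtchnl_queue_chunk){
		.type = VIRTCHNL_QUEUE_TYPE_RX,
		.start_queue_id = 0,
		.num_queues = num_queues,
	};
	msg->chunks[1] = msg->chunks[0];
	msg->chunks[1].type = VIRTCHNL_QUEUE_TYPE_TX;

	err = vf_send_to_pf(VIRTCHNL_OP_ENABLE_QUEUES_V2, msg, (uint16_t)len);
	free(msg);
	return err;
}

Note the chunk encoding is what lets a VF address more than 16 queues: each
chunk carries a 16-bit start id and count, so the PF-side validation above
only has to bound-check start_queue_id + num_queues against vf->num_vf_qs.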