* [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y
@ 2019-02-01 20:50 Jeff Kirsher
  2019-02-01 20:50 ` [stable 4.19 1/4] ice: Update expected FW version Jeff Kirsher
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Jeff Kirsher @ 2019-02-01 20:50 UTC (permalink / raw)
  To: stable; +Cc: Jeff Kirsher

This series contains backports and partial backports for the Intel ice
driver.

These backports are required to get the ice driver to load on the
current hardware.  This is due to mismatches between the driver code and
the firmware running on the devices.  At the time of release of 4.19,
hardware was not available for verification.  Since the 4.19.y kernel is
a LTS kernel, we want to ensure that users with our E800 devices will be
able to have the in-kernel driver load and pass traffic, if running a
4.19 or later kernel.

Anirudh Venkataramanan (4):
  ice: Update expected FW version
  ice: Updates to Tx scheduler code
  ice: Introduce ice_dev_onetime_setup
  ice: Set timeout when disabling queues

 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |   5 +-
 drivers/net/ethernet/intel/ice/ice_common.c   |  24 +++
 drivers/net/ethernet/intel/ice/ice_common.h   |   1 +
 drivers/net/ethernet/intel/ice/ice_controlq.h |   5 +-
 drivers/net/ethernet/intel/ice/ice_main.c     |   2 +
 drivers/net/ethernet/intel/ice/ice_sched.c    | 161 ++++++------------
 drivers/net/ethernet/intel/ice/ice_type.h     |   2 +
 7 files changed, 83 insertions(+), 117 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [stable 4.19 1/4] ice: Update expected FW version
  2019-02-01 20:50 [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y Jeff Kirsher
@ 2019-02-01 20:50 ` Jeff Kirsher
  2019-02-02  9:02   ` Greg KH
  2019-02-01 20:50 ` [stable 4.19 2/4] ice: Updates to Tx scheduler code Jeff Kirsher
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Jeff Kirsher @ 2019-02-01 20:50 UTC (permalink / raw)
  To: stable; +Cc: Anirudh Venkataramanan, Jeff Kirsher

From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>

This is a backport of mainline commit ac5a8aef112e ("ice: Update
expected FW version"). This change is required so that the driver
can work with the latest firmware. Without this change, the driver
fails probe.

Update the expected firmware API major and minor versions to the
current values, which are 1 and 3 respectively.

Also remove an empty comment line.
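
The comparison itself happens in ice_aq_ver_check() in ice_controlq.c.
As a rough illustration only (not the exact upstream code; the
signature here is assumed for the sketch), the check boils down to
comparing the API version reported by the firmware against these
defines, and a mismatch against stale expected values is what makes
probe fail with the latest firmware:

/* Illustrative sketch only -- see ice_aq_ver_check() for the real check */
static bool ice_aq_ver_check_sketch(u8 fw_branch, u8 fw_major, u8 fw_minor)
{
	return fw_branch == EXP_FW_API_VER_BRANCH &&
	       fw_major == EXP_FW_API_VER_MAJOR &&
	       fw_minor == EXP_FW_API_VER_MINOR;
}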

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_controlq.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.h b/drivers/net/ethernet/intel/ice/ice_controlq.h
index ea02b89243e2..f3f8327833aa 100644
--- a/drivers/net/ethernet/intel/ice/ice_controlq.h
+++ b/drivers/net/ethernet/intel/ice/ice_controlq.h
@@ -18,11 +18,10 @@
 
 /* Defines that help manage the driver vs FW API checks.
  * Take a look at ice_aq_ver_check in ice_controlq.c for actual usage.
- *
  */
 #define EXP_FW_API_VER_BRANCH		0x00
-#define EXP_FW_API_VER_MAJOR		0x00
-#define EXP_FW_API_VER_MINOR		0x01
+#define EXP_FW_API_VER_MAJOR		0x01
+#define EXP_FW_API_VER_MINOR		0x03
 
 /* Different control queue types: These are mainly for SW consumption. */
 enum ice_ctl_q {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [stable 4.19 2/4] ice: Updates to Tx scheduler code
  2019-02-01 20:50 [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y Jeff Kirsher
  2019-02-01 20:50 ` [stable 4.19 1/4] ice: Update expected FW version Jeff Kirsher
@ 2019-02-01 20:50 ` Jeff Kirsher
  2019-02-02 11:28   ` Greg KH
  2019-02-01 20:50 ` [stable 4.19 3/4] ice: Introduce ice_dev_onetime_setup Jeff Kirsher
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Jeff Kirsher @ 2019-02-01 20:50 UTC (permalink / raw)
  To: stable; +Cc: Anirudh Venkataramanan, Tony Brelinski, Jeff Kirsher

From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>

This is a backport of the mainline commit b36c598c999c ("ice:
Updates to Tx scheduler code"). This change is required for the
driver to work with the latest firmware. Without this patch,
the driver fails probe.

1) The maximum number of device nodes is a global value shared by the
   whole device. The add-elements AQ command already fails if there is
   no space to add new nodes, so the driver-side check for max nodes
   isn't required. Remove ice_sched_get_num_nodes_per_layer and
   ice_sched_val_max_nodes.

2) In ice_sched_add_elems, set default node's CIR/EIR bandwidth weight.

3) Fix the default scheduler topology buffer size: the firmware always
   expects a 4KB buffer and errors out if a buffer of any other size is
   provided.

4) In the latest spec, max children per node per layer is replaced by
   max sibling group size, which gives the max children of a node in
   the layer below rather than of the current layer's node (a condensed
   sketch of the new lookup follows this list).

5) Fix some newline/whitespace issues for consistency.
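
To make point 4 concrete, here is a condensed sketch of the pattern the
diff below introduces (variable names follow the diff; surrounding
function bodies and error handling are omitted): the per-layer
max_sibl_grp_sz values from the query-resources response are cached
once in hw->max_children[], and later node-count math reads that cache
instead of dereferencing hw->layer_info each time.

/* 1) Cache the per-layer limits once, from the query-resources response */
for (i = 0; i < hw->num_tx_sched_layers; i++)
	hw->max_children[i] = le16_to_cpu(buf->layer_props[i].max_sibl_grp_sz);

/* 2) Consumers then read the cache directly, e.g. when computing how many
 * nodes are needed between the queue-group and VSI layers
 */
for (i = qgl; i > vsil; i--) {
	/* round up to the next integer if there is a remainder */
	num = DIV_ROUND_UP(num, hw->max_children[i]);
	/* need at least one node per layer */
	num_nodes[i] = num ? num : 1;
}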

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |   5 +-
 drivers/net/ethernet/intel/ice/ice_common.c   |   7 +
 drivers/net/ethernet/intel/ice/ice_sched.c    | 161 ++++++------------
 drivers/net/ethernet/intel/ice/ice_type.h     |   2 +
 4 files changed, 61 insertions(+), 114 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index a0614f472658..9a33fb95c0ea 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -771,9 +771,8 @@ struct ice_aqc_layer_props {
 	u8 chunk_size;
 	__le16 max_device_nodes;
 	__le16 max_pf_nodes;
-	u8 rsvd0[2];
-	__le16 max_shared_rate_lmtr;
-	__le16 max_children;
+	u8 rsvd0[4];
+	__le16 max_sibl_grp_sz;
 	__le16 max_cir_rl_profiles;
 	__le16 max_eir_rl_profiles;
 	__le16 max_srl_profiles;
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 661beea6af79..50b7545adc68 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -472,6 +472,13 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	if (status)
 		goto err_unroll_sched;
 
+	/* need a valid SW entry point to build a Tx tree */
+	if (!hw->sw_entry_point_layer) {
+		ice_debug(hw, ICE_DBG_SCHED, "invalid sw entry point\n");
+		status = ICE_ERR_CFG;
+		goto err_unroll_sched;
+	}
+
 	status = ice_init_fltr_mgmt_struct(hw);
 	if (status)
 		goto err_unroll_sched;
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index eeae199469b6..9b7b50554952 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -17,7 +17,6 @@ ice_sched_add_root_node(struct ice_port_info *pi,
 {
 	struct ice_sched_node *root;
 	struct ice_hw *hw;
-	u16 max_children;
 
 	if (!pi)
 		return ICE_ERR_PARAM;
@@ -28,8 +27,8 @@ ice_sched_add_root_node(struct ice_port_info *pi,
 	if (!root)
 		return ICE_ERR_NO_MEMORY;
 
-	max_children = le16_to_cpu(hw->layer_info[0].max_children);
-	root->children = devm_kcalloc(ice_hw_to_dev(hw), max_children,
+	/* coverity[suspicious_sizeof] */
+	root->children = devm_kcalloc(ice_hw_to_dev(hw), hw->max_children[0],
 				      sizeof(*root), GFP_KERNEL);
 	if (!root->children) {
 		devm_kfree(ice_hw_to_dev(hw), root);
@@ -100,7 +99,6 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer,
 	struct ice_sched_node *parent;
 	struct ice_sched_node *node;
 	struct ice_hw *hw;
-	u16 max_children;
 
 	if (!pi)
 		return ICE_ERR_PARAM;
@@ -120,9 +118,10 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer,
 	node = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*node), GFP_KERNEL);
 	if (!node)
 		return ICE_ERR_NO_MEMORY;
-	max_children = le16_to_cpu(hw->layer_info[layer].max_children);
-	if (max_children) {
-		node->children = devm_kcalloc(ice_hw_to_dev(hw), max_children,
+	if (hw->max_children[layer]) {
+		/* coverity[suspicious_sizeof] */
+		node->children = devm_kcalloc(ice_hw_to_dev(hw),
+					      hw->max_children[layer],
 					      sizeof(*node), GFP_KERNEL);
 		if (!node->children) {
 			devm_kfree(ice_hw_to_dev(hw), node);
@@ -192,14 +191,17 @@ ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
 	buf = devm_kzalloc(ice_hw_to_dev(hw), buf_size, GFP_KERNEL);
 	if (!buf)
 		return ICE_ERR_NO_MEMORY;
+
 	buf->hdr.parent_teid = parent->info.node_teid;
 	buf->hdr.num_elems = cpu_to_le16(num_nodes);
 	for (i = 0; i < num_nodes; i++)
 		buf->teid[i] = cpu_to_le32(node_teids[i]);
+
 	status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
 					   &num_groups_removed, NULL);
 	if (status || num_groups_removed != 1)
 		ice_debug(hw, ICE_DBG_SCHED, "remove elements failed\n");
+
 	devm_kfree(ice_hw_to_dev(hw), buf);
 	return status;
 }
@@ -592,13 +594,16 @@ static void ice_sched_clear_port(struct ice_port_info *pi)
  */
 void ice_sched_cleanup_all(struct ice_hw *hw)
 {
-	if (!hw || !hw->port_info)
+	if (!hw)
 		return;
 
-	if (hw->layer_info)
+	if (hw->layer_info) {
 		devm_kfree(ice_hw_to_dev(hw), hw->layer_info);
+		hw->layer_info = NULL;
+	}
 
-	ice_sched_clear_port(hw->port_info);
+	if (hw->port_info)
+		ice_sched_clear_port(hw->port_info);
 
 	hw->num_tx_sched_layers = 0;
 	hw->num_tx_sched_phys_layers = 0;
@@ -671,9 +676,13 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 			ICE_AQC_ELEM_VALID_EIR;
 		buf->generic[i].data.generic = 0;
 		buf->generic[i].data.cir_bw.bw_profile_idx =
-			ICE_SCHED_DFLT_RL_PROF_ID;
+			cpu_to_le16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.cir_bw.bw_alloc =
+			cpu_to_le16(ICE_SCHED_DFLT_BW_WT);
 		buf->generic[i].data.eir_bw.bw_profile_idx =
-			ICE_SCHED_DFLT_RL_PROF_ID;
+			cpu_to_le16(ICE_SCHED_DFLT_RL_PROF_ID);
+		buf->generic[i].data.eir_bw.bw_alloc =
+			cpu_to_le16(ICE_SCHED_DFLT_BW_WT);
 	}
 
 	status = ice_aq_add_sched_elems(hw, 1, buf, buf_size,
@@ -697,7 +706,6 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 
 		teid = le32_to_cpu(buf->generic[i].node_teid);
 		new_node = ice_sched_find_node_by_teid(parent, teid);
-
 		if (!new_node) {
 			ice_debug(hw, ICE_DBG_SCHED,
 				  "Node is missing for teid =%d\n", teid);
@@ -710,7 +718,6 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		/* add it to previous node sibling pointer */
 		/* Note: siblings are not linked across branches */
 		prev = ice_sched_get_first_node(hw, tc_node, layer);
-
 		if (prev && prev != new_node) {
 			while (prev->sibling)
 				prev = prev->sibling;
@@ -760,8 +767,7 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
 		return ICE_ERR_PARAM;
 
 	/* max children per node per layer */
-	max_child_nodes =
-	    le16_to_cpu(hw->layer_info[parent->tx_sched_layer].max_children);
+	max_child_nodes = hw->max_children[parent->tx_sched_layer];
 
 	/* current number of children + required nodes exceed max children ? */
 	if ((parent->num_children + num_nodes) > max_child_nodes) {
@@ -850,78 +856,6 @@ static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
 	return hw->sw_entry_point_layer;
 }
 
-/**
- * ice_sched_get_num_nodes_per_layer - Get the total number of nodes per layer
- * @pi: pointer to the port info struct
- * @layer: layer number
- *
- * This function calculates the number of nodes present in the scheduler tree
- * including all the branches for a given layer
- */
-static u16
-ice_sched_get_num_nodes_per_layer(struct ice_port_info *pi, u8 layer)
-{
-	struct ice_hw *hw;
-	u16 num_nodes = 0;
-	u8 i;
-
-	if (!pi)
-		return num_nodes;
-
-	hw = pi->hw;
-
-	/* Calculate the number of nodes for all TCs */
-	for (i = 0; i < pi->root->num_children; i++) {
-		struct ice_sched_node *tc_node, *node;
-
-		tc_node = pi->root->children[i];
-
-		/* Get the first node */
-		node = ice_sched_get_first_node(hw, tc_node, layer);
-		if (!node)
-			continue;
-
-		/* count the siblings */
-		while (node) {
-			num_nodes++;
-			node = node->sibling;
-		}
-	}
-
-	return num_nodes;
-}
-
-/**
- * ice_sched_val_max_nodes - check max number of nodes reached or not
- * @pi: port information structure
- * @new_num_nodes_per_layer: pointer to the new number of nodes array
- *
- * This function checks whether the scheduler tree layers have enough space to
- * add new nodes
- */
-static enum ice_status
-ice_sched_validate_for_max_nodes(struct ice_port_info *pi,
-				 u16 *new_num_nodes_per_layer)
-{
-	struct ice_hw *hw = pi->hw;
-	u8 i, qg_layer;
-	u16 num_nodes;
-
-	qg_layer = ice_sched_get_qgrp_layer(hw);
-
-	/* walk through all the layers from SW entry point to qgroup layer */
-	for (i = hw->sw_entry_point_layer; i <= qg_layer; i++) {
-		num_nodes = ice_sched_get_num_nodes_per_layer(pi, i);
-		if (num_nodes + new_num_nodes_per_layer[i] >
-		    le16_to_cpu(hw->layer_info[i].max_pf_nodes)) {
-			ice_debug(hw, ICE_DBG_SCHED,
-				  "max nodes reached for layer = %d\n", i);
-			return ICE_ERR_CFG;
-		}
-	}
-	return 0;
-}
-
 /**
  * ice_rm_dflt_leaf_node - remove the default leaf node in the tree
  * @pi: port information structure
@@ -1003,14 +937,12 @@ enum ice_status ice_sched_init_port(struct ice_port_info *pi)
 	hw = pi->hw;
 
 	/* Query the Default Topology from FW */
-	buf = devm_kcalloc(ice_hw_to_dev(hw), ICE_TXSCHED_MAX_BRANCHES,
-			   sizeof(*buf), GFP_KERNEL);
+	buf = devm_kzalloc(ice_hw_to_dev(hw), ICE_AQ_MAX_BUF_LEN, GFP_KERNEL);
 	if (!buf)
 		return ICE_ERR_NO_MEMORY;
 
 	/* Query default scheduling tree topology */
-	status = ice_aq_get_dflt_topo(hw, pi->lport, buf,
-				      sizeof(*buf) * ICE_TXSCHED_MAX_BRANCHES,
+	status = ice_aq_get_dflt_topo(hw, pi->lport, buf, ICE_AQ_MAX_BUF_LEN,
 				      &num_branches, NULL);
 	if (status)
 		goto err_init_port;
@@ -1097,6 +1029,8 @@ enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
 {
 	struct ice_aqc_query_txsched_res_resp *buf;
 	enum ice_status status = 0;
+	__le16 max_sibl;
+	u8 i;
 
 	if (hw->layer_info)
 		return status;
@@ -1115,7 +1049,20 @@ enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
 	hw->flattened_layers = buf->sched_props.flattening_bitmap;
 	hw->max_cgds = buf->sched_props.max_pf_cgds;
 
-	 hw->layer_info = devm_kmemdup(ice_hw_to_dev(hw), buf->layer_props,
+	/* max sibling group size of current layer refers to the max children
+	 * of the below layer node.
+	 * layer 1 node max children will be layer 2 max sibling group size
+	 * layer 2 node max children will be layer 3 max sibling group size
+	 * and so on. This array will be populated from root (index 0) to
+	 * qgroup layer 7. Leaf node has no children.
+	 */
+	for (i = 0; i < hw->num_tx_sched_layers; i++) {
+		max_sibl = buf->layer_props[i].max_sibl_grp_sz;
+		hw->max_children[i] = le16_to_cpu(max_sibl);
+	}
+
+	hw->layer_info = (struct ice_aqc_layer_props *)
+			  devm_kmemdup(ice_hw_to_dev(hw), buf->layer_props,
 				       (hw->num_tx_sched_layers *
 					sizeof(*hw->layer_info)),
 				       GFP_KERNEL);
@@ -1202,7 +1149,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_id, u8 tc,
 	u8 qgrp_layer;
 
 	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
-	max_children = le16_to_cpu(pi->hw->layer_info[qgrp_layer].max_children);
+	max_children = pi->hw->max_children[qgrp_layer];
 
 	list_elem = ice_sched_get_vsi_info_entry(pi, vsi_id);
 	if (!list_elem)
@@ -1278,10 +1225,8 @@ ice_sched_calc_vsi_child_nodes(struct ice_hw *hw, u16 num_qs, u16 *num_nodes)
 
 	/* calculate num nodes from q group to VSI layer */
 	for (i = qgl; i > vsil; i--) {
-		u16 max_children = le16_to_cpu(hw->layer_info[i].max_children);
-
 		/* round to the next integer if there is a remainder */
-		num = DIV_ROUND_UP(num, max_children);
+		num = DIV_ROUND_UP(num, hw->max_children[i]);
 
 		/* need at least one node */
 		num_nodes[i] = num ? num : 1;
@@ -1311,16 +1256,13 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_id,
 	u16 num_added = 0;
 	u8 i, qgl, vsil;
 
-	status = ice_sched_validate_for_max_nodes(pi, num_nodes);
-	if (status)
-		return status;
-
 	qgl = ice_sched_get_qgrp_layer(hw);
 	vsil = ice_sched_get_vsi_layer(hw);
 	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_id);
 	for (i = vsil + 1; i <= qgl; i++) {
 		if (!parent)
 			return ICE_ERR_CFG;
+
 		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent, i,
 						      num_nodes[i],
 						      &first_node_teid,
@@ -1398,8 +1340,8 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
 				 struct ice_sched_node *tc_node, u16 *num_nodes)
 {
 	struct ice_sched_node *node;
-	u16 max_child;
-	u8 i, vsil;
+	u8 vsil;
+	int i;
 
 	vsil = ice_sched_get_vsi_layer(hw);
 	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
@@ -1412,12 +1354,10 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
 			/* If intermediate nodes are reached max children
 			 * then add a new one.
 			 */
-			node = ice_sched_get_first_node(hw, tc_node, i);
-			max_child = le16_to_cpu(hw->layer_info[i].max_children);
-
+			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
 			/* scan all the siblings */
 			while (node) {
-				if (node->num_children < max_child)
+				if (node->num_children < hw->max_children[i])
 					break;
 				node = node->sibling;
 			}
@@ -1451,10 +1391,6 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_id,
 	if (!pi)
 		return ICE_ERR_PARAM;
 
-	status = ice_sched_validate_for_max_nodes(pi, num_nodes);
-	if (status)
-		return status;
-
 	vsil = ice_sched_get_vsi_layer(pi->hw);
 	for (i = pi->hw->sw_entry_point_layer; i <= vsil; i++) {
 		status = ice_sched_add_nodes_to_layer(pi, tc_node, parent,
@@ -1479,6 +1415,7 @@ ice_sched_add_vsi_support_nodes(struct ice_port_info *pi, u16 vsi_id,
 		if (i == vsil)
 			parent->vsi_id = vsi_id;
 	}
+
 	return 0;
 }
 
@@ -1633,9 +1570,11 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_id, u8 tc, u16 maxqs,
 		status = ice_sched_add_vsi_to_topo(pi, vsi_id, tc);
 		if (status)
 			return status;
+
 		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_id);
 		if (!vsi_node)
 			return ICE_ERR_CFG;
+
 		vsi->vsi_node[tc] = vsi_node;
 		vsi_node->in_use = true;
 	}
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index ba11b5898833..a4cec180a9ab 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -204,6 +204,7 @@ enum ice_agg_type {
 };
 
 #define ICE_SCHED_DFLT_RL_PROF_ID	0
+#define ICE_SCHED_DFLT_BW_WT		1
 
 /* vsi type list entry to locate corresponding vsi/ag nodes */
 struct ice_sched_vsi_info {
@@ -286,6 +287,7 @@ struct ice_hw {
 	u8 flattened_layers;
 	u8 max_cgds;
 	u8 sw_entry_point_layer;
+	u16 max_children[ICE_AQC_TOPO_MAX_LEVEL_NUM];
 
 	u8 evb_veb;		/* true for VEB, false for VEPA */
 	struct ice_bus_info bus;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [stable 4.19 3/4] ice: Introduce ice_dev_onetime_setup
  2019-02-01 20:50 [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y Jeff Kirsher
  2019-02-01 20:50 ` [stable 4.19 1/4] ice: Update expected FW version Jeff Kirsher
  2019-02-01 20:50 ` [stable 4.19 2/4] ice: Updates to Tx scheduler code Jeff Kirsher
@ 2019-02-01 20:50 ` Jeff Kirsher
  2019-02-01 20:50 ` [stable 4.19 4/4] ice: Set timeout when disabling queues Jeff Kirsher
  2019-02-02  9:00 ` [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y Greg KH
  4 siblings, 0 replies; 11+ messages in thread
From: Jeff Kirsher @ 2019-02-01 20:50 UTC (permalink / raw)
  To: stable; +Cc: Anirudh Venkataramanan, Jeff Kirsher

From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>

This is a partial backport of the mainline commit f203dca363f8
("ice: Introduce ice_dev_onetime_setup").

ice_dev_onetime_setup() contains a workaround for a current firmware
limitation. Without this patch, the driver is not able to process Rx
packets. This workaround is expected to go away when the underlying
firmware issue is fixed.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c | 15 +++++++++++++++
 drivers/net/ethernet/intel/ice/ice_common.h |  1 +
 drivers/net/ethernet/intel/ice/ice_main.c   |  2 ++
 3 files changed, 18 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 50b7545adc68..fb5e77f263e5 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -42,6 +42,19 @@ static enum ice_status ice_set_mac_type(struct ice_hw *hw)
 	return 0;
 }
 
+/**
+ * ice_dev_onetime_setup - Temporary HW/FW workarounds
+ * @hw: pointer to the HW structure
+ *
+ * This function provides temporary workarounds for certain issues
+ * that are expected to be fixed in the HW/FW.
+ */
+void ice_dev_onetime_setup(struct ice_hw *hw)
+{
+	/* configure Rx - set non pxe mode */
+	wr32(hw, GLLAN_RCTL_0, 0x1);
+}
+
 /**
  * ice_clear_pf_cfg - Clear PF configuration
  * @hw: pointer to the hardware structure
@@ -483,6 +496,8 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	if (status)
 		goto err_unroll_sched;
 
+	ice_dev_onetime_setup(hw);
+
 	/* Get MAC information */
 	/* A single port can report up to two (LAN and WoL) addresses */
 	mac_buf = devm_kcalloc(ice_hw_to_dev(hw), 2,
diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
index 9a5519130af1..6ef1df0f5d99 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.h
+++ b/drivers/net/ethernet/intel/ice/ice_common.h
@@ -32,6 +32,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		struct ice_sq_cd *cd);
 void ice_clear_pxe_mode(struct ice_hw *hw);
 enum ice_status ice_get_caps(struct ice_hw *hw);
+void ice_dev_onetime_setup(struct ice_hw *hw);
 enum ice_status
 ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 		  u32 rxq_index);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 3f047bb43348..a95c8fbc0475 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -5179,6 +5179,8 @@ static void ice_rebuild(struct ice_pf *pf)
 
 	ice_clear_pxe_mode(hw);
 
+	ice_dev_onetime_setup(hw);
+
 	ret = ice_get_caps(hw);
 	if (ret) {
 		dev_err(dev, "ice_get_caps failed %d\n", ret);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [stable 4.19 4/4] ice: Set timeout when disabling queues
  2019-02-01 20:50 [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y Jeff Kirsher
                   ` (2 preceding siblings ...)
  2019-02-01 20:50 ` [stable 4.19 3/4] ice: Introduce ice_dev_onetime_setup Jeff Kirsher
@ 2019-02-01 20:50 ` Jeff Kirsher
  2019-02-02  9:00 ` [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y Greg KH
  4 siblings, 0 replies; 11+ messages in thread
From: Jeff Kirsher @ 2019-02-01 20:50 UTC (permalink / raw)
  To: stable; +Cc: Anirudh Venkataramanan, Jeff Kirsher

From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>

This patch is a backport of a single line of code from mainline
commit ddf30f7ff840 ("ice: Add handler to configure SR-IOV").

Queues are disabled during module unload using the ice_aqc_opc_dis_txqs
admin queue command. On the latest firmware, this command fails with
the following message: "Failed to disable LAN Tx queues, error: -100"

This patch fixes this issue by setting the timeout field in the command
descriptor.
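
For context, the fix is the single line visible in the hunk below: the
disable-Tx-queues descriptor packs the VM/VF id and the timeout into
one little-endian 16-bit field (judging from the field name and
defines), and the timeout bits, previously left at zero, are now set
via the existing shift/mask macros:

	cmd->vmvf_and_timeout = cpu_to_le16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
					    ICE_AQC_Q_DIS_TIMEOUT_M);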

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index fb5e77f263e5..a8749f4a0cb8 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1865,6 +1865,8 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 		return ICE_ERR_PARAM;
 	desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
 	cmd->num_entries = num_qgrps;
+	cmd->vmvf_and_timeout = cpu_to_le16((5 << ICE_AQC_Q_DIS_TIMEOUT_S) &
+					    ICE_AQC_Q_DIS_TIMEOUT_M);
 
 	for (i = 0; i < num_qgrps; ++i) {
 		/* Calculate the size taken up by the queue IDs in this group */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y
  2019-02-01 20:50 [stable 4.19 0/4] Intel Wired Ethernet Fixes for 4.19.y Jeff Kirsher
                   ` (3 preceding siblings ...)
  2019-02-01 20:50 ` [stable 4.19 4/4] ice: Set timeout when disabling queues Jeff Kirsher
@ 2019-02-02  9:00 ` Greg KH
  4 siblings, 0 replies; 11+ messages in thread
From: Greg KH @ 2019-02-02  9:00 UTC (permalink / raw)
  To: Jeff Kirsher; +Cc: stable

On Fri, Feb 01, 2019 at 12:50:25PM -0800, Jeff Kirsher wrote:
> This series contains backports and partial backports for the Intel ice
> driver.
> 
> These backports are required to get the ice driver to load on the
> current hardware.  This is due to mismatches between the driver code and
> the firmware running on the devices.  At the time of release of 4.19,
> hardware was not available for verification.  Since the 4.19.y kernel is
> a LTS kernel, we want to ensure that users with our E800 devices will be
> able to have the in-kernel driver load and pass traffic, if running a
> 4.19 or later kernel.
> 
> Anirudh Venkataramanan (4):
>   ice: Update expected FW version
>   ice: Updates to Tx scheduler code
>   ice: Introduce ice_dev_onetime_setup
>   ice: Set timeout when disabling queues

Can you respin these with the git commit ids of the upstream patches in
them so that we can properly include that in the patches?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [stable 4.19 1/4] ice: Update expected FW version
  2019-02-01 20:50 ` [stable 4.19 1/4] ice: Update expected FW version Jeff Kirsher
@ 2019-02-02  9:02   ` Greg KH
  0 siblings, 0 replies; 11+ messages in thread
From: Greg KH @ 2019-02-02  9:02 UTC (permalink / raw)
  To: Jeff Kirsher; +Cc: stable, Anirudh Venkataramanan

On Fri, Feb 01, 2019 at 12:50:26PM -0800, Jeff Kirsher wrote:
> From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> 
> This is a backport of mainline commit ac5a8aef112e ("ice: Update
> expected FW version"). This change is required so that the driver
> can work with the latest firmware. Without this change, the driver
> fails probe.

Ah, you did include the id here, sorry, I missed it at first glance.
I'll pick these up later today.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [stable 4.19 2/4] ice: Updates to Tx scheduler code
  2019-02-01 20:50 ` [stable 4.19 2/4] ice: Updates to Tx scheduler code Jeff Kirsher
@ 2019-02-02 11:28   ` Greg KH
  2019-02-08 23:46     ` Jeff Kirsher
  0 siblings, 1 reply; 11+ messages in thread
From: Greg KH @ 2019-02-02 11:28 UTC (permalink / raw)
  To: Jeff Kirsher; +Cc: stable, Anirudh Venkataramanan, Tony Brelinski

On Fri, Feb 01, 2019 at 12:50:27PM -0800, Jeff Kirsher wrote:
> From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> 
> This is a backport of the mainline commit b36c598c999c ("ice:
> Updates to Tx scheduler code"). This change is required for the
> driver to work with the latest firmware. Without this patch,
> the driver fails probe.


Note, this is a big change.  You can't have these systems just run the
old firmware?  The fact that you require a kernel change for newer
firmware, isn't the best, you don't provide backwards compatibility
somehow?

I'm all for adding new device ids and the like to stable kernels, but to
backport new functionality just to get newer hardware to work on older
kernels, is not the goal of the LTS kernel trees.  Why can't you just
have users use 4.20 or newer?  There shouldn't be anything keeping them
from doing that and just updating to the latest stable release every 4
months or so, right?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [stable 4.19 2/4] ice: Updates to Tx scheduler code
  2019-02-02 11:28   ` Greg KH
@ 2019-02-08 23:46     ` Jeff Kirsher
  2019-02-18 12:25       ` Greg KH
  0 siblings, 1 reply; 11+ messages in thread
From: Jeff Kirsher @ 2019-02-08 23:46 UTC (permalink / raw)
  To: Greg KH; +Cc: stable, Anirudh Venkataramanan, Tony Brelinski

On Sat, 2019-02-02 at 12:28 +0100, Greg KH wrote:
> On Fri, Feb 01, 2019 at 12:50:27PM -0800, Jeff Kirsher wrote:
> > From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> > 
> > This is a backport of the mainline commit b36c598c999c ("ice:
> > Updates to Tx scheduler code"). This change is required for the
> > driver to work with the latest firmware. Without this patch,
> > the driver fails probe.
> 
> Note, this is a big change.  You can't have these systems just run
> the
> old firmware?  The fact that you require a kernel change for newer
> firmware, isn't the best, you don't provide backwards compatibility
> somehow?
> 
> I'm all for adding new device ids and the like to stable kernels, but
> to
> backport new functionality just to get newer hardware to work on
> older
> kernels, is not the goal of the LTS kernel trees.  Why can't you just
> have users use 4.20 or newer?  There shouldn't be anything keeping
> them
> from doing that and just updating to the latest stable release every
> 4
> months or so, right?

Sorry Greg, I am not ignoring you.  I have been trying to get more
information and reasoning from others within Intel which needed this
support within the 4.19 kernel.  I am hoping I will have a response for
you by Monday.


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [stable 4.19 2/4] ice: Updates to Tx scheduler code
  2019-02-08 23:46     ` Jeff Kirsher
@ 2019-02-18 12:25       ` Greg KH
  2019-02-19 15:57         ` Jeff Kirsher
  0 siblings, 1 reply; 11+ messages in thread
From: Greg KH @ 2019-02-18 12:25 UTC (permalink / raw)
  To: Jeff Kirsher; +Cc: stable, Anirudh Venkataramanan, Tony Brelinski

On Fri, Feb 08, 2019 at 03:46:26PM -0800, Jeff Kirsher wrote:
> On Sat, 2019-02-02 at 12:28 +0100, Greg KH wrote:
> > On Fri, Feb 01, 2019 at 12:50:27PM -0800, Jeff Kirsher wrote:
> > > From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> > > 
> > > This is a backport of the mainline commit b36c598c999c ("ice:
> > > Updates to Tx scheduler code"). This change is required for the
> > > driver to work with the latest firmware. Without this patch,
> > > the driver fails probe.
> > 
> > Note, this is a big change.  You can't have these systems just run
> > the
> > old firmware?  The fact that you require a kernel change for newer
> > firmware, isn't the best, you don't provide backwards compatibility
> > somehow?
> > 
> > I'm all for adding new device ids and the like to stable kernels, but
> > to
> > backport new functionality just to get newer hardware to work on
> > older
> > kernels, is not the goal of the LTS kernel trees.  Why can't you just
> > have users use 4.20 or newer?  There shouldn't be anything keeping
> > them
> > from doing that and just updating to the latest stable release every
> > 4
> > months or so, right?
> 
> Sorry Greg, I am not ignoring you.  I have been trying to get more
> information and reasoning from others within Intel which needed this
> support within the 4.19 kernel.  I am hoping I will have a response for
> you by Monday.

It's been more than one "Monday" now, so I'm dropping this series from
my patch queue.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [stable 4.19 2/4] ice: Updates to Tx scheduler code
  2019-02-18 12:25       ` Greg KH
@ 2019-02-19 15:57         ` Jeff Kirsher
  0 siblings, 0 replies; 11+ messages in thread
From: Jeff Kirsher @ 2019-02-19 15:57 UTC (permalink / raw)
  To: Greg KH; +Cc: stable, Anirudh Venkataramanan, Tony Brelinski

On Mon, 2019-02-18 at 13:25 +0100, Greg KH wrote:
> On Fri, Feb 08, 2019 at 03:46:26PM -0800, Jeff Kirsher wrote:
> > On Sat, 2019-02-02 at 12:28 +0100, Greg KH wrote:
> > > On Fri, Feb 01, 2019 at 12:50:27PM -0800, Jeff Kirsher wrote:
> > > > From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> > > > 
> > > > This is a backport of the mainline commit b36c598c999c ("ice:
> > > > Updates to Tx scheduler code"). This change is required for the
> > > > driver to work with the latest firmware. Without this patch,
> > > > the driver fails probe.
> > > 
> > > Note, this is a big change.  You can't have these systems just run
> > > the
> > > old firmware?  The fact that you require a kernel change for newer
> > > firmware, isn't the best, you don't provide backwards compatibility
> > > somehow?
> > > 
> > > I'm all for adding new device ids and the like to stable kernels, but
> > > to
> > > backport new functionality just to get newer hardware to work on
> > > older
> > > kernels, is not the goal of the LTS kernel trees.  Why can't you just
> > > have users use 4.20 or newer?  There shouldn't be anything keeping
> > > them
> > > from doing that and just updating to the latest stable release every
> > > 4
> > > months or so, right?
> > 
> > Sorry Greg, I am not ignoring you.  I have been trying to get more
> > information and reasoning from others within Intel which needed this
> > support within the 4.19 kernel.  I am hoping I will have a response for
> > you by Monday.
> 
> It's been more than one "Monday" now, so I'm dropping this series from
> my patch queue.

That is fine.


^ permalink raw reply	[flat|nested] 11+ messages in thread
