Intel-Wired-Lan Archive on lore.kernel.org
From: "Wilczynski, Michal" <michal.wilczynski@intel.com>
To: Tony Nguyen <anthony.l.nguyen@intel.com>,
	<intel-wired-lan@lists.osuosl.org>
Subject: Re: [Intel-wired-lan] [PATCH net-next v2 2/2] ice: Enable switching default tx scheduler topology
Date: Fri, 1 Jul 2022 17:19:28 +0200	[thread overview]
Message-ID: <826859dd-b9ea-5112-c7af-215a4d9c72d5@intel.com> (raw)
In-Reply-To: <c7139d4f-a103-0c80-a319-42b53a803a5f@intel.com>

Thanks for your review,

Based on your comments, I posted v3.

On 6/29/2022 12:27 AM, Tony Nguyen wrote:
>
>
> On 6/24/2022 3:21 AM, Michal Wilczynski wrote:
>> Introduce support for Tx scheduler topology change, based on
>> user selection, from the default 9-layer to a 5-layer topology.
>> For the switch to be successful, a new NVM image
>> and DDP package are required.
>> This commit enables the 5-layer topology in the init path of
>> the driver, so the user selection should be changed in the NVM,
>> using external tools, before the ice driver loads.
>>
>> Title: Enable switching default tx scheduler topology
>> Change-type: ImplementationChange
>> Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
>> ---
>
> ...
>
>> +/**
>> + * ice_init_tx_topology - performs Tx topology initialization
>> + * @hw: pointer to the hardware structure
>> + * @firmware: pointer to firmware structure
>> + */
>> +static int ice_init_tx_topology(struct ice_hw *hw,
>> +                const struct firmware *firmware)
>> +{
>> +    u8 num_tx_sched_layers = hw->num_tx_sched_layers;
>> +    struct ice_pf *pf = hw->back;
>> +    struct device *dev;
>> +    u8 *buf_copy;
>> +    int err = 0;
>
> This initialization is unnecessary.
>
>> +
>> +    dev = ice_pf_to_dev(pf);
>> +    /* ice_cfg_tx_topo buf argument is not a constant,
>> +     * so we have to make a copy
>> +     */
>> +    buf_copy = devm_kmemdup(ice_hw_to_dev(hw), firmware->data,
>> +                firmware->size, GFP_KERNEL);
>
> It looks like the devm variant is not needed
>
>> +
>> +    err = ice_cfg_tx_topo(hw, buf_copy, firmware->size);
>> +    if (!err) {
>> +        if (hw->num_tx_sched_layers > num_tx_sched_layers)
>> +            dev_info(dev, "Transmit balancing feature disabled\n");
>> +        else
>> +            dev_info(dev, "Transmit balancing feature enabled\n");
>> +        /* if there was a change in topology ice_cfg_tx_topo triggered
>> +         * a CORER and we need to re-init hw
>> +         */
>> +        ice_deinit_hw(hw);
>> +        err = ice_init_hw(hw);
>> +
>> +        /* in this case we're not allowing safe mode */
>> +        devm_kfree(ice_hw_to_dev(hw), buf_copy);
>> +
>> +        return err;
>> +
>> +    } else if (err == -EIO) {
>> +        dev_info(dev, "DDP package does not support transmit balancing feature - please update to the latest DDP package and try again\n");
>>       }
>>
>> -    /* request for firmware was successful. Download to device */
>> +    devm_kfree(ice_hw_to_dev(hw), buf_copy);
>> +
>> +    return 0;
>> +}
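As an aside for readers of the archive: the copy-before-mutate pattern that the comment in ice_init_tx_topology() describes (the callee's buf argument is not const, so the const firmware image must be duplicated first) can be modeled in plain C. Everything below is an illustrative userspace sketch with invented names (cfg_topo, init_topo), not driver code; plain malloc/memcpy/free stand in for the kmemdup/kfree pair the review suggests.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for ice_cfg_tx_topo(): it rewrites the buffer
 * in place while parsing it, which is why the caller cannot hand over
 * the read-only firmware->data pointer directly. */
static int cfg_topo(unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= 0xff; /* mutate the buffer, as the real parser may */
    return 0;
}

/* Mirror of the pattern in ice_init_tx_topology(): duplicate the
 * read-only image, pass the copy to the mutating callee, and free the
 * copy on every exit path. */
static int init_topo(const unsigned char *fw_data, size_t fw_size)
{
    unsigned char *buf_copy = malloc(fw_size);
    int err;

    if (!buf_copy)
        return -1;
    memcpy(buf_copy, fw_data, fw_size);

    err = cfg_topo(buf_copy, fw_size);
    free(buf_copy);
    return err;
}

/* Check that the caller's original buffer survives untouched. */
static int demo(void)
{
    const unsigned char fw[4] = { 1, 2, 3, 4 };

    if (init_topo(fw, sizeof(fw)) != 0)
        return 1;
    return (fw[0] == 1 && fw[3] == 4) ? 0 : 2;
}
```

The point of the sketch is only that the original image stays intact for the later ice_load_pkg() call, whichever allocator variant is used.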
>> +
>> +/**
>> + * ice_init_ddp_config - DDP related configuration
>> + * @hw: pointer to the hardware structure
>> + * @pf: pointer to pf structure
>> + *
>> + * This function loads DDP file from the disk, then initializes tx
>> + * topology. At the end DDP package is loaded on the card.
>> + */
>> +static int ice_init_ddp_config(struct ice_hw *hw, struct ice_pf *pf)
>> +{
>> +    struct device *dev = ice_pf_to_dev(pf);
>> +    const struct firmware *firmware = NULL;
>> +    int err = 0;
>
> Initialization not needed.
>
>> +
>> +    err = ice_request_fw(pf, &firmware);
>> +    if (err)
>> +    /* we can still operate in safe mode if DDP package load fails */
>> +        return 0;
>> +
>> +    err = ice_init_tx_topology(hw, firmware);
>> +    if (err) {
>> +        dev_err(dev, "ice_init_hw during change of tx topology failed: %d\n",
>> +            err);
>> +        release_firmware(firmware);
>> +        return err;
>> +    }
>> +
>> +    /* Download firmware to device */
>>       ice_load_pkg(firmware, pf);
>>       release_firmware(firmware);
>> +
>> +    return 0;
>>   }
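For context, the control flow that the ice_init_ddp_config() kdoc describes (firmware request failure falls back to safe mode, topology re-init failure is fatal, otherwise the package is downloaded) can be summarized as a tiny standalone model. The enum and function names below are invented for illustration; the -5 is only an "-EIO-like" placeholder, not a value taken from the driver.

```c
#include <assert.h>

/* Illustrative outcomes of the two fallible steps in the flow above. */
enum ddp_outcome { FW_REQUEST_FAILED, TOPO_INIT_FAILED, ALL_OK };

/* Model of ice_init_ddp_config()'s return-value policy:
 * - no DDP file on disk: succeed anyway and run in safe mode
 * - ice_init_hw() failed after the topology-change CORER: hard error
 * - otherwise: package is loaded on the card, success */
static int ddp_config_flow(enum ddp_outcome outcome)
{
    if (outcome == FW_REQUEST_FAILED)
        return 0;   /* continue in safe mode */
    if (outcome == TOPO_INIT_FAILED)
        return -5;  /* fatal; probe will unroll */
    return 0;       /* download firmware to device */
}
```

This is why only the topology-init failure propagates out of probe, while a missing DDP file merely leaves ICE_FLAG_ADV_FEATURES unset.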
>>
>> /**
>> @@ -4641,9 +4710,15 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
>>         ice_init_feature_support(pf);
>>
>> -    ice_request_fw(pf);
>> +    err = ice_init_ddp_config(hw, pf);
>> +
>> +    /* during topology change ice_init_hw may fail */
>> +    if (err) {
>> +        err = -EIO;
>> +        goto err_exit_unroll;
>> +    }
>>
>> -    /* if ice_request_fw fails, ICE_FLAG_ADV_FEATURES bit won't be
>> +    /* if ice_init_ddp_config fails, ICE_FLAG_ADV_FEATURES bit won't be
>>        * set in pf->state, which will cause ice_is_safe_mode to return
>>        * true
>>        */
>> diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
>> index 7947223536e3..f18a7121ca55 100644
>> --- a/drivers/net/ethernet/intel/ice/ice_sched.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_sched.c
>> @@ -1102,12 +1102,11 @@ static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
>>        *     5 or less       sw_entry_point_layer
>>        */
>>       /* calculate the VSI layer based on number of layers. */
>> -    if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
>> -        u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
>> -
>> -        if (layer > hw->sw_entry_point_layer)
>> -            return layer;
>> -    }
>> +    if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
>> +        return hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
>> +    else if (hw->num_tx_sched_layers == ICE_SCHED_5_LAYERS)
>> +        /* qgroup and VSI layers are same */
>> +        return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
>>       return hw->sw_entry_point_layer;
>
> These can all be if's since they all return:
>
>     if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
>         return hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
>     if (hw->num_tx_sched_layers == ICE_SCHED_5_LAYERS)
>         /* qgroup and VSI layers are same */
>         return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
>      return hw->sw_entry_point_layer;
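The flattened if-chain above can be sanity-checked with a small standalone model of the layer arithmetic. The offset values below match ice_sched.h at the time of writing (ICE_QGRP_LAYER_OFFSET 2, ICE_VSI_LAYER_OFFSET 3); sw_entry_point is passed as a parameter rather than read from hw state, and the function names are illustrative only.

```c
#include <assert.h>

#define SCHED_9_LAYERS    9
#define SCHED_5_LAYERS    5
#define QGRP_LAYER_OFFSET 2
#define VSI_LAYER_OFFSET  3

/* Model of ice_sched_get_vsi_layer() after the patch. */
static unsigned int vsi_layer(unsigned int num_layers, unsigned int sw_entry)
{
    if (num_layers == SCHED_9_LAYERS)
        return num_layers - VSI_LAYER_OFFSET;
    if (num_layers == SCHED_5_LAYERS)
        /* qgroup and VSI layers are the same in the 5-layer topology */
        return num_layers - QGRP_LAYER_OFFSET;
    return sw_entry;
}

/* Model of the unchanged ice_sched_get_qgrp_layer() logic. */
static unsigned int qgrp_layer(unsigned int num_layers, unsigned int sw_entry)
{
    if (num_layers > QGRP_LAYER_OFFSET + 1)
        return num_layers - QGRP_LAYER_OFFSET;
    return sw_entry;
}
```

With 9 layers the VSI sits at layer 6 and the queue groups at 7; with 5 layers both land on layer 3, which is exactly the condition the new qgrp_layer == vsi_layer early return in ice_sched_get_free_qparent() relies on.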
>
>>   }
>>
>> @@ -1124,12 +1123,8 @@ static u8 ice_sched_get_agg_layer(struct ice_hw *hw)
>>        *     7 or less       sw_entry_point_layer
>>        */
>>       /* calculate the aggregator layer based on number of layers. */
>> -    if (hw->num_tx_sched_layers > ICE_AGG_LAYER_OFFSET + 1) {
>> -        u8 layer = hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
>> -
>> -        if (layer > hw->sw_entry_point_layer)
>> -            return layer;
>> -    }
>> +    if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
>> +        return hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
>>       return hw->sw_entry_point_layer;
>>   }
>>
>> @@ -1485,10 +1480,11 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
>>   {
>>       struct ice_sched_node *vsi_node, *qgrp_node;
>>       struct ice_vsi_ctx *vsi_ctx;
>> +    u8 qgrp_layer, vsi_layer;
>>       u16 max_children;
>> -    u8 qgrp_layer;
>>
>>       qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
>> +    vsi_layer = ice_sched_get_vsi_layer(pi->hw);
>>       max_children = pi->hw->max_children[qgrp_layer];
>>
>>       vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
>> @@ -1499,6 +1495,12 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
>>       if (!vsi_node)
>>           return NULL;
>>
>> +    /* If the queue group and vsi layer are same then queues
>> +     * are all attached directly to VSI
>> +     */
>> +    if (qgrp_layer == vsi_layer)
>> +        return vsi_node;
>> +
>>       /* get the first queue group node from VSI sub-tree */
>>       qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
>>       while (qgrp_node) {
>> @@ -3178,8 +3180,9 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
>>       u8 profile_type;
>>       int status;
>>
>> -    if (layer_num >= ICE_AQC_TOPO_MAX_LEVEL_NUM)
>> +    if (!pi || layer_num >= pi->hw->num_tx_sched_layers)
>>           return NULL;
>> +
>>       switch (rl_type) {
>>       case ICE_MIN_BW:
>>           profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
>> @@ -3194,8 +3197,6 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
>>           return NULL;
>>       }
>>
>> -    if (!pi)
>> -        return NULL;
>>       hw = pi->hw;
>>       list_for_each_entry(rl_prof_elem, &pi->rl_prof_list[layer_num],
>>                   list_entry)
>> @@ -3425,7 +3426,7 @@ ice_sched_rm_rl_profile(struct ice_port_info 
>> *pi, u8 layer_num, u8 profile_type,
>>       struct ice_aqc_rl_profile_info *rl_prof_elem;
>>       int status = 0;
>>
>> -    if (layer_num >= ICE_AQC_TOPO_MAX_LEVEL_NUM)
>> +    if (layer_num >= pi->hw->num_tx_sched_layers)
>>           return -EINVAL;
>>       /* Check the existing list for RL profile */
>>       list_for_each_entry(rl_prof_elem, &pi->rl_prof_list[layer_num],
_______________________________________________
Intel-wired-lan mailing list
Intel-wired-lan@osuosl.org
https://lists.osuosl.org/mailman/listinfo/intel-wired-lan


Thread overview: 13+ messages
2022-06-24 10:21 [Intel-wired-lan] [PATCH net-next v2 0/2] ice: Support 5 layer tx scheduler topology Michal Wilczynski
2022-06-24 10:21 ` [Intel-wired-lan] [PATCH net-next v2 1/2] ice: Code added to support 5 layer topology Michal Wilczynski
2022-06-28 22:21   ` Tony Nguyen
2022-06-29  8:54     ` Wilczynski, Michal
2022-06-29 15:57       ` Tony Nguyen
2022-06-29 10:49   ` Paul Menzel
2022-07-01 13:22     ` Wilczynski, Michal
2022-06-24 10:21 ` [Intel-wired-lan] [PATCH net-next v2 2/2] ice: Enable switching default tx scheduler topology Michal Wilczynski
2022-06-25  7:10   ` Paul Menzel
2022-06-29  9:54     ` Wilczynski, Michal
2022-07-01 13:12       ` Wilczynski, Michal
2022-06-28 22:27   ` Tony Nguyen
2022-07-01 15:19     ` Wilczynski, Michal [this message]
