netdev.vger.kernel.org archive mirror
From: Simon Horman <horms@kernel.org>
To: Ahmed Zaki <ahmed.zaki@intel.com>
Cc: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
	jacob.e.keller@intel.com, anthony.l.nguyen@intel.com,
	Junfeng Guo <junfeng.guo@intel.com>,
	Marcin Szycik <marcin.szycik@linux.intel.com>
Subject: Re: [PATCH iwl-next v2 11/13] ice: enable FDIR filters from raw binary patterns for VFs
Date: Fri, 31 May 2024 14:18:02 +0100	[thread overview]
Message-ID: <20240531131802.GG123401@kernel.org> (raw)
In-Reply-To: <20240527185810.3077299-12-ahmed.zaki@intel.com>

On Mon, May 27, 2024 at 12:58:08PM -0600, Ahmed Zaki wrote:
> From: Junfeng Guo <junfeng.guo@intel.com>
> 
> Enable VFs to create FDIR filters from raw binary patterns.
> The corresponding processes for raw flow are added in the
> Parse / Create / Destroy stages.
> 
> Reviewed-by: Marcin Szycik <marcin.szycik@linux.intel.com>
> Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
> Co-developed-by: Ahmed Zaki <ahmed.zaki@intel.com>
> Signed-off-by: Ahmed Zaki <ahmed.zaki@intel.com>

...

> diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c

...

> +/**
> + * ice_flow_set_parser_prof - Set flow profile based on the parsed profile info
> + * @hw: pointer to the HW struct
> + * @dest_vsi: dest VSI
> + * @fdir_vsi: fdir programming VSI
> + * @prof: stores parsed profile info from raw flow
> + * @blk: classification blk
> + */
> +int
> +ice_flow_set_parser_prof(struct ice_hw *hw, u16 dest_vsi, u16 fdir_vsi,
> +			 struct ice_parser_profile *prof, enum ice_block blk)
> +{
> +	u64 id = find_first_bit(prof->ptypes, ICE_FLOW_PTYPE_MAX);
> +	struct ice_flow_prof_params *params __free(kfree);
> +	u8 fv_words = hw->blk[blk].es.fvw;
> +	int status;
> +	int i, idx;
> +
> +	params = kzalloc(sizeof(*params), GFP_KERNEL);
> +	if (!params)
> +		return -ENOMEM;
params seems to be leaked when this function returns below,
in both error and non-error cases.

> +
> +	for (i = 0; i < ICE_MAX_FV_WORDS; i++) {
> +		params->es[i].prot_id = ICE_PROT_INVALID;
> +		params->es[i].off = ICE_FV_OFFSET_INVAL;
> +	}
> +
> +	for (i = 0; i < prof->fv_num; i++) {
> +		if (hw->blk[blk].es.reverse)
> +			idx = fv_words - i - 1;
> +		else
> +			idx = i;
> +		params->es[idx].prot_id = prof->fv[i].proto_id;
> +		params->es[idx].off = prof->fv[i].offset;
> +		params->mask[idx] = (((prof->fv[i].msk) << BITS_PER_BYTE) &
> +				      HI_BYTE_IN_WORD) |
> +				    (((prof->fv[i].msk) >> BITS_PER_BYTE) &
> +				      LO_BYTE_IN_WORD);
> +	}
> +
> +	switch (prof->flags) {
> +	case FLAG_GTPU_DW:
> +		params->attr = ice_attr_gtpu_down;
> +		params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_down);
> +		break;
> +	case FLAG_GTPU_UP:
> +		params->attr = ice_attr_gtpu_up;
> +		params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_up);
> +		break;
> +	default:
> +		if (prof->flags_msk & FLAG_GTPU_MSK) {
> +			params->attr = ice_attr_gtpu_session;
> +			params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_session);
> +		}
> +		break;
> +	}
> +
> +	status = ice_add_prof(hw, blk, id, (u8 *)prof->ptypes,
> +			      params->attr, params->attr_cnt,
> +			      params->es, params->mask, false, false);
> +	if (status)
> +		return status;
> +
> +	status = ice_flow_assoc_fdir_prof(hw, blk, dest_vsi, fdir_vsi, id);
> +	if (status)
> +		ice_rem_prof(hw, blk, id);
> +
> +	return status;
> +}

...

> diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
> index 5635e9da2212..9138f7783da0 100644
> --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
> @@ -1,8 +1,8 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /* Copyright (C) 2022, Intel Corporation. */
>  
> -#include "ice_vf_lib_private.h"
>  #include "ice.h"
> +#include "ice_vf_lib_private.h"
>  #include "ice_lib.h"
>  #include "ice_fltr.h"
>  #include "ice_virtchnl_allowlist.h"

To me tweaking the order of includes seems to indicate
that something isn't quite right. Is there some sort of
dependency loop being juggled here?

> diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> index fec16919ec19..be4266899690 100644
> --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> @@ -12,6 +12,7 @@
>  #include <net/devlink.h>
>  #include <linux/avf/virtchnl.h>
>  #include "ice_type.h"
> +#include "ice_flow.h"
>  #include "ice_virtchnl_fdir.h"
>  #include "ice_vsi_vlan_ops.h"
>  

...

> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> index 1c6ce0c4ed4e..886869648c91 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> @@ -1,9 +1,9 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /* Copyright (C) 2022, Intel Corporation. */
>  
> +#include "ice.h"
>  #include "ice_virtchnl.h"
>  #include "ice_vf_lib_private.h"
> -#include "ice.h"
>  #include "ice_base.h"
>  #include "ice_lib.h"
>  #include "ice_fltr.h"

...

> @@ -784,6 +798,106 @@ ice_vc_fdir_config_input_set(struct ice_vf *vf, struct virtchnl_fdir_add *fltr,
>  	return ret;
>  }
>  
> +/**
> + * ice_vc_fdir_is_raw_flow
> + * @proto: virtchnl protocol headers
> + *
> + * Check if the FDIR rule is a raw flow (protocol agnostic flow) or not.
> + * Note that a common FDIR rule must have a non-zero proto->count.
> + * Thus, we choose the tunnel_level and count of proto as the indicators.
> + * If both the tunnel_level and count of proto are zero, this FDIR rule
> + * will be regarded as a raw flow.
> + *
> + * Return: true if the headers describe a raw flow, false otherwise.
> + */
> +static bool
> +ice_vc_fdir_is_raw_flow(struct virtchnl_proto_hdrs *proto)
> +{
> +	return (proto->tunnel_level == 0 && proto->count == 0);

nit: Parentheses are not needed here.
     Likewise elsewhere.

> +}
> +
> +/**
> + * ice_vc_fdir_parse_raw
> + * @vf: pointer to the VF info
> + * @proto: virtchnl protocol headers
> + * @conf: FDIR configuration for each filter
> + *
> + * Parse the virtual channel filter's raw flow and store it in @conf
> + *
> + * Return: 0 on success, or a negative error code on failure.
> + */
> +static int
> +ice_vc_fdir_parse_raw(struct ice_vf *vf,
> +		      struct virtchnl_proto_hdrs *proto,
> +		      struct virtchnl_fdir_fltr_conf *conf)
> +{
> +	u8 *pkt_buf, *msk_buf __free(kfree);
> +	struct ice_parser_result rslt;
> +	struct ice_pf *pf = vf->pf;
> +	struct ice_parser *psr;
> +	int status = -ENOMEM;
> +	struct ice_hw *hw;
> +	u16 udp_port = 0;
> +
> +	pkt_buf = kzalloc(proto->raw.pkt_len, GFP_KERNEL);
> +	msk_buf = kzalloc(proto->raw.pkt_len, GFP_KERNEL);

msk_buf appears to be leaked when this function returns,
in both error and non-error cases.

> +	if (!pkt_buf || !msk_buf)
> +		goto err_mem_alloc;
> +
> +	memcpy(pkt_buf, proto->raw.spec, proto->raw.pkt_len);
> +	memcpy(msk_buf, proto->raw.mask, proto->raw.pkt_len);
> +
> +	hw = &pf->hw;
> +
> +	/* Get raw profile info via Parser Lib */
> +	psr = ice_parser_create(hw);
> +	if (IS_ERR(psr)) {
> +		status = PTR_ERR(psr);
> +		goto err_mem_alloc;
> +	}
> +
> +	ice_parser_dvm_set(psr, ice_is_dvm_ena(hw));
> +
> +	if (ice_get_open_tunnel_port(hw, &udp_port, TNL_VXLAN))
> +		ice_parser_vxlan_tunnel_set(psr, udp_port, true);
> +
> +	status = ice_parser_run(psr, pkt_buf, proto->raw.pkt_len, &rslt);
> +	if (status)
> +		goto err_parser_destroy;
> +
> +	if (hw->debug_mask & ICE_DBG_PARSER)
> +		ice_parser_result_dump(hw, &rslt);
> +
> +	conf->prof = kzalloc(sizeof(*conf->prof), GFP_KERNEL);
> +	if (!conf->prof)
> +		goto err_parser_destroy;
> +
> +	status = ice_parser_profile_init(&rslt, pkt_buf, msk_buf,
> +					 proto->raw.pkt_len, ICE_BLK_FD,
> +					 conf->prof);
> +	if (status)
> +		goto err_parser_profile_init;
> +
> +	if (hw->debug_mask & ICE_DBG_PARSER)
> +		ice_parser_profile_dump(hw, conf->prof);
> +
> +	/* Store raw flow info into @conf */
> +	conf->pkt_len = proto->raw.pkt_len;
> +	conf->pkt_buf = pkt_buf;
> +	conf->parser_ena = true;
> +
> +	ice_parser_destroy(psr);
> +	return 0;
> +
> +err_parser_profile_init:
> +	kfree(conf->prof);
> +err_parser_destroy:
> +	ice_parser_destroy(psr);
> +err_mem_alloc:
> +	kfree(pkt_buf);
> +	return status;
> +}

...

Thread overview: 36+ messages
2024-05-27 18:57 [PATCH iwl-next v2 00/13] ice: iavf: add support for TC U32 filters on VFs Ahmed Zaki
2024-05-27 18:57 ` [PATCH iwl-next v2 01/13] ice: add parser create and destroy skeleton Ahmed Zaki
2024-05-31 13:11   ` Simon Horman
2024-07-23  7:53     ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:57 ` [PATCH iwl-next v2 02/13] ice: parse and init various DDP parser sections Ahmed Zaki
2024-05-31 13:14   ` Simon Horman
2024-07-23  7:54     ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 03/13] ice: add debugging functions for the " Ahmed Zaki
2024-07-23  7:55   ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 04/13] ice: add parser internal helper functions Ahmed Zaki
2024-05-31 13:15   ` Simon Horman
2024-07-23  8:07     ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 05/13] ice: add parser execution main loop Ahmed Zaki
2024-07-23  7:58   ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 06/13] ice: support turning on/off the parser's double vlan mode Ahmed Zaki
2024-07-23  7:59   ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 07/13] ice: add UDP tunnels support to the parser Ahmed Zaki
2024-07-23  8:01   ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 08/13] ice: add API for parser profile initialization Ahmed Zaki
2024-07-23  8:02   ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 09/13] virtchnl: support raw packet in protocol header Ahmed Zaki
2024-07-23  8:03   ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 10/13] ice: add method to disable FDIR SWAP option Ahmed Zaki
2024-07-23  8:04   ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 11/13] ice: enable FDIR filters from raw binary patterns for VFs Ahmed Zaki
2024-05-31 13:18   ` Simon Horman [this message]
2024-05-31 15:47     ` Ahmed Zaki
2024-05-31 18:11       ` Simon Horman
2024-07-23  8:05         ` [Intel-wired-lan] " Romanowski, Rafal
2024-06-01  0:24       ` Keller, Jacob E
2024-06-01 12:06         ` Simon Horman
2024-05-27 18:58 ` [PATCH iwl-next v2 12/13] iavf: refactor add/del FDIR filters Ahmed Zaki
2024-07-23  8:05   ` [Intel-wired-lan] " Romanowski, Rafal
2024-05-27 18:58 ` [PATCH iwl-next v2 13/13] iavf: add support for offloading tc U32 cls filters Ahmed Zaki
2024-07-23  8:06   ` [Intel-wired-lan] " Romanowski, Rafal
2024-07-23  8:00 ` [Intel-wired-lan] [PATCH iwl-next v2 00/13] ice: iavf: add support for TC U32 filters on VFs Romanowski, Rafal
