From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: intel-wired-lan@lists.osuosl.org, wojciech.drewek@intel.com,
marcin.szycik@intel.com, netdev@vger.kernel.org,
Jedrzej Jagielski <jedrzej.jagielski@intel.com>,
sridhar.samudrala@intel.com
Subject: Re: [Intel-wired-lan] [iwl-next v1 2/2] ice: tc: allow ip_proto matching
Date: Tue, 20 Feb 2024 14:14:32 +0100 [thread overview]
Message-ID: <ZdSluDkqY1R4CMBq@mev-dev> (raw)
In-Reply-To: <dc03726a-d59b-47a1-b394-7a435f8aee1a@molgen.mpg.de>
On Tue, Feb 20, 2024 at 01:26:34PM +0100, Paul Menzel wrote:
> Dear Michal,
>
>
> Thank you for the patch. Some minor nits from my side.
>
> Am 20.02.24 um 11:59 schrieb Michal Swiatkowski:
> > Add new matching type. There is no encap version of ip_proto field.
>
> Excuse my ignorance, I do not understand the second sentence. Is an encap
> version going to be added?
>
No, I will rephrase it, thanks.
> > Use it in the same lookup type as for TTL. In hardware it have the same
>
> s/have/has/
>
Will fix.
> > protocol ID, but different offset.
> >
> > Example command to add filter with ip_proto:
> > $tc filter add dev eth10 ingress protocol ip flower ip_proto icmp \
> > skip_sw action mirred egress redirect dev eth0
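As a usage note for anyone trying the example: an ingress qdisc has to exist on the interface first, and the hit counters can be checked afterwards. This is a sketch rather than a verified session; eth10/eth0 are the placeholder interfaces from the commit message, and an ice-driven NIC with offload support is assumed:

```shell
# Sketch based on the commit message example; eth10/eth0 are placeholders
# and hardware offload (skip_sw) requires a capable NIC.
tc qdisc add dev eth10 ingress          # the ingress qdisc must exist first
tc filter add dev eth10 ingress protocol ip flower ip_proto icmp \
        skip_sw action mirred egress redirect dev eth0
tc -s filter show dev eth10 ingress     # verify the filter and its counters
```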
> >
> > Reviewed-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
> > Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> > ---
> > drivers/net/ethernet/intel/ice/ice_tc_lib.c | 17 +++++++++++++++--
> > drivers/net/ethernet/intel/ice/ice_tc_lib.h | 1 +
> > 2 files changed, 16 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
> > index 49ed5fd7db10..f7c0f62fb730 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
> > @@ -78,7 +78,8 @@ ice_tc_count_lkups(u32 flags, struct ice_tc_flower_lyr_2_4_hdrs *headers,
> > ICE_TC_FLWR_FIELD_DEST_IPV6 | ICE_TC_FLWR_FIELD_SRC_IPV6))
> > lkups_cnt++;
> > - if (flags & (ICE_TC_FLWR_FIELD_IP_TOS | ICE_TC_FLWR_FIELD_IP_TTL))
> > + if (flags & (ICE_TC_FLWR_FIELD_IP_TOS | ICE_TC_FLWR_FIELD_IP_TTL |
> > + ICE_TC_FLWR_FIELD_IP_PROTO))
>
> Should this be sorted? (Also below).
>
Do you mean PROTO before TOS and TTL? I prefer the current order, because
for IPv6 we don't have PROTO, but we do have TOS and TTL; it reads better
with PROTO as the additional field here.
> > lkups_cnt++;
> > /* are L2TPv3 options specified? */
> > @@ -530,7 +531,8 @@ ice_tc_fill_rules(struct ice_hw *hw, u32 flags,
> > }
> > if (headers->l2_key.n_proto == htons(ETH_P_IP) &&
> > - (flags & (ICE_TC_FLWR_FIELD_IP_TOS | ICE_TC_FLWR_FIELD_IP_TTL))) {
> > + (flags & (ICE_TC_FLWR_FIELD_IP_TOS | ICE_TC_FLWR_FIELD_IP_TTL |
> > + ICE_TC_FLWR_FIELD_IP_PROTO))) {
> > list[i].type = ice_proto_type_from_ipv4(inner);
> > if (flags & ICE_TC_FLWR_FIELD_IP_TOS) {
> > @@ -545,6 +547,13 @@ ice_tc_fill_rules(struct ice_hw *hw, u32 flags,
> > headers->l3_mask.ttl;
> > }
> > + if (flags & ICE_TC_FLWR_FIELD_IP_PROTO) {
> > + list[i].h_u.ipv4_hdr.protocol =
> > + headers->l3_key.ip_proto;
> > + list[i].m_u.ipv4_hdr.protocol =
> > + headers->l3_mask.ip_proto;
>
> (Strange to break the line each time, but seems to be the surrounding coding
> style.)
>
Yeah, without the break the line is longer than 80 characters.
> > + }
> > +
> > i++;
> > }
> > @@ -1515,7 +1524,11 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
> > headers->l2_key.n_proto = cpu_to_be16(n_proto_key);
> > headers->l2_mask.n_proto = cpu_to_be16(n_proto_mask);
> > +
> > + if (match.key->ip_proto)
> > + fltr->flags |= ICE_TC_FLWR_FIELD_IP_PROTO;
> > headers->l3_key.ip_proto = match.key->ip_proto;
> > + headers->l3_mask.ip_proto = match.mask->ip_proto;
> > }
> > if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
> > diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.h b/drivers/net/ethernet/intel/ice/ice_tc_lib.h
> > index 65d387163a46..856f371d0687 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.h
> > +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.h
> > @@ -34,6 +34,7 @@
> > #define ICE_TC_FLWR_FIELD_VLAN_PRIO BIT(27)
> > #define ICE_TC_FLWR_FIELD_CVLAN_PRIO BIT(28)
> > #define ICE_TC_FLWR_FIELD_VLAN_TPID BIT(29)
> > +#define ICE_TC_FLWR_FIELD_IP_PROTO BIT(30)
> > #define ICE_TC_FLOWER_MASK_32 0xFFFFFFFF
>
>
> Kind regards,
>
> Paul
Thanks,
Michal
Thread overview: 7+ messages
2024-02-20 10:59 [iwl-next v1 0/2] ice: extend tc flower offload Michal Swiatkowski
2024-02-20 10:59 ` [iwl-next v1 1/2] ice: tc: check src_vsi in case of traffic from VF Michal Swiatkowski
2024-02-20 11:23 ` [Intel-wired-lan] " Paul Menzel
2024-02-20 12:24 ` Michal Swiatkowski
2024-02-20 10:59 ` [iwl-next v1 2/2] ice: tc: allow ip_proto matching Michal Swiatkowski
2024-02-20 12:26 ` [Intel-wired-lan] " Paul Menzel
2024-02-20 13:14 ` Michal Swiatkowski [this message]