From: Alexander Lobakin <alexandr.lobakin@intel.com>
To: Marcin Szycik <marcin.szycik@linux.intel.com>
Cc: intel-wired-lan@lists.osuosl.org
Subject: Re: [Intel-wired-lan] [PATCH net-next] ice: Add support for ip TTL & ToS offload
Date: Wed, 6 Jul 2022 12:39:40 +0200 [thread overview]
Message-ID: <20220706103940.6444-1-alexandr.lobakin@intel.com> (raw)
In-Reply-To: <20220701163222.318531-1-marcin.szycik@linux.intel.com>
From: Marcin Szycik <marcin.szycik@linux.intel.com>
Date: Fri, 1 Jul 2022 18:32:22 +0200
> Add support for parsing TTL and ToS (Hop Limit and Traffic Class) tc fields
> and matching on those fields in filters. Incomplete part of implementation
> was already in place (getting enc_ip and enc_tos from flow_match_ip and
> writing them to filter header).
>
> Note: matching on ipv6 hop_limit is currently not supported by DDP package.
>
> Signed-off-by: Marcin Szycik <marcin.szycik@linux.intel.com>
> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> ---
> drivers/net/ethernet/intel/ice/ice_tc_lib.c | 138 +++++++++++++++++++-
> drivers/net/ethernet/intel/ice/ice_tc_lib.h | 6 +
> 2 files changed, 140 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
> index 14795157846b..f482715cdf7f 100644
> --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
> @@ -36,6 +36,10 @@ ice_tc_count_lkups(u32 flags, struct ice_tc_flower_lyr_2_4_hdrs *headers,
> ICE_TC_FLWR_FIELD_ENC_DEST_IPV6))
> lkups_cnt++;
>
> + if (flags & (ICE_TC_FLWR_FIELD_ENC_IP_TOS |
> + ICE_TC_FLWR_FIELD_ENC_IP_TTL))
> + lkups_cnt++;
> +
> if (flags & ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT)
> lkups_cnt++;
>
> @@ -59,6 +63,9 @@ ice_tc_count_lkups(u32 flags, struct ice_tc_flower_lyr_2_4_hdrs *headers,
> ICE_TC_FLWR_FIELD_DEST_IPV6 | ICE_TC_FLWR_FIELD_SRC_IPV6))
> lkups_cnt++;
>
> + if (flags & (ICE_TC_FLWR_FIELD_IP_TOS | ICE_TC_FLWR_FIELD_IP_TTL))
> + lkups_cnt++;
> +
> /* is L4 (TCP/UDP/any other L4 protocol fields) specified? */
> if (flags & (ICE_TC_FLWR_FIELD_DEST_L4_PORT |
> ICE_TC_FLWR_FIELD_SRC_L4_PORT))
> @@ -252,6 +259,48 @@ ice_tc_fill_tunnel_outer(u32 flags, struct ice_tc_flower_fltr *fltr,
> i++;
> }
>
> + if (fltr->inner_headers.l2_key.n_proto == htons(ETH_P_IP) &&
> + flags & (ICE_TC_FLWR_FIELD_ENC_IP_TOS |
> + ICE_TC_FLWR_FIELD_ENC_IP_TTL)) {
> + list[i].type = ice_proto_type_from_ipv4(false);
> +
> + if (flags & ICE_TC_FLWR_FIELD_ENC_IP_TOS) {
> + list[i].h_u.ipv4_hdr.tos = hdr->l3_key.tos;
> + list[i].m_u.ipv4_hdr.tos = hdr->l3_mask.tos;
> + }
> +
> + if (flags & ICE_TC_FLWR_FIELD_ENC_IP_TTL) {
> + list[i].h_u.ipv4_hdr.time_to_live = hdr->l3_key.ttl;
> + list[i].m_u.ipv4_hdr.time_to_live = hdr->l3_mask.ttl;
> + }
> +
> + i++;
> + }
> +
> + if (fltr->inner_headers.l2_key.n_proto == htons(ETH_P_IPV6) &&
> + flags & (ICE_TC_FLWR_FIELD_ENC_IP_TOS |
> + ICE_TC_FLWR_FIELD_ENC_IP_TTL)) {
Please wrap the second condition in a separate pair of parentheses,
as it's a bitop.
> + struct ice_ipv6_hdr *hdr_h, *hdr_m;
> +
> + hdr_h = &list[i].h_u.ipv6_hdr;
> + hdr_m = &list[i].m_u.ipv6_hdr;
> + list[i].type = ice_proto_type_from_ipv6(false);
> +
> + if (flags & ICE_TC_FLWR_FIELD_ENC_IP_TOS) {
> + hdr_h->be_ver_tc_flow |= htonl(hdr->l3_key.tos <<
> + ICE_IPV6_HDR_TC_OFFSET);
^^^^^^^^^^^^^^^^^^^^^^
A candidate for FIELD_PREP()?
> + hdr_m->be_ver_tc_flow |= htonl(hdr->l3_mask.tos <<
> + ICE_IPV6_HDR_TC_OFFSET);
> + }
> +
> + if (flags & ICE_TC_FLWR_FIELD_ENC_IP_TTL) {
> + hdr_h->hop_limit = hdr->l3_key.ttl;
> + hdr_m->hop_limit = hdr->l3_mask.ttl;
> + }
> +
> + i++;
> + }
> +
> if ((flags & ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT) &&
> hdr->l3_key.ip_proto == IPPROTO_UDP) {
> list[i].type = ICE_UDP_OF;
> @@ -393,6 +442,48 @@ ice_tc_fill_rules(struct ice_hw *hw, u32 flags,
> i++;
> }
>
> + if (headers->l2_key.n_proto == htons(ETH_P_IP) &&
> + flags & (ICE_TC_FLWR_FIELD_IP_TOS | ICE_TC_FLWR_FIELD_IP_TTL)) {
Also a bitop here, so a separate pair of parentheses is recommended.
> + list[i].type = ice_proto_type_from_ipv4(inner);
> +
> + if (flags & ICE_TC_FLWR_FIELD_IP_TOS) {
> + list[i].h_u.ipv4_hdr.tos = headers->l3_key.tos;
> + list[i].m_u.ipv4_hdr.tos = headers->l3_mask.tos;
> + }
> +
> + if (flags & ICE_TC_FLWR_FIELD_IP_TTL) {
> + list[i].h_u.ipv4_hdr.time_to_live =
> + headers->l3_key.ttl;
> + list[i].m_u.ipv4_hdr.time_to_live =
> + headers->l3_mask.ttl;
> + }
> +
> + i++;
> + }
> +
> + if (headers->l2_key.n_proto == htons(ETH_P_IPV6) &&
> + flags & (ICE_TC_FLWR_FIELD_IP_TOS | ICE_TC_FLWR_FIELD_IP_TTL)) {
Same.
> + struct ice_ipv6_hdr *hdr_h, *hdr_m;
> +
> + hdr_h = &list[i].h_u.ipv6_hdr;
> + hdr_m = &list[i].m_u.ipv6_hdr;
> + list[i].type = ice_proto_type_from_ipv6(inner);
> +
> + if (flags & ICE_TC_FLWR_FIELD_IP_TOS) {
> + hdr_h->be_ver_tc_flow |= htonl(headers->l3_key.tos <<
> + ICE_IPV6_HDR_TC_OFFSET);
Same regarding FIELD_PREP().
You can even use be32_encode_bits() or be32p_replace_bits(), e.g.:
hdr_h->be_ver_tc_flow |=
be32_encode_bits(headers->l3_key.tos,
ICE_IPV6_HDR_TC_OFFSET);
or
be32p_replace_bits(&hdr_h->be_ver_tc_flow,
headers->l3_key.tos,
ICE_IPV6_HDR_TC_OFFSET);
> + hdr_m->be_ver_tc_flow |= htonl(headers->l3_mask.tos <<
> + ICE_IPV6_HDR_TC_OFFSET);
> + }
> +
> + if (flags & ICE_TC_FLWR_FIELD_IP_TTL) {
> + hdr_h->hop_limit = headers->l3_key.ttl;
> + hdr_m->hop_limit = headers->l3_mask.ttl;
> + }
> +
> + i++;
> + }
> +
> /* copy L4 (src, dest) port */
> if (flags & (ICE_TC_FLWR_FIELD_DEST_L4_PORT |
> ICE_TC_FLWR_FIELD_SRC_L4_PORT)) {
> @@ -786,6 +877,40 @@ ice_tc_set_ipv6(struct flow_match_ipv6_addrs *match,
> return 0;
> }
>
> +/**
> + * ice_tc_set_tos_ttl - Parse IP ToS/TTL from TC flower filter
> + * @match: Pointer to flow match structure
> + * @fltr: Pointer to filter structure
> + * @headers: inner or outer header fields
> + * @is_encap: set true for tunnel
> + */
> +static void
> +ice_tc_set_tos_ttl(struct flow_match_ip *match,
> + struct ice_tc_flower_fltr *fltr,
> + struct ice_tc_flower_lyr_2_4_hdrs *headers,
> + bool is_encap)
> +{
> + if (match->mask->tos) {
> + if (is_encap)
> + fltr->flags |= ICE_TC_FLWR_FIELD_ENC_IP_TOS;
> + else
> + fltr->flags |= ICE_TC_FLWR_FIELD_IP_TOS;
> +
> + headers->l3_key.tos = match->key->tos;
> + headers->l3_mask.tos = match->mask->tos;
> + }
> +
> + if (match->mask->ttl) {
> + if (is_encap)
> + fltr->flags |= ICE_TC_FLWR_FIELD_ENC_IP_TTL;
> + else
> + fltr->flags |= ICE_TC_FLWR_FIELD_IP_TTL;
> +
> + headers->l3_key.ttl = match->key->ttl;
> + headers->l3_mask.ttl = match->mask->ttl;
> + }
> +}
> +
> /**
> * ice_tc_set_port - Parse ports from TC flower filter
> * @match: Flow match structure
> @@ -915,10 +1040,7 @@ ice_parse_tunnel_attr(struct net_device *dev, struct flow_rule *rule,
> struct flow_match_ip match;
>
> flow_rule_match_enc_ip(rule, &match);
> - headers->l3_key.tos = match.key->tos;
> - headers->l3_key.ttl = match.key->ttl;
> - headers->l3_mask.tos = match.mask->tos;
> - headers->l3_mask.ttl = match.mask->ttl;
> + ice_tc_set_tos_ttl(&match, fltr, headers, true);
> }
>
> if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS) &&
> @@ -987,6 +1109,7 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
> BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) |
> BIT(FLOW_DISSECTOR_KEY_ENC_PORTS) |
> BIT(FLOW_DISSECTOR_KEY_ENC_OPTS) |
> + BIT(FLOW_DISSECTOR_KEY_IP) |
> BIT(FLOW_DISSECTOR_KEY_ENC_IP) |
> BIT(FLOW_DISSECTOR_KEY_PORTS))) {
> NL_SET_ERR_MSG_MOD(fltr->extack, "Unsupported key used");
> @@ -1148,6 +1271,13 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
> return -EINVAL;
> }
>
> + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP)) {
> + struct flow_match_ip match;
> +
> + flow_rule_match_ip(rule, &match);
> + ice_tc_set_tos_ttl(&match, fltr, headers, false);
> + }
> +
> if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_PORTS)) {
> struct flow_match_ports match;
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.h b/drivers/net/ethernet/intel/ice/ice_tc_lib.h
> index 0193874cd203..a083dcaed0c4 100644
> --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.h
> +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.h
> @@ -24,9 +24,15 @@
> #define ICE_TC_FLWR_FIELD_ETH_TYPE_ID BIT(17)
> #define ICE_TC_FLWR_FIELD_ENC_OPTS BIT(18)
> #define ICE_TC_FLWR_FIELD_CVLAN BIT(19)
> +#define ICE_TC_FLWR_FIELD_IP_TOS BIT(20)
> +#define ICE_TC_FLWR_FIELD_IP_TTL BIT(21)
> +#define ICE_TC_FLWR_FIELD_ENC_IP_TOS BIT(22)
> +#define ICE_TC_FLWR_FIELD_ENC_IP_TTL BIT(23)
>
> #define ICE_TC_FLOWER_MASK_32 0xFFFFFFFF
>
> +#define ICE_IPV6_HDR_TC_OFFSET 20
> +
> struct ice_indr_block_priv {
> struct net_device *netdev;
> struct ice_netdev_priv *np;
> --
> 2.35.1
Those are all minor anyway, great job in general!
Thanks,
Olek