netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/3] cls_flower: Offload unmasked key
@ 2020-06-19  9:41 dsatish
  2020-06-19  9:41 ` [PATCH net-next 1/3] cls_flower: Set addr_type when ip mask is non-zero dsatish
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: dsatish @ 2020-06-19  9:41 UTC (permalink / raw)
  To: davem
  Cc: jhs, xiyou.wangcong, jiri, kuba, netdev, simon.horman, kesavac,
	satish.d, prathibha.nagooru, intiyaz.basha, jai.rana

This series adds support for offloading the unmasked key along with
the masked key to the hardware. This enables the hardware to manage
its own tables better based on its features and capabilities.

As part of its limitations or optimizations, the hardware may remove
certain flows on its own. To handle such cases where the hardware has
lost a flow, TC can re-offload a flow for which an entry already
exists in its flow table: when TC returns the EEXIST error, it also
programs the flow in hardware, provided hardware offload is enabled.

This also covers the use case where addr_type should be set only if
the mask for IPv4/IPv6 is non-zero: when classifying a packet, the
address type is set based on the mask dissector of the IPv4 and IPv6
keys, so when inserting a flow the addr_type should likewise be set
based on mask availability.

Satish Dhote (3):
  cls_flower: Set addr_type when ip mask is non-zero
  cls_flower: Pass the unmasked key to the hardware
  cls_flower: Allow flow offloading though masked key exists

 include/net/flow_offload.h |  45 ++++++++++
 net/core/flow_offload.c    | 171 +++++++++++++++++++++++++++++++++++++
 net/sched/cls_flower.c     | 132 +++++++++++++++++++++-------
 3 files changed, 316 insertions(+), 32 deletions(-)

-- 
2.17.1



* [PATCH net-next 1/3] cls_flower: Set addr_type when ip mask is non-zero
  2020-06-19  9:41 [PATCH net-next 0/3] cls_flower: Offload unmasked key dsatish
@ 2020-06-19  9:41 ` dsatish
  2020-06-19  9:41 ` [PATCH net-next 2/3] cls_flower: Pass the unmasked key to hw dsatish
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 11+ messages in thread
From: dsatish @ 2020-06-19  9:41 UTC (permalink / raw)
  To: davem
  Cc: jhs, xiyou.wangcong, jiri, kuba, netdev, simon.horman, kesavac,
	satish.d, prathibha.nagooru, intiyaz.basha, jai.rana

Set the address type in the flower key and mask structures only if
the mask for the IPv4 or IPv6 address fields is non-zero.

When classifying a packet, the address type is set based on the mask
dissector of the IPv4 and IPv6 keys; hence, when inserting a flow,
the addr_type should likewise be set based on mask availability.

This is required for an upcoming OVS patch in which OVS offloads all
fields that are part of the flower key, irrespective of whether the
masks for those fields are zero or non-zero.
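To illustrate the check this patch introduces, here is a minimal userspace sketch (not kernel code; the `demo_*` and `memchr_inv_demo` names are hypothetical stand-ins for `fl_flow_key`, `memchr_inv()` and `FL_KEY_IS_MASKED()`): a struct member counts as "masked" only when at least one of its bytes is non-zero, and only then is the address type set.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for struct fl_flow_key / flow_dissector_key_control */
struct demo_key {
	struct { uint32_t src, dst; } ipv4;
	uint16_t addr_type;	/* stands in for control.addr_type */
};

/* Userspace stand-in for the kernel's memchr_inv(): returns a pointer to
 * the first byte differing from 'c', or NULL if all n bytes match.
 */
static const void *memchr_inv_demo(const void *p, int c, size_t n)
{
	const unsigned char *s = p;
	size_t i;

	for (i = 0; i < n; i++)
		if (s[i] != (unsigned char)c)
			return s + i;
	return NULL;
}

/* Analogue of FL_KEY_IS_MASKED(): non-NULL iff the member has a non-zero byte */
#define DEMO_IS_MASKED(mask, member) \
	memchr_inv_demo(&(mask)->member, 0, sizeof((mask)->member))

/* Mirror of the logic the patch adds to fl_set_key(): set addr_type only
 * when the IPv4 mask has at least one non-zero byte.
 */
static void demo_set_addr_type(struct demo_key *key, struct demo_key *mask)
{
	if (DEMO_IS_MASKED(mask, ipv4)) {
		key->addr_type = 1;	/* e.g. FLOW_DISSECTOR_KEY_IPV4_ADDRS */
		mask->addr_type = (uint16_t)~0;
	}
}
```

With an all-zero IPv4 mask, `addr_type` is left untouched; a single non-zero mask byte is enough to set it, matching how the mask dissector behaves at classification time.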

Signed-off-by: Chandra Kesava <kesavac@gmail.com>
Signed-off-by: Prathibha Nagooru <prathibha.nagooru@oneconvergence.com>
Signed-off-by: Satish Dhote <satish.d@oneconvergence.com>
---
 net/sched/cls_flower.c | 66 ++++++++++++++++++++++++------------------
 1 file changed, 38 insertions(+), 28 deletions(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index b2da37286082..64b70d396397 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -1422,6 +1422,26 @@ static int fl_set_key_ct(struct nlattr **tb,
 	return 0;
 }
 
+#define FL_KEY_MEMBER_OFFSET(member) offsetof(struct fl_flow_key, member)
+#define FL_KEY_MEMBER_SIZE(member) sizeof_field(struct fl_flow_key, member)
+
+#define FL_KEY_IS_MASKED(mask, member)					\
+	memchr_inv(((char *)mask) + FL_KEY_MEMBER_OFFSET(member),	\
+		   0, FL_KEY_MEMBER_SIZE(member))			\
+
+#define FL_KEY_SET(keys, cnt, id, member)				\
+	do {								\
+		keys[cnt].key_id = id;					\
+		keys[cnt].offset = FL_KEY_MEMBER_OFFSET(member);	\
+		cnt++;							\
+	} while (0)
+
+#define FL_KEY_SET_IF_MASKED(mask, keys, cnt, id, member)		\
+	do {								\
+		if (FL_KEY_IS_MASKED(mask, member))			\
+			FL_KEY_SET(keys, cnt, id, member);		\
+	} while (0)
+
 static int fl_set_key(struct net *net, struct nlattr **tb,
 		      struct fl_flow_key *key, struct fl_flow_key *mask,
 		      struct netlink_ext_ack *extack)
@@ -1484,23 +1504,27 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
 	}
 
 	if (tb[TCA_FLOWER_KEY_IPV4_SRC] || tb[TCA_FLOWER_KEY_IPV4_DST]) {
-		key->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
-		mask->control.addr_type = ~0;
 		fl_set_key_val(tb, &key->ipv4.src, TCA_FLOWER_KEY_IPV4_SRC,
 			       &mask->ipv4.src, TCA_FLOWER_KEY_IPV4_SRC_MASK,
 			       sizeof(key->ipv4.src));
 		fl_set_key_val(tb, &key->ipv4.dst, TCA_FLOWER_KEY_IPV4_DST,
 			       &mask->ipv4.dst, TCA_FLOWER_KEY_IPV4_DST_MASK,
 			       sizeof(key->ipv4.dst));
+		if (FL_KEY_IS_MASKED(mask, ipv4)) {
+			key->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+			mask->control.addr_type = ~0;
+		}
 	} else if (tb[TCA_FLOWER_KEY_IPV6_SRC] || tb[TCA_FLOWER_KEY_IPV6_DST]) {
-		key->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
-		mask->control.addr_type = ~0;
 		fl_set_key_val(tb, &key->ipv6.src, TCA_FLOWER_KEY_IPV6_SRC,
 			       &mask->ipv6.src, TCA_FLOWER_KEY_IPV6_SRC_MASK,
 			       sizeof(key->ipv6.src));
 		fl_set_key_val(tb, &key->ipv6.dst, TCA_FLOWER_KEY_IPV6_DST,
 			       &mask->ipv6.dst, TCA_FLOWER_KEY_IPV6_DST_MASK,
 			       sizeof(key->ipv6.dst));
+		if (FL_KEY_IS_MASKED(mask, ipv6)) {
+			key->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+			mask->control.addr_type = ~0;
+		}
 	}
 
 	if (key->basic.ip_proto == IPPROTO_TCP) {
@@ -1581,8 +1605,6 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
 
 	if (tb[TCA_FLOWER_KEY_ENC_IPV4_SRC] ||
 	    tb[TCA_FLOWER_KEY_ENC_IPV4_DST]) {
-		key->enc_control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
-		mask->enc_control.addr_type = ~0;
 		fl_set_key_val(tb, &key->enc_ipv4.src,
 			       TCA_FLOWER_KEY_ENC_IPV4_SRC,
 			       &mask->enc_ipv4.src,
@@ -1593,12 +1615,15 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
 			       &mask->enc_ipv4.dst,
 			       TCA_FLOWER_KEY_ENC_IPV4_DST_MASK,
 			       sizeof(key->enc_ipv4.dst));
+		if (FL_KEY_IS_MASKED(mask, enc_ipv4)) {
+			key->enc_control.addr_type =
+				FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+			mask->enc_control.addr_type = ~0;
+		}
 	}
 
 	if (tb[TCA_FLOWER_KEY_ENC_IPV6_SRC] ||
 	    tb[TCA_FLOWER_KEY_ENC_IPV6_DST]) {
-		key->enc_control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
-		mask->enc_control.addr_type = ~0;
 		fl_set_key_val(tb, &key->enc_ipv6.src,
 			       TCA_FLOWER_KEY_ENC_IPV6_SRC,
 			       &mask->enc_ipv6.src,
@@ -1609,6 +1634,11 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
 			       &mask->enc_ipv6.dst,
 			       TCA_FLOWER_KEY_ENC_IPV6_DST_MASK,
 			       sizeof(key->enc_ipv6.dst));
+		if (FL_KEY_IS_MASKED(mask, enc_ipv6)) {
+			key->enc_control.addr_type =
+				FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+			mask->enc_control.addr_type = ~0;
+		}
 	}
 
 	fl_set_key_val(tb, &key->enc_key_id.keyid, TCA_FLOWER_KEY_ENC_KEY_ID,
@@ -1667,26 +1697,6 @@ static int fl_init_mask_hashtable(struct fl_flow_mask *mask)
 	return rhashtable_init(&mask->ht, &mask->filter_ht_params);
 }
 
-#define FL_KEY_MEMBER_OFFSET(member) offsetof(struct fl_flow_key, member)
-#define FL_KEY_MEMBER_SIZE(member) sizeof_field(struct fl_flow_key, member)
-
-#define FL_KEY_IS_MASKED(mask, member)						\
-	memchr_inv(((char *)mask) + FL_KEY_MEMBER_OFFSET(member),		\
-		   0, FL_KEY_MEMBER_SIZE(member))				\
-
-#define FL_KEY_SET(keys, cnt, id, member)					\
-	do {									\
-		keys[cnt].key_id = id;						\
-		keys[cnt].offset = FL_KEY_MEMBER_OFFSET(member);		\
-		cnt++;								\
-	} while(0);
-
-#define FL_KEY_SET_IF_MASKED(mask, keys, cnt, id, member)			\
-	do {									\
-		if (FL_KEY_IS_MASKED(mask, member))				\
-			FL_KEY_SET(keys, cnt, id, member);			\
-	} while(0);
-
 static void fl_init_dissector(struct flow_dissector *dissector,
 			      struct fl_flow_key *mask)
 {
-- 
2.17.1



* [PATCH net-next 2/3] cls_flower: Pass the unmasked key to hw
  2020-06-19  9:41 [PATCH net-next 0/3] cls_flower: Offload unmasked key dsatish
  2020-06-19  9:41 ` [PATCH net-next 1/3] cls_flower: Set addr_type when ip mask is non-zero dsatish
@ 2020-06-19  9:41 ` dsatish
  2020-06-19 13:14   ` Ido Schimmel
  2020-06-19 14:00   ` Jiri Pirko
  2020-06-19  9:41 ` [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exists dsatish
  2020-06-19 20:12 ` [PATCH net-next 0/3] cls_flower: Offload unmasked key David Miller
  3 siblings, 2 replies; 11+ messages in thread
From: dsatish @ 2020-06-19  9:41 UTC (permalink / raw)
  To: davem
  Cc: jhs, xiyou.wangcong, jiri, kuba, netdev, simon.horman, kesavac,
	satish.d, prathibha.nagooru, intiyaz.basha, jai.rana

Pass the unmasked key along with the masked key to the hardware.
This enables the hardware to manage its own tables better based on
its features and capabilities.
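The mechanism behind the new `flow_rule_match_unmasked_key_*()` helpers can be sketched in userspace as follows (hypothetical `demo_*` names; not the kernel implementation): a dissector records, per key id, the offset of that key within the flow-key blob, and the same offset table resolves both the unmasked key and the mask, which is what `skb_flow_dissector_target()` does in the real code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical key ids, standing in for FLOW_DISSECTOR_KEY_* */
enum demo_key_id { DEMO_KEY_BASIC, DEMO_KEY_IPV4, DEMO_KEY_MAX };

/* Stand-in for struct fl_flow_key */
struct demo_key {
	uint16_t basic;			/* e.g. n_proto */
	struct { uint32_t src, dst; } ipv4;
};

/* Stand-in for struct flow_dissector: key id -> offset in the blob */
struct demo_dissector {
	size_t offset[DEMO_KEY_MAX];
};

/* Stand-in for struct flow_match with the fields this patch adds */
struct demo_match {
	struct demo_dissector *dissector;
	void *unmasked_key;
	void *mask;
};

/* Userspace analogue of skb_flow_dissector_target() */
static void *demo_target(struct demo_dissector *d, enum demo_key_id id,
			 void *blob)
{
	return (char *)blob + d->offset[id];
}

/* Analogue of a flow_rule_match_unmasked_key_*() helper: hand the driver
 * pointers into the unmasked key and the mask for one key id.
 */
static void demo_match_ipv4(const struct demo_match *m,
			    void **key_out, void **mask_out)
{
	*key_out = demo_target(m->dissector, DEMO_KEY_IPV4, m->unmasked_key);
	*mask_out = demo_target(m->dissector, DEMO_KEY_IPV4, m->mask);
}
```

The point is that the unmasked key reuses the mask's offsets through its own dissector, so a driver can read the original filter values even for bytes the mask would have zeroed out.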

Signed-off-by: Chandra Kesava <kesavac@gmail.com>
Signed-off-by: Prathibha Nagooru <prathibha.nagooru@oneconvergence.com>
Signed-off-by: Satish Dhote <satish.d@oneconvergence.com>
---
 include/net/flow_offload.h |  45 ++++++++++
 net/core/flow_offload.c    | 171 +++++++++++++++++++++++++++++++++++++
 net/sched/cls_flower.c     |  43 ++++++++++
 3 files changed, 259 insertions(+)

diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index f2c8311a0433..26c6bd6bdb98 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -11,6 +11,8 @@ struct flow_match {
 	struct flow_dissector	*dissector;
 	void			*mask;
 	void			*key;
+	void			*unmasked_key;
+	struct flow_dissector	*unmasked_key_dissector;
 };
 
 struct flow_match_meta {
@@ -118,6 +120,49 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
 void flow_rule_match_ct(const struct flow_rule *rule,
 			struct flow_match_ct *out);
 
+void flow_rule_match_unmasked_key_meta(const struct flow_rule *rule,
+				       struct flow_match_meta *out);
+void flow_rule_match_unmasked_key_basic(const struct flow_rule *rule,
+					struct flow_match_basic *out);
+void flow_rule_match_unmasked_key_control(const struct flow_rule *rule,
+					  struct flow_match_control *out);
+void flow_rule_match_unmasked_key_eth_addrs(const struct flow_rule *rule,
+					    struct flow_match_eth_addrs *out);
+void flow_rule_match_unmasked_key_vlan(const struct flow_rule *rule,
+				       struct flow_match_vlan *out);
+void flow_rule_match_unmasked_key_cvlan(const struct flow_rule *rule,
+					struct flow_match_vlan *out);
+void flow_rule_match_unmasked_key_ipv4_addrs(const struct flow_rule *rule,
+					     struct flow_match_ipv4_addrs *out);
+void flow_rule_match_unmasked_key_ipv6_addrs(const struct flow_rule *rule,
+					     struct flow_match_ipv6_addrs *out);
+void flow_rule_match_unmasked_key_ip(const struct flow_rule *rule,
+				     struct flow_match_ip *out);
+void flow_rule_match_unmasked_key_ports(const struct flow_rule *rule,
+					struct flow_match_ports *out);
+void flow_rule_match_unmasked_key_tcp(const struct flow_rule *rule,
+				      struct flow_match_tcp *out);
+void flow_rule_match_unmasked_key_icmp(const struct flow_rule *rule,
+				       struct flow_match_icmp *out);
+void flow_rule_match_unmasked_key_mpls(const struct flow_rule *rule,
+				       struct flow_match_mpls *out);
+void flow_rule_match_unmasked_key_enc_control(const struct flow_rule *rule,
+					      struct flow_match_control *out);
+void flow_rule_match_unmasked_key_enc_ipv4_addrs(const struct flow_rule *rule,
+					    struct flow_match_ipv4_addrs *out);
+void flow_rule_match_unmasked_key_enc_ipv6_addrs(const struct flow_rule *rule,
+					    struct flow_match_ipv6_addrs *out);
+void flow_rule_match_unmasked_key_enc_ip(const struct flow_rule *rule,
+					 struct flow_match_ip *out);
+void flow_rule_match_unmasked_key_enc_ports(const struct flow_rule *rule,
+					    struct flow_match_ports *out);
+void flow_rule_match_unmasked_key_enc_keyid(const struct flow_rule *rule,
+					    struct flow_match_enc_keyid *out);
+void flow_rule_match_unmasked_key_enc_opts(const struct flow_rule *rule,
+					   struct flow_match_enc_opts *out);
+void flow_rule_match_unmasked_key_ct(const struct flow_rule *rule,
+				     struct flow_match_ct *out);
+
 enum flow_action_id {
 	FLOW_ACTION_ACCEPT		= 0,
 	FLOW_ACTION_DROP,
diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
index 0cfc35e6be28..a98c31e864b1 100644
--- a/net/core/flow_offload.c
+++ b/net/core/flow_offload.c
@@ -173,6 +173,177 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
 }
 EXPORT_SYMBOL(flow_rule_match_enc_opts);
 
+#define FLOW_UNMASKED_KEY_DISSECTOR_MATCH(__rule, __type, __out)	\
+	const struct flow_match *__m = &(__rule)->match;		\
+	struct flow_dissector *__d = (__m)->unmasked_key_dissector;	\
+									\
+	(__out)->key = skb_flow_dissector_target(__d, __type,		\
+						 (__m)->unmasked_key);	\
+	(__out)->mask = skb_flow_dissector_target(__d, __type,		\
+						  (__m)->mask)		\
+
+void flow_rule_match_unmasked_key_meta(const struct flow_rule *rule,
+				       struct flow_match_meta *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_META, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_meta);
+
+void
+flow_rule_match_unmasked_key_basic(const struct flow_rule *rule,
+				   struct flow_match_basic *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_BASIC, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_basic);
+
+void flow_rule_match_unmasked_key_control(const struct flow_rule *rule,
+					  struct flow_match_control *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CONTROL,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_control);
+
+void flow_rule_match_unmasked_key_eth_addrs(const struct flow_rule *rule,
+					    struct flow_match_eth_addrs *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_eth_addrs);
+
+void flow_rule_match_unmasked_key_vlan(const struct flow_rule *rule,
+				       struct flow_match_vlan *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_VLAN, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_vlan);
+
+void flow_rule_match_unmasked_key_cvlan(const struct flow_rule *rule,
+					struct flow_match_vlan *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CVLAN, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_cvlan);
+
+void flow_rule_match_unmasked_key_ipv4_addrs(const struct flow_rule *rule,
+					     struct flow_match_ipv4_addrs *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_ipv4_addrs);
+
+void flow_rule_match_unmasked_key_ipv6_addrs(const struct flow_rule *rule,
+					     struct flow_match_ipv6_addrs *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_ipv6_addrs);
+
+void flow_rule_match_unmasked_key_ip(const struct flow_rule *rule,
+				     struct flow_match_ip *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IP, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_ip);
+
+void flow_rule_match_unmasked_key_ports(const struct flow_rule *rule,
+					struct flow_match_ports *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_PORTS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_ports);
+
+void flow_rule_match_unmasked_key_tcp(const struct flow_rule *rule,
+				      struct flow_match_tcp *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_TCP, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_tcp);
+
+void flow_rule_match_unmasked_key_icmp(const struct flow_rule *rule,
+				       struct flow_match_icmp *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ICMP, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_icmp);
+
+void flow_rule_match_unmasked_key_mpls(const struct flow_rule *rule,
+				       struct flow_match_mpls *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_MPLS, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_mpls);
+
+void flow_rule_match_unmasked_key_enc_control(const struct flow_rule *rule,
+					      struct flow_match_control *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_control);
+
+void
+flow_rule_match_unmasked_key_enc_ipv4_addrs(const struct flow_rule *rule,
+					    struct flow_match_ipv4_addrs *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule,
+					  FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_ipv4_addrs);
+
+void
+flow_rule_match_unmasked_key_enc_ipv6_addrs(const struct flow_rule *rule,
+					    struct flow_match_ipv6_addrs *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule,
+					  FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_ipv6_addrs);
+
+void flow_rule_match_unmasked_key_enc_ip(const struct flow_rule *rule,
+					 struct flow_match_ip *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_IP, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_ip);
+
+void flow_rule_match_unmasked_key_enc_ports(const struct flow_rule *rule,
+					    struct flow_match_ports *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_PORTS,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_ports);
+
+void flow_rule_match_unmasked_key_enc_keyid(const struct flow_rule *rule,
+					    struct flow_match_enc_keyid *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_KEYID,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_keyid);
+
+void flow_rule_match_unmasked_key_enc_opts(const struct flow_rule *rule,
+					   struct flow_match_enc_opts *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_OPTS,
+					  out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_opts);
+
+void flow_rule_match_unmasked_key_ct(const struct flow_rule *rule,
+				     struct flow_match_ct *out)
+{
+	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CT, out);
+}
+EXPORT_SYMBOL(flow_rule_match_unmasked_key_ct);
+
 struct flow_action_cookie *flow_action_cookie_create(void *data,
 						     unsigned int len,
 						     gfp_t gfp)
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 64b70d396397..f1a5352cbb04 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -121,6 +121,7 @@ struct cls_fl_filter {
 	 */
 	refcount_t refcnt;
 	bool deleted;
+	struct flow_dissector unmasked_key_dissector;
 };
 
 static const struct rhashtable_params mask_ht_params = {
@@ -449,6 +450,13 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 	cls_flower.rule->match.key = &f->mkey;
 	cls_flower.classid = f->res.classid;
 
+	/* Pass the unmasked key and its dissector to the driver as well;
+	 * the hardware may optimize its flow table based on its capabilities.
+	 */
+	cls_flower.rule->match.unmasked_key = &f->key;
+	cls_flower.rule->match.unmasked_key_dissector =
+						&f->unmasked_key_dissector;
+
 	err = tc_setup_flow_action(&cls_flower.rule->action, &f->exts);
 	if (err) {
 		kfree(cls_flower.rule);
@@ -1753,6 +1761,39 @@ static void fl_init_dissector(struct flow_dissector *dissector,
 	skb_flow_dissector_init(dissector, keys, cnt);
 }
 
+/* Initialize dissector for unmasked key. */
+static void fl_init_unmasked_key_dissector(struct flow_dissector *dissector)
+{
+	struct flow_dissector_key keys[FLOW_DISSECTOR_KEY_MAX];
+	size_t cnt = 0;
+
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_META, meta);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_CONTROL, control);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_BASIC, basic);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ETH_ADDRS, eth);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_PORTS, tp);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_PORTS_RANGE, tp_range);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_IP, ip);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_TCP, tcp);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ICMP, icmp);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ARP, arp);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_MPLS, mpls);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_VLAN, vlan);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_CVLAN, cvlan);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_KEYID, enc_key_id);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS, enc_ipv4);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS, enc_ipv6);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_CONTROL, enc_control);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_PORTS, enc_tp);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_IP, enc_ip);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_OPTS, enc_opts);
+	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_CT, ct);
+
+	skb_flow_dissector_init(dissector, keys, cnt);
+}
+
 static struct fl_flow_mask *fl_create_new_mask(struct cls_fl_head *head,
 					       struct fl_flow_mask *mask)
 {
@@ -1980,6 +2021,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	if (err)
 		goto errout;
 
+	fl_init_unmasked_key_dissector(&fnew->unmasked_key_dissector);
+
 	err = fl_ht_insert_unique(fnew, fold, &in_ht);
 	if (err)
 		goto errout_mask;
-- 
2.17.1



* [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exists
  2020-06-19  9:41 [PATCH net-next 0/3] cls_flower: Offload unmasked key dsatish
  2020-06-19  9:41 ` [PATCH net-next 1/3] cls_flower: Set addr_type when ip mask is non-zero dsatish
  2020-06-19  9:41 ` [PATCH net-next 2/3] cls_flower: Pass the unmasked key to hw dsatish
@ 2020-06-19  9:41 ` dsatish
  2020-06-19 13:35   ` Ido Schimmel
                     ` (2 more replies)
  2020-06-19 20:12 ` [PATCH net-next 0/3] cls_flower: Offload unmasked key David Miller
  3 siblings, 3 replies; 11+ messages in thread
From: dsatish @ 2020-06-19  9:41 UTC (permalink / raw)
  To: davem
  Cc: jhs, xiyou.wangcong, jiri, kuba, netdev, simon.horman, kesavac,
	satish.d, prathibha.nagooru, intiyaz.basha, jai.rana

A packet reaches OVS user space only if either there is no rule in
the datapath/hardware, or there is a race condition where the flow
has been added to hardware but another packet arrives before it is
processed.

As part of its limitations or optimizations, the hardware may remove
certain flows on its own. To handle such cases where the hardware has
lost a flow, TC can re-offload a flow for which an entry already
exists in its flow table: when TC returns the EEXIST error, it also
programs the flow in hardware, provided hardware offload is enabled.
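The control flow this patch adds to fl_change() can be sketched in userspace like so (hypothetical `demo_*` names; a toy one-slot table stands in for the rhashtable and a counter stands in for the driver callback): on -EEXIST with hardware offload enabled, the flow is re-programmed against the existing entry's cookie instead of being dropped on the floor.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for struct cls_fl_filter */
struct demo_filter { int id; };

static int hw_replace_calls;	/* counts simulated hardware programs */

/* Stand-in for fl_hw_replace_filter(): pretend to program the hardware */
static int demo_hw_replace(struct demo_filter *f, unsigned long cookie)
{
	(void)f; (void)cookie;
	hw_replace_calls++;
	return 0;
}

/* Stand-in for fl_ht_insert_unique(): -EEXIST on a duplicate entry */
static int demo_ht_insert(struct demo_filter *f, struct demo_filter **table)
{
	if (*table)
		return -EEXIST;
	*table = f;
	return 0;
}

/* Mirror of the patched fl_change() flow */
static int demo_change(struct demo_filter *fnew, struct demo_filter **table,
		       bool skip_hw)
{
	int err = demo_ht_insert(fnew, table);

	if (err) {
		/* The hardware may have lost the flow even though TC has
		 * it: re-offload against the existing entry's cookie.
		 */
		if (err == -EEXIST && !skip_hw && *table)
			demo_hw_replace(fnew, (unsigned long)*table);
		return err;
	}
	if (!skip_hw)
		err = demo_hw_replace(fnew, (unsigned long)fnew);
	return err;
}
```

Note that, as in the patch, the -EEXIST path still returns the error to the caller; the hardware re-program is a side effect, not a success path.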

Signed-off-by: Chandra Kesava <kesavac@gmail.com>
Signed-off-by: Prathibha Nagooru <prathibha.nagooru@oneconvergence.com>
Signed-off-by: Satish Dhote <satish.d@oneconvergence.com>
---
 net/sched/cls_flower.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index f1a5352cbb04..d718233cd5b9 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -431,7 +431,8 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f,
 
 static int fl_hw_replace_filter(struct tcf_proto *tp,
 				struct cls_fl_filter *f, bool rtnl_held,
-				struct netlink_ext_ack *extack)
+				struct netlink_ext_ack *extack,
+				unsigned long cookie)
 {
 	struct tcf_block *block = tp->chain->block;
 	struct flow_cls_offload cls_flower = {};
@@ -444,7 +445,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 
 	tc_cls_common_offload_init(&cls_flower.common, tp, f->flags, extack);
 	cls_flower.command = FLOW_CLS_REPLACE;
-	cls_flower.cookie = (unsigned long) f;
+	cls_flower.cookie = cookie;
 	cls_flower.rule->match.dissector = &f->mask->dissector;
 	cls_flower.rule->match.mask = &f->mask->key;
 	cls_flower.rule->match.key = &f->mkey;
@@ -2024,11 +2025,25 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	fl_init_unmasked_key_dissector(&fnew->unmasked_key_dissector);
 
 	err = fl_ht_insert_unique(fnew, fold, &in_ht);
-	if (err)
+	if (err) {
+		/* The hardware may have lost the flow even though TC has it;
+		 * a flow miss in hardware makes the controller offload it again.
+		 */
+		if (err == -EEXIST && !tc_skip_hw(fnew->flags)) {
+			struct cls_fl_filter *f =
+				__fl_lookup(fnew->mask, &fnew->mkey);
+
+			if (f)
+				fl_hw_replace_filter(tp, fnew, rtnl_held,
+						     extack,
+						     (unsigned long)(f));
+		}
 		goto errout_mask;
+	}
 
 	if (!tc_skip_hw(fnew->flags)) {
-		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack);
+		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack,
+					   (unsigned long)fnew);
 		if (err)
 			goto errout_ht;
 	}
-- 
2.17.1



* Re: [PATCH net-next 2/3] cls_flower: Pass the unmasked key to hw
  2020-06-19  9:41 ` [PATCH net-next 2/3] cls_flower: Pass the unmasked key to hw dsatish
@ 2020-06-19 13:14   ` Ido Schimmel
  2020-06-19 14:00   ` Jiri Pirko
  1 sibling, 0 replies; 11+ messages in thread
From: Ido Schimmel @ 2020-06-19 13:14 UTC (permalink / raw)
  To: dsatish
  Cc: davem, jhs, xiyou.wangcong, jiri, kuba, netdev, simon.horman,
	kesavac, prathibha.nagooru, intiyaz.basha, jai.rana

On Fri, Jun 19, 2020 at 03:11:55PM +0530, dsatish wrote:
> Pass the unmasked key along with the masked key to the hardware.
> This enables the hardware to manage its own tables better based on
> its features and capabilities.

How? Which hardware? You did not patch a single driver...

Are you familiar with chain templates? I think it might help:
https://github.com/Mellanox/mlxsw/wiki/ACLs#chain-templates

> 
> Signed-off-by: Chandra Kesava <kesavac@gmail.com>
> Signed-off-by: Prathibha Nagooru <prathibha.nagooru@oneconvergence.com>
> Signed-off-by: Satish Dhote <satish.d@oneconvergence.com>
> ---
>  include/net/flow_offload.h |  45 ++++++++++
>  net/core/flow_offload.c    | 171 +++++++++++++++++++++++++++++++++++++
>  net/sched/cls_flower.c     |  43 ++++++++++
>  3 files changed, 259 insertions(+)
> 
> diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
> index f2c8311a0433..26c6bd6bdb98 100644
> --- a/include/net/flow_offload.h
> +++ b/include/net/flow_offload.h
> @@ -11,6 +11,8 @@ struct flow_match {
>  	struct flow_dissector	*dissector;
>  	void			*mask;
>  	void			*key;
> +	void			*unmasked_key;
> +	struct flow_dissector	*unmasked_key_dissector;
>  };
>  
>  struct flow_match_meta {
> @@ -118,6 +120,49 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
>  void flow_rule_match_ct(const struct flow_rule *rule,
>  			struct flow_match_ct *out);
>  
> +void flow_rule_match_unmasked_key_meta(const struct flow_rule *rule,
> +				       struct flow_match_meta *out);
> +void flow_rule_match_unmasked_key_basic(const struct flow_rule *rule,
> +					struct flow_match_basic *out);
> +void flow_rule_match_unmasked_key_control(const struct flow_rule *rule,
> +					  struct flow_match_control *out);
> +void flow_rule_match_unmasked_key_eth_addrs(const struct flow_rule *rule,
> +					    struct flow_match_eth_addrs *out);
> +void flow_rule_match_unmasked_key_vlan(const struct flow_rule *rule,
> +				       struct flow_match_vlan *out);
> +void flow_rule_match_unmasked_key_cvlan(const struct flow_rule *rule,
> +					struct flow_match_vlan *out);
> +void flow_rule_match_unmasked_key_ipv4_addrs(const struct flow_rule *rule,
> +					     struct flow_match_ipv4_addrs *out);
> +void flow_rule_match_unmasked_key_ipv6_addrs(const struct flow_rule *rule,
> +					     struct flow_match_ipv6_addrs *out);
> +void flow_rule_match_unmasked_key_ip(const struct flow_rule *rule,
> +				     struct flow_match_ip *out);
> +void flow_rule_match_unmasked_key_ports(const struct flow_rule *rule,
> +					struct flow_match_ports *out);
> +void flow_rule_match_unmasked_key_tcp(const struct flow_rule *rule,
> +				      struct flow_match_tcp *out);
> +void flow_rule_match_unmasked_key_icmp(const struct flow_rule *rule,
> +				       struct flow_match_icmp *out);
> +void flow_rule_match_unmasked_key_mpls(const struct flow_rule *rule,
> +				       struct flow_match_mpls *out);
> +void flow_rule_match_unmasked_key_enc_control(const struct flow_rule *rule,
> +					      struct flow_match_control *out);
> +void flow_rule_match_unmasked_key_enc_ipv4_addrs(const struct flow_rule *rule,
> +					    struct flow_match_ipv4_addrs *out);
> +void flow_rule_match_unmasked_key_enc_ipv6_addrs(const struct flow_rule *rule,
> +					    struct flow_match_ipv6_addrs *out);
> +void flow_rule_match_unmasked_key_enc_ip(const struct flow_rule *rule,
> +					 struct flow_match_ip *out);
> +void flow_rule_match_unmasked_key_enc_ports(const struct flow_rule *rule,
> +					    struct flow_match_ports *out);
> +void flow_rule_match_unmasked_key_enc_keyid(const struct flow_rule *rule,
> +					    struct flow_match_enc_keyid *out);
> +void flow_rule_match_unmasked_key_enc_opts(const struct flow_rule *rule,
> +					   struct flow_match_enc_opts *out);
> +void flow_rule_match_unmasked_key_ct(const struct flow_rule *rule,
> +				     struct flow_match_ct *out);
> +
>  enum flow_action_id {
>  	FLOW_ACTION_ACCEPT		= 0,
>  	FLOW_ACTION_DROP,
> diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
> index 0cfc35e6be28..a98c31e864b1 100644
> --- a/net/core/flow_offload.c
> +++ b/net/core/flow_offload.c
> @@ -173,6 +173,177 @@ void flow_rule_match_enc_opts(const struct flow_rule *rule,
>  }
>  EXPORT_SYMBOL(flow_rule_match_enc_opts);
>  
> +#define FLOW_UNMASKED_KEY_DISSECTOR_MATCH(__rule, __type, __out)	\
> +	const struct flow_match *__m = &(__rule)->match;		\
> +	struct flow_dissector *__d = (__m)->unmasked_key_dissector;	\
> +									\
> +	(__out)->key = skb_flow_dissector_target(__d, __type,		\
> +						 (__m)->unmasked_key);	\
> +	(__out)->mask = skb_flow_dissector_target(__d, __type,		\
> +						  (__m)->mask)		\
> +
> +void flow_rule_match_unmasked_key_meta(const struct flow_rule *rule,
> +				       struct flow_match_meta *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_META, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_meta);
> +
> +void
> +flow_rule_match_unmasked_key_basic(const struct flow_rule *rule,
> +				   struct flow_match_basic *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_BASIC, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_basic);
> +
> +void flow_rule_match_unmasked_key_control(const struct flow_rule *rule,
> +					  struct flow_match_control *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CONTROL,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_control);
> +
> +void flow_rule_match_unmasked_key_eth_addrs(const struct flow_rule *rule,
> +					    struct flow_match_eth_addrs *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_eth_addrs);
> +
> +void flow_rule_match_unmasked_key_vlan(const struct flow_rule *rule,
> +				       struct flow_match_vlan *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_VLAN, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_vlan);
> +
> +void flow_rule_match_unmasked_key_cvlan(const struct flow_rule *rule,
> +					struct flow_match_vlan *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CVLAN, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_cvlan);
> +
> +void flow_rule_match_unmasked_key_ipv4_addrs(const struct flow_rule *rule,
> +					     struct flow_match_ipv4_addrs *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IPV4_ADDRS,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_ipv4_addrs);
> +
> +void flow_rule_match_unmasked_key_ipv6_addrs(const struct flow_rule *rule,
> +					     struct flow_match_ipv6_addrs *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IPV6_ADDRS,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_ipv6_addrs);
> +
> +void flow_rule_match_unmasked_key_ip(const struct flow_rule *rule,
> +				     struct flow_match_ip *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_IP, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_ip);
> +
> +void flow_rule_match_unmasked_key_ports(const struct flow_rule *rule,
> +					struct flow_match_ports *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_PORTS, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_ports);
> +
> +void flow_rule_match_unmasked_key_tcp(const struct flow_rule *rule,
> +				      struct flow_match_tcp *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_TCP, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_tcp);
> +
> +void flow_rule_match_unmasked_key_icmp(const struct flow_rule *rule,
> +				       struct flow_match_icmp *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ICMP, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_icmp);
> +
> +void flow_rule_match_unmasked_key_mpls(const struct flow_rule *rule,
> +				       struct flow_match_mpls *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_MPLS, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_mpls);
> +
> +void flow_rule_match_unmasked_key_enc_control(const struct flow_rule *rule,
> +					      struct flow_match_control *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_CONTROL,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_control);
> +
> +void
> +flow_rule_match_unmasked_key_enc_ipv4_addrs(const struct flow_rule *rule,
> +					    struct flow_match_ipv4_addrs *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule,
> +					  FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_ipv4_addrs);
> +
> +void
> +flow_rule_match_unmasked_key_enc_ipv6_addrs(const struct flow_rule *rule,
> +					    struct flow_match_ipv6_addrs *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule,
> +					  FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_ipv6_addrs);
> +
> +void flow_rule_match_unmasked_key_enc_ip(const struct flow_rule *rule,
> +					 struct flow_match_ip *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_IP, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_ip);
> +
> +void flow_rule_match_unmasked_key_enc_ports(const struct flow_rule *rule,
> +					    struct flow_match_ports *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_PORTS,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_ports);
> +
> +void flow_rule_match_unmasked_key_enc_keyid(const struct flow_rule *rule,
> +					    struct flow_match_enc_keyid *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_KEYID,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_keyid);
> +
> +void flow_rule_match_unmasked_key_enc_opts(const struct flow_rule *rule,
> +					   struct flow_match_enc_opts *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_ENC_OPTS,
> +					  out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_enc_opts);
> +
> +void flow_rule_match_unmasked_key_ct(const struct flow_rule *rule,
> +				     struct flow_match_ct *out)
> +{
> +	FLOW_UNMASKED_KEY_DISSECTOR_MATCH(rule, FLOW_DISSECTOR_KEY_CT, out);
> +}
> +EXPORT_SYMBOL(flow_rule_match_unmasked_key_ct);
> +
>  struct flow_action_cookie *flow_action_cookie_create(void *data,
>  						     unsigned int len,
>  						     gfp_t gfp)
> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
> index 64b70d396397..f1a5352cbb04 100644
> --- a/net/sched/cls_flower.c
> +++ b/net/sched/cls_flower.c
> @@ -121,6 +121,7 @@ struct cls_fl_filter {
>  	 */
>  	refcount_t refcnt;
>  	bool deleted;
> +	struct flow_dissector unmasked_key_dissector;
>  };
>  
>  static const struct rhashtable_params mask_ht_params = {
> @@ -449,6 +450,13 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
>  	cls_flower.rule->match.key = &f->mkey;
>  	cls_flower.classid = f->res.classid;
>  
> +	/* Pass unmasked key and corresponding dissector also to the driver,
> +	 * hardware may optimize its flow table based on its capabilities.
> +	 */
> +	cls_flower.rule->match.unmasked_key = &f->key;
> +	cls_flower.rule->match.unmasked_key_dissector =
> +						&f->unmasked_key_dissector;
> +
>  	err = tc_setup_flow_action(&cls_flower.rule->action, &f->exts);
>  	if (err) {
>  		kfree(cls_flower.rule);
> @@ -1753,6 +1761,39 @@ static void fl_init_dissector(struct flow_dissector *dissector,
>  	skb_flow_dissector_init(dissector, keys, cnt);
>  }
>  
> +/* Initialize dissector for unmasked key. */
> +static void fl_init_unmasked_key_dissector(struct flow_dissector *dissector)
> +{
> +	struct flow_dissector_key keys[FLOW_DISSECTOR_KEY_MAX];
> +	size_t cnt = 0;
> +
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_META, meta);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_CONTROL, control);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_BASIC, basic);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ETH_ADDRS, eth);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_PORTS, tp);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_PORTS_RANGE, tp_range);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_IP, ip);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_TCP, tcp);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ICMP, icmp);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ARP, arp);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_MPLS, mpls);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_VLAN, vlan);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_CVLAN, cvlan);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_KEYID, enc_key_id);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS, enc_ipv4);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS, enc_ipv6);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_CONTROL, enc_control);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_PORTS, enc_tp);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_IP, enc_ip);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_OPTS, enc_opts);
> +	FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_CT, ct);
> +
> +	skb_flow_dissector_init(dissector, keys, cnt);
> +}
> +
>  static struct fl_flow_mask *fl_create_new_mask(struct cls_fl_head *head,
>  					       struct fl_flow_mask *mask)
>  {
> @@ -1980,6 +2021,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
>  	if (err)
>  		goto errout;
>  
> +	fl_init_unmasked_key_dissector(&fnew->unmasked_key_dissector);
> +
>  	err = fl_ht_insert_unique(fnew, fold, &in_ht);
>  	if (err)
>  		goto errout_mask;
> -- 
> 2.17.1
> 

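[Editorial note, not part of the original thread: the masked/unmasked key split this patch exposes to drivers can be illustrated in plain userspace C. The struct and helper below are invented stand-ins for the real `struct fl_flow_key` and `fl_set_masked_key()`, which build `f->mkey` as a byte-wise AND of `f->key` and the mask — a sketch under those assumptions, not the kernel code.]

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the flower key tuple; the real
 * struct fl_flow_key covers many more dissector fields. */
struct demo_key {
	uint32_t src_ip;
	uint32_t dst_ip;
	uint32_t dst_port;
};

/* Byte-wise key & mask, the way the masked key (mkey) is derived
 * from the unmasked key and the mask in cls_flower. */
static void demo_set_masked_key(struct demo_key *mkey,
				const struct demo_key *key,
				const struct demo_key *mask)
{
	const uint8_t *k = (const uint8_t *)key;
	const uint8_t *m = (const uint8_t *)mask;
	uint8_t *out = (uint8_t *)mkey;

	for (size_t i = 0; i < sizeof(*key); i++)
		out[i] = k[i] & m[i];
}
```

Two flows that differ only in a fully-masked-out field collapse to the same masked key, which is why a driver that only ever sees the masked key cannot tell them apart; passing the unmasked key as well (as this patch proposes) preserves that information for hardware-side table optimizations.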
^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exist.
  2020-06-19  9:41 ` [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exist dsatish
@ 2020-06-19 13:35   ` Ido Schimmel
  2020-06-19 14:01   ` Jiri Pirko
  2020-06-19 16:15   ` Vlad Buslov
  2 siblings, 0 replies; 11+ messages in thread
From: Ido Schimmel @ 2020-06-19 13:35 UTC (permalink / raw)
  To: dsatish
  Cc: davem, jhs, xiyou.wangcong, jiri, kuba, netdev, simon.horman,
	kesavac, prathibha.nagooru, intiyaz.basha, jai.rana

On Fri, Jun 19, 2020 at 03:11:56PM +0530, dsatish wrote:
> A packet reaches OVS user space only if either there is no rule in
> the datapath/hardware, or there is a race condition where the flow has
> been added to hardware but another packet arrives before it is processed.
> 
> It is possible that hardware, as part of its limitations/optimizations,
> removes certain flows.

Which driver is doing this? I don't believe it's valid to remove filters
from hardware and not tell anyone about it.

> To handle such cases where the hardware
> has lost a flow, TC can re-offload a flow for which an entry already
> exists in its flow table: when TC returns an EEXIST error, it also
> programs the flow into hardware, provided hardware offload is enabled.
> 
> Signed-off-by: Chandra Kesava <kesavac@gmail.com>
> Signed-off-by: Prathibha Nagooru <prathibha.nagooru@oneconvergence.com>
> Signed-off-by: Satish Dhote <satish.d@oneconvergence.com>
> ---
>  net/sched/cls_flower.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
> index f1a5352cbb04..d718233cd5b9 100644
> --- a/net/sched/cls_flower.c
> +++ b/net/sched/cls_flower.c
> @@ -431,7 +431,8 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f,
>  
>  static int fl_hw_replace_filter(struct tcf_proto *tp,
>  				struct cls_fl_filter *f, bool rtnl_held,
> -				struct netlink_ext_ack *extack)
> +				struct netlink_ext_ack *extack,
> +				unsigned long cookie)
>  {
>  	struct tcf_block *block = tp->chain->block;
>  	struct flow_cls_offload cls_flower = {};
> @@ -444,7 +445,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
>  
>  	tc_cls_common_offload_init(&cls_flower.common, tp, f->flags, extack);
>  	cls_flower.command = FLOW_CLS_REPLACE;
> -	cls_flower.cookie = (unsigned long) f;
> +	cls_flower.cookie = cookie;
>  	cls_flower.rule->match.dissector = &f->mask->dissector;
>  	cls_flower.rule->match.mask = &f->mask->key;
>  	cls_flower.rule->match.key = &f->mkey;
> @@ -2024,11 +2025,25 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
>  	fl_init_unmasked_key_dissector(&fnew->unmasked_key_dissector);
>  
>  	err = fl_ht_insert_unique(fnew, fold, &in_ht);
> -	if (err)
> +	if (err) {
> +		/* It is possible Hardware lost the flow even though TC has it,
> +		 * and flow miss in hardware causes controller to offload flow again.
> +		 */
> +		if (err == -EEXIST && !tc_skip_hw(fnew->flags)) {
> +			struct cls_fl_filter *f =
> +				__fl_lookup(fnew->mask, &fnew->mkey);
> +
> +			if (f)
> +				fl_hw_replace_filter(tp, fnew, rtnl_held,
> +						     extack,
> +						     (unsigned long)(f));
> +		}
>  		goto errout_mask;
> +	}
>  
>  	if (!tc_skip_hw(fnew->flags)) {
> -		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack);
> +		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack,
> +					   (unsigned long)fnew);
>  		if (err)
>  			goto errout_ht;
>  	}
> -- 
> 2.17.1
> 

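[Editorial note, not part of the original thread: the EEXIST path the patch adds can be modeled as two in-memory tables — an authoritative software table and a hardware table that may silently evict entries, the situation the commit message describes. Names and structure below are invented for illustration; this is a toy model, not the cls_flower code.]

```c
#include <stdbool.h>
#include <errno.h>

#define MAX_FLOWS 8

/* Toy model: the software table is authoritative; the hardware table
 * may lose entries behind the software's back. Flow identity is just
 * an int here. */
struct demo_tables {
	int sw[MAX_FLOWS]; int n_sw;
	int hw[MAX_FLOWS]; int n_hw;
};

static bool demo_has(const int *t, int n, int flow)
{
	for (int i = 0; i < n; i++)
		if (t[i] == flow)
			return true;
	return false;
}

/* Mirrors the patch's fl_change() change: even when the software
 * insert fails with -EEXIST, re-program hardware (unless skip_hw),
 * so a flow the hardware lost gets reinstalled. */
static int demo_add_flow(struct demo_tables *t, int flow, bool skip_hw)
{
	int err = 0;

	if (demo_has(t->sw, t->n_sw, flow))
		err = -EEXIST;
	else
		t->sw[t->n_sw++] = flow;

	if (!skip_hw && !demo_has(t->hw, t->n_hw, flow))
		t->hw[t->n_hw++] = flow;	/* (re-)offload */
	return err;
}
```

Ido's objection is visible in this model too: the only way software ever learns the hardware table diverged is by being asked to insert a duplicate, which is why he argues hardware should not drop filters without notifying anyone.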
^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next 2/3] cls_flower: Pass the unmasked key to hw
  2020-06-19  9:41 ` [PATCH net-next 2/3] cls_flower: Pass the unmasked key to hw dsatish
  2020-06-19 13:14   ` Ido Schimmel
@ 2020-06-19 14:00   ` Jiri Pirko
  1 sibling, 0 replies; 11+ messages in thread
From: Jiri Pirko @ 2020-06-19 14:00 UTC (permalink / raw)
  To: dsatish
  Cc: davem, jhs, xiyou.wangcong, kuba, netdev, simon.horman, kesavac,
	prathibha.nagooru, intiyaz.basha, jai.rana

Fri, Jun 19, 2020 at 11:41:55AM CEST, satish.d@oneconvergence.com wrote:
>Pass the unmasked key along with the masked key to the hardware.
>This enables hardware to manage its own tables better based on the
>hardware features/capabilities.

I don't understand the motivation. Could you provide some examples?
Also, as Ido pointed out already, you need to submit the driver changes
in this patchset too.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exist.
  2020-06-19  9:41 ` [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exist dsatish
  2020-06-19 13:35   ` Ido Schimmel
@ 2020-06-19 14:01   ` Jiri Pirko
  2020-06-19 16:15   ` Vlad Buslov
  2 siblings, 0 replies; 11+ messages in thread
From: Jiri Pirko @ 2020-06-19 14:01 UTC (permalink / raw)
  To: dsatish
  Cc: davem, jhs, xiyou.wangcong, kuba, netdev, simon.horman, kesavac,
	prathibha.nagooru, intiyaz.basha, jai.rana

Fri, Jun 19, 2020 at 11:41:56AM CEST, satish.d@oneconvergence.com wrote:
>A packet reaches OVS user space only if either there is no rule in
>the datapath/hardware, or there is a race condition where the flow has
>been added to hardware but another packet arrives before it is processed.
>
>It is possible that hardware, as part of its limitations/optimizations,
>removes certain flows. To handle such cases where the hardware has lost
>a flow, TC can re-offload a flow for which an entry already exists in
>its flow table: when TC returns an EEXIST error, it also programs the
>flow into hardware, provided hardware offload is enabled.
>
>Signed-off-by: Chandra Kesava <kesavac@gmail.com>
>Signed-off-by: Prathibha Nagooru <prathibha.nagooru@oneconvergence.com>
>Signed-off-by: Satish Dhote <satish.d@oneconvergence.com>
>---
> net/sched/cls_flower.c | 23 +++++++++++++++++++----
> 1 file changed, 19 insertions(+), 4 deletions(-)
>
>diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
>index f1a5352cbb04..d718233cd5b9 100644
>--- a/net/sched/cls_flower.c
>+++ b/net/sched/cls_flower.c
>@@ -431,7 +431,8 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f,
> 
> static int fl_hw_replace_filter(struct tcf_proto *tp,
> 				struct cls_fl_filter *f, bool rtnl_held,
>-				struct netlink_ext_ack *extack)
>+				struct netlink_ext_ack *extack,
>+				unsigned long cookie)
> {
> 	struct tcf_block *block = tp->chain->block;
> 	struct flow_cls_offload cls_flower = {};
>@@ -444,7 +445,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
> 
> 	tc_cls_common_offload_init(&cls_flower.common, tp, f->flags, extack);
> 	cls_flower.command = FLOW_CLS_REPLACE;
>-	cls_flower.cookie = (unsigned long) f;
>+	cls_flower.cookie = cookie;
> 	cls_flower.rule->match.dissector = &f->mask->dissector;
> 	cls_flower.rule->match.mask = &f->mask->key;
> 	cls_flower.rule->match.key = &f->mkey;
>@@ -2024,11 +2025,25 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
> 	fl_init_unmasked_key_dissector(&fnew->unmasked_key_dissector);
> 
> 	err = fl_ht_insert_unique(fnew, fold, &in_ht);
>-	if (err)
>+	if (err) {
>+		/* It is possible Hardware lost the flow even though TC has it,
>+		 * and flow miss in hardware causes controller to offload flow again.

What? Seems that you have a buggy HW what should be fixed...

What is "controller"?


>+		 */
>+		if (err == -EEXIST && !tc_skip_hw(fnew->flags)) {
>+			struct cls_fl_filter *f =
>+				__fl_lookup(fnew->mask, &fnew->mkey);
>+
>+			if (f)
>+				fl_hw_replace_filter(tp, fnew, rtnl_held,
>+						     extack,
>+						     (unsigned long)(f));
>+		}
> 		goto errout_mask;
>+	}
> 
> 	if (!tc_skip_hw(fnew->flags)) {
>-		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack);
>+		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack,
>+					   (unsigned long)fnew);
> 		if (err)
> 			goto errout_ht;
> 	}
>-- 
>2.17.1
>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exist.
  2020-06-19  9:41 ` [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exist dsatish
  2020-06-19 13:35   ` Ido Schimmel
  2020-06-19 14:01   ` Jiri Pirko
@ 2020-06-19 16:15   ` Vlad Buslov
  2 siblings, 0 replies; 11+ messages in thread
From: Vlad Buslov @ 2020-06-19 16:15 UTC (permalink / raw)
  To: dsatish
  Cc: davem, jhs, xiyou.wangcong, jiri, kuba, netdev, simon.horman,
	kesavac, prathibha.nagooru, intiyaz.basha, jai.rana


On Fri 19 Jun 2020 at 12:41, dsatish <satish.d@oneconvergence.com> wrote:
> A packet reaches OVS user space only if either there is no rule in
> the datapath/hardware, or there is a race condition where the flow has
> been added to hardware but another packet arrives before it is processed.
>
> It is possible that hardware, as part of its limitations/optimizations,
> removes certain flows. To handle such cases where the hardware has lost
> a flow, TC can re-offload a flow for which an entry already exists in
> its flow table: when TC returns an EEXIST error, it also programs the
> flow into hardware, provided hardware offload is enabled.
>
> Signed-off-by: Chandra Kesava <kesavac@gmail.com>
> Signed-off-by: Prathibha Nagooru <prathibha.nagooru@oneconvergence.com>
> Signed-off-by: Satish Dhote <satish.d@oneconvergence.com>
> ---
>  net/sched/cls_flower.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
> index f1a5352cbb04..d718233cd5b9 100644
> --- a/net/sched/cls_flower.c
> +++ b/net/sched/cls_flower.c
> @@ -431,7 +431,8 @@ static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f,
>
>  static int fl_hw_replace_filter(struct tcf_proto *tp,
>  				struct cls_fl_filter *f, bool rtnl_held,
> -				struct netlink_ext_ack *extack)
> +				struct netlink_ext_ack *extack,
> +				unsigned long cookie)
>  {
>  	struct tcf_block *block = tp->chain->block;
>  	struct flow_cls_offload cls_flower = {};
> @@ -444,7 +445,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
>
>  	tc_cls_common_offload_init(&cls_flower.common, tp, f->flags, extack);
>  	cls_flower.command = FLOW_CLS_REPLACE;
> -	cls_flower.cookie = (unsigned long) f;
> +	cls_flower.cookie = cookie;
>  	cls_flower.rule->match.dissector = &f->mask->dissector;
>  	cls_flower.rule->match.mask = &f->mask->key;
>  	cls_flower.rule->match.key = &f->mkey;
> @@ -2024,11 +2025,25 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
>  	fl_init_unmasked_key_dissector(&fnew->unmasked_key_dissector);
>
>  	err = fl_ht_insert_unique(fnew, fold, &in_ht);
> -	if (err)
> +	if (err) {
> +		/* It is possible Hardware lost the flow even though TC has it,
> +		 * and flow miss in hardware causes controller to offload flow again.
> +		 */
> +		if (err == -EEXIST && !tc_skip_hw(fnew->flags)) {
> +			struct cls_fl_filter *f =
> +				__fl_lookup(fnew->mask, &fnew->mkey);

You hold neither an RCU read lock nor a reference to the "f" filter
here, which means it can be concurrently destroyed at any time.

> +
> +			if (f)
> +				fl_hw_replace_filter(tp, fnew, rtnl_held,
> +						     extack,
> +						     (unsigned long)(f));
> +		}

It looks like you are inventing filter replace/overwrite functionality
here. However, such functionality is already supported: fl_change()
receives "fold" via the "arg" argument if a filter with the specified
key exists in the classifier instance and the NLM_F_EXCL netlink
message flag is not set.

>  		goto errout_mask;
> +	}
>
>  	if (!tc_skip_hw(fnew->flags)) {
> -		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack);
> +		err = fl_hw_replace_filter(tp, fnew, rtnl_held, extack,
> +					   (unsigned long)fnew);
>  		if (err)
>  			goto errout_ht;
>  	}

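[Editorial note, not part of the original thread: the lifetime problem Vlad points out is conventionally solved by pairing an RCU read-side section with a "take a reference only if the object is still live" operation — cls_flower's own lookup helpers use the kernel's refcount_inc_not_zero() this way. Below is a userspace C11-atomics sketch of just that primitive; the name is invented and this is not the kernel implementation.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace sketch of refcount_inc_not_zero() semantics: take a
 * reference only if the count has not already dropped to zero, i.e.
 * only if the object is not mid-teardown. The CAS loop retries if
 * another thread changed the count between the load and the exchange. */
static bool demo_ref_get_unless_zero(atomic_int *refs)
{
	int old = atomic_load(refs);

	do {
		if (old == 0)
			return false;	/* object already being destroyed */
	} while (!atomic_compare_exchange_weak(refs, &old, old + 1));
	return true;
}
```

Under RCU, the memory stays valid for the duration of the read-side section even if the object is logically dead, so a failed conditional get is safe: the looker-up simply treats the filter as gone, instead of handing a pointer to a dying object to fl_hw_replace_filter() as the patch does.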
^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next 0/3] cls_flower: Offload unmasked key
  2020-06-19  9:41 [PATCH net-next 0/3] cls_flower: Offload unmasked key dsatish
                   ` (2 preceding siblings ...)
  2020-06-19  9:41 ` [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exist dsatish
@ 2020-06-19 20:12 ` David Miller
  2020-06-30  3:09   ` Satish
  3 siblings, 1 reply; 11+ messages in thread
From: David Miller @ 2020-06-19 20:12 UTC (permalink / raw)
  To: satish.d
  Cc: jhs, xiyou.wangcong, jiri, kuba, netdev, simon.horman, kesavac,
	prathibha.nagooru, intiyaz.basha, jai.rana


You are giving no context on what hardware and with what driver
your changes make a difference for.

This kind of context and information is required in order for
anyone to understand your changes.

We're not accepting these changes until you explain all of this.

Thank you.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next 0/3] cls_flower: Offload unmasked key
  2020-06-19 20:12 ` [PATCH net-next 0/3] cls_flower: Offload unmasked key David Miller
@ 2020-06-30  3:09   ` Satish
  0 siblings, 0 replies; 11+ messages in thread
From: Satish @ 2020-06-30  3:09 UTC (permalink / raw)
  To: David Miller
  Cc: Jamal Hadi Salim, xiyou.wangcong, Jiří Pírko, kuba,
	netdev, Simon Horman, kesavac, Prathibha Nagooru, Intiyaz Basha,
	Jai Rana

Hi David,

Thank you so much for providing your review comments. We decided to
withdraw this patch for the time being. We'll resubmit it, addressing
all your comments, along with the driver code in the future.


On Sat, Jun 20, 2020 at 1:42 AM David Miller <davem@davemloft.net> wrote:
>
>
> You are giving no context on what hardware and with what driver
> your changes make a difference for.
>
> This kind of context and information is required in order for
> anyone to understand your changes.
>
> We're not accepting these changes until you explain all of this.
>
> Thank you.

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2020-06-30  3:10 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-19  9:41 [PATCH net-next 0/3] cls_flower: Offload unmasked key dsatish
2020-06-19  9:41 ` [PATCH net-next 1/3] cls_flower: Set addr_type when ip mask is non-zero dsatish
2020-06-19  9:41 ` [PATCH net-next 2/3] cls_flower: Pass the unmasked key to hw dsatish
2020-06-19 13:14   ` Ido Schimmel
2020-06-19 14:00   ` Jiri Pirko
2020-06-19  9:41 ` [PATCH net-next 3/3] cls_flower: Allow flow offloading though masked key exist dsatish
2020-06-19 13:35   ` Ido Schimmel
2020-06-19 14:01   ` Jiri Pirko
2020-06-19 16:15   ` Vlad Buslov
2020-06-19 20:12 ` [PATCH net-next 0/3] cls_flower: Offload unmasked key David Miller
2020-06-30  3:09   ` Satish
