* [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
@ 2017-09-25 10:23 Simon Horman
2017-09-25 10:23 ` [PATCH net-next 1/7] nfp: add helper to get flower cmsg length Simon Horman
0 siblings, 10 replies; 29+ messages in thread
From: Simon Horman @ 2017-09-25 10:23 UTC (permalink / raw)
To: David Miller, Jakub Kicinski; +Cc: netdev, oss-drivers, Simon Horman
From: Simon Horman <simon.horman@netronome.com>
John says:
This patch set allows offloading of TC flower match and set tunnel fields
to the NFP. The initial focus is on VXLAN traffic. Due to the current
state of the NFP firmware, only VXLAN traffic on the well-known port 4789 is
handled. The match and action fields must explicitly set this value to be
supported. Tunnel end point information is also offloaded to the NFP for
both encapsulation and decapsulation. The NFP expects 3 separate data sets
to be supplied.
For decapsulation, 2 separate lists exist; a list of MAC addresses
referenced by an index comprised of the port number, and a list of IP
addresses. These IP addresses are not connected to a MAC or port. The MAC
addresses can be written as a block or one at a time (because they have an
index, previous values can be overwritten) while the IP addresses are
always written as a list of all the available IPs. Because the MAC address
used as a tunnel end point may be associated with a physical port or with a
virtual netdev like an OVS bridge, we do not know in advance which addresses
should be offloaded. For this reason, all MAC addresses of active netdevs
are offloaded to the NFP. A notifier checks for changes to any currently
offloaded MACs or any new netdevs that may occur. For IP addresses, the
tunnel end point used in a rule is known, because the destination IP address
must be specified in the flower classifier rule. When a new IP address
appears in a rule, the IP address is offloaded. The IP is removed from the
offloaded list when all rules matching on that IP are deleted.
For encapsulation, a next hop table is updated on the NFP that contains
the source/dest IPs, MACs and egress port. These are written individually
when requested. If the NFP tries to encapsulate a packet but does not know
the next hop, then it sends a request to the host. The host carries out a
route lookup and populates the given entry on the NFP table. A notifier
also exists to check for any links changing or going down in the kernel
next hop table. If an offloaded next hop entry is removed from the kernel
then it is also removed on the NFP.
The NFP periodically sends a message to the host telling it which tunnel
ports have packets egressing the system. The host uses this information to
update the used value in the neighbour entry. This means that, rather than
letting the entry expire when it times out, the kernel will send an ARP
request to check whether the link is still live. From an NFP perspective,
this means that valid entries will not be removed from its next hop table.
John Hurley (7):
nfp: add helper to get flower cmsg length
nfp: compile flower vxlan tunnel metadata match fields
nfp: compile flower vxlan tunnel set actions
nfp: offload flower vxlan endpoint MAC addresses
nfp: offload vxlan IPv4 endpoints of flower rules
nfp: flower vxlan neighbour offload
nfp: flower vxlan neighbour keep-alive
drivers/net/ethernet/netronome/nfp/Makefile | 3 +-
drivers/net/ethernet/netronome/nfp/flower/action.c | 169 ++++-
drivers/net/ethernet/netronome/nfp/flower/cmsg.c | 16 +-
drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 87 ++-
drivers/net/ethernet/netronome/nfp/flower/main.c | 13 +
drivers/net/ethernet/netronome/nfp/flower/main.h | 35 +
drivers/net/ethernet/netronome/nfp/flower/match.c | 75 +-
.../net/ethernet/netronome/nfp/flower/metadata.c | 2 +-
.../net/ethernet/netronome/nfp/flower/offload.c | 74 +-
.../ethernet/netronome/nfp/flower/tunnel_conf.c | 811 +++++++++++++++++++++
10 files changed, 1243 insertions(+), 42 deletions(-)
create mode 100644 drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
--
2.1.4
* [PATCH net-next 1/7] nfp: add helper to get flower cmsg length
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
@ 2017-09-25 10:23 ` Simon Horman
2017-09-25 10:23 ` [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields Simon Horman
From: Simon Horman @ 2017-09-25 10:23 UTC (permalink / raw)
To: David Miller, Jakub Kicinski
Cc: netdev, oss-drivers, John Hurley, Simon Horman
From: John Hurley <john.hurley@netronome.com>
Add a helper function that returns the length of the cmsg data when given
the cmsg skb.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
---
drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 5 +++++
drivers/net/ethernet/netronome/nfp/flower/metadata.c | 2 +-
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
index a2ec60344236..7a5ccf0cc7c2 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
@@ -323,6 +323,11 @@ static inline void *nfp_flower_cmsg_get_data(struct sk_buff *skb)
return (unsigned char *)skb->data + NFP_FLOWER_CMSG_HLEN;
}
+static inline int nfp_flower_cmsg_get_data_len(struct sk_buff *skb)
+{
+ return skb->len - NFP_FLOWER_CMSG_HLEN;
+}
+
struct sk_buff *
nfp_flower_cmsg_mac_repr_start(struct nfp_app *app, unsigned int num_ports);
void
diff --git a/drivers/net/ethernet/netronome/nfp/flower/metadata.c b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
index 3226ddc55f99..193520ef23f0 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/metadata.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/metadata.c
@@ -140,7 +140,7 @@ nfp_flower_update_stats(struct nfp_app *app, struct nfp_fl_stats_frame *stats)
void nfp_flower_rx_flow_stats(struct nfp_app *app, struct sk_buff *skb)
{
- unsigned int msg_len = skb->len - NFP_FLOWER_CMSG_HLEN;
+ unsigned int msg_len = nfp_flower_cmsg_get_data_len(skb);
struct nfp_fl_stats_frame *stats_frame;
unsigned char *msg;
int i;
--
2.1.4
* [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
2017-09-25 10:23 ` [PATCH net-next 1/7] nfp: add helper to get flower cmsg length Simon Horman
@ 2017-09-25 10:23 ` Simon Horman
2017-09-25 18:35 ` Or Gerlitz
2017-09-25 10:23 ` [PATCH net-next 3/7] nfp: compile flower vxlan tunnel set actions Simon Horman
From: Simon Horman @ 2017-09-25 10:23 UTC (permalink / raw)
To: David Miller, Jakub Kicinski
Cc: netdev, oss-drivers, John Hurley, Simon Horman
From: John Hurley <john.hurley@netronome.com>
Compile ovs-tc flower vxlan metadata match fields for offloading. Only
support offload of tunnel data when the VXLAN port specifically matches the
well-known port 4789.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
---
drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 38 ++++++++++++
drivers/net/ethernet/netronome/nfp/flower/main.h | 2 +
drivers/net/ethernet/netronome/nfp/flower/match.c | 60 +++++++++++++++++--
.../net/ethernet/netronome/nfp/flower/offload.c | 70 +++++++++++++++++++---
4 files changed, 158 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
index 7a5ccf0cc7c2..af9165b3b652 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
@@ -83,6 +83,14 @@
#define NFP_FL_PUSH_VLAN_CFI BIT(12)
#define NFP_FL_PUSH_VLAN_VID GENMASK(11, 0)
+/* Tunnel ports */
+#define NFP_FL_PORT_TYPE_TUN 0x50000000
+
+enum nfp_flower_tun_type {
+ NFP_FL_TUNNEL_NONE = 0,
+ NFP_FL_TUNNEL_VXLAN = 2,
+};
+
struct nfp_fl_output {
__be16 a_op;
__be16 flags;
@@ -230,6 +238,36 @@ struct nfp_flower_ipv6 {
struct in6_addr ipv6_dst;
};
+/* Flow Frame VXLAN --> Tunnel details (4W/16B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_src |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_dst |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_flags | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | gpe_flags | Reserved | Next Protocol |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VNI | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_vxlan {
+ __be32 ip_src;
+ __be32 ip_dst;
+ __be16 tun_flags;
+ u8 tos;
+ u8 ttl;
+ u8 gpe_flags;
+ u8 reserved[2];
+ u8 nxt_proto;
+ __be32 tun_id;
+};
+
+#define NFP_FL_TUN_VNI_OFFSET 8
+
/* The base header for a control message packet.
* Defines an 8-bit version, and an 8-bit type, padded
* to a 32-bit word. Rest of the packet is type-specific.
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index c20dd00a1cae..cd695eabce02 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -58,6 +58,8 @@ struct nfp_app;
#define NFP_FL_MASK_REUSE_TIME_NS 40000
#define NFP_FL_MASK_ID_LOCATION 1
+#define NFP_FL_VXLAN_PORT 4789
+
struct nfp_fl_mask_id {
struct circ_buf mask_id_free_list;
struct timespec64 *last_used;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c
index d25b5038c3a2..1fd1bab0611f 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/match.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/match.c
@@ -77,14 +77,17 @@ nfp_flower_compile_meta(struct nfp_flower_meta_one *frame, u8 key_type)
static int
nfp_flower_compile_port(struct nfp_flower_in_port *frame, u32 cmsg_port,
- bool mask_version)
+ bool mask_version, enum nfp_flower_tun_type tun_type)
{
if (mask_version) {
frame->in_port = cpu_to_be32(~0);
return 0;
}
- frame->in_port = cpu_to_be32(cmsg_port);
+ if (tun_type)
+ frame->in_port = cpu_to_be32(NFP_FL_PORT_TYPE_TUN | tun_type);
+ else
+ frame->in_port = cpu_to_be32(cmsg_port);
return 0;
}
@@ -189,15 +192,53 @@ nfp_flower_compile_ipv6(struct nfp_flower_ipv6 *frame,
}
}
+static void
+nfp_flower_compile_vxlan(struct nfp_flower_vxlan *frame,
+ struct tc_cls_flower_offload *flow,
+ bool mask_version)
+{
+ struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
+ struct flow_dissector_key_ipv4_addrs *vxlan_ips;
+ struct flow_dissector_key_keyid *vni;
+
+ /* Wildcard TOS/TTL/GPE_FLAGS/NXT_PROTO for now. */
+ memset(frame, 0, sizeof(struct nfp_flower_vxlan));
+
+ if (dissector_uses_key(flow->dissector,
+ FLOW_DISSECTOR_KEY_ENC_KEYID)) {
+ u32 temp_vni;
+
+ vni = skb_flow_dissector_target(flow->dissector,
+ FLOW_DISSECTOR_KEY_ENC_KEYID,
+ target);
+ temp_vni = be32_to_cpu(vni->keyid) << NFP_FL_TUN_VNI_OFFSET;
+ frame->tun_id = cpu_to_be32(temp_vni);
+ }
+
+ if (dissector_uses_key(flow->dissector,
+ FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS)) {
+ vxlan_ips =
+ skb_flow_dissector_target(flow->dissector,
+ FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
+ target);
+ frame->ip_src = vxlan_ips->src;
+ frame->ip_dst = vxlan_ips->dst;
+ }
+}
+
int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow,
struct nfp_fl_key_ls *key_ls,
struct net_device *netdev,
struct nfp_fl_payload *nfp_flow)
{
+ enum nfp_flower_tun_type tun_type = NFP_FL_TUNNEL_NONE;
int err;
u8 *ext;
u8 *msk;
+ if (key_ls->key_layer & NFP_FLOWER_LAYER_VXLAN)
+ tun_type = NFP_FL_TUNNEL_VXLAN;
+
memset(nfp_flow->unmasked_data, 0, key_ls->key_size);
memset(nfp_flow->mask_data, 0, key_ls->key_size);
@@ -216,14 +257,14 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow,
/* Populate Exact Port data. */
err = nfp_flower_compile_port((struct nfp_flower_in_port *)ext,
nfp_repr_get_port_id(netdev),
- false);
+ false, tun_type);
if (err)
return err;
/* Populate Mask Port Data. */
err = nfp_flower_compile_port((struct nfp_flower_in_port *)msk,
nfp_repr_get_port_id(netdev),
- true);
+ true, tun_type);
if (err)
return err;
@@ -291,5 +332,16 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow,
msk += sizeof(struct nfp_flower_ipv6);
}
+ if (key_ls->key_layer & NFP_FLOWER_LAYER_VXLAN) {
+ /* Populate Exact VXLAN Data. */
+ nfp_flower_compile_vxlan((struct nfp_flower_vxlan *)ext,
+ flow, false);
+ /* Populate Mask VXLAN Data. */
+ nfp_flower_compile_vxlan((struct nfp_flower_vxlan *)msk,
+ flow, true);
+ ext += sizeof(struct nfp_flower_vxlan);
+ msk += sizeof(struct nfp_flower_vxlan);
+ }
+
return 0;
}
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index a18b4d2b1d3e..637372ba8f55 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -52,8 +52,25 @@
BIT(FLOW_DISSECTOR_KEY_PORTS) | \
BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) | \
BIT(FLOW_DISSECTOR_KEY_VLAN) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_CONTROL) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_PORTS) | \
BIT(FLOW_DISSECTOR_KEY_IP))
+#define NFP_FLOWER_WHITELIST_TUN_DISSECTOR \
+ (BIT(FLOW_DISSECTOR_KEY_ENC_CONTROL) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_PORTS))
+
+#define NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R \
+ (BIT(FLOW_DISSECTOR_KEY_ENC_CONTROL) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
+ BIT(FLOW_DISSECTOR_KEY_ENC_PORTS))
+
static int
nfp_flower_xmit_flow(struct net_device *netdev,
struct nfp_fl_payload *nfp_flow, u8 mtype)
@@ -125,15 +142,58 @@ nfp_flower_calculate_key_layers(struct nfp_fl_key_ls *ret_key_ls,
if (flow->dissector->used_keys & ~NFP_FLOWER_WHITELIST_DISSECTOR)
return -EOPNOTSUPP;
+ /* If any tun dissector is used then the required set must be used. */
+ if (flow->dissector->used_keys & NFP_FLOWER_WHITELIST_TUN_DISSECTOR &&
+ (flow->dissector->used_keys & NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R)
+ != NFP_FLOWER_WHITELIST_TUN_DISSECTOR_R)
+ return -EOPNOTSUPP;
+
+ key_layer_two = 0;
+ key_layer = NFP_FLOWER_LAYER_PORT | NFP_FLOWER_LAYER_MAC;
+ key_size = sizeof(struct nfp_flower_meta_one) +
+ sizeof(struct nfp_flower_in_port) +
+ sizeof(struct nfp_flower_mac_mpls);
+
if (dissector_uses_key(flow->dissector,
FLOW_DISSECTOR_KEY_ENC_CONTROL)) {
+ struct flow_dissector_key_ipv4_addrs *mask_ipv4 = NULL;
+ struct flow_dissector_key_ports *mask_enc_ports = NULL;
+ struct flow_dissector_key_ports *enc_ports = NULL;
struct flow_dissector_key_control *mask_enc_ctl =
skb_flow_dissector_target(flow->dissector,
FLOW_DISSECTOR_KEY_ENC_CONTROL,
flow->mask);
- /* We are expecting a tunnel. For now we ignore offloading. */
- if (mask_enc_ctl->addr_type)
+ struct flow_dissector_key_control *enc_ctl =
+ skb_flow_dissector_target(flow->dissector,
+ FLOW_DISSECTOR_KEY_ENC_CONTROL,
+ flow->key);
+ if (mask_enc_ctl->addr_type != 0xffff ||
+ enc_ctl->addr_type != FLOW_DISSECTOR_KEY_IPV4_ADDRS)
return -EOPNOTSUPP;
+
+ /* These fields are already verified as used. */
+ mask_ipv4 =
+ skb_flow_dissector_target(flow->dissector,
+ FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS,
+ flow->mask);
+ if (mask_ipv4->dst != cpu_to_be32(~0))
+ return -EOPNOTSUPP;
+
+ mask_enc_ports =
+ skb_flow_dissector_target(flow->dissector,
+ FLOW_DISSECTOR_KEY_ENC_PORTS,
+ flow->mask);
+ enc_ports =
+ skb_flow_dissector_target(flow->dissector,
+ FLOW_DISSECTOR_KEY_ENC_PORTS,
+ flow->key);
+
+ if (mask_enc_ports->dst != cpu_to_be16(~0) ||
+ enc_ports->dst != htons(NFP_FL_VXLAN_PORT))
+ return -EOPNOTSUPP;
+
+ key_layer |= NFP_FLOWER_LAYER_VXLAN;
+ key_size += sizeof(struct nfp_flower_vxlan);
}
if (dissector_uses_key(flow->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
@@ -151,12 +211,6 @@ nfp_flower_calculate_key_layers(struct nfp_fl_key_ls *ret_key_ls,
FLOW_DISSECTOR_KEY_IP,
flow->mask);
- key_layer_two = 0;
- key_layer = NFP_FLOWER_LAYER_PORT | NFP_FLOWER_LAYER_MAC;
- key_size = sizeof(struct nfp_flower_meta_one) +
- sizeof(struct nfp_flower_in_port) +
- sizeof(struct nfp_flower_mac_mpls);
-
if (mask_basic && mask_basic->n_proto) {
/* Ethernet type is present in the key. */
switch (key_basic->n_proto) {
--
2.1.4
* [PATCH net-next 3/7] nfp: compile flower vxlan tunnel set actions
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
2017-09-25 10:23 ` [PATCH net-next 1/7] nfp: add helper to get flower cmsg length Simon Horman
2017-09-25 10:23 ` [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields Simon Horman
@ 2017-09-25 10:23 ` Simon Horman
2017-09-25 10:23 ` [PATCH net-next 4/7] nfp: offload flower vxlan endpoint MAC addresses Simon Horman
From: Simon Horman @ 2017-09-25 10:23 UTC (permalink / raw)
To: David Miller, Jakub Kicinski
Cc: netdev, oss-drivers, John Hurley, Simon Horman
From: John Hurley <john.hurley@netronome.com>
Compile set tunnel actions for tc flower. Only support VXLAN and ensure a
tunnel destination port of 4789 is used.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
---
drivers/net/ethernet/netronome/nfp/flower/action.c | 169 ++++++++++++++++++---
drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 31 +++-
2 files changed, 179 insertions(+), 21 deletions(-)
diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
index db9750695dc7..38f3835ae176 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
@@ -37,6 +37,7 @@
#include <net/tc_act/tc_gact.h>
#include <net/tc_act/tc_mirred.h>
#include <net/tc_act/tc_vlan.h>
+#include <net/tc_act/tc_tunnel_key.h>
#include "cmsg.h"
#include "main.h"
@@ -80,14 +81,27 @@ nfp_fl_push_vlan(struct nfp_fl_push_vlan *push_vlan,
push_vlan->vlan_tci = cpu_to_be16(tmp_push_vlan_tci);
}
+static bool nfp_fl_netdev_is_tunnel_type(struct net_device *out_dev,
+ enum nfp_flower_tun_type tun_type)
+{
+ if (!out_dev->rtnl_link_ops)
+ return false;
+
+ if (!strcmp(out_dev->rtnl_link_ops->kind, "vxlan"))
+ return tun_type == NFP_FL_TUNNEL_VXLAN;
+
+ return false;
+}
+
static int
nfp_fl_output(struct nfp_fl_output *output, const struct tc_action *action,
struct nfp_fl_payload *nfp_flow, bool last,
- struct net_device *in_dev)
+ struct net_device *in_dev, enum nfp_flower_tun_type tun_type,
+ int *tun_out_cnt)
{
size_t act_size = sizeof(struct nfp_fl_output);
+ u16 tmp_output_op, tmp_flags;
struct net_device *out_dev;
- u16 tmp_output_op;
int ifindex;
/* Set action opcode to output action. */
@@ -97,25 +111,114 @@ nfp_fl_output(struct nfp_fl_output *output, const struct tc_action *action,
output->a_op = cpu_to_be16(tmp_output_op);
- /* Set action output parameters. */
- output->flags = cpu_to_be16(last ? NFP_FL_OUT_FLAGS_LAST : 0);
-
ifindex = tcf_mirred_ifindex(action);
out_dev = __dev_get_by_index(dev_net(in_dev), ifindex);
if (!out_dev)
return -EOPNOTSUPP;
- /* Only offload egress ports are on the same device as the ingress
- * port.
+ tmp_flags = last ? NFP_FL_OUT_FLAGS_LAST : 0;
+
+ if (tun_type) {
+ /* Verify the egress netdev matches the tunnel type. */
+ if (!nfp_fl_netdev_is_tunnel_type(out_dev, tun_type))
+ return -EOPNOTSUPP;
+
+ if (*tun_out_cnt)
+ return -EOPNOTSUPP;
+ (*tun_out_cnt)++;
+
+ output->flags = cpu_to_be16(tmp_flags |
+ NFP_FL_OUT_FLAGS_USE_TUN);
+ output->port = cpu_to_be32(NFP_FL_PORT_TYPE_TUN | tun_type);
+ } else {
+ /* Set action output parameters. */
+ output->flags = cpu_to_be16(tmp_flags);
+
+ /* Only offload if egress ports are on the same device as the
+ * ingress port.
+ */
+ if (!switchdev_port_same_parent_id(in_dev, out_dev))
+ return -EOPNOTSUPP;
+
+ output->port = cpu_to_be32(nfp_repr_get_port_id(out_dev));
+ if (!output->port)
+ return -EOPNOTSUPP;
+ }
+ nfp_flow->meta.shortcut = output->port;
+
+ return 0;
+}
+
+static bool nfp_fl_supported_tun_port(const struct tc_action *action)
+{
+ struct ip_tunnel_info *tun = tcf_tunnel_info(action);
+
+ return tun->key.tp_dst == htons(NFP_FL_VXLAN_PORT);
+}
+
+static struct nfp_fl_pre_tunnel *nfp_fl_pre_tunnel(char *act_data, int act_len)
+{
+ size_t act_size = sizeof(struct nfp_fl_pre_tunnel);
+ struct nfp_fl_pre_tunnel *pre_tun_act;
+ u16 tmp_pre_tun_op;
+
+ /* Pre_tunnel action must be first on action list.
+ * If other actions already exist they need pushed forward.
*/
- if (!switchdev_port_same_parent_id(in_dev, out_dev))
- return -EOPNOTSUPP;
+ if (act_len)
+ memmove(act_data + act_size, act_data, act_len);
+
+ pre_tun_act = (struct nfp_fl_pre_tunnel *)act_data;
+
+ memset(pre_tun_act, 0, act_size);
+
+ tmp_pre_tun_op =
+ FIELD_PREP(NFP_FL_ACT_LEN_LW, act_size >> NFP_FL_LW_SIZ) |
+ FIELD_PREP(NFP_FL_ACT_JMP_ID, NFP_FL_ACTION_OPCODE_PRE_TUNNEL);
+
+ pre_tun_act->a_op = cpu_to_be16(tmp_pre_tun_op);
- output->port = cpu_to_be32(nfp_repr_get_port_id(out_dev));
- if (!output->port)
+ return pre_tun_act;
+}
+
+static int
+nfp_fl_set_vxlan(struct nfp_fl_set_vxlan *set_vxlan,
+ const struct tc_action *action,
+ struct nfp_fl_pre_tunnel *pre_tun)
+{
+ struct ip_tunnel_info *vxlan = tcf_tunnel_info(action);
+ size_t act_size = sizeof(struct nfp_fl_set_vxlan);
+ u32 tmp_set_vxlan_type_index = 0;
+ u16 tmp_set_vxlan_op;
+ /* Currently support one pre-tunnel so index is always 0. */
+ int pretun_idx = 0;
+
+ if (vxlan->options_len) {
+ /* Do not support options e.g. vxlan gpe. */
return -EOPNOTSUPP;
+ }
- nfp_flow->meta.shortcut = output->port;
+ tmp_set_vxlan_op =
+ FIELD_PREP(NFP_FL_ACT_LEN_LW, act_size >> NFP_FL_LW_SIZ) |
+ FIELD_PREP(NFP_FL_ACT_JMP_ID,
+ NFP_FL_ACTION_OPCODE_SET_IPV4_TUNNEL);
+
+ set_vxlan->a_op = cpu_to_be16(tmp_set_vxlan_op);
+
+ /* Set tunnel type and pre-tunnel index. */
+ tmp_set_vxlan_type_index |=
+ FIELD_PREP(NFP_FL_IPV4_TUNNEL_TYPE, NFP_FL_TUNNEL_VXLAN) |
+ FIELD_PREP(NFP_FL_IPV4_PRE_TUN_INDEX, pretun_idx);
+
+ set_vxlan->tun_type_index = cpu_to_be32(tmp_set_vxlan_type_index);
+
+ set_vxlan->tun_id = vxlan->key.tun_id;
+ set_vxlan->tun_flags = vxlan->key.tun_flags;
+ set_vxlan->ipv4_ttl = vxlan->key.ttl;
+ set_vxlan->ipv4_tos = vxlan->key.tos;
+
+ /* Complete pre_tunnel action. */
+ pre_tun->ipv4_dst = vxlan->key.u.ipv4.dst;
return 0;
}
@@ -123,8 +226,11 @@ nfp_fl_output(struct nfp_fl_output *output, const struct tc_action *action,
static int
nfp_flower_loop_action(const struct tc_action *a,
struct nfp_fl_payload *nfp_fl, int *a_len,
- struct net_device *netdev)
+ struct net_device *netdev,
+ enum nfp_flower_tun_type *tun_type, int *tun_out_cnt)
{
+ struct nfp_fl_pre_tunnel *pre_tun;
+ struct nfp_fl_set_vxlan *s_vxl;
struct nfp_fl_push_vlan *psh_v;
struct nfp_fl_pop_vlan *pop_v;
struct nfp_fl_output *output;
@@ -137,7 +243,8 @@ nfp_flower_loop_action(const struct tc_action *a,
return -EOPNOTSUPP;
output = (struct nfp_fl_output *)&nfp_fl->action_data[*a_len];
- err = nfp_fl_output(output, a, nfp_fl, true, netdev);
+ err = nfp_fl_output(output, a, nfp_fl, true, netdev, *tun_type,
+ tun_out_cnt);
if (err)
return err;
@@ -147,7 +254,8 @@ nfp_flower_loop_action(const struct tc_action *a,
return -EOPNOTSUPP;
output = (struct nfp_fl_output *)&nfp_fl->action_data[*a_len];
- err = nfp_fl_output(output, a, nfp_fl, false, netdev);
+ err = nfp_fl_output(output, a, nfp_fl, false, netdev, *tun_type,
+ tun_out_cnt);
if (err)
return err;
@@ -170,6 +278,29 @@ nfp_flower_loop_action(const struct tc_action *a,
nfp_fl_push_vlan(psh_v, a);
*a_len += sizeof(struct nfp_fl_push_vlan);
+ } else if (is_tcf_tunnel_set(a) && nfp_fl_supported_tun_port(a)) {
+ /* Pre-tunnel action is required for tunnel encap.
+ * This checks for next hop entries on NFP.
+ * If none, the packet falls back before applying other actions.
+ */
+ if (*a_len + sizeof(struct nfp_fl_pre_tunnel) +
+ sizeof(struct nfp_fl_set_vxlan) > NFP_FL_MAX_A_SIZ)
+ return -EOPNOTSUPP;
+
+ *tun_type = NFP_FL_TUNNEL_VXLAN;
+ pre_tun = nfp_fl_pre_tunnel(nfp_fl->action_data, *a_len);
+ nfp_fl->meta.shortcut = cpu_to_be32(NFP_FL_SC_ACT_NULL);
+ *a_len += sizeof(struct nfp_fl_pre_tunnel);
+
+ s_vxl = (struct nfp_fl_set_vxlan *)&nfp_fl->action_data[*a_len];
+ err = nfp_fl_set_vxlan(s_vxl, a, pre_tun);
+ if (err)
+ return err;
+
+ *a_len += sizeof(struct nfp_fl_set_vxlan);
+ } else if (is_tcf_tunnel_release(a)) {
+ /* Tunnel decap is handled by default so accept action. */
+ return 0;
} else {
/* Currently we do not handle any other actions. */
return -EOPNOTSUPP;
@@ -182,18 +313,22 @@ int nfp_flower_compile_action(struct tc_cls_flower_offload *flow,
struct net_device *netdev,
struct nfp_fl_payload *nfp_flow)
{
- int act_len, act_cnt, err;
+ int act_len, act_cnt, err, tun_out_cnt;
+ enum nfp_flower_tun_type tun_type;
const struct tc_action *a;
LIST_HEAD(actions);
memset(nfp_flow->action_data, 0, NFP_FL_MAX_A_SIZ);
nfp_flow->meta.act_len = 0;
+ tun_type = NFP_FL_TUNNEL_NONE;
act_len = 0;
act_cnt = 0;
+ tun_out_cnt = 0;
tcf_exts_to_list(flow->exts, &actions);
list_for_each_entry(a, &actions, list) {
- err = nfp_flower_loop_action(a, nfp_flow, &act_len, netdev);
+ err = nfp_flower_loop_action(a, nfp_flow, &act_len, netdev,
+ &tun_type, &tun_out_cnt);
if (err)
return err;
act_cnt++;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
index af9165b3b652..ff42ce8a1e9c 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
@@ -67,10 +67,12 @@
#define NFP_FL_LW_SIZ 2
/* Action opcodes */
-#define NFP_FL_ACTION_OPCODE_OUTPUT 0
-#define NFP_FL_ACTION_OPCODE_PUSH_VLAN 1
-#define NFP_FL_ACTION_OPCODE_POP_VLAN 2
-#define NFP_FL_ACTION_OPCODE_NUM 32
+#define NFP_FL_ACTION_OPCODE_OUTPUT 0
+#define NFP_FL_ACTION_OPCODE_PUSH_VLAN 1
+#define NFP_FL_ACTION_OPCODE_POP_VLAN 2
+#define NFP_FL_ACTION_OPCODE_SET_IPV4_TUNNEL 6
+#define NFP_FL_ACTION_OPCODE_PRE_TUNNEL 17
+#define NFP_FL_ACTION_OPCODE_NUM 32
#define NFP_FL_ACT_JMP_ID GENMASK(15, 8)
#define NFP_FL_ACT_LEN_LW GENMASK(7, 0)
@@ -85,6 +87,8 @@
/* Tunnel ports */
#define NFP_FL_PORT_TYPE_TUN 0x50000000
+#define NFP_FL_IPV4_TUNNEL_TYPE GENMASK(7, 4)
+#define NFP_FL_IPV4_PRE_TUN_INDEX GENMASK(2, 0)
enum nfp_flower_tun_type {
NFP_FL_TUNNEL_NONE = 0,
@@ -123,6 +127,25 @@ struct nfp_flower_meta_one {
u16 reserved;
};
+struct nfp_fl_pre_tunnel {
+ __be16 a_op;
+ __be16 reserved;
+ __be32 ipv4_dst;
+ /* reserved for use with IPv6 addresses */
+ __be32 extra[3];
+};
+
+struct nfp_fl_set_vxlan {
+ __be16 a_op;
+ __be16 reserved;
+ __be64 tun_id;
+ __be32 tun_type_index;
+ __be16 tun_flags;
+ u8 ipv4_ttl;
+ u8 ipv4_tos;
+ __be32 extra[2];
+} __packed;
+
/* Metadata with L2 (1W/4B)
* ----------------------------------------------------------------
* 3 2 1
--
2.1.4
* [PATCH net-next 4/7] nfp: offload flower vxlan endpoint MAC addresses
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
2017-09-25 10:23 ` [PATCH net-next 3/7] nfp: compile flower vxlan tunnel set actions Simon Horman
@ 2017-09-25 10:23 ` Simon Horman
2017-09-25 10:23 ` [PATCH net-next 5/7] nfp: offload vxlan IPv4 endpoints of flower rules Simon Horman
From: Simon Horman @ 2017-09-25 10:23 UTC (permalink / raw)
To: David Miller, Jakub Kicinski
Cc: netdev, oss-drivers, John Hurley, Simon Horman
From: John Hurley <john.hurley@netronome.com>
Generate a list of MAC addresses of netdevs that could be used as VXLAN
tunnel end points. Give offloaded MACs an index for storage on the NFP in
the ranges:
0x100-0x1ff physical port representors
0x200-0x2ff VF port representors
0x300-0x3ff other offloads (e.g. vxlan netdevs, ovs bridges)
Assign physical and VF indexes based on unique 8-bit values in the port
number. Maintain a list of other netdevs to ensure the same netdev is not
offloaded twice and that each gets a unique ID without exhausting the
entries. Because the IDs are unique but constant for a netdev, any changes
are implemented by overwriting the index on the NFP.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
---
drivers/net/ethernet/netronome/nfp/Makefile | 3 +-
drivers/net/ethernet/netronome/nfp/flower/cmsg.c | 7 -
drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 9 +
drivers/net/ethernet/netronome/nfp/flower/main.c | 13 +
drivers/net/ethernet/netronome/nfp/flower/main.h | 18 +
drivers/net/ethernet/netronome/nfp/flower/match.c | 7 +
.../ethernet/netronome/nfp/flower/tunnel_conf.c | 374 +++++++++++++++++++++
7 files changed, 423 insertions(+), 8 deletions(-)
create mode 100644 drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile
index 96e579a15cbe..becaacf1554d 100644
--- a/drivers/net/ethernet/netronome/nfp/Makefile
+++ b/drivers/net/ethernet/netronome/nfp/Makefile
@@ -37,7 +37,8 @@ nfp-objs += \
flower/main.o \
flower/match.o \
flower/metadata.o \
- flower/offload.o
+ flower/offload.o \
+ flower/tunnel_conf.o
endif
ifeq ($(CONFIG_BPF_SYSCALL),y)
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
index c3ca05d10fe1..b756006dba6f 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
@@ -38,17 +38,10 @@
#include <net/dst_metadata.h>
#include "main.h"
-#include "../nfpcore/nfp_cpp.h"
#include "../nfp_net.h"
#include "../nfp_net_repr.h"
#include "./cmsg.h"
-#define nfp_flower_cmsg_warn(app, fmt, args...) \
- do { \
- if (net_ratelimit()) \
- nfp_warn((app)->cpp, fmt, ## args); \
- } while (0)
-
static struct nfp_flower_cmsg_hdr *
nfp_flower_cmsg_get_hdr(struct sk_buff *skb)
{
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
index ff42ce8a1e9c..dc248193c996 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
@@ -39,6 +39,7 @@
#include <linux/types.h>
#include "../nfp_app.h"
+#include "../nfpcore/nfp_cpp.h"
#define NFP_FLOWER_LAYER_META BIT(0)
#define NFP_FLOWER_LAYER_PORT BIT(1)
@@ -90,6 +91,12 @@
#define NFP_FL_IPV4_TUNNEL_TYPE GENMASK(7, 4)
#define NFP_FL_IPV4_PRE_TUN_INDEX GENMASK(2, 0)
+#define nfp_flower_cmsg_warn(app, fmt, args...) \
+ do { \
+ if (net_ratelimit()) \
+ nfp_warn((app)->cpp, fmt, ## args); \
+ } while (0)
+
enum nfp_flower_tun_type {
NFP_FL_TUNNEL_NONE = 0,
NFP_FL_TUNNEL_VXLAN = 2,
@@ -310,6 +317,7 @@ enum nfp_flower_cmsg_type_port {
NFP_FLOWER_CMSG_TYPE_FLOW_DEL = 2,
NFP_FLOWER_CMSG_TYPE_MAC_REPR = 7,
NFP_FLOWER_CMSG_TYPE_PORT_MOD = 8,
+ NFP_FLOWER_CMSG_TYPE_TUN_MAC = 11,
NFP_FLOWER_CMSG_TYPE_FLOW_STATS = 15,
NFP_FLOWER_CMSG_TYPE_PORT_ECHO = 16,
NFP_FLOWER_CMSG_TYPE_MAX = 32,
@@ -343,6 +351,7 @@ enum nfp_flower_cmsg_port_type {
NFP_FLOWER_CMSG_PORT_TYPE_UNSPEC = 0x0,
NFP_FLOWER_CMSG_PORT_TYPE_PHYS_PORT = 0x1,
NFP_FLOWER_CMSG_PORT_TYPE_PCIE_PORT = 0x2,
+ NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT = 0x3,
};
enum nfp_flower_cmsg_port_vnic_type {
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.c b/drivers/net/ethernet/netronome/nfp/flower/main.c
index 91fe03617106..e46e7c60d491 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.c
@@ -436,6 +436,16 @@ static void nfp_flower_clean(struct nfp_app *app)
app->priv = NULL;
}
+static int nfp_flower_start(struct nfp_app *app)
+{
+ return nfp_tunnel_config_start(app);
+}
+
+static void nfp_flower_stop(struct nfp_app *app)
+{
+ nfp_tunnel_config_stop(app);
+}
+
const struct nfp_app_type app_flower = {
.id = NFP_APP_FLOWER_NIC,
.name = "flower",
@@ -453,6 +463,9 @@ const struct nfp_app_type app_flower = {
.repr_open = nfp_flower_repr_netdev_open,
.repr_stop = nfp_flower_repr_netdev_stop,
+ .start = nfp_flower_start,
+ .stop = nfp_flower_stop,
+
.ctrl_msg_rx = nfp_flower_cmsg_rx,
.sriov_enable = nfp_flower_sriov_enable,
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index cd695eabce02..9de375acc254 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -84,6 +84,13 @@ struct nfp_fl_stats_id {
* @flow_table: Hash table used to store flower rules
* @cmsg_work: Workqueue for control messages processing
* @cmsg_skbs: List of skbs for control message processing
+ * @nfp_mac_off_list: List of MAC addresses to offload
+ * @nfp_mac_index_list: List of unique 8-bit indexes for non NFP netdevs
+ * @nfp_mac_off_lock: Lock for the MAC address list
+ * @nfp_mac_index_lock: Lock for the MAC index list
+ * @nfp_mac_off_ids: IDA to manage id assignment for offloaded macs
+ * @nfp_mac_off_count: Number of MACs in address list
+ * @nfp_tun_mac_nb: Notifier to monitor link state
*/
struct nfp_flower_priv {
struct nfp_app *app;
@@ -96,6 +103,13 @@ struct nfp_flower_priv {
DECLARE_HASHTABLE(flow_table, NFP_FLOWER_HASH_BITS);
struct work_struct cmsg_work;
struct sk_buff_head cmsg_skbs;
+ struct list_head nfp_mac_off_list;
+ struct list_head nfp_mac_index_list;
+ struct mutex nfp_mac_off_lock;
+ struct mutex nfp_mac_index_lock;
+ struct ida nfp_mac_off_ids;
+ int nfp_mac_off_count;
+ struct notifier_block nfp_tun_mac_nb;
};
struct nfp_fl_key_ls {
@@ -165,4 +179,8 @@ nfp_flower_remove_fl_table(struct nfp_app *app, unsigned long tc_flower_cookie);
void nfp_flower_rx_flow_stats(struct nfp_app *app, struct sk_buff *skb);
+int nfp_tunnel_config_start(struct nfp_app *app);
+void nfp_tunnel_config_stop(struct nfp_app *app);
+void nfp_tunnel_write_macs(struct nfp_app *app);
+
#endif
diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c
index 1fd1bab0611f..cb3ff6c126e8 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/match.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/match.c
@@ -232,6 +232,7 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow,
struct nfp_fl_payload *nfp_flow)
{
enum nfp_flower_tun_type tun_type = NFP_FL_TUNNEL_NONE;
+ struct nfp_repr *netdev_repr;
int err;
u8 *ext;
u8 *msk;
@@ -341,6 +342,12 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow,
flow, true);
ext += sizeof(struct nfp_flower_vxlan);
msk += sizeof(struct nfp_flower_vxlan);
+
+ /* Configure tunnel end point MAC. */
+ if (nfp_netdev_is_nfp_repr(netdev)) {
+ netdev_repr = netdev_priv(netdev);
+ nfp_tunnel_write_macs(netdev_repr->app);
+ }
}
return 0;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
new file mode 100644
index 000000000000..34be85803020
--- /dev/null
+++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
@@ -0,0 +1,374 @@
+/*
+ * Copyright (C) 2017 Netronome Systems, Inc.
+ *
+ * This software is dual licensed under the GNU General License Version 2,
+ * June 1991 as shown in the file COPYING in the top-level directory of this
+ * source tree or the BSD 2-Clause License provided below. You have the
+ * option to license this software under the complete terms of either license.
+ *
+ * The BSD 2-Clause License:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/idr.h>
+#include <net/dst_metadata.h>
+
+#include "cmsg.h"
+#include "main.h"
+#include "../nfp_net_repr.h"
+#include "../nfp_net.h"
+
+/**
+ * struct nfp_tun_mac_addr - configure MAC address of tunnel EP on NFP
+ * @reserved: reserved for future use
+ * @count: number of MAC addresses in the message
+ * @index: index of MAC address in the lookup table
+ * @addr: interface MAC address
+ * @addresses: series of MACs to offload
+ */
+struct nfp_tun_mac_addr {
+ __be16 reserved;
+ __be16 count;
+ struct index_mac_addr {
+ __be16 index;
+ u8 addr[ETH_ALEN];
+ } addresses[];
+};
+
+/**
+ * struct nfp_tun_mac_offload_entry - list of MACs to offload
+ * @index: index of MAC address for offloading
+ * @addr: interface MAC address
+ * @list: list pointer
+ */
+struct nfp_tun_mac_offload_entry {
+ __be16 index;
+ u8 addr[ETH_ALEN];
+ struct list_head list;
+};
+
+#define NFP_MAX_MAC_INDEX 0xff
+
+/**
+ * struct nfp_tun_mac_non_nfp_idx - converts non NFP netdev ifindex to 8-bit id
+ * @ifindex: netdev ifindex of the device
+ * @index: index of netdevs mac on NFP
+ * @list: list pointer
+ */
+struct nfp_tun_mac_non_nfp_idx {
+ int ifindex;
+ u8 index;
+ struct list_head list;
+};
+
+static bool nfp_tun_is_netdev_to_offload(struct net_device *netdev)
+{
+ if (!netdev->rtnl_link_ops)
+ return false;
+ if (!strcmp(netdev->rtnl_link_ops->kind, "openvswitch"))
+ return true;
+ if (!strcmp(netdev->rtnl_link_ops->kind, "vxlan"))
+ return true;
+
+ return false;
+}
+
+static int
+nfp_flower_xmit_tun_conf(struct nfp_app *app, u8 mtype, u16 plen, void *pdata)
+{
+ struct sk_buff *skb;
+ unsigned char *msg;
+
+ skb = nfp_flower_cmsg_alloc(app, plen, mtype);
+ if (!skb)
+ return -ENOMEM;
+
+ msg = nfp_flower_cmsg_get_data(skb);
+ memcpy(msg, pdata, nfp_flower_cmsg_get_data_len(skb));
+
+ nfp_ctrl_tx(app->ctrl, skb);
+ return 0;
+}
+
+void nfp_tunnel_write_macs(struct nfp_app *app)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_tun_mac_offload_entry *entry;
+ struct nfp_tun_mac_addr *payload;
+ struct list_head *ptr, *storage;
+ int mac_count, err, pay_size;
+
+ mutex_lock(&priv->nfp_mac_off_lock);
+ if (!priv->nfp_mac_off_count) {
+ mutex_unlock(&priv->nfp_mac_off_lock);
+ return;
+ }
+
+ pay_size = sizeof(struct nfp_tun_mac_addr) +
+ sizeof(struct index_mac_addr) * priv->nfp_mac_off_count;
+
+ payload = kzalloc(pay_size, GFP_KERNEL);
+ if (!payload) {
+ mutex_unlock(&priv->nfp_mac_off_lock);
+ return;
+ }
+
+ payload->count = cpu_to_be16(priv->nfp_mac_off_count);
+
+ mac_count = 0;
+ list_for_each_safe(ptr, storage, &priv->nfp_mac_off_list) {
+ entry = list_entry(ptr, struct nfp_tun_mac_offload_entry,
+ list);
+ payload->addresses[mac_count].index = entry->index;
+ ether_addr_copy(payload->addresses[mac_count].addr,
+ entry->addr);
+ mac_count++;
+ }
+
+ err = nfp_flower_xmit_tun_conf(app, NFP_FLOWER_CMSG_TYPE_TUN_MAC,
+ pay_size, payload);
+
+ kfree(payload);
+
+ if (err) {
+ mutex_unlock(&priv->nfp_mac_off_lock);
+ /* Write failed so retain list for future retry. */
+ return;
+ }
+
+ /* If list was successfully offloaded, flush it. */
+ list_for_each_safe(ptr, storage, &priv->nfp_mac_off_list) {
+ entry = list_entry(ptr, struct nfp_tun_mac_offload_entry,
+ list);
+ list_del(&entry->list);
+ kfree(entry);
+ }
+
+ priv->nfp_mac_off_count = 0;
+ mutex_unlock(&priv->nfp_mac_off_lock);
+}
+
+static int nfp_tun_get_mac_idx(struct nfp_app *app, int ifindex)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_tun_mac_non_nfp_idx *entry;
+ struct list_head *ptr, *storage;
+ int idx;
+
+ mutex_lock(&priv->nfp_mac_index_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_mac_index_list) {
+ entry = list_entry(ptr, struct nfp_tun_mac_non_nfp_idx, list);
+ if (entry->ifindex == ifindex) {
+ idx = entry->index;
+ mutex_unlock(&priv->nfp_mac_index_lock);
+ return idx;
+ }
+ }
+
+ idx = ida_simple_get(&priv->nfp_mac_off_ids, 0,
+ NFP_MAX_MAC_INDEX, GFP_KERNEL);
+ if (idx < 0) {
+ mutex_unlock(&priv->nfp_mac_index_lock);
+ return idx;
+ }
+
+ entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry) {
+ mutex_unlock(&priv->nfp_mac_index_lock);
+ return -ENOMEM;
+ }
+ entry->ifindex = ifindex;
+ entry->index = idx;
+ list_add_tail(&entry->list, &priv->nfp_mac_index_list);
+ mutex_unlock(&priv->nfp_mac_index_lock);
+
+ return idx;
+}
+
+static void nfp_tun_del_mac_idx(struct nfp_app *app, int ifindex)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_tun_mac_non_nfp_idx *entry;
+ struct list_head *ptr, *storage;
+
+ mutex_lock(&priv->nfp_mac_index_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_mac_index_list) {
+ entry = list_entry(ptr, struct nfp_tun_mac_non_nfp_idx, list);
+ if (entry->ifindex == ifindex) {
+ ida_simple_remove(&priv->nfp_mac_off_ids,
+ entry->index);
+ list_del(&entry->list);
+ kfree(entry);
+ break;
+ }
+ }
+ mutex_unlock(&priv->nfp_mac_index_lock);
+}
+
+static void nfp_tun_add_to_mac_offload_list(struct net_device *netdev,
+ struct nfp_app *app)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_tun_mac_offload_entry *entry;
+ u16 nfp_mac_idx;
+ int port = 0;
+
+ /* Check if MAC should be offloaded. */
+ if (!is_valid_ether_addr(netdev->dev_addr))
+ return;
+
+ if (nfp_netdev_is_nfp_repr(netdev))
+ port = nfp_repr_get_port_id(netdev);
+ else if (!nfp_tun_is_netdev_to_offload(netdev))
+ return;
+
+ entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry) {
+ nfp_flower_cmsg_warn(app, "Mem fail when offloading MAC.\n");
+ return;
+ }
+
+ if (FIELD_GET(NFP_FLOWER_CMSG_PORT_TYPE, port) ==
+ NFP_FLOWER_CMSG_PORT_TYPE_PHYS_PORT) {
+ nfp_mac_idx = port << 8 | NFP_FLOWER_CMSG_PORT_TYPE_PHYS_PORT;
+ } else if (FIELD_GET(NFP_FLOWER_CMSG_PORT_TYPE, port) ==
+ NFP_FLOWER_CMSG_PORT_TYPE_PCIE_PORT) {
+ port = FIELD_GET(NFP_FLOWER_CMSG_PORT_VNIC, port);
+ nfp_mac_idx = port << 8 | NFP_FLOWER_CMSG_PORT_TYPE_PCIE_PORT;
+ } else {
+ /* Must assign our own unique 8-bit index. */
+ int idx = nfp_tun_get_mac_idx(app, netdev->ifindex);
+
+ if (idx < 0) {
+ nfp_flower_cmsg_warn(app, "Can't assign non-repr MAC index.\n");
+ kfree(entry);
+ return;
+ }
+ nfp_mac_idx = idx << 8 | NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT;
+ }
+
+ entry->index = cpu_to_be16(nfp_mac_idx);
+ ether_addr_copy(entry->addr, netdev->dev_addr);
+
+ mutex_lock(&priv->nfp_mac_off_lock);
+ priv->nfp_mac_off_count++;
+ list_add_tail(&entry->list, &priv->nfp_mac_off_list);
+ mutex_unlock(&priv->nfp_mac_off_lock);
+}
+
+static int nfp_tun_mac_event_handler(struct notifier_block *nb,
+ unsigned long event, void *ptr)
+{
+ struct nfp_flower_priv *app_priv;
+ struct net_device *netdev;
+ struct nfp_app *app;
+
+ if (event == NETDEV_DOWN || event == NETDEV_UNREGISTER) {
+ app_priv = container_of(nb, struct nfp_flower_priv,
+ nfp_tun_mac_nb);
+ app = app_priv->app;
+ netdev = netdev_notifier_info_to_dev(ptr);
+
+ /* If non-nfp netdev then free its offload index. */
+ if (nfp_tun_is_netdev_to_offload(netdev))
+ nfp_tun_del_mac_idx(app, netdev->ifindex);
+ } else if (event == NETDEV_UP || event == NETDEV_CHANGEADDR ||
+ event == NETDEV_REGISTER) {
+ app_priv = container_of(nb, struct nfp_flower_priv,
+ nfp_tun_mac_nb);
+ app = app_priv->app;
+ netdev = netdev_notifier_info_to_dev(ptr);
+
+ nfp_tun_add_to_mac_offload_list(netdev, app);
+
+ /* Force a list write to keep NFP up to date. */
+ nfp_tunnel_write_macs(app);
+ }
+ return NOTIFY_OK;
+}
+
+int nfp_tunnel_config_start(struct nfp_app *app)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct net_device *netdev;
+ int err;
+
+ /* Initialise priv data for MAC offloading. */
+ priv->nfp_mac_off_count = 0;
+ mutex_init(&priv->nfp_mac_off_lock);
+ INIT_LIST_HEAD(&priv->nfp_mac_off_list);
+ priv->nfp_tun_mac_nb.notifier_call = nfp_tun_mac_event_handler;
+ mutex_init(&priv->nfp_mac_index_lock);
+ INIT_LIST_HEAD(&priv->nfp_mac_index_list);
+ ida_init(&priv->nfp_mac_off_ids);
+
+ err = register_netdevice_notifier(&priv->nfp_tun_mac_nb);
+ if (err)
+ goto err_free_mac_ida;
+
+	/* Parse netdevs already registered for MACs that need to be offloaded. */
+ rtnl_lock();
+ for_each_netdev(&init_net, netdev)
+ nfp_tun_add_to_mac_offload_list(netdev, app);
+ rtnl_unlock();
+
+ return 0;
+
+err_free_mac_ida:
+ ida_destroy(&priv->nfp_mac_off_ids);
+ return err;
+}
+
+void nfp_tunnel_config_stop(struct nfp_app *app)
+{
+ struct nfp_tun_mac_offload_entry *mac_entry;
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_tun_mac_non_nfp_idx *mac_idx;
+ struct list_head *ptr, *storage;
+
+ unregister_netdevice_notifier(&priv->nfp_tun_mac_nb);
+
+ /* Free any memory that may be occupied by MAC list. */
+ mutex_lock(&priv->nfp_mac_off_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_mac_off_list) {
+ mac_entry = list_entry(ptr, struct nfp_tun_mac_offload_entry,
+ list);
+ list_del(&mac_entry->list);
+ kfree(mac_entry);
+ }
+ mutex_unlock(&priv->nfp_mac_off_lock);
+
+ /* Free any memory that may be occupied by MAC index list. */
+ mutex_lock(&priv->nfp_mac_index_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_mac_index_list) {
+ mac_idx = list_entry(ptr, struct nfp_tun_mac_non_nfp_idx,
+ list);
+ list_del(&mac_idx->list);
+ kfree(mac_idx);
+ }
+ mutex_unlock(&priv->nfp_mac_index_lock);
+
+ ida_destroy(&priv->nfp_mac_off_ids);
+}
--
2.1.4
* [PATCH net-next 5/7] nfp: offload vxlan IPv4 endpoints of flower rules
From: Simon Horman @ 2017-09-25 10:23 UTC (permalink / raw)
To: David Miller, Jakub Kicinski
Cc: netdev, oss-drivers, John Hurley, Simon Horman
From: John Hurley <john.hurley@netronome.com>
Maintain a list of IPv4 addresses used as the tunnel destination IP match
fields in currently active flower rules. Offload the entire list of
NFP_FL_IPV4_ADDRS_MAX entries (even if some are unused) when new IPs are
added or removed. The NFP should only be aware of tunnel end points that
are currently used by rules on the device.
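The per-IP reference counting this patch implements can be sketched in
plain C outside the kernel. This is an illustrative model only: a fixed
array stands in for the driver's mutex-protected list, and the helper
names (ip_add/ip_del, ip_entry) are ours, not the driver's. Each helper
returns the number of distinct IPs cached, i.e. what a subsequent
write of the IP list to the NFP would contain:

```c
#include <stdbool.h>
#include <stdint.h>

#define NFP_FL_IPV4_ADDRS_MAX 32

/* Illustrative cache entry: one refcount per tunnel destination IP. */
struct ip_entry {
	uint32_t ipv4_addr;
	int ref_count;
	bool used;
};

static struct ip_entry cache[NFP_FL_IPV4_ADDRS_MAX];

static int ip_count(void)
{
	int i, count = 0;

	for (i = 0; i < NFP_FL_IPV4_ADDRS_MAX; i++)
		count += cache[i].used;
	return count;
}

/* Rule matching on @ipv4 added: bump refcount or cache a new entry. */
static int ip_add(uint32_t ipv4)
{
	int i, free_slot = -1;

	for (i = 0; i < NFP_FL_IPV4_ADDRS_MAX; i++) {
		if (cache[i].used && cache[i].ipv4_addr == ipv4) {
			cache[i].ref_count++;	/* already offloaded */
			return ip_count();
		}
		if (!cache[i].used && free_slot < 0)
			free_slot = i;
	}
	if (free_slot >= 0) {
		cache[free_slot].used = true;
		cache[free_slot].ipv4_addr = ipv4;
		cache[free_slot].ref_count = 1;
	}
	return ip_count();
}

/* Rule matching on @ipv4 deleted: drop the IP when the last user goes. */
static int ip_del(uint32_t ipv4)
{
	int i;

	for (i = 0; i < NFP_FL_IPV4_ADDRS_MAX; i++)
		if (cache[i].used && cache[i].ipv4_addr == ipv4 &&
		    --cache[i].ref_count == 0)
			cache[i].used = false;
	return ip_count();
}
```

Two rules sharing one tunnel destination thus cost a single list slot,
and the IP is only withdrawn from the NFP once both rules are deleted.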
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
---
drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 1 +
drivers/net/ethernet/netronome/nfp/flower/main.h | 7 ++
drivers/net/ethernet/netronome/nfp/flower/match.c | 14 ++-
.../net/ethernet/netronome/nfp/flower/offload.c | 4 +
.../ethernet/netronome/nfp/flower/tunnel_conf.c | 120 +++++++++++++++++++++
5 files changed, 143 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
index dc248193c996..6540bb1ceefb 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
@@ -318,6 +318,7 @@ enum nfp_flower_cmsg_type_port {
NFP_FLOWER_CMSG_TYPE_MAC_REPR = 7,
NFP_FLOWER_CMSG_TYPE_PORT_MOD = 8,
NFP_FLOWER_CMSG_TYPE_TUN_MAC = 11,
+ NFP_FLOWER_CMSG_TYPE_TUN_IPS = 14,
NFP_FLOWER_CMSG_TYPE_FLOW_STATS = 15,
NFP_FLOWER_CMSG_TYPE_PORT_ECHO = 16,
NFP_FLOWER_CMSG_TYPE_MAX = 32,
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index 9de375acc254..53306af6cfe8 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -86,8 +86,10 @@ struct nfp_fl_stats_id {
* @cmsg_skbs: List of skbs for control message processing
* @nfp_mac_off_list: List of MAC addresses to offload
* @nfp_mac_index_list: List of unique 8-bit indexes for non NFP netdevs
+ * @nfp_ipv4_off_list: List of IPv4 addresses to offload
* @nfp_mac_off_lock: Lock for the MAC address list
* @nfp_mac_index_lock: Lock for the MAC index list
+ * @nfp_ipv4_off_lock: Lock for the IPv4 address list
* @nfp_mac_off_ids: IDA to manage id assignment for offloaded macs
* @nfp_mac_off_count: Number of MACs in address list
* @nfp_tun_mac_nb: Notifier to monitor link state
@@ -105,8 +107,10 @@ struct nfp_flower_priv {
struct sk_buff_head cmsg_skbs;
struct list_head nfp_mac_off_list;
struct list_head nfp_mac_index_list;
+ struct list_head nfp_ipv4_off_list;
struct mutex nfp_mac_off_lock;
struct mutex nfp_mac_index_lock;
+ struct mutex nfp_ipv4_off_lock;
struct ida nfp_mac_off_ids;
int nfp_mac_off_count;
struct notifier_block nfp_tun_mac_nb;
@@ -142,6 +146,7 @@ struct nfp_fl_payload {
struct rcu_head rcu;
spinlock_t lock; /* lock stats */
struct nfp_fl_stats stats;
+ __be32 nfp_tun_ipv4_addr;
char *unmasked_data;
char *mask_data;
char *action_data;
@@ -182,5 +187,7 @@ void nfp_flower_rx_flow_stats(struct nfp_app *app, struct sk_buff *skb);
int nfp_tunnel_config_start(struct nfp_app *app);
void nfp_tunnel_config_stop(struct nfp_app *app);
void nfp_tunnel_write_macs(struct nfp_app *app);
+void nfp_tunnel_del_ipv4_off(struct nfp_app *app, __be32 ipv4);
+void nfp_tunnel_add_ipv4_off(struct nfp_app *app, __be32 ipv4);
#endif
diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c
index cb3ff6c126e8..865a815ab92a 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/match.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/match.c
@@ -195,7 +195,7 @@ nfp_flower_compile_ipv6(struct nfp_flower_ipv6 *frame,
static void
nfp_flower_compile_vxlan(struct nfp_flower_vxlan *frame,
struct tc_cls_flower_offload *flow,
- bool mask_version)
+ bool mask_version, __be32 *tun_dst)
{
struct fl_flow_key *target = mask_version ? flow->mask : flow->key;
struct flow_dissector_key_ipv4_addrs *vxlan_ips;
@@ -223,6 +223,7 @@ nfp_flower_compile_vxlan(struct nfp_flower_vxlan *frame,
target);
frame->ip_src = vxlan_ips->src;
frame->ip_dst = vxlan_ips->dst;
+ *tun_dst = vxlan_ips->dst;
}
}
@@ -232,6 +233,7 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow,
struct nfp_fl_payload *nfp_flow)
{
enum nfp_flower_tun_type tun_type = NFP_FL_TUNNEL_NONE;
+ __be32 tun_dst, tun_dst_mask = 0;
struct nfp_repr *netdev_repr;
int err;
u8 *ext;
@@ -336,10 +338,10 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow,
if (key_ls->key_layer & NFP_FLOWER_LAYER_VXLAN) {
/* Populate Exact VXLAN Data. */
nfp_flower_compile_vxlan((struct nfp_flower_vxlan *)ext,
- flow, false);
+ flow, false, &tun_dst);
/* Populate Mask VXLAN Data. */
nfp_flower_compile_vxlan((struct nfp_flower_vxlan *)msk,
- flow, true);
+ flow, true, &tun_dst_mask);
ext += sizeof(struct nfp_flower_vxlan);
msk += sizeof(struct nfp_flower_vxlan);
@@ -347,6 +349,12 @@ int nfp_flower_compile_flow_match(struct tc_cls_flower_offload *flow,
if (nfp_netdev_is_nfp_repr(netdev)) {
netdev_repr = netdev_priv(netdev);
nfp_tunnel_write_macs(netdev_repr->app);
+
+ /* Store the tunnel destination in the rule data.
+ * This must be present and be an exact match.
+ */
+ nfp_flow->nfp_tun_ipv4_addr = tun_dst;
+ nfp_tunnel_add_ipv4_off(netdev_repr->app, tun_dst);
}
}
diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
index 637372ba8f55..3d9537ebdea4 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
@@ -306,6 +306,7 @@ nfp_flower_allocate_new(struct nfp_fl_key_ls *key_layer)
if (!flow_pay->action_data)
goto err_free_mask;
+ flow_pay->nfp_tun_ipv4_addr = 0;
flow_pay->meta.flags = 0;
spin_lock_init(&flow_pay->lock);
@@ -415,6 +416,9 @@ nfp_flower_del_offload(struct nfp_app *app, struct net_device *netdev,
if (err)
goto err_free_flow;
+ if (nfp_flow->nfp_tun_ipv4_addr)
+ nfp_tunnel_del_ipv4_off(app, nfp_flow->nfp_tun_ipv4_addr);
+
err = nfp_flower_xmit_flow(netdev, nfp_flow,
NFP_FLOWER_CMSG_TYPE_FLOW_DEL);
if (err)
diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
index 34be85803020..185505140f5e 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
@@ -32,6 +32,7 @@
*/
#include <linux/etherdevice.h>
+#include <linux/inetdevice.h>
#include <linux/idr.h>
#include <net/dst_metadata.h>
@@ -40,6 +41,30 @@
#include "../nfp_net_repr.h"
#include "../nfp_net.h"
+#define NFP_FL_IPV4_ADDRS_MAX 32
+
+/**
+ * struct nfp_tun_ipv4_addr - set the IP address list on the NFP
+ * @count: number of IPs populated in the array
+ * @ipv4_addr:  array of NFP_FL_IPV4_ADDRS_MAX 32 bit IPv4 addresses
+ */
+struct nfp_tun_ipv4_addr {
+ __be32 count;
+ __be32 ipv4_addr[NFP_FL_IPV4_ADDRS_MAX];
+};
+
+/**
+ * struct nfp_ipv4_addr_entry - cached IPv4 addresses
+ * @ipv4_addr: IP address
+ * @ref_count: number of rules currently using this IP
+ * @list: list pointer
+ */
+struct nfp_ipv4_addr_entry {
+ __be32 ipv4_addr;
+ int ref_count;
+ struct list_head list;
+};
+
/**
* struct nfp_tun_mac_addr - configure MAC address of tunnel EP on NFP
* @reserved: reserved for future use
@@ -112,6 +137,87 @@ nfp_flower_xmit_tun_conf(struct nfp_app *app, u8 mtype, u16 plen, void *pdata)
return 0;
}
+static void nfp_tun_write_ipv4_list(struct nfp_app *app)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct nfp_tun_ipv4_addr payload;
+ struct list_head *ptr, *storage;
+ int count;
+
+ memset(&payload, 0, sizeof(struct nfp_tun_ipv4_addr));
+ mutex_lock(&priv->nfp_ipv4_off_lock);
+ count = 0;
+ list_for_each_safe(ptr, storage, &priv->nfp_ipv4_off_list) {
+ if (count >= NFP_FL_IPV4_ADDRS_MAX) {
+ mutex_unlock(&priv->nfp_ipv4_off_lock);
+ nfp_flower_cmsg_warn(app, "IPv4 offload exceeds limit.\n");
+ return;
+ }
+ entry = list_entry(ptr, struct nfp_ipv4_addr_entry, list);
+ payload.ipv4_addr[count++] = entry->ipv4_addr;
+ }
+ payload.count = cpu_to_be32(count);
+ mutex_unlock(&priv->nfp_ipv4_off_lock);
+
+ nfp_flower_xmit_tun_conf(app, NFP_FLOWER_CMSG_TYPE_TUN_IPS,
+ sizeof(struct nfp_tun_ipv4_addr),
+ &payload);
+}
+
+void nfp_tunnel_add_ipv4_off(struct nfp_app *app, __be32 ipv4)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct list_head *ptr, *storage;
+
+ mutex_lock(&priv->nfp_ipv4_off_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_ipv4_off_list) {
+ entry = list_entry(ptr, struct nfp_ipv4_addr_entry, list);
+ if (entry->ipv4_addr == ipv4) {
+ entry->ref_count++;
+ mutex_unlock(&priv->nfp_ipv4_off_lock);
+ return;
+ }
+ }
+
+ entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry) {
+ mutex_unlock(&priv->nfp_ipv4_off_lock);
+ nfp_flower_cmsg_warn(app, "Mem error when offloading IP address.\n");
+ return;
+ }
+ entry->ipv4_addr = ipv4;
+ entry->ref_count = 1;
+ list_add_tail(&entry->list, &priv->nfp_ipv4_off_list);
+ mutex_unlock(&priv->nfp_ipv4_off_lock);
+
+ nfp_tun_write_ipv4_list(app);
+}
+
+void nfp_tunnel_del_ipv4_off(struct nfp_app *app, __be32 ipv4)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct list_head *ptr, *storage;
+
+ mutex_lock(&priv->nfp_ipv4_off_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_ipv4_off_list) {
+ entry = list_entry(ptr, struct nfp_ipv4_addr_entry, list);
+ if (entry->ipv4_addr == ipv4) {
+ entry->ref_count--;
+ if (!entry->ref_count) {
+ list_del(&entry->list);
+ kfree(entry);
+ }
+ break;
+ }
+ }
+ mutex_unlock(&priv->nfp_ipv4_off_lock);
+
+ nfp_tun_write_ipv4_list(app);
+}
+
void nfp_tunnel_write_macs(struct nfp_app *app)
{
struct nfp_flower_priv *priv = app->priv;
@@ -324,6 +430,10 @@ int nfp_tunnel_config_start(struct nfp_app *app)
INIT_LIST_HEAD(&priv->nfp_mac_index_list);
ida_init(&priv->nfp_mac_off_ids);
+ /* Initialise priv data for IPv4 offloading. */
+ mutex_init(&priv->nfp_ipv4_off_lock);
+ INIT_LIST_HEAD(&priv->nfp_ipv4_off_list);
+
err = register_netdevice_notifier(&priv->nfp_tun_mac_nb);
if (err)
goto err_free_mac_ida;
@@ -346,6 +456,7 @@ void nfp_tunnel_config_stop(struct nfp_app *app)
struct nfp_tun_mac_offload_entry *mac_entry;
struct nfp_flower_priv *priv = app->priv;
struct nfp_tun_mac_non_nfp_idx *mac_idx;
+ struct nfp_ipv4_addr_entry *ip_entry;
struct list_head *ptr, *storage;
unregister_netdevice_notifier(&priv->nfp_tun_mac_nb);
@@ -371,4 +482,13 @@ void nfp_tunnel_config_stop(struct nfp_app *app)
mutex_unlock(&priv->nfp_mac_index_lock);
ida_destroy(&priv->nfp_mac_off_ids);
+
+ /* Free any memory that may be occupied by ipv4 list. */
+ mutex_lock(&priv->nfp_ipv4_off_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_ipv4_off_list) {
+ ip_entry = list_entry(ptr, struct nfp_ipv4_addr_entry, list);
+ list_del(&ip_entry->list);
+ kfree(ip_entry);
+ }
+ mutex_unlock(&priv->nfp_ipv4_off_lock);
}
--
2.1.4
* [PATCH net-next 6/7] nfp: flower vxlan neighbour offload
From: Simon Horman @ 2017-09-25 10:23 UTC (permalink / raw)
To: David Miller, Jakub Kicinski
Cc: netdev, oss-drivers, John Hurley, Simon Horman
From: John Hurley <john.hurley@netronome.com>
Receive a request when the NFP does not know the next hop for a packet
that is to be encapsulated in a VXLAN tunnel. Do a route lookup, determine
the next hop entry and update the neighbour table on the NFP. Monitor the
kernel neighbour table for link changes and update the NFP with relevant
information. Overwrite routes with zero values on the NFP when they expire.
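The request/reply exchange can be modelled in userspace C. The struct
layouts mirror the ones this patch adds (same field names and order),
but the fill_neigh() helper is an illustrative stub, not driver code:
a real implementation derives the MACs and source IP from a kernel
route/neighbour lookup. Passing NULL MACs models the "overwrite with
zero values on expiry" behaviour described above:

```c
#include <stdint.h>
#include <string.h>

/* Mirrors struct nfp_tun_req_route_ipv4; all fields big-endian on wire. */
struct tun_req_route_ipv4 {
	uint32_t ingress_port;
	uint32_t ipv4_addr;
	uint32_t reserved[2];
};

/* Mirrors struct nfp_tun_neigh. */
struct tun_neigh {
	uint32_t dst_ipv4;
	uint32_t src_ipv4;
	uint8_t  dst_addr[6];
	uint8_t  src_addr[6];
	uint32_t port_id;
};

/* Build a TUN_NEIGH reply for a NO_NEIGH request, given the (stubbed)
 * result of a route/neighbour lookup. NULL MACs produce a zeroed entry,
 * which is what expiry writes back to the NFP. */
static void fill_neigh(struct tun_neigh *out,
		       const struct tun_req_route_ipv4 *req,
		       uint32_t src_ipv4,
		       const uint8_t dst_mac[6],
		       const uint8_t src_mac[6])
{
	memset(out, 0, sizeof(*out));
	out->dst_ipv4 = req->ipv4_addr;	/* keep wire byte order */
	out->src_ipv4 = src_ipv4;
	if (dst_mac)
		memcpy(out->dst_addr, dst_mac, 6);
	if (src_mac)
		memcpy(out->src_addr, src_mac, 6);
	out->port_id = req->ingress_port;	/* egress via the requesting port */
}
```

Note the reply echoes the destination IP and ingress port from the
request unchanged, so no byte-order conversion is needed for them.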
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
---
drivers/net/ethernet/netronome/nfp/flower/cmsg.c | 6 +
drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 2 +
drivers/net/ethernet/netronome/nfp/flower/main.h | 7 +
.../ethernet/netronome/nfp/flower/tunnel_conf.c | 253 +++++++++++++++++++++
4 files changed, 268 insertions(+)
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
index b756006dba6f..862787daaa68 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
@@ -181,6 +181,12 @@ nfp_flower_cmsg_process_one_rx(struct nfp_app *app, struct sk_buff *skb)
case NFP_FLOWER_CMSG_TYPE_FLOW_STATS:
nfp_flower_rx_flow_stats(app, skb);
break;
+ case NFP_FLOWER_CMSG_TYPE_NO_NEIGH:
+ nfp_tunnel_request_route(app, skb);
+ break;
+ case NFP_FLOWER_CMSG_TYPE_TUN_NEIGH:
+ /* Acks from the NFP that the route is added - ignore. */
+ break;
default:
nfp_flower_cmsg_warn(app, "Cannot handle invalid repr control type %u\n",
type);
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
index 6540bb1ceefb..1dc72a1ed577 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
@@ -317,7 +317,9 @@ enum nfp_flower_cmsg_type_port {
NFP_FLOWER_CMSG_TYPE_FLOW_DEL = 2,
NFP_FLOWER_CMSG_TYPE_MAC_REPR = 7,
NFP_FLOWER_CMSG_TYPE_PORT_MOD = 8,
+ NFP_FLOWER_CMSG_TYPE_NO_NEIGH = 10,
NFP_FLOWER_CMSG_TYPE_TUN_MAC = 11,
+ NFP_FLOWER_CMSG_TYPE_TUN_NEIGH = 13,
NFP_FLOWER_CMSG_TYPE_TUN_IPS = 14,
NFP_FLOWER_CMSG_TYPE_FLOW_STATS = 15,
NFP_FLOWER_CMSG_TYPE_PORT_ECHO = 16,
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index 53306af6cfe8..93ad969c3653 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -87,12 +87,15 @@ struct nfp_fl_stats_id {
* @nfp_mac_off_list: List of MAC addresses to offload
* @nfp_mac_index_list: List of unique 8-bit indexes for non NFP netdevs
* @nfp_ipv4_off_list: List of IPv4 addresses to offload
+ * @nfp_neigh_off_list: List of neighbour offloads
* @nfp_mac_off_lock: Lock for the MAC address list
* @nfp_mac_index_lock: Lock for the MAC index list
* @nfp_ipv4_off_lock: Lock for the IPv4 address list
+ * @nfp_neigh_off_lock: Lock for the neighbour address list
* @nfp_mac_off_ids: IDA to manage id assignment for offloaded macs
* @nfp_mac_off_count: Number of MACs in address list
* @nfp_tun_mac_nb: Notifier to monitor link state
+ * @nfp_tun_neigh_nb: Notifier to monitor neighbour state
*/
struct nfp_flower_priv {
struct nfp_app *app;
@@ -108,12 +111,15 @@ struct nfp_flower_priv {
struct list_head nfp_mac_off_list;
struct list_head nfp_mac_index_list;
struct list_head nfp_ipv4_off_list;
+ struct list_head nfp_neigh_off_list;
struct mutex nfp_mac_off_lock;
struct mutex nfp_mac_index_lock;
struct mutex nfp_ipv4_off_lock;
+ struct mutex nfp_neigh_off_lock;
struct ida nfp_mac_off_ids;
int nfp_mac_off_count;
struct notifier_block nfp_tun_mac_nb;
+ struct notifier_block nfp_tun_neigh_nb;
};
struct nfp_fl_key_ls {
@@ -189,5 +195,6 @@ void nfp_tunnel_config_stop(struct nfp_app *app);
void nfp_tunnel_write_macs(struct nfp_app *app);
void nfp_tunnel_del_ipv4_off(struct nfp_app *app, __be32 ipv4);
void nfp_tunnel_add_ipv4_off(struct nfp_app *app, __be32 ipv4);
+void nfp_tunnel_request_route(struct nfp_app *app, struct sk_buff *skb);
#endif
diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
index 185505140f5e..8c6b88a1306b 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
@@ -33,6 +33,7 @@
#include <linux/etherdevice.h>
#include <linux/inetdevice.h>
+#include <net/netevent.h>
#include <linux/idr.h>
#include <net/dst_metadata.h>
@@ -41,6 +42,44 @@
#include "../nfp_net_repr.h"
#include "../nfp_net.h"
+/**
+ * struct nfp_tun_neigh - neighbour/route entry on the NFP
+ * @dst_ipv4: destination IPv4 address
+ * @src_ipv4: source IPv4 address
+ * @dst_addr: destination MAC address
+ * @src_addr: source MAC address
+ * @port_id: NFP port to output packet on - associated with source IPv4
+ */
+struct nfp_tun_neigh {
+ __be32 dst_ipv4;
+ __be32 src_ipv4;
+ u8 dst_addr[ETH_ALEN];
+ u8 src_addr[ETH_ALEN];
+ __be32 port_id;
+};
+
+/**
+ * struct nfp_tun_req_route_ipv4 - NFP requests a route/neighbour lookup
+ * @ingress_port: ingress port of packet that signalled request
+ * @ipv4_addr: destination ipv4 address for route
+ * @reserved: reserved for future use
+ */
+struct nfp_tun_req_route_ipv4 {
+ __be32 ingress_port;
+ __be32 ipv4_addr;
+ __be32 reserved[2];
+};
+
+/**
+ * struct nfp_ipv4_route_entry - routes that are offloaded to the NFP
+ * @ipv4_addr: destination of route
+ * @list: list pointer
+ */
+struct nfp_ipv4_route_entry {
+ __be32 ipv4_addr;
+ struct list_head list;
+};
+
#define NFP_FL_IPV4_ADDRS_MAX 32
/**
@@ -137,6 +176,197 @@ nfp_flower_xmit_tun_conf(struct nfp_app *app, u8 mtype, u16 plen, void *pdata)
return 0;
}
+static bool nfp_tun_has_route(struct nfp_app *app, __be32 ipv4_addr)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_ipv4_route_entry *entry;
+ struct list_head *ptr, *storage;
+
+ mutex_lock(&priv->nfp_neigh_off_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_neigh_off_list) {
+ entry = list_entry(ptr, struct nfp_ipv4_route_entry, list);
+ if (entry->ipv4_addr == ipv4_addr) {
+ mutex_unlock(&priv->nfp_neigh_off_lock);
+ return true;
+ }
+ }
+ mutex_unlock(&priv->nfp_neigh_off_lock);
+ return false;
+}
+
+static void nfp_tun_add_route_to_cache(struct nfp_app *app, __be32 ipv4_addr)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_ipv4_route_entry *entry;
+ struct list_head *ptr, *storage;
+
+ mutex_lock(&priv->nfp_neigh_off_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_neigh_off_list) {
+ entry = list_entry(ptr, struct nfp_ipv4_route_entry, list);
+ if (entry->ipv4_addr == ipv4_addr) {
+ mutex_unlock(&priv->nfp_neigh_off_lock);
+ return;
+ }
+ }
+ entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry) {
+ mutex_unlock(&priv->nfp_neigh_off_lock);
+ nfp_flower_cmsg_warn(app, "Mem error when storing new route.\n");
+ return;
+ }
+
+ entry->ipv4_addr = ipv4_addr;
+ list_add_tail(&entry->list, &priv->nfp_neigh_off_list);
+ mutex_unlock(&priv->nfp_neigh_off_lock);
+}
+
+static void nfp_tun_del_route_from_cache(struct nfp_app *app, __be32 ipv4_addr)
+{
+ struct nfp_flower_priv *priv = app->priv;
+ struct nfp_ipv4_route_entry *entry;
+ struct list_head *ptr, *storage;
+
+ mutex_lock(&priv->nfp_neigh_off_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_neigh_off_list) {
+ entry = list_entry(ptr, struct nfp_ipv4_route_entry, list);
+ if (entry->ipv4_addr == ipv4_addr) {
+ list_del(&entry->list);
+ kfree(entry);
+ break;
+ }
+ }
+ mutex_unlock(&priv->nfp_neigh_off_lock);
+}
+
+static void
+nfp_tun_write_neigh(struct net_device *netdev, struct nfp_app *app,
+ struct flowi4 *flow, struct neighbour *neigh)
+{
+ struct nfp_tun_neigh payload;
+
+ /* Only offload representor IPv4s for now. */
+ if (!nfp_netdev_is_nfp_repr(netdev))
+ return;
+
+ memset(&payload, 0, sizeof(struct nfp_tun_neigh));
+ payload.dst_ipv4 = flow->daddr;
+
+ /* If entry has expired send dst IP with all other fields 0. */
+ if (!(neigh->nud_state & NUD_VALID)) {
+ nfp_tun_del_route_from_cache(app, payload.dst_ipv4);
+ /* Trigger ARP to verify invalid neighbour state. */
+ neigh_event_send(neigh, NULL);
+ goto send_msg;
+ }
+
+ /* Have a valid neighbour so populate rest of entry. */
+ payload.src_ipv4 = flow->saddr;
+ ether_addr_copy(payload.src_addr, netdev->dev_addr);
+ neigh_ha_snapshot(payload.dst_addr, neigh, netdev);
+ payload.port_id = cpu_to_be32(nfp_repr_get_port_id(netdev));
+ /* Add destination of new route to NFP cache. */
+ nfp_tun_add_route_to_cache(app, payload.dst_ipv4);
+
+send_msg:
+ nfp_flower_xmit_tun_conf(app, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH,
+ sizeof(struct nfp_tun_neigh),
+ (unsigned char *)&payload);
+}
+
+static int
+nfp_tun_neigh_event_handler(struct notifier_block *nb, unsigned long event,
+ void *ptr)
+{
+ struct nfp_flower_priv *app_priv;
+ struct netevent_redirect *redir;
+ struct flowi4 flow = {};
+ struct neighbour *n;
+ struct nfp_app *app;
+ struct rtable *rt;
+ int err;
+
+ switch (event) {
+ case NETEVENT_REDIRECT:
+ redir = (struct netevent_redirect *)ptr;
+ n = redir->neigh;
+ break;
+ case NETEVENT_NEIGH_UPDATE:
+ n = (struct neighbour *)ptr;
+ break;
+ default:
+ return NOTIFY_DONE;
+ }
+
+ flow.daddr = *(__be32 *)n->primary_key;
+
+ /* Only concerned with route changes for representors. */
+ if (!nfp_netdev_is_nfp_repr(n->dev))
+ return NOTIFY_DONE;
+
+ app_priv = container_of(nb, struct nfp_flower_priv, nfp_tun_neigh_nb);
+ app = app_priv->app;
+
+ /* Only concerned with changes to routes already added to NFP. */
+ if (!nfp_tun_has_route(app, flow.daddr))
+ return NOTIFY_DONE;
+
+#if IS_ENABLED(CONFIG_INET)
+ /* Do a route lookup to populate flow data. */
+ rt = ip_route_output_key(dev_net(n->dev), &flow);
+ err = PTR_ERR_OR_ZERO(rt);
+ if (err)
+ return NOTIFY_DONE;
+#else
+ return NOTIFY_DONE;
+#endif
+
+ flow.flowi4_proto = IPPROTO_UDP;
+ nfp_tun_write_neigh(n->dev, app, &flow, n);
+
+ return NOTIFY_OK;
+}
+
+void nfp_tunnel_request_route(struct nfp_app *app, struct sk_buff *skb)
+{
+ struct nfp_tun_req_route_ipv4 *payload;
+ struct net_device *netdev;
+ struct flowi4 flow = {};
+ struct neighbour *n;
+ struct rtable *rt;
+ int err;
+
+ payload = nfp_flower_cmsg_get_data(skb);
+
+ netdev = nfp_app_repr_get(app, be32_to_cpu(payload->ingress_port));
+ if (!netdev)
+ goto route_fail_warning;
+
+ flow.daddr = payload->ipv4_addr;
+ flow.flowi4_proto = IPPROTO_UDP;
+
+#if IS_ENABLED(CONFIG_INET)
+ /* Do a route lookup on same namespace as ingress port. */
+ rt = ip_route_output_key(dev_net(netdev), &flow);
+ err = PTR_ERR_OR_ZERO(rt);
+ if (err)
+ goto route_fail_warning;
+#else
+ goto route_fail_warning;
+#endif
+
+ /* Get the neighbour entry for the lookup */
+ n = dst_neigh_lookup(&rt->dst, &flow.daddr);
+ ip_rt_put(rt);
+ if (!n)
+ goto route_fail_warning;
+ nfp_tun_write_neigh(n->dev, app, &flow, n);
+ neigh_release(n);
+ return;
+
+route_fail_warning:
+ nfp_flower_cmsg_warn(app, "Requested route not found.\n");
+}
+
static void nfp_tun_write_ipv4_list(struct nfp_app *app)
{
struct nfp_flower_priv *priv = app->priv;
@@ -434,10 +664,19 @@ int nfp_tunnel_config_start(struct nfp_app *app)
mutex_init(&priv->nfp_ipv4_off_lock);
INIT_LIST_HEAD(&priv->nfp_ipv4_off_list);
+ /* Initialise priv data for neighbour offloading. */
+ mutex_init(&priv->nfp_neigh_off_lock);
+ INIT_LIST_HEAD(&priv->nfp_neigh_off_list);
+ priv->nfp_tun_neigh_nb.notifier_call = nfp_tun_neigh_event_handler;
+
err = register_netdevice_notifier(&priv->nfp_tun_mac_nb);
if (err)
goto err_free_mac_ida;
+ err = register_netevent_notifier(&priv->nfp_tun_neigh_nb);
+ if (err)
+ goto err_unreg_mac_nb;
+
/* Parse netdevs already registered for MACs that need offloaded. */
rtnl_lock();
for_each_netdev(&init_net, netdev)
@@ -446,6 +685,8 @@ int nfp_tunnel_config_start(struct nfp_app *app)
return 0;
+err_unreg_mac_nb:
+ unregister_netdevice_notifier(&priv->nfp_tun_mac_nb);
err_free_mac_ida:
ida_destroy(&priv->nfp_mac_off_ids);
return err;
@@ -455,11 +696,13 @@ void nfp_tunnel_config_stop(struct nfp_app *app)
{
struct nfp_tun_mac_offload_entry *mac_entry;
struct nfp_flower_priv *priv = app->priv;
+ struct nfp_ipv4_route_entry *route_entry;
struct nfp_tun_mac_non_nfp_idx *mac_idx;
struct nfp_ipv4_addr_entry *ip_entry;
struct list_head *ptr, *storage;
unregister_netdevice_notifier(&priv->nfp_tun_mac_nb);
+ unregister_netevent_notifier(&priv->nfp_tun_neigh_nb);
/* Free any memory that may be occupied by MAC list. */
mutex_lock(&priv->nfp_mac_off_lock);
@@ -491,4 +734,14 @@ void nfp_tunnel_config_stop(struct nfp_app *app)
kfree(ip_entry);
}
mutex_unlock(&priv->nfp_ipv4_off_lock);
+
+ /* Free any memory that may be occupied by the route list. */
+ mutex_lock(&priv->nfp_neigh_off_lock);
+ list_for_each_safe(ptr, storage, &priv->nfp_neigh_off_list) {
+ route_entry = list_entry(ptr, struct nfp_ipv4_route_entry,
+ list);
+ list_del(&route_entry->list);
+ kfree(route_entry);
+ }
+ mutex_unlock(&priv->nfp_neigh_off_lock);
}
--
2.1.4
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH net-next 7/7] nfp: flower vxlan neighbour keep-alive
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
` (5 preceding siblings ...)
2017-09-25 10:23 ` [PATCH net-next 6/7] nfp: flower vxlan neighbour offload Simon Horman
@ 2017-09-25 10:23 ` Simon Horman
2017-09-25 18:32 ` Or Gerlitz
2017-09-25 11:00 ` [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Jakub Kicinski
` (2 subsequent siblings)
9 siblings, 1 reply; 29+ messages in thread
From: Simon Horman @ 2017-09-25 10:23 UTC (permalink / raw)
To: David Miller, Jakub Kicinski
Cc: netdev, oss-drivers, John Hurley, Simon Horman
From: John Hurley <john.hurley@netronome.com>
Periodically receive messages containing the destination IPs of tunnels
that have recently forwarded traffic. Update the 'used' value of the
neighbour entries for these IPs' next hops.
Rather than letting the neighbour entries expire on timeout, this triggers
an ARP to verify the connection. From an NFP perspective, packets will not
fall back mid-flow unless the link is verified to be down.
Signed-off-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
---
drivers/net/ethernet/netronome/nfp/flower/cmsg.c | 3 +
drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 1 +
drivers/net/ethernet/netronome/nfp/flower/main.h | 1 +
.../ethernet/netronome/nfp/flower/tunnel_conf.c | 64 ++++++++++++++++++++++
4 files changed, 69 insertions(+)
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
index 862787daaa68..6b71c719deba 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.c
@@ -184,6 +184,9 @@ nfp_flower_cmsg_process_one_rx(struct nfp_app *app, struct sk_buff *skb)
case NFP_FLOWER_CMSG_TYPE_NO_NEIGH:
nfp_tunnel_request_route(app, skb);
break;
+ case NFP_FLOWER_CMSG_TYPE_ACTIVE_TUNS:
+ nfp_tunnel_keep_alive(app, skb);
+ break;
case NFP_FLOWER_CMSG_TYPE_TUN_NEIGH:
/* Acks from the NFP that the route is added - ignore. */
break;
diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
index 1dc72a1ed577..504ddaa21701 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
@@ -319,6 +319,7 @@ enum nfp_flower_cmsg_type_port {
NFP_FLOWER_CMSG_TYPE_PORT_MOD = 8,
NFP_FLOWER_CMSG_TYPE_NO_NEIGH = 10,
NFP_FLOWER_CMSG_TYPE_TUN_MAC = 11,
+ NFP_FLOWER_CMSG_TYPE_ACTIVE_TUNS = 12,
NFP_FLOWER_CMSG_TYPE_TUN_NEIGH = 13,
NFP_FLOWER_CMSG_TYPE_TUN_IPS = 14,
NFP_FLOWER_CMSG_TYPE_FLOW_STATS = 15,
diff --git a/drivers/net/ethernet/netronome/nfp/flower/main.h b/drivers/net/ethernet/netronome/nfp/flower/main.h
index 93ad969c3653..12c319a219d8 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/main.h
+++ b/drivers/net/ethernet/netronome/nfp/flower/main.h
@@ -196,5 +196,6 @@ void nfp_tunnel_write_macs(struct nfp_app *app);
void nfp_tunnel_del_ipv4_off(struct nfp_app *app, __be32 ipv4);
void nfp_tunnel_add_ipv4_off(struct nfp_app *app, __be32 ipv4);
void nfp_tunnel_request_route(struct nfp_app *app, struct sk_buff *skb);
+void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb);
#endif
diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
index 8c6b88a1306b..c495f8f38506 100644
--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
@@ -36,12 +36,36 @@
#include <net/netevent.h>
#include <linux/idr.h>
#include <net/dst_metadata.h>
+#include <net/arp.h>
#include "cmsg.h"
#include "main.h"
#include "../nfp_net_repr.h"
#include "../nfp_net.h"
+#define NFP_FL_MAX_ROUTES 32
+
+/**
+ * struct nfp_tun_active_tuns - periodic message of active tunnels
+ * @seq: sequence number of the message
+ * @count: number of tunnels reported in message
+ * @flags: options part of the request
+ * @ipv4: dest IPv4 address of active route
+ * @egress_port: port the encapsulated packet egressed
+ * @extra: reserved for future use
+ * @tun_info: tunnels that have sent traffic in reported period
+ */
+struct nfp_tun_active_tuns {
+ __be32 seq;
+ __be32 count;
+ __be32 flags;
+ struct route_ip_info {
+ __be32 ipv4;
+ __be32 egress_port;
+ __be32 extra[2];
+ } tun_info[];
+};
+
/**
* struct nfp_tun_neigh - neighbour/route entry on the NFP
* @dst_ipv4: destination IPv4 address
@@ -147,6 +171,46 @@ struct nfp_tun_mac_non_nfp_idx {
struct list_head list;
};
+void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb)
+{
+ struct nfp_tun_active_tuns *payload;
+ struct net_device *netdev;
+ int count, i, pay_len;
+ struct neighbour *n;
+ __be32 ipv4_addr;
+ u32 port;
+
+ payload = nfp_flower_cmsg_get_data(skb);
+ count = be32_to_cpu(payload->count);
+ if (count > NFP_FL_MAX_ROUTES) {
+ nfp_flower_cmsg_warn(app, "Tunnel keep-alive request exceeds max routes.\n");
+ return;
+ }
+
+ pay_len = nfp_flower_cmsg_get_data_len(skb);
+ if (pay_len != sizeof(struct nfp_tun_active_tuns) +
+ sizeof(struct route_ip_info) * count) {
+ nfp_flower_cmsg_warn(app, "Corruption in tunnel keep-alive message.\n");
+ return;
+ }
+
+ for (i = 0; i < count; i++) {
+ ipv4_addr = payload->tun_info[i].ipv4;
+ port = be32_to_cpu(payload->tun_info[i].egress_port);
+ netdev = nfp_app_repr_get(app, port);
+ if (!netdev)
+ continue;
+
+ n = neigh_lookup(&arp_tbl, &ipv4_addr, netdev);
+ if (!n)
+ continue;
+
+ /* Update the used timestamp of neighbour */
+ neigh_event_send(n, NULL);
+ neigh_release(n);
+ }
+}
+
static bool nfp_tun_is_netdev_to_offload(struct net_device *netdev)
{
if (!netdev->rtnl_link_ops)
--
2.1.4
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
` (6 preceding siblings ...)
2017-09-25 10:23 ` [PATCH net-next 7/7] nfp: flower vxlan neighbour keep-alive Simon Horman
@ 2017-09-25 11:00 ` Jakub Kicinski
2017-09-25 15:25 ` Or Gerlitz
2017-09-27 4:29 ` David Miller
9 siblings, 0 replies; 29+ messages in thread
From: Jakub Kicinski @ 2017-09-25 11:00 UTC (permalink / raw)
To: Simon Horman; +Cc: David Miller, netdev, oss-drivers
On Mon, 25 Sep 2017 12:23:34 +0200, Simon Horman wrote:
> From: Simon Horman <simon.horman@netronome.com>
>
> John says:
>
> This patch set allows offloading of TC flower match and set tunnel fields
> to the NFP. The initial focus is on VXLAN traffic. Due to the current
> state of the NFP firmware, only VXLAN traffic on well known port 4789 is
handled. The match and action fields must explicitly set this value to be
> supported. Tunnel end point information is also offloaded to the NFP for
> both encapsulation and decapsulation. The NFP expects 3 separate data sets
> to be supplied.
...
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
` (7 preceding siblings ...)
2017-09-25 11:00 ` [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Jakub Kicinski
@ 2017-09-25 15:25 ` Or Gerlitz
2017-09-25 17:04 ` Simon Horman
2017-09-27 4:29 ` David Miller
9 siblings, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2017-09-25 15:25 UTC (permalink / raw)
To: Simon Horman; +Cc: David Miller, Jakub Kicinski, Linux Netdev List, oss-drivers
On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
<simon.horman@netronome.com> wrote:
> From: Simon Horman <simon.horman@netronome.com>
>
> John says:
>
> This patch set allows offloading of TC flower match and set tunnel fields
> to the NFP. The initial focus is on VXLAN traffic. Due to the current
> state of the NFP firmware, only VXLAN traffic on well known port 4789 is
> handled. The match and action fields must explicitly set this value to be
> supported. Tunnel end point information is also offloaded to the NFP for
> both encapsulation and decapsulation. The NFP expects 3 separate data sets
> to be supplied.
> For decapsulation, 2 separate lists exist; a list of MAC addresses
> referenced by an index comprised of the port number, and a list of IP
> addresses. These IP addresses are not connected to a MAC or port.
Do these IP addresses exist on the host kernel SW stack? Can the same
set of TC rules be fully functional and generate the same traffic
pattern when set to run in SW (skip_hw)?
> The MAC
> addresses can be written as a block or one at a time (because they have an
> index, previous values can be overwritten) while the IP addresses are
> always written as a list of all the available IPs. Because the MAC address
> used as a tunnel end point may be associated with a physical port or may
> be a virtual netdev like an OVS bridge, we do not know which addresses
> should be offloaded. For this reason, all MAC addresses of active netdevs
> are offloaded to the NFP. A notifier checks for changes to any currently
> offloaded MACs or any new netdevs that may occur. For IP addresses, the
> tunnel end point used in the rules is known, as the destination IP address
> must be specified in the flower classifier rule. When a new IP address
> appears in a rule, the IP address is offloaded. The IP is removed from the
> offloaded list when all rules matching on that IP are deleted.
>
> For encapsulation, a next hop table is updated on the NFP that contains
> the source/dest IPs, MACs and egress port. These are written individually
> when requested. If the NFP tries to encapsulate a packet but does not know
> the next hop, then it sends a request to the host. The host carries out a
> route lookup and populates the given entry on the NFP table. A notifier
> also exists to check for any links changing or going down in the kernel
> next hop table. If an offloaded next hop entry is removed from the kernel
> then it is also removed on the NFP.
>
> The NFP periodically sends a message to the host telling it which tunnel
> ports have packets egressing the system. The host uses this information to
> update the used value in the neighbour entry. This means that, rather than
> expire when it times out, the kernel will send an ARP to check if the link
> is still live. From an NFP perspective, this means that valid entries will
> not be removed from its next hop table.
>
> John Hurley (7):
> nfp: add helper to get flower cmsg length
> nfp: compile flower vxlan tunnel metadata match fields
> nfp: compile flower vxlan tunnel set actions
> nfp: offload flower vxlan endpoint MAC addresses
> nfp: offload vxlan IPv4 endpoints of flower rules
> nfp: flower vxlan neighbour offload
> nfp: flower vxlan neighbour keep-alive
>
> drivers/net/ethernet/netronome/nfp/Makefile | 3 +-
> drivers/net/ethernet/netronome/nfp/flower/action.c | 169 ++++-
> drivers/net/ethernet/netronome/nfp/flower/cmsg.c | 16 +-
> drivers/net/ethernet/netronome/nfp/flower/cmsg.h | 87 ++-
> drivers/net/ethernet/netronome/nfp/flower/main.c | 13 +
> drivers/net/ethernet/netronome/nfp/flower/main.h | 35 +
> drivers/net/ethernet/netronome/nfp/flower/match.c | 75 +-
> .../net/ethernet/netronome/nfp/flower/metadata.c | 2 +-
> .../net/ethernet/netronome/nfp/flower/offload.c | 74 +-
> .../ethernet/netronome/nfp/flower/tunnel_conf.c | 811 +++++++++++++++++++++
> 10 files changed, 1243 insertions(+), 42 deletions(-)
> create mode 100644 drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
>
> --
> 2.1.4
>
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-25 15:25 ` Or Gerlitz
@ 2017-09-25 17:04 ` Simon Horman
2017-09-26 10:15 ` Jiri Benc
0 siblings, 1 reply; 29+ messages in thread
From: Simon Horman @ 2017-09-25 17:04 UTC (permalink / raw)
To: Or Gerlitz
Cc: David Miller, Jakub Kicinski, Linux Netdev List, oss-drivers,
John Hurley
On Mon, Sep 25, 2017 at 06:25:03PM +0300, Or Gerlitz wrote:
> On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
> <simon.horman@netronome.com> wrote:
> > From: Simon Horman <simon.horman@netronome.com>
> >
> > John says:
> >
> > This patch set allows offloading of TC flower match and set tunnel fields
> > to the NFP. The initial focus is on VXLAN traffic. Due to the current
> > state of the NFP firmware, only VXLAN traffic on well known port 4789 is
> > handled. The match and action fields must explicitly set this value to be
> > supported. Tunnel end point information is also offloaded to the NFP for
> > both encapsulation and decapsulation. The NFP expects 3 separate data sets
> > to be supplied.
>
> > For decapsulation, 2 separate lists exist; a list of MAC addresses
> > referenced by an index comprised of the port number, and a list of IP
> > addresses. These IP addresses are not connected to a MAC or port.
>
> Do these IP addresses exist on the host kernel SW stack? can the same
> set of TC rules be fully functional and generate the same traffic
> pattern when set to run in SW (skip_hw)?
Hi Or,
I asked John (now CCed) about this and his response was:
The MAC addresses are extracted from the netdevs already loaded in the
kernel and are monitored for any changes. The IP addresses are slightly
different in that they are extracted from the rules themselves. We make the
assumption that, if a packet is decapsulated at the end point and a match
is attempted on the IP address, that this IP address should be recognised
in the kernel. That being the case, the same traffic pattern should be
witnessed if the skip_hw flag is applied.
* Re: [PATCH net-next 7/7] nfp: flower vxlan neighbour keep-alive
2017-09-25 10:23 ` [PATCH net-next 7/7] nfp: flower vxlan neighbour keep-alive Simon Horman
@ 2017-09-25 18:32 ` Or Gerlitz
[not found] ` <CAK+XE=mVKbAqYwSYvLb0y48O9D-Oq+B_bks7c9iwjsm0j7oYvw@mail.gmail.com>
0 siblings, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2017-09-25 18:32 UTC (permalink / raw)
To: Simon Horman
Cc: David Miller, Jakub Kicinski, Linux Netdev List, oss-drivers,
John Hurley
On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
<simon.horman@netronome.com> wrote:
> From: John Hurley <john.hurley@netronome.com>
>
> Periodically receive messages containing the destination IPs of tunnels
> that have recently forwarded traffic. Update the neighbour entries 'used'
> value for these IPs next hop.
Are you proactively sending keep alive messages from the driver or the
fw? what's wrong with the probes sent by the kernel NUD subsystem?
In our driver we also update the used value for neighs of offloaded
tunnels, we do it based on flow counters for the offloaded tunnels
which is an evidence for activity. Any reason for you not to apply a
similar practice?
Or.
* Re: [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields
2017-09-25 10:23 ` [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields Simon Horman
@ 2017-09-25 18:35 ` Or Gerlitz
2017-09-26 13:58 ` John Hurley
0 siblings, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2017-09-25 18:35 UTC (permalink / raw)
To: Simon Horman
Cc: David Miller, Jakub Kicinski, Linux Netdev List, oss-drivers,
John Hurley
On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
<simon.horman@netronome.com> wrote:
> From: John Hurley <john.hurley@netronome.com>
>
> Compile ovs-tc flower vxlan metadata match fields for offloading. Only
anything in the nfp kernel bits has direct relation to ovs? what?
> +++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
> @@ -52,8 +52,25 @@
> BIT(FLOW_DISSECTOR_KEY_PORTS) | \
> BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) | \
> BIT(FLOW_DISSECTOR_KEY_VLAN) | \
> + BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) | \
> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | \
this series takes care of IPv6 tunnels too?
* Re: [PATCH net-next 7/7] nfp: flower vxlan neighbour keep-alive
[not found] ` <CAK+XE=mVKbAqYwSYvLb0y48O9D-Oq+B_bks7c9iwjsm0j7oYvw@mail.gmail.com>
@ 2017-09-26 9:37 ` John Hurley
2017-09-26 12:44 ` Or Gerlitz
0 siblings, 1 reply; 29+ messages in thread
From: John Hurley @ 2017-09-26 9:37 UTC (permalink / raw)
To: Or Gerlitz
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers
[ Reposting in plaintext only ]
On Mon, Sep 25, 2017 at 7:32 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>
> On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
> <simon.horman@netronome.com> wrote:
> > From: John Hurley <john.hurley@netronome.com>
> >
> > Periodically receive messages containing the destination IPs of tunnels
> > that have recently forwarded traffic. Update the neighbour entries 'used'
> > value for these IPs next hop.
>
> Are you proactively sending keep alive messages from the driver or the
> fw? what's wrong with the probes sent by the kernel NUD subsystem?
>
Hi Or,
The messages are sent from the FW to the driver. They indicate which
offloaded tunnels are currently active.
>
> In our driver we also update the used value for neighs of offloaded
> tunnels, we do it based on flow counters for the offloaded tunnels
> which is an evidence for activity. Any reason for you not to apply a
> similar practice?
Yes, this would provide the same outcome. Because our firmware already
offered these messages, we chose to support this approach.
>
>
> Or.
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-25 17:04 ` Simon Horman
@ 2017-09-26 10:15 ` Jiri Benc
2017-09-26 12:41 ` Or Gerlitz
0 siblings, 1 reply; 29+ messages in thread
From: Jiri Benc @ 2017-09-26 10:15 UTC (permalink / raw)
To: Simon Horman
Cc: Or Gerlitz, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers, John Hurley, Paolo Abeni, Eli Cohen, Paul Blakey
On Mon, 25 Sep 2017 19:04:53 +0200, Simon Horman wrote:
> The MAC addresses are extracted from the netdevs already loaded in the
> kernel and are monitored for any changes. The IP addresses are slightly
> different in that they are extracted from the rules themselves. We make the
> assumption that, if a packet is decapsulated at the end point and a match
> is attempted on the IP address,
You lost me here, I'm afraid. What do you mean by "match"?
> that this IP address should be recognised
> in the kernel. That being the case, the same traffic pattern should be
> witnessed if the skip_hw flag is applied.
Just to be really sure that this works correctly, can you confirm that
this will match the packet:
ip link add vxlan0 type vxlan dstport 4789 dev eth0 external
ip link set dev vxlan0 up
tc qdisc add dev vxlan0 ingress
ethtool -K eth0 hw-tc-offload on
tc filter add dev vxlan0 protocol ip parent ffff: flower enc_key_id 102 \
enc_dst_port 4789 src_ip 3.4.5.6 skip_sw action [...]
while this one will NOT match:
ip link add vxlan0 type vxlan dstport 4789 dev eth0 external
ip link set dev vxlan0 up
tc qdisc add dev eth0 ingress
ethtool -K eth0 hw-tc-offload on
tc filter add dev eth0 protocol ip parent ffff: flower enc_key_id 102 \
enc_dst_port 4789 src_ip 3.4.5.6 skip_sw action [...]
We found that with mlx5, the second one actually matches, too. Which is
a very serious bug. (Adding Paolo who found this. And adding a few more
Mellanox guys to be aware of the bug.)
Jiri
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-26 10:15 ` Jiri Benc
@ 2017-09-26 12:41 ` Or Gerlitz
2017-09-26 12:51 ` Jiri Benc
0 siblings, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2017-09-26 12:41 UTC (permalink / raw)
To: Jiri Benc
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers, John Hurley, Paolo Abeni, Paul Blakey, Jiri Pirko,
Roi Dayan
On Tue, Sep 26, 2017 at 1:15 PM, Jiri Benc <jbenc@redhat.com> wrote:
> On Mon, 25 Sep 2017 19:04:53 +0200, Simon Horman wrote:
>> The MAC addresses are extracted from the netdevs already loaded in the
>> kernel and are monitored for any changes. The IP addresses are slightly
>> different in that they are extracted from the rules themselves. We make the
>> assumption that, if a packet is decapsulated at the end point and a match
>> is attempted on the IP address,
>
> You lost me here, I'm afraid. What do you mean by "match"?
>
>> that this IP address should be recognised
>> in the kernel. That being the case, the same traffic pattern should be
>> witnessed if the skip_hw flag is applied.
>
> Just to be really sure that this works correctly, can you confirm that
> this will match the packet:
>
> ip link add vxlan0 type vxlan dstport 4789 dev eth0 external
> ip link set dev vxlan0 up
> tc qdisc add dev vxlan0 ingress
> ethtool -K eth0 hw-tc-offload on
> tc filter add dev vxlan0 protocol ip parent ffff: flower enc_key_id 102 \
> enc_dst_port 4789 src_ip 3.4.5.6 skip_sw action [...]
>
> while this one will NOT match:
what exactly do you mean by "will not match"?
> ip link add vxlan0 type vxlan dstport 4789 dev eth0 external
> ip link set dev vxlan0 up
> tc qdisc add dev eth0 ingress
> ethtool -K eth0 hw-tc-offload on
> tc filter add dev eth0 protocol ip parent ffff: flower enc_key_id 102 \
> enc_dst_port 4789 src_ip 3.4.5.6 skip_sw action [...]
Please note that the way the rule is being set to the HW driver is by delegation
done in flower, see these commits (specifically "Add offload support
using egress Hardware device")
a6e1693 net/sched: cls_flower: Set the filter Hardware device for all use-cases
7091d8c net/sched: cls_flower: Add offload support using egress Hardware device
255cb30 net/sched: act_mirred: Add new tc_action_ops get_dev()
Since the egress port is not a HW port netdev but rather a SW virtual tunnel
netdev, we have some logic in the kernel to delegate the rule programming to
the HW via the HW netdev.
Okay? If not, please elaborate.
> We found that with mlx5, the second one actually matches, too. Which is
> a very serious bug. (Adding Paolo who found this. And adding a few more
> Mellanox guys to be aware of the bug.)
What is the bug in your view?
Or.
* Re: [PATCH net-next 7/7] nfp: flower vxlan neighbour keep-alive
2017-09-26 9:37 ` John Hurley
@ 2017-09-26 12:44 ` Or Gerlitz
0 siblings, 0 replies; 29+ messages in thread
From: Or Gerlitz @ 2017-09-26 12:44 UTC (permalink / raw)
To: John Hurley
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers
On Tue, Sep 26, 2017 at 12:37 PM, John Hurley <john.hurley@netronome.com> wrote:
> [ Reposting in plaintext only ]
> On Mon, Sep 25, 2017 at 7:32 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>
>> On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
>> <simon.horman@netronome.com> wrote:
>> > From: John Hurley <john.hurley@netronome.com>
>> >
>> > Periodically receive messages containing the destination IPs of tunnels
>> > that have recently forwarded traffic. Update the neighbour entries 'used'
>> > value for these IPs next hop.
>>
>> Are you proactively sending keep alive messages from the driver or the
>> fw? what's wrong with the probes sent by the kernel NUD subsystem?
> The messages are sent from the FW to the driver. They indicate which
> offloaded tunnels are currently active.
Do you support flow counters for offloaded TC rules? do you support last-use?
If Y && Y, and you cache the previous counter value somewhere, you can use
this for the "used" feedback. I don't see why you'd add keep-alive rather
than piggyback on the flow counters logic.
>> In our driver we also update the used value for neighs of offloaded
>> tunnels, we do it based on flow counters for the offloaded tunnels
>> which is an evidence for activity. Any reason for you not to apply a
>> similar practice?
> Yes, this would provide the same outcome. Because our firmware already
> offered these messages, we chose to support this approach.
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-26 12:41 ` Or Gerlitz
@ 2017-09-26 12:51 ` Jiri Benc
2017-09-26 14:17 ` Or Gerlitz
0 siblings, 1 reply; 29+ messages in thread
From: Jiri Benc @ 2017-09-26 12:51 UTC (permalink / raw)
To: Or Gerlitz
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers, John Hurley, Paolo Abeni, Paul Blakey, Jiri Pirko,
Roi Dayan
On Tue, 26 Sep 2017 15:41:37 +0300, Or Gerlitz wrote:
> Please note that the way the rule is being set to the HW driver is by delegation
> done in flower, see these commits (specifically "Add offload support
> using egress Hardware device")
It's very well possible the bug is somewhere in net/sched.
> What is the bug in your view?
If you replace skip_sw with skip_hw, the rules have to work
identically. In software, decapsulated packets appear on the vxlan0
interface, not on the eth0 interface. As a consequence, the second
example must not match on such packets. Those packets do not appear on
eth0 with the software-only path; eth0 sees encapsulated packets only. It's
vxlan0 that sees decapsulated packets with attached dst_metadata and
that's the only interface where the flower filter in the example can
match.
Hardware offloaded path must behave identically to the software path.
Jiri
* Re: [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields
2017-09-25 18:35 ` Or Gerlitz
@ 2017-09-26 13:58 ` John Hurley
2017-09-26 14:12 ` Or Gerlitz
0 siblings, 1 reply; 29+ messages in thread
From: John Hurley @ 2017-09-26 13:58 UTC (permalink / raw)
To: Or Gerlitz
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers
On Mon, Sep 25, 2017 at 7:35 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
> <simon.horman@netronome.com> wrote:
>> From: John Hurley <john.hurley@netronome.com>
>>
>> Compile ovs-tc flower vxlan metadata match fields for offloading. Only
>
> anything in the npf kernel bits has direct relation to ovs? what?
>
Sorry, this is a typo and should refer to TC.
>> +++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
>> @@ -52,8 +52,25 @@
>> BIT(FLOW_DISSECTOR_KEY_PORTS) | \
>> BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) | \
>> BIT(FLOW_DISSECTOR_KEY_VLAN) | \
>> + BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) | \
>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | \
>
> this series takes care of IPv6 tunnels too?
IPv6 is not included in this set.
The reason the IPv6 bit is included here is to account for behavior we
have noticed in TC flower.
If, for example, I add a filter with the following match fields:
'protocol ip flower enc_src_ip 10.0.0.1 enc_dst_ip 10.0.0.2
enc_dst_port 4789 enc_key_id 123'
The 'used_keys' value in the dissector marks both IPv4 and IPv6 encap
addresses as 'used'.
I am not sure if this is a bug in TC or that we are expected to check
the enc_control fields to determine if IPv4 or v6 addresses are used.
Including the IPv6 used_keys bit in our whitelist approach allows us
to accept legitimate IPv4 tunnel rules in these situations.
If it is found to be IPv6 when the rule is parsed, it will be rejected here.
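The two-stage check being described here can be modelled in a stand-alone
sketch (the key bits and names below are illustrative, not the nfp sources):
the whitelist admits rules whose used_keys carry the IPv6 bit, and genuine
IPv6 tunnels are then rejected at parse time.

```c
#include <assert.h>

/* Simplified model of the whitelist-then-parse check; bit values
 * and names are made up for illustration. */
#define KEY_ENC_KEYID      (1u << 0)
#define KEY_ENC_IPV4_ADDRS (1u << 1)
#define KEY_ENC_IPV6_ADDRS (1u << 2)
#define KEY_UNSUPPORTED    (1u << 3)

#define SUPPORTED_KEYS \
	(KEY_ENC_KEYID | KEY_ENC_IPV4_ADDRS | KEY_ENC_IPV6_ADDRS)

enum enc_addr_type { ENC_ADDR_IPV4, ENC_ADDR_IPV6 };

static int rule_offloadable(unsigned int used_keys,
			    enum enc_addr_type addr_type)
{
	/* Stage 1: whitelist - any unknown used_keys bit fails. */
	if (used_keys & ~SUPPORTED_KEYS)
		return 0;
	/* Stage 2: parse time - genuine IPv6 tunnels are rejected. */
	if (addr_type == ENC_ADDR_IPV6)
		return 0;
	return 1;
}
```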
* Re: [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields
2017-09-26 13:58 ` John Hurley
@ 2017-09-26 14:12 ` Or Gerlitz
2017-09-26 15:11 ` John Hurley
0 siblings, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2017-09-26 14:12 UTC (permalink / raw)
To: John Hurley
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers
On Tue, Sep 26, 2017 at 4:58 PM, John Hurley <john.hurley@netronome.com> wrote:
> On Mon, Sep 25, 2017 at 7:35 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>> On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
>> <simon.horman@netronome.com> wrote:
>>> From: John Hurley <john.hurley@netronome.com>
>>>
>>> Compile ovs-tc flower vxlan metadata match fields for offloading. Only
>>
>> anything in the npf kernel bits has direct relation to ovs? what?
>>
>
> Sorry, this is a typo and should refer to TC.
>
>>> +++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
>>> @@ -52,8 +52,25 @@
>>> BIT(FLOW_DISSECTOR_KEY_PORTS) | \
>>> BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) | \
>>> BIT(FLOW_DISSECTOR_KEY_VLAN) | \
>>> + BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) | \
>>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
>>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | \
>>
>> this series takes care of IPv6 tunnels too?
>
> IPv6 is not included in this set.
> The reason the IPv6 bit is included here is to account for behavior we
> have noticed in TC flower.
> If, for example, I add a filter with the following match fields:
> 'protocol ip flower enc_src_ip 10.0.0.1 enc_dst_ip 10.0.0.2
> enc_dst_port 4789 enc_key_id 123'
> The 'used_keys' value in the dissector marks both IPv4 and IPv6 encap
> addresses as 'used'.
> I am not sure if this is a bug in TC or that we are expected to check
> the enc_control fields to determine if IPv4 or v6 addresses are used.
you should have your code to check enc_control->addr_type to be
FLOW_DISSECTOR_KEY_IPV4_ADDRS or IPV6_ADDRS
> Including the IPv6 used_keys bit in our whitelist approach allows us
> to accept legitimate IPv4 tunnel rules in these situations.
mmm can please take a look on fl_init_dissector() and tell me if you
see why FLOW_DISSECTOR_KEY_IPV6_ADDRS is set for ipv4 tunnels,
I am not sure.
> If it is found to be IPv6 when the rule is parsed, it will be rejected here.
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-26 12:51 ` Jiri Benc
@ 2017-09-26 14:17 ` Or Gerlitz
2017-09-26 14:31 ` Jiri Benc
2017-09-26 14:50 ` Paolo Abeni
0 siblings, 2 replies; 29+ messages in thread
From: Or Gerlitz @ 2017-09-26 14:17 UTC (permalink / raw)
To: Jiri Benc
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers, John Hurley, Paolo Abeni, Paul Blakey, Jiri Pirko,
Roi Dayan
On Tue, Sep 26, 2017 at 3:51 PM, Jiri Benc <jbenc@redhat.com> wrote:
> On Tue, 26 Sep 2017 15:41:37 +0300, Or Gerlitz wrote:
>> Please note that the way the rule is being set to the HW driver is by delegation
>> done in flower, see these commits (specifically "Add offload support
>> using egress Hardware device")
>
> It's very well possible the bug is somewhere in net/sched.
maybe before/instead you call it a bug, take a look on the design
there and maybe
tell us how to possibly do that otherwise?
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-26 14:17 ` Or Gerlitz
@ 2017-09-26 14:31 ` Jiri Benc
2017-09-26 14:50 ` Paolo Abeni
1 sibling, 0 replies; 29+ messages in thread
From: Jiri Benc @ 2017-09-26 14:31 UTC (permalink / raw)
To: Or Gerlitz
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers, John Hurley, Paolo Abeni, Paul Blakey, Jiri Pirko,
Roi Dayan
On Tue, 26 Sep 2017 17:17:02 +0300, Or Gerlitz wrote:
> maybe before/instead you call it a bug,
But it is a bug. When offloaded, the rules must not behave differently.
That's the fundamental thing about offloading. Here, the rules behave
differently when offloaded and when not. That's a bug.
> take a look on the design there and maybe
> tell us how to possibly do that otherwise?
I don't know the design. It's the responsibility of those who implement
the offloading to do it in the way that it's consistent with the
software path. That has always been the case.
This needs to be fixed. If it can't be fixed, the feature needs to be
reverted. It's not that Linux has to make use of every single offload
supported by hardware. If the offloading cannot be fit into how Linux
works, then the offload can't be supported. There are in fact many
precedents.
Jiri
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-26 14:17 ` Or Gerlitz
2017-09-26 14:31 ` Jiri Benc
@ 2017-09-26 14:50 ` Paolo Abeni
2017-09-27 7:40 ` Jiri Pirko
1 sibling, 1 reply; 29+ messages in thread
From: Paolo Abeni @ 2017-09-26 14:50 UTC (permalink / raw)
To: Or Gerlitz, Jiri Benc
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers, John Hurley, Paul Blakey, Jiri Pirko, Roi Dayan
On Tue, 2017-09-26 at 17:17 +0300, Or Gerlitz wrote:
> On Tue, Sep 26, 2017 at 3:51 PM, Jiri Benc <jbenc@redhat.com> wrote:
> > On Tue, 26 Sep 2017 15:41:37 +0300, Or Gerlitz wrote:
> > > Please note that the way the rule is being set to the HW driver is by delegation
> > > done in flower, see these commits (specifically "Add offload support
> > > using egress Hardware device")
> >
> > It's very well possible the bug is somewhere in net/sched.
>
> maybe before/instead you call it a bug, take a look on the design
> there and maybe
> tell us how to possibly do that otherwise?
The problem, AFAICT, is in the API between flower and NIC implementing
the offload, because in the above example the kernel will call the
offload hook with exactly the same arguments with the 'bad' rule and
the 'good' one - but the 'bad' rule should never match any packets.
I think that can be fixed changing the flower code to invoke the
offload hook for filters with tunnel-based match only if the device
specified in such match has the appropriate type, e.g. given that
currently only vxlan is supported with something like the code below
(very rough and untested, just to give the idea):
Cheers,
Paolo
---
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index d230cb4c8094..ff8476e56d4e 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -243,10 +243,11 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 				struct fl_flow_key *mask,
 				struct cls_fl_filter *f)
 {
-	struct net_device *dev = tp->q->dev_queue->dev;
+	struct net_device *ingress_dev, *dev = tp->q->dev_queue->dev;
 	struct tc_cls_flower_offload cls_flower = {};
 	int err;

+	ingress_dev = dev;
 	if (!tc_can_offload(dev)) {
 		if (tcf_exts_get_dev(dev, &f->exts, &f->hw_dev) ||
 		    (f->hw_dev && !tc_can_offload(f->hw_dev))) {
@@ -259,6 +260,12 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 		f->hw_dev = dev;
 	}

+	if ((dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID) ||
+	     dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_PORTS) ||
+	     // ... list all the other tunnel-based keys ...
+	     ) && strcmp(ingress_dev->rtnl_link_ops->kind, "vxlan"))
+		return tc_skip_sw(f->flags) ? -EINVAL : 0;
+
* Re: [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields
2017-09-26 14:12 ` Or Gerlitz
@ 2017-09-26 15:11 ` John Hurley
2017-09-26 15:33 ` Or Gerlitz
0 siblings, 1 reply; 29+ messages in thread
From: John Hurley @ 2017-09-26 15:11 UTC (permalink / raw)
To: Or Gerlitz
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers
On Tue, Sep 26, 2017 at 3:12 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Tue, Sep 26, 2017 at 4:58 PM, John Hurley <john.hurley@netronome.com> wrote:
>> On Mon, Sep 25, 2017 at 7:35 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>> On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
>>> <simon.horman@netronome.com> wrote:
>>>> From: John Hurley <john.hurley@netronome.com>
>>>>
>>>> Compile ovs-tc flower vxlan metadata match fields for offloading. Only
>>>
>>> anything in the npf kernel bits has direct relation to ovs? what?
>>>
>>
>> Sorry, this is a typo and should refer to TC.
>>
>>>> +++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
>>>> @@ -52,8 +52,25 @@
>>>> BIT(FLOW_DISSECTOR_KEY_PORTS) | \
>>>> BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) | \
>>>> BIT(FLOW_DISSECTOR_KEY_VLAN) | \
>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) | \
>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | \
>>>
>>> this series takes care of IPv6 tunnels too?
>>
>> IPv6 is not included in this set.
>> The reason the IPv6 bit is included here is to account for behavior we
>> have noticed in TC flower.
>> If, for example, I add a filter with the following match fields:
>> 'protocol ip flower enc_src_ip 10.0.0.1 enc_dst_ip 10.0.0.2
>> enc_dst_port 4789 enc_key_id 123'
>> The 'used_keys' value in the dissector marks both IPv4 and IPv6 encap
>> addresses as 'used'.
>> I am not sure if this is a bug in TC or that we are expected to check
>> the enc_control fields to determine if IPv4 or v6 addresses are used.
>
> you should have your code to check enc_control->addr_type to be
> FLOW_DISSECTOR_KEY_IPV4_ADDRS or IPV6_ADDRS
>
>
>> Including the IPv6 used_keys bit in our whitelist approach allows us
>> to accept legitimate IPv4 tunnel rules in these situations.
>
> mmm can please take a look on fl_init_dissector() and tell me if you
> see why FLOW_DISSECTOR_KEY_IPV6_ADDRS is set for ipv4 tunnels,
> I am not sure.
The fl_init_dissector uses the FL_KEY_SET_IF_MASKED macro to set an
array of keys which are then translated to the used_keys values.
The FL_KEY_SET_IF_MASKED takes a 'struct fl_flow_key' as input and
checks if any mask bits are set in a particular field - if so it
eventually marks it as used.
In struct fl_flow_key, the encap ipv4 and ipv6 addresses are
represented as a union of the 2.
Therefore, if we have masked bits set for IPv4, they are also being
set for the IPv6 field.
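The aliasing can be demonstrated with a minimal stand-alone model of that
layout (not the real struct fl_flow_key, just the union shape described
above): setting only the IPv4 mask makes the IPv6 field read as masked too,
which is why both used_keys bits end up set.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy model of the layout: the IPv4 and IPv6 encap addresses share
 * storage through a union, as in struct fl_flow_key. */
struct enc_addrs {
	union {
		struct { uint32_t src, dst; } v4;
		struct { uint8_t src[16], dst[16]; } v6;
	};
};

/* Mimics what FL_KEY_IS_MASKED observes: any nonzero byte in the
 * field means "masked". */
static int is_masked(const void *field, size_t len)
{
	const uint8_t *b = field;

	while (len--)
		if (*b++)
			return 1;
	return 0;
}
```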
>
>> If it is found to be IPv6 when the rule is parsed, it will be rejected here.
* Re: [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields
2017-09-26 15:11 ` John Hurley
@ 2017-09-26 15:33 ` Or Gerlitz
2017-09-26 15:39 ` John Hurley
0 siblings, 1 reply; 29+ messages in thread
From: Or Gerlitz @ 2017-09-26 15:33 UTC (permalink / raw)
To: John Hurley
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers
On Tue, Sep 26, 2017 at 6:11 PM, John Hurley <john.hurley@netronome.com> wrote:
> On Tue, Sep 26, 2017 at 3:12 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>> On Tue, Sep 26, 2017 at 4:58 PM, John Hurley <john.hurley@netronome.com> wrote:
>>> On Mon, Sep 25, 2017 at 7:35 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>>> On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
>>>> <simon.horman@netronome.com> wrote:
>>>>> From: John Hurley <john.hurley@netronome.com>
>>>>>
>>>>> Compile ovs-tc flower vxlan metadata match fields for offloading. Only
>>>>
>>>> anything in the npf kernel bits has direct relation to ovs? what?
>>>>
>>>
>>> Sorry, this is a typo and should refer to TC.
>>>
>>>>> +++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
>>>>> @@ -52,8 +52,25 @@
>>>>> BIT(FLOW_DISSECTOR_KEY_PORTS) | \
>>>>> BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) | \
>>>>> BIT(FLOW_DISSECTOR_KEY_VLAN) | \
>>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) | \
>>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
>>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | \
>>>>
>>>> this series takes care of IPv6 tunnels too?
>>>
>>> IPv6 is not included in this set.
>>> The reason the IPv6 bit is included here is to account for behavior we
>>> have noticed in TC flower.
>>> If, for example, I add a filter with the following match fields:
>>> 'protocol ip flower enc_src_ip 10.0.0.1 enc_dst_ip 10.0.0.2
>>> enc_dst_port 4789 enc_key_id 123'
>>> The 'used_keys' value in the dissector marks both IPv4 and IPv6 encap
>>> addresses as 'used'.
>>> I am not sure if this is a bug in TC or that we are expected to check
>>> the enc_control fields to determine if IPv4 or v6 addresses are used.
>>
>> you should have your code to check enc_control->addr_type to be
>> FLOW_DISSECTOR_KEY_IPV4_ADDRS or IPV6_ADDRS
>>
>>
>>> Including the IPv6 used_keys bit in our whitelist approach allows us
>>> to accept legitimate IPv4 tunnel rules in these situations.
>>
>> mmm can please take a look on fl_init_dissector() and tell me if you
>> see why FLOW_DISSECTOR_KEY_IPV6_ADDRS is set for ipv4 tunnels,
>> I am not sure.
>
>
> The fl_init_dissector uses the FL_KEY_SET_IF_MASKED macro to set an
> array of keys which are then translated to the used_keys values.
> The FL_KEY_SET_IF_MASKED takes a 'struct fl_flow_key' as input and
> checks if any mask bits are set in a particular field - if so it
> eventually marks it as used.
> In struct fl_flow_key, the encap ipv4 and ipv6 addresses are
> represented as a union of the 2.
> Therefore, if we have masked bits set for IPv4, they are also being
> set for the IPv6 field.
I see, do you consider it a bug?
* Re: [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields
2017-09-26 15:33 ` Or Gerlitz
@ 2017-09-26 15:39 ` John Hurley
0 siblings, 0 replies; 29+ messages in thread
From: John Hurley @ 2017-09-26 15:39 UTC (permalink / raw)
To: Or Gerlitz
Cc: Simon Horman, David Miller, Jakub Kicinski, Linux Netdev List,
oss-drivers
On Tue, Sep 26, 2017 at 4:33 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
> On Tue, Sep 26, 2017 at 6:11 PM, John Hurley <john.hurley@netronome.com> wrote:
>> On Tue, Sep 26, 2017 at 3:12 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>> On Tue, Sep 26, 2017 at 4:58 PM, John Hurley <john.hurley@netronome.com> wrote:
>>>> On Mon, Sep 25, 2017 at 7:35 PM, Or Gerlitz <gerlitz.or@gmail.com> wrote:
>>>>> On Mon, Sep 25, 2017 at 1:23 PM, Simon Horman
>>>>> <simon.horman@netronome.com> wrote:
>>>>>> From: John Hurley <john.hurley@netronome.com>
>>>>>>
>>>>>> Compile ovs-tc flower vxlan metadata match fields for offloading. Only
>>>>>
>>>>> anything in the npf kernel bits has direct relation to ovs? what?
>>>>>
>>>>
>>>> Sorry, this is a typo and should refer to TC.
>>>>
>>>>>> +++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
>>>>>> @@ -52,8 +52,25 @@
>>>>>> BIT(FLOW_DISSECTOR_KEY_PORTS) | \
>>>>>> BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS) | \
>>>>>> BIT(FLOW_DISSECTOR_KEY_VLAN) | \
>>>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_KEYID) | \
>>>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV4_ADDRS) | \
>>>>>> + BIT(FLOW_DISSECTOR_KEY_ENC_IPV6_ADDRS) | \
>>>>>
>>>>> this series takes care of IPv6 tunnels too?
>>>>
>>>> IPv6 is not included in this set.
>>>> The reason the IPv6 bit is included here is to account for behavior we
>>>> have noticed in TC flower.
>>>> If, for example, I add a filter with the following match fields:
>>>> 'protocol ip flower enc_src_ip 10.0.0.1 enc_dst_ip 10.0.0.2
>>>> enc_dst_port 4789 enc_key_id 123'
>>>> The 'used_keys' value in the dissector marks both IPv4 and IPv6 encap
>>>> addresses as 'used'.
>>>> I am not sure if this is a bug in TC or that we are expected to check
>>>> the enc_control fields to determine if IPv4 or v6 addresses are used.
>>>
>>> you should have your code to check enc_control->addr_type to be
>>> FLOW_DISSECTOR_KEY_IPV4_ADDRS or IPV6_ADDRS
>>>
>>>
>>>> Including the IPv6 used_keys bit in our whitelist approach allows us
>>>> to accept legitimate IPv4 tunnel rules in these situations.
>>>
>>> mmm can please take a look on fl_init_dissector() and tell me if you
>>> see why FLOW_DISSECTOR_KEY_IPV6_ADDRS is set for ipv4 tunnels,
>>> I am not sure.
>>
>>
>> The fl_init_dissector uses the FL_KEY_SET_IF_MASKED macro to set an
>> array of keys which are then translated to the used_keys values.
>> The FL_KEY_SET_IF_MASKED takes a 'struct fl_flow_key' as input and
>> checks if any mask bits are set in a particular field - if so it
>> eventually marks it as used.
>> In struct fl_flow_key, the encap ipv4 and ipv6 addresses are
>> represented as a union of the 2.
>> Therefore, if we have masked bits set for IPv4, they are also being
>> set for the IPv6 field.
>
> I see, do you consider it a bug?
The code seems to insist that, if either IPv4 or IPv6 is in use, then an
encap control key is also used:
	if (FL_KEY_IS_MASKED(&mask->key, enc_ipv4) ||
	    FL_KEY_IS_MASKED(&mask->key, enc_ipv6))
		FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_ENC_CONTROL,
			   enc_control);
Therefore, I think it should be OK to use this to determine the IP
version in use by the tunnel.
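So a driver can key off the enc_control addr_type to pick the address family
instead of inspecting which address fields look masked, roughly like the
sketch below (the enum values and function are illustrative stand-ins; the
real constants live in the kernel's flow dissector headers):

```c
#include <assert.h>

/* Illustrative stand-ins for the flow dissector addr_type values. */
enum addr_type {
	ADDR_TYPE_IPV4,
	ADDR_TYPE_IPV6,
};

/* Decide the tunnel IP version from the enc_control key; IPv6 is
 * rejected while only IPv4 tunnel offload is supported. */
static int parse_enc_addrs(enum addr_type t)
{
	switch (t) {
	case ADDR_TYPE_IPV4:
		return 0;	/* offload the IPv4 tunnel match */
	case ADDR_TYPE_IPV6:
	default:
		return -95;	/* -EOPNOTSUPP: not supported yet */
	}
}
```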
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
` (8 preceding siblings ...)
2017-09-25 15:25 ` Or Gerlitz
@ 2017-09-27 4:29 ` David Miller
2017-09-27 7:27 ` Simon Horman
9 siblings, 1 reply; 29+ messages in thread
From: David Miller @ 2017-09-27 4:29 UTC (permalink / raw)
To: simon.horman; +Cc: jakub.kicinski, netdev, oss-drivers
From: Simon Horman <simon.horman@netronome.com>
Date: Mon, 25 Sep 2017 12:23:34 +0200
> From: Simon Horman <simon.horman@netronome.com>
>
> John says:
>
> This patch set allows offloading of TC flower match and set tunnel fields
> to the NFP. The initial focus is on VXLAN traffic. Due to the current
> state of the NFP firmware, only VXLAN traffic on well known port 4789 is
> handled. The match and action fields must explicitly set this value to be
> supported. Tunnel end point information is also offloaded to the NFP for
> both encapsulation and decapsulation. The NFP expects 3 separate data sets
> to be supplied.
...
Series applied, thanks.
I see there is some discussion about ipv6 flow dissector key handling
and ND keepalives, but those should be addressable in follow-on changes.
Thanks.
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-27 4:29 ` David Miller
@ 2017-09-27 7:27 ` Simon Horman
0 siblings, 0 replies; 29+ messages in thread
From: Simon Horman @ 2017-09-27 7:27 UTC (permalink / raw)
To: David Miller; +Cc: jakub.kicinski, netdev, oss-drivers
On Tue, Sep 26, 2017 at 09:29:03PM -0700, David Miller wrote:
> From: Simon Horman <simon.horman@netronome.com>
> Date: Mon, 25 Sep 2017 12:23:34 +0200
>
> > From: Simon Horman <simon.horman@netronome.com>
> >
> > John says:
> >
> > This patch set allows offloading of TC flower match and set tunnel fields
> > to the NFP. The initial focus is on VXLAN traffic. Due to the current
> > state of the NFP firmware, only VXLAN traffic on well known port 4789 is
> > > handled. The match and action fields must explicitly set this value to be
> > supported. Tunnel end point information is also offloaded to the NFP for
> > both encapsulation and decapsulation. The NFP expects 3 separate data sets
> > to be supplied.
> ...
>
> Series applied, thanks.
>
> I see there is some discussion about ipv6 flow dissector key handling
> and ND keepalives, but those should be addressable in follow-on changes.
Thanks Dave,
I'll make sure that discussion is brought to a satisfactory conclusion.
* Re: [PATCH net-next 0/7] nfp: flower vxlan tunnel offload
2017-09-26 14:50 ` Paolo Abeni
@ 2017-09-27 7:40 ` Jiri Pirko
0 siblings, 0 replies; 29+ messages in thread
From: Jiri Pirko @ 2017-09-27 7:40 UTC (permalink / raw)
To: Paolo Abeni
Cc: Or Gerlitz, Jiri Benc, Simon Horman, David Miller, Jakub Kicinski,
Linux Netdev List, oss-drivers, John Hurley, Paul Blakey,
Jiri Pirko, Roi Dayan
Tue, Sep 26, 2017 at 04:50:10PM CEST, pabeni@redhat.com wrote:
>On Tue, 2017-09-26 at 17:17 +0300, Or Gerlitz wrote:
>> On Tue, Sep 26, 2017 at 3:51 PM, Jiri Benc <jbenc@redhat.com> wrote:
>> > On Tue, 26 Sep 2017 15:41:37 +0300, Or Gerlitz wrote:
>> > > Please note that the way the rule is being set to the HW driver is by delegation
>> > > done in flower, see these commits (specifically "Add offload support
>> > > using egress Hardware device")
>> >
>> > It's very well possible the bug is somewhere in net/sched.
>>
>> maybe before/instead you call it a bug, take a look on the design
>> there and maybe
>> tell us how to possibly do that otherwise?
>
>The problem, AFAICT, is in the API between flower and NIC implementing
>the offload, because in the above example the kernel will call the
>offload hook with exactly the same arguments with the 'bad' rule and
>the 'good' one - but the 'bad' rule should never match any packets.
>
>I think that can be fixed changing the flower code to invoke the
>offload hook for filters with tunnel-based match only if the device
>specified in such match has the appropriate type, e.g. given that
>currently only vxlan is supported with something like the code below
>(very rough and untested, just to give the idea):
>
>Cheers,
>
>Paolo
>
>---
>diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
>index d230cb4c8094..ff8476e56d4e 100644
>--- a/net/sched/cls_flower.c
>+++ b/net/sched/cls_flower.c
>@@ -243,10 +243,11 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
> struct fl_flow_key *mask,
> struct cls_fl_filter *f)
> {
>- struct net_device *dev = tp->q->dev_queue->dev;
>+ struct net_device *ingress_dev, *dev = tp->q->dev_queue->dev;
> struct tc_cls_flower_offload cls_flower = {};
> int err;
>
>+ ingress_dev = dev;
> if (!tc_can_offload(dev)) {
> if (tcf_exts_get_dev(dev, &f->exts, &f->hw_dev) ||
> (f->hw_dev && !tc_can_offload(f->hw_dev))) {
>@@ -259,6 +260,12 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
> f->hw_dev = dev;
> }
>
>+ if ((dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_KEYID) ||
>+ dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_ENC_PORTS) ||
>+ // ... list all the others tunnel based keys ...
>+ ) && strcmp(ingress_dev->rtnl_link_ops->kind, "vxlan"))
>+ return tc_skip_sw(f->flags) ? -EINVAL : 0;
This kind of hook is giving me nightmares. The code is screwed up enough
as it is. I'm currently working on a conversion to callbacks. This
part is handled in:
https://github.com/jpirko/linux_mlxsw/commits/jiri_devel_egdevcb
Thread overview: 29+ messages
2017-09-25 10:23 [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Simon Horman
2017-09-25 10:23 ` [PATCH net-next 1/7] nfp: add helper to get flower cmsg length Simon Horman
2017-09-25 10:23 ` [PATCH net-next 2/7] nfp: compile flower vxlan tunnel metadata match fields Simon Horman
2017-09-25 18:35 ` Or Gerlitz
2017-09-26 13:58 ` John Hurley
2017-09-26 14:12 ` Or Gerlitz
2017-09-26 15:11 ` John Hurley
2017-09-26 15:33 ` Or Gerlitz
2017-09-26 15:39 ` John Hurley
2017-09-25 10:23 ` [PATCH net-next 3/7] nfp: compile flower vxlan tunnel set actions Simon Horman
2017-09-25 10:23 ` [PATCH net-next 4/7] nfp: offload flower vxlan endpoint MAC addresses Simon Horman
2017-09-25 10:23 ` [PATCH net-next 5/7] nfp: offload vxlan IPv4 endpoints of flower rules Simon Horman
2017-09-25 10:23 ` [PATCH net-next 6/7] nfp: flower vxlan neighbour offload Simon Horman
2017-09-25 10:23 ` [PATCH net-next 7/7] nfp: flower vxlan neighbour keep-alive Simon Horman
2017-09-25 18:32 ` Or Gerlitz
[not found] ` <CAK+XE=mVKbAqYwSYvLb0y48O9D-Oq+B_bks7c9iwjsm0j7oYvw@mail.gmail.com>
2017-09-26 9:37 ` John Hurley
2017-09-26 12:44 ` Or Gerlitz
2017-09-25 11:00 ` [PATCH net-next 0/7] nfp: flower vxlan tunnel offload Jakub Kicinski
2017-09-25 15:25 ` Or Gerlitz
2017-09-25 17:04 ` Simon Horman
2017-09-26 10:15 ` Jiri Benc
2017-09-26 12:41 ` Or Gerlitz
2017-09-26 12:51 ` Jiri Benc
2017-09-26 14:17 ` Or Gerlitz
2017-09-26 14:31 ` Jiri Benc
2017-09-26 14:50 ` Paolo Abeni
2017-09-27 7:40 ` Jiri Pirko
2017-09-27 4:29 ` David Miller
2017-09-27 7:27 ` Simon Horman