* [PATCH nf-next v6 0/2] Add IPIP flowtable SW acceleration
@ 2025-08-18 9:07 Lorenzo Bianconi
2025-08-18 9:07 ` [PATCH nf-next v6 1/2] net: netfilter: Add IPIP flowtable SW acceleration Lorenzo Bianconi
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Lorenzo Bianconi @ 2025-08-18 9:07 UTC (permalink / raw)
To: David S. Miller, David Ahern, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Simon Horman, Pablo Neira Ayuso, Jozsef Kadlecsik,
Shuah Khan, Andrew Lunn
Cc: Florian Westphal, netdev, netfilter-devel, coreteam,
linux-kselftest, Lorenzo Bianconi
Introduce SW acceleration for IPIP tunnels in the netfilter flowtable
infrastructure.
---
Changes in v6:
- Rebase on top of nf-next main branch
- Link to v5: https://lore.kernel.org/r/20250721-nf-flowtable-ipip-v5-0-0865af9e58c6@kernel.org
Changes in v5:
- Rely on __ipv4_addr_hash() to compute the hash used as encap ID
- Remove unnecessary pskb_may_pull() in nf_flow_tuple_encap()
- Add nf_flow_ip4_ecanp_pop utility routine
- Link to v4: https://lore.kernel.org/r/20250718-nf-flowtable-ipip-v4-0-f8bb1c18b986@kernel.org
Changes in v4:
- Use the hash value of the saddr, daddr and protocol of outer IP header as
encapsulation id.
- Link to v3: https://lore.kernel.org/r/20250703-nf-flowtable-ipip-v3-0-880afd319b9f@kernel.org
Changes in v3:
- Add outer IP header sanity checks
- target nf-next tree instead of net-next
- Link to v2: https://lore.kernel.org/r/20250627-nf-flowtable-ipip-v2-0-c713003ce75b@kernel.org
Changes in v2:
- Introduce IPIP flowtable selftest
- Link to v1: https://lore.kernel.org/r/20250623-nf-flowtable-ipip-v1-1-2853596e3941@kernel.org
---
Lorenzo Bianconi (2):
net: netfilter: Add IPIP flowtable SW acceleration
selftests: netfilter: nft_flowtable.sh: Add IPIP flowtable selftest
include/linux/netdevice.h | 1 +
net/ipv4/ipip.c | 28 +++++++++++
net/netfilter/nf_flow_table_ip.c | 56 +++++++++++++++++++++-
net/netfilter/nft_flow_offload.c | 1 +
.../selftests/net/netfilter/nft_flowtable.sh | 40 ++++++++++++++++
5 files changed, 124 insertions(+), 2 deletions(-)
---
base-commit: bab3ce404553de56242d7b09ad7ea5b70441ea41
change-id: 20250623-nf-flowtable-ipip-1b3d7b08d067
Best regards,
--
Lorenzo Bianconi <lorenzo@kernel.org>
* [PATCH nf-next v6 1/2] net: netfilter: Add IPIP flowtable SW acceleration
2025-08-18 9:07 [PATCH nf-next v6 0/2] Add IPIP flowtable SW acceleration Lorenzo Bianconi
@ 2025-08-18 9:07 ` Lorenzo Bianconi
2025-09-09 21:31 ` Pablo Neira Ayuso
2025-08-18 9:07 ` [PATCH nf-next v6 2/2] selftests: netfilter: nft_flowtable.sh: Add IPIP flowtable selftest Lorenzo Bianconi
2025-09-05 21:09 ` [PATCH nf-next v6 0/2] Add IPIP flowtable SW acceleration Lorenzo Bianconi
2 siblings, 1 reply; 6+ messages in thread
From: Lorenzo Bianconi @ 2025-08-18 9:07 UTC (permalink / raw)
To: David S. Miller, David Ahern, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Simon Horman, Pablo Neira Ayuso, Jozsef Kadlecsik,
Shuah Khan, Andrew Lunn
Cc: Florian Westphal, netdev, netfilter-devel, coreteam,
linux-kselftest, Lorenzo Bianconi
Introduce SW acceleration for IPIP tunnels in the netfilter flowtable
infrastructure.
IPIP SW acceleration can be tested by running the following scenario where
the traffic is forwarded between two NICs (eth0 and eth1) and an IPIP
tunnel is used to access a remote site (using eth1 as the underlay device):
ETH0 -- TUN0 <==> ETH1 -- [IP network] -- TUN1 (192.168.100.2)
$ip addr show
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:00:22:33:11:55 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.2/24 scope global eth0
valid_lft forever preferred_lft forever
7: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:11:22:33:11:55 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 scope global eth1
valid_lft forever preferred_lft forever
8: tun0@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 192.168.1.1 peer 192.168.1.2
inet 192.168.100.1/24 scope global tun0
valid_lft forever preferred_lft forever
$ip route show
default via 192.168.100.2 dev tun0
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.2
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1
192.168.100.0/24 dev tun0 proto kernel scope link src 192.168.100.1
$nft list ruleset
table inet filter {
flowtable ft {
hook ingress priority filter
devices = { eth0, eth1 }
}
chain forward {
type filter hook forward priority filter; policy accept;
meta l4proto { tcp, udp } flow add @ft
}
}
Reproducing the scenario described above using veths, I got the following
results:
- TCP stream transmitted into the IPIP tunnel:
- net-next: ~41Gbps
- net-next + IPIP flowtable support: ~40Gbps
- TCP stream received from the IPIP tunnel:
- net-next: ~35Gbps
- net-next + IPIP flowtable support: ~49Gbps
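For reference, a minimal sketch of such a veth-based setup (the
namespace names, the addressing and the use of iperf3 below are
illustrative assumptions, not the exact benchmark environment):

  # ns1 (client) -- nsr (router: eth0/eth1 + tun0) -- ns2 (tunnel peer)
  ip netns add ns1; ip netns add nsr; ip netns add ns2
  ip link add veth0 netns ns1 type veth peer name eth0 netns nsr
  ip link add eth1 netns nsr type veth peer name veth0 netns ns2
  ip -net ns1 addr add 192.168.0.1/24 dev veth0
  ip -net nsr addr add 192.168.0.2/24 dev eth0
  ip -net nsr addr add 192.168.1.1/24 dev eth1
  ip -net ns2 addr add 192.168.1.2/24 dev veth0
  ip -net ns1 link set veth0 up
  ip -net nsr link set eth0 up
  ip -net nsr link set eth1 up
  ip -net ns2 link set veth0 up
  # IPIP overlay between the router and the remote site
  ip -net nsr link add tun0 type ipip local 192.168.1.1 remote 192.168.1.2
  ip -net ns2 link add tun1 type ipip local 192.168.1.2 remote 192.168.1.1
  ip -net nsr addr add 192.168.100.1/24 dev tun0
  ip -net ns2 addr add 192.168.100.2/24 dev tun1
  ip -net nsr link set tun0 up
  ip -net ns2 link set tun1 up
  ip -net ns1 route add default via 192.168.0.2
  ip -net nsr route add default via 192.168.100.2
  ip -net ns2 route add 192.168.0.0/24 via 192.168.100.1
  ip netns exec nsr sysctl -qw net.ipv4.ip_forward=1
  # load the nft ruleset above in nsr, then measure the TCP stream:
  ip netns exec ns2 iperf3 -s -D
  ip netns exec ns1 iperf3 -c 192.168.100.2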
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
include/linux/netdevice.h | 1 +
net/ipv4/ipip.c | 28 ++++++++++++++++++++
net/netfilter/nf_flow_table_ip.c | 56 ++++++++++++++++++++++++++++++++++++++--
net/netfilter/nft_flow_offload.c | 1 +
4 files changed, 84 insertions(+), 2 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index f3a3b761abfb1b883a970b04634c1ef3e7ee5407..0527a4e3d1fd512b564e47311f6ce3957b66298f 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -874,6 +874,7 @@ enum net_device_path_type {
DEV_PATH_PPPOE,
DEV_PATH_DSA,
DEV_PATH_MTK_WDMA,
+ DEV_PATH_IPENCAP,
};
struct net_device_path {
diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
index 3e03af073a1ccc3d7597a998a515b6cfdded40b5..b7a3311bd061c341987380b5872caa8990d02e63 100644
--- a/net/ipv4/ipip.c
+++ b/net/ipv4/ipip.c
@@ -353,6 +353,33 @@ ipip_tunnel_ctl(struct net_device *dev, struct ip_tunnel_parm_kern *p, int cmd)
return ip_tunnel_ctl(dev, p, cmd);
}
+static int ipip_fill_forward_path(struct net_device_path_ctx *ctx,
+ struct net_device_path *path)
+{
+ struct ip_tunnel *tunnel = netdev_priv(ctx->dev);
+ const struct iphdr *tiph = &tunnel->parms.iph;
+ struct rtable *rt;
+
+ rt = ip_route_output(dev_net(ctx->dev), tiph->daddr, 0, 0, 0,
+ RT_SCOPE_UNIVERSE);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+ path->type = DEV_PATH_IPENCAP;
+ path->dev = ctx->dev;
+ path->encap.proto = htons(ETH_P_IP);
+ /* Use the hash of outer header IP src and dst addresses as
+ * encapsulation ID. This must be kept in sync with
+ * nf_flow_tuple_encap().
+ */
+ path->encap.id = __ipv4_addr_hash(tiph->saddr, ntohl(tiph->daddr));
+
+ ctx->dev = rt->dst.dev;
+ ip_rt_put(rt);
+
+ return 0;
+}
+
static const struct net_device_ops ipip_netdev_ops = {
.ndo_init = ipip_tunnel_init,
.ndo_uninit = ip_tunnel_uninit,
@@ -362,6 +389,7 @@ static const struct net_device_ops ipip_netdev_ops = {
.ndo_get_stats64 = dev_get_tstats64,
.ndo_get_iflink = ip_tunnel_get_iflink,
.ndo_tunnel_ctl = ipip_tunnel_ctl,
+ .ndo_fill_forward_path = ipip_fill_forward_path,
};
#define IPIP_FEATURES (NETIF_F_SG | \
diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index 8cd4cf7ae21120f1057c4fce5aaca4e3152ae76d..454a2be3feafd181596374478a4ccd244f81d53b 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -147,6 +147,7 @@ static void nf_flow_tuple_encap(struct sk_buff *skb,
{
struct vlan_ethhdr *veth;
struct pppoe_hdr *phdr;
+ struct iphdr *iph;
int i = 0;
if (skb_vlan_tag_present(skb)) {
@@ -165,6 +166,19 @@ static void nf_flow_tuple_encap(struct sk_buff *skb,
tuple->encap[i].id = ntohs(phdr->sid);
tuple->encap[i].proto = skb->protocol;
break;
+ case htons(ETH_P_IP):
+ iph = (struct iphdr *)skb_network_header(skb);
+ if (iph->protocol != IPPROTO_IPIP)
+ break;
+
+ tuple->encap[i].proto = htons(ETH_P_IP);
+ /* For IP tunnels the hash of outer header IP src and dst
+ * addresses is used as encapsulation ID so it must be kept in
+ * sync with IP tunnel ndo_fill_forward_path callbacks.
+ */
+ tuple->encap[i].id = __ipv4_addr_hash(iph->daddr,
+ ntohl(iph->saddr));
+ break;
}
}
@@ -277,6 +291,40 @@ static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
return NF_STOLEN;
}
+static bool nf_flow_ip4_encap_proto(struct sk_buff *skb, u32 *psize)
+{
+ struct iphdr *iph;
+ u16 size;
+
+ if (!pskb_may_pull(skb, sizeof(*iph)))
+ return false;
+
+ iph = (struct iphdr *)skb_network_header(skb);
+ size = iph->ihl << 2;
+
+ if (ip_is_fragment(iph) || unlikely(ip_has_options(size)))
+ return false;
+
+ if (iph->ttl <= 1)
+ return false;
+
+ if (iph->protocol == IPPROTO_IPIP)
+ *psize += size;
+
+ return true;
+}
+
+static void nf_flow_ip4_ecanp_pop(struct sk_buff *skb)
+{
+ struct iphdr *iph = (struct iphdr *)skb_network_header(skb);
+
+ if (iph->protocol != IPPROTO_IPIP)
+ return;
+
+ skb_pull(skb, iph->ihl << 2);
+ skb_reset_network_header(skb);
+}
+
static bool nf_flow_skb_encap_protocol(struct sk_buff *skb, __be16 proto,
u32 *offset)
{
@@ -284,6 +332,8 @@ static bool nf_flow_skb_encap_protocol(struct sk_buff *skb, __be16 proto,
__be16 inner_proto;
switch (skb->protocol) {
+ case htons(ETH_P_IP):
+ return nf_flow_ip4_encap_proto(skb, offset);
case htons(ETH_P_8021Q):
if (!pskb_may_pull(skb, skb_mac_offset(skb) + sizeof(*veth)))
return false;
@@ -331,6 +381,9 @@ static void nf_flow_encap_pop(struct sk_buff *skb,
break;
}
}
+
+ if (skb->protocol == htons(ETH_P_IP))
+ nf_flow_ip4_ecanp_pop(skb);
}
static unsigned int nf_flow_queue_xmit(struct net *net, struct sk_buff *skb,
@@ -357,8 +410,7 @@ nf_flow_offload_lookup(struct nf_flowtable_ctx *ctx,
{
struct flow_offload_tuple tuple = {};
- if (skb->protocol != htons(ETH_P_IP) &&
- !nf_flow_skb_encap_protocol(skb, htons(ETH_P_IP), &ctx->offset))
+ if (!nf_flow_skb_encap_protocol(skb, htons(ETH_P_IP), &ctx->offset))
return NULL;
if (nf_flow_tuple_ip(ctx, skb, &tuple) < 0)
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index 225ff293cd50081a30fc82feeed5bb054f6387f0..4fe9a5e5dab839b17fc2acea835b72efccf7e1d9 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -108,6 +108,7 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
case DEV_PATH_DSA:
case DEV_PATH_VLAN:
case DEV_PATH_PPPOE:
+ case DEV_PATH_IPENCAP:
info->indev = path->dev;
if (is_zero_ether_addr(info->h_source))
memcpy(info->h_source, path->dev->dev_addr, ETH_ALEN);
--
2.50.1
* [PATCH nf-next v6 2/2] selftests: netfilter: nft_flowtable.sh: Add IPIP flowtable selftest
2025-08-18 9:07 [PATCH nf-next v6 0/2] Add IPIP flowtable SW acceleration Lorenzo Bianconi
2025-08-18 9:07 ` [PATCH nf-next v6 1/2] net: netfilter: Add IPIP flowtable SW acceleration Lorenzo Bianconi
@ 2025-08-18 9:07 ` Lorenzo Bianconi
2025-09-05 21:09 ` [PATCH nf-next v6 0/2] Add IPIP flowtable SW acceleration Lorenzo Bianconi
2 siblings, 0 replies; 6+ messages in thread
From: Lorenzo Bianconi @ 2025-08-18 9:07 UTC (permalink / raw)
To: David S. Miller, David Ahern, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Simon Horman, Pablo Neira Ayuso, Jozsef Kadlecsik,
Shuah Khan, Andrew Lunn
Cc: Florian Westphal, netdev, netfilter-devel, coreteam,
linux-kselftest, Lorenzo Bianconi
Introduce a specific selftest for IPIP flowtable SW acceleration in
nft_flowtable.sh.
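The updated test can be run directly from a kernel source tree, e.g.
(assuming the usual dependencies of this script, such as nft and
iproute2, are installed; exact requirements may differ):

  cd tools/testing/selftests/net/netfilter
  ./nft_flowtable.sh

or through the kselftest harness:

  make -C tools/testing/selftests TARGETS=net/netfilter run_tests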
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
.../selftests/net/netfilter/nft_flowtable.sh | 40 ++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/tools/testing/selftests/net/netfilter/nft_flowtable.sh b/tools/testing/selftests/net/netfilter/nft_flowtable.sh
index a4ee5496f2a17cedf1ee71214397012c7906650f..d1c9d3eeda2c9874008f9d6de6cabaabea79b9fb 100755
--- a/tools/testing/selftests/net/netfilter/nft_flowtable.sh
+++ b/tools/testing/selftests/net/netfilter/nft_flowtable.sh
@@ -519,6 +519,44 @@ if ! test_tcp_forwarding_nat "$ns1" "$ns2" 1 ""; then
ip netns exec "$nsr1" nft list ruleset
fi
+# IPIP tunnel test:
+# Add IPIP tunnel interfaces and check flowtable acceleration.
+test_ipip() {
+if ! ip -net "$nsr1" link add name tun0 type ipip \
+ local 192.168.10.1 remote 192.168.10.2 >/dev/null;then
+ echo "SKIP: could not add ipip tunnel"
+ [ "$ret" -eq 0 ] && ret=$ksft_skip
+ return
+fi
+ip -net "$nsr1" link set tun0 up
+ip -net "$nsr1" addr add 192.168.100.1/24 dev tun0
+ip netns exec "$nsr1" sysctl net.ipv4.conf.tun0.forwarding=1 > /dev/null
+
+ip -net "$nsr2" link add name tun0 type ipip local 192.168.10.2 remote 192.168.10.1
+ip -net "$nsr2" link set tun0 up
+ip -net "$nsr2" addr add 192.168.100.2/24 dev tun0
+ip netns exec "$nsr2" sysctl net.ipv4.conf.tun0.forwarding=1 > /dev/null
+
+ip -net "$nsr1" route change default via 192.168.100.2
+ip -net "$nsr2" route change default via 192.168.100.1
+ip -net "$ns2" route add default via 10.0.2.1
+
+ip netns exec "$nsr1" nft -a insert rule inet filter forward 'meta oif tun0 accept'
+ip netns exec "$nsr1" nft -a insert rule inet filter forward \
+ 'meta oif "veth0" tcp sport 12345 ct mark set 1 flow add @f1 counter name routed_repl accept'
+
+if ! test_tcp_forwarding_nat "$ns1" "$ns2" 1 "IPIP tunnel"; then
+ echo "FAIL: flow offload for ns1/ns2 with IPIP tunnel" 1>&2
+ ip netns exec "$nsr1" nft list ruleset
+ ret=1
+fi
+
+# Restore the previous configuration
+ip -net "$nsr1" route change default via 192.168.10.2
+ip -net "$nsr2" route change default via 192.168.10.1
+ip -net "$ns2" route del default via 10.0.2.1
+}
+
# Another test:
# Add bridge interface br0 to Router1, with NAT enabled.
test_bridge() {
@@ -604,6 +642,8 @@ ip -net "$nsr1" addr add dead:1::1/64 dev veth0 nodad
ip -net "$nsr1" link set up dev veth0
}
+test_ipip
+
test_bridge
KEY_SHA="0x"$(ps -af | sha1sum | cut -d " " -f 1)
--
2.50.1
* Re: [PATCH nf-next v6 0/2] Add IPIP flowtable SW acceleration
2025-08-18 9:07 [PATCH nf-next v6 0/2] Add IPIP flowtable SW acceleration Lorenzo Bianconi
2025-08-18 9:07 ` [PATCH nf-next v6 1/2] net: netfilter: Add IPIP flowtable SW acceleration Lorenzo Bianconi
2025-08-18 9:07 ` [PATCH nf-next v6 2/2] selftests: netfilter: nft_flowtable.sh: Add IPIP flowtable selftest Lorenzo Bianconi
@ 2025-09-05 21:09 ` Lorenzo Bianconi
2 siblings, 0 replies; 6+ messages in thread
From: Lorenzo Bianconi @ 2025-09-05 21:09 UTC (permalink / raw)
To: David S. Miller, David Ahern, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Simon Horman, Pablo Neira Ayuso, Jozsef Kadlecsik,
Shuah Khan, Andrew Lunn
Cc: Florian Westphal, netdev, netfilter-devel, coreteam,
linux-kselftest, Felix Fietkau
> Introduce SW acceleration for IPIP tunnels in the netfilter flowtable
> infrastructure.
>
Hi Pablo, Florian and Jozsef,
any update on this patch? What is the best way to proceed with this feature?
Regards,
Lorenzo
* Re: [PATCH nf-next v6 1/2] net: netfilter: Add IPIP flowtable SW acceleration
2025-08-18 9:07 ` [PATCH nf-next v6 1/2] net: netfilter: Add IPIP flowtable SW acceleration Lorenzo Bianconi
@ 2025-09-09 21:31 ` Pablo Neira Ayuso
2025-10-21 17:46 ` Lorenzo Bianconi
0 siblings, 1 reply; 6+ messages in thread
From: Pablo Neira Ayuso @ 2025-09-09 21:31 UTC (permalink / raw)
To: Lorenzo Bianconi
Cc: David S. Miller, David Ahern, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Simon Horman, Jozsef Kadlecsik, Shuah Khan,
Andrew Lunn, Florian Westphal, netdev, netfilter-devel, coreteam,
linux-kselftest
On Mon, Aug 18, 2025 at 11:07:33AM +0200, Lorenzo Bianconi wrote:
> Introduce SW acceleration for IPIP tunnels in the netfilter flowtable
> infrastructure.
> IPIP SW acceleration can be tested by running the following scenario where
> the traffic is forwarded between two NICs (eth0 and eth1) and an IPIP
> tunnel is used to access a remote site (using eth1 as the underlay device):
>
> ETH0 -- TUN0 <==> ETH1 -- [IP network] -- TUN1 (192.168.100.2)
>
> $ip addr show
> 6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
> link/ether 00:00:22:33:11:55 brd ff:ff:ff:ff:ff:ff
> inet 192.168.0.2/24 scope global eth0
> valid_lft forever preferred_lft forever
> 7: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
> link/ether 00:11:22:33:11:55 brd ff:ff:ff:ff:ff:ff
> inet 192.168.1.1/24 scope global eth1
> valid_lft forever preferred_lft forever
> 8: tun0@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
> link/ipip 192.168.1.1 peer 192.168.1.2
> inet 192.168.100.1/24 scope global tun0
> valid_lft forever preferred_lft forever
>
> $ip route show
> default via 192.168.100.2 dev tun0
> 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.2
> 192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1
> 192.168.100.0/24 dev tun0 proto kernel scope link src 192.168.100.1
>
> $nft list ruleset
> table inet filter {
> flowtable ft {
> hook ingress priority filter
> devices = { eth0, eth1 }
> }
>
> chain forward {
> type filter hook forward priority filter; policy accept;
> meta l4proto { tcp, udp } flow add @ft
> }
> }
>
> Reproducing the scenario described above using veths, I got the following
> results:
> - TCP stream transmitted into the IPIP tunnel:
> - net-next: ~41Gbps
> - net-next + IPIP flowtable support: ~40Gbps
I found this patch in one of my trees (see attachment) to explore
tunnel integration of the tx path. There have been similar patches
floating on the mailing list for layer 2 encapsulation (e.g. pppoe and
vlan); IIRC, for pppoe they claim to accelerate tx.
Another aspect of this series is that I think it would be good to
explore integration of other layer 3 tunnel protocols, rather than
following an incremental approach.
More comments below.
> - TCP stream received from the IPIP tunnel:
> - net-next: ~35Gbps
> - net-next + IPIP flowtable support: ~49Gbps
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
> include/linux/netdevice.h | 1 +
> net/ipv4/ipip.c | 28 ++++++++++++++++++++
> net/netfilter/nf_flow_table_ip.c | 56 ++++++++++++++++++++++++++++++++++++++--
> net/netfilter/nft_flow_offload.c | 1 +
> 4 files changed, 84 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index f3a3b761abfb1b883a970b04634c1ef3e7ee5407..0527a4e3d1fd512b564e47311f6ce3957b66298f 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -874,6 +874,7 @@ enum net_device_path_type {
> DEV_PATH_PPPOE,
> DEV_PATH_DSA,
> DEV_PATH_MTK_WDMA,
> + DEV_PATH_IPENCAP,
> };
>
> struct net_device_path {
> diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
> index 3e03af073a1ccc3d7597a998a515b6cfdded40b5..b7a3311bd061c341987380b5872caa8990d02e63 100644
> --- a/net/ipv4/ipip.c
> +++ b/net/ipv4/ipip.c
> @@ -353,6 +353,33 @@ ipip_tunnel_ctl(struct net_device *dev, struct ip_tunnel_parm_kern *p, int cmd)
> return ip_tunnel_ctl(dev, p, cmd);
> }
>
> +static int ipip_fill_forward_path(struct net_device_path_ctx *ctx,
> + struct net_device_path *path)
> +{
> + struct ip_tunnel *tunnel = netdev_priv(ctx->dev);
> + const struct iphdr *tiph = &tunnel->parms.iph;
> + struct rtable *rt;
> +
> + rt = ip_route_output(dev_net(ctx->dev), tiph->daddr, 0, 0, 0,
> + RT_SCOPE_UNIVERSE);
> + if (IS_ERR(rt))
> + return PTR_ERR(rt);
> +
> + path->type = DEV_PATH_IPENCAP;
> + path->dev = ctx->dev;
> + path->encap.proto = htons(ETH_P_IP);
> + /* Use the hash of outer header IP src and dst addresses as
> + * encapsulation ID. This must be kept in sync with
> + * nf_flow_tuple_encap().
> + */
> + path->encap.id = __ipv4_addr_hash(tiph->saddr, ntohl(tiph->daddr));
This hash approach sounds reasonable, but I feel a bit uncomfortable
with the idea that the flowtable bypasses _entirely_ the existing
firewall policy and that this does not provide a perfect match. The
idea is that only the initial packets of a flow go through the policy;
once the flow is added to the flowtable, such firewall policy
validation is circumvented.
Achieving a perfect match means more memory consumption, to store the
two IPs in the tuple:
struct {
u16 id;
__be16 proto;
} encap[NF_FLOW_TABLE_ENCAP_MAX];
And possibly more information will need to be stored for other
layer 3 tunnel protocols.
While this hash trick looks like an interesting approach, I am
ambivalent.
And one nitpick (typo) below...
> + ctx->dev = rt->dst.dev;
> + ip_rt_put(rt);
> +
> + return 0;
> +}
> +
[...]
> +static void nf_flow_ip4_ecanp_pop(struct sk_buff *skb)
_encap_pop ?
[-- Attachment #2: ipip-tx.patch --]
[-- Type: text/x-diff, Size: 5865 bytes --]
commit 4c635431740ecaa011c732bce954086266f07218
Author: Pablo Neira Ayuso <pablo@netfilter.org>
Date: Wed Jul 6 12:52:02 2022 +0200
netfilter: flowtable: tunnel tx support
diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
index d21da5b57eeb..d4ecb57a8bfc 100644
--- a/include/net/netfilter/nf_flow_table.h
+++ b/include/net/netfilter/nf_flow_table.h
@@ -139,6 +139,27 @@ struct flow_offload_tuple {
struct {
struct dst_entry *dst_cache;
u32 dst_cookie;
+ u8 tunnel_num;
+ struct {
+ u8 l3proto;
+ u8 l4proto;
+ u8 tos;
+ u8 ttl;
+ __be16 df;
+
+ union {
+ struct in_addr src_v4;
+ struct in6_addr src_v6;
+ };
+ union {
+ struct in_addr dst_v4;
+ struct in6_addr dst_v6;
+ };
+ struct {
+ __be16 src_port;
+ __be16 dst_port;
+ };
+ } tunnel;
};
struct {
u32 ifidx;
@@ -223,6 +244,17 @@ struct nf_flow_route {
u32 hw_ifindex;
u8 h_source[ETH_ALEN];
u8 h_dest[ETH_ALEN];
+
+ int num_tunnels;
+ struct {
+ int ifindex;
+ u8 l3proto;
+ u8 l4proto;
+ struct {
+ __be32 saddr;
+ __be32 daddr;
+ } ip;
+ } tun;
} out;
enum flow_offload_xmit_type xmit_type;
} tuple[FLOW_OFFLOAD_DIR_MAX];
diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
index ab7df5c54eba..9244168c8cc8 100644
--- a/net/netfilter/nf_flow_table_core.c
+++ b/net/netfilter/nf_flow_table_core.c
@@ -177,6 +177,24 @@ static int flow_offload_fill_route(struct flow_offload *flow,
flow_tuple->tun.inner = flow->inner_tuple;
}
+ if (route->tuple[dir].out.num_tunnels) {
+ flow_tuple->tunnel_num++;
+
+ switch (route->tuple[dir].out.tun.l3proto) {
+ case NFPROTO_IPV4:
+ flow_tuple->tunnel.src_v4.s_addr = route->tuple[dir].out.tun.ip.saddr;
+ flow_tuple->tunnel.dst_v4.s_addr = route->tuple[dir].out.tun.ip.daddr;
+ break;
+ case NFPROTO_IPV6:
+ break;
+ }
+
+ flow_tuple->tunnel.l3proto = route->tuple[dir].out.tun.l3proto;
+ flow_tuple->tunnel.l4proto = route->tuple[dir].out.tun.l4proto;
+ flow_tuple->tunnel.src_port = 0;
+ flow_tuple->tunnel.dst_port = 0;
+ }
+
return 0;
}
diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
index c1156d4ce865..1b96309210b8 100644
--- a/net/netfilter/nf_flow_table_ip.c
+++ b/net/netfilter/nf_flow_table_ip.c
@@ -349,6 +349,58 @@ static unsigned int nf_flow_queue_xmit(struct net *net, struct sk_buff *skb,
return NF_STOLEN;
}
+/* extract from ip_tunnel_xmit(). */
+static unsigned int nf_flow_tunnel_add(struct net *net, struct sk_buff *skb,
+ struct flow_offload *flow, int dir,
+ const struct rtable *rt,
+ struct iphdr *inner_iph)
+{
+ u32 headroom = sizeof(struct iphdr);
+ struct iphdr *iph;
+ u8 tos, ttl;
+ __be16 df;
+
+ if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP4))
+ return -1;
+
+ skb_set_inner_ipproto(skb, IPPROTO_IPIP);
+
+ headroom += LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len;
+
+ if (skb_cow_head(skb, headroom))
+ return -1;
+
+ skb_scrub_packet(skb, true);
+ skb_clear_hash_if_not_l4(skb);
+ memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+
+ /* Push down and install the IP header. */
+ skb_push(skb, sizeof(struct iphdr));
+ skb_reset_network_header(skb);
+
+ df = flow->tuple[dir]->tunnel.df;
+ tos = ip_tunnel_ecn_encap(flow->tuple[dir]->tunnel.tos, inner_iph, skb);
+ ttl = flow->tuple[dir]->tunnel.ttl;
+ if (ttl == 0)
+ ttl = inner_iph->ttl;
+
+ iph = ip_hdr(skb);
+
+ iph->version = 4;
+ iph->ihl = sizeof(struct iphdr) >> 2;
+ iph->frag_off = ip_mtu_locked(&rt->dst) ? 0 : df;
+ iph->protocol = flow->tuple[dir]->tunnel.l4proto;
+ iph->tos = flow->tuple[dir]->tunnel.tos;
+ iph->daddr = flow->tuple[dir]->tunnel.dst_v4.s_addr;
+ iph->saddr = flow->tuple[dir]->tunnel.src_v4.s_addr;
+ iph->ttl = ttl;
+ iph->tot_len = htons(skb->len);
+ __ip_select_ident(net, iph, skb_shinfo(skb)->gso_segs ?: 1);
+ ip_send_check(iph);
+
+ return 0;
+}
+
unsigned int
nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
const struct nf_hook_state *state)
@@ -430,9 +482,19 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
switch (flow->tuple[dir]->xmit_type) {
case FLOW_OFFLOAD_XMIT_NEIGH:
rt = (struct rtable *)flow->tuple[dir]->dst_cache;
+ if (flow->tuple[dir]->tunnel_num) {
+ ret = nf_flow_tunnel_add(state->net, skb, flow, dir, rt, iph);
+ if (ret < 0) {
+ ret = NF_DROP;
+ flow_offload_teardown(flow);
+ break;
+ }
+ nexthop = rt_nexthop(rt, flow->tuple[dir]->tunnel.dst_v4.s_addr);
+ } else {
+ nexthop = rt_nexthop(rt, flow->tuple[!dir]->src_v4.s_addr);
+ }
outdev = rt->dst.dev;
skb->dev = outdev;
- nexthop = rt_nexthop(rt, flow->tuple[!dir]->src_v4.s_addr);
skb_dst_set_noref(skb, &rt->dst);
neigh_xmit(NEIGH_ARP_TABLE, outdev, &nexthop, skb);
ret = NF_STOLEN;
diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
index ea403b95326c..1d672310ac6a 100644
--- a/net/netfilter/nft_flow_offload.c
+++ b/net/netfilter/nft_flow_offload.c
@@ -159,7 +159,13 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
route->tuple[!dir].in.tun.ip.saddr = path->tun.ip.daddr;
route->tuple[!dir].in.tun.ip.daddr = path->tun.ip.saddr;
route->tuple[!dir].in.tun.l4proto = path->tun.l4proto;
- dst_release(path->tun.dst);
+
+ route->tuple[dir].out.num_tunnels++;
+ route->tuple[dir].out.tun.l3proto = path->tun.l3proto;
+ route->tuple[dir].out.tun.ip.saddr = path->tun.ip.saddr;
+ route->tuple[dir].out.tun.ip.daddr = path->tun.ip.daddr;
+ route->tuple[dir].out.tun.l4proto = path->tun.l4proto;
+ route->tuple[dir].dst = path->tun.dst;
break;
default:
info->indev = NULL;
* Re: [PATCH nf-next v6 1/2] net: netfilter: Add IPIP flowtable SW acceleration
2025-09-09 21:31 ` Pablo Neira Ayuso
@ 2025-10-21 17:46 ` Lorenzo Bianconi
0 siblings, 0 replies; 6+ messages in thread
From: Lorenzo Bianconi @ 2025-10-21 17:46 UTC (permalink / raw)
To: Pablo Neira Ayuso
Cc: David S. Miller, David Ahern, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Simon Horman, Jozsef Kadlecsik, Shuah Khan,
Andrew Lunn, Florian Westphal, netdev, netfilter-devel, coreteam,
linux-kselftest
> On Mon, Aug 18, 2025 at 11:07:33AM +0200, Lorenzo Bianconi wrote:
Hi Pablo,
sorry for the long delay.
[...]
>
> I found this patch in one of my trees (see attachment) to explore
> tunnel integration of the tx path. There have been similar patches
> floating on the mailing list for layer 2 encapsulation (e.g. pppoe and
> vlan); IIRC, for pppoe they claim to accelerate tx.
ack, thx. I will look into it for v7.
>
> Another aspect of this series is that I think it would be good to
> explore integration of other layer 3 tunnel protocols, rather than
> following an incremental approach.
ack.
>
> More comments below.
>
> > - TCP stream received from the IPIP tunnel:
> > - net-next: ~35Gbps
> > - net-next + IPIP flowtable support: ~49Gbps
> >
[...]
> > + path->encap.id = __ipv4_addr_hash(tiph->saddr, ntohl(tiph->daddr));
>
> This hash approach sounds reasonable, but I feel a bit uncomfortable
> with the idea that the flowtable bypasses _entirely_ the existing
> firewall policy and that this does not provide a perfect match. The
> idea is that only the initial packets of a flow go through the policy;
> once the flow is added to the flowtable, such firewall policy
> validation is circumvented.
ack, I will implement a perfect match for tuple lookup in v7.
>
> Achieving a perfect match means more memory consumption, to store the
> two IPs in the tuple:
>
> struct {
> u16 id;
> __be16 proto;
> } encap[NF_FLOW_TABLE_ENCAP_MAX];
>
> And possibly more information will need to be stored for other
> layer 3 tunnel protocols.
>
> While this hash trick looks like an interesting approach, I am
> ambivalent.
>
> And one nitpick (typo) below...
ack, I will fix it in v7.
Regards,
Lorenzo