From: Daniel Borkmann <daniel@iogearbox.net>
To: martin.lau@kernel.org
Cc: kuba@kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
Daniel Borkmann <daniel@iogearbox.net>,
Nikolay Aleksandrov <razor@blackwall.org>
Subject: [PATCH bpf 4/6] bpf, netkit: Add indirect call wrapper for fetching peer dev
Date: Fri, 3 Nov 2023 23:27:46 +0100
Message-ID: <20231103222748.12551-5-daniel@iogearbox.net>
In-Reply-To: <20231103222748.12551-1-daniel@iogearbox.net>
ndo_get_peer_dev is used in the tcx BPF fast path, therefore make use of
an indirect call wrapper to optimize the bpf_redirect_peer() internal
handling a bit. Add a small skb_get_peer_dev() wrapper which uses the
INDIRECT_CALL_1() macro instead of open coding the indirect call.
Co-developed-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
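A short note on the mechanism, as a minimal sketch (simplified; see
include/linux/indirect_call_wrapper.h for the in-tree definitions, which
only take effect when retpolines are enabled and otherwise reduce to a
plain indirect call): INDIRECT_CALL_1() compares the function pointer
against the expected target and emits a direct call on a match, falling
back to the indirect call otherwise:

  /* Sketch: if the pointer equals the likely target f1, call f1
   * directly; otherwise take the (retpoline-guarded) indirect call
   * through f.
   */
  #define INDIRECT_CALL_1(f, f1, ...)                                   \
          ({                                                            \
                  likely(f == f1) ? f1(__VA_ARGS__) : f(__VA_ARGS__);   \
          })

In this patch, netkit_peer_dev() is declared as the expected target for
ndo_get_peer_dev on the tcx fast path.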
drivers/net/netkit.c | 3 ++-
include/net/netkit.h | 6 ++++++
net/core/filter.c | 18 +++++++++++++-----
3 files changed, 21 insertions(+), 6 deletions(-)
diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
index dc51c23b40f0..934c71a73b5c 100644
--- a/drivers/net/netkit.c
+++ b/drivers/net/netkit.c
@@ -7,6 +7,7 @@
#include <linux/filter.h>
#include <linux/netfilter_netdev.h>
#include <linux/bpf_mprog.h>
+#include <linux/indirect_call_wrapper.h>
#include <net/netkit.h>
#include <net/dst.h>
@@ -177,7 +178,7 @@ static void netkit_set_headroom(struct net_device *dev, int headroom)
rcu_read_unlock();
}
-static struct net_device *netkit_peer_dev(struct net_device *dev)
+INDIRECT_CALLABLE_SCOPE struct net_device *netkit_peer_dev(struct net_device *dev)
{
return rcu_dereference(netkit_priv(dev)->peer);
}
diff --git a/include/net/netkit.h b/include/net/netkit.h
index 0ba2e6b847ca..9ec0163739f4 100644
--- a/include/net/netkit.h
+++ b/include/net/netkit.h
@@ -10,6 +10,7 @@ int netkit_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int netkit_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int netkit_prog_detach(const union bpf_attr *attr, struct bpf_prog *prog);
int netkit_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr);
+INDIRECT_CALLABLE_DECLARE(struct net_device *netkit_peer_dev(struct net_device *dev));
#else
static inline int netkit_prog_attach(const union bpf_attr *attr,
struct bpf_prog *prog)
@@ -34,5 +35,10 @@ static inline int netkit_prog_query(const union bpf_attr *attr,
{
return -EINVAL;
}
+
+static inline struct net_device *netkit_peer_dev(struct net_device *dev)
+{
+ return NULL;
+}
#endif /* CONFIG_NETKIT */
#endif /* __NET_NETKIT_H */
diff --git a/net/core/filter.c b/net/core/filter.c
index 7aca28b7d0fd..dbf92b272022 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -81,6 +81,7 @@
#include <net/xdp.h>
#include <net/mptcp.h>
#include <net/netfilter/nf_conntrack_bpf.h>
+#include <net/netkit.h>
#include <linux/un.h>
#include "dev.h"
@@ -2468,6 +2469,16 @@ static const struct bpf_func_proto bpf_clone_redirect_proto = {
DEFINE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);
EXPORT_PER_CPU_SYMBOL_GPL(bpf_redirect_info);
+static struct net_device *skb_get_peer_dev(struct net_device *dev)
+{
+ const struct net_device_ops *ops = dev->netdev_ops;
+
+ if (likely(ops->ndo_get_peer_dev))
+ return INDIRECT_CALL_1(ops->ndo_get_peer_dev,
+ netkit_peer_dev, dev);
+ return NULL;
+}
+
int skb_do_redirect(struct sk_buff *skb)
{
struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
@@ -2481,12 +2492,9 @@ int skb_do_redirect(struct sk_buff *skb)
if (unlikely(!dev))
goto out_drop;
if (flags & BPF_F_PEER) {
- const struct net_device_ops *ops = dev->netdev_ops;
-
- if (unlikely(!ops->ndo_get_peer_dev ||
- !skb_at_tc_ingress(skb)))
+ if (unlikely(!skb_at_tc_ingress(skb)))
goto out_drop;
- dev = ops->ndo_get_peer_dev(dev);
+ dev = skb_get_peer_dev(dev);
if (unlikely(!dev ||
!(dev->flags & IFF_UP) ||
net_eq(net, dev_net(dev))))
--
2.34.1
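For context, a minimal tc/tcx BPF program exercising bpf_redirect_peer()
might look roughly like the sketch below; the ifindex value, section name
and program name are illustrative placeholders, not taken from this
series, and would need to match the actual host-side netkit (or veth)
device and loader setup:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Placeholder: ifindex of the host-side device whose peer (in the
   * target netns) should receive the packet.
   */
  #define HOST_DEV_IFINDEX 4

  SEC("tc")
  int redirect_to_peer(struct __sk_buff *skb)
  {
          /* Only meaningful at tc/tcx ingress; the device identified by
           * HOST_DEV_IFINDEX must implement ndo_get_peer_dev (e.g. netkit
           * or veth), otherwise skb_do_redirect() drops the packet.
           */
          return bpf_redirect_peer(HOST_DEV_IFINDEX, 0);
  }

  char _license[] SEC("license") = "GPL";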
Thread overview: 14+ messages
2023-11-03 22:27 [PATCH bpf 0/6] bpf_redirect_peer fixes Daniel Borkmann
2023-11-03 22:27 ` [PATCH bpf 1/6] netkit: Add tstats per-CPU traffic counters Daniel Borkmann
2023-11-06 21:28 ` Jakub Kicinski
2023-11-06 23:42 ` Daniel Borkmann
2023-11-03 22:27 ` [PATCH bpf 2/6] veth: Use " Daniel Borkmann
2023-11-03 22:27 ` [PATCH bpf 3/6] bpf: Fix dev's rx stats for bpf_redirect_peer traffic Daniel Borkmann
2023-11-03 22:27 ` Daniel Borkmann [this message]
2023-11-06 17:21 ` [PATCH bpf 4/6] bpf, netkit: Add indirect call wrapper for fetching peer dev Stanislav Fomichev
2023-11-06 18:21 ` Daniel Borkmann
2023-11-06 21:32 ` Jakub Kicinski
2023-11-06 23:44 ` Daniel Borkmann
2023-11-03 22:27 ` [PATCH bpf 5/6] selftests/bpf: De-veth-ize the tc_redirect test case Daniel Borkmann
2023-11-03 22:27 ` [PATCH bpf 6/6] selftests/bpf: Add netkit to tc_redirect selftest Daniel Borkmann
2023-11-06 17:22 ` [PATCH bpf 0/6] bpf_redirect_peer fixes Stanislav Fomichev