From: Yonghong Song <yhs@fb.com>
To: Andrii Nakryiko <andriin@fb.com>, <bpf@vger.kernel.org>,
	Martin KaFai Lau <kafai@fb.com>, <netdev@vger.kernel.org>
Cc: Alexei Starovoitov <ast@fb.com>,
	Daniel Borkmann <daniel@iogearbox.net>, <kernel-team@fb.com>
Subject: [RFC PATCH bpf-next v2 15/17] tools/bpf: selftests: add dumper programs for ipv6_route and netlink
Date: Wed, 15 Apr 2020 12:27:58 -0700	[thread overview]
Message-ID: <20200415192758.4084048-1-yhs@fb.com> (raw)
In-Reply-To: <20200415192740.4082659-1-yhs@fb.com>

This patch adds two bpf dumper programs, one for the netlink target and
one for the ipv6_route target; both follow the same skeleton, sketched
after the sample output below. On my VM, their output is identical to
/proc/net/netlink and /proc/net/ipv6_route respectively.

  $ cat /proc/net/netlink
  sk               Eth Pid        Groups   Rmem     Wmem     Dump  Locks    Drops    Inode
  000000002c42d58b 0   0          00000000 0        0        0     2        0        7
  00000000a4e8b5e1 0   1          00000551 0        0        0     2        0        18719
  00000000e1b1c195 4   0          00000000 0        0        0     2        0        16422
  000000007e6b29f9 6   0          00000000 0        0        0     2        0        16424
  ....
  00000000159a170d 15  1862       00000002 0        0        0     2        0        1886
  000000009aca4bc9 15  3918224839 00000002 0        0        0     2        0        19076
  00000000d0ab31d2 15  1          00000002 0        0        0     2        0        18683
  000000008398fb08 16  0          00000000 0        0        0     2        0        27
  $ cat /sys/kernel/bpfdump/netlink/my1
  sk               Eth Pid        Groups   Rmem     Wmem     Dump  Locks    Drops    Inode
  000000002c42d58b 0   0          00000000 0        0        0     2        0        7
  00000000a4e8b5e1 0   1          00000551 0        0        0     2        0        18719
  00000000e1b1c195 4   0          00000000 0        0        0     2        0        16422
  000000007e6b29f9 6   0          00000000 0        0        0     2        0        16424
  ....
  00000000159a170d 15  1862       00000002 0        0        0     2        0        1886
  000000009aca4bc9 15  3918224839 00000002 0        0        0     2        0        19076
  00000000d0ab31d2 15  1          00000002 0        0        0     2        0        18683
  000000008398fb08 16  0          00000000 0        0        0     2        0        27

  $ cat /proc/net/ipv6_route
  fe800000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 00000001     eth0
  00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200       lo
  00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001       lo
  fe80000000000000c04b03fffe7827ce 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001     eth0
  ff000000000000000000000000000000 08 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000003 00000000 00000001     eth0
  00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200       lo
  $ cat /sys/kernel/bpfdump/ipv6_route/my1
  fe800000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 00000001     eth0
  00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200       lo
  00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001       lo
  fe80000000000000c04b03fffe7827ce 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001     eth0
  ff000000000000000000000000000000 08 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000003 00000000 00000001     eth0
  00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200       lo
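
Both programs share the same basic shape: attach to a dumper target with a
SEC("dump/<target path>") section, pull the current object and the seq_file
out of the context, and emit text with bpf_seq_printf(). A minimal sketch of
that skeleton follows; the "foo" target and its struct bpfdump__foo context
are placeholders for illustration only, not a target added by this series.

  // SPDX-License-Identifier: GPL-2.0
  /* Minimal dumper skeleton distilled from the two programs below.
   * "foo" and struct bpfdump__foo are illustrative placeholders.
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  SEC("dump//sys/kernel/bpfdump/foo")
  int dump_foo(struct bpfdump__foo *ctx)
  {
  	static const char banner[] = "objs\n";
  	static const char fmt[] = "%lu\n";
  	struct seq_file *seq = ctx->meta->seq;

  	/* the object may be NULL (e.g. at the end of the dump);
  	 * nothing to print in that case
  	 */
  	if (ctx->foo == (void *)0)
  		return 0;

  	/* seq_num is 0 for the first object: print a header once */
  	if (ctx->meta->seq_num == 0)
  		bpf_seq_printf(seq, banner, sizeof(banner));

  	bpf_seq_printf(seq, fmt, sizeof(fmt), ctx->meta->seq_num);
  	return 0;
  }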

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/progs/bpfdump_ipv6_route.c  | 71 ++++++++++++++++
 .../selftests/bpf/progs/bpfdump_netlink.c     | 80 +++++++++++++++++++
 2 files changed, 151 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/bpfdump_ipv6_route.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpfdump_netlink.c

diff --git a/tools/testing/selftests/bpf/progs/bpfdump_ipv6_route.c b/tools/testing/selftests/bpf/progs/bpfdump_ipv6_route.c
new file mode 100644
index 000000000000..ff187577e2b5
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpfdump_ipv6_route.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_endian.h>
+
+char _license[] SEC("license") = "GPL";
+
+extern bool CONFIG_IPV6_SUBTREES __kconfig __weak;
+
+#define	RTF_GATEWAY		0x0002
+#define IFNAMSIZ		16
+#define fib_nh_gw_family        nh_common.nhc_gw_family
+#define fib_nh_gw6              nh_common.nhc_gw.ipv6
+#define fib_nh_dev              nh_common.nhc_dev
+
+SEC("dump//sys/kernel/bpfdump/ipv6_route")
+int dump_ipv6_route(struct bpfdump__ipv6_route *ctx)
+{
+	static const char fmt1[] = "%pi6 %02x ";
+	static const char fmt2[] = "%pi6 ";
+	static const char fmt3[] = "00000000000000000000000000000000 ";
+	static const char fmt4[] = "%08x %08x ";
+	static const char fmt5[] = "%8s\n";
+	static const char fmt6[] = "\n";
+	static const char fmt7[] = "00000000000000000000000000000000 00 ";
+	struct seq_file *seq = ctx->meta->seq;
+	struct fib6_info *rt = ctx->rt;
+	const struct net_device *dev;
+	struct fib6_nh *fib6_nh;
+	unsigned int flags;
+	struct nexthop *nh;
+
+	if (rt == (void *)0)
+		return 0;
+
+	fib6_nh = &rt->fib6_nh[0];
+	flags = rt->fib6_flags;
+
+	/* FIXME: nexthop_is_multipath is not handled here. */
+	nh = rt->nh;
+	if (rt->nh)
+		fib6_nh = &nh->nh_info->fib6_nh;
+
+	bpf_seq_printf(seq, fmt1, sizeof(fmt1), &rt->fib6_dst.addr,
+		       rt->fib6_dst.plen);
+
+	if (CONFIG_IPV6_SUBTREES)
+		bpf_seq_printf(seq, fmt1, sizeof(fmt1), &rt->fib6_src.addr,
+			       rt->fib6_src.plen);
+	else
+		bpf_seq_printf(seq, fmt7, sizeof(fmt7));
+
+	if (fib6_nh->fib_nh_gw_family) {
+		flags |= RTF_GATEWAY;
+		bpf_seq_printf(seq, fmt2, sizeof(fmt2), &fib6_nh->fib_nh_gw6);
+	} else {
+		bpf_seq_printf(seq, fmt3, sizeof(fmt3));
+	}
+
+	dev = fib6_nh->fib_nh_dev;
+	bpf_seq_printf(seq, fmt4, sizeof(fmt4), rt->fib6_metric, rt->fib6_ref.refs.counter);
+	bpf_seq_printf(seq, fmt4, sizeof(fmt4), 0, flags);
+	if (dev)
+		bpf_seq_printf(seq, fmt5, sizeof(fmt5), dev->name);
+	else
+		bpf_seq_printf(seq, fmt6, sizeof(fmt6));
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/bpfdump_netlink.c b/tools/testing/selftests/bpf/progs/bpfdump_netlink.c
new file mode 100644
index 000000000000..8a1aec0ba7d0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpfdump_netlink.c
@@ -0,0 +1,80 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_endian.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define sk_rmem_alloc	sk_backlog.rmem_alloc
+#define sk_refcnt	__sk_common.skc_refcnt
+
+#define offsetof(TYPE, MEMBER)  ((size_t)&((TYPE *)0)->MEMBER)
+#define container_of(ptr, type, member) ({                              \
+        void *__mptr = (void *)(ptr);                                   \
+        ((type *)(__mptr - offsetof(type, member))); })
+
+static inline struct inode *SOCK_INODE(struct socket *socket)
+{
+	return &container_of(socket, struct socket_alloc, socket)->vfs_inode;
+}
+
+SEC("dump//sys/kernel/bpfdump/netlink")
+int dump_netlink(struct bpfdump__netlink *ctx)
+{
+	static const char banner[] =
+		"sk               Eth Pid        Groups   "
+		"Rmem     Wmem     Dump  Locks    Drops    Inode\n";
+	static const char fmt1[] = "%pK %-3d ";
+	static const char fmt2[] = "%-10u %08x ";
+	static const char fmt3[] = "%-8d %-8d ";
+	static const char fmt4[] = "%-5d %-8d ";
+	static const char fmt5[] = "%-8u %-8lu\n";
+	struct seq_file *seq = ctx->meta->seq;
+	struct netlink_sock *nlk = ctx->sk;
+	unsigned long group, ino;
+	struct inode *inode;
+	struct socket *sk;
+	struct sock *s;
+
+	if (nlk == (void *)0)
+		return 0;
+
+	if (ctx->meta->seq_num == 0)
+		bpf_seq_printf(seq, banner, sizeof(banner));
+
+	s = &nlk->sk;
+	bpf_seq_printf(seq, fmt1, sizeof(fmt1), s, s->sk_protocol);
+
+	if (!nlk->groups)  {
+		group = 0;
+	} else {
+		/* FIXME: temporary use bpf_probe_read here, needs
+		 * verifier support to do direct access.
+		 */
+		bpf_probe_read(&group, sizeof(group), &nlk->groups[0]);
+	}
+	bpf_seq_printf(seq, fmt2, sizeof(fmt2), nlk->portid, (u32)group);
+
+
+	bpf_seq_printf(seq, fmt3, sizeof(fmt3), s->sk_rmem_alloc.counter,
+		       s->sk_wmem_alloc.refs.counter - 1);
+	bpf_seq_printf(seq, fmt4, sizeof(fmt4), nlk->cb_running,
+		       s->sk_refcnt.refs.counter);
+
+	sk = s->sk_socket;
+	if (!sk) {
+		ino = 0;
+	} else {
+		/* FIXME: container_of inside SOCK_INODE has a forced
+		 * type conversion, and direct access cannot be used
+		 * with current verifier.
+		 */
+		inode = SOCK_INODE(sk);
+		bpf_probe_read(&ino, sizeof(ino), &inode->i_ino);
+	}
+	bpf_seq_printf(seq, fmt5, sizeof(fmt5), s->sk_drops.counter, ino);
+
+	return 0;
+}
-- 
2.24.1

