netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/2] bpf misc update
@ 2015-05-29 21:23 Daniel Borkmann
  2015-05-29 21:23 ` [PATCH net-next 1/2] ebpf: allow bpf_ktime_get_ns_proto also for networking Daniel Borkmann
  2015-05-29 21:23 ` [PATCH net-next 2/2] ebpf: misc core cleanup Daniel Borkmann
  0 siblings, 2 replies; 6+ messages in thread
From: Daniel Borkmann @ 2015-05-29 21:23 UTC (permalink / raw)
  To: davem; +Cc: ast, netdev, Daniel Borkmann

Daniel Borkmann (2):
  ebpf: allow bpf_ktime_get_ns_proto also for networking
  ebpf: misc core cleanup

 include/linux/bpf.h      |  1 +
 kernel/bpf/core.c        | 73 ++++++++++++++++++++++++++++--------------------
 kernel/bpf/helpers.c     | 47 ++++++++++++++++++++-----------
 kernel/trace/bpf_trace.c | 12 --------
 net/core/filter.c        |  2 ++
 5 files changed, 75 insertions(+), 60 deletions(-)

-- 
1.9.3

* [PATCH net-next 1/2] ebpf: allow bpf_ktime_get_ns_proto also for networking
  2015-05-29 21:23 [PATCH net-next 0/2] bpf misc update Daniel Borkmann
@ 2015-05-29 21:23 ` Daniel Borkmann
  2015-06-01  4:44   ` David Miller
  2015-05-29 21:23 ` [PATCH net-next 2/2] ebpf: misc core cleanup Daniel Borkmann
  1 sibling, 1 reply; 6+ messages in thread
From: Daniel Borkmann @ 2015-05-29 21:23 UTC (permalink / raw)
  To: davem; +Cc: ast, netdev, Daniel Borkmann, Ingo Molnar

As this is already exported from the tracing side via commit d9847d310ab4
("tracing: Allow BPF programs to call bpf_ktime_get_ns()"), we might as
well move it to the core, so that networking users can also make use of
it, e.g. to measure time deltas for certain flows between ingress and
egress.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Ingo Molnar <mingo@kernel.org>
---
 include/linux/bpf.h      |  1 +
 kernel/bpf/core.c        |  1 +
 kernel/bpf/helpers.c     | 13 +++++++++++++
 kernel/trace/bpf_trace.c | 12 ------------
 net/core/filter.c        |  2 ++
 5 files changed, 17 insertions(+), 12 deletions(-)
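
For illustration, a minimal sketch of a socket filter making use of the
helper once it is exposed, in the restricted-C style of samples/bpf. The
map layout, section names and the bpf_helpers.h declarations are
assumptions for this sketch, not part of the patch:

#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

/* Hypothetical single-slot array to stash the last timestamp seen. */
struct bpf_map_def SEC("maps") tstamp_map = {
	.type		= BPF_MAP_TYPE_ARRAY,
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(__u64),
	.max_entries	= 1,
};

SEC("socket")
int sock_tstamp(struct __sk_buff *skb)
{
	__u32 key = 0;
	__u64 now = bpf_ktime_get_ns();	/* monotonic clock, in ns */
	__u64 *last;

	last = bpf_map_lookup_elem(&tstamp_map, &key);
	if (last)
		*last = now;	/* user space can diff successive readings */

	return 0;	/* socket filter: pass no data to the socket */
}

/* bpf_ktime_get_ns() is gpl_only, so the program needs a GPL license. */
char _license[] SEC("license") = "GPL";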

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 8821b9a..0fb0e72 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -182,5 +182,6 @@ extern const struct bpf_func_proto bpf_map_delete_elem_proto;
 extern const struct bpf_func_proto bpf_get_prandom_u32_proto;
 extern const struct bpf_func_proto bpf_get_smp_processor_id_proto;
 extern const struct bpf_func_proto bpf_tail_call_proto;
+extern const struct bpf_func_proto bpf_ktime_get_ns_proto;
 
 #endif /* _LINUX_BPF_H */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index d44b25c..4548422 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -734,6 +734,7 @@ const struct bpf_func_proto bpf_map_delete_elem_proto __weak;
 
 const struct bpf_func_proto bpf_get_prandom_u32_proto __weak;
 const struct bpf_func_proto bpf_get_smp_processor_id_proto __weak;
+const struct bpf_func_proto bpf_ktime_get_ns_proto __weak;
 
 /* To execute LD_ABS/LD_IND instructions __bpf_prog_run() may call
  * skb_copy_bits(), so provide a weak definition of it for NET-less config.
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index bd7f598..b3aaabd 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -13,6 +13,7 @@
 #include <linux/rcupdate.h>
 #include <linux/random.h>
 #include <linux/smp.h>
+#include <linux/ktime.h>
 
 /* If kernel subsystem is allowing eBPF programs to call this function,
  * inside its own verifier_ops->get_func_proto() callback it should return
@@ -111,3 +112,15 @@ const struct bpf_func_proto bpf_get_smp_processor_id_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 };
+
+static u64 bpf_ktime_get_ns(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+	/* NMI safe access to clock monotonic */
+	return ktime_get_mono_fast_ns();
+}
+
+const struct bpf_func_proto bpf_ktime_get_ns_proto = {
+	.func		= bpf_ktime_get_ns,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+};
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 646445e..50c4015 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -79,18 +79,6 @@ static const struct bpf_func_proto bpf_probe_read_proto = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
-static u64 bpf_ktime_get_ns(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
-{
-	/* NMI safe access to clock monotonic */
-	return ktime_get_mono_fast_ns();
-}
-
-static const struct bpf_func_proto bpf_ktime_get_ns_proto = {
-	.func		= bpf_ktime_get_ns,
-	.gpl_only	= true,
-	.ret_type	= RET_INTEGER,
-};
-
 /*
  * limited trace_printk()
  * only %d %u %x %ld %lu %lx %lld %llu %llx %p conversion specifiers allowed
diff --git a/net/core/filter.c b/net/core/filter.c
index 3adcca6..2a7c70f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1423,6 +1423,8 @@ sk_filter_func_proto(enum bpf_func_id func_id)
 		return &bpf_get_smp_processor_id_proto;
 	case BPF_FUNC_tail_call:
 		return &bpf_tail_call_proto;
+	case BPF_FUNC_ktime_get_ns:
+		return &bpf_ktime_get_ns_proto;
 	default:
 		return NULL;
 	}
-- 
1.9.3

* [PATCH net-next 2/2] ebpf: misc core cleanup
  2015-05-29 21:23 [PATCH net-next 0/2] bpf misc update Daniel Borkmann
  2015-05-29 21:23 ` [PATCH net-next 1/2] ebpf: allow bpf_ktime_get_ns_proto also for networking Daniel Borkmann
@ 2015-05-29 21:23 ` Daniel Borkmann
  2015-05-29 23:25   ` Alexei Starovoitov
  2015-06-01  4:45   ` David Miller
  1 sibling, 2 replies; 6+ messages in thread
From: Daniel Borkmann @ 2015-05-29 21:23 UTC (permalink / raw)
  To: davem; +Cc: ast, netdev, Daniel Borkmann

Among other things, move bpf_tail_call_proto next to the remaining proto
definitions, improve the comments a bit (i.e. remove some obvious ones
where the code is already self-documenting and add objectives to others),
and simplify bpf_prog_array_compatible() a bit.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 kernel/bpf/core.c    | 72 ++++++++++++++++++++++++++++++----------------------
 kernel/bpf/helpers.c | 34 ++++++++++++-------------
 2 files changed, 58 insertions(+), 48 deletions(-)
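
For context, a rough sketch of the tail call usage that
bpf_prog_array_compatible() is guarding: every program placed into a
BPF_MAP_TYPE_PROG_ARRAY and reached via bpf_tail_call() has to match the
array owner's program type and JITed state. The map definition and helper
declarations below follow samples/bpf conventions and are assumptions,
not part of this patch:

#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") jmp_table = {
	.type		= BPF_MAP_TYPE_PROG_ARRAY,
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(__u32),
	.max_entries	= 8,
};

SEC("socket")
int sock_dispatch(struct __sk_buff *skb)
{
	/* Jump to the program installed in slot 1, if any. Programs in
	 * jmp_table must all be socket filters and share the same
	 * JITed/non-JITed state, which the compatibility check enforces.
	 */
	bpf_tail_call(skb, &jmp_table, 1);

	/* Fall through when the slot is empty or the tail call fails. */
	return 0;
}

char _license[] SEC("license") = "GPL";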

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 4548422..1e00aa3 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -26,9 +26,10 @@
 #include <linux/vmalloc.h>
 #include <linux/random.h>
 #include <linux/moduleloader.h>
-#include <asm/unaligned.h>
 #include <linux/bpf.h>
 
+#include <asm/unaligned.h>
+
 /* Registers */
 #define BPF_R0	regs[BPF_REG_0]
 #define BPF_R1	regs[BPF_REG_1]
@@ -62,6 +63,7 @@ void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb, int k, uns
 		ptr = skb_network_header(skb) + k - SKF_NET_OFF;
 	else if (k >= SKF_LL_OFF)
 		ptr = skb_mac_header(skb) + k - SKF_LL_OFF;
+
 	if (ptr >= skb->head && ptr + size <= skb_tail_pointer(skb))
 		return ptr;
 
@@ -176,15 +178,6 @@ noinline u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 	return 0;
 }
 
-const struct bpf_func_proto bpf_tail_call_proto = {
-	.func = NULL,
-	.gpl_only = false,
-	.ret_type = RET_VOID,
-	.arg1_type = ARG_PTR_TO_CTX,
-	.arg2_type = ARG_CONST_MAP_PTR,
-	.arg3_type = ARG_ANYTHING,
-};
-
 /**
  *	__bpf_prog_run - run eBPF program on a given context
  *	@ctx: is the data we are operating on
@@ -650,36 +643,35 @@ load_byte:
 		return 0;
 }
 
-void __weak bpf_int_jit_compile(struct bpf_prog *prog)
-{
-}
-
-bool bpf_prog_array_compatible(struct bpf_array *array, const struct bpf_prog *fp)
+bool bpf_prog_array_compatible(struct bpf_array *array,
+			       const struct bpf_prog *fp)
 {
-	if (array->owner_prog_type) {
-		if (array->owner_prog_type != fp->type)
-			return false;
-		if (array->owner_jited != fp->jited)
-			return false;
-	} else {
+	if (!array->owner_prog_type) {
+		/* There's no owner yet where we could check for
+		 * compatibility.
+		 */
 		array->owner_prog_type = fp->type;
 		array->owner_jited = fp->jited;
+
+		return true;
 	}
-	return true;
+
+	return array->owner_prog_type == fp->type &&
+	       array->owner_jited == fp->jited;
 }
 
-static int check_tail_call(const struct bpf_prog *fp)
+static int bpf_check_tail_call(const struct bpf_prog *fp)
 {
 	struct bpf_prog_aux *aux = fp->aux;
 	int i;
 
 	for (i = 0; i < aux->used_map_cnt; i++) {
+		struct bpf_map *map = aux->used_maps[i];
 		struct bpf_array *array;
-		struct bpf_map *map;
 
-		map = aux->used_maps[i];
 		if (map->map_type != BPF_MAP_TYPE_PROG_ARRAY)
 			continue;
+
 		array = container_of(map, struct bpf_array, map);
 		if (!bpf_prog_array_compatible(array, fp))
 			return -EINVAL;
@@ -689,22 +681,25 @@ static int check_tail_call(const struct bpf_prog *fp)
 }
 
 /**
- *	bpf_prog_select_runtime - select execution runtime for BPF program
+ *	bpf_prog_select_runtime - select exec runtime for BPF program
  *	@fp: bpf_prog populated with internal BPF program
  *
- * try to JIT internal BPF program, if JIT is not available select interpreter
- * BPF program will be executed via BPF_PROG_RUN() macro
+ * Try to JIT eBPF program, if JIT is not available, use interpreter.
+ * The BPF program will be executed via BPF_PROG_RUN() macro.
  */
 int bpf_prog_select_runtime(struct bpf_prog *fp)
 {
 	fp->bpf_func = (void *) __bpf_prog_run;
 
-	/* Probe if internal BPF can be JITed */
 	bpf_int_jit_compile(fp);
-	/* Lock whole bpf_prog as read-only */
 	bpf_prog_lock_ro(fp);
 
-	return check_tail_call(fp);
+	/* The tail call compatibility check can only be done at
+	 * this late stage as we need to determine, if we deal
+	 * with JITed or non JITed program concatenations and not
+	 * all eBPF JITs might immediately support all features.
+	 */
+	return bpf_check_tail_call(fp);
 }
 EXPORT_SYMBOL_GPL(bpf_prog_select_runtime);
 
@@ -736,6 +731,21 @@ const struct bpf_func_proto bpf_get_prandom_u32_proto __weak;
 const struct bpf_func_proto bpf_get_smp_processor_id_proto __weak;
 const struct bpf_func_proto bpf_ktime_get_ns_proto __weak;
 
+/* Always built-in helper functions. */
+const struct bpf_func_proto bpf_tail_call_proto = {
+	.func		= NULL,
+	.gpl_only	= false,
+	.ret_type	= RET_VOID,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+};
+
+/* For classic BPF JITs that don't implement bpf_int_jit_compile(). */
+void __weak bpf_int_jit_compile(struct bpf_prog *prog)
+{
+}
+
 /* To execute LD_ABS/LD_IND instructions __bpf_prog_run() may call
  * skb_copy_bits(), so provide a weak definition of it for NET-less config.
  */
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index b3aaabd..7ad5d88 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -45,11 +45,11 @@ static u64 bpf_map_lookup_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 }
 
 const struct bpf_func_proto bpf_map_lookup_elem_proto = {
-	.func = bpf_map_lookup_elem,
-	.gpl_only = false,
-	.ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL,
-	.arg1_type = ARG_CONST_MAP_PTR,
-	.arg2_type = ARG_PTR_TO_MAP_KEY,
+	.func		= bpf_map_lookup_elem,
+	.gpl_only	= false,
+	.ret_type	= RET_PTR_TO_MAP_VALUE_OR_NULL,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_MAP_KEY,
 };
 
 static u64 bpf_map_update_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
@@ -64,13 +64,13 @@ static u64 bpf_map_update_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 }
 
 const struct bpf_func_proto bpf_map_update_elem_proto = {
-	.func = bpf_map_update_elem,
-	.gpl_only = false,
-	.ret_type = RET_INTEGER,
-	.arg1_type = ARG_CONST_MAP_PTR,
-	.arg2_type = ARG_PTR_TO_MAP_KEY,
-	.arg3_type = ARG_PTR_TO_MAP_VALUE,
-	.arg4_type = ARG_ANYTHING,
+	.func		= bpf_map_update_elem,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_MAP_KEY,
+	.arg3_type	= ARG_PTR_TO_MAP_VALUE,
+	.arg4_type	= ARG_ANYTHING,
 };
 
 static u64 bpf_map_delete_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
@@ -84,11 +84,11 @@ static u64 bpf_map_delete_elem(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 }
 
 const struct bpf_func_proto bpf_map_delete_elem_proto = {
-	.func = bpf_map_delete_elem,
-	.gpl_only = false,
-	.ret_type = RET_INTEGER,
-	.arg1_type = ARG_CONST_MAP_PTR,
-	.arg2_type = ARG_PTR_TO_MAP_KEY,
+	.func		= bpf_map_delete_elem,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_CONST_MAP_PTR,
+	.arg2_type	= ARG_PTR_TO_MAP_KEY,
 };
 
 static u64 bpf_get_prandom_u32(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
-- 
1.9.3

* Re: [PATCH net-next 2/2] ebpf: misc core cleanup
  2015-05-29 21:23 ` [PATCH net-next 2/2] ebpf: misc core cleanup Daniel Borkmann
@ 2015-05-29 23:25   ` Alexei Starovoitov
  2015-06-01  4:45   ` David Miller
  1 sibling, 0 replies; 6+ messages in thread
From: Alexei Starovoitov @ 2015-05-29 23:25 UTC (permalink / raw)
  To: Daniel Borkmann, davem; +Cc: netdev

On 5/29/15 2:23 PM, Daniel Borkmann wrote:
> Among other things, move bpf_tail_call_proto next to the remaining proto
> definitions, improve the comments a bit (i.e. remove some obvious ones
> where the code is already self-documenting and add objectives to others),
> and simplify bpf_prog_array_compatible() a bit.
>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

bpf_check_tail_call() cleanup is nice. The rest won't hurt ;)
Acked-by: Alexei Starovoitov <ast@plumgrid.com>

* Re: [PATCH net-next 1/2] ebpf: allow bpf_ktime_get_ns_proto also for networking
  2015-05-29 21:23 ` [PATCH net-next 1/2] ebpf: allow bpf_ktime_get_ns_proto also for networking Daniel Borkmann
@ 2015-06-01  4:44   ` David Miller
  0 siblings, 0 replies; 6+ messages in thread
From: David Miller @ 2015-06-01  4:44 UTC (permalink / raw)
  To: daniel; +Cc: ast, netdev, mingo

From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 29 May 2015 23:23:06 +0200

> As this is already exported from the tracing side via commit d9847d310ab4
> ("tracing: Allow BPF programs to call bpf_ktime_get_ns()"), we might as
> well move it to the core, so that networking users can also make use of
> it, e.g. to measure time deltas for certain flows between ingress and
> egress.
> 
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

Applied.

* Re: [PATCH net-next 2/2] ebpf: misc core cleanup
  2015-05-29 21:23 ` [PATCH net-next 2/2] ebpf: misc core cleanup Daniel Borkmann
  2015-05-29 23:25   ` Alexei Starovoitov
@ 2015-06-01  4:45   ` David Miller
  1 sibling, 0 replies; 6+ messages in thread
From: David Miller @ 2015-06-01  4:45 UTC (permalink / raw)
  To: daniel; +Cc: ast, netdev

From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 29 May 2015 23:23:07 +0200

> Among other things, move bpf_tail_call_proto next to the remaining proto
> definitions, improve the comments a bit (i.e. remove some obvious ones
> where the code is already self-documenting and add objectives to others),
> and simplify bpf_prog_array_compatible() a bit.
> 
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

Applied.
