* [PATCH v7 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp
@ 2014-03-08 23:15 Alexei Starovoitov
2014-03-08 23:15 ` [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: Alexei Starovoitov @ 2014-03-08 23:15 UTC (permalink / raw)
To: David S. Miller
Cc: Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt,
Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross,
Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei,
Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker,
Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven,
Christoph Hellwig, linux-kernel, netdev
Hi Dave,
V1 patches:
http://thread.gmane.org/gmane.linux.kernel/1605783
V2 patches:
http://thread.gmane.org/gmane.linux.kernel/1642325
V3 patches:
http://thread.gmane.org/gmane.linux.kernel/1656538
V4 summary:
- addressed Daniel's comments
- RFC for seccomp with extended BPF
- added extended BPF design doc
V5 summary:
- fixed commit one-liner, removed empty line
- added Hagen's ack
V6 summary:
- unrolled loop in populate_seccomp_data() to help gcc on arm
- removed empty line at the end of the file
- removed redundant (u32) cast in JSET
- fixed sk_convert_filter() to use BPF_RVAL instead of BPF_SRC for BPF_RET
- updated commit log
- added Daniel's Reviewed-by
- added Kees's Reviewed-by
V7 summary:
1/3:
- addressed Dave's feedback regarding typecasting:
  added a 'jited' flag to sk_filter and a union for bpf_func/bpf_func_ext
  (see the sketch below)
- added a comment to sk_run_filter_ext() about the ctx<->skb relation
- removed CPU-specific code from sk_run_filter() and sk_run_filter_ext();
  because of that, the revised arm32 cache-hit BPF micro-benchmark numbers
  are slightly slower, but the seccomp and cache-miss arm32 numbers stayed
  the same
2/3 and 3/3: no changes
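
The sk_filter change behind the 'jited' item looks roughly like this
(simplified from the include/linux/filter.h hunk in patch 1/3; unrelated
members are omitted):

struct sk_filter {
	atomic_t	refcnt;
	unsigned int	len;		/* number of filter blocks */
	unsigned int	jited:1;	/* set only when a JIT installed bpf_func */
	struct rcu_head	rcu;
	union {
		/* old entry point: skb + classic insns */
		unsigned int (*bpf_func)(const struct sk_buff *skb,
					 const struct sock_filter *fp);
		/* extended entry point: opaque ctx + extended insns */
		unsigned int (*bpf_func_ext)(void *ctx,
					     const struct sock_filter_ext *fp);
	};
	union {
		struct sock_filter	insns[0];
		struct sock_filter_ext	insns_ext[0];
		struct work_struct	work;
	};
};

With the flag in place the arch bpf_jit_free() paths test fp->jited instead
of comparing fp->bpf_func against sk_run_filter.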
x86_64, i386 and arm32 look clean.
Thanks!
Alexei Starovoitov (3):
filter: add Extended BPF interpreter and converter
seccomp: convert seccomp to use extended BPF
doc: filter: add Extended BPF documentation
Documentation/networking/filter.txt | 181 ++++++++
arch/arm/net/bpf_jit_32.c | 3 +-
arch/powerpc/net/bpf_jit_comp.c | 3 +-
arch/s390/net/bpf_jit_comp.c | 3 +-
arch/sparc/net/bpf_jit_comp.c | 3 +-
arch/x86/net/bpf_jit_comp.c | 3 +-
include/linux/filter.h | 16 +-
include/linux/netdevice.h | 1 +
include/linux/seccomp.h | 1 -
include/uapi/linux/filter.h | 33 +-
kernel/seccomp.c | 118 +++--
net/core/filter.c | 806 ++++++++++++++++++++++++++++++++++-
net/core/sysctl_net_core.c | 7 +
13 files changed, 1077 insertions(+), 101 deletions(-)
--
1.7.9.5
^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter
  2014-03-08 23:15 [PATCH v7 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp Alexei Starovoitov
@ 2014-03-08 23:15 ` Alexei Starovoitov
  2014-03-09 12:29   ` Daniel Borkmann
  ` (2 more replies)
  2014-03-08 23:15 ` [PATCH v7 net-next 2/3] seccomp: convert seccomp to use extended BPF Alexei Starovoitov
  2014-03-08 23:15 ` [PATCH v7 net-next 3/3] doc: filter: add Extended BPF documentation Alexei Starovoitov
  2 siblings, 3 replies; 19+ messages in thread
From: Alexei Starovoitov @ 2014-03-08 23:15 UTC (permalink / raw)
  To: David S. Miller
  Cc: Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt,
	Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross,
	Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei,
	Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker,
	Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven,
	Christoph Hellwig, linux-kernel, netdev

Extended BPF extends old BPF in the following ways:
- from 2 to 10 registers
  Original BPF has two registers (A and X) and hidden frame pointer.
  Extended BPF has ten registers and read-only frame pointer.
- from 32-bit registers to 64-bit registers
  semantics of old 32-bit ALU operations are preserved via 32-bit
  subregisters
- if (cond) jump_true; else jump_false;
  old BPF insns are replaced with:
  if (cond) jump_true; /* else fallthrough */
- adds signed > and >= insns
- 16 4-byte stack slots for register spill-fill replaced with
  up to 512 bytes of multi-use stack space
- introduces bpf_call insn and register passing convention for zero
  overhead calls from/to other kernel functions (not part of this patch)
- adds arithmetic right shift insn
- adds swab32/swab64 insns
- adds atomic_add insn
- old tax/txa insns are replaced with 'mov dst,src' insn

Extended BPF is designed to be JITed with one to one mapping, which
allows GCC/LLVM backends to generate optimized BPF code that performs
almost as fast as natively compiled code

sk_convert_filter() remaps old style insns into extended:
'sock_filter' instructions are remapped on the fly to
'sock_filter_ext' extended instructions when
sysctl net.core.bpf_ext_enable=1

Old filter comes through sk_attach_filter() or sk_unattached_filter_create()
	if (bpf_ext_enable) {
		convert to new
		sk_chk_filter() - check old bpf
		use sk_run_filter_ext() - new interpreter
	} else {
		sk_chk_filter() - check old bpf
		if (bpf_jit_enable)
			use old jit
		else
			use sk_run_filter() - old interpreter
	}

sk_run_filter_ext() interpreter is noticeably faster
than sk_run_filter() for two reasons:

1.fall-through jumps
  Old BPF jump instructions are forced to go either 'true' or 'false'
  branch which causes branch-miss penalty.
  Extended BPF jump instructions have one branch and fall-through,
  which fit CPU branch predictor logic better.
  'perf stat' shows drastic difference for branch-misses.

2.jump-threaded implementation of interpreter vs switch statement
  Instead of single tablejump at the top of 'switch' statement, GCC will
  generate multiple tablejump instructions, which helps CPU branch predictor

Performance of two BPF filters generated by libpcap was measured
on x86_64, i386 and arm32.

fprog #1 is taken from Documentation/networking/filter.txt:
tcpdump -i eth0 port 22 -dd

fprog #2 is taken from 'man tcpdump':
tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
  ((tcp[12]&0xf0)>>2)) != 0)' -dd

Other libpcap programs have similar performance differences.
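
The difference in point 2 can be illustrated with a toy interpreter. The
sketch below only illustrates the jump-threading idea and is not the kernel
code; it assumes gcc's computed-goto and statement-expression extensions,
which the in-kernel sk_run_filter_ext() relies on as well:

/* Toy jump-threaded interpreter: each handler ends with its own indirect
 * jump, so the CPU can learn per-opcode branch targets instead of funnelling
 * every instruction through the single tablejump of a 'switch' loop.
 */
#include <stdio.h>

enum { OP_ADD, OP_HALT };

struct insn {
	unsigned char code;
	int imm;
};

static long run(const struct insn *pc)
{
	static const void *jumptable[] = {
		[OP_ADD]  = &&do_add,
		[OP_HALT] = &&do_halt,
	};
	long acc = 0;

#define CONT ({ pc++; goto *jumptable[pc->code]; })
	goto *jumptable[pc->code];	/* dispatch the first insn */
do_add:
	acc += pc->imm;
	CONT;
do_halt:
	return acc;
#undef CONT
}

int main(void)
{
	const struct insn prog[] = { {OP_ADD, 2}, {OP_ADD, 40}, {OP_HALT, 0} };

	printf("%ld\n", run(prog));	/* prints 42 */
	return 0;
}

Compiled with -O2, gcc typically emits one indirect jump per handler here,
while the equivalent 'switch' loop funnels every iteration through a single
tablejump, which matches the branch-miss difference described above.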
Raw performance data from BPF micro-benchmark: SK_RUN_FILTER on same SKB (cache-hit) or 10k SKBs (cache-miss) time in nsec per call, smaller is better --x86_64-- fprog #1 fprog #1 fprog #2 fprog #2 cache-hit cache-miss cache-hit cache-miss old BPF 90 101 192 202 ext BPF 31 71 47 97 old BPF jit 12 34 17 44 ext BPF jit TBD --i386-- fprog #1 fprog #1 fprog #2 fprog #2 cache-hit cache-miss cache-hit cache-miss old BPF 107 136 227 252 ext BPF 40 119 69 172 --arm32-- fprog #1 fprog #1 fprog #2 fprog #2 cache-hit cache-miss cache-hit cache-miss old BPF 202 300 475 540 ext BPF 180 270 330 470 old BPF jit 26 182 37 202 new BPF jit TBD Tested with trinify BPF fuzzer Future work: 0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf 1. add extended BPF JIT for x86_64 2. add inband old/new demux and extended BPF verifier, so that new programs can be loaded through old sk_attach_filter() and sk_unattached_filter_create() interfaces 3. tracing filters systemtap-like with extended BPF 4. OVS with extended BPF 5. nftables with extended BPF Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> Reviewed-by: Daniel Borkmann <dborkman@redhat.com> --- I think typecasting fixes are minor, so I kept Daniel's and Hagen's rev-by/ack. arch/arm/net/bpf_jit_32.c | 3 +- arch/powerpc/net/bpf_jit_comp.c | 3 +- arch/s390/net/bpf_jit_comp.c | 3 +- arch/sparc/net/bpf_jit_comp.c | 3 +- arch/x86/net/bpf_jit_comp.c | 3 +- include/linux/filter.h | 16 +- include/linux/netdevice.h | 1 + include/uapi/linux/filter.h | 33 +- net/core/filter.c | 801 ++++++++++++++++++++++++++++++++++++++- net/core/sysctl_net_core.c | 7 + 10 files changed, 840 insertions(+), 33 deletions(-) diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c index 271b5e971568..e72ff51f4561 100644 --- a/arch/arm/net/bpf_jit_32.c +++ b/arch/arm/net/bpf_jit_32.c @@ -925,6 +925,7 @@ void bpf_jit_compile(struct sk_filter *fp) bpf_jit_dump(fp->len, alloc_size, 2, ctx.target); fp->bpf_func = (void *)ctx.target; + fp->jited = 1; out: kfree(ctx.offsets); return; @@ -932,7 +933,7 @@ out: void bpf_jit_free(struct sk_filter *fp) { - if (fp->bpf_func != sk_run_filter) + if (fp->jited) module_free(NULL, fp->bpf_func); kfree(fp); } diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c index 555034f8505e..c0c5fcb0736a 100644 --- a/arch/powerpc/net/bpf_jit_comp.c +++ b/arch/powerpc/net/bpf_jit_comp.c @@ -689,6 +689,7 @@ void bpf_jit_compile(struct sk_filter *fp) ((u64 *)image)[0] = (u64)code_base; ((u64 *)image)[1] = local_paca->kernel_toc; fp->bpf_func = (void *)image; + fp->jited = 1; } out: kfree(addrs); @@ -697,7 +698,7 @@ out: void bpf_jit_free(struct sk_filter *fp) { - if (fp->bpf_func != sk_run_filter) + if (fp->jited) module_free(NULL, fp->bpf_func); kfree(fp); } diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c index 708d60e40066..bf56fe51b5c1 100644 --- a/arch/s390/net/bpf_jit_comp.c +++ b/arch/s390/net/bpf_jit_comp.c @@ -877,6 +877,7 @@ void bpf_jit_compile(struct sk_filter *fp) if (jit.start) { set_memory_ro((unsigned long)header, header->pages); fp->bpf_func = (void *) jit.start; + fp->jited = 1; } out: kfree(addrs); @@ -887,7 +888,7 @@ void bpf_jit_free(struct sk_filter *fp) unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK; struct bpf_binary_header *header = (void *)addr; - if (fp->bpf_func == sk_run_filter) + if (!fp->jited) goto free_filter; set_memory_rw(addr, header->pages); module_free(NULL, header); diff --git 
a/arch/sparc/net/bpf_jit_comp.c b/arch/sparc/net/bpf_jit_comp.c index 01fe9946d388..8c01be66f67d 100644 --- a/arch/sparc/net/bpf_jit_comp.c +++ b/arch/sparc/net/bpf_jit_comp.c @@ -809,6 +809,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf]; if (image) { bpf_flush_icache(image, image + proglen); fp->bpf_func = (void *)image; + fp->jited = 1; } out: kfree(addrs); @@ -817,7 +818,7 @@ out: void bpf_jit_free(struct sk_filter *fp) { - if (fp->bpf_func != sk_run_filter) + if (fp->jited) module_free(NULL, fp->bpf_func); kfree(fp); } diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 4ed75dd81d05..7fa182cd3973 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -772,6 +772,7 @@ cond_branch: f_offset = addrs[i + filter[i].jf] - addrs[i]; bpf_flush_icache(header, image + proglen); set_memory_ro((unsigned long)header, header->pages); fp->bpf_func = (void *)image; + fp->jited = 1; } out: kfree(addrs); @@ -791,7 +792,7 @@ static void bpf_jit_free_deferred(struct work_struct *work) void bpf_jit_free(struct sk_filter *fp) { - if (fp->bpf_func != sk_run_filter) { + if (fp->jited) { INIT_WORK(&fp->work, bpf_jit_free_deferred); schedule_work(&fp->work); } else { diff --git a/include/linux/filter.h b/include/linux/filter.h index e568c8ef896b..0a9278258763 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -26,11 +26,17 @@ struct sk_filter { atomic_t refcnt; unsigned int len; /* Number of filter blocks */ + unsigned int jited:1; struct rcu_head rcu; - unsigned int (*bpf_func)(const struct sk_buff *skb, - const struct sock_filter *filter); + union { + unsigned int (*bpf_func)(const struct sk_buff *skb, + const struct sock_filter *fp); + unsigned int (*bpf_func_ext)(void *ctx, + const struct sock_filter_ext *fp); + }; union { struct sock_filter insns[0]; + struct sock_filter_ext insns_ext[0]; struct work_struct work; }; }; @@ -52,7 +58,11 @@ extern int sk_detach_filter(struct sock *sk); extern int sk_chk_filter(struct sock_filter *filter, unsigned int flen); extern int sk_get_filter(struct sock *sk, struct sock_filter __user *filter, unsigned len); extern void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to); +int sk_convert_filter(struct sock_filter *old_prog, int len, + struct sock_filter_ext *new_prog, int *p_new_len); +unsigned int sk_run_filter_ext(void *ctx, const struct sock_filter_ext *insn); +#define SK_RUN_FILTER(FILTER, SKB) (*FILTER->bpf_func)(SKB, FILTER->insns) #ifdef CONFIG_BPF_JIT #include <stdarg.h> #include <linux/linkage.h> @@ -70,7 +80,6 @@ static inline void bpf_jit_dump(unsigned int flen, unsigned int proglen, print_hex_dump(KERN_ERR, "JIT code: ", DUMP_PREFIX_OFFSET, 16, 1, image, proglen, false); } -#define SK_RUN_FILTER(FILTER, SKB) (*FILTER->bpf_func)(SKB, FILTER->insns) #else #include <linux/slab.h> static inline void bpf_jit_compile(struct sk_filter *fp) @@ -80,7 +89,6 @@ static inline void bpf_jit_free(struct sk_filter *fp) { kfree(fp); } -#define SK_RUN_FILTER(FILTER, SKB) sk_run_filter(SKB, FILTER->insns) #endif static inline int bpf_tell_extensions(void) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 1a869488b8ae..2c13d000389c 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -3054,6 +3054,7 @@ extern int netdev_max_backlog; extern int netdev_tstamp_prequeue; extern int weight_p; extern int bpf_jit_enable; +extern int bpf_ext_enable; bool netdev_has_upper_dev(struct net_device *dev, struct net_device *upper_dev); struct net_device 
*netdev_all_upper_get_next_dev_rcu(struct net_device *dev, diff --git a/include/uapi/linux/filter.h b/include/uapi/linux/filter.h index 8eb9ccaa5b48..4e98fe16ba88 100644 --- a/include/uapi/linux/filter.h +++ b/include/uapi/linux/filter.h @@ -1,5 +1,6 @@ /* * Linux Socket Filter Data Structures + * Extended BPF is Copyright (c) 2011-2014, PLUMgrid, http://plumgrid.com */ #ifndef _UAPI__LINUX_FILTER_H__ @@ -19,7 +20,7 @@ * Try and keep these values and structures similar to BSD, especially * the BPF code definitions which need to match so you can share filters */ - + struct sock_filter { /* Filter block */ __u16 code; /* Actual filter code */ __u8 jt; /* Jump true */ @@ -27,6 +28,14 @@ struct sock_filter { /* Filter block */ __u32 k; /* Generic multiuse field */ }; +struct sock_filter_ext { + __u8 code; /* opcode */ + __u8 a_reg:4; /* dest register */ + __u8 x_reg:4; /* source register */ + __s16 off; /* signed offset */ + __s32 imm; /* signed immediate constant */ +}; + struct sock_fprog { /* Required for SO_ATTACH_FILTER. */ unsigned short len; /* Number of filter blocks */ struct sock_filter __user *filter; @@ -45,12 +54,14 @@ struct sock_fprog { /* Required for SO_ATTACH_FILTER. */ #define BPF_JMP 0x05 #define BPF_RET 0x06 #define BPF_MISC 0x07 +#define BPF_ALU64 0x07 /* ld/ldx fields */ #define BPF_SIZE(code) ((code) & 0x18) #define BPF_W 0x00 #define BPF_H 0x08 #define BPF_B 0x10 +#define BPF_DW 0x18 #define BPF_MODE(code) ((code) & 0xe0) #define BPF_IMM 0x00 #define BPF_ABS 0x20 @@ -58,6 +69,7 @@ struct sock_fprog { /* Required for SO_ATTACH_FILTER. */ #define BPF_MEM 0x60 #define BPF_LEN 0x80 #define BPF_MSH 0xa0 +#define BPF_XADD 0xc0 /* exclusive add */ /* alu/jmp fields */ #define BPF_OP(code) ((code) & 0xf0) @@ -68,16 +80,24 @@ struct sock_fprog { /* Required for SO_ATTACH_FILTER. */ #define BPF_OR 0x40 #define BPF_AND 0x50 #define BPF_LSH 0x60 -#define BPF_RSH 0x70 +#define BPF_RSH 0x70 /* logical shift right */ #define BPF_NEG 0x80 #define BPF_MOD 0x90 #define BPF_XOR 0xa0 +#define BPF_MOV 0xb0 /* mov reg to reg */ +#define BPF_ARSH 0xc0 /* sign extending arithmetic shift right */ +#define BPF_BSWAP32 0xd0 /* swap lower 4 bytes of 64-bit register */ +#define BPF_BSWAP64 0xe0 /* swap all 8 bytes of 64-bit register */ #define BPF_JA 0x00 -#define BPF_JEQ 0x10 -#define BPF_JGT 0x20 -#define BPF_JGE 0x30 -#define BPF_JSET 0x40 +#define BPF_JEQ 0x10 /* jump == */ +#define BPF_JGT 0x20 /* GT is unsigned '>', JA in x86 */ +#define BPF_JGE 0x30 /* GE is unsigned '>=', JAE in x86 */ +#define BPF_JSET 0x40 /* if (A & X) */ +#define BPF_JNE 0x50 /* jump != */ +#define BPF_JSGT 0x60 /* SGT is signed '>', GT in x86 */ +#define BPF_JSGE 0x70 /* SGE is signed '>=', GE in x86 */ +#define BPF_CALL 0x80 /* function call */ #define BPF_SRC(code) ((code) & 0x08) #define BPF_K 0x00 #define BPF_X 0x08 @@ -134,5 +154,4 @@ struct sock_fprog { /* Required for SO_ATTACH_FILTER. */ #define SKF_NET_OFF (-0x100000) #define SKF_LL_OFF (-0x200000) - #endif /* _UAPI__LINUX_FILTER_H__ */ diff --git a/net/core/filter.c b/net/core/filter.c index ad30d626a5bd..ad6f7546ce64 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -1,5 +1,6 @@ /* * Linux Socket Filter - Kernel level socket filtering + * Extended BPF is Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com * * Author: * Jay Schulist <jschlst@samba.org> @@ -40,6 +41,8 @@ #include <linux/seccomp.h> #include <linux/if_vlan.h> +int bpf_ext_enable __read_mostly; + /* No hurry in this branch * * Exported for the bpf jit load helper. 
@@ -134,11 +137,7 @@ unsigned int sk_run_filter(const struct sk_buff *skb, * Process array of filter instructions. */ for (;; fentry++) { -#if defined(CONFIG_X86_32) -#define K (fentry->k) -#else const u32 K = fentry->k; -#endif switch (fentry->code) { case BPF_S_ALU_ADD_X: @@ -646,6 +645,7 @@ static int __sk_prepare_filter(struct sk_filter *fp) int err; fp->bpf_func = sk_run_filter; + fp->jited = 0; err = sk_chk_filter(fp->insns, fp->len); if (err) @@ -655,6 +655,84 @@ static int __sk_prepare_filter(struct sk_filter *fp) return 0; } +static int sk_prepare_filter_ext(struct sk_filter **pfp, + struct sock_fprog *fprog, struct sock *sk) +{ + unsigned int fsize = sizeof(struct sock_filter) * fprog->len; + struct sock_filter *old_prog; + unsigned int sk_fsize; + struct sk_filter *fp; + int new_len; + int err; + + BUILD_BUG_ON(sizeof(struct sock_filter) != + sizeof(struct sock_filter_ext)); + + /* store old program into buffer, since chk_filter will remap opcodes */ + old_prog = kmalloc(fsize, GFP_KERNEL); + if (!old_prog) + return -ENOMEM; + + if (sk) { + if (copy_from_user(old_prog, fprog->filter, fsize)) { + err = -EFAULT; + goto free_prog; + } + } else { + memcpy(old_prog, fprog->filter, fsize); + } + + /* calculate bpf_ext program length */ + err = sk_convert_filter(fprog->filter, fprog->len, NULL, &new_len); + if (err) + goto free_prog; + + sk_fsize = sk_filter_size(new_len); + /* allocate sk_filter to store bpf_ext program */ + if (sk) + fp = sock_kmalloc(sk, sk_fsize, GFP_KERNEL); + else + fp = kmalloc(sk_fsize, GFP_KERNEL); + if (!fp) { + err = -ENOMEM; + goto free_prog; + } + + /* remap sock_filter insns into sock_filter_ext insns */ + err = sk_convert_filter(old_prog, fprog->len, fp->insns_ext, &new_len); + if (err) + /* 2nd sk_convert_filter() can fail only if it fails + * to allocate memory, remapping must succeed + */ + goto free_fp; + + /* now chk_filter can overwrite old_prog while checking */ + err = sk_chk_filter(old_prog, fprog->len); + if (err) + goto free_fp; + + /* discard old prog */ + kfree(old_prog); + + atomic_set(&fp->refcnt, 1); + fp->len = new_len; + fp->jited = 0; + + /* sock_filter_ext insns must be executed by sk_run_filter_ext */ + fp->bpf_func_ext = sk_run_filter_ext; + + *pfp = fp; + return 0; +free_fp: + if (sk) + sock_kfree_s(sk, fp, sk_fsize); + else + kfree(fp); +free_prog: + kfree(old_prog); + return err; +} + /** * sk_unattached_filter_create - create an unattached filter * @fprog: the filter program @@ -676,6 +754,9 @@ int sk_unattached_filter_create(struct sk_filter **pfp, if (fprog->filter == NULL) return -EINVAL; + if (bpf_ext_enable) + return sk_prepare_filter_ext(pfp, fprog, NULL); + fp = kmalloc(sk_filter_size(fprog->len), GFP_KERNEL); if (!fp) return -ENOMEM; @@ -726,21 +807,27 @@ int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk) if (fprog->filter == NULL) return -EINVAL; - fp = sock_kmalloc(sk, sk_fsize, GFP_KERNEL); - if (!fp) - return -ENOMEM; - if (copy_from_user(fp->insns, fprog->filter, fsize)) { - sock_kfree_s(sk, fp, sk_fsize); - return -EFAULT; - } + if (bpf_ext_enable) { + err = sk_prepare_filter_ext(&fp, fprog, sk); + if (err) + return err; + } else { + fp = sock_kmalloc(sk, sk_fsize, GFP_KERNEL); + if (!fp) + return -ENOMEM; + if (copy_from_user(fp->insns, fprog->filter, fsize)) { + sock_kfree_s(sk, fp, sk_fsize); + return -EFAULT; + } - atomic_set(&fp->refcnt, 1); - fp->len = fprog->len; + atomic_set(&fp->refcnt, 1); + fp->len = fprog->len; - err = __sk_prepare_filter(fp); - if (err) { - sk_filter_uncharge(sk, fp); - 
return err; + err = __sk_prepare_filter(fp); + if (err) { + sk_filter_uncharge(sk, fp); + return err; + } } old_fp = rcu_dereference_protected(sk->sk_filter, @@ -882,3 +969,683 @@ out: release_sock(sk); return ret; } + +/** + * sk_convert_filter - convert filter program + * @old_prog: the filter program + * @len: the length of filter program + * @new_prog: buffer where converted program will be stored + * @p_new_len: pointer to store length of converted program + * + * remap 'sock_filter' style BPF instruction set to 'sock_filter_ext' style + * + * first, call sk_convert_filter(old_prog, len, NULL, &new_len) to calculate new + * program length in one pass + * + * then new_prog = kmalloc(sizeof(struct sock_filter_ext) * new_len); + * + * and call it again: sk_convert_filter(old_prog, len, new_prog, &new_len); + * to remap in two passes: 1st pass finds new jump offsets, 2nd pass remaps + */ +int sk_convert_filter(struct sock_filter *old_prog, int len, + struct sock_filter_ext *new_prog, int *p_new_len) +{ + struct sock_filter_ext *new_insn; + struct sock_filter *fp; + int *addrs = NULL; + int new_len = 0; + int pass = 0; + int tgt, i; + u8 bpf_src; + + if (len <= 0 || len >= BPF_MAXINSNS) + return -EINVAL; + + if (new_prog) { + addrs = kzalloc(len * sizeof(*addrs), GFP_KERNEL); + if (!addrs) + return -ENOMEM; + } + +do_pass: + new_insn = new_prog; + fp = old_prog; + for (i = 0; i < len; fp++, i++) { + struct sock_filter_ext tmp_insns[3] = {}; + struct sock_filter_ext *insn = tmp_insns; + + if (addrs) + addrs[i] = new_insn - new_prog; + + switch (fp->code) { + /* all arithmetic insns and skb loads map as-is */ + case BPF_ALU | BPF_ADD | BPF_X: + case BPF_ALU | BPF_ADD | BPF_K: + case BPF_ALU | BPF_SUB | BPF_X: + case BPF_ALU | BPF_SUB | BPF_K: + case BPF_ALU | BPF_AND | BPF_X: + case BPF_ALU | BPF_AND | BPF_K: + case BPF_ALU | BPF_OR | BPF_X: + case BPF_ALU | BPF_OR | BPF_K: + case BPF_ALU | BPF_LSH | BPF_X: + case BPF_ALU | BPF_LSH | BPF_K: + case BPF_ALU | BPF_RSH | BPF_X: + case BPF_ALU | BPF_RSH | BPF_K: + case BPF_ALU | BPF_XOR | BPF_X: + case BPF_ALU | BPF_XOR | BPF_K: + case BPF_ALU | BPF_MUL | BPF_X: + case BPF_ALU | BPF_MUL | BPF_K: + case BPF_ALU | BPF_DIV | BPF_X: + case BPF_ALU | BPF_DIV | BPF_K: + case BPF_ALU | BPF_MOD | BPF_X: + case BPF_ALU | BPF_MOD | BPF_K: + case BPF_ALU | BPF_NEG: + case BPF_LD | BPF_ABS | BPF_W: + case BPF_LD | BPF_ABS | BPF_H: + case BPF_LD | BPF_ABS | BPF_B: + case BPF_LD | BPF_IND | BPF_W: + case BPF_LD | BPF_IND | BPF_H: + case BPF_LD | BPF_IND | BPF_B: + insn->code = fp->code; + insn->a_reg = 6; + insn->x_reg = 7; + insn->imm = fp->k; + break; + + /* jump opcodes map as-is, but offsets need adjustment */ + case BPF_JMP | BPF_JA: + tgt = i + fp->k + 1; + insn->code = fp->code; +#define EMIT_JMP \ + do { \ + if (tgt >= len || tgt < 0) \ + goto err; \ + insn->off = addrs ? 
addrs[tgt] - addrs[i] - 1 : 0; \ + /* adjust pc relative offset for 2nd or 3rd insn */ \ + insn->off -= insn - tmp_insns; \ + } while (0) + + EMIT_JMP; + break; + + case BPF_JMP | BPF_JEQ | BPF_K: + case BPF_JMP | BPF_JEQ | BPF_X: + case BPF_JMP | BPF_JSET | BPF_K: + case BPF_JMP | BPF_JSET | BPF_X: + case BPF_JMP | BPF_JGT | BPF_K: + case BPF_JMP | BPF_JGT | BPF_X: + case BPF_JMP | BPF_JGE | BPF_K: + case BPF_JMP | BPF_JGE | BPF_X: + if (BPF_SRC(fp->code) == BPF_K && + (int)fp->k < 0) { + /* extended BPF immediates are signed, + * zero extend immediate into tmp register + * and use it in compare insn + */ + insn->code = BPF_ALU | BPF_MOV | BPF_K; + insn->a_reg = 2; + insn->imm = fp->k; + insn++; + + insn->a_reg = 6; + insn->x_reg = 2; + bpf_src = BPF_X; + } else { + insn->a_reg = 6; + insn->x_reg = 7; + insn->imm = fp->k; + bpf_src = BPF_SRC(fp->code); + } + /* common case where 'jump_false' is next insn */ + if (fp->jf == 0) { + insn->code = BPF_JMP | BPF_OP(fp->code) | + bpf_src; + tgt = i + fp->jt + 1; + EMIT_JMP; + break; + } + /* convert JEQ into JNE when 'jump_true' is next insn */ + if (fp->jt == 0 && BPF_OP(fp->code) == BPF_JEQ) { + insn->code = BPF_JMP | BPF_JNE | bpf_src; + tgt = i + fp->jf + 1; + EMIT_JMP; + break; + } + /* other jumps are mapped into two insns: Jxx and JA */ + tgt = i + fp->jt + 1; + insn->code = BPF_JMP | BPF_OP(fp->code) | bpf_src; + EMIT_JMP; + + insn++; + insn->code = BPF_JMP | BPF_JA; + tgt = i + fp->jf + 1; + EMIT_JMP; + break; + + /* ldxb 4*([14]&0xf) is remaped into 3 insns */ + case BPF_LDX | BPF_MSH | BPF_B: + insn->code = BPF_LD | BPF_ABS | BPF_B; + insn->a_reg = 7; + insn->imm = fp->k; + + insn++; + insn->code = BPF_ALU | BPF_AND | BPF_K; + insn->a_reg = 7; + insn->imm = 0xf; + + insn++; + insn->code = BPF_ALU | BPF_LSH | BPF_K; + insn->a_reg = 7; + insn->imm = 2; + break; + + /* RET_K, RET_A are remaped into 2 insns */ + case BPF_RET | BPF_A: + case BPF_RET | BPF_K: + insn->code = BPF_ALU | BPF_MOV | + (BPF_RVAL(fp->code) == BPF_K ? BPF_K : BPF_X); + insn->a_reg = 0; + insn->x_reg = 6; + insn->imm = fp->k; + + insn++; + insn->code = BPF_RET | BPF_K; + break; + + /* store to stack */ + case BPF_ST: + case BPF_STX: + insn->code = BPF_STX | BPF_MEM | BPF_W; + insn->a_reg = 10; + insn->x_reg = fp->code == BPF_ST ? 6 : 7; + insn->off = -(BPF_MEMWORDS - fp->k) * 4; + break; + + /* load from stack */ + case BPF_LD | BPF_MEM: + case BPF_LDX | BPF_MEM: + insn->code = BPF_LDX | BPF_MEM | BPF_W; + insn->a_reg = BPF_CLASS(fp->code) == BPF_LD ? 6 : 7; + insn->x_reg = 10; + insn->off = -(BPF_MEMWORDS - fp->k) * 4; + break; + + /* A = K or X = K */ + case BPF_LD | BPF_IMM: + case BPF_LDX | BPF_IMM: + insn->code = BPF_ALU | BPF_MOV | BPF_K; + insn->a_reg = BPF_CLASS(fp->code) == BPF_LD ? 6 : 7; + insn->imm = fp->k; + break; + + /* X = A */ + case BPF_MISC | BPF_TAX: + insn->code = BPF_ALU64 | BPF_MOV | BPF_X; + insn->a_reg = 7; + insn->x_reg = 6; + break; + + /* A = X */ + case BPF_MISC | BPF_TXA: + insn->code = BPF_ALU64 | BPF_MOV | BPF_X; + insn->a_reg = 6; + insn->x_reg = 7; + break; + + /* A = skb->len or X = skb->len */ + case BPF_LD | BPF_W | BPF_LEN: + case BPF_LDX | BPF_W | BPF_LEN: + insn->code = BPF_LDX | BPF_MEM | BPF_W; + insn->a_reg = BPF_CLASS(fp->code) == BPF_LD ? 
6 : 7; + insn->x_reg = 1; + insn->off = offsetof(struct sk_buff, len); + break; + + /* access seccomp_data fields */ + case BPF_LDX | BPF_ABS | BPF_W: + insn->code = BPF_LDX | BPF_MEM | BPF_W; + insn->a_reg = 6; + insn->x_reg = 1; + insn->off = fp->k; + break; + + default: + /* pr_err("unknown opcode %02x\n", fp->code); */ + goto err; + } + + insn++; + if (new_prog) { + memcpy(new_insn, tmp_insns, + sizeof(*insn) * (insn - tmp_insns)); + } + new_insn += insn - tmp_insns; + } + + if (!new_prog) { + /* only calculating new length */ + *p_new_len = new_insn - new_prog; + return 0; + } + + pass++; + if (new_len != new_insn - new_prog) { + new_len = new_insn - new_prog; + if (pass > 2) + goto err; + goto do_pass; + } + kfree(addrs); + if (*p_new_len != new_len) + /* inconsistent new program length */ + pr_err("sk_convert_filter() usage error\n"); + return 0; +err: + kfree(addrs); + return -EINVAL; +} + +/** + * sk_run_filter_ext - run an extended filter + * @ctx: buffer to run the filter on + * @insn: filter to apply + * + * Decode and execute extended BPF instructions. + * @ctx is the data we are operating on. + * @filter is the array of filter instructions. + */ +notrace u32 sk_run_filter_ext(void *ctx, const struct sock_filter_ext *insn) +{ + u64 stack[64]; + u64 regs[16]; + void *ptr; + u64 tmp; + int off; + +#define K insn->imm +#define A regs[insn->a_reg] +#define X regs[insn->x_reg] + +#define CONT ({insn++; goto select_insn; }) +#define CONT_JMP ({insn++; goto select_insn; }) +/* some compilers may need help: + * #define CONT_JMP ({insn++; goto *jumptable[insn->code]; }) + */ + + static const void *jumptable[256] = { + [0 ... 255] = &&default_label, +#define DL(A, B, C) [A|B|C] = &&A##_##B##_##C, + DL(BPF_ALU, BPF_ADD, BPF_X) + DL(BPF_ALU, BPF_ADD, BPF_K) + DL(BPF_ALU, BPF_SUB, BPF_X) + DL(BPF_ALU, BPF_SUB, BPF_K) + DL(BPF_ALU, BPF_AND, BPF_X) + DL(BPF_ALU, BPF_AND, BPF_K) + DL(BPF_ALU, BPF_OR, BPF_X) + DL(BPF_ALU, BPF_OR, BPF_K) + DL(BPF_ALU, BPF_LSH, BPF_X) + DL(BPF_ALU, BPF_LSH, BPF_K) + DL(BPF_ALU, BPF_RSH, BPF_X) + DL(BPF_ALU, BPF_RSH, BPF_K) + DL(BPF_ALU, BPF_XOR, BPF_X) + DL(BPF_ALU, BPF_XOR, BPF_K) + DL(BPF_ALU, BPF_MUL, BPF_X) + DL(BPF_ALU, BPF_MUL, BPF_K) + DL(BPF_ALU, BPF_MOV, BPF_X) + DL(BPF_ALU, BPF_MOV, BPF_K) + DL(BPF_ALU, BPF_DIV, BPF_X) + DL(BPF_ALU, BPF_DIV, BPF_K) + DL(BPF_ALU, BPF_MOD, BPF_X) + DL(BPF_ALU, BPF_MOD, BPF_K) + DL(BPF_ALU64, BPF_ADD, BPF_X) + DL(BPF_ALU64, BPF_ADD, BPF_K) + DL(BPF_ALU64, BPF_SUB, BPF_X) + DL(BPF_ALU64, BPF_SUB, BPF_K) + DL(BPF_ALU64, BPF_AND, BPF_X) + DL(BPF_ALU64, BPF_AND, BPF_K) + DL(BPF_ALU64, BPF_OR, BPF_X) + DL(BPF_ALU64, BPF_OR, BPF_K) + DL(BPF_ALU64, BPF_LSH, BPF_X) + DL(BPF_ALU64, BPF_LSH, BPF_K) + DL(BPF_ALU64, BPF_RSH, BPF_X) + DL(BPF_ALU64, BPF_RSH, BPF_K) + DL(BPF_ALU64, BPF_XOR, BPF_X) + DL(BPF_ALU64, BPF_XOR, BPF_K) + DL(BPF_ALU64, BPF_MUL, BPF_X) + DL(BPF_ALU64, BPF_MUL, BPF_K) + DL(BPF_ALU64, BPF_MOV, BPF_X) + DL(BPF_ALU64, BPF_MOV, BPF_K) + DL(BPF_ALU64, BPF_ARSH, BPF_X) + DL(BPF_ALU64, BPF_ARSH, BPF_K) + DL(BPF_ALU64, BPF_DIV, BPF_X) + DL(BPF_ALU64, BPF_DIV, BPF_K) + DL(BPF_ALU64, BPF_MOD, BPF_X) + DL(BPF_ALU64, BPF_MOD, BPF_K) + DL(BPF_ALU64, BPF_BSWAP32, BPF_X) + DL(BPF_ALU64, BPF_BSWAP64, BPF_X) + DL(BPF_ALU, BPF_NEG, 0) + DL(BPF_JMP, BPF_CALL, 0) + DL(BPF_JMP, BPF_JA, 0) + DL(BPF_JMP, BPF_JEQ, BPF_X) + DL(BPF_JMP, BPF_JEQ, BPF_K) + DL(BPF_JMP, BPF_JNE, BPF_X) + DL(BPF_JMP, BPF_JNE, BPF_K) + DL(BPF_JMP, BPF_JGT, BPF_X) + DL(BPF_JMP, BPF_JGT, BPF_K) + DL(BPF_JMP, BPF_JGE, BPF_X) + DL(BPF_JMP, BPF_JGE, BPF_K) + 
DL(BPF_JMP, BPF_JSGT, BPF_X) + DL(BPF_JMP, BPF_JSGT, BPF_K) + DL(BPF_JMP, BPF_JSGE, BPF_X) + DL(BPF_JMP, BPF_JSGE, BPF_K) + DL(BPF_JMP, BPF_JSET, BPF_X) + DL(BPF_JMP, BPF_JSET, BPF_K) + DL(BPF_STX, BPF_MEM, BPF_B) + DL(BPF_STX, BPF_MEM, BPF_H) + DL(BPF_STX, BPF_MEM, BPF_W) + DL(BPF_STX, BPF_MEM, BPF_DW) + DL(BPF_ST, BPF_MEM, BPF_B) + DL(BPF_ST, BPF_MEM, BPF_H) + DL(BPF_ST, BPF_MEM, BPF_W) + DL(BPF_ST, BPF_MEM, BPF_DW) + DL(BPF_LDX, BPF_MEM, BPF_B) + DL(BPF_LDX, BPF_MEM, BPF_H) + DL(BPF_LDX, BPF_MEM, BPF_W) + DL(BPF_LDX, BPF_MEM, BPF_DW) + DL(BPF_STX, BPF_XADD, BPF_W) +#ifdef CONFIG_64BIT + DL(BPF_STX, BPF_XADD, BPF_DW) +#endif + DL(BPF_LD, BPF_ABS, BPF_W) + DL(BPF_LD, BPF_ABS, BPF_H) + DL(BPF_LD, BPF_ABS, BPF_B) + DL(BPF_LD, BPF_IND, BPF_W) + DL(BPF_LD, BPF_IND, BPF_H) + DL(BPF_LD, BPF_IND, BPF_B) + DL(BPF_RET, BPF_K, 0) +#undef DL + }; + + regs[10/* BPF R10 */] = (u64)(ulong)&stack[64]; + regs[1/* BPF R1 */] = (u64)(ulong)ctx; + + /* execute 1st insn */ +select_insn: + goto *jumptable[insn->code]; + + /* ALU */ +#define ALU(OPCODE, OP) \ + BPF_ALU64_##OPCODE##_BPF_X: \ + A = A OP X; \ + CONT; \ + BPF_ALU_##OPCODE##_BPF_X: \ + A = (u32)A OP (u32)X; \ + CONT; \ + BPF_ALU64_##OPCODE##_BPF_K: \ + A = A OP K; \ + CONT; \ + BPF_ALU_##OPCODE##_BPF_K: \ + A = (u32)A OP (u32)K; \ + CONT; + + ALU(BPF_ADD, +) + ALU(BPF_SUB, -) + ALU(BPF_AND, &) + ALU(BPF_OR, |) + ALU(BPF_LSH, <<) + ALU(BPF_RSH, >>) + ALU(BPF_XOR, ^) + ALU(BPF_MUL, *) +#undef ALU + +BPF_ALU_BPF_NEG_0: + A = (u32)-A; + CONT; +BPF_ALU_BPF_MOV_BPF_X: + A = (u32)X; + CONT; +BPF_ALU_BPF_MOV_BPF_K: + A = (u32)K; + CONT; +BPF_ALU64_BPF_MOV_BPF_X: + A = X; + CONT; +BPF_ALU64_BPF_MOV_BPF_K: + A = K; + CONT; +BPF_ALU64_BPF_ARSH_BPF_X: + (*(s64 *) &A) >>= X; + CONT; +BPF_ALU64_BPF_ARSH_BPF_K: + (*(s64 *) &A) >>= K; + CONT; +BPF_ALU64_BPF_MOD_BPF_X: + tmp = A; + if (X) + A = do_div(tmp, X); + CONT; +BPF_ALU_BPF_MOD_BPF_X: + tmp = (u32)A; + if (X) + A = do_div(tmp, (u32)X); + CONT; +BPF_ALU64_BPF_MOD_BPF_K: + tmp = A; + if (K) + A = do_div(tmp, K); + CONT; +BPF_ALU_BPF_MOD_BPF_K: + tmp = (u32)A; + if (K) + A = do_div(tmp, (u32)K); + CONT; +BPF_ALU64_BPF_DIV_BPF_X: + if (X) + do_div(A, X); + CONT; +BPF_ALU_BPF_DIV_BPF_X: + tmp = (u32)A; + if (X) + do_div(tmp, (u32)X); + A = (u32)tmp; + CONT; +BPF_ALU64_BPF_DIV_BPF_K: + if (K) + do_div(A, K); + CONT; +BPF_ALU_BPF_DIV_BPF_K: + tmp = (u32)A; + if (K) + do_div(tmp, (u32)K); + A = (u32)tmp; + CONT; +BPF_ALU64_BPF_BSWAP32_BPF_X: + A = swab32(A); + CONT; +BPF_ALU64_BPF_BSWAP64_BPF_X: + A = swab64(A); + CONT; + + /* CALL */ +BPF_JMP_BPF_CALL_0: + return 0; /* not implemented yet */ + + /* JMP */ +BPF_JMP_BPF_JA_0: + insn += insn->off; + CONT; +BPF_JMP_BPF_JEQ_BPF_X: + if (A == X) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JEQ_BPF_K: + if (A == K) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JNE_BPF_X: + if (A != X) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JNE_BPF_K: + if (A != K) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JGT_BPF_X: + if (A > X) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JGT_BPF_K: + if (A > K) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JGE_BPF_X: + if (A >= X) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JGE_BPF_K: + if (A >= K) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JSGT_BPF_X: + if (((s64)A) > ((s64)X)) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JSGT_BPF_K: + if (((s64)A) > ((s64)K)) { + insn += insn->off; + CONT_JMP; + } + CONT; 
+BPF_JMP_BPF_JSGE_BPF_X: + if (((s64)A) >= ((s64)X)) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JSGE_BPF_K: + if (((s64)A) >= ((s64)K)) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JSET_BPF_X: + if (A & X) { + insn += insn->off; + CONT_JMP; + } + CONT; +BPF_JMP_BPF_JSET_BPF_K: + if (A & K) { + insn += insn->off; + CONT_JMP; + } + CONT; + + /* STX and ST and LDX*/ +#define LDST(SIZEOP, SIZE) \ + BPF_STX_BPF_MEM_##SIZEOP: \ + *(SIZE *)(ulong)(A + insn->off) = X; \ + CONT; \ + BPF_ST_BPF_MEM_##SIZEOP: \ + *(SIZE *)(ulong)(A + insn->off) = K; \ + CONT; \ + BPF_LDX_BPF_MEM_##SIZEOP: \ + A = *(SIZE *)(ulong)(X + insn->off); \ + CONT; + + LDST(BPF_B, u8) + LDST(BPF_H, u16) + LDST(BPF_W, u32) + LDST(BPF_DW, u64) +#undef LDST + +BPF_STX_BPF_XADD_BPF_W: /* lock xadd *(u32 *)(A + insn->off) += X */ + atomic_add((u32)X, (atomic_t *)(ulong)(A + insn->off)); + CONT; +#ifdef CONFIG_64BIT +BPF_STX_BPF_XADD_BPF_DW: /* lock xadd *(u64 *)(A + insn->off) += X */ + atomic64_add((u64)X, (atomic64_t *)(ulong)(A + insn->off)); + CONT; +#endif + +BPF_LD_BPF_ABS_BPF_W: /* A = *(u32 *)(SKB + K) */ + off = K; +load_word: + /* sk_convert_filter() and sk_chk_filter_ext() will make sure + * that BPF_LD+BPD_ABS and BPF_LD+BPF_IND insns are only + * appearing in the programs where ctx == skb + */ + ptr = load_pointer((struct sk_buff *)ctx, off, 4, &tmp); + if (likely(ptr != NULL)) { + A = get_unaligned_be32(ptr); + CONT; + } + return 0; + +BPF_LD_BPF_ABS_BPF_H: /* A = *(u16 *)(SKB + K) */ + off = K; +load_half: + ptr = load_pointer((struct sk_buff *)ctx, off, 2, &tmp); + if (likely(ptr != NULL)) { + A = get_unaligned_be16(ptr); + CONT; + } + return 0; + +BPF_LD_BPF_ABS_BPF_B: /* A = *(u8 *)(SKB + K) */ + off = K; +load_byte: + ptr = load_pointer((struct sk_buff *)ctx, off, 1, &tmp); + if (likely(ptr != NULL)) { + A = *(u8 *)ptr; + CONT; + } + return 0; + +BPF_LD_BPF_IND_BPF_W: /* A = *(u32 *)(SKB + X + K) */ + off = K + X; + goto load_word; + +BPF_LD_BPF_IND_BPF_H: /* A = *(u16 *)(SKB + X + K) */ + off = K + X; + goto load_half; + +BPF_LD_BPF_IND_BPF_B: /* A = *(u8 *)(SKB + X + K) */ + off = K + X; + goto load_byte; + + /* RET */ +BPF_RET_BPF_K_0: + return regs[0/* R0 */]; + +default_label: + /* sk_chk_filter_ext() and sk_convert_filter() guarantee + * that we never reach here + */ + WARN_RATELIMIT(1, "unknown opcode %02x\n", insn->code); + return 0; +#undef CONT +#undef A +#undef X +#undef K +#undef LOAD_IMM +} +EXPORT_SYMBOL(sk_run_filter_ext); diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c index cf9cd13509a7..e1b979312588 100644 --- a/net/core/sysctl_net_core.c +++ b/net/core/sysctl_net_core.c @@ -273,6 +273,13 @@ static struct ctl_table net_core_table[] = { }, #endif { + .procname = "bpf_ext_enable", + .data = &bpf_ext_enable, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec + }, + { .procname = "netdev_tstamp_prequeue", .data = &netdev_tstamp_prequeue, .maxlen = sizeof(int), -- 1.7.9.5 ^ permalink raw reply related [flat|nested] 19+ messages in thread
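
For reference, a classic filter reaches sk_attach_filter() through the
existing SO_ATTACH_FILTER setsockopt. A minimal user-space sketch (a trivial
accept-all program on a raw packet socket, so it needs CAP_NET_RAW; error
paths trimmed) that gets converted on the fly once net.core.bpf_ext_enable=1:

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/filter.h>

int main(void)
{
	/* classic BPF: "ret #0xffff", i.e. accept the whole packet */
	struct sock_filter insns[] = {
		{ BPF_RET | BPF_K, 0, 0, 0xffff },
	};
	struct sock_fprog prog = {
		.len    = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};
	int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	/* with bpf_ext_enable=1 the kernel remaps this program to
	 * sock_filter_ext internally; user space stays unchanged */
	if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
		       &prog, sizeof(prog)) < 0) {
		perror("SO_ATTACH_FILTER");
		return 1;
	}
	return 0;
}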
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-08 23:15 ` [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov @ 2014-03-09 12:29 ` Daniel Borkmann 2014-03-09 17:08 ` Alexei Starovoitov 2014-03-09 14:45 ` Eric Dumazet 2014-03-09 14:49 ` Eric Dumazet 2 siblings, 1 reply; 19+ messages in thread From: Daniel Borkmann @ 2014-03-09 12:29 UTC (permalink / raw) To: Alexei Starovoitov Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, linux-kernel, netdev On 03/09/2014 12:15 AM, Alexei Starovoitov wrote: > Extended BPF extends old BPF in the following ways: > - from 2 to 10 registers > Original BPF has two registers (A and X) and hidden frame pointer. > Extended BPF has ten registers and read-only frame pointer. > - from 32-bit registers to 64-bit registers > semantics of old 32-bit ALU operations are preserved via 32-bit > subregisters > - if (cond) jump_true; else jump_false; > old BPF insns are replaced with: > if (cond) jump_true; /* else fallthrough */ > - adds signed > and >= insns > - 16 4-byte stack slots for register spill-fill replaced with > up to 512 bytes of multi-use stack space > - introduces bpf_call insn and register passing convention for zero > overhead calls from/to other kernel functions (not part of this patch) > - adds arithmetic right shift insn > - adds swab32/swab64 insns > - adds atomic_add insn > - old tax/txa insns are replaced with 'mov dst,src' insn > > Extended BPF is designed to be JITed with one to one mapping, which > allows GCC/LLVM backends to generate optimized BPF code that performs > almost as fast as natively compiled code > > sk_convert_filter() remaps old style insns into extended: > 'sock_filter' instructions are remapped on the fly to > 'sock_filter_ext' extended instructions when > sysctl net.core.bpf_ext_enable=1 > > Old filter comes through sk_attach_filter() or sk_unattached_filter_create() > if (bpf_ext_enable) { > convert to new > sk_chk_filter() - check old bpf > use sk_run_filter_ext() - new interpreter > } else { > sk_chk_filter() - check old bpf > if (bpf_jit_enable) > use old jit > else > use sk_run_filter() - old interpreter > } > > sk_run_filter_ext() interpreter is noticeably faster > than sk_run_filter() for two reasons: > > 1.fall-through jumps > Old BPF jump instructions are forced to go either 'true' or 'false' > branch which causes branch-miss penalty. > Extended BPF jump instructions have one branch and fall-through, > which fit CPU branch predictor logic better. > 'perf stat' shows drastic difference for branch-misses. > > 2.jump-threaded implementation of interpreter vs switch statement > Instead of single tablejump at the top of 'switch' statement, GCC will > generate multiple tablejump instructions, which helps CPU branch predictor > > Performance of two BPF filters generated by libpcap was measured > on x86_64, i386 and arm32. > > fprog #1 is taken from Documentation/networking/filter.txt: > tcpdump -i eth0 port 22 -dd > > fprog #2 is taken from 'man tcpdump': > tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - > ((tcp[12]&0xf0)>>2)) != 0)' -dd > > Other libpcap programs have similar performance differences. 
> > Raw performance data from BPF micro-benchmark: > SK_RUN_FILTER on same SKB (cache-hit) or 10k SKBs (cache-miss) > time in nsec per call, smaller is better > --x86_64-- > fprog #1 fprog #1 fprog #2 fprog #2 > cache-hit cache-miss cache-hit cache-miss > old BPF 90 101 192 202 > ext BPF 31 71 47 97 > old BPF jit 12 34 17 44 > ext BPF jit TBD > > --i386-- > fprog #1 fprog #1 fprog #2 fprog #2 > cache-hit cache-miss cache-hit cache-miss > old BPF 107 136 227 252 > ext BPF 40 119 69 172 > > --arm32-- > fprog #1 fprog #1 fprog #2 fprog #2 > cache-hit cache-miss cache-hit cache-miss > old BPF 202 300 475 540 > ext BPF 180 270 330 470 > old BPF jit 26 182 37 202 > new BPF jit TBD > > Tested with trinify BPF fuzzer > > Future work: > > 0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf > > 1. add extended BPF JIT for x86_64 > > 2. add inband old/new demux and extended BPF verifier, so that new programs > can be loaded through old sk_attach_filter() and sk_unattached_filter_create() > interfaces > > 3. tracing filters systemtap-like with extended BPF > > 4. OVS with extended BPF > > 5. nftables with extended BPF > > Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> > Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> > Reviewed-by: Daniel Borkmann <dborkman@redhat.com> One more question or possible issue that came through my mind: When someone attaches a socket filter from user space, and bpf_ext_enable=1 then the old filter will transparently be converted to the new representation. If then user space (e.g. through checkpoint restore) will issue a sk_get_filter() and thus we're calling sk_decode_filter() on sk->sk_filter and, therefore, try to decode what we stored in insns_ext[] with the assumption we still have the old code. Would that actually crash (or leak memory, or just return garbage), as we access decodes[] array with filt->code? Would be great if you could double-check. The assumption with sk_get_filter() is that it returns the same filter that was previously attached, so that it can be re-attached again at a later point in time. Cheers, Daniel ^ permalink raw reply [flat|nested] 19+ messages in thread
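
For context, the path Daniel refers to is reached from user space roughly as
below. The sketch follows the SO_GET_FILTER semantics introduced for
checkpoint/restore in a8fc92778080 (first call with a zero length returns the
number of filter blocks, the second call copies the decoded program); error
handling is trimmed:

#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <linux/filter.h>

#ifndef SO_GET_FILTER
#define SO_GET_FILTER SO_ATTACH_FILTER	/* alias, for older userspace headers */
#endif

/* dump the classic BPF program attached to 'fd', the way a
 * checkpoint/restore tool would */
int dump_filter(int fd)
{
	struct sock_filter *insns;
	socklen_t len = 0;
	unsigned int i;

	/* first call: len == 0, kernel reports the number of blocks */
	if (getsockopt(fd, SOL_SOCKET, SO_GET_FILTER, NULL, &len) < 0)
		return -1;

	insns = calloc(len, sizeof(*insns));
	if (!insns)
		return -1;

	/* second call: kernel runs sk_get_filter()/sk_decode_filter()
	 * and copies the program back */
	if (getsockopt(fd, SOL_SOCKET, SO_GET_FILTER, insns, &len) < 0) {
		free(insns);
		return -1;
	}
	for (i = 0; i < len; i++)
		printf("{ 0x%02x, %u, %u, 0x%08x },\n",
		       insns[i].code, insns[i].jt, insns[i].jf, insns[i].k);
	free(insns);
	return 0;
}

If sk->sk_filter holds converted sock_filter_ext instructions, the decode
step in the second call is exactly where the mismatch Daniel describes would
show up.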
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 12:29 ` Daniel Borkmann @ 2014-03-09 17:08 ` Alexei Starovoitov 2014-03-09 22:00 ` Daniel Borkmann 0 siblings, 1 reply; 19+ messages in thread From: Alexei Starovoitov @ 2014-03-09 17:08 UTC (permalink / raw) To: Daniel Borkmann Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On Sun, Mar 9, 2014 at 5:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote: > On 03/09/2014 12:15 AM, Alexei Starovoitov wrote: >> >> Extended BPF extends old BPF in the following ways: >> - from 2 to 10 registers >> Original BPF has two registers (A and X) and hidden frame pointer. >> Extended BPF has ten registers and read-only frame pointer. >> - from 32-bit registers to 64-bit registers >> semantics of old 32-bit ALU operations are preserved via 32-bit >> subregisters >> - if (cond) jump_true; else jump_false; >> old BPF insns are replaced with: >> if (cond) jump_true; /* else fallthrough */ >> - adds signed > and >= insns >> - 16 4-byte stack slots for register spill-fill replaced with >> up to 512 bytes of multi-use stack space >> - introduces bpf_call insn and register passing convention for zero >> overhead calls from/to other kernel functions (not part of this patch) >> - adds arithmetic right shift insn >> - adds swab32/swab64 insns >> - adds atomic_add insn >> - old tax/txa insns are replaced with 'mov dst,src' insn >> >> Extended BPF is designed to be JITed with one to one mapping, which >> allows GCC/LLVM backends to generate optimized BPF code that performs >> almost as fast as natively compiled code >> >> sk_convert_filter() remaps old style insns into extended: >> 'sock_filter' instructions are remapped on the fly to >> 'sock_filter_ext' extended instructions when >> sysctl net.core.bpf_ext_enable=1 >> >> Old filter comes through sk_attach_filter() or >> sk_unattached_filter_create() >> if (bpf_ext_enable) { >> convert to new >> sk_chk_filter() - check old bpf >> use sk_run_filter_ext() - new interpreter >> } else { >> sk_chk_filter() - check old bpf >> if (bpf_jit_enable) >> use old jit >> else >> use sk_run_filter() - old interpreter >> } >> >> sk_run_filter_ext() interpreter is noticeably faster >> than sk_run_filter() for two reasons: >> >> 1.fall-through jumps >> Old BPF jump instructions are forced to go either 'true' or 'false' >> branch which causes branch-miss penalty. >> Extended BPF jump instructions have one branch and fall-through, >> which fit CPU branch predictor logic better. >> 'perf stat' shows drastic difference for branch-misses. >> >> 2.jump-threaded implementation of interpreter vs switch statement >> Instead of single tablejump at the top of 'switch' statement, GCC will >> generate multiple tablejump instructions, which helps CPU branch >> predictor >> >> Performance of two BPF filters generated by libpcap was measured >> on x86_64, i386 and arm32. >> >> fprog #1 is taken from Documentation/networking/filter.txt: >> tcpdump -i eth0 port 22 -dd >> >> fprog #2 is taken from 'man tcpdump': >> tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - >> ((tcp[12]&0xf0)>>2)) != 0)' -dd >> >> Other libpcap programs have similar performance differences. 
>> >> Raw performance data from BPF micro-benchmark: >> SK_RUN_FILTER on same SKB (cache-hit) or 10k SKBs (cache-miss) >> time in nsec per call, smaller is better >> --x86_64-- >> fprog #1 fprog #1 fprog #2 fprog #2 >> cache-hit cache-miss cache-hit cache-miss >> old BPF 90 101 192 202 >> ext BPF 31 71 47 97 >> old BPF jit 12 34 17 44 >> ext BPF jit TBD >> >> --i386-- >> fprog #1 fprog #1 fprog #2 fprog #2 >> cache-hit cache-miss cache-hit cache-miss >> old BPF 107 136 227 252 >> ext BPF 40 119 69 172 >> >> --arm32-- >> fprog #1 fprog #1 fprog #2 fprog #2 >> cache-hit cache-miss cache-hit cache-miss >> old BPF 202 300 475 540 >> ext BPF 180 270 330 470 >> old BPF jit 26 182 37 202 >> new BPF jit TBD >> >> Tested with trinify BPF fuzzer >> >> Future work: >> >> 0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf >> >> 1. add extended BPF JIT for x86_64 >> >> 2. add inband old/new demux and extended BPF verifier, so that new >> programs >> can be loaded through old sk_attach_filter() and >> sk_unattached_filter_create() >> interfaces >> >> 3. tracing filters systemtap-like with extended BPF >> >> 4. OVS with extended BPF >> >> 5. nftables with extended BPF >> >> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> >> Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> >> Reviewed-by: Daniel Borkmann <dborkman@redhat.com> > > > One more question or possible issue that came through my mind: When > someone attaches a socket filter from user space, and bpf_ext_enable=1 > then the old filter will transparently be converted to the new > representation. If then user space (e.g. through checkpoint restore) > will issue a sk_get_filter() and thus we're calling sk_decode_filter() > on sk->sk_filter and, therefore, try to decode what we stored in > insns_ext[] with the assumption we still have the old code. Would that > actually crash (or leak memory, or just return garbage), as we access > decodes[] array with filt->code? Would be great if you could double-check. ohh. yes. missed that. when bpf_ext_enable=1 I think it's cleaner to return ebpf filter. This way the user space can see how old bpf filter was converted. Of course we can allocate extra memory and keep original bpf code there just to return it via sk_get_filter(), but that seems overkill. > The assumption with sk_get_filter() is that it returns the same filter > that was previously attached, so that it can be re-attached again at > a later point in time. when bpf_ext_enable=1, load old, sk_get_filter() returns new ebpf, this ebpf will be re-attachable, since there will be inband demux for bpf/ebpf. Thanks Alexei ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 17:08 ` Alexei Starovoitov @ 2014-03-09 22:00 ` Daniel Borkmann 2014-03-10 0:41 ` Alexei Starovoitov 2014-03-11 17:40 ` Pavel Emelyanov 0 siblings, 2 replies; 19+ messages in thread From: Daniel Borkmann @ 2014-03-09 22:00 UTC (permalink / raw) To: Alexei Starovoitov Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On 03/09/2014 06:08 PM, Alexei Starovoitov wrote: > On Sun, Mar 9, 2014 at 5:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote: >> On 03/09/2014 12:15 AM, Alexei Starovoitov wrote: >>> >>> Extended BPF extends old BPF in the following ways: >>> - from 2 to 10 registers >>> Original BPF has two registers (A and X) and hidden frame pointer. >>> Extended BPF has ten registers and read-only frame pointer. >>> - from 32-bit registers to 64-bit registers >>> semantics of old 32-bit ALU operations are preserved via 32-bit >>> subregisters >>> - if (cond) jump_true; else jump_false; >>> old BPF insns are replaced with: >>> if (cond) jump_true; /* else fallthrough */ >>> - adds signed > and >= insns >>> - 16 4-byte stack slots for register spill-fill replaced with >>> up to 512 bytes of multi-use stack space >>> - introduces bpf_call insn and register passing convention for zero >>> overhead calls from/to other kernel functions (not part of this patch) >>> - adds arithmetic right shift insn >>> - adds swab32/swab64 insns >>> - adds atomic_add insn >>> - old tax/txa insns are replaced with 'mov dst,src' insn >>> >>> Extended BPF is designed to be JITed with one to one mapping, which >>> allows GCC/LLVM backends to generate optimized BPF code that performs >>> almost as fast as natively compiled code >>> >>> sk_convert_filter() remaps old style insns into extended: >>> 'sock_filter' instructions are remapped on the fly to >>> 'sock_filter_ext' extended instructions when >>> sysctl net.core.bpf_ext_enable=1 >>> >>> Old filter comes through sk_attach_filter() or >>> sk_unattached_filter_create() >>> if (bpf_ext_enable) { >>> convert to new >>> sk_chk_filter() - check old bpf >>> use sk_run_filter_ext() - new interpreter >>> } else { >>> sk_chk_filter() - check old bpf >>> if (bpf_jit_enable) >>> use old jit >>> else >>> use sk_run_filter() - old interpreter >>> } >>> >>> sk_run_filter_ext() interpreter is noticeably faster >>> than sk_run_filter() for two reasons: >>> >>> 1.fall-through jumps >>> Old BPF jump instructions are forced to go either 'true' or 'false' >>> branch which causes branch-miss penalty. >>> Extended BPF jump instructions have one branch and fall-through, >>> which fit CPU branch predictor logic better. >>> 'perf stat' shows drastic difference for branch-misses. >>> >>> 2.jump-threaded implementation of interpreter vs switch statement >>> Instead of single tablejump at the top of 'switch' statement, GCC will >>> generate multiple tablejump instructions, which helps CPU branch >>> predictor >>> >>> Performance of two BPF filters generated by libpcap was measured >>> on x86_64, i386 and arm32. 
>>> >>> fprog #1 is taken from Documentation/networking/filter.txt: >>> tcpdump -i eth0 port 22 -dd >>> >>> fprog #2 is taken from 'man tcpdump': >>> tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - >>> ((tcp[12]&0xf0)>>2)) != 0)' -dd >>> >>> Other libpcap programs have similar performance differences. >>> >>> Raw performance data from BPF micro-benchmark: >>> SK_RUN_FILTER on same SKB (cache-hit) or 10k SKBs (cache-miss) >>> time in nsec per call, smaller is better >>> --x86_64-- >>> fprog #1 fprog #1 fprog #2 fprog #2 >>> cache-hit cache-miss cache-hit cache-miss >>> old BPF 90 101 192 202 >>> ext BPF 31 71 47 97 >>> old BPF jit 12 34 17 44 >>> ext BPF jit TBD >>> >>> --i386-- >>> fprog #1 fprog #1 fprog #2 fprog #2 >>> cache-hit cache-miss cache-hit cache-miss >>> old BPF 107 136 227 252 >>> ext BPF 40 119 69 172 >>> >>> --arm32-- >>> fprog #1 fprog #1 fprog #2 fprog #2 >>> cache-hit cache-miss cache-hit cache-miss >>> old BPF 202 300 475 540 >>> ext BPF 180 270 330 470 >>> old BPF jit 26 182 37 202 >>> new BPF jit TBD >>> >>> Tested with trinify BPF fuzzer >>> >>> Future work: >>> >>> 0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf >>> >>> 1. add extended BPF JIT for x86_64 >>> >>> 2. add inband old/new demux and extended BPF verifier, so that new >>> programs >>> can be loaded through old sk_attach_filter() and >>> sk_unattached_filter_create() >>> interfaces >>> >>> 3. tracing filters systemtap-like with extended BPF >>> >>> 4. OVS with extended BPF >>> >>> 5. nftables with extended BPF >>> >>> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> >>> Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> >>> Reviewed-by: Daniel Borkmann <dborkman@redhat.com> >> >> >> One more question or possible issue that came through my mind: When >> someone attaches a socket filter from user space, and bpf_ext_enable=1 >> then the old filter will transparently be converted to the new >> representation. If then user space (e.g. through checkpoint restore) >> will issue a sk_get_filter() and thus we're calling sk_decode_filter() >> on sk->sk_filter and, therefore, try to decode what we stored in >> insns_ext[] with the assumption we still have the old code. Would that >> actually crash (or leak memory, or just return garbage), as we access >> decodes[] array with filt->code? Would be great if you could double-check. > > ohh. yes. missed that. > when bpf_ext_enable=1 I think it's cleaner to return ebpf filter. > This way the user space can see how old bpf filter was converted. > > Of course we can allocate extra memory and keep original bpf code there > just to return it via sk_get_filter(), but that seems overkill. Cc'ing Pavel for a8fc92778080 ("sk-filter: Add ability to get socket filter program (v2)"). I think the issue can be that when applications could get migrated from one machine to another and their kernel won't support ebpf yet, then filter could not get loaded this way as it's expected to return what the user loaded. The trade-off, however, is that the original BPF code needs to be stored as well. :( >> The assumption with sk_get_filter() is that it returns the same filter >> that was previously attached, so that it can be re-attached again at >> a later point in time. > > when bpf_ext_enable=1, load old, sk_get_filter() returns new ebpf, > this ebpf will be re-attachable, since there will be inband demux for bpf/ebpf. > > Thanks > Alexei > ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 22:00 ` Daniel Borkmann @ 2014-03-10 0:41 ` Alexei Starovoitov 2014-03-11 17:40 ` Pavel Emelyanov 1 sibling, 0 replies; 19+ messages in thread From: Alexei Starovoitov @ 2014-03-10 0:41 UTC (permalink / raw) To: Daniel Borkmann Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On Sun, Mar 9, 2014 at 3:00 PM, Daniel Borkmann <borkmann@iogearbox.net> wrote: > On 03/09/2014 06:08 PM, Alexei Starovoitov wrote: >> >> On Sun, Mar 9, 2014 at 5:29 AM, Daniel Borkmann <borkmann@iogearbox.net> >> wrote: >>> >>> On 03/09/2014 12:15 AM, Alexei Starovoitov wrote: >>>> >>>> >>>> Extended BPF extends old BPF in the following ways: >>>> - from 2 to 10 registers >>>> Original BPF has two registers (A and X) and hidden frame pointer. >>>> Extended BPF has ten registers and read-only frame pointer. >>>> - from 32-bit registers to 64-bit registers >>>> semantics of old 32-bit ALU operations are preserved via 32-bit >>>> subregisters >>>> - if (cond) jump_true; else jump_false; >>>> old BPF insns are replaced with: >>>> if (cond) jump_true; /* else fallthrough */ >>>> - adds signed > and >= insns >>>> - 16 4-byte stack slots for register spill-fill replaced with >>>> up to 512 bytes of multi-use stack space >>>> - introduces bpf_call insn and register passing convention for zero >>>> overhead calls from/to other kernel functions (not part of this >>>> patch) >>>> - adds arithmetic right shift insn >>>> - adds swab32/swab64 insns >>>> - adds atomic_add insn >>>> - old tax/txa insns are replaced with 'mov dst,src' insn >>>> >>>> Extended BPF is designed to be JITed with one to one mapping, which >>>> allows GCC/LLVM backends to generate optimized BPF code that performs >>>> almost as fast as natively compiled code >>>> >>>> sk_convert_filter() remaps old style insns into extended: >>>> 'sock_filter' instructions are remapped on the fly to >>>> 'sock_filter_ext' extended instructions when >>>> sysctl net.core.bpf_ext_enable=1 >>>> >>>> Old filter comes through sk_attach_filter() or >>>> sk_unattached_filter_create() >>>> if (bpf_ext_enable) { >>>> convert to new >>>> sk_chk_filter() - check old bpf >>>> use sk_run_filter_ext() - new interpreter >>>> } else { >>>> sk_chk_filter() - check old bpf >>>> if (bpf_jit_enable) >>>> use old jit >>>> else >>>> use sk_run_filter() - old interpreter >>>> } >>>> >>>> sk_run_filter_ext() interpreter is noticeably faster >>>> than sk_run_filter() for two reasons: >>>> >>>> 1.fall-through jumps >>>> Old BPF jump instructions are forced to go either 'true' or 'false' >>>> branch which causes branch-miss penalty. >>>> Extended BPF jump instructions have one branch and fall-through, >>>> which fit CPU branch predictor logic better. >>>> 'perf stat' shows drastic difference for branch-misses. >>>> >>>> 2.jump-threaded implementation of interpreter vs switch statement >>>> Instead of single tablejump at the top of 'switch' statement, GCC >>>> will >>>> generate multiple tablejump instructions, which helps CPU branch >>>> predictor >>>> >>>> Performance of two BPF filters generated by libpcap was measured >>>> on x86_64, i386 and arm32. 
>>>> >>>> fprog #1 is taken from Documentation/networking/filter.txt: >>>> tcpdump -i eth0 port 22 -dd >>>> >>>> fprog #2 is taken from 'man tcpdump': >>>> tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - >>>> ((tcp[12]&0xf0)>>2)) != 0)' -dd >>>> >>>> Other libpcap programs have similar performance differences. >>>> >>>> Raw performance data from BPF micro-benchmark: >>>> SK_RUN_FILTER on same SKB (cache-hit) or 10k SKBs (cache-miss) >>>> time in nsec per call, smaller is better >>>> --x86_64-- >>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>> cache-hit cache-miss cache-hit cache-miss >>>> old BPF 90 101 192 202 >>>> ext BPF 31 71 47 97 >>>> old BPF jit 12 34 17 44 >>>> ext BPF jit TBD >>>> >>>> --i386-- >>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>> cache-hit cache-miss cache-hit cache-miss >>>> old BPF 107 136 227 252 >>>> ext BPF 40 119 69 172 >>>> >>>> --arm32-- >>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>> cache-hit cache-miss cache-hit cache-miss >>>> old BPF 202 300 475 540 >>>> ext BPF 180 270 330 470 >>>> old BPF jit 26 182 37 202 >>>> new BPF jit TBD >>>> >>>> Tested with trinify BPF fuzzer >>>> >>>> Future work: >>>> >>>> 0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf >>>> >>>> 1. add extended BPF JIT for x86_64 >>>> >>>> 2. add inband old/new demux and extended BPF verifier, so that new >>>> programs >>>> can be loaded through old sk_attach_filter() and >>>> sk_unattached_filter_create() >>>> interfaces >>>> >>>> 3. tracing filters systemtap-like with extended BPF >>>> >>>> 4. OVS with extended BPF >>>> >>>> 5. nftables with extended BPF >>>> >>>> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> >>>> Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> >>>> Reviewed-by: Daniel Borkmann <dborkman@redhat.com> >>> >>> >>> >>> One more question or possible issue that came through my mind: When >>> someone attaches a socket filter from user space, and bpf_ext_enable=1 >>> then the old filter will transparently be converted to the new >>> representation. If then user space (e.g. through checkpoint restore) >>> will issue a sk_get_filter() and thus we're calling sk_decode_filter() >>> on sk->sk_filter and, therefore, try to decode what we stored in >>> insns_ext[] with the assumption we still have the old code. Would that >>> actually crash (or leak memory, or just return garbage), as we access >>> decodes[] array with filt->code? Would be great if you could >>> double-check. >> >> >> ohh. yes. missed that. >> when bpf_ext_enable=1 I think it's cleaner to return ebpf filter. >> This way the user space can see how old bpf filter was converted. >> >> Of course we can allocate extra memory and keep original bpf code there >> just to return it via sk_get_filter(), but that seems overkill. > > > Cc'ing Pavel for a8fc92778080 ("sk-filter: Add ability to get socket > filter program (v2)"). > > I think the issue can be that when applications could get migrated > from one machine to another and their kernel won't support ebpf yet, > then filter could not get loaded this way as it's expected to return > what the user loaded. The trade-off, however, is that the original > BPF code needs to be stored as well. :( I see. ...even on one machine: bpf_ext=1, attach, get_filter, bpf_ext=0, re-attach... So we need to save original. At least we don't need to keep it for 'unattached' filters. Should memory come from sk_optmem budget or plain kmalloc is enough ? Latter would have simpler implementation, but former is probably cleaner? 
Thanks Alexei ^ permalink raw reply [flat|nested] 19+ messages in thread
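One possible shape of the fix being discussed here is to keep a copy of the classic program purely for sk_get_filter(), independent of what the interpreter runs. The following is a sketch only, with hypothetical struct and helper names, not the actual V8 patch:

struct sk_filter_priv {				/* hypothetical container */
	unsigned short		orig_len;	/* classic instruction count */
	struct sock_filter	*orig_insns;	/* exact copy of what user space attached */
	unsigned short		ext_len;	/* converted instruction count */
	struct sock_filter_ext	insns_ext[];	/* what sk_run_filter_ext() executes */
};

static int save_orig_filter(struct sk_filter_priv *fp,
			    const struct sock_filter *user_insns,
			    unsigned short len)
{
	fp->orig_len = len;
	/* plain kmalloc shown here; charging sk_optmem instead is the open question above */
	fp->orig_insns = kmemdup(user_insns, len * sizeof(*user_insns), GFP_KERNEL);
	return fp->orig_insns ? 0 : -ENOMEM;
}

sk_get_filter()/sk_decode_filter() would then read orig_insns[] and never look at the converted program, so the bpf_ext sysctl setting at dump or restore time stops mattering.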
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 22:00 ` Daniel Borkmann 2014-03-10 0:41 ` Alexei Starovoitov @ 2014-03-11 17:40 ` Pavel Emelyanov 2014-03-11 18:03 ` Alexei Starovoitov 1 sibling, 1 reply; 19+ messages in thread From: Pavel Emelyanov @ 2014-03-11 17:40 UTC (permalink / raw) To: Daniel Borkmann Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML <linu On 03/10/2014 02:00 AM, Daniel Borkmann wrote: > On 03/09/2014 06:08 PM, Alexei Starovoitov wrote: >> On Sun, Mar 9, 2014 at 5:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote: >>> On 03/09/2014 12:15 AM, Alexei Starovoitov wrote: >>>> >>>> Extended BPF extends old BPF in the following ways: >>>> - from 2 to 10 registers >>>> Original BPF has two registers (A and X) and hidden frame pointer. >>>> Extended BPF has ten registers and read-only frame pointer. >>>> - from 32-bit registers to 64-bit registers >>>> semantics of old 32-bit ALU operations are preserved via 32-bit >>>> subregisters >>>> - if (cond) jump_true; else jump_false; >>>> old BPF insns are replaced with: >>>> if (cond) jump_true; /* else fallthrough */ >>>> - adds signed > and >= insns >>>> - 16 4-byte stack slots for register spill-fill replaced with >>>> up to 512 bytes of multi-use stack space >>>> - introduces bpf_call insn and register passing convention for zero >>>> overhead calls from/to other kernel functions (not part of this patch) >>>> - adds arithmetic right shift insn >>>> - adds swab32/swab64 insns >>>> - adds atomic_add insn >>>> - old tax/txa insns are replaced with 'mov dst,src' insn >>>> >>>> Extended BPF is designed to be JITed with one to one mapping, which >>>> allows GCC/LLVM backends to generate optimized BPF code that performs >>>> almost as fast as natively compiled code >>>> >>>> sk_convert_filter() remaps old style insns into extended: >>>> 'sock_filter' instructions are remapped on the fly to >>>> 'sock_filter_ext' extended instructions when >>>> sysctl net.core.bpf_ext_enable=1 >>>> >>>> Old filter comes through sk_attach_filter() or >>>> sk_unattached_filter_create() >>>> if (bpf_ext_enable) { >>>> convert to new >>>> sk_chk_filter() - check old bpf >>>> use sk_run_filter_ext() - new interpreter >>>> } else { >>>> sk_chk_filter() - check old bpf >>>> if (bpf_jit_enable) >>>> use old jit >>>> else >>>> use sk_run_filter() - old interpreter >>>> } >>>> >>>> sk_run_filter_ext() interpreter is noticeably faster >>>> than sk_run_filter() for two reasons: >>>> >>>> 1.fall-through jumps >>>> Old BPF jump instructions are forced to go either 'true' or 'false' >>>> branch which causes branch-miss penalty. >>>> Extended BPF jump instructions have one branch and fall-through, >>>> which fit CPU branch predictor logic better. >>>> 'perf stat' shows drastic difference for branch-misses. >>>> >>>> 2.jump-threaded implementation of interpreter vs switch statement >>>> Instead of single tablejump at the top of 'switch' statement, GCC will >>>> generate multiple tablejump instructions, which helps CPU branch >>>> predictor >>>> >>>> Performance of two BPF filters generated by libpcap was measured >>>> on x86_64, i386 and arm32. 
>>>> >>>> fprog #1 is taken from Documentation/networking/filter.txt: >>>> tcpdump -i eth0 port 22 -dd >>>> >>>> fprog #2 is taken from 'man tcpdump': >>>> tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - >>>> ((tcp[12]&0xf0)>>2)) != 0)' -dd >>>> >>>> Other libpcap programs have similar performance differences. >>>> >>>> Raw performance data from BPF micro-benchmark: >>>> SK_RUN_FILTER on same SKB (cache-hit) or 10k SKBs (cache-miss) >>>> time in nsec per call, smaller is better >>>> --x86_64-- >>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>> cache-hit cache-miss cache-hit cache-miss >>>> old BPF 90 101 192 202 >>>> ext BPF 31 71 47 97 >>>> old BPF jit 12 34 17 44 >>>> ext BPF jit TBD >>>> >>>> --i386-- >>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>> cache-hit cache-miss cache-hit cache-miss >>>> old BPF 107 136 227 252 >>>> ext BPF 40 119 69 172 >>>> >>>> --arm32-- >>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>> cache-hit cache-miss cache-hit cache-miss >>>> old BPF 202 300 475 540 >>>> ext BPF 180 270 330 470 >>>> old BPF jit 26 182 37 202 >>>> new BPF jit TBD >>>> >>>> Tested with trinify BPF fuzzer >>>> >>>> Future work: >>>> >>>> 0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf >>>> >>>> 1. add extended BPF JIT for x86_64 >>>> >>>> 2. add inband old/new demux and extended BPF verifier, so that new >>>> programs >>>> can be loaded through old sk_attach_filter() and >>>> sk_unattached_filter_create() >>>> interfaces >>>> >>>> 3. tracing filters systemtap-like with extended BPF >>>> >>>> 4. OVS with extended BPF >>>> >>>> 5. nftables with extended BPF >>>> >>>> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> >>>> Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> >>>> Reviewed-by: Daniel Borkmann <dborkman@redhat.com> >>> >>> >>> One more question or possible issue that came through my mind: When >>> someone attaches a socket filter from user space, and bpf_ext_enable=1 >>> then the old filter will transparently be converted to the new >>> representation. If then user space (e.g. through checkpoint restore) >>> will issue a sk_get_filter() and thus we're calling sk_decode_filter() >>> on sk->sk_filter and, therefore, try to decode what we stored in >>> insns_ext[] with the assumption we still have the old code. Would that >>> actually crash (or leak memory, or just return garbage), as we access >>> decodes[] array with filt->code? Would be great if you could double-check. >> >> ohh. yes. missed that. >> when bpf_ext_enable=1 I think it's cleaner to return ebpf filter. >> This way the user space can see how old bpf filter was converted. >> >> Of course we can allocate extra memory and keep original bpf code there >> just to return it via sk_get_filter(), but that seems overkill. > > Cc'ing Pavel for a8fc92778080 ("sk-filter: Add ability to get socket > filter program (v2)"). > > I think the issue can be that when applications could get migrated > from one machine to another and their kernel won't support ebpf yet, > then filter could not get loaded this way as it's expected to return > what the user loaded. The trade-off, however, is that the original > BPF code needs to be stored as well. :( Sorry if I miss the point, but isn't the original filter kept on socket? The sk_attach_filter() does so, then calls __sk_prepare_filter, which in turn calls bpf_jit_compile(), and the latter two keep the insns in place. Thanks, Pavel ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-11 17:40 ` Pavel Emelyanov @ 2014-03-11 18:03 ` Alexei Starovoitov 2014-03-11 18:19 ` Pavel Emelyanov 0 siblings, 1 reply; 19+ messages in thread From: Alexei Starovoitov @ 2014-03-11 18:03 UTC (permalink / raw) To: Pavel Emelyanov Cc: Daniel Borkmann, David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML <l On Tue, Mar 11, 2014 at 10:40 AM, Pavel Emelyanov <xemul@parallels.com> wrote: > On 03/10/2014 02:00 AM, Daniel Borkmann wrote: >> On 03/09/2014 06:08 PM, Alexei Starovoitov wrote: >>> On Sun, Mar 9, 2014 at 5:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote: >>>> On 03/09/2014 12:15 AM, Alexei Starovoitov wrote: >>>>> >>>>> Extended BPF extends old BPF in the following ways: >>>>> - from 2 to 10 registers >>>>> Original BPF has two registers (A and X) and hidden frame pointer. >>>>> Extended BPF has ten registers and read-only frame pointer. >>>>> - from 32-bit registers to 64-bit registers >>>>> semantics of old 32-bit ALU operations are preserved via 32-bit >>>>> subregisters >>>>> - if (cond) jump_true; else jump_false; >>>>> old BPF insns are replaced with: >>>>> if (cond) jump_true; /* else fallthrough */ >>>>> - adds signed > and >= insns >>>>> - 16 4-byte stack slots for register spill-fill replaced with >>>>> up to 512 bytes of multi-use stack space >>>>> - introduces bpf_call insn and register passing convention for zero >>>>> overhead calls from/to other kernel functions (not part of this patch) >>>>> - adds arithmetic right shift insn >>>>> - adds swab32/swab64 insns >>>>> - adds atomic_add insn >>>>> - old tax/txa insns are replaced with 'mov dst,src' insn >>>>> >>>>> Extended BPF is designed to be JITed with one to one mapping, which >>>>> allows GCC/LLVM backends to generate optimized BPF code that performs >>>>> almost as fast as natively compiled code >>>>> >>>>> sk_convert_filter() remaps old style insns into extended: >>>>> 'sock_filter' instructions are remapped on the fly to >>>>> 'sock_filter_ext' extended instructions when >>>>> sysctl net.core.bpf_ext_enable=1 >>>>> >>>>> Old filter comes through sk_attach_filter() or >>>>> sk_unattached_filter_create() >>>>> if (bpf_ext_enable) { >>>>> convert to new >>>>> sk_chk_filter() - check old bpf >>>>> use sk_run_filter_ext() - new interpreter >>>>> } else { >>>>> sk_chk_filter() - check old bpf >>>>> if (bpf_jit_enable) >>>>> use old jit >>>>> else >>>>> use sk_run_filter() - old interpreter >>>>> } >>>>> >>>>> sk_run_filter_ext() interpreter is noticeably faster >>>>> than sk_run_filter() for two reasons: >>>>> >>>>> 1.fall-through jumps >>>>> Old BPF jump instructions are forced to go either 'true' or 'false' >>>>> branch which causes branch-miss penalty. >>>>> Extended BPF jump instructions have one branch and fall-through, >>>>> which fit CPU branch predictor logic better. >>>>> 'perf stat' shows drastic difference for branch-misses. 
>>>>> >>>>> 2.jump-threaded implementation of interpreter vs switch statement >>>>> Instead of single tablejump at the top of 'switch' statement, GCC will >>>>> generate multiple tablejump instructions, which helps CPU branch >>>>> predictor >>>>> >>>>> Performance of two BPF filters generated by libpcap was measured >>>>> on x86_64, i386 and arm32. >>>>> >>>>> fprog #1 is taken from Documentation/networking/filter.txt: >>>>> tcpdump -i eth0 port 22 -dd >>>>> >>>>> fprog #2 is taken from 'man tcpdump': >>>>> tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - >>>>> ((tcp[12]&0xf0)>>2)) != 0)' -dd >>>>> >>>>> Other libpcap programs have similar performance differences. >>>>> >>>>> Raw performance data from BPF micro-benchmark: >>>>> SK_RUN_FILTER on same SKB (cache-hit) or 10k SKBs (cache-miss) >>>>> time in nsec per call, smaller is better >>>>> --x86_64-- >>>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>>> cache-hit cache-miss cache-hit cache-miss >>>>> old BPF 90 101 192 202 >>>>> ext BPF 31 71 47 97 >>>>> old BPF jit 12 34 17 44 >>>>> ext BPF jit TBD >>>>> >>>>> --i386-- >>>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>>> cache-hit cache-miss cache-hit cache-miss >>>>> old BPF 107 136 227 252 >>>>> ext BPF 40 119 69 172 >>>>> >>>>> --arm32-- >>>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>>> cache-hit cache-miss cache-hit cache-miss >>>>> old BPF 202 300 475 540 >>>>> ext BPF 180 270 330 470 >>>>> old BPF jit 26 182 37 202 >>>>> new BPF jit TBD >>>>> >>>>> Tested with trinify BPF fuzzer >>>>> >>>>> Future work: >>>>> >>>>> 0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf >>>>> >>>>> 1. add extended BPF JIT for x86_64 >>>>> >>>>> 2. add inband old/new demux and extended BPF verifier, so that new >>>>> programs >>>>> can be loaded through old sk_attach_filter() and >>>>> sk_unattached_filter_create() >>>>> interfaces >>>>> >>>>> 3. tracing filters systemtap-like with extended BPF >>>>> >>>>> 4. OVS with extended BPF >>>>> >>>>> 5. nftables with extended BPF >>>>> >>>>> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> >>>>> Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> >>>>> Reviewed-by: Daniel Borkmann <dborkman@redhat.com> >>>> >>>> >>>> One more question or possible issue that came through my mind: When >>>> someone attaches a socket filter from user space, and bpf_ext_enable=1 >>>> then the old filter will transparently be converted to the new >>>> representation. If then user space (e.g. through checkpoint restore) >>>> will issue a sk_get_filter() and thus we're calling sk_decode_filter() >>>> on sk->sk_filter and, therefore, try to decode what we stored in >>>> insns_ext[] with the assumption we still have the old code. Would that >>>> actually crash (or leak memory, or just return garbage), as we access >>>> decodes[] array with filt->code? Would be great if you could double-check. >>> >>> ohh. yes. missed that. >>> when bpf_ext_enable=1 I think it's cleaner to return ebpf filter. >>> This way the user space can see how old bpf filter was converted. >>> >>> Of course we can allocate extra memory and keep original bpf code there >>> just to return it via sk_get_filter(), but that seems overkill. >> >> Cc'ing Pavel for a8fc92778080 ("sk-filter: Add ability to get socket >> filter program (v2)"). >> >> I think the issue can be that when applications could get migrated >> from one machine to another and their kernel won't support ebpf yet, >> then filter could not get loaded this way as it's expected to return >> what the user loaded. 
The trade-off, however, is that the original >> BPF code needs to be stored as well. :( > > Sorry if I miss the point, but isn't the original filter kept on socket? > The sk_attach_filter() does so, then calls __sk_prepare_filter, which > in turn calls bpf_jit_compile(), and the latter two keep the insns in place. Yes. in V8/V9 series original filter is kept on socket. and your crtools/test/zdtm/live/static/socket_filter.c test passes. Let me know if there are any other tests I can try. Thanks Alexei > > Thanks, > Pavel > ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-11 18:03 ` Alexei Starovoitov @ 2014-03-11 18:19 ` Pavel Emelyanov 0 siblings, 0 replies; 19+ messages in thread From: Pavel Emelyanov @ 2014-03-11 18:19 UTC (permalink / raw) To: Alexei Starovoitov Cc: Daniel Borkmann, David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML <l On 03/11/2014 10:03 PM, Alexei Starovoitov wrote: > On Tue, Mar 11, 2014 at 10:40 AM, Pavel Emelyanov <xemul@parallels.com> wrote: >> On 03/10/2014 02:00 AM, Daniel Borkmann wrote: >>> On 03/09/2014 06:08 PM, Alexei Starovoitov wrote: >>>> On Sun, Mar 9, 2014 at 5:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote: >>>>> On 03/09/2014 12:15 AM, Alexei Starovoitov wrote: >>>>>> >>>>>> Extended BPF extends old BPF in the following ways: >>>>>> - from 2 to 10 registers >>>>>> Original BPF has two registers (A and X) and hidden frame pointer. >>>>>> Extended BPF has ten registers and read-only frame pointer. >>>>>> - from 32-bit registers to 64-bit registers >>>>>> semantics of old 32-bit ALU operations are preserved via 32-bit >>>>>> subregisters >>>>>> - if (cond) jump_true; else jump_false; >>>>>> old BPF insns are replaced with: >>>>>> if (cond) jump_true; /* else fallthrough */ >>>>>> - adds signed > and >= insns >>>>>> - 16 4-byte stack slots for register spill-fill replaced with >>>>>> up to 512 bytes of multi-use stack space >>>>>> - introduces bpf_call insn and register passing convention for zero >>>>>> overhead calls from/to other kernel functions (not part of this patch) >>>>>> - adds arithmetic right shift insn >>>>>> - adds swab32/swab64 insns >>>>>> - adds atomic_add insn >>>>>> - old tax/txa insns are replaced with 'mov dst,src' insn >>>>>> >>>>>> Extended BPF is designed to be JITed with one to one mapping, which >>>>>> allows GCC/LLVM backends to generate optimized BPF code that performs >>>>>> almost as fast as natively compiled code >>>>>> >>>>>> sk_convert_filter() remaps old style insns into extended: >>>>>> 'sock_filter' instructions are remapped on the fly to >>>>>> 'sock_filter_ext' extended instructions when >>>>>> sysctl net.core.bpf_ext_enable=1 >>>>>> >>>>>> Old filter comes through sk_attach_filter() or >>>>>> sk_unattached_filter_create() >>>>>> if (bpf_ext_enable) { >>>>>> convert to new >>>>>> sk_chk_filter() - check old bpf >>>>>> use sk_run_filter_ext() - new interpreter >>>>>> } else { >>>>>> sk_chk_filter() - check old bpf >>>>>> if (bpf_jit_enable) >>>>>> use old jit >>>>>> else >>>>>> use sk_run_filter() - old interpreter >>>>>> } >>>>>> >>>>>> sk_run_filter_ext() interpreter is noticeably faster >>>>>> than sk_run_filter() for two reasons: >>>>>> >>>>>> 1.fall-through jumps >>>>>> Old BPF jump instructions are forced to go either 'true' or 'false' >>>>>> branch which causes branch-miss penalty. >>>>>> Extended BPF jump instructions have one branch and fall-through, >>>>>> which fit CPU branch predictor logic better. >>>>>> 'perf stat' shows drastic difference for branch-misses. 
>>>>>> >>>>>> 2.jump-threaded implementation of interpreter vs switch statement >>>>>> Instead of single tablejump at the top of 'switch' statement, GCC will >>>>>> generate multiple tablejump instructions, which helps CPU branch >>>>>> predictor >>>>>> >>>>>> Performance of two BPF filters generated by libpcap was measured >>>>>> on x86_64, i386 and arm32. >>>>>> >>>>>> fprog #1 is taken from Documentation/networking/filter.txt: >>>>>> tcpdump -i eth0 port 22 -dd >>>>>> >>>>>> fprog #2 is taken from 'man tcpdump': >>>>>> tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - >>>>>> ((tcp[12]&0xf0)>>2)) != 0)' -dd >>>>>> >>>>>> Other libpcap programs have similar performance differences. >>>>>> >>>>>> Raw performance data from BPF micro-benchmark: >>>>>> SK_RUN_FILTER on same SKB (cache-hit) or 10k SKBs (cache-miss) >>>>>> time in nsec per call, smaller is better >>>>>> --x86_64-- >>>>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>>>> cache-hit cache-miss cache-hit cache-miss >>>>>> old BPF 90 101 192 202 >>>>>> ext BPF 31 71 47 97 >>>>>> old BPF jit 12 34 17 44 >>>>>> ext BPF jit TBD >>>>>> >>>>>> --i386-- >>>>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>>>> cache-hit cache-miss cache-hit cache-miss >>>>>> old BPF 107 136 227 252 >>>>>> ext BPF 40 119 69 172 >>>>>> >>>>>> --arm32-- >>>>>> fprog #1 fprog #1 fprog #2 fprog #2 >>>>>> cache-hit cache-miss cache-hit cache-miss >>>>>> old BPF 202 300 475 540 >>>>>> ext BPF 180 270 330 470 >>>>>> old BPF jit 26 182 37 202 >>>>>> new BPF jit TBD >>>>>> >>>>>> Tested with trinify BPF fuzzer >>>>>> >>>>>> Future work: >>>>>> >>>>>> 0. add bpf/ebpf testsuite to tools/testing/selftests/net/bpf >>>>>> >>>>>> 1. add extended BPF JIT for x86_64 >>>>>> >>>>>> 2. add inband old/new demux and extended BPF verifier, so that new >>>>>> programs >>>>>> can be loaded through old sk_attach_filter() and >>>>>> sk_unattached_filter_create() >>>>>> interfaces >>>>>> >>>>>> 3. tracing filters systemtap-like with extended BPF >>>>>> >>>>>> 4. OVS with extended BPF >>>>>> >>>>>> 5. nftables with extended BPF >>>>>> >>>>>> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> >>>>>> Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> >>>>>> Reviewed-by: Daniel Borkmann <dborkman@redhat.com> >>>>> >>>>> >>>>> One more question or possible issue that came through my mind: When >>>>> someone attaches a socket filter from user space, and bpf_ext_enable=1 >>>>> then the old filter will transparently be converted to the new >>>>> representation. If then user space (e.g. through checkpoint restore) >>>>> will issue a sk_get_filter() and thus we're calling sk_decode_filter() >>>>> on sk->sk_filter and, therefore, try to decode what we stored in >>>>> insns_ext[] with the assumption we still have the old code. Would that >>>>> actually crash (or leak memory, or just return garbage), as we access >>>>> decodes[] array with filt->code? Would be great if you could double-check. >>>> >>>> ohh. yes. missed that. >>>> when bpf_ext_enable=1 I think it's cleaner to return ebpf filter. >>>> This way the user space can see how old bpf filter was converted. >>>> >>>> Of course we can allocate extra memory and keep original bpf code there >>>> just to return it via sk_get_filter(), but that seems overkill. >>> >>> Cc'ing Pavel for a8fc92778080 ("sk-filter: Add ability to get socket >>> filter program (v2)"). 
>>> >>> I think the issue can be that when applications could get migrated >>> from one machine to another and their kernel won't support ebpf yet, >>> then filter could not get loaded this way as it's expected to return >>> what the user loaded. The trade-off, however, is that the original >>> BPF code needs to be stored as well. :( >> >> Sorry if I miss the point, but isn't the original filter kept on socket? >> The sk_attach_filter() does so, then calls __sk_prepare_filter, which >> in turn calls bpf_jit_compile(), and the latter two keep the insns in place. > > Yes. in V8/V9 series original filter is kept on socket. Ah, I see :) > and your crtools/test/zdtm/live/static/socket_filter.c test passes. > Let me know if there are any other tests I can try. No, that's the only test we need wrt sk-filter. Thanks for keeping an eye on it :) ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-08 23:15 ` [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov 2014-03-09 12:29 ` Daniel Borkmann @ 2014-03-09 14:45 ` Eric Dumazet 2014-03-09 17:38 ` Alexei Starovoitov 2014-03-09 14:49 ` Eric Dumazet 2 siblings, 1 reply; 19+ messages in thread From: Eric Dumazet @ 2014-03-09 14:45 UTC (permalink / raw) To: Alexei Starovoitov Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, linux-kernel, netdev On Sat, 2014-03-08 at 15:15 -0800, Alexei Starovoitov wrote: > +/** > + * sk_run_filter_ext - run an extended filter > + * @ctx: buffer to run the filter on > + * @insn: filter to apply > + * > + * Decode and execute extended BPF instructions. > + * @ctx is the data we are operating on. > + * @filter is the array of filter instructions. > + */ > +notrace u32 sk_run_filter_ext(void *ctx, const struct sock_filter_ext *insn) > +{ > + u64 stack[64]; > + u64 regs[16]; > + void *ptr; > + u64 tmp; > + int off; Why is this 'notrace' ? 80 u64 on the stack, that is 640 bytes to run a filter ???? ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 14:45 ` Eric Dumazet @ 2014-03-09 17:38 ` Alexei Starovoitov 2014-03-09 18:11 ` Eric Dumazet 0 siblings, 1 reply; 19+ messages in thread From: Alexei Starovoitov @ 2014-03-09 17:38 UTC (permalink / raw) To: Eric Dumazet Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On Sun, Mar 9, 2014 at 7:45 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote: > On Sat, 2014-03-08 at 15:15 -0800, Alexei Starovoitov wrote: > >> +/** >> + * sk_run_filter_ext - run an extended filter >> + * @ctx: buffer to run the filter on >> + * @insn: filter to apply >> + * >> + * Decode and execute extended BPF instructions. >> + * @ctx is the data we are operating on. >> + * @filter is the array of filter instructions. >> + */ >> +notrace u32 sk_run_filter_ext(void *ctx, const struct sock_filter_ext *insn) >> +{ >> + u64 stack[64]; >> + u64 regs[16]; >> + void *ptr; >> + u64 tmp; >> + int off; First of all, great that you finally reviewed it! Feedback is appreciated :) > Why is this 'notrace' ? to avoid overhead of dummy call. JITed filters are not adding this dummy call. So 'notrace' on interpreter brings it to parity with JITed filters. > 80 u64 on the stack, that is 640 bytes to run a filter ???? yes. that was described in commit log and in Doc...filter.txt: " - 16 4-byte stack slots for register spill-fill replaced with up to 512 bytes of multi-use stack space " For interpreter it is prohibitive to dynamically allocate stack space that's why it just grabs 64*8 to run any program. For JIT it's going to be close to zero for majority of filters, since generated program will allocate only as much as was allowed by sk_chk_filter_ext(). Only largest programs would need 'up to 512'. This much stack would be needed for programs that need to use large key/value pairs in their ebpf tables. So far I haven't seen a program that approaches this limit, but it seems to me that 512 is reasonable, since kernel warns on functions with > 1k stack. btw, current x86 jit just does 'subq $96,%rsp', I think ebpf jit should use the minimum amount of stack. Only amount that is needed. May be I'm over thinking it and having 'subq $512, %rsp' for JIT is also fine. Let me know. Thanks Alexei ^ permalink raw reply [flat|nested] 19+ messages in thread
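For readers unfamiliar with the annotation: in the kernel, 'notrace' expands to the GCC attribute below (include/linux/compiler.h), which keeps the compiler from emitting the mcount/__fentry__ profiling call that ftrace hooks; that call is the "dummy call" overhead referred to above.

#define notrace __attribute__((no_instrument_function))

JITed filters never contain that call, so marking the interpreter notrace puts the two on equal footing in the micro-benchmarks, at the cost of making sk_run_filter_ext() invisible to the function tracer.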
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 17:38 ` Alexei Starovoitov @ 2014-03-09 18:11 ` Eric Dumazet 2014-03-09 18:57 ` Alexei Starovoitov 0 siblings, 1 reply; 19+ messages in thread From: Eric Dumazet @ 2014-03-09 18:11 UTC (permalink / raw) To: Alexei Starovoitov Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On Sun, 2014-03-09 at 10:38 -0700, Alexei Starovoitov wrote: > On Sun, Mar 9, 2014 at 7:45 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote: > > On Sat, 2014-03-08 at 15:15 -0800, Alexei Starovoitov wrote: > > > >> +/** > >> + * sk_run_filter_ext - run an extended filter > >> + * @ctx: buffer to run the filter on > >> + * @insn: filter to apply > >> + * > >> + * Decode and execute extended BPF instructions. > >> + * @ctx is the data we are operating on. > >> + * @filter is the array of filter instructions. > >> + */ > >> +notrace u32 sk_run_filter_ext(void *ctx, const struct sock_filter_ext *insn) > >> +{ > >> + u64 stack[64]; > >> + u64 regs[16]; > >> + void *ptr; > >> + u64 tmp; > >> + int off; > > First of all, great that you finally reviewed it! Feedback is appreciated :) > > > Why is this 'notrace' ? > > to avoid overhead of dummy call. > JITed filters are not adding this dummy call. > So 'notrace' on interpreter brings it to parity with JITed filters. Then its a wrong reason. At the time we wrote JIT, there was (yet) no support for profiling JIT from perf tools. I asked for help and nobody answered. Maybe this has changed, if so, please someone add support. > > > 80 u64 on the stack, that is 640 bytes to run a filter ???? > > yes. that was described in commit log and in Doc...filter.txt: > " > - 16 4-byte stack slots for register spill-fill replaced with > up to 512 bytes of multi-use stack space > " > > For interpreter it is prohibitive to dynamically allocate stack space > that's why it just grabs 64*8 to run any program. Where is checked the max capacity of this stack ? ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 18:11 ` Eric Dumazet @ 2014-03-09 18:57 ` Alexei Starovoitov 2014-03-09 19:11 ` Eric Dumazet 0 siblings, 1 reply; 19+ messages in thread From: Alexei Starovoitov @ 2014-03-09 18:57 UTC (permalink / raw) To: Eric Dumazet Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On Sun, Mar 9, 2014 at 11:11 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote: > On Sun, 2014-03-09 at 10:38 -0700, Alexei Starovoitov wrote: >> On Sun, Mar 9, 2014 at 7:45 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote: >> > On Sat, 2014-03-08 at 15:15 -0800, Alexei Starovoitov wrote: >> > >> >> +/** >> >> + * sk_run_filter_ext - run an extended filter >> >> + * @ctx: buffer to run the filter on >> >> + * @insn: filter to apply >> >> + * >> >> + * Decode and execute extended BPF instructions. >> >> + * @ctx is the data we are operating on. >> >> + * @filter is the array of filter instructions. >> >> + */ >> >> +notrace u32 sk_run_filter_ext(void *ctx, const struct sock_filter_ext *insn) >> >> +{ >> >> + u64 stack[64]; >> >> + u64 regs[16]; >> >> + void *ptr; >> >> + u64 tmp; >> >> + int off; >> >> First of all, great that you finally reviewed it! Feedback is appreciated :) >> >> > Why is this 'notrace' ? >> >> to avoid overhead of dummy call. >> JITed filters are not adding this dummy call. >> So 'notrace' on interpreter brings it to parity with JITed filters. > > Then its a wrong reason. fine. I'll remove it then. > At the time we wrote JIT, there was (yet) no support for profiling JIT > from perf tools. I asked for help and nobody answered. > > Maybe this has changed, if so, please someone add support. I have few ideas on how to get the line number info from C into ebpf via llvm. Mainly to have nice messages when kernel rejects ebpf filter when it fails safety check. but that will come several commits from now. This info can be used to beautify perf tools as well. >> >> > 80 u64 on the stack, that is 640 bytes to run a filter ???? >> >> yes. that was described in commit log and in Doc...filter.txt: >> " >> - 16 4-byte stack slots for register spill-fill replaced with >> up to 512 bytes of multi-use stack space >> " >> >> For interpreter it is prohibitive to dynamically allocate stack space >> that's why it just grabs 64*8 to run any program. > > Where is checked the max capacity of this stack ? In case of sk_convert_filter() case the converted ebpf filter is guaranteed to use max 4*16 bytes of stack, since sk_chk_filter() verified old bpf. In case of ebpf, the check is done in several places. In V1 series there are check_stack_boundary() and check_mem_access() functions which are called from bpf_check() (which will be renamed to sk_chk_filter_ext()) Here is a macro and comment from V1 series: +/* JITed code allocates 512 bytes and used bottom 4 slots + * to save R6-R9 + */ +#define MAX_BPF_STACK (512 - 4 * 8) So sk_chk_filter_ext() enforces 480 byte limit for ebpf program itself and 512 of real CPU stack is allocated by ebpf jit-ed program. As I was saying in previous email I was planning to make this stack allocation less hard coded for JITed program. 
Here is relevant comment from V1 series: + * Future improvements: + * stack size is hardcoded to 512 bytes maximum per program, relax it In sk_run_filter_ext() I used "u64 stack[64];", but "u64 stack[60];" is safe too, but I didn't want to go into extensive explanation of 'magic' 60 number in the first patch, so I just rounded it to 64. Since now you understand, I can make it stack[60] now :) Or, even better, I can reintroduce MAX_BPF_STACK into this patch set and use it in sk_run_filter_ext()... I will also add: BUILD_BUG_ON(BPF_MEMWORDS * sizeof(u32) > MAX_BPF_STACK); to make sure that old filters via sk_convert_filter() stay correct. Thanks Alexei ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 18:57 ` Alexei Starovoitov @ 2014-03-09 19:11 ` Eric Dumazet 2014-03-09 19:20 ` Alexei Starovoitov 0 siblings, 1 reply; 19+ messages in thread From: Eric Dumazet @ 2014-03-09 19:11 UTC (permalink / raw) To: Alexei Starovoitov Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On Sun, 2014-03-09 at 11:57 -0700, Alexei Starovoitov wrote: > In sk_run_filter_ext() I used "u64 stack[64];", but "u64 stack[60];" is > safe too, but I didn't want to go into extensive explanation > of 'magic' 60 number in the first patch, so I just rounded it to 64. > Since now you understand, I can make it stack[60] now :) My point was : You should not use 64 or 60 in the C code. I should not have to ask you why it is safe. It should be obvious just reading the source code. And so far it is not. Thats why we use macros and comments at the macro definition. ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 19:11 ` Eric Dumazet @ 2014-03-09 19:20 ` Alexei Starovoitov 0 siblings, 0 replies; 19+ messages in thread From: Alexei Starovoitov @ 2014-03-09 19:20 UTC (permalink / raw) To: Eric Dumazet Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On Sun, Mar 9, 2014 at 12:11 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote: > On Sun, 2014-03-09 at 11:57 -0700, Alexei Starovoitov wrote: > >> In sk_run_filter_ext() I used "u64 stack[64];", but "u64 stack[60];" is >> safe too, but I didn't want to go into extensive explanation >> of 'magic' 60 number in the first patch, so I just rounded it to 64. >> Since now you understand, I can make it stack[60] now :) > > My point was : You should not use 64 or 60 in the C code. > > I should not have to ask you why it is safe. > > It should be obvious just reading the source code. And so far it is not. > > Thats why we use macros and comments at the macro definition. Agree. Will fix and send V8 for these issues and sk_get_filter() caught by Daniel. Thanks! ^ permalink raw reply [flat|nested] 19+ messages in thread
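A sketch of what the agreed fix might look like: the V1 macro reintroduced and used to size the interpreter's scratch area, with the BUILD_BUG_ON guarding converted classic filters. This is a fragment only; the dispatch loop stays as in the patch and the exact V8 form may differ.

/* JITed code allocates 512 bytes and uses the bottom 4 slots to save R6-R9 */
#define MAX_BPF_STACK	(512 - 4 * 8)

u32 sk_run_filter_ext(void *ctx, const struct sock_filter_ext *insn)
{
	u64 stack[MAX_BPF_STACK / sizeof(u64)];	/* was: u64 stack[64]; */
	u64 regs[16];

	/* converted classic filters spill A, X and the 16 scratch words
	 * M[] onto this stack, so the old limit must still fit
	 */
	BUILD_BUG_ON(BPF_MEMWORDS * sizeof(u32) > MAX_BPF_STACK);

	/* ... register setup and instruction dispatch as in the patch ... */
}

MAX_BPF_STACK / sizeof(u64) works out to the 60 slots mentioned above, and the named constant keeps interpreter and JIT in sync.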
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-08 23:15 ` [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov 2014-03-09 12:29 ` Daniel Borkmann 2014-03-09 14:45 ` Eric Dumazet @ 2014-03-09 14:49 ` Eric Dumazet 2014-03-09 18:02 ` Alexei Starovoitov 2 siblings, 1 reply; 19+ messages in thread From: Eric Dumazet @ 2014-03-09 14:49 UTC (permalink / raw) To: Alexei Starovoitov Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, linux-kernel, netdev On Sat, 2014-03-08 at 15:15 -0800, Alexei Starovoitov wrote: > + if (BPF_SRC(fp->code) == BPF_K && > + (int)fp->k < 0) { > + /* extended BPF immediates are signed, > + * zero extend immediate into tmp register > + * and use it in compare insn > + */ > + insn->code = BPF_ALU | BPF_MOV | BPF_K; > + insn->a_reg = 2; > + insn->imm = fp->k; > + insn++; > + > + insn->a_reg = 6; > + insn->x_reg = 2; > + bpf_src = BPF_X; > + } else { > + insn->a_reg = 6; > + insn->x_reg = 7; > + insn->imm = fp->k; > + bpf_src = BPF_SRC(fp->code); > + } > + /* common case where 'jump_false' is next insn */ > + if (fp->jf == 0) { > + insn->code = BPF_JMP | BPF_OP(fp->code) | > + bpf_src; > + tgt = i + fp->jt + 1; > + EMIT_JMP; > + break; > + } > + /* convert JEQ into JNE when 'jump_true' is next insn */ > + if (fp->jt == 0 && BPF_OP(fp->code) == BPF_JEQ) { > + insn->code = BPF_JMP | BPF_JNE | bpf_src; > + tgt = i + fp->jf + 1; > + EMIT_JMP; > + break; > + } > + /* other jumps are mapped into two insns: Jxx and JA */ > + tgt = i + fp->jt + 1; > + insn->code = BPF_JMP | BPF_OP(fp->code) | bpf_src; > + EMIT_JMP; > + > + insn++; > + insn->code = BPF_JMP | BPF_JA; > + tgt = i + fp->jf + 1; > + EMIT_JMP; > + break; > + > + /* ldxb 4*([14]&0xf) is remaped into 3 insns */ > + case BPF_LDX | BPF_MSH | BPF_B: > + insn->code = BPF_LD | BPF_ABS | BPF_B; > + insn->a_reg = 7; > + insn->imm = fp->k; > + > + insn++; > + insn->code = BPF_ALU | BPF_AND | BPF_K; > + insn->a_reg = 7; > + insn->imm = 0xf; > + > + insn++; > + insn->code = BPF_ALU | BPF_LSH | BPF_K; > + insn->a_reg = 7; > + insn->imm = 2; > + break; > + > + /* RET_K, RET_A are remaped into 2 insns */ > + case BPF_RET | BPF_A: > + case BPF_RET | BPF_K: > + insn->code = BPF_ALU | BPF_MOV | > + (BPF_RVAL(fp->code) == BPF_K ? BPF_K : BPF_X); > + insn->a_reg = 0; > + insn->x_reg = 6; > + insn->imm = fp->k; > + > + insn++; > + insn->code = BPF_RET | BPF_K; > + break; What the hell is this ? All this magical values, like 2, 6, 7, 10. I am afraid nobody will be able to read this but you. ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter 2014-03-09 14:49 ` Eric Dumazet @ 2014-03-09 18:02 ` Alexei Starovoitov 0 siblings, 0 replies; 19+ messages in thread From: Alexei Starovoitov @ 2014-03-09 18:02 UTC (permalink / raw) To: Eric Dumazet Cc: David S. Miller, Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, LKML, netdev On Sun, Mar 9, 2014 at 7:49 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote: > On Sat, 2014-03-08 at 15:15 -0800, Alexei Starovoitov wrote: > >> + if (BPF_SRC(fp->code) == BPF_K && >> + (int)fp->k < 0) { >> + /* extended BPF immediates are signed, >> + * zero extend immediate into tmp register >> + * and use it in compare insn >> + */ >> + insn->code = BPF_ALU | BPF_MOV | BPF_K; >> + insn->a_reg = 2; >> + insn->imm = fp->k; >> + insn++; >> + >> + insn->a_reg = 6; >> + insn->x_reg = 2; >> + bpf_src = BPF_X; >> + } else { >> + insn->a_reg = 6; >> + insn->x_reg = 7; >> + insn->imm = fp->k; >> + bpf_src = BPF_SRC(fp->code); >> + } >> + /* common case where 'jump_false' is next insn */ >> + if (fp->jf == 0) { >> + insn->code = BPF_JMP | BPF_OP(fp->code) | >> + bpf_src; >> + tgt = i + fp->jt + 1; >> + EMIT_JMP; >> + break; >> + } >> + /* convert JEQ into JNE when 'jump_true' is next insn */ >> + if (fp->jt == 0 && BPF_OP(fp->code) == BPF_JEQ) { >> + insn->code = BPF_JMP | BPF_JNE | bpf_src; >> + tgt = i + fp->jf + 1; >> + EMIT_JMP; >> + break; >> + } >> + /* other jumps are mapped into two insns: Jxx and JA */ >> + tgt = i + fp->jt + 1; >> + insn->code = BPF_JMP | BPF_OP(fp->code) | bpf_src; >> + EMIT_JMP; >> + >> + insn++; >> + insn->code = BPF_JMP | BPF_JA; >> + tgt = i + fp->jf + 1; >> + EMIT_JMP; >> + break; >> + >> + /* ldxb 4*([14]&0xf) is remaped into 3 insns */ >> + case BPF_LDX | BPF_MSH | BPF_B: >> + insn->code = BPF_LD | BPF_ABS | BPF_B; >> + insn->a_reg = 7; >> + insn->imm = fp->k; >> + >> + insn++; >> + insn->code = BPF_ALU | BPF_AND | BPF_K; >> + insn->a_reg = 7; >> + insn->imm = 0xf; >> + >> + insn++; >> + insn->code = BPF_ALU | BPF_LSH | BPF_K; >> + insn->a_reg = 7; >> + insn->imm = 2; >> + break; >> + >> + /* RET_K, RET_A are remaped into 2 insns */ >> + case BPF_RET | BPF_A: >> + case BPF_RET | BPF_K: >> + insn->code = BPF_ALU | BPF_MOV | >> + (BPF_RVAL(fp->code) == BPF_K ? BPF_K : BPF_X); >> + insn->a_reg = 0; >> + insn->x_reg = 6; >> + insn->imm = fp->k; >> + >> + insn++; >> + insn->code = BPF_RET | BPF_K; >> + break; > > > What the hell is this ? > > All this magical values, like 2, 6, 7, 10. they are register numbers, since they are assigned into 'a_reg' and 'x_reg' which are described in uapi/filter.h: __u8 a_reg:4; /* dest register */ __u8 x_reg:4; /* source register */ and in Doc...filter.txt In the V1 series I had a bunch of #define like: #define R1 1 #define R2 2 which seemed as silly as doing '#define one 1' I thought that the sk_convert_filter() code is pretty clear in terms of what it's doing, but I'm happy to add an extensive comment to describe the mechanics. Also it felt that most of the time you and other folks want me to remove comments, so I figured I'll add comments on demand. Here looks like it's the case. > I am afraid nobody will be able to read this but you. that's certainly not the intent. 
I've presented it at the last plumbers conf and would like to share more, since I think ebpf is a fundamental breakthrough that can be used by many kernel subsystems. This patch only covers old filters and seccomp. We can do a lot more interesting things with tracing+ebpf and so on. Regards, Alexei ^ permalink raw reply [flat|nested] 19+ messages in thread
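The register assignments Eric is asking about are fixed throughout sk_convert_filter(), so naming them is cheap. Here is the quoted JEQ-with-negative-K fragment rewritten with illustrative names; the names are made up for this sketch, but the mapping itself (classic A in R6, classic X in R7, return value in R0, R2 as a temporary) is the one visible in the quoted code.

#define BPF_REG_RET	0	/* R0: return value                      */
#define BPF_REG_TMP	2	/* R2: scratch for the zero-extended imm */
#define BPF_REG_A	6	/* R6: classic BPF accumulator A         */
#define BPF_REG_X	7	/* R7: classic BPF index register X      */

if (BPF_SRC(fp->code) == BPF_K && (int)fp->k < 0) {
	/* extended BPF immediates are signed: zero extend the
	 * immediate into a temporary and compare against that
	 */
	insn->code = BPF_ALU | BPF_MOV | BPF_K;
	insn->a_reg = BPF_REG_TMP;
	insn->imm = fp->k;
	insn++;

	insn->a_reg = BPF_REG_A;
	insn->x_reg = BPF_REG_TMP;
	bpf_src = BPF_X;
} else {
	insn->a_reg = BPF_REG_A;
	insn->x_reg = BPF_REG_X;
	insn->imm = fp->k;
	bpf_src = BPF_SRC(fp->code);
}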
* [PATCH v7 net-next 2/3] seccomp: convert seccomp to use extended BPF 2014-03-08 23:15 [PATCH v7 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp Alexei Starovoitov 2014-03-08 23:15 ` [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov @ 2014-03-08 23:15 ` Alexei Starovoitov 2014-03-08 23:15 ` [PATCH v7 net-next 3/3] doc: filter: add Extended BPF documentation Alexei Starovoitov 2 siblings, 0 replies; 19+ messages in thread From: Alexei Starovoitov @ 2014-03-08 23:15 UTC (permalink / raw) To: David S. Miller Cc: Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt, Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross, Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei, Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker, Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven, Christoph Hellwig, linux-kernel, netdev use sk_convert_filter() to convert seccomp BPF into extended BPF 05-sim-long_jumps.c of libseccomp was used as micro-benchmark: seccomp_rule_add_exact(ctx,... seccomp_rule_add_exact(ctx,... rc = seccomp_load(ctx); for (i = 0; i < 10000000; i++) syscall(199, 100); 'short filter' has 2 rules 'large filter' has 200 rules 'short filter' performance is slightly better on x86_64,i386,arm32 'large filter' is much faster on x86_64 and i386 and shows no difference on arm32 --x86_64-- short filter old BPF: 2.7 sec 39.12% bench libc-2.15.so [.] syscall 8.10% bench [kernel.kallsyms] [k] sk_run_filter 6.31% bench [kernel.kallsyms] [k] system_call 5.59% bench [kernel.kallsyms] [k] trace_hardirqs_on_caller 4.37% bench [kernel.kallsyms] [k] trace_hardirqs_off_caller 3.70% bench [kernel.kallsyms] [k] __secure_computing 3.67% bench [kernel.kallsyms] [k] lock_is_held 3.03% bench [kernel.kallsyms] [k] seccomp_bpf_load new BPF: 2.58 sec 42.05% bench libc-2.15.so [.] syscall 6.91% bench [kernel.kallsyms] [k] system_call 6.25% bench [kernel.kallsyms] [k] trace_hardirqs_on_caller 6.07% bench [kernel.kallsyms] [k] __secure_computing 5.08% bench [kernel.kallsyms] [k] sk_run_filter_ext --arm32-- short filter old BPF: 4.0 sec 39.92% bench [kernel.kallsyms] [k] vector_swi 16.60% bench [kernel.kallsyms] [k] sk_run_filter 14.66% bench libc-2.17.so [.] syscall 5.42% bench [kernel.kallsyms] [k] seccomp_bpf_load 5.10% bench [kernel.kallsyms] [k] __secure_computing new BPF: 3.7 sec 35.93% bench [kernel.kallsyms] [k] vector_swi 21.89% bench libc-2.17.so [.] syscall 13.45% bench [kernel.kallsyms] [k] sk_run_filter_ext 6.25% bench [kernel.kallsyms] [k] __secure_computing 3.96% bench [kernel.kallsyms] [k] syscall_trace_exit --x86_64-- large filter old BPF: 8.6 seconds 73.38% bench [kernel.kallsyms] [k] sk_run_filter 10.70% bench libc-2.15.so [.] syscall 5.09% bench [kernel.kallsyms] [k] seccomp_bpf_load 1.97% bench [kernel.kallsyms] [k] system_call ext BPF: 5.7 seconds 66.20% bench [kernel.kallsyms] [k] sk_run_filter_ext 16.75% bench libc-2.15.so [.] syscall 3.31% bench [kernel.kallsyms] [k] system_call 2.88% bench [kernel.kallsyms] [k] __secure_computing --i386-- large filter old BPF: 5.4 sec ext BPF: 3.8 sec --arm32-- large filter old BPF: 13.5 sec 73.88% bench [kernel.kallsyms] [k] sk_run_filter 10.29% bench [kernel.kallsyms] [k] vector_swi 6.46% bench libc-2.17.so [.] 
syscall 2.94% bench [kernel.kallsyms] [k] seccomp_bpf_load 1.19% bench [kernel.kallsyms] [k] __secure_computing 0.87% bench [kernel.kallsyms] [k] sys_getuid new BPF: 13.5 sec 76.08% bench [kernel.kallsyms] [k] sk_run_filter_ext 10.98% bench [kernel.kallsyms] [k] vector_swi 5.87% bench libc-2.17.so [.] syscall 1.77% bench [kernel.kallsyms] [k] __secure_computing 0.93% bench [kernel.kallsyms] [k] sys_getuid BPF filters generated by seccomp are very branchy, so ext BPF performance is better than old BPF. Performance gains will be even higher when extended BPF JIT is committed. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Reviewed-by: Kees Cook <keescook@chromium.org> --- include/linux/seccomp.h | 1 - kernel/seccomp.c | 118 ++++++++++++++++++++++------------------------- net/core/filter.c | 5 -- 3 files changed, 56 insertions(+), 68 deletions(-) diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h index 6f19cfd1840e..4054b0994071 100644 --- a/include/linux/seccomp.h +++ b/include/linux/seccomp.h @@ -76,7 +76,6 @@ static inline int seccomp_mode(struct seccomp *s) #ifdef CONFIG_SECCOMP_FILTER extern void put_seccomp_filter(struct task_struct *tsk); extern void get_seccomp_filter(struct task_struct *tsk); -extern u32 seccomp_bpf_load(int off); #else /* CONFIG_SECCOMP_FILTER */ static inline void put_seccomp_filter(struct task_struct *tsk) { diff --git a/kernel/seccomp.c b/kernel/seccomp.c index b7a10048a32c..2a18f43acbd6 100644 --- a/kernel/seccomp.c +++ b/kernel/seccomp.c @@ -55,60 +55,31 @@ struct seccomp_filter { atomic_t usage; struct seccomp_filter *prev; unsigned short len; /* Instruction count */ - struct sock_filter insns[]; + struct sock_filter_ext insns[]; }; /* Limit any path through the tree to 256KB worth of instructions. */ #define MAX_INSNS_PER_PATH ((1 << 18) / sizeof(struct sock_filter)) -/** - * get_u32 - returns a u32 offset into data - * @data: a unsigned 64 bit value - * @index: 0 or 1 to return the first or second 32-bits - * - * This inline exists to hide the length of unsigned long. If a 32-bit - * unsigned long is passed in, it will be extended and the top 32-bits will be - * 0. If it is a 64-bit unsigned long, then whatever data is resident will be - * properly returned. - * +/* * Endianness is explicitly ignored and left for BPF program authors to manage * as per the specific architecture. */ -static inline u32 get_u32(u64 data, int index) +static void populate_seccomp_data(struct seccomp_data *sd) { - return ((u32 *)&data)[index]; -} - -/* Helper for bpf_load below. */ -#define BPF_DATA(_name) offsetof(struct seccomp_data, _name) -/** - * bpf_load: checks and returns a pointer to the requested offset - * @off: offset into struct seccomp_data to load from - * - * Returns the requested 32-bits of data. - * seccomp_check_filter() should assure that @off is 32-bit aligned - * and not out of bounds. Failure to do so is a BUG. 
- */ -u32 seccomp_bpf_load(int off) -{ - struct pt_regs *regs = task_pt_regs(current); - if (off == BPF_DATA(nr)) - return syscall_get_nr(current, regs); - if (off == BPF_DATA(arch)) - return syscall_get_arch(current, regs); - if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) { - unsigned long value; - int arg = (off - BPF_DATA(args[0])) / sizeof(u64); - int index = !!(off % sizeof(u64)); - syscall_get_arguments(current, regs, arg, 1, &value); - return get_u32(value, index); - } - if (off == BPF_DATA(instruction_pointer)) - return get_u32(KSTK_EIP(current), 0); - if (off == BPF_DATA(instruction_pointer) + sizeof(u32)) - return get_u32(KSTK_EIP(current), 1); - /* seccomp_check_filter should make this impossible. */ - BUG(); + struct task_struct *task = current; + struct pt_regs *regs = task_pt_regs(task); + + sd->nr = syscall_get_nr(task, regs); + sd->arch = syscall_get_arch(task, regs); + /* unroll syscall_get_args to help gcc on arm */ + syscall_get_arguments(task, regs, 0, 1, (unsigned long *)&sd->args[0]); + syscall_get_arguments(task, regs, 1, 1, (unsigned long *)&sd->args[1]); + syscall_get_arguments(task, regs, 2, 1, (unsigned long *)&sd->args[2]); + syscall_get_arguments(task, regs, 3, 1, (unsigned long *)&sd->args[3]); + syscall_get_arguments(task, regs, 4, 1, (unsigned long *)&sd->args[4]); + syscall_get_arguments(task, regs, 5, 1, (unsigned long *)&sd->args[5]); + sd->instruction_pointer = KSTK_EIP(task); } /** @@ -133,17 +104,17 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen) switch (code) { case BPF_S_LD_W_ABS: - ftest->code = BPF_S_ANC_SECCOMP_LD_W; + ftest->code = BPF_LDX | BPF_W | BPF_ABS; /* 32-bit aligned and not out of bounds. */ if (k >= sizeof(struct seccomp_data) || k & 3) return -EINVAL; continue; case BPF_S_LD_W_LEN: - ftest->code = BPF_S_LD_IMM; + ftest->code = BPF_LD | BPF_IMM; ftest->k = sizeof(struct seccomp_data); continue; case BPF_S_LDX_W_LEN: - ftest->code = BPF_S_LDX_IMM; + ftest->code = BPF_LDX | BPF_IMM; ftest->k = sizeof(struct seccomp_data); continue; /* Explicitly include allowed calls. */ @@ -185,6 +156,7 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen) case BPF_S_JMP_JGT_X: case BPF_S_JMP_JSET_K: case BPF_S_JMP_JSET_X: + sk_decode_filter(ftest, ftest); continue; default: return -EINVAL; @@ -202,18 +174,21 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen) static u32 seccomp_run_filters(int syscall) { struct seccomp_filter *f; + struct seccomp_data sd; u32 ret = SECCOMP_RET_ALLOW; /* Ensure unexpected behavior doesn't result in failing open. */ if (WARN_ON(current->seccomp.filter == NULL)) return SECCOMP_RET_KILL; + populate_seccomp_data(&sd); + /* * All filters in the list are evaluated and the lowest BPF return * value always takes priority (ignoring the DATA). 
*/ for (f = current->seccomp.filter; f; f = f->prev) { - u32 cur_ret = sk_run_filter(NULL, f->insns); + u32 cur_ret = sk_run_filter_ext(&sd, f->insns); if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION)) ret = cur_ret; } @@ -231,6 +206,8 @@ static long seccomp_attach_filter(struct sock_fprog *fprog) struct seccomp_filter *filter; unsigned long fp_size = fprog->len * sizeof(struct sock_filter); unsigned long total_insns = fprog->len; + struct sock_filter *fp; + int new_len; long ret; if (fprog->len == 0 || fprog->len > BPF_MAXINSNS) @@ -252,28 +229,42 @@ static long seccomp_attach_filter(struct sock_fprog *fprog) CAP_SYS_ADMIN) != 0) return -EACCES; - /* Allocate a new seccomp_filter */ - filter = kzalloc(sizeof(struct seccomp_filter) + fp_size, - GFP_KERNEL|__GFP_NOWARN); - if (!filter) + fp = kzalloc(fp_size, GFP_KERNEL|__GFP_NOWARN); + if (!fp) return -ENOMEM; - atomic_set(&filter->usage, 1); - filter->len = fprog->len; /* Copy the instructions from fprog. */ ret = -EFAULT; - if (copy_from_user(filter->insns, fprog->filter, fp_size)) - goto fail; + if (copy_from_user(fp, fprog->filter, fp_size)) + goto free_prog; /* Check and rewrite the fprog via the skb checker */ - ret = sk_chk_filter(filter->insns, filter->len); + ret = sk_chk_filter(fp, fprog->len); if (ret) - goto fail; + goto free_prog; /* Check and rewrite the fprog for seccomp use */ - ret = seccomp_check_filter(filter->insns, filter->len); + ret = seccomp_check_filter(fp, fprog->len); if (ret) - goto fail; + goto free_prog; + + /* convert 'sock_filter' insns to 'sock_filter_ext' insns */ + ret = sk_convert_filter(fp, fprog->len, NULL, &new_len); + if (ret) + goto free_prog; + + /* Allocate a new seccomp_filter */ + filter = kzalloc(sizeof(struct seccomp_filter) + + sizeof(struct sock_filter_ext) * new_len, + GFP_KERNEL|__GFP_NOWARN); + if (!filter) + goto free_prog; + + ret = sk_convert_filter(fp, fprog->len, filter->insns, &new_len); + if (ret) + goto free_filter; + atomic_set(&filter->usage, 1); + filter->len = new_len; /* * If there is an existing filter, make it the prev and don't drop its @@ -282,8 +273,11 @@ static long seccomp_attach_filter(struct sock_fprog *fprog) filter->prev = current->seccomp.filter; current->seccomp.filter = filter; return 0; -fail: + +free_filter: kfree(filter); +free_prog: + kfree(fp); return ret; } diff --git a/net/core/filter.c b/net/core/filter.c index ad6f7546ce64..71dc257641f2 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -384,11 +384,6 @@ load_b: A = 0; continue; } -#ifdef CONFIG_SECCOMP_FILTER - case BPF_S_ANC_SECCOMP_LD_W: - A = seccomp_bpf_load(fentry->k); - continue; -#endif default: WARN_RATELIMIT(1, "Unknown code:%u jt:%u tf:%u k:%u\n", fentry->code, fentry->jt, -- 1.7.9.5 ^ permalink raw reply related [flat|nested] 19+ messages in thread
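The micro-benchmark quoted in the commit message above, fleshed out into something buildable (gcc bench.c -lseccomp). The default-allow policy, the rule shape and the 200-rule count are guesses at what 05-sim-long_jumps.c does; syscall(199, 100) and the iteration count are taken verbatim from the commit message.

#include <unistd.h>
#include <sys/syscall.h>
#include <seccomp.h>

int main(void)
{
	scmp_filter_ctx ctx;
	int i, rc = -1;

	ctx = seccomp_init(SCMP_ACT_ALLOW);		/* default: allow */
	if (!ctx)
		return 1;

	/* 'short filter' ~2 rules, 'large filter' ~200 rules; none of the
	 * rules match the syscall issued below, so every call walks the
	 * whole (branchy) program before falling through to ALLOW
	 */
	for (i = 0; i < 200; i++) {
		rc = seccomp_rule_add_exact(ctx, SCMP_ACT_TRAP,
					    SCMP_SYS(setpriority), 1,
					    SCMP_A0(SCMP_CMP_EQ, i));
		if (rc < 0)
			goto out;
	}

	rc = seccomp_load(ctx);
	if (rc < 0)
		goto out;

	for (i = 0; i < 10000000; i++)
		syscall(199, 100);			/* as in the quoted benchmark */
out:
	seccomp_release(ctx);
	return rc ? 1 : 0;
}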
* [PATCH v7 net-next 3/3] doc: filter: add Extended BPF documentation
  2014-03-08 23:15 [PATCH v7 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp Alexei Starovoitov
  2014-03-08 23:15 ` [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov
  2014-03-08 23:15 ` [PATCH v7 net-next 2/3] seccomp: convert seccomp to use extended BPF Alexei Starovoitov
@ 2014-03-08 23:15 ` Alexei Starovoitov
  2 siblings, 0 replies; 19+ messages in thread
From: Alexei Starovoitov @ 2014-03-08 23:15 UTC (permalink / raw)
To: David S. Miller
Cc: Daniel Borkmann, Ingo Molnar, Will Drewry, Steven Rostedt,
	Peter Zijlstra, H. Peter Anvin, Hagen Paul Pfeifer, Jesse Gross,
	Thomas Gleixner, Masami Hiramatsu, Tom Zanussi, Jovi Zhangwei,
	Eric Dumazet, Linus Torvalds, Andrew Morton, Frederic Weisbecker,
	Arnaldo Carvalho de Melo, Pekka Enberg, Arjan van de Ven,
	Christoph Hellwig, linux-kernel, netdev

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Reviewed-by: Daniel Borkmann <dborkman@redhat.com>
---
 Documentation/networking/filter.txt |  181 +++++++++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/Documentation/networking/filter.txt b/Documentation/networking/filter.txt
index a06b48d2f5cc..c3f687bf8e82 100644
--- a/Documentation/networking/filter.txt
+++ b/Documentation/networking/filter.txt
@@ -546,6 +546,186 @@ ffffffffa0069c8f + <x>:
 For BPF JIT developers, bpf_jit_disasm, bpf_asm and bpf_dbg provides a
 useful toolchain for developing and testing the kernel's JIT compiler.
 
+Extended BPF
+------------
+Extended BPF extends BPF in the following ways:
+- from 2 to 10 registers
+  Original BPF has two registers (A and X) and hidden frame pointer.
+  Extended BPF has ten registers and read-only frame pointer.
+- from 32-bit registers to 64-bit registers
+  semantics of old 32-bit ALU operations are preserved via 32-bit
+  subregisters
+- if (cond) jump_true; else jump_false;
+  old BPF insns are replaced with:
+  if (cond) jump_true; /* else fallthrough */
+- adds signed > and >= insns
+- 16 4-byte stack slots for register spill-fill replaced with
+  up to 512 bytes of multi-use stack space
+- introduces bpf_call insn and register passing convention for zero
+  overhead calls from/to other kernel functions (not part of this patch)
+- adds arithmetic right shift insn
+- adds swab32/swab64 insns
+- adds atomic_add insn
+- old tax/txa insns are replaced with 'mov dst,src' insn
+
+Extended BPF is designed to be JITed with one to one mapping, which
+allows GCC/LLVM compilers to generate optimized BPF code that performs
+almost as fast as natively compiled code.
+
+sysctl net.core.bpf_ext_enable=1
+controls whether filters attached to sockets will be automatically
+converted to extended BPF or not.
+
+BPF is a safe, dynamically loadable program that can call a fixed set
+of kernel functions and takes a pointer to data as an input,
+where the data is an skb, seccomp_data, kprobe function arguments, etc.
+
+The extended instruction set was designed with these goals:
+- write programs in restricted C and compile into BPF with GCC/LLVM
+- just-in-time map to modern 64-bit CPU with minimal performance overhead
+  over two steps: C -> BPF -> native code
+- guarantee termination and safety of BPF program in kernel
+  with simple algorithm
+
+The GCC/LLVM-bpf backend is optional.
+Extended BPF can be coded with macros from filter.h just like original BPF,
+though the same filter done in C is easier to understand.
+sk_convert_filter() remaps original BPF insns into extended ones.
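As a taste of the "same filter done in C" mentioned above, a seccomp-style
policy could look like the sketch below. This is purely illustrative and not
part of the patch series: the entry-point name and the way the headers would
be pulled in by a BPF-targeting compiler are assumptions; only struct
seccomp_data and the SECCOMP_RET_* values come from existing kernel UAPI.

	/* Hypothetical restricted-C filter over struct seccomp_data.
	 * A GCC/LLVM BPF backend (not part of this series) would receive
	 * the context pointer in R1, as described in the commit message. */
	#include <linux/seccomp.h>	/* struct seccomp_data, SECCOMP_RET_* */
	#include <asm/unistd.h>		/* __NR_* syscall numbers */

	unsigned int allow_getpid_only(struct seccomp_data *ctx)
	{
		if (ctx->nr == __NR_getpid)
			return SECCOMP_RET_ALLOW;	/* permit this syscall */
		return SECCOMP_RET_TRAP;		/* trap everything else */
	}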
+
+Minimal performance overhead is achieved by having a one to one mapping
+between BPF insns and native insns, and a one to one mapping between BPF
+registers and native registers on 64-bit CPUs.
+
+Extended BPF may allow jumps forward and backward for two reasons:
+to reduce branch mispredict penalty the compiler moves cold basic blocks out
+of the fall-through path, and to reduce code duplication that would be hard
+to avoid if only forward jumps were available.
+To guarantee termination a simple non-recursive depth-first-search verifies
+that there are no back-edges (no loops in the program), the program is a DAG
+with root at the first insn, all branches end at the last RET insn and
+all instructions are reachable.
+Original BPF actually allows unreachable insns. Though it's safe, it will be
+fixed when extended BPF replaces BPF completely.
+
+Original BPF has two registers (A and X) and hidden frame pointer.
+Extended BPF has ten registers and read-only frame pointer.
+Since 64-bit CPUs pass arguments to functions via registers,
+the number of args from a BPF program to an in-kernel function is restricted
+to 5 and one register is used to accept the return value from an in-kernel
+function.
+x86_64 passes the first 6 arguments in registers.
+aarch64/sparcv9/mips64 have 7-8 registers for arguments.
+x86_64 has 6 callee saved registers.
+aarch64/sparcv9/mips64 have 11 or more callee saved registers.
+
+Therefore the extended BPF calling convention is defined as:
+R0 - return value from in-kernel function
+R1-R5 - arguments from BPF program to in-kernel function
+R6-R9 - callee saved registers that in-kernel function will preserve
+R10 - read-only frame pointer to access stack
+
+so that all BPF registers map one to one to HW registers on x86_64, aarch64,
+etc. and the BPF calling convention maps directly to the ABIs used by the
+kernel on 64-bit architectures.
+On 32-bit architectures a JIT may map programs that use only 32-bit arithmetic
+and let more complex programs be interpreted.
+
+R0-R5 are scratch registers and a BPF program needs to spill/fill them if
+necessary across calls.
+Note that there is only one BPF program == one BPF function and it cannot call
+other BPF functions. It can only call predefined in-kernel functions.
+
+All BPF registers are 64-bit with a 32-bit lower subregister that zero-extends
+into 64-bit if written to. That behavior maps directly to the x86_64 and arm64
+subregister definition, but makes other JITs more difficult.
+
+Original BPF and extended BPF are two operand instructions, which helps
+to do one-to-one mapping between a BPF insn and an x86 insn during JIT.
+
+Extended BPF doesn't have a pre-defined endianness, so as not to favor one
+architecture over another. Therefore a bswap insn is available.
+Original BPF doesn't have such an insn and does bswap as part of the
+sk_load_word call, which is often unnecessary if we only want to compare the
+value with a constant.
+Restricted C code might be written differently depending on endianness
+and GCC/LLVM-bpf will take an endianness flag.
+
+32-bit architectures run 64-bit extended BPF programs via the interpreter.
+Their JITs may convert BPF programs that only use 32-bit subregs into the
+native instruction set and let the rest be interpreted.
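To make the calling convention above concrete, one plausible mapping of the
ten BPF registers onto x86_64 is shown below. It follows directly from the
x86_64 ABI described above (arguments in rdi/rsi/rdx/rcx/r8, return value in
rax, rbx/r13/r14/r15/rbp callee saved); the actual assignment would be up to
a future extended-BPF JIT, which is not part of this series.

	R0  -> rax	return value
	R1  -> rdi	1st argument
	R2  -> rsi	2nd argument
	R3  -> rdx	3rd argument
	R4  -> rcx	4th argument
	R5  -> r8	5th argument
	R6  -> rbx	callee saved
	R7  -> r13	callee saved
	R8  -> r14	callee saved
	R9  -> r15	callee saved
	R10 -> rbp	read-only frame pointer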
+
+Extended BPF is 64-bit, because on 64-bit architectures pointers are 64-bit
+and we want to pass 64-bit values in and out of kernel functions, so 32-bit
+BPF registers would require defining a register-pair ABI, there wouldn't be a
+direct BPF register to HW register mapping, and the JIT would need to do
+combine/split/move operations for every register in and out of the function,
+which is complex, bug prone and slow.
+Another reason is atomic 64-bit counters.
+
+Just like original BPF, extended BPF is safe, deterministic and the kernel can
+easily prove that. The safety of the program is determined in two steps.
+The first step does a depth-first-search to disallow loops and perform other
+CFG validation.
+The second step starts from the first insn and descends all possible paths.
+It simulates execution of every insn and observes the state change of
+registers and stack.
+At the start of the program the register R1 contains a pointer to the context
+and has type PTR_TO_CTX. If the checker sees an insn that does R2=R1, then R2
+now has type PTR_TO_CTX as well and can be used on the right hand side of an
+expression.
+If R1=PTR_TO_CTX and the insn is R2=R1+1, then R2=INVALID_PTR and it is
+readable.
+If a register was never written to, it's not readable.
+After a kernel function call, R1-R5 are reset to unreadable and R0 has the
+return type of the function. Since R6-R9 are callee saved, their state is
+preserved across the call.
+load/store instructions are allowed only with registers of valid types, which
+are PTR_TO_CTX, PTR_TO_TABLE, PTR_TO_STACK. They are bounds and alignment
+checked.
+
+The input context pointer is generic. Its contents are defined by the specific
+use case.
+For seccomp, R1 points to seccomp_data.
+For converted BPF filters, R1 points to the skb.
+Through the get_context_access callback the BPF checker is customized, so that
+a BPF program can only access certain fields of the input context with a
+specified size and alignment.
+For example, the following insn:
+  BPF_INSN_LD(BPF_W, R0, R6, 8)
+intends to load a word from address R6 + 8 and store it into R0.
+If R6=PTR_TO_CTX, then the get_context_access callback should let the checker
+know that offset 8 of size 4 bytes can be accessed for reading, otherwise the
+checker will reject the program.
+
+If R6=PTR_TO_STACK, then the access should be aligned and within the stack
+bounds, which are hard coded to [-480, 0]. In this example the offset is 8,
+so it will fail verification.
+The checker will allow a BPF program to read data from the stack only after it
+has written into it.
+Pointer register spill/fill is tracked as well, since four (R6-R9) callee saved
+registers may not be enough for some programs.
+
+Allowed function calls are customized via the get_func_proto callback.
+
+Among the useful functions that can be made available to BPF programs
+are bpf_table_lookup/bpf_table_update.
+They can help tracing filters collect different types of statistics.
+Example: pc addresses for a drop_monitor filter.
+
+In the seccomp and socket filter use cases an extended BPF program consists
+of instructions only, but for the tracing filter case a BPF program may
+contain BPF tables as well.
+There are no special instructions to access BPF tables. The access is done
+via function calls.
+
+A BPF program identifies the table by table_id and accesses it in C like:
+  elem = bpf_table_lookup(ctx, table_id, key);
+
+The BPF checker matches 'table_id' against known tables, and verifies that
+'key' points to the stack and that table->key_size bytes are initialized.
+bpf_table_lookup() is a normal kernel function.
+It needs to do a lookup and return either a valid pointer to the element
+or NULL.
+The BPF checker will verify that the program accesses the pointer only
+after comparing it to NULL.
+It's up to the implementation to decide how the lookup is done and what the
+key means.
+
+Just like the original, extended BPF is limited to 4096 insns, which means
+that any program will terminate quickly and will call a fixed number of
+kernel functions.
+
 Misc
 ----
@@ -561,3 +741,4 @@ the underlying architecture.
 
 Jay Schulist <jschlst@samba.org>
 Daniel Borkmann <dborkman@redhat.com>
+Alexei Starovoitov <ast@plumgrid.com>
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 19+ messages in thread
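To connect the table-access description in the documentation patch above with
code, a tracing-filter fragment might look like the sketch below. The context
type, the helper used to obtain the pc, and MY_TABLE_ID are hypothetical and
exist only for illustration; bpf_table_lookup() and the lookup-then-NULL-check
pattern are the parts described in the documentation.

	struct my_key {
		unsigned long pc;	/* e.g. address for a drop_monitor-style filter */
	};

	unsigned int count_drops(struct bpf_context *ctx)	/* context type is hypothetical */
	{
		struct my_key key;
		long *counter;

		key.pc = get_probed_pc(ctx);	/* hypothetical helper, for illustration */

		/* 'key' lives on the BPF stack; the checker verifies that
		 * table->key_size bytes of it are initialized. */
		counter = bpf_table_lookup(ctx, MY_TABLE_ID, &key);
		if (counter != NULL)	/* checker requires the NULL check before any use */
			(*counter)++;	/* a real program would use the atomic_add insn */
		return 0;
	}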
end of thread, other threads:[~2014-03-11 18:19 UTC | newest]

Thread overview: 19+ messages
2014-03-08 23:15 [PATCH v7 net-next 0/3] filter: add Extended BPF interpreter and converter, seccomp Alexei Starovoitov
2014-03-08 23:15 ` [PATCH v7 net-next 1/3] filter: add Extended BPF interpreter and converter Alexei Starovoitov
2014-03-09 12:29   ` Daniel Borkmann
2014-03-09 17:08     ` Alexei Starovoitov
2014-03-09 22:00       ` Daniel Borkmann
2014-03-10  0:41         ` Alexei Starovoitov
2014-03-11 17:40           ` Pavel Emelyanov
2014-03-11 18:03             ` Alexei Starovoitov
2014-03-11 18:19               ` Pavel Emelyanov
2014-03-09 14:45   ` Eric Dumazet
2014-03-09 17:38     ` Alexei Starovoitov
2014-03-09 18:11       ` Eric Dumazet
2014-03-09 18:57         ` Alexei Starovoitov
2014-03-09 19:11           ` Eric Dumazet
2014-03-09 19:20             ` Alexei Starovoitov
2014-03-09 14:49   ` Eric Dumazet
2014-03-09 18:02     ` Alexei Starovoitov
2014-03-08 23:15 ` [PATCH v7 net-next 2/3] seccomp: convert seccomp to use extended BPF Alexei Starovoitov
2014-03-08 23:15 ` [PATCH v7 net-next 3/3] doc: filter: add Extended BPF documentation Alexei Starovoitov