* [PATCH 1/3] net: bpf jit: ppc: optimize choose_load_func error path
@ 2013-10-08 16:31 Vladimir Murzin
2013-10-08 22:50 ` Jan Seiffert
[not found] ` <1381249910-17338-2-git-send-email-murzin.v@gmail.com>
0 siblings, 2 replies; 11+ messages in thread
From: Vladimir Murzin @ 2013-10-08 16:31 UTC (permalink / raw)
To: netdev
Cc: av1474, Vladimir Murzin, Jan Seiffert, Benjamin Herrenschmidt,
Paul Mackerras, Daniel Borkmann, Matt Evans
The CHOOSE_LOAD_FUNC macro returns the "any offset" handler when the checks
on K do not pass. At the same time, the "any offset" handlers repeat the same
checks against r_addr at run time, which in this case always leads to
bpf_error.
Run-time checks are still necessary for indirect load operations, but the
error path for absolute and mesh loads is worth optimizing at BPF compile
time.
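(Illustration, not part of the patch: a minimal C sketch of the range test the
macro encodes, assuming the usual SKF_LL_OFF/SKF_NET_OFF negative-offset
convention from linux/filter.h. It shows why a constant K below SKF_LL_OFF can
never pass the run-time check and can therefore be rejected while JITing:

	#include <stdbool.h>
	#include <linux/filter.h>	/* SKF_LL_OFF */

	/* mirrors the ternary in CHOOSE_LOAD_FUNC */
	static bool k_can_ever_succeed(int k)
	{
		if (k >= 0)
			return true;	/* ordinary absolute offset */
		if (k >= SKF_LL_OFF)
			return true;	/* negative "magic" offset window */
		return false;		/* below SKF_LL_OFF: always bpf_error */
	}
)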
Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
Cc: Jan Seiffert <kaffeemonster@googlemail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Matt Evans <matt@ozlabs.org>
---
arch/powerpc/net/bpf_jit_comp.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index bf56e33..754320a 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -132,7 +132,7 @@ static void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
}
#define CHOOSE_LOAD_FUNC(K, func) \
- ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
+ ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : NULL) : func##_positive_offset)
/* Assemble the body code between the prologue & epilogue. */
static int bpf_jit_build_body(struct sk_filter *fp, u32 *image,
@@ -427,6 +427,11 @@ static int bpf_jit_build_body(struct sk_filter *fp, u32 *image,
case BPF_S_LD_B_ABS:
func = CHOOSE_LOAD_FUNC(K, sk_load_byte);
common_load:
+ if (!func) {
+ PPC_LI(r_ret, 0);
+ PPC_JMP(exit_addr);
+ break;
+ }
/* Load from [K]. */
ctx->seen |= SEEN_DATAREF;
PPC_LI64(r_scratch1, func);
--
1.7.10.4
* Re: [PATCH 1/3] net: bpf jit: ppc: optimize choose_load_func error path
2013-10-08 16:31 [PATCH 1/3] net: bpf jit: ppc: optimize choose_load_func error path Vladimir Murzin
@ 2013-10-08 22:50 ` Jan Seiffert
2013-10-13 14:26 ` Vladimir Murzin
[not found] ` <1381249910-17338-2-git-send-email-murzin.v@gmail.com>
1 sibling, 1 reply; 11+ messages in thread
From: Jan Seiffert @ 2013-10-08 22:50 UTC (permalink / raw)
To: Vladimir Murzin, netdev
Cc: av1474, Benjamin Herrenschmidt, Paul Mackerras, Daniel Borkmann,
Matt Evans
Vladimir Murzin wrote:
> Macro CHOOSE_LOAD_FUNC returns handler for "any offset" if checks for K
> were not passed. At the same time handlers for "any offset" cases make
> the same checks against r_addr at run-time, that will always lead to
> bpf_error.
>
Hmmm, if only I could remember why I wrote it that way...
If memory serves me right, the idea was to always have a solid fallback, no
matter what, to the generic load function, which works more like load_pointer
from filter.c.
This way the CHOOSE macro could have been used in more places, but that
never played out.
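[For context, roughly the shape of load_pointer in net/core/filter.c around
that time - a simplified sketch, not a verbatim copy:

	static void *load_pointer(const struct sk_buff *skb, int k,
				  unsigned int size, void *buffer)
	{
		if (k >= 0)
			return skb_header_pointer(skb, k, size, buffer);
		/* negative k: the SKF_LL_OFF / SKF_NET_OFF magic offsets */
		return bpf_internal_load_pointer_neg_helper(skb, k, size);
	}
]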
And since all I wanted was to get the negative indirect load fixed,
optimizing the constant error case was not on my plate.
That you can get your negative-K filter JITed in the first place, even
if the constant error case was slower than necessary, was good enough ;)
The ARM JIT is broken to this day...
You can have my
I'm-OK-with-this: Jan Seiffert <kaffeemonster@googlemail.com>
for all three patches, -ENOTIME for a full review ATM.
> Run-time checks are still necessary for indirect load operations, but
> error path for absolute and mesh loads are worth to optimize during bpf
> compile time.
>
> Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
>
> Cc: Jan Seiffert <kaffeemonster@googlemail.com>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Daniel Borkmann <dborkman@redhat.com>
> Cc: Matt Evans <matt@ozlabs.org>
>
> ---
> arch/powerpc/net/bpf_jit_comp.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index bf56e33..754320a 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -132,7 +132,7 @@ static void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
> }
>
> #define CHOOSE_LOAD_FUNC(K, func) \
> - ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
> + ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : NULL) : func##_positive_offset)
>
> /* Assemble the body code between the prologue & epilogue. */
> static int bpf_jit_build_body(struct sk_filter *fp, u32 *image,
> @@ -427,6 +427,11 @@ static int bpf_jit_build_body(struct sk_filter *fp, u32 *image,
> case BPF_S_LD_B_ABS:
> func = CHOOSE_LOAD_FUNC(K, sk_load_byte);
> common_load:
> + if (!func) {
> + PPC_LI(r_ret, 0);
> + PPC_JMP(exit_addr);
> + break;
> + }
> /* Load from [K]. */
> ctx->seen |= SEEN_DATAREF;
> PPC_LI64(r_scratch1, func);
>
--
An UDP packet walks into a
* Re: [PATCH 2/3] net: bpf jit: x86: optimize choose_load_func error path
[not found] ` <1381249910-17338-2-git-send-email-murzin.v@gmail.com>
@ 2013-10-11 18:56 ` David Miller
2013-10-13 14:21 ` Vladimir Murzin
2013-10-13 14:54 ` Vladimir Murzin
1 sibling, 1 reply; 11+ messages in thread
From: David Miller @ 2013-10-11 18:56 UTC (permalink / raw)
To: murzin.v; +Cc: netdev, av1474, kaffeemonster, edumazet, mingo, tglx
From: Vladimir Murzin <murzin.v@gmail.com>
Date: Tue, 8 Oct 2013 20:31:49 +0400
> Macro CHOOSE_LOAD_FUNC returns handler for "any offset" if checks for K
> were not passed. At the same time handlers for "any offset" cases make
> the same checks against r_addr at run-time, that will always lead to
> bpf_error.
>
> Run-time checks are still necessary for indirect load operations, but
> error path for absolute and mesh loads are worth to optimize during bpf
> compile time.
>
> Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
>
> Cc: Jan Seiffert <kaffeemonster@googlemail.com>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: "David S. Miller" <davem@davemloft.net
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
>
> ---
> arch/x86/net/bpf_jit_comp.c | 15 +++++++++------
> 1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 79c216a..28ac17f 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -123,7 +123,7 @@ static inline void bpf_flush_icache(void *start, void *end)
> }
>
> #define CHOOSE_LOAD_FUNC(K, func) \
> - ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
> + ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : NULL) : func##_positive_offset)
>
> /* Helper to find the offset of pkt_type in sk_buff
> * We want to make sure its still a 3bit field starting at a byte boundary.
> @@ -611,7 +611,13 @@ void bpf_jit_compile(struct sk_filter *fp)
> }
> case BPF_S_LD_W_ABS:
> func = CHOOSE_LOAD_FUNC(K, sk_load_word);
> -common_load: seen |= SEEN_DATAREF;
> +common_load:
> + if (!func) {
> + CLEAR_A();
> + EMIT_JMP(cleanup_addr - addrs[i]);
> + break;
> + }
> + seen |= SEEN_DATAREF;
> t_offset = func - (image + addrs[i]);
> EMIT1_off32(0xbe, K); /* mov imm32,%esi */
> EMIT1_off32(0xe8, t_offset); /* call */
> @@ -625,10 +631,7 @@ common_load: seen |= SEEN_DATAREF;
> case BPF_S_LDX_B_MSH:
> func = CHOOSE_LOAD_FUNC(K, sk_load_byte_msh);
> seen |= SEEN_DATAREF | SEEN_XREG;
> - t_offset = func - (image + addrs[i]);
> - EMIT1_off32(0xbe, K); /* mov imm32,%esi */
> - EMIT1_off32(0xe8, t_offset); /* call sk_load_byte_msh */
> - break;
> + goto common_load;
This second hunk will set SEEN_DATAREF even if common_load takes the
!func path, which is not the intention at all here.
There's a reason why these two code blocks aren't shared.
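[A condensed sketch of the control flow in question, reconstructed from the
quoted hunks - the seen flags are updated before the jump, so the !func test
in common_load comes too late:

	case BPF_S_LDX_B_MSH:
		func = CHOOSE_LOAD_FUNC(K, sk_load_byte_msh);
		seen |= SEEN_DATAREF | SEEN_XREG;  /* runs unconditionally...  */
		goto common_load;                  /* ...before the !func test */
	...
	case BPF_S_LD_W_ABS:
		func = CHOOSE_LOAD_FUNC(K, sk_load_word);
common_load:
		if (!func) {                       /* too late: flags already set */
			CLEAR_A();
			EMIT_JMP(cleanup_addr - addrs[i]);
			break;
		}
		seen |= SEEN_DATAREF;
]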
* Re: [PATCH 2/3] net: bpf jit: x86: optimize choose_load_func error path
2013-10-11 18:56 ` [PATCH 2/3] net: bpf jit: x86: " David Miller
@ 2013-10-13 14:21 ` Vladimir Murzin
2013-10-13 14:25 ` malc
0 siblings, 1 reply; 11+ messages in thread
From: Vladimir Murzin @ 2013-10-13 14:21 UTC (permalink / raw)
To: David Miller; +Cc: netdev, av1474, kaffeemonster, edumazet, mingo, tglx
On Fri, Oct 11, 2013 at 02:56:13PM -0400, David Miller wrote:
> From: Vladimir Murzin <murzin.v@gmail.com>
> Date: Tue, 8 Oct 2013 20:31:49 +0400
>
> > Macro CHOOSE_LOAD_FUNC returns handler for "any offset" if checks for K
> > were not passed. At the same time handlers for "any offset" cases make
> > the same checks against r_addr at run-time, that will always lead to
> > bpf_error.
> >
> > Run-time checks are still necessary for indirect load operations, but
> > error path for absolute and mesh loads are worth to optimize during bpf
> > compile time.
> >
> > Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
> >
> > Cc: Jan Seiffert <kaffeemonster@googlemail.com>
> > Cc: Eric Dumazet <edumazet@google.com>
> > Cc: "David S. Miller" <davem@davemloft.net
> > Cc: "H. Peter Anvin" <hpa@zytor.com>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> >
> > ---
> > arch/x86/net/bpf_jit_comp.c | 15 +++++++++------
> > 1 file changed, 9 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> > index 79c216a..28ac17f 100644
> > --- a/arch/x86/net/bpf_jit_comp.c
> > +++ b/arch/x86/net/bpf_jit_comp.c
> > @@ -123,7 +123,7 @@ static inline void bpf_flush_icache(void *start, void *end)
> > }
> >
> > #define CHOOSE_LOAD_FUNC(K, func) \
> > - ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
> > + ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : NULL) : func##_positive_offset)
> >
> > /* Helper to find the offset of pkt_type in sk_buff
> > * We want to make sure its still a 3bit field starting at a byte boundary.
> > @@ -611,7 +611,13 @@ void bpf_jit_compile(struct sk_filter *fp)
> > }
> > case BPF_S_LD_W_ABS:
> > func = CHOOSE_LOAD_FUNC(K, sk_load_word);
> > -common_load: seen |= SEEN_DATAREF;
> > +common_load:
> > + if (!func) {
> > + CLEAR_A();
> > + EMIT_JMP(cleanup_addr - addrs[i]);
> > + break;
> > + }
> > + seen |= SEEN_DATAREF;
> > t_offset = func - (image + addrs[i]);
> > EMIT1_off32(0xbe, K); /* mov imm32,%esi */
> > EMIT1_off32(0xe8, t_offset); /* call */
> > @@ -625,10 +631,7 @@ common_load: seen |= SEEN_DATAREF;
> > case BPF_S_LDX_B_MSH:
> > func = CHOOSE_LOAD_FUNC(K, sk_load_byte_msh);
> > seen |= SEEN_DATAREF | SEEN_XREG;
> > - t_offset = func - (image + addrs[i]);
> > - EMIT1_off32(0xbe, K); /* mov imm32,%esi */
> > - EMIT1_off32(0xe8, t_offset); /* call sk_load_byte_msh */
> > - break;
> > + goto common_load;
>
> This second hunk will set SEEN_DATAREF even if common_load takes the
> !func path, that is not the intention at all here.
>
> There's a reason why these two code blocks aren't shared.
Thanks for the review, David!
What about the patch below?
---
arch/x86/net/bpf_jit_comp.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 79c216a..92128fe 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -123,7 +123,7 @@ static inline void bpf_flush_icache(void *start, void *end)
}
#define CHOOSE_LOAD_FUNC(K, func) \
- ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
+ ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : NULL) : func##_positive_offset)
/* Helper to find the offset of pkt_type in sk_buff
* We want to make sure its still a 3bit field starting at a byte boundary.
@@ -611,7 +611,14 @@ void bpf_jit_compile(struct sk_filter *fp)
}
case BPF_S_LD_W_ABS:
func = CHOOSE_LOAD_FUNC(K, sk_load_word);
-common_load: seen |= SEEN_DATAREF;
+common_load:
+ if (!func) {
+ CLEAR_A();
+ EIT_JMP(cleanup_addr - addrs[i]);
+ break;
+ }
+
+ seen |= SEEN_DATAREF;
t_offset = func - (image + addrs[i]);
EMIT1_off32(0xbe, K); /* mov imm32,%esi */
EMIT1_off32(0xe8, t_offset); /* call */
@@ -624,6 +631,13 @@ common_load: seen |= SEEN_DATAREF;
goto common_load;
case BPF_S_LDX_B_MSH:
func = CHOOSE_LOAD_FUNC(K, sk_load_byte_msh);
+
+ if (!func) {
+ CLEAR_A();
+ EIT_JMP(cleanup_addr - addrs[i]);
+ break;
+ }
+
seen |= SEEN_DATAREF | SEEN_XREG;
t_offset = func - (image + addrs[i]);
EMIT1_off32(0xbe, K); /* mov imm32,%esi */
--
1.8.1.5
Vladimir
* Re: [PATCH 2/3] net: bpf jit: x86: optimize choose_load_func error path
2013-10-13 14:21 ` Vladimir Murzin
@ 2013-10-13 14:25 ` malc
2013-10-13 14:31 ` Vladimir Murzin
0 siblings, 1 reply; 11+ messages in thread
From: malc @ 2013-10-13 14:25 UTC (permalink / raw)
To: Vladimir Murzin
Cc: David Miller, netdev, kaffeemonster, edumazet, mingo, tglx
On Sun, 13 Oct 2013, Vladimir Murzin wrote:
> On Fri, Oct 11, 2013 at 02:56:13PM -0400, David Miller wrote:
> > From: Vladimir Murzin <murzin.v@gmail.com>
> > Date: Tue, 8 Oct 2013 20:31:49 +0400
> >
[..snip..]
> -common_load: seen |= SEEN_DATAREF;
> +common_load:
> + if (!func) {
> + CLEAR_A();
> + EIT_JMP(cleanup_addr - addrs[i]);
EMIT? (likewise elsewhere)
> + break;
> + }
> +
> + seen |= SEEN_DATAREF;
> t_offset = func - (image + addrs[i]);
> EMIT1_off32(0xbe, K); /* mov imm32,%esi */
> EMIT1_off32(0xe8, t_offset); /* call */
> @@ -624,6 +631,13 @@ common_load: seen |= SEEN_DATAREF;
> goto common_load;
> case BPF_S_LDX_B_MSH:
> func = CHOOSE_LOAD_FUNC(K, sk_load_byte_msh);
> +
> + if (!func) {
> + CLEAR_A();
> + EIT_JMP(cleanup_addr - addrs[i]);
> + break;
> + }
> +
> seen |= SEEN_DATAREF | SEEN_XREG;
> t_offset = func - (image + addrs[i]);
> EMIT1_off32(0xbe, K); /* mov imm32,%esi */
>
--
mailto:av1474@comtv.ru
* Re: [PATCH 1/3] net: bpf jit: ppc: optimize choose_load_func error path
2013-10-08 22:50 ` Jan Seiffert
@ 2013-10-13 14:26 ` Vladimir Murzin
0 siblings, 0 replies; 11+ messages in thread
From: Vladimir Murzin @ 2013-10-13 14:26 UTC (permalink / raw)
To: Jan Seiffert
Cc: netdev, av1474, Benjamin Herrenschmidt, Paul Mackerras,
Daniel Borkmann, Matt Evans
On Wed, Oct 09, 2013 at 12:50:32AM +0200, Jan Seiffert wrote:
> Vladimir Murzin wrote:
> > Macro CHOOSE_LOAD_FUNC returns handler for "any offset" if checks for K
> > were not passed. At the same time handlers for "any offset" cases make
> > the same checks against r_addr at run-time, that will always lead to
> > bpf_error.
> >
>
> Hmmm, if i only would remember why i wrote it that way....
> I memory serves me right the idea was to always have a solid fall back, no
> matter what, to the generic load function which works more like the load_pointer
> from filter.c.
> This way the COOSE-macro may could have been used at more places, but that
> never played out.
>
> And since all i wanted was to get the negative indirect load fixed,
> optimizing the constant error case was not on my plate.
> That you can get your negative K filter JITed in the first place, even
> if the constant error case was slower than necessary, was good enough ;)
>
> The ARM JIT is broken till this date...
... and s390 too.
>
> You can have my
> I'm-OK-with-this: Jan Seiffert <kaffeemonster@googlemail.com>
>
> for all three patches, -ENOTIME for a full review ATM.
>
Thanks for the feedback, Jan!
> > Run-time checks are still necessary for indirect load operations, but
> > error path for absolute and mesh loads are worth to optimize during bpf
> > compile time.
> >
> > Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
> >
> > Cc: Jan Seiffert <kaffeemonster@googlemail.com>
> > Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > Cc: Paul Mackerras <paulus@samba.org>
> > Cc: Daniel Borkmann <dborkman@redhat.com>
> > Cc: Matt Evans <matt@ozlabs.org>
> >
> > ---
> > arch/powerpc/net/bpf_jit_comp.c | 7 ++++++-
> > 1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> > index bf56e33..754320a 100644
> > --- a/arch/powerpc/net/bpf_jit_comp.c
> > +++ b/arch/powerpc/net/bpf_jit_comp.c
> > @@ -132,7 +132,7 @@ static void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
> > }
> >
> > #define CHOOSE_LOAD_FUNC(K, func) \
> > - ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
> > + ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : NULL) : func##_positive_offset)
> >
> > /* Assemble the body code between the prologue & epilogue. */
> > static int bpf_jit_build_body(struct sk_filter *fp, u32 *image,
> > @@ -427,6 +427,11 @@ static int bpf_jit_build_body(struct sk_filter *fp, u32 *image,
> > case BPF_S_LD_B_ABS:
> > func = CHOOSE_LOAD_FUNC(K, sk_load_byte);
> > common_load:
> > + if (!func) {
> > + PPC_LI(r_ret, 0);
> > + PPC_JMP(exit_addr);
> > + break;
> > + }
> > /* Load from [K]. */
> > ctx->seen |= SEEN_DATAREF;
> > PPC_LI64(r_scratch1, func);
> >
>
>
> --
> An UDP packet walks into a
* Re: [PATCH 2/3] net: bpf jit: x86: optimize choose_load_func error path
2013-10-13 14:25 ` malc
@ 2013-10-13 14:31 ` Vladimir Murzin
0 siblings, 0 replies; 11+ messages in thread
From: Vladimir Murzin @ 2013-10-13 14:31 UTC (permalink / raw)
To: malc; +Cc: David Miller, netdev, kaffeemonster, edumazet, mingo, tglx
On Sun, Oct 13, 2013 at 06:25:34PM +0400, malc wrote:
> On Sun, 13 Oct 2013, Vladimir Murzin wrote:
>
> > On Fri, Oct 11, 2013 at 02:56:13PM -0400, David Miller wrote:
> > > From: Vladimir Murzin <murzin.v@gmail.com>
> > > Date: Tue, 8 Oct 2013 20:31:49 +0400
> > >
>
> [..snip..]
>
> > -common_load: seen |= SEEN_DATAREF;
> > +common_load:
> > + if (!func) {
> > + CLEAR_A();
> > + EIT_JMP(cleanup_addr - addrs[i]);
>
> EMIT? (likewise elsewhere)
Oops... Thanks for the quick response!
I'd better send the patch as a separate message.
>
> > + break;
> > + }
> > +
> > + seen |= SEEN_DATAREF;
> > t_offset = func - (image + addrs[i]);
> > EMIT1_off32(0xbe, K); /* mov imm32,%esi */
> > EMIT1_off32(0xe8, t_offset); /* call */
> > @@ -624,6 +631,13 @@ common_load: seen |= SEEN_DATAREF;
> > goto common_load;
> > case BPF_S_LDX_B_MSH:
> > func = CHOOSE_LOAD_FUNC(K, sk_load_byte_msh);
> > +
> > + if (!func) {
> > + CLEAR_A();
> > + EIT_JMP(cleanup_addr - addrs[i]);
> > + break;
> > + }
> > +
> > seen |= SEEN_DATAREF | SEEN_XREG;
> > t_offset = func - (image + addrs[i]);
> > EMIT1_off32(0xbe, K); /* mov imm32,%esi */
> >
>
> --
> mailto:av1474@comtv.ru
* [PATCH 2/3] net: bpf jit: x86: optimize choose_load_func error path
[not found] ` <1381249910-17338-2-git-send-email-murzin.v@gmail.com>
2013-10-11 18:56 ` [PATCH 2/3] net: bpf jit: x86: " David Miller
@ 2013-10-13 14:54 ` Vladimir Murzin
2013-10-13 14:54 ` Vladimir Murzin
1 sibling, 1 reply; 11+ messages in thread
From: Vladimir Murzin @ 2013-10-13 14:54 UTC (permalink / raw)
To: netdev; +Cc: davem, edumazet, av1474, Vladimir Murzin
The CHOOSE_LOAD_FUNC macro returns the "any offset" handler when the checks
on K do not pass. At the same time, the "any offset" handlers repeat the same
checks against r_addr at run time, which in this case always leads to
bpf_error.
Run-time checks are still necessary for indirect load operations, but the
error path for absolute and mesh loads is worth optimizing at BPF compile
time.
Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
---
David pointed out that the mesh load cannot be merged with the common load
code. The patch has been updated accordingly.
arch/x86/net/bpf_jit_comp.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 79c216a..92128fe 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -123,7 +123,7 @@ static inline void bpf_flush_icache(void *start, void *end)
}
#define CHOOSE_LOAD_FUNC(K, func) \
- ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
+ ((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : NULL) : func##_positive_offset)
/* Helper to find the offset of pkt_type in sk_buff
* We want to make sure its still a 3bit field starting at a byte boundary.
@@ -611,7 +611,14 @@ void bpf_jit_compile(struct sk_filter *fp)
}
case BPF_S_LD_W_ABS:
func = CHOOSE_LOAD_FUNC(K, sk_load_word);
-common_load: seen |= SEEN_DATAREF;
+common_load:
+ if (!func) {
+ CLEAR_A();
+ EMIT_JMP(cleanup_addr - addrs[i]);
+ break;
+ }
+
+ seen |= SEEN_DATAREF;
t_offset = func - (image + addrs[i]);
EMIT1_off32(0xbe, K); /* mov imm32,%esi */
EMIT1_off32(0xe8, t_offset); /* call */
@@ -624,6 +631,13 @@ common_load: seen |= SEEN_DATAREF;
goto common_load;
case BPF_S_LDX_B_MSH:
func = CHOOSE_LOAD_FUNC(K, sk_load_byte_msh);
+
+ if (!func) {
+ CLEAR_A();
+ EMIT_JMP(cleanup_addr - addrs[i]);
+ break;
+ }
+
seen |= SEEN_DATAREF | SEEN_XREG;
t_offset = func - (image + addrs[i]);
EMIT1_off32(0xbe, K); /* mov imm32,%esi */
--
1.8.1.5
* Re: [PATCH 2/3] net: bpf jit: x86: optimize choose_load_func error path
2013-10-13 14:54 ` Vladimir Murzin
@ 2013-10-13 16:36 ` Eric Dumazet
2013-10-14 2:55 ` Vladimir Murzin
0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2013-10-13 16:36 UTC (permalink / raw)
To: Vladimir Murzin; +Cc: netdev, davem, edumazet, av1474
On Sun, 2013-10-13 at 16:54 +0200, Vladimir Murzin wrote:
> Macro CHOOSE_LOAD_FUNC returns handler for "any offset" if checks for K
> were not passed. At the same time handlers for "any offset" cases make
> the same checks against r_addr at run-time, that will always lead to
> bpf_error.
>
> Run-time checks are still necessary for indirect load operations, but
> error path for absolute and mesh loads are worth to optimize during bpf
> compile time.
I don't get the point.
What real world use case or problem are you trying to handle ?
bpf_error returns 0, so it seems your patch does the same.
A buggy BPF program should not expect us to 'save' a few cycles.
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 2/3] net: bpf jit: x86: optimize choose_load_func error path
2013-10-13 16:36 ` Eric Dumazet
@ 2013-10-14 2:55 ` Vladimir Murzin
0 siblings, 0 replies; 11+ messages in thread
From: Vladimir Murzin @ 2013-10-14 2:55 UTC (permalink / raw)
To: Eric Dumazet; +Cc: netdev, davem, edumazet, av1474
On Sun, Oct 13, 2013 at 09:36:34AM -0700, Eric Dumazet wrote:
> On Sun, 2013-10-13 at 16:54 +0200, Vladimir Murzin wrote:
> > Macro CHOOSE_LOAD_FUNC returns handler for "any offset" if checks for K
> > were not passed. At the same time handlers for "any offset" cases make
> > the same checks against r_addr at run-time, that will always lead to
> > bpf_error.
> >
> > Run-time checks are still necessary for indirect load operations, but
> > error path for absolute and mesh loads are worth to optimize during bpf
> > compile time.
>
> I don't get the point.
>
> What real world use case or problem are you trying to handle ?
>
> bpf_error returns 0, so it seems your patch does the same.
>
> A buggy BPF program should not expect us to 'save' a few cycles.
>
>
>
Hi Eric!
There is no real-world use case for me - the issue was found by plain code
reading. The patch is not supposed to change the behavior of a BPF program -
it only optimizes the error path.
I agree with you, there is no significant reason to optimize a rarely used
piece of code. However, it is not only about saving pipeline cycles and
I-cache lines for a usually "never taken" branch. When this "never taken"
branch is a buggy part of the BPF program, we can avoid emitting the extra
instructions and save space for the rest of the program - there is no need
to care about the seen flags of the buggy part anymore.
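[A rough sketch of the idea behind that last point - not the actual x86 JIT
prologue code: the skb data/length setup is only emitted when SEEN_DATAREF was
recorded while scanning the filter, so never recording it for a doomed load
also keeps the generated prologue smaller:

	if (seen & SEEN_DATAREF) {
		/* emit the prologue instructions that cache skb->data and the
		 * linear length for the sk_load_* helpers */
	}
]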
Anyway, if you still think it's not good enough - just throw it away ;)
Thanks
Vladimir