From: Al Viro <viro@zeniv.linux.org.uk>
To: Daniel Borkmann <daniel@iogearbox.net>
Cc: "Jean-Philippe Brucker" <jean-philippe@linaro.org>,
"Tom Herbert" <tom@herbertland.com>,
"Björn Töpel" <bjorn.topel@gmail.com>,
"Anders Roxell" <anders.roxell@gmail.com>,
Netdev <netdev@vger.kernel.org>, bpf <bpf@vger.kernel.org>,
linux-riscv <linux-riscv@lists.infradead.org>,
linux-arm-kernel@lists.infradead.org
Subject: Re: csum_partial() on different archs (selftest/bpf)
Date: Fri, 13 Nov 2020 17:28:28 +0000
Message-ID: <20201113172828.GK3576660@ZenIV.linux.org.uk>
In-Reply-To: <f634b437-aa51-736b-e2f3-f6210fc6a711@iogearbox.net>
On Fri, Nov 13, 2020 at 03:32:22PM +0100, Daniel Borkmann wrote:
> > And I would strongly recommend to change the calling conventions of that
> > thing - make it return __sum16. And take __sum16 as well...
> >
> > Again, exposing __wsum to anything that looks like a stable ABI is
> > a mistake - it's an internal detail that can be easily abused,
> > causing unpleasant compat problems.
>
> I'll take a look at both, removing the copying and also wrt not breaking
> existing users for cascading the helper when fixing.

FWIW, see below the patch that sits in the leftovers queue (it didn't make it
into work.csum_and_copy - missed the window - and I haven't gotten around to
dealing with it for -next this cycle yet); it does not fold the result, but it
deals with the rest of that fun.  I would still suggest at least folding the
result - something like

	return csum_fold(csum_sub(csum_partial(from, from_size, 0),
				  csum_partial(to, to_size, seed)));

instead of what this patch does, to guarantee a normalized return value.
Note that the order of csum_sub() arguments here is inverted compared to the
patch below - csum_fold() returns reduced *complement* of its argument, so we
want to give it SUM(from) - SUM(to) - seed, not seed - SUM(from) + SUM(to).
And it's probably a separate followup (adding normalization, that is).
commit 1dd99d9664ec36e9068afb3ca0017c0a43ee420f
Author: Al Viro <viro@zeniv.linux.org.uk>
Date:   Wed Jul 8 00:07:11 2020 -0400

    bpf_csum_diff(): don't bother with scratchpads

    Just call csum_partial() on to and from and use csum_sub().
    No need to bother with copying, inverting, percpu scratchpads,
    etc.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
diff --git a/net/core/filter.c b/net/core/filter.c
index 7124f0fe6974..3e21327b9964 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1635,15 +1635,6 @@ void sk_reuseport_prog_free(struct bpf_prog *prog)
 		bpf_prog_destroy(prog);
 }
 
-struct bpf_scratchpad {
-	union {
-		__be32 diff[MAX_BPF_STACK / sizeof(__be32)];
-		u8     buff[MAX_BPF_STACK];
-	};
-};
-
-static DEFINE_PER_CPU(struct bpf_scratchpad, bpf_sp);
-
 static inline int __bpf_try_make_writable(struct sk_buff *skb,
 					  unsigned int write_len)
 {
@@ -1987,10 +1978,6 @@ static const struct bpf_func_proto bpf_l4_csum_replace_proto = {
 BPF_CALL_5(bpf_csum_diff, __be32 *, from, u32, from_size,
 	   __be32 *, to, u32, to_size, __wsum, seed)
 {
-	struct bpf_scratchpad *sp = this_cpu_ptr(&bpf_sp);
-	u32 diff_size = from_size + to_size;
-	int i, j = 0;
-
 	/* This is quite flexible, some examples:
 	 *
 	 * from_size == 0, to_size > 0, seed := csum --> pushing data
@@ -1999,16 +1986,11 @@ BPF_CALL_5(bpf_csum_diff, __be32 *, from, u32, from_size,
 	 *
 	 * Even for diffing, from_size and to_size don't need to be equal.
 	 */
-	if (unlikely(((from_size | to_size) & (sizeof(__be32) - 1)) ||
-		     diff_size > sizeof(sp->diff)))
+	if (unlikely((from_size | to_size) & (sizeof(__be32) - 1)))
 		return -EINVAL;
-	for (i = 0; i < from_size / sizeof(__be32); i++, j++)
-		sp->diff[j] = ~from[i];
-	for (i = 0; i < to_size / sizeof(__be32); i++, j++)
-		sp->diff[j] = to[i];
-
-	return csum_partial(sp->diff, diff_size, seed);
+	return csum_sub(csum_partial(to, to_size, seed),
+			csum_partial(from, from_size, 0));
 }
 
 static const struct bpf_func_proto bpf_csum_diff_proto = {
	.func		= bpf_csum_diff,