From: Linus Torvalds <torvalds@linux-foundation.org>
To: Ingo Molnar <mingo@kernel.org>
Cc: Tom Herbert <tom@herbertland.com>,
David Miller <davem@davemloft.net>,
Network Development <netdev@vger.kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Peter Anvin <hpa@zytor.com>,
"the arch/x86 maintainers" <x86@kernel.org>,
kernel-team <kernel-team@fb.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: Re: [PATCH v3 net-next] net: Implement fast csum_partial for x86_64
Date: Thu, 4 Feb 2016 17:27:31 -0800
Message-ID: <CA+55aFwGt-ZCna4cEUXP2Qd39etMgJYu6MTyimP38GKV4aK_aA@mail.gmail.com>
In-Reply-To: <CA+55aFwqq6n9OOTNOXMdWN4kJX9iJoP58fZ4KwTc51enxtkqgg@mail.gmail.com>
On Thu, Feb 4, 2016 at 2:09 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> The "+" should be "-", of course - the point is to shift up the value
> by 8 bits for odd cases, and we need to load starting one byte early
> for that. The idea is that we use the byte shifter in the load unit to
> do some work for us.
Ok, so I thought some more about this, and the fact is, we don't
actually want to do the byte shifting at all for the first case (the
"length < 8" case), since the address of that one hasn't been shifted.
It's only for the "we're going to align things to 8 bytes" case that
we would want to do it. But then we might as well use the
rotate_by8_if_odd() model, so I suspect the address games are just
entirely pointless.
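[Editorial note: as a portable illustration of the rotate_by8_if_odd() model, the rotate can be written without inline asm. This is a sketch only; the `_c`-suffixed name is not kernel code.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Portable sketch of the rorq-based helper below: rotate the 64-bit
 * accumulator right by 8 bits iff the alignment is odd.  In
 * ones'-complement arithmetic, a one-byte rotation of the accumulator
 * corresponds to byte-swapping the folded checksum, which is what an
 * odd starting offset needs.  Only bit 0 of 'aligned' matters.
 */
static uint64_t rotate_by8_if_odd_c(uint64_t sum, uint64_t aligned)
{
	unsigned int shift = (aligned & 1) << 3;	/* 0 or 8 bits */

	if (shift)
		sum = (sum >> shift) | (sum << (64 - shift));
	return sum;
}
```

Passing `aligned` straight through (rather than just its low bit) matches the inline-asm version, which also only looks at bit 0 after the `(aligned & 1) << 3` computation.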
So here is something that is actually tested (although admittedly not
well), and uses that fairly simple model.
NOTE! I did not do the unrolling of the "adcq" loop in the middle, but
that's a totally trivial thing now. So this isn't very optimized,
because it will do a *lot* of extra "adcq $0" to get rid of the carry
bit. But with that core loop unrolled, you'd get rid of most of them.
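[Editorial note: to make the unrolling remark concrete, here is a hedged sketch in portable C of what unrolling buys: carries pile up in a wide temporary and are folded once per group instead of once per word. The real x86-64 loop would chain adcq and fold the carry flag; `csum_loop` and `csum_loop_unrolled` are illustrative names, not kernel functions.]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Reference: one end-around-carry add per 8-byte word. */
static uint64_t csum_loop(const unsigned char *p, size_t nwords, uint64_t sum)
{
	while (nwords--) {
		uint64_t w;
		unsigned __int128 t;

		memcpy(&w, p, 8);		/* alignment-safe load */
		t = (unsigned __int128)sum + w;
		sum = (uint64_t)t + (uint64_t)(t >> 64);  /* fold the carry */
		p += 8;
	}
	return sum;
}

/*
 * 4x-unrolled variant: let up to four carries accumulate in a 128-bit
 * temporary and fold them once per group -- the portable analogue of
 * chaining adcq and adding the carry flag back in at the end.
 */
static uint64_t csum_loop_unrolled(const unsigned char *p, size_t nwords,
				   uint64_t sum)
{
	while (nwords >= 4) {
		uint64_t w[4], lo, hi;
		unsigned __int128 t;

		memcpy(w, p, 32);
		t = (unsigned __int128)sum + w[0] + w[1] + w[2] + w[3];
		lo = (uint64_t)t;
		hi = (uint64_t)(t >> 64);	/* at most 4 */
		sum = lo + hi;
		if (sum < lo)
			sum++;		/* carry generated by the fold itself */
		p += 32;
		nwords -= 4;
	}
	return csum_loop(p, nwords, sum);	/* 0-3 word tail */
}
```

Both loops compute the same value mod 2^64-1, so the unrolled version is a drop-in replacement for the simple one.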
Linus
---
static unsigned long rotate_by8_if_odd(unsigned long sum, unsigned long aligned)
{
	asm("rorq %b1,%0"
		:"=r" (sum)
		:"c" ((aligned & 1) << 3), "0" (sum));
	return sum;
}

static unsigned long csum_partial_lt8(unsigned long val, int len,
				      unsigned long sum)
{
	unsigned long mask = (1ul << len*8)-1;
	val &= mask;
	return add64_with_carry(val, sum);
}

static unsigned long csum_partial_64(const void *buff, unsigned long len,
				     unsigned long sum)
{
	unsigned long align, val;

	// This is the only potentially unaligned access, and it can
	// also theoretically overflow into the next page
	val = load_unaligned_zeropad(buff);
	if (len < 8)
		return csum_partial_lt8(val, len, sum);

	align = 7 & -(unsigned long)buff;
	sum = csum_partial_lt8(val, align, sum);
	buff += align;
	len -= align;

	sum = rotate_by8_if_odd(sum, align);
	while (len >= 8) {
		val = *(unsigned long *) buff;
		sum = add64_with_carry(sum, val);
		buff += 8;
		len -= 8;
	}
	sum = csum_partial_lt8(*(unsigned long *)buff, len, sum);
	return rotate_by8_if_odd(sum, align);
}

__wsum csum_partial(const void *buff, unsigned long len, unsigned long sum)
{
	sum = csum_partial_64(buff, len, sum);
	return add32_with_carry(sum, sum >> 32);
}
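[Editorial note: for anyone who wants to sanity-check the structure above, here is a stand-alone portable C rendition, checked against a naive RFC 1071-style reference sum. All helper names here are stand-ins, not kernel functions; memcpy replaces load_unaligned_zeropad and the masked tail word, so this verifies the arithmetic — including the rotate-in/rotate-out bookkeeping — but not the page-crossing load trick. Little-endian is assumed, as on x86-64.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* End-around-carry 64-bit add: stand-in for add64_with_carry. */
static uint64_t eac_add64(uint64_t a, uint64_t b)
{
	unsigned __int128 t = (unsigned __int128)a + b;

	return (uint64_t)t + (uint64_t)(t >> 64);
}

/* Portable stand-in for the rorq helper. */
static uint64_t rot8_if_odd(uint64_t sum, uint64_t aligned)
{
	unsigned int s = (aligned & 1) << 3;

	return s ? (sum >> s) | (sum << (64 - s)) : sum;
}

/* Same shape as csum_partial_64; memcpy replaces the unaligned and
 * masked loads, so reads never touch bytes outside [buff, buff+len). */
static uint64_t csum_partial_64_c(const void *buff, size_t len, uint64_t sum)
{
	const unsigned char *p = buff;
	uint64_t val = 0;
	size_t align;

	memcpy(&val, p, len < 8 ? len : 8);
	if (len < 8)
		return eac_add64(val, sum);

	align = 7 & -(uintptr_t)p;
	val &= align ? (1ull << align * 8) - 1 : 0;	/* low 'align' bytes */
	sum = eac_add64(val, sum);
	p += align;
	len -= align;

	sum = rot8_if_odd(sum, align);
	while (len >= 8) {
		memcpy(&val, p, 8);
		sum = eac_add64(sum, val);
		p += 8;
		len -= 8;
	}
	val = 0;
	memcpy(&val, p, len);				/* masked tail word */
	sum = eac_add64(val, sum);
	return rot8_if_odd(sum, align);
}

/* Fold 64 -> 16 bits, as csum_partial plus the usual final fold would. */
static uint16_t fold16(uint64_t sum)
{
	sum = (sum & 0xffffffffu) + (sum >> 32);
	sum = (sum & 0xffffffffu) + (sum >> 32);
	sum = (sum & 0xffffu) + (sum >> 16);
	sum = (sum & 0xffffu) + (sum >> 16);
	return (uint16_t)sum;
}

/* Naive RFC 1071-style reference: sum 16-bit words (little-endian byte
 * pairing), padding an odd trailing byte with zero. */
static uint16_t ref_csum(const unsigned char *p, size_t len)
{
	uint64_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint64_t)p[i] | ((uint64_t)p[i + 1] << 8);
	if (len & 1)
		sum += p[len - 1];
	return fold16(sum);
}
```

Folding csum_partial_64_c's result and comparing it with ref_csum across all eight starting alignments and a range of lengths is a quick way to convince yourself that rotating the sum on entry and again on exit lands every byte at the right parity in the folded checksum.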