From: Francesco Fusco <ffusco@redhat.com>
To: David Laight <David.Laight@ACULAB.COM>
Cc: Jesse Gross <jesse@nicira.com>, netdev <netdev@vger.kernel.org>,
dev@openvswitch.org, Daniel Borkmann <dborkman@redhat.com>,
Thomas Graf <tgraf@redhat.com>
Subject: Re: [PATCH net-next v2 2/2] net: ovs: use CRC32 accelerated flow hash if available
Date: Fri, 13 Dec 2013 15:53:48 +0100
Message-ID: <52AB1F7C.1050606@redhat.com>
In-Reply-To: <AE90C24D6B3A694183C094C60CF0A2F6026B7487@saturn3.aculab.com>
On 12/13/2013 11:01 AM, David Laight wrote:
> My thoughts exactly.
> Given this is a hash it could crc alternate words into separate
> accumulators and then combine the values at the end.
> That way you are still doing sequential accesses to the data.
> (The crc instruction might be better than an xor for the combine.)
> If the cpu has 3 execution units that can do crc, use them all.
>
> It might be that the hash function is now an insignificant cost.
> Looking at how much hashing the data twice (discarding the first
> result - assign to global volatile data) slows things down can
> help determine this.
On i7 CPUs the crc32 instruction (in its 32- and 64-bit forms) has a
throughput of 1 cycle and a latency of 3 cycles [1]. That means 1) with
the current code, where each crc32 depends on the previous result, we pay
the full 3-cycle latency per instruction, and 2) three independent CRC
chains, each processing one third of the data, could issue back to back
and hide that latency. In theory this gives up to a 3x speedup.
For short keys (~100 bytes or less) there is a chance that the
theoretical 3x speedup will be eaten up by the extra code needed to
compute the lane boundaries, xor the partial results, and so on. But as
I already mentioned, this is something to try.
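
Something along these lines is what I mean. This is an untested
userspace sketch, assuming SSE4.2 and, to keep it short, a key length
that is a multiple of 24 bytes; the name crc32c_3way is just made up
here, and xor-combining the three lanes gives a hash value rather than
a true CRC of the whole buffer:

/*
 * 3-way interleaved CRC32C sketch: three independent dependency
 * chains so the crc32 unit can retire one instruction per cycle
 * instead of stalling on the 3-cycle latency of a single chain.
 */
#include <nmmintrin.h>	/* _mm_crc32_u64, needs -msse4.2 */
#include <stdint.h>
#include <stddef.h>

static uint32_t crc32c_3way(const void *data, size_t len, uint32_t seed)
{
	const uint64_t *p = data;
	size_t words = len / 24;	/* 64-bit words per lane */
	uint64_t c0 = seed, c1 = seed, c2 = seed;
	size_t i;

	for (i = 0; i < words; i++) {
		c0 = _mm_crc32_u64(c0, p[i]);
		c1 = _mm_crc32_u64(c1, p[words + i]);
		c2 = _mm_crc32_u64(c2, p[2 * words + i]);
	}

	/* Naive xor combine: fine for a flow hash, not a real CRC. */
	return (uint32_t)c0 ^ (uint32_t)c1 ^ (uint32_t)c2;
}

In the kernel the same idea would of course have to sit behind a CPU
feature check, with a fallback for CPUs without SSE4.2 and handling of
the tail bytes that do not fill all three lanes.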
[1] http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-paper.pdf
Thread overview: 12+ messages
2013-12-12 15:09 [PATCH net-next v2 0/2] ovs: introduce arch-specific fast hashing improvements Francesco Fusco
2013-12-12 15:09 ` [PATCH net-next v2 1/2] lib: introduce arch optimized hash library Francesco Fusco
2013-12-12 17:54 ` Nicolas Dichtel
2013-12-12 18:04 ` Daniel Borkmann
2013-12-17 19:28 ` [PATCH net-next v2 0/2] ovs: introduce arch-specific fast hashing improvements David Miller
2013-12-12 15:09 ` [PATCH net-next v2 2/2] net: ovs: use CRC32 accelerated flow hash if available Francesco Fusco
2013-12-12 20:20 ` Jesse Gross
2013-12-13 9:55 ` Francesco Fusco
2013-12-13 21:12 ` Jesse Gross
2013-12-13 10:01 ` David Laight
2013-12-13 14:53 ` Francesco Fusco [this message]
2013-12-12 21:12 ` [PATCH net-next v2 0/2] ovs: introduce arch-specific fast hashing improvements David Miller