From: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, caitlinb@broadcom.com, kelly@au1.ibm.com,
rusty@rustcorp.com.au, johnpol@2ka.mipt.ru
Subject: Initial benchmarks of some VJ ideas [mmap memcpy vs copy_to_user].
Date: Mon, 8 May 2006 16:24:22 +0400 [thread overview]
Message-ID: <20060508122418.GA22554@2ka.mipt.ru> (raw)
[-- Attachment #1: Type: text/plain, Size: 1668 bytes --]
I hope he does not take offence at name shortening :)
I've slightly modified the UDP receive path and run several benchmarks
for the following cases:
1. Pure recvfrom() using copy_to_user(), with 4k and 40k buffers.
2. recvfrom() remains the same, but instead of copy_to_user(), skb->data
is copied with memcpy() into a kernel buffer that can be mapped into
userspace, with 4k and 40k buffers.
3. recvfrom() remains the same, but no data is copied at all; only the
iovec pointer is advanced and its length decreased.
The receiver is a simple single-threaded userspace application that does
blocking reads from a UDP socket with default socket/stack parameters.
The receiver runs on a 2.4 GHz Xeon (HT enabled) with 1GB of RAM and an
e1000 gigabit NIC. The sender runs on an amd64 nvidia nforce4 board with
1GB of RAM and an r8169 NIC. The machines are connected through a D-Link
DGS-1216T gigabit switch.
Performance graph attached.
Conclusions:
At least in the UDP case with a 1gbit NIC, throughput did not increase,
but that could be due to either NIC limitations (I do not trust the
nvidia and/or realtek hardware) or a broken sender application.
So the only observable result here is the change in CPU usage:
it dropped by 30% for the copy_to_user() -> memcpy() change with 40k
buffers. 4k buffers are too small to show any change, since syscall
overhead dominates.
Even if we translate the CPU savings into network speed, we still
cannot get a 6x (or even 2x) performance gain.
Granted, TCP processing is much more costly, the e1000 interrupt handler
is big, and there are a lot of context switches and other
cache-unfriendly and locking overheads, but I still
wonder where the 6x (!) performance gain lives.
--
Evgeniy Polyakov
[-- Attachment #2: netchannel_speed.png --]
[-- Type: image/png, Size: 9120 bytes --]
Thread overview: 11+ messages
2006-05-08 12:24 Evgeniy Polyakov [this message]
2006-05-08 19:51 ` Initial benchmarks of some VJ ideas [mmap memcpy vs copy_to_user] Evgeniy Polyakov
2006-05-08 20:15 ` David S. Miller
2006-05-10 19:58 ` David S. Miller
2006-05-11 6:40 ` Evgeniy Polyakov
2006-05-11 7:07 ` David S. Miller
2006-05-11 8:30 ` Evgeniy Polyakov
2006-05-11 16:18 ` Evgeniy Polyakov
2006-05-11 18:54 ` David S. Miller
2006-05-11 19:30 ` Rick Jones
2006-05-12 7:54 ` Evgeniy Polyakov