From: jszhang@marvell.com (Jisheng Zhang)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH net-next v3 0/4] net: mvneta: improve rx/tx performance
Date: Mon, 20 Feb 2017 20:53:40 +0800
Message-ID: <20170220125344.3555-1-jszhang@marvell.com>
In hot code paths such as mvneta_rx_swbm(), we access fields of rx_desc
and tx_desc. These DMA descriptors are allocated with dma_alloc_coherent(),
so they are uncacheable if the device isn't cache coherent, and reading
from uncached memory is fairly slow.
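Roughly, the series moves from re-reading such uncached descriptor fields
to reading them once into a cacheable local variable. A simplified sketch
of the rx status case (illustrative only, not the exact diff):

        /* before: each check re-reads the uncached descriptor */
        if (!(rx_desc->status & MVNETA_RXD_FIRST_LAST_DESC) ||
            (rx_desc->status & MVNETA_RXD_ERR_SUMMARY)) {
                mvneta_rx_error(pp, rx_desc);   /* and drop the frame */
                continue;
        }

        /* after: one uncached read, then only the local copy is used */
        u32 rx_status = rx_desc->status;

        if (!(rx_status & MVNETA_RXD_FIRST_LAST_DESC) ||
            (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
                mvneta_rx_error(pp, rx_desc);   /* and drop the frame */
                continue;
        }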
patch1 reuses the status value we have already read out, so the status
field of rx_desc isn't read again.
patch2 avoids reading buf_phys_addr from rx_desc again in
mvneta_rx_hwbm() by reusing the phys_addr variable.
patch3 avoids reading from tx_desc as much as possible by storing what
we need in local variables.
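The tx side follows the same idea; a simplified, hypothetical sketch
(field names follow the driver, but this is not the exact diff):

        int len = skb_headlen(skb);
        dma_addr_t buf_phys_addr;

        buf_phys_addr = dma_map_single(pp->dev->dev.parent, skb->data,
                                       len, DMA_TO_DEVICE);

        /* write the uncached descriptor once ... */
        tx_desc->data_size = len;
        tx_desc->buf_phys_addr = buf_phys_addr;

        /* ... and later use the local copies, e.g. in the error path,
         * instead of reading the fields back from uncached memory */
        dma_unmap_single(pp->dev->dev.parent, buf_phys_addr, len,
                         DMA_TO_DEVICE);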
We get the following performance data on Marvell BG4CT platforms
(tested with iperf):
before the patch:
sending 1GB in mvneta_tx() (TSO disabled) costs 793553760 ns
after the patch:
sending 1GB in mvneta_tx() (TSO disabled) costs 719953800 ns
We saved 9.2% of the time.
patch4 uses cacheable memory to store the rx buffer DMA address.
We get the following performance data on Marvell BG4CT platforms
(tested with iperf):
before the patch:
receiving 1GB in mvneta_rx_swbm() costs 1492659600 ns
after the patch:
receiving 1GB in mvneta_rx_swbm() costs 1421565640 ns
We saved 4.76% of the time.
Basically, patch1 and patch4 do what Arnd mentioned in [1].
Hi Arnd,
I added a Suggested-by tag for you, I hope you don't mind ;)
Thanks
[1] https://www.spinics.net/lists/netdev/msg405889.html
Since v2:
- add Gregory's ack to patch1
- only get rx buffer DMA address from cacheable memory for mvneta_rx_swbm()
- add patch 2 to read rx_desc->buf_phys_addr once in mvneta_rx_hwbm()
- add patch 3 to avoid reading from tx_desc as much as possible
Since v1:
- correct the performance data typo
Jisheng Zhang (4):
net: mvneta: avoid getting status from rx_desc as much as possible
net: mvneta: avoid getting buf_phys_addr from rx_desc again
net: mvneta: avoid reading from tx_desc as much as possible
net: mvneta: Use cacheable memory to store the rx buffer DMA address
drivers/net/ethernet/marvell/mvneta.c | 80 +++++++++++++++++++----------------
1 file changed, 43 insertions(+), 37 deletions(-)
--
2.11.0
Thread overview: 10+ messages
2017-02-20 12:53 Jisheng Zhang [this message]
2017-02-20 12:53 ` [PATCH net-next v3 1/4] net: mvneta: avoid getting status from rx_desc as much as possible Jisheng Zhang
2017-02-20 12:53 ` [PATCH net-next v3 2/4] net: mvneta: avoid getting buf_phys_addr from rx_desc again Jisheng Zhang
2017-02-20 12:53 ` [PATCH net-next v3 3/4] net: mvneta: avoid reading from tx_desc as much as possible Jisheng Zhang
2017-02-20 12:53 ` [PATCH net-next v3 4/4] net: mvneta: Use cacheable memory to store the rx buffer DMA address Jisheng Zhang
2017-02-20 14:21 ` [PATCH net-next v3 0/4] net: mvneta: improve rx/tx performance Gregory CLEMENT
2017-02-21 4:37 ` Jisheng Zhang
2017-02-21 16:16 ` David Miller
2017-02-21 16:35 ` Marcin Wojtas
2017-02-24 11:56 ` Jisheng Zhang