netdev.vger.kernel.org archive mirror
From: David Miller <davem@davemloft.net>
To: manish.chopra@cavium.com
Cc: netdev@vger.kernel.org, ariel.elior@cavium.com,
	michal.kalderon@cavium.com
Subject: Re: [PATCH net-next 1/1] qede: Add build_skb() support.
Date: Thu, 17 May 2018 17:08:20 -0400 (EDT)	[thread overview]
Message-ID: <20180517.170820.1966715619269826905.davem@davemloft.net> (raw)
In-Reply-To: <20180517190500.24814-1-manish.chopra@cavium.com>

From: Manish Chopra <manish.chopra@cavium.com>
Date: Thu, 17 May 2018 12:05:00 -0700

> This patch uses build_skb() throughout the driver's receive
> data path [HW GRO flow and non-HW GRO flow]. With this, the driver
> can build the skb directly from the page segments that are already
> mapped to the hardware, instead of allocating a new SKB via
> netdev_alloc_skb() and memcpying the data, which is quite costly.
> 
> This really improves performance (keeping the same or a slight gain
> in rx throughput) in terms of CPU utilization, which is significantly
> reduced [almost halved] in the non-HW GRO flow, where the driver
> previously had to allocate an skb, memcpy headers, etc. for every
> incoming MTU-sized packet. Additionally, that flow also gets rid of
> a bunch of other overheads [eth_get_headlen() etc.] needed to split
> headers and data in the skb.
> 
> Tested with:
> system: 2 sockets, 4 cores per socket, hyperthreading, 2x4x2=16 cores
> iperf [server]: iperf -s
> iperf [client]: iperf -c <server_ip> -t 500 -i 10 -P 32
> 
> HW GRO off – w/o build_skb(), throughput: 36.8 Gbits/sec
> 
> Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
> Average:     all    0.59    0.00   32.93    0.00    0.00   43.07    0.00    0.00   23.42
> 
> HW GRO off - with build_skb(), throughput: 36.9 Gbits/sec
> 
> Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
> Average:     all    0.70    0.00   31.70    0.00    0.00   25.68    0.00    0.00   41.92
> 
> HW GRO on - w/o build_skb(), throughput: 36.9 Gbits/sec
> 
> Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
> Average:     all    0.86    0.00   24.14    0.00    0.00    6.59    0.00    0.00   68.41
> 
> HW GRO on - with build_skb(), throughput: 37.5 Gbits/sec
> 
> Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
> Average:     all    0.87    0.00   23.75    0.00    0.00    6.19    0.00    0.00   69.19
> 
> Signed-off-by: Ariel Elior <ariel.elior@cavium.com>
> Signed-off-by: Manish Chopra <manish.chopra@cavium.com>

Looks great, applied, thank you.
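For context, the tradeoff the commit message describes can be sketched in a minimal userspace analogy (this is not the qede code; `fake_skb`, `rx_copy_path`, and `rx_build_path` are hypothetical stand-ins). The old path allocates a fresh buffer and memcpys the received bytes into it, analogous to netdev_alloc_skb() plus memcpy; the new path wraps the already-mapped buffer in a descriptor with no per-packet copy, analogous to build_skb():

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for struct sk_buff: just a pointer and length. */
struct fake_skb {
	unsigned char *data;
	size_t len;
	int owns_data;		/* 1 if data must be freed along with the skb */
};

/* Old path: allocate a new buffer and copy the received bytes into it,
 * analogous to netdev_alloc_skb() + memcpy in the pre-patch driver. */
static struct fake_skb *rx_copy_path(const unsigned char *dma_buf, size_t len)
{
	struct fake_skb *skb = malloc(sizeof(*skb));

	skb->data = malloc(len);
	memcpy(skb->data, dma_buf, len);	/* the per-packet copy cost */
	skb->len = len;
	skb->owns_data = 1;
	return skb;
}

/* New path: build the skb directly around the already-mapped buffer,
 * analogous to build_skb(); no data-buffer allocation and no copy. */
static struct fake_skb *rx_build_path(unsigned char *dma_buf, size_t len)
{
	struct fake_skb *skb = malloc(sizeof(*skb));

	skb->data = dma_buf;			/* reuse the page in place */
	skb->len = len;
	skb->owns_data = 0;
	return skb;
}
```

In the kernel, the real build_skb() takes the mapped buffer plus a frag size, and the buffer must leave tailroom for struct skb_shared_info, which is why drivers using it size their rx buffers accordingly.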


Thread overview: 2+ messages
2018-05-17 19:05 [PATCH net-next 1/1] qede: Add build_skb() support Manish Chopra
2018-05-17 21:08 ` David Miller [this message]
