From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Cc: brouer@redhat.com, "Saeed Mahameed" <saeedm@mellanox.com>,
	"Matteo Croce" <mcroce@redhat.com>,
	"Tariq Toukan" <tariqt@mellanox.com>,
	"Toke Høiland-Jørgensen" <toke@redhat.com>,
	"Jonathan Lemon" <jonathan.lemon@gmail.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Created benchmark modules for page_pool
Date: Tue, 21 Jan 2020 17:09:45 +0100
Message-ID: <20200121170945.41e58f32@carbon>

Hi Ilias and Lorenzo, (Cc others + netdev)

I've created two benchmark modules for page_pool:

[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
[2] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_cross_cpu.c

I think we/you could use these results as part of your presentation[3].

The first benchmark[1] illustrates/measures what happens when
page_pool alloc and free/return happen on the same CPU.  There are 3
modes of operation, each with a different performance characteristic
(a minimal sketch of the fast-path mode follows the numbers below).

Fast_path NAPI recycle (XDP_DROP use-case)
 - cost per elem: 15 cycles(tsc) 4.437 ns

Recycle via ptr_ring
 - cost per elem: 48 cycles(tsc) 13.439 ns

Failed recycle, return to page-allocator
 - cost per elem: 256 cycles(tsc) 71.169 ns
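
For illustration, the fast-path mode boils down to a loop like the
sketch below.  This is a minimal sketch, not the module code: the
real module[1] uses the time_bench framework from prototype-kernel,
while this just reads the TSC directly, and the function name
bench_napi_recycle is made up here.

/* Sketch: alloc + direct recycle on the same CPU, like XDP_DROP
 * would do from inside a NAPI poll loop.
 */
#include <linux/gfp.h>
#include <linux/timex.h>	/* get_cycles() */
#include <net/page_pool.h>

static u64 bench_napi_recycle(struct page_pool *pp, unsigned int loops)
{
	u64 start, stop;
	unsigned int i;

	start = get_cycles();
	for (i = 0; i < loops; i++) {
		struct page *page = page_pool_alloc_pages(pp, GFP_ATOMIC);

		if (unlikely(!page))
			break;
		/* "Direct" return into the per-CPU pp->alloc cache;
		 * only valid from the pool's NAPI/softirq context.
		 */
		page_pool_recycle_direct(pp, page);
	}
	stop = get_cycles();

	return stop - start;	/* TSC cycles for 'loops' alloc+recycle pairs */
}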


The second benchmark[2] measures what happens cross-CPU.  It is
primarily the concurrent return-path that I want to capture, as this
is page_pool's weak spot, whose performance we need to improve.
Hint: once SKBs use page_pool recycling, cross-CPU returns will
happen more often.  It is a little more tricky to get a proper
measurement, as we want to observe the case where the return-path
isn't stalling/waiting on pages to return.  (A sketch of a concurrent
returner follows the numbers below.)

- 1 CPU returning  , cost per elem: 110 cycles(tsc)   30.709 ns
- 2 concurrent CPUs, cost per elem: 989 cycles(tsc)  274.861 ns
- 3 concurrent CPUs, cost per elem: 2089 cycles(tsc) 580.530 ns
- 4 concurrent CPUs, cost per elem: 2339 cycles(tsc) 649.984 ns
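
To give an idea of the setup: conceptually, each concurrent returner
in [2] is a kthread pinned to a remote CPU, returning pages with
allow_direct=false so they travel through the ptr_ring.  A rough
sketch (names like pp_returner_fn and struct ret_work are made up
here; error handling and the logic that keeps returners from
stalling are omitted, and the page_pool_put_page() signature may
differ on other kernel versions):

#include <linux/kthread.h>
#include <linux/printk.h>
#include <linux/timex.h>
#include <net/page_pool.h>

struct ret_work {
	struct page_pool *pp;
	struct page **pages;	/* pre-allocated by the producer CPU */
	unsigned int count;
};

static int pp_returner_fn(void *arg)
{
	struct ret_work *w = arg;
	u64 start = get_cycles();
	unsigned int i;

	for (i = 0; i < w->count; i++) {
		/* Not the pool's NAPI context, so allow_direct=false:
		 * the page goes through the ptr_ring, the concurrent
		 * path whose cost explodes in the numbers above.
		 */
		page_pool_put_page(w->pp, w->pages[i], false);
	}
	pr_info("returned %u pages in %llu cycles\n",
		w->count, get_cycles() - start);
	return 0;
}

/* One returner per remote CPU:
 *	t = kthread_create(pp_returner_fn, w, "pp_ret/%u", cpu);
 *	kthread_bind(t, cpu);
 *	wake_up_process(t);
 */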

[3] https://netdevconf.info/0x14/session.html?tutorial-add-XDP-support-to-a-NIC-driver
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

