From: Tariq Toukan <ttoukan.linux@gmail.com>
To: Eric Dumazet <edumazet@google.com>,
"David S . Miller" <davem@davemloft.net>
Cc: netdev <netdev@vger.kernel.org>,
Tariq Toukan <tariqt@mellanox.com>,
Saeed Mahameed <saeedm@mellanox.com>,
Willem de Bruijn <willemb@google.com>,
Alexei Starovoitov <ast@kernel.org>,
Eric Dumazet <eric.dumazet@gmail.com>,
Alexander Duyck <alexander.duyck@gmail.com>
Subject: Re: [PATCH v2 net-next] mlx4: Better use of order-0 pages in RX path
Date: Wed, 15 Mar 2017 17:36:12 +0200 [thread overview]
Message-ID: <60f6dc92-511d-b7be-64d2-2532e112d845@gmail.com> (raw)
In-Reply-To: <20170314151143.16231-1-edumazet@google.com>
On 14/03/2017 5:11 PM, Eric Dumazet wrote:
> When I added order-0 page allocations and page recycling to the receive
> path, I introduced issues on PowerPC, or more generally on arches with
> large pages.
>
> A GRO packet, aggregating 45 segments, ended up using 45 page frags
> on 45 different pages. Before my changes we were very likely packing
> up to 42 Ethernet frames per 64KB page.
>
> 1) At skb freeing time, the put_page() calls on the skb frags now touch 45
> different 'struct page' instances, which adds more cache line misses.
> Too bad that standard Ethernet MTU is so small :/
>
> 2) Using one order-0 page per ring slot consumes ~42 times more memory
> on PowerPC.
>
> 3) Allocating order-0 pages is very likely to use pages from very
> different locations, increasing TLB pressure on hosts with more
> than 256 GB of memory after days of uptime.
>
> This patch uses a refined strategy, addressing these points.
>
> We still use order-0 pages, but the page recycling technique is modified
> so that we have better chances of lowering the number of pages containing
> the frags of a given GRO skb (a factor of 2 on x86, and 21 on PowerPC).
>
> Page allocations are split into two halves:
> - One currently visible to the NIC for DMA operations.
> - The other contains pages that were already attached to old skbs,
>   held in a quarantine.
>
> When we receive a frame, we look at the oldest entry in the pool and
> check if the page count is back to one, meaning old skbs/frags were
> consumed and the page can be recycled.
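The recycle check described above can be sketched as follows. This is a simplified model with hypothetical names, not the actual driver code: `struct fake_page` with an int refcount stands in for `struct page` and `page_count()`, and `POOL_SIZE` is arbitrary.

```c
#include <stddef.h>

#define POOL_SIZE 8

struct fake_page { int refcount; };

struct rx_pool {
	struct fake_page *slots[POOL_SIZE];
	size_t head;	/* oldest entry, next candidate for recycling */
};

/* Return the oldest page if its refcount dropped back to one
 * (all old skb frags were consumed), else NULL so the caller
 * knows it must allocate a fresh page instead. */
static struct fake_page *pool_try_recycle(struct rx_pool *pool)
{
	struct fake_page *p = pool->slots[pool->head];

	if (p && p->refcount == 1) {
		pool->slots[pool->head] = NULL;
		pool->head = (pool->head + 1) % POOL_SIZE;
		return p;
	}
	return NULL;
}
```

The key property is that only the oldest quarantined entry is examined, so the check is O(1) per received frame.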
>
> Page allocations are attempted with high orders first, trying
> to lower TLB pressure. We remember in ring->rx_alloc_order the last
> attempted order and quickly decrement it on failure.
> Then mlx4_en_recover_from_oom(), called every 250 msec, will attempt
> to gradually restore rx_alloc_order to its optimal value.
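The adaptive-order scheme above can be sketched like this. All names here are hypothetical: `alloc_ok()` stands in for an `alloc_pages()` attempt, and `recover_from_oom()` models the role of mlx4_en_recover_from_oom(); this is not the patch's code.

```c
#include <stdbool.h>

static int rx_alloc_order;		/* last attempted order */
static const int rx_pref_order = 3;	/* optimal order for this ring */

/* Stub for alloc_pages(): succeeds only up to the highest order
 * the (simulated) allocator can currently satisfy. */
static bool alloc_ok(int order, int highest_avail)
{
	return order <= highest_avail;
}

/* Try the remembered order first, quickly backing off under
 * memory pressure; remember whatever order finally worked. */
static int try_alloc(int highest_avail)
{
	int order = rx_alloc_order;

	while (order >= 0) {
		if (alloc_ok(order, highest_avail)) {
			rx_alloc_order = order;
			return order;
		}
		order--;
	}
	return -1;
}

/* Called periodically (every 250 msec in the patch): raise the
 * order back toward the optimum one step at a time. */
static void recover_from_oom(void)
{
	if (rx_alloc_order < rx_pref_order)
		rx_alloc_order++;
}
```

Backing off fast but recovering slowly avoids hammering the allocator with doomed high-order requests while still regaining low TLB pressure once memory is defragmented.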
>
> On x86, memory allocations stay the same. (One page per RX slot for MTU=1500)
> But on PowerPC, this patch considerably reduces the allocated memory.
>
> Performance gain on PowerPC is about 50% for a single TCP flow.
>
> On x86, I could not measure the difference, my test machine being
> limited by the sender (33 Gbit per TCP flow).
> 22 fewer cache line misses per 64 KB GRO packet probably amounts to
> 2 % or so.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Tariq Toukan <tariqt@mellanox.com>
> Cc: Saeed Mahameed <saeedm@mellanox.com>
> Cc: Alexander Duyck <alexander.duyck@gmail.com>
> ---
> drivers/net/ethernet/mellanox/mlx4/en_rx.c | 470 ++++++++++++++++-----------
> drivers/net/ethernet/mellanox/mlx4/en_tx.c | 15 +-
> drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 54 ++-
> 3 files changed, 317 insertions(+), 222 deletions(-)
>
Hi Eric,
Thanks for your patch.
I will run the XDP tests and complete the review by tomorrow.
Regards,
Tariq
Thread overview: 16+ messages
2017-03-14 15:11 [PATCH v2 net-next] mlx4: Better use of order-0 pages in RX path Eric Dumazet
2017-03-15 4:06 ` Alexei Starovoitov
2017-03-15 13:21 ` Eric Dumazet
2017-03-15 23:06 ` Alexei Starovoitov
2017-03-15 23:34 ` Eric Dumazet
2017-03-16 0:44 ` Alexei Starovoitov
2017-03-16 1:07 ` Eric Dumazet
2017-03-16 1:10 ` Eric Dumazet
2017-03-16 1:56 ` Alexei Starovoitov
2017-03-16 2:48 ` Eric Dumazet
2017-03-16 5:39 ` Alexei Starovoitov
2017-03-16 12:00 ` Eric Dumazet
2017-03-15 15:36 ` Tariq Toukan [this message]
2017-03-15 16:27 ` Eric Dumazet
2017-03-20 12:59 ` Tariq Toukan
2017-03-20 13:04 ` Eric Dumazet