From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Matteo Croce <mcroce@redhat.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>,
netdev <netdev@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Lorenzo Bianconi <lorenzo@kernel.org>,
Maxime Chevallier <maxime.chevallier@bootlin.com>,
Antoine Tenart <antoine.tenart@bootlin.com>,
Luka Perkov <luka.perkov@sartura.hr>,
Tomislav Tomasic <tomislav.tomasic@sartura.hr>,
Marcin Wojtas <mw@semihalf.com>,
Stefan Chulski <stefanc@marvell.com>,
Nadav Haklai <nadavh@marvell.com>,
brouer@redhat.com
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support
Date: Tue, 24 Dec 2019 15:04:45 +0100 [thread overview]
Message-ID: <20191224150445.2d6ab982@carbon> (raw)
In-Reply-To: <CAGnkfhzrSaVe3zJ+0rriqqELha554Gmv-zskrJbiBjhHdUG2uQ@mail.gmail.com>
On Tue, 24 Dec 2019 14:34:07 +0100
Matteo Croce <mcroce@redhat.com> wrote:
> On Tue, Dec 24, 2019 at 10:52 AM Ilias Apalodimas
> <ilias.apalodimas@linaro.org> wrote:
> >
> > On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> > > These patches change the memory allocator of mvpp2 from the frag allocator to
> > > the page_pool API. This change is needed to later add XDP support to mvpp2.
> > >
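
For context, the frag allocator referred to here hands out sub-page chunks
from a per-CPU shared page. A rough sketch of that RX buffer cycle
(illustrative only, not mvpp2's exact code):

/* Rough sketch of the frag-allocator RX buffer cycle being replaced;
 * illustrative only, not mvpp2's exact code.
 */
static void *rx_buf_alloc_frag(unsigned int frag_size)
{
	/* Sub-page chunk carved out of a per-CPU shared page; cheap as
	 * long as that page still has free space and live references.
	 */
	return netdev_alloc_frag(frag_size);
}

static struct sk_buff *rx_build_skb(void *buf, unsigned int frag_size)
{
	/* Wrap the fragment in an skb without copying the payload */
	return build_skb(buf, frag_size);
}

static void rx_buf_free(void *buf)
{
	/* Drop the fragment's reference on the shared page; this is
	 * what shows up as page_frag_free in the profile below.
	 */
	skb_free_frag(buf);
}
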
> > > The reason I'm sending it as an RFC is that with this changeset, mvpp2
> > > performs much slower. This is the tc drop rate measured with a single flow:
> > >
> > > stock net-next with frag allocator:
> > > rx: 900.7 Mbps 1877 Kpps
> > >
> > > this patchset with page_pool:
> > > rx: 423.5 Mbps 882.3 Kpps
> > >
> > > This is the perf top when receiving traffic:
> > >
> > > 27.68% [kernel] [k] __page_pool_clean_page
> >
> > This seems extremely high on the list.
> >
> > > 9.79% [kernel] [k] get_page_from_freelist
> > > 7.18% [kernel] [k] free_unref_page
> > > 4.64% [kernel] [k] build_skb
> > > 4.63% [kernel] [k] __netif_receive_skb_core
> > > 3.83% [mvpp2] [k] mvpp2_poll
> > > 3.64% [kernel] [k] eth_type_trans
> > > 3.61% [kernel] [k] kmem_cache_free
> > > 3.03% [kernel] [k] kmem_cache_alloc
> > > 2.76% [kernel] [k] dev_gro_receive
> > > 2.69% [mvpp2] [k] mvpp2_bm_pool_put
> > > 2.68% [kernel] [k] page_frag_free
> > > 1.83% [kernel] [k] inet_gro_receive
> > > 1.74% [kernel] [k] page_pool_alloc_pages
> > > 1.70% [kernel] [k] __build_skb
> > > 1.47% [kernel] [k] __alloc_pages_nodemask
> > > 1.36% [mvpp2] [k] mvpp2_buf_alloc.isra.0
> > > 1.29% [kernel] [k] tcf_action_exec
> > >
> > > I tried Ilias' patches for page_pool recycling and I get an improvement
> > > to ~1100 Kpps, but I'm still far from the original allocator.
> >
> > Can you post the recycling perf for comparison?
> >
>
> 12.00% [kernel] [k] get_page_from_freelist
> 9.25% [kernel] [k] free_unref_page
Hmm, this indicates pages are not getting recycled: with free_unref_page
and get_page_from_freelist this high, pages are going back to the page
allocator and being re-allocated for every packet.
> 6.83% [kernel] [k] eth_type_trans
> 5.33% [kernel] [k] __netif_receive_skb_core
> 4.96% [mvpp2] [k] mvpp2_poll
> 4.64% [kernel] [k] kmem_cache_free
> 4.06% [kernel] [k] __xdp_return
You do invoke the __xdp_return() code path, but it might find that the page
cannot be recycled...
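
Roughly, the put/return path only recycles when the pool is the sole owner
of the page; a simplified sketch of that decision (the two recycle_*()
helpers are paraphrased names, not the exact upstream symbols):

/* Stand-ins for the pool's internal cache/ring operations (paraphrased,
 * not upstream symbols); each returns true if the page was recycled.
 */
static bool recycle_into_softirq_cache(struct page_pool *pool, struct page *page);
static bool recycle_into_ptr_ring(struct page_pool *pool, struct page *page);

/* Simplified sketch of the page_pool put/return decision */
static void sketch_page_pool_put_page(struct page_pool *pool,
				      struct page *page, bool allow_direct)
{
	/* Only recycle when the pool is the sole owner of the page.
	 * An elevated refcount (e.g. an skb frag still holding it)
	 * means the page cannot be reused.
	 */
	if (likely(page_ref_count(page) == 1)) {
		if (allow_direct && recycle_into_softirq_cache(pool, page))
			return;		/* lockless per-CPU cache */
		if (recycle_into_ptr_ring(pool, page))
			return;		/* ring shared with the alloc side */
	}

	/* Fallback: clean up DMA state and give the page back to the
	 * page allocator. This is the path that shows up above as
	 * __page_pool_clean_page, free_unref_page and, on the next
	 * allocation, get_page_from_freelist.
	 */
	__page_pool_clean_page(pool, page);
	put_page(page);
}
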
> 3.60% [kernel] [k] kmem_cache_alloc
> 3.31% [kernel] [k] dev_gro_receive
> 3.29% [kernel] [k] __page_pool_clean_page
> 3.25% [mvpp2] [k] mvpp2_bm_pool_put
> 2.73% [kernel] [k] __page_pool_put_page
> 2.33% [kernel] [k] __alloc_pages_nodemask
> 2.33% [kernel] [k] inet_gro_receive
> 2.05% [kernel] [k] __build_skb
> 1.95% [kernel] [k] build_skb
> 1.89% [cls_matchall] [k] mall_classify
> 1.83% [kernel] [k] page_pool_alloc_pages
> 1.80% [kernel] [k] tcf_action_exec
> 1.70% [mvpp2] [k] mvpp2_buf_alloc.isra.0
> 1.63% [kernel] [k] free_unref_page_prepare.part.0
> 1.45% [kernel] [k] page_pool_return_skb_page
> 1.42% [act_gact] [k] tcf_gact_act
> 1.16% [kernel] [k] netif_receive_skb_list_internal
> 1.08% [kernel] [k] kfree_skb
> 1.07% [kernel] [k] skb_release_data
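
For completeness, here is a minimal sketch of the alloc/recycle cycle the
page_pool API expects from a driver (illustrative placeholders, not mvpp2
code). When recycling works, the put side feeds the alloc side and the
page allocator should largely drop out of profiles like the ones above:

#include <linux/dma-mapping.h>
#include <linux/numa.h>
#include <net/page_pool.h>

/* Illustrative RX setup/refill/drop cycle; rx_pool_create() and friends
 * are placeholder names, not mvpp2 symbols.
 */
static struct page_pool *rx_pool_create(struct device *dev, int rxq_size)
{
	struct page_pool_params pp = {
		.order		= 0,
		.pool_size	= rxq_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.flags		= PP_FLAG_DMA_MAP,	/* pool handles DMA mapping */
	};

	return page_pool_create(&pp);
}

/* RX refill: pages come from the pool's cache/ring when recycling works,
 * falling back to the page allocator only when the pool runs dry.
 */
static struct page *rx_refill(struct page_pool *pool)
{
	return page_pool_dev_alloc_pages(pool);
}

/* Drop path inside the NAPI poll loop: hand the page straight back to the
 * pool's in-softirq cache instead of the page allocator.
 */
static void rx_drop(struct page_pool *pool, struct page *page)
{
	page_pool_recycle_direct(pool, page);
}
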
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer