From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Matteo Croce <mcroce@redhat.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>,
	netdev <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Maxime Chevallier <maxime.chevallier@bootlin.com>,
	Antoine Tenart <antoine.tenart@bootlin.com>,
	Luka Perkov <luka.perkov@sartura.hr>,
	Tomislav Tomasic <tomislav.tomasic@sartura.hr>,
	Marcin Wojtas <mw@semihalf.com>,
	Stefan Chulski <stefanc@marvell.com>,
	Nadav Haklai <nadavh@marvell.com>
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support
Date: Fri, 27 Dec 2019 13:51:56 +0200
Message-ID: <20191227115156.GA29682@apalos.home>
In-Reply-To: <CAGnkfhzYdqBPvRM8j98HVMzeHSbJ8RyVH+nLpoKBuz2iqErPog@mail.gmail.com>

On Tue, Dec 24, 2019 at 03:37:49PM +0100, Matteo Croce wrote:
> On Tue, Dec 24, 2019 at 3:01 PM Jesper Dangaard Brouer
> <brouer@redhat.com> wrote:
> >
> > On Tue, 24 Dec 2019 11:52:29 +0200
> > Ilias Apalodimas <ilias.apalodimas@linaro.org> wrote:
> >
> > > On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> > > > These patches change the memory allocator of mvpp2 from the frag allocator to
> > > > the page_pool API. This change is needed to later add XDP support to mvpp2.
> > > >
> > > > The reason I'm sending this as an RFC is that with this changeset, mvpp2
> > > > performs much slower. This is the tc drop rate measured with a single flow:
> > > >
> > > > stock net-next with frag allocator:
> > > > rx: 900.7 Mbps 1877 Kpps
> > > >
> > > > this patchset with page_pool:
> > > > rx: 423.5 Mbps 882.3 Kpps
> > > >
> > > > This is the perf top when receiving traffic:
> > > >
> > > >   27.68%  [kernel]            [k] __page_pool_clean_page
> > >
> > > This seems extremely high on the list.
> >
> > This looks related to the cost of the DMA unmap, as page_pool has
> > PP_FLAG_DMA_MAP set. (It is a little strange, as page_pool uses
> > DMA_ATTR_SKIP_CPU_SYNC, which should make the unmap less expensive.)
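For context: the mapping that __page_pool_clean_page undoes is taken once at
allocation time when the pool is created with PP_FLAG_DMA_MAP. A minimal setup
sketch follows; the device pointer, pool size and DMA direction are placeholder
assumptions, not values from the mvpp2 patches:

#include <net/page_pool.h>

/* Minimal page_pool setup sketch. The pool size and DMA direction
 * below are placeholder assumptions, not taken from the mvpp2 patches.
 */
static struct page_pool *rx_page_pool_create(struct device *dev)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,	/* map pages once, at alloc */
		.order		= 0,
		.pool_size	= 256,			/* assumed RX ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	/* Pages handed out by this pool are already DMA mapped; page_pool
	 * only unmaps them when a page leaves the pool for good (e.g. via
	 * __page_pool_clean_page), which is the cost showing up in perf.
	 */
	return page_pool_create(&pp_params);
}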
> >
> >
> > > >    9.79%  [kernel]            [k] get_page_from_freelist
> >
> > You are clearly hitting the page allocator every time, because you are
> > not using the page_pool recycle facility.
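To show what the recycle facility buys here: a dropped frame can go straight
back to the pool's lockless cache instead of the page allocator. A sketch
modelled on what mvneta's XDP_DROP path does; the helper below is illustrative,
not code from the mvpp2 series:

/* Illustrative drop-path recycling, modelled on mvneta's XDP_DROP
 * handling; "pool" is the RX queue's page_pool and "data" the buffer
 * virtual address. Recycled pages skip both the DMA unmap and the
 * next trip to the page allocator.
 */
static void rx_drop_recycle(struct page_pool *pool, void *data)
{
	page_pool_recycle_direct(pool, virt_to_head_page(data));
}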
> >
> >
> > > >    7.18%  [kernel]            [k] free_unref_page
> > > >    4.64%  [kernel]            [k] build_skb
> > > >    4.63%  [kernel]            [k] __netif_receive_skb_core
> > > >    3.83%  [mvpp2]             [k] mvpp2_poll
> > > >    3.64%  [kernel]            [k] eth_type_trans
> > > >    3.61%  [kernel]            [k] kmem_cache_free
> > > >    3.03%  [kernel]            [k] kmem_cache_alloc
> > > >    2.76%  [kernel]            [k] dev_gro_receive
> > > >    2.69%  [mvpp2]             [k] mvpp2_bm_pool_put
> > > >    2.68%  [kernel]            [k] page_frag_free
> > > >    1.83%  [kernel]            [k] inet_gro_receive
> > > >    1.74%  [kernel]            [k] page_pool_alloc_pages
> > > >    1.70%  [kernel]            [k] __build_skb
> > > >    1.47%  [kernel]            [k] __alloc_pages_nodemask
> > > >    1.36%  [mvpp2]             [k] mvpp2_buf_alloc.isra.0
> > > >    1.29%  [kernel]            [k] tcf_action_exec
> > > >
> > > > I tried Ilias' patches for page_pool recycling and I get an improvement
> > > > to ~1100 Kpps, but I'm still far from the original allocator's rate.
> > --
> > Best regards,
> >   Jesper Dangaard Brouer
> >   MSc.CS, Principal Kernel Engineer at Red Hat
> >   LinkedIn: http://www.linkedin.com/in/brouer
> >
> 
> The change I made to use the recycling is the following:
> 
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> @@ -3071,7 +3071,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
>      if (pp)
> -        page_pool_release_page(pp, virt_to_page(data));
> +        skb_mark_for_recycle(skb, virt_to_page(data), &rxq->xdp_rxq.mem);
>      else
>          dma_unmap_single_attrs(dev->dev.parent, dma_addr,
> 
> 
Jesper is right, you aren't recycling anything.

The skb_mark_for_recycle() usage seems correct.
There are a few more places where we refuse to recycle (for example, coalescing
page_pool-backed and slab-allocated pages is forbidden). I wonder if you are
hitting one of those cases, so recycling never takes place.
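To illustrate the kind of check I mean, here is a hypothetical sketch of the
rule, not the actual patch; "pp_recycle" is borrowed as a stand-in for whatever
per-skb marker skb_mark_for_recycle() sets in the series:

/* Hypothetical sketch of the coalesce guard, not the real RFC code:
 * merging is refused when one skb carries page_pool-backed pages and
 * the other slab-allocated ones, since the combined skb would feed
 * pages the pool never allocated back into it on free.
 */
static bool skb_coalesce_recycle_ok(const struct sk_buff *to,
				    const struct sk_buff *from)
{
	return to->pp_recycle == from->pp_recycle;
}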
We'll hopefully release updated code shortly. I'll ping you and we can test
with that.


Thanks
/Ilias
> 
> 
> --
> Matteo Croce
> per aspera ad upstream
> 

Thread overview: 9+ messages
2019-12-24  1:01 [RFC net-next 0/2] mvpp2: page_pool support Matteo Croce
2019-12-24  1:01 ` [RFC net-next 1/2] mvpp2: use page_pool allocator Matteo Croce
2019-12-24  1:01 ` [RFC net-next 2/2] mvpp2: memory accounting Matteo Croce
2019-12-24  9:52 ` [RFC net-next 0/2] mvpp2: page_pool support Ilias Apalodimas
2019-12-24 13:34   ` Matteo Croce
2019-12-24 14:04     ` Jesper Dangaard Brouer
2019-12-24 14:00   ` Jesper Dangaard Brouer
2019-12-24 14:37     ` Matteo Croce
2019-12-27 11:51       ` Ilias Apalodimas [this message]
