From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Matteo Croce <mcroce@linux.microsoft.com>
Cc: brouer@redhat.com, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org,
Jonathan Lemon <jonathan.lemon@gmail.com>,
"David S. Miller" <davem@davemloft.net>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Lorenzo Bianconi <lorenzo@kernel.org>,
Saeed Mahameed <saeedm@nvidia.com>,
David Ahern <dsahern@gmail.com>,
Saeed Mahameed <saeed@kernel.org>, Andrew Lunn <andrew@lunn.ch>
Subject: Re: [PATCH net-next 6/6] mvneta: recycle buffers
Date: Tue, 23 Mar 2021 16:06:11 +0100
Message-ID: <20210323160611.28ddc712@carbon>
In-Reply-To: <20210322170301.26017-7-mcroce@linux.microsoft.com>

On Mon, 22 Mar 2021 18:03:01 +0100
Matteo Croce <mcroce@linux.microsoft.com> wrote:
> From: Matteo Croce <mcroce@microsoft.com>
>
> Use the new recycling API for page_pool.
> In a drop rate test, the packet rate increased by 10%,
> from 269 Kpps to 296 Kpps.
>
> perf top on a stock system shows:
>
> Overhead  Shared Object  Symbol
>   21.78%  [kernel]       [k] __pi___inval_dcache_area
>   21.66%  [mvneta]       [k] mvneta_rx_swbm
>    7.00%  [kernel]       [k] kmem_cache_alloc
>    6.05%  [kernel]       [k] eth_type_trans
>    4.44%  [kernel]       [k] kmem_cache_free.part.0
>    3.80%  [kernel]       [k] __netif_receive_skb_core
>    3.68%  [kernel]       [k] dev_gro_receive
>    3.65%  [kernel]       [k] get_page_from_freelist
>    3.43%  [kernel]       [k] page_pool_release_page
>    3.35%  [kernel]       [k] free_unref_page
>
> And this is the same output with recycling enabled:
>
> Overhead  Shared Object  Symbol
>   24.10%  [kernel]       [k] __pi___inval_dcache_area
>   23.02%  [mvneta]       [k] mvneta_rx_swbm
>    7.19%  [kernel]       [k] kmem_cache_alloc
>    6.50%  [kernel]       [k] eth_type_trans
>    4.93%  [kernel]       [k] __netif_receive_skb_core
>    4.77%  [kernel]       [k] kmem_cache_free.part.0
>    3.93%  [kernel]       [k] dev_gro_receive
>    3.03%  [kernel]       [k] build_skb
>    2.91%  [kernel]       [k] page_pool_put_page
>    2.85%  [kernel]       [k] __xdp_return
>
> The test was done with mausezahn on the TX side with 64-byte raw
> Ethernet frames.
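(For reference, a typical mausezahn invocation for such a test might be
the following; the exact flags and the eth0 interface are my assumption,
see the mz(1) man page:

	mz eth0 -c 0 -p 64 -a rand -b rand

i.e. raw frames zero-padded to 64 bytes, sent in an endless loop with
random source and destination MAC addresses.)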
>
> Signed-off-by: Matteo Croce <mcroce@microsoft.com>
> ---
> drivers/net/ethernet/marvell/mvneta.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index a635cf84608a..8b3250394703 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -2332,7 +2332,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  	if (!skb)
>  		return ERR_PTR(-ENOMEM);
>  
> -	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> +	skb_mark_for_recycle(skb, virt_to_page(xdp->data), &xdp->rxq->mem);
>  
>  	skb_reserve(skb, xdp->data - xdp->data_hard_start);
>  	skb_put(skb, xdp->data_end - xdp->data);
> @@ -2344,7 +2344,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
>  				skb_frag_page(frag), skb_frag_off(frag),
>  				skb_frag_size(frag), PAGE_SIZE);
> -		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
> +		skb_mark_for_recycle(skb, skb_frag_page(frag), &xdp->rxq->mem);
>  	}
>  
>  	return skb;
This causes skb_mark_for_recycle() to set 'skb->pp_recycle = 1' multiple
times for the same SKB (the function is copy-pasted below my signature to
help reviewers).

This makes me question whether we need an API for setting this per page
fragment, or whether skb_mark_for_recycle() needs to walk the page
fragments in the SKB and store the recycle info in each of their pages.
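
For illustration, such a walking variant could look roughly like this
(a sketch only; skb_mark_for_recycle_frags() is a hypothetical name, and
page_pool_store_mem_info() is the helper proposed in 3/6 of this series):

	/* Hypothetical sketch: mark the SKB once, then store the recycle
	 * mem info for the head page and for every page fragment.
	 */
	static inline void skb_mark_for_recycle_frags(struct sk_buff *skb,
						      struct xdp_mem_info *mem)
	{
		int i;

		skb->pp_recycle = 1;
		page_pool_store_mem_info(virt_to_page(skb->data), mem);
		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
			skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

			page_pool_store_mem_info(skb_frag_page(frag), mem);
		}
	}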
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
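
The skb_mark_for_recycle() helper as proposed in 3/6 of this series
(reconstructed here for reference; see that patch for the authoritative
version):

	static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
						struct xdp_mem_info *mem)
	{
		skb->pp_recycle = 1;
		page_pool_store_mem_info(page, mem);
	}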