* [PATCH v3] page_pool: add a comment explaining the fragment counter usage
@ 2023-02-17 22:21 Ilias Apalodimas
2023-02-18 19:53 ` Jesper Dangaard Brouer
` (3 more replies)
0 siblings, 4 replies; 6+ messages in thread
From: Ilias Apalodimas @ 2023-02-17 22:21 UTC (permalink / raw)
To: netdev
Cc: alexander.duyck, Ilias Apalodimas, Alexander Duyck,
Jesper Dangaard Brouer, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, linux-kernel
When reading the page_pool code, the first impression is that keeping
two separate counters, one being the page refcnt and the other being
the fragment counter pp_frag_count, is counter-intuitive.
However, without that fragment counter we don't know when to reliably
destroy or sync the outstanding DMA mappings. So let's add a comment
explaining this part.
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
Changes since v2:
- Removed an unneeded comma from the comment
Changes since v1:
- Updated the comment with the correct description for pp_frag_count
include/net/page_pool.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 34bf531ffc8d..ddfa0b328677 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -277,6 +277,16 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
unsigned int dma_sync_size,
bool allow_direct);
+/* pp_frag_count represents the number of writers who can update the page
+ * either by updating skb->data or via DMA mappings for the device.
+ * We can't rely on the page refcnt for that as we don't know who might be
+ * holding page references and we can't reliably destroy or sync DMA mappings
+ * of the fragments.
+ *
+ * When pp_frag_count reaches 0 we can either recycle the page if the page
+ * refcnt is 1 or return it back to the memory allocator and destroy any
+ * mappings we have.
+ */
static inline void page_pool_fragment_page(struct page *page, long nr)
{
atomic_long_set(&page->pp_frag_count, nr);
--
2.38.1
^ permalink raw reply related [flat|nested] 6+ messages in thread
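For readers coming to the API cold, below is a minimal, hypothetical driver-side sketch (not part of the patch) of how the two counters interact: the page refcnt is left untouched while pp_frag_count alone tracks the outstanding fragment users, and only the last user may sync or unmap the DMA mapping before the page is recycled or freed. The nr_frags split and the helper name example_fragment_rx_page() are illustrative assumptions; the page_pool calls themselves (page_pool_dev_alloc_pages(), page_pool_fragment_page(), page_pool_defrag_page(), page_pool_put_defragged_page()) are taken from include/net/page_pool.h as of this patch.

#include <net/page_pool.h>

/* Hypothetical example, not part of the patch: split one pool page into
 * nr_frags receive buffers and later release a single fragment.
 */
static void example_fragment_rx_page(struct page_pool *pool, long nr_frags)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return;

	/* The page refcnt is left alone; pp_frag_count alone records how
	 * many fragment users are still outstanding.
	 */
	page_pool_fragment_page(page, nr_frags);

	/* ... hand out the nr_frags buffers to the hardware / stack ... */

	/* When one fragment is returned: page_pool_defrag_page() drops one
	 * reference from pp_frag_count and returns 0 only for the last
	 * user, which is the only point where it is safe to sync/unmap the
	 * DMA mapping and recycle or free the page.
	 */
	if (page_pool_defrag_page(page, 1) == 0)
		page_pool_put_defragged_page(pool, page, -1, false);
}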
* Re: [PATCH v3] page_pool: add a comment explaining the fragment counter usage
2023-02-17 22:21 [PATCH v3] page_pool: add a comment explaining the fragment counter usage Ilias Apalodimas
@ 2023-02-18 19:53 ` Jesper Dangaard Brouer
2023-02-21 9:12 ` Paolo Abeni
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Jesper Dangaard Brouer @ 2023-02-18 19:53 UTC (permalink / raw)
To: Ilias Apalodimas, netdev
Cc: brouer, alexander.duyck, Alexander Duyck, Jesper Dangaard Brouer,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
linux-kernel
On 17/02/2023 23.21, Ilias Apalodimas wrote:
> When reading the page_pool code, the first impression is that keeping
> two separate counters, one being the page refcnt and the other being
> the fragment counter pp_frag_count, is counter-intuitive.
>
> However, without that fragment counter we don't know when to reliably
> destroy or sync the outstanding DMA mappings. So let's add a comment
> explaining this part.
>
> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
> Changes since v2:
> - Removed an unneeded comma from the comment
> Changes since v1:
> - Updated the comment with the correct description for pp_frag_count
> include/net/page_pool.h | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 34bf531ffc8d..ddfa0b328677 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -277,6 +277,16 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
> unsigned int dma_sync_size,
> bool allow_direct);
>
> +/* pp_frag_count represents the number of writers who can update the page
> + * either by updating skb->data or via DMA mappings for the device.
> + * We can't rely on the page refcnt for that as we don't know who might be
> + * holding page references and we can't reliably destroy or sync DMA mappings
> + * of the fragments.
> + *
> + * When pp_frag_count reaches 0 we can either recycle the page if the page
> + * refcnt is 1 or return it back to the memory allocator and destroy any
> + * mappings we have.
> + */
> static inline void page_pool_fragment_page(struct page *page, long nr)
> {
> atomic_long_set(&page->pp_frag_count, nr);
> --
> 2.38.1
>
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v3] page_pool: add a comment explaining the fragment counter usage
2023-02-17 22:21 [PATCH v3] page_pool: add a comment explaining the fragment counter usage Ilias Apalodimas
2023-02-18 19:53 ` Jesper Dangaard Brouer
@ 2023-02-21 9:12 ` Paolo Abeni
2023-02-21 17:14 ` Jakub Kicinski
2023-02-21 17:30 ` patchwork-bot+netdevbpf
3 siblings, 0 replies; 6+ messages in thread
From: Paolo Abeni @ 2023-02-21 9:12 UTC (permalink / raw)
To: Ilias Apalodimas, netdev
Cc: alexander.duyck, Alexander Duyck, Jesper Dangaard Brouer,
David S. Miller, Eric Dumazet, Jakub Kicinski, linux-kernel
On Sat, 2023-02-18 at 00:21 +0200, Ilias Apalodimas wrote:
> When reading the page_pool code, the first impression is that keeping
> two separate counters, one being the page refcnt and the other being
> the fragment counter pp_frag_count, is counter-intuitive.
>
> However, without that fragment counter we don't know when to reliably
> destroy or sync the outstanding DMA mappings. So let's add a comment
> explaining this part.
>
> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
# Form letter - net-next is closed
The merge window for v6.3 has begun and therefore net-next is closed
for new drivers, features, code refactoring and optimizations.
We are currently accepting bug fixes only.
Please repost when net-next reopens after Mar 6th.
RFC patches sent for review only are obviously welcome at any time.
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v3] page_pool: add a comment explaining the fragment counter usage
2023-02-17 22:21 [PATCH v3] page_pool: add a comment explaining the fragment counter usage Ilias Apalodimas
2023-02-18 19:53 ` Jesper Dangaard Brouer
2023-02-21 9:12 ` Paolo Abeni
@ 2023-02-21 17:14 ` Jakub Kicinski
2023-02-21 17:21 ` Ilias Apalodimas
2023-02-21 17:30 ` patchwork-bot+netdevbpf
3 siblings, 1 reply; 6+ messages in thread
From: Jakub Kicinski @ 2023-02-21 17:14 UTC (permalink / raw)
To: Ilias Apalodimas
Cc: netdev, alexander.duyck, Alexander Duyck, Jesper Dangaard Brouer,
David S. Miller, Eric Dumazet, Paolo Abeni, linux-kernel
On Sat, 18 Feb 2023 00:21:30 +0200 Ilias Apalodimas wrote:
> When reading the page_pool code, the first impression is that keeping
> two separate counters, one being the page refcnt and the other being
> the fragment counter pp_frag_count, is counter-intuitive.
>
> However, without that fragment counter we don't know when to reliably
> destroy or sync the outstanding DMA mappings. So let's add a comment
> explaining this part.
I discussed with Paolo off-list; since it's just a comment change,
I'll push it in.
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v3] page_pool: add a comment explaining the fragment counter usage
2023-02-21 17:14 ` Jakub Kicinski
@ 2023-02-21 17:21 ` Ilias Apalodimas
0 siblings, 0 replies; 6+ messages in thread
From: Ilias Apalodimas @ 2023-02-21 17:21 UTC (permalink / raw)
To: Jakub Kicinski
Cc: netdev, alexander.duyck, Alexander Duyck, Jesper Dangaard Brouer,
David S. Miller, Eric Dumazet, Paolo Abeni, linux-kernel
On Tue, 21 Feb 2023 at 19:15, Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Sat, 18 Feb 2023 00:21:30 +0200 Ilias Apalodimas wrote:
> > When reading the page_pool code, the first impression is that keeping
> > two separate counters, one being the page refcnt and the other being
> > the fragment counter pp_frag_count, is counter-intuitive.
> >
> > However, without that fragment counter we don't know when to reliably
> > destroy or sync the outstanding DMA mappings. So let's add a comment
> > explaining this part.
>
> I discussed with Paolo off-list; since it's just a comment change,
> I'll push it in.
Fair enough. Thanks Jakub.
Regards
/Ilias
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v3] page_pool: add a comment explaining the fragment counter usage
2023-02-17 22:21 [PATCH v3] page_pool: add a comment explaining the fragment counter usage Ilias Apalodimas
` (2 preceding siblings ...)
2023-02-21 17:14 ` Jakub Kicinski
@ 2023-02-21 17:30 ` patchwork-bot+netdevbpf
3 siblings, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-02-21 17:30 UTC (permalink / raw)
To: Ilias Apalodimas
Cc: netdev, alexander.duyck, alexanderduyck, hawk, davem, edumazet,
kuba, pabeni, linux-kernel
Hello:
This patch was applied to netdev/net-next.git (master)
by Jakub Kicinski <kuba@kernel.org>:
On Sat, 18 Feb 2023 00:21:30 +0200 you wrote:
> When reading the page_pool code, the first impression is that keeping
> two separate counters, one being the page refcnt and the other being
> the fragment counter pp_frag_count, is counter-intuitive.
>
> However, without that fragment counter we don't know when to reliably
> destroy or sync the outstanding DMA mappings. So let's add a comment
> explaining this part.
>
> [...]
Here is the summary with links:
- [v3] page_pool: add a comment explaining the fragment counter usage
https://git.kernel.org/netdev/net-next/c/4d4266e3fd32
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
^ permalink raw reply [flat|nested] 6+ messages in thread
end of thread
Thread overview: 6+ messages
2023-02-17 22:21 [PATCH v3] page_pool: add a comment explaining the fragment counter usage Ilias Apalodimas
2023-02-18 19:53 ` Jesper Dangaard Brouer
2023-02-21 9:12 ` Paolo Abeni
2023-02-21 17:14 ` Jakub Kicinski
2023-02-21 17:21 ` Ilias Apalodimas
2023-02-21 17:30 ` patchwork-bot+netdevbpf