linux-mm.kvack.org archive mirror
From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Matthew Wilcox <willy@infradead.org>,
	Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: brouer@redhat.com, Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	netdev@vger.kernel.org, linux-mm@kvack.org,
	Shakeel Butt <shakeelb@google.com>
Subject: Re: [PATCH v2 17/24] page_pool: Convert page_pool_return_skb_page() to use netmem
Date: Tue, 10 Jan 2023 11:04:32 +0100	[thread overview]
Message-ID: <67d60543-2f3c-b0ff-b7fb-e44518cf325b@redhat.com> (raw)
In-Reply-To: <Y7xexniPnKSgCMVE@casper.infradead.org>


On 09/01/2023 19.36, Matthew Wilcox wrote:
> On Fri, Jan 06, 2023 at 09:16:25PM +0100, Jesper Dangaard Brouer wrote:
>>
>>
>> On 06/01/2023 17.53, Matthew Wilcox wrote:
>>> On Fri, Jan 06, 2023 at 04:49:12PM +0100, Jesper Dangaard Brouer wrote:
>>>> On 05/01/2023 22.46, Matthew Wilcox (Oracle) wrote:
>>>>> This function accesses the pagepool members of struct page directly,
>>>>> so it needs to become netmem.  Add page_pool_put_full_netmem() and
>>>>> page_pool_recycle_netmem().
>>>>>
>>>>> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>>>>> ---
>>>>>     include/net/page_pool.h | 14 +++++++++++++-
>>>>>     net/core/page_pool.c    | 13 ++++++-------
>>>>>     2 files changed, 19 insertions(+), 8 deletions(-)
>>>>>
>>>>> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
>>>>> index fbb653c9f1da..126c04315929 100644
>>>>> --- a/include/net/page_pool.h
>>>>> +++ b/include/net/page_pool.h
>>>>> @@ -464,10 +464,16 @@ static inline void page_pool_put_page(struct page_pool *pool,
>>>>>     }
>>>>>     /* Same as above but will try to sync the entire area pool->max_len */
>>>>> +static inline void page_pool_put_full_netmem(struct page_pool *pool,
>>>>> +		struct netmem *nmem, bool allow_direct)
>>>>> +{
>>>>> +	page_pool_put_netmem(pool, nmem, -1, allow_direct);
>>>>> +}
>>>>> +
>>>>>     static inline void page_pool_put_full_page(struct page_pool *pool,
>>>>>     					   struct page *page, bool allow_direct)
>>>>>     {
>>>>> -	page_pool_put_page(pool, page, -1, allow_direct);
>>>>> +	page_pool_put_full_netmem(pool, page_netmem(page), allow_direct);
>>>>>     }
>>>>>     /* Same as above but the caller must guarantee safe context. e.g NAPI */
>>>>> @@ -477,6 +483,12 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
>>>>>     	page_pool_put_full_page(pool, page, true);
>>>>>     }
>>>>> +static inline void page_pool_recycle_netmem(struct page_pool *pool,
>>>>> +					    struct netmem *nmem)
>>>>> +{
>>>>> +	page_pool_put_full_netmem(pool, nmem, true);
>>>>                                                 ^^^^
>>>>
>>>> It is not clear in what context page_pool_recycle_netmem() will be used,
>>>> but I think the 'true' (allow_direct=true) might be wrong here.
>>>>
>>>> It is only in limited special cases (RX-NAPI context) we can allow
>>>> direct return to the RX-alloc-cache.
>>>
>>> Mmm.  It's a c'n'p of the previous function:
>>>
>>> static inline void page_pool_recycle_direct(struct page_pool *pool,
>>>                                               struct page *page)
>>> {
>>>           page_pool_put_full_page(pool, page, true);
>>> }
>>>
>>> so perhaps it's just badly named?
>>
>> Yes, I think so.
>>
>> Can we name it:
>>   page_pool_recycle_netmem_direct
>>
>> And perhaps add a comment with a warning like:
>>   /* Caller must guarantee safe context. e.g NAPI */
>>
>> Like the page_pool_recycle_direct() function has a comment.
> 
> I don't really like the new name you're proposing here.  Really,
> page_pool_recycle_direct() is the perfect name, it just has the wrong
> type.
> 
> I considered the attached megapatch, but I don't think that's a great
> idea.
> 
> So here's what I'm planning instead:

I do like the patch below.
I must admit I had to look up _Generic() when I started reviewing this
patchset.  I think it makes a lot of sense to use it here, as it allows
us to convert drivers over more easily.

We have 22 call spots in drivers:

  $ git grep page_pool_recycle_direct drivers/net/ethernet/ | wc -l
  22

But only around 9 distinct drivers are involved, as each driver calls it
in multiple places.


> 
>      page_pool: Allow page_pool_recycle_direct() to take a netmem or a page
> 
>      With no better name for a variant of page_pool_recycle_direct() which
>      takes a netmem instead of a page, use _Generic() to allow it to take
>      either a page or a netmem argument.  It's a bit ugly, but maybe not
>      the worst alternative?
> 
>      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> 
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index abe3822a1125..1eed8ed2dcc1 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -477,12 +477,22 @@ static inline void page_pool_put_full_page(struct page_pool *pool,
>   }
> 
>   /* Same as above but the caller must guarantee safe context. e.g NAPI */
> -static inline void page_pool_recycle_direct(struct page_pool *pool,
> +static inline void __page_pool_recycle_direct(struct page_pool *pool,
> +                                           struct netmem *nmem)
> +{
> +       page_pool_put_full_netmem(pool, nmem, true);
> +}
> +
> +static inline void __page_pool_recycle_page_direct(struct page_pool *pool,
>                                              struct page *page)
>   {
> -       page_pool_put_full_page(pool, page, true);
> +       page_pool_put_full_netmem(pool, page_netmem(page), true);
>   }
> 
> +#define page_pool_recycle_direct(pool, mem)    _Generic((mem),         \
> +       struct netmem *: __page_pool_recycle_direct(pool, (struct netmem *)mem),                \
> +       struct page *:   __page_pool_recycle_page_direct(pool, (struct page *)mem))
> +
>   #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT        \
>                  (sizeof(dma_addr_t) > sizeof(unsigned long))
> 
> 


