linux-mm.kvack.org archive mirror
From: Mel Gorman <mgorman@techsingularity.net>
To: Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: Chuck Lever <chuck.lever@oracle.com>,
	linux-nfs@vger.kernel.org, linux-mm@kvack.org, brouer@redhat.com
Subject: Re: [PATCH v3] mm/page_alloc: Further fix __alloc_pages_bulk() return value
Date: Wed, 30 Jun 2021 07:58:55 +0100	[thread overview]
Message-ID: <20210630065855.GH3840@techsingularity.net> (raw)
In-Reply-To: <be9186d9-1e8e-4d99-ab0f-84c0518025c5@redhat.com>

On Tue, Jun 29, 2021 at 06:01:12PM +0200, Jesper Dangaard Brouer wrote:
> On 29/06/2021 15.48, Chuck Lever wrote:
> 
> > The call site in __page_pool_alloc_pages_slow() also seems to be
> > confused on this matter. It should be attended to by someone who
> > is familiar with that code.
> 
> I don't think we need a fix for __page_pool_alloc_pages_slow(), as the array
> is guaranteed to be empty.
> 
> But a fix would look like this:
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index c137ce308c27..1b04538a3da3 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -245,22 +245,23 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>         if (unlikely(pp_order))
>                 return __page_pool_alloc_page_order(pool, gfp);
> 
>         /* Unnecessary as alloc cache is empty, but guarantees zero count */
> -       if (unlikely(pool->alloc.count > 0))
> +       if (unlikely(pool->alloc.count > 0))   // ^^^^^^^^^^^^^^^^^^^^^^
>                 return pool->alloc.cache[--pool->alloc.count];
> 
>         /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
>         memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
> 
> +       /* bulk API ret value also counts existing pages, but array is empty */
>         nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
>         if (unlikely(!nr_pages))
>                 return NULL;
> 
>         /* Pages have been filled into alloc.cache array, but count is zero
>          * and page elements have not been (possibly) DMA mapped.
>          */
> -       for (i = 0; i < nr_pages; i++) {
> +       for (i = pool->alloc.count; i < nr_pages; i++) {

That last part would break because the loop body updates pool->alloc.count.
Just setting pool->alloc.count = nr_pages would also break if PP_FLAG_DMA_MAP
was set and page_pool_dma_map() failed. Right?

-- 
Mel Gorman
SUSE Labs


Thread overview: 8+ messages
2021-06-29 13:48 [PATCH v3] mm/page_alloc: Further fix __alloc_pages_bulk() return value Chuck Lever
2021-06-29 15:33 ` Mel Gorman
2021-06-29 16:04   ` Jesper Dangaard Brouer
2021-06-29 16:01 ` Jesper Dangaard Brouer
2021-06-29 16:32   ` Chuck Lever III
2021-06-30  6:58   ` Mel Gorman [this message]
2021-06-30 11:22     ` Jesper Dangaard Brouer
2021-06-30 12:05       ` Mel Gorman
