From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 30 Jun 2021 07:58:55 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Jesper Dangaard Brouer
Cc: Chuck Lever, linux-nfs@vger.kernel.org, linux-mm@kvack.org,
	brouer@redhat.com
Subject: Re: [PATCH v3] mm/page_alloc: Further fix __alloc_pages_bulk() return value
Message-ID: <20210630065855.GH3840@techsingularity.net>
References: <162497449506.16614.7781489905877008435.stgit@klimt.1015granger.net>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-nfs@vger.kernel.org

On Tue, Jun 29, 2021 at 06:01:12PM +0200, Jesper Dangaard Brouer wrote:
> On 29/06/2021 15.48, Chuck Lever wrote:
>
> > The call site in __page_pool_alloc_pages_slow() also seems to be
> > confused on this matter. It should be attended to by someone who
> > is familiar with that code.
>
> I don't think we need a fix for __page_pool_alloc_pages_slow(), as
> the array is guaranteed to be empty.
>
> But a fix would look like this:
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index c137ce308c27..1b04538a3da3 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -245,22 +245,23 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  	if (unlikely(pp_order))
>  		return __page_pool_alloc_page_order(pool, gfp);
>
>  	/* Unnecessary as alloc cache is empty, but guarantees zero count */
> -	if (unlikely(pool->alloc.count > 0))
> +	if (unlikely(pool->alloc.count > 0))   // ^^^^^^^^^^^^^^^^^^^^^^
>  		return pool->alloc.cache[--pool->alloc.count];
>
>  	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
>  	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
>
> +	/* bulk API return value also counts existing pages, but array is empty */
>  	nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
>  	if (unlikely(!nr_pages))
>  		return NULL;
>
>  	/* Pages have been filled into alloc.cache array, but count is zero
>  	 * and page elements have not been (possibly) DMA mapped.
>  	 */
> -	for (i = 0; i < nr_pages; i++) {
> +	for (i = pool->alloc.count; i < nr_pages; i++) {

That last part would break, as the loop is updating pool->alloc.count.
Just setting pool->alloc.count = nr_pages would break if PP_FLAG_DMA_MAP
was set and page_pool_dma_map() failed. Right?

-- 
Mel Gorman
SUSE Labs