Message-ID: <0cb5a0cc-724e-4c9a-982e-bbc0d87a246e@redhat.com>
Date: Thu, 19 Dec 2024 10:50:22 -0500
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: alloc_pages_bulk_noprof: drop page_list argument
To: David Hildenbrand, linux-mm@kvack.org, mgorman@techsingularity.net, willy@infradead.org
Cc: linux-kernel@vger.kernel.org, lcapitulino@gmail.com
References: <4d3041315d1032e9acbe50971f952e716e8f4089.1734453061.git.luizcap@redhat.com> <9acb98b0-2c6e-4833-96ae-fbefd37854f5@redhat.com>
From: Luiz Capitulino
In-Reply-To: <9acb98b0-2c6e-4833-96ae-fbefd37854f5@redhat.com>

On 2024-12-19 08:24, David Hildenbrand wrote:
> On 17.12.24 17:31, Luiz Capitulino wrote:
>> The commit 387ba26fb1cb added __alloc_pages_bulk() along with the page_list
>> argument. The next commit 0f87d9d30f21 added the array-based argument.
>> As
> 
> Nit: Use "commit 387ba26fb1cb ("mm/page_alloc: add a bulk page allocator")", same for the other commit
> 
> (likely scripts/checkpatch.pl should complain)
> 
>> it turns out, the page_list argument has no users in the current tree (if it
>> ever had any). Dropping it allows for a slight simplification and eliminates
>> some unnecessary checks, now that page_array is required.
>> 
> 
> It's probably a good idea to link to Mel's patch and the discussion from 2023. Quoting what Willy said back then about performance of list vs. arrays might be valuable to have here as well.

I'll add these and the other patch's suggestion for v2. Thanks for the review, David.

> 
> Acked-by: David Hildenbrand 
> 
>> Signed-off-by: Luiz Capitulino 
>> ---
>>   include/linux/gfp.h |  8 ++------
>>   mm/mempolicy.c      | 14 +++++++-------
>>   mm/page_alloc.c     | 39 ++++++++++++---------------------------
>>   3 files changed, 21 insertions(+), 40 deletions(-)
>> 
>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>> index b0fe9f62d15b6..eebed36443b35 100644
>> --- a/include/linux/gfp.h
>> +++ b/include/linux/gfp.h
>> @@ -212,7 +212,6 @@ struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_
>>   unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>                   nodemask_t *nodemask, int nr_pages,
>> -                struct list_head *page_list,
>>                   struct page **page_array);
>>   #define __alloc_pages_bulk(...)            \
>>     alloc_hooks(alloc_pages_bulk_noprof(__VA_ARGS__))
>> @@ -223,11 +222,8 @@ unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
>>       alloc_hooks(alloc_pages_bulk_array_mempolicy_noprof(__VA_ARGS__))
>>   /* Bulk allocate order-0 pages */
>> -#define alloc_pages_bulk_list(_gfp, _nr_pages, _list)            \
>> -    __alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _list, NULL)
>> -
>>   #define alloc_pages_bulk_array(_gfp, _nr_pages, _page_array)        \
>> -    __alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, NULL, _page_array)
>> +    __alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _page_array)
>>   static inline unsigned long
>>   alloc_pages_bulk_array_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
>> @@ -236,7 +232,7 @@ alloc_pages_bulk_array_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
>>       if (nid == NUMA_NO_NODE)
>>           nid = numa_mem_id();
>> -    return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, NULL, page_array);
>> +    return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, page_array);
>>   }
>>   #define alloc_pages_bulk_array_node(...)                \
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index 04f35659717ae..42a7b07ccc15a 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -2375,13 +2375,13 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
>>           if (delta) {
>>               nr_allocated = alloc_pages_bulk_noprof(gfp,
>>                       interleave_nodes(pol), NULL,
>> -                    nr_pages_per_node + 1, NULL,
>> +                    nr_pages_per_node + 1,
>>                       page_array);
>>               delta--;
>>           } else {
>>               nr_allocated = alloc_pages_bulk_noprof(gfp,
>>                       interleave_nodes(pol), NULL,
>> -                    nr_pages_per_node, NULL, page_array);
>> +                    nr_pages_per_node, page_array);
>>           }
>>           page_array += nr_allocated;
>> @@ -2430,7 +2430,7 @@ static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
>>       if (weight && node_isset(node, nodes)) {
>>           node_pages = min(rem_pages, weight);
>>           nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
>> -                          NULL, page_array);
>> +                          page_array);
>>           page_array += nr_allocated;
>>           total_allocated += nr_allocated;
>>           /* if that's all the pages, no need to interleave */
>> @@ -2493,7 +2493,7 @@ static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
>>           if (!node_pages)
>>               break;
>>           nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
>> -                          NULL, page_array);
>> +                          page_array);
>>           page_array += nr_allocated;
>>           total_allocated += nr_allocated;
>>           if (total_allocated == nr_pages)
>> @@ -2517,11 +2517,11 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
>>       preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
>>       nr_allocated  = alloc_pages_bulk_noprof(preferred_gfp, nid, &pol->nodes,
>> -                       nr_pages, NULL, page_array);
>> +                       nr_pages, page_array);
>>       if (nr_allocated < nr_pages)
>>           nr_allocated += alloc_pages_bulk_noprof(gfp, numa_node_id(), NULL,
>> -                nr_pages - nr_allocated, NULL,
>> +                nr_pages - nr_allocated,
>>                   page_array + nr_allocated);
>>       return nr_allocated;
>>   }
>> @@ -2557,7 +2557,7 @@ unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
>>       nid = numa_node_id();
>>       nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
>>       return alloc_pages_bulk_noprof(gfp, nid, nodemask,
>> -                       nr_pages, NULL, page_array);
>> +                       nr_pages, page_array);
>>   }
>>   int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 1cb4b8c8886d8..3ef6d902e2fea 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4529,28 +4529,23 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>>   }
>>   /*
>> - * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
>> + * __alloc_pages_bulk - Allocate a number of order-0 pages to an array
>>    * @gfp: GFP flags for the allocation
>>    * @preferred_nid: The preferred NUMA node ID to allocate from
>>    * @nodemask: Set of nodes to allocate from, may be NULL
>> - * @nr_pages: The number of pages desired on the list or array
>> - * @page_list: Optional list to store the allocated pages
>> - * @page_array: Optional array to store the pages
>> + * @nr_pages: The number of pages desired in the array
>> + * @page_array: Array to store the pages
>>    *
>>    * This is a batched version of the page allocator that attempts to
>> - * allocate nr_pages quickly.
>> - * Pages are added to page_list if page_list
>> - * is not NULL, otherwise it is assumed that the page_array is valid.
>> + * allocate nr_pages quickly. Pages are added to the page_array.
>>    *
>> - * For lists, nr_pages is the number of pages that should be allocated.
>> - *
>> - * For arrays, only NULL elements are populated with pages and nr_pages
>> + * Note that only NULL elements are populated with pages and nr_pages
>>    * is the maximum number of pages that will be stored in the array.
>>    *
>> - * Returns the number of pages on the list or array.
>> + * Returns the number of pages in the array.
>>    */
>>   unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>               nodemask_t *nodemask, int nr_pages,
>> -            struct list_head *page_list,
>>               struct page **page_array)
>>   {
>>       struct page *page;
>> @@ -4568,7 +4563,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>        * Skip populated array elements to determine if any pages need
>>        * to be allocated before disabling IRQs.
>>        */
>> -    while (page_array && nr_populated < nr_pages && page_array[nr_populated])
>> +    while (nr_populated < nr_pages && page_array[nr_populated])
>>           nr_populated++;
>>       /* No pages requested? */
>> @@ -4576,7 +4571,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>           goto out;
>>       /* Already populated array? */
>> -    if (unlikely(page_array && nr_pages - nr_populated == 0))
>> +    if (unlikely(nr_pages - nr_populated == 0))
>>           goto out;
>>       /* Bulk allocator does not support memcg accounting. */
>> @@ -4658,7 +4653,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>       while (nr_populated < nr_pages) {
>>           /* Skip existing pages */
>> -        if (page_array && page_array[nr_populated]) {
>> +        if (page_array[nr_populated]) {
>>               nr_populated++;
>>               continue;
>>           }
>> @@ -4676,11 +4671,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>           nr_account++;
>>           prep_new_page(page, 0, gfp, 0);
>> -        if (page_list)
>> -            list_add(&page->lru, page_list);
>> -        else
>> -            page_array[nr_populated] = page;
>> -        nr_populated++;
>> +        page_array[nr_populated++] = page;
>>       }
>>       pcp_spin_unlock(pcp);
>> @@ -4697,14 +4688,8 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>>   failed:
>>       page = __alloc_pages_noprof(gfp, 0, preferred_nid, nodemask);
>> -    if (page) {
>> -        if (page_list)
>> -            list_add(&page->lru, page_list);
>> -        else
>> -            page_array[nr_populated] = page;
>> -        nr_populated++;
>> -    }
>> -
>> +    if (page)
>> +        page_array[nr_populated++] = page;
>>       goto out;
>>   }
>>   EXPORT_SYMBOL_GPL(alloc_pages_bulk_noprof);
> 
> 