From: Dan Williams
Date: Wed, 28 Jul 2021 09:08:56 -0700
Subject: Re: [PATCH v3 09/14] mm/page_alloc: reuse tail struct pages for compound pagemaps
References: <20210714193542.21857-1-joao.m.martins@oracle.com> <20210714193542.21857-10-joao.m.martins@oracle.com>
To: Joao Martins
Cc: Linux MM, Vishal Verma, Dave Jiang,
 Naoya Horiguchi, Matthew Wilcox, Jason Gunthorpe, John Hubbard, Jane Chu, Muchun Song, Mike Kravetz, Andrew Morton, Jonathan Corbet, Linux NVDIMM, Linux Doc Mailing List

On Wed, Jul 28, 2021 at 8:56 AM Joao Martins wrote:
>
> On 7/28/21 8:28 AM, Dan Williams wrote:
> > On Wed, Jul 14, 2021 at 12:36 PM Joao Martins wrote:
> >>
> >> +	/*
> >> +	 * With compound page geometry and when struct pages are stored in ram
> >> +	 * (!altmap) most tail pages are reused. Consequently, the amount of
> >> +	 * unique struct pages to initialize is a lot smaller than the total
> >> +	 * amount of struct pages being mapped.
> >> +	 * See vmemmap_populate_compound_pages().
> >> +	 */
> >> +	if (!altmap)
> >> +		nr_pages = min_t(unsigned long, nr_pages,
> >
> > What's the scenario where nr_pages is < 128? Shouldn't alignment
> > already be guaranteed?
> >
> Oh yeah, that's right.
>
> >> +				 2 * (PAGE_SIZE/sizeof(struct page)));
> >
> >
> >> +
> >>	__SetPageHead(page);
> >>
> >>	for (i = 1; i < nr_pages; i++) {
> >> @@ -6657,7 +6669,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
> >>			continue;
> >>
> >>		memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
> >> -				     pfns_per_compound);
> >> +				     altmap, pfns_per_compound);
> >
> > This feels odd, memmap_init_compound() doesn't really care about
> > altmap, what do you think about explicitly calculating the parameters
> > that memmap_init_compound() needs and passing them in?
> >
> > Not a strong requirement to change, but take another look and let me know.
> >
>
> Yeah, memmap_init_compound() indeed doesn't care about @altmap itself -- but a
> previous comment was to abstract this away in memmap_init_compound() given the
> mix of complexity in the memmap_init_zone_device() PAGE_SIZE geometry case and
> the compound case:
>
> https://lore.kernel.org/linux-mm/CAPcyv4gtSqfmuAaX9cs63OvLkf-h4B_5fPiEnM9p9cqLZztXpg@mail.gmail.com/
>
> Before, this was called @ntails above and I hid that calculation in
> memmap_init_compound().
>
> But I can move this back to the caller:
>
> memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
>		!altmap ? 2 * (PAGE_SIZE/sizeof(struct page)) : pfns_per_compound);
>
> Or with another helper like:
>
> #define compound_nr_pages(__altmap, __nr_pages) \
>	(!(__altmap) ? 2 * (PAGE_SIZE/sizeof(struct page)) : (__nr_pages))
>
> memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
>		compound_nr_pages(altmap, pfns_per_compound));

I like the helper, but I'd go further and make it a function with a comment
noting that it is a paired / mild layering violation with explicit knowledge
of how the sparse_vmemmap() internals handle compound pages in the presence
of an altmap. I.e. if someone later goes to add altmap support, leave them a
breadcrumb that they need to update both locations.