From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: Mike Rapoport <rppt@linux.ibm.com>,
Pasha Tatashin <Pavel.Tatashin@microsoft.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"mhocko@suse.com" <mhocko@suse.com>,
"dave.jiang@intel.com" <dave.jiang@intel.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"willy@infradead.org" <willy@infradead.org>,
"davem@davemloft.net" <davem@davemloft.net>,
"yi.z.zhang@linux.intel.com" <yi.z.zhang@linux.intel.com>,
"khalid.aziz@oracle.com" <khalid.aziz@oracle.com>,
"rppt@linux.vnet.ibm.com" <rppt@linux.vnet.ibm.com>,
"vbabka@suse.cz" <vbabka@suse.cz>,
"sparclinux@vger.kernel.org" <sparclinux@vger.kernel.org>,
"dan.j.williams@intel.com" <dan.j.williams@intel.com>,
"ldufour@linux.vnet.ibm.com" <ldufour@linux.vnet.ibm.com>,
"mgorman@techsingularity.net" <mgorman@techsingularity.net>,
"mingo@kernel.org" <mingo@kernel.org>,
"kirill.shutemov@linux.intel.com"
<kirill.shutemov@linux.intel.com>
Subject: Re: [mm PATCH v4 3/6] mm: Use memblock/zone specific iterator for handling deferred page init
Date: Thu, 01 Nov 2018 08:10:48 -0700
Message-ID: <5eb92a4b34a934459e8558d0f7695a89ee178f89.camel@linux.intel.com>
In-Reply-To: <20181101061733.GA8866@rapoport-lnx>
On Thu, 2018-11-01 at 08:17 +0200, Mike Rapoport wrote:
> On Wed, Oct 31, 2018 at 03:40:02PM +0000, Pasha Tatashin wrote:
> >
> >
> > On 10/17/18 7:54 PM, Alexander Duyck wrote:
> > > This patch introduces a new iterator for_each_free_mem_pfn_range_in_zone.
> > >
> > > This iterator will take care of making sure a given memory range provided
> > > is in fact contained within a zone. It takes care of all the bounds checking
> > > we were doing in deferred_grow_zone, and deferred_init_memmap. In addition
> > > it should help to speed up the search a bit by iterating until the end of a
> > > range is greater than the start of the zone pfn range, and will exit
> > > completely if the start is beyond the end of the zone.
> > >
> > > This patch adds yet another iterator called
> > > for_each_free_mem_range_in_zone_from and then uses it to support
> > > initializing and freeing pages in groups no larger than MAX_ORDER_NR_PAGES.
> > > By doing this we can greatly improve the cache locality of the pages while
> > > we do several loops over them in the init and freeing process.
> > >
> > > We are able to tighten the loops as a result since we only really need the
> > > checks for first_init_pfn in our first iteration and after that we can
> > > assume that all future values will be greater than this. So I have added a
> > > function called deferred_init_mem_pfn_range_in_zone that primes the
> > > iterators and if it fails we can just exit.
> > >
> > > On my x86_64 test system with 384GB of memory per node I saw a reduction in
> > > initialization time from 1.85s to 1.38s as a result of this patch.
> > >
> > > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>
>
> [ ... ]
>
> > > ---
> > > include/linux/memblock.h | 58 +++++++++++++++
> > > mm/memblock.c | 63 ++++++++++++++++
> > > mm/page_alloc.c | 176 ++++++++++++++++++++++++++++++++--------------
> > > 3 files changed, 242 insertions(+), 55 deletions(-)
> > >
> > > diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> > > index aee299a6aa76..2ddd1bafdd03 100644
> > > --- a/include/linux/memblock.h
> > > +++ b/include/linux/memblock.h
> > > @@ -178,6 +178,25 @@ void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start,
> > > p_start, p_end, p_nid))
> > >
> > > /**
> > > + * for_each_mem_range_from - iterate through memblock areas from type_a and not
> > > + * included in type_b. Or just type_a if type_b is NULL.
> > > + * @i: u64 used as loop variable
> > > + * @type_a: ptr to memblock_type to iterate
> > > + * @type_b: ptr to memblock_type which excludes from the iteration
> > > + * @nid: node selector, %NUMA_NO_NODE for all nodes
> > > + * @flags: pick from blocks based on memory attributes
> > > + * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
> > > + * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
> > > + * @p_nid: ptr to int for nid of the range, can be %NULL
> > > + */
> > > +#define for_each_mem_range_from(i, type_a, type_b, nid, flags, \
> > > + p_start, p_end, p_nid) \
> > > + for (i = 0, __next_mem_range(&i, nid, flags, type_a, type_b, \
> > > + p_start, p_end, p_nid); \
> > > + i != (u64)ULLONG_MAX; \
> > > + __next_mem_range(&i, nid, flags, type_a, type_b, \
> > > + p_start, p_end, p_nid))
> > > +/**
> > > * for_each_mem_range_rev - reverse iterate through memblock areas from
> > > * type_a and not included in type_b. Or just type_a if type_b is NULL.
> > > * @i: u64 used as loop variable
> > > @@ -248,6 +267,45 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
> > > i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
> > > #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
> > >
> > > +#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>
> Sorry for jumping late, but I've noticed this only now.
> Do the new iterators have to be restricted by
> CONFIG_DEFERRED_STRUCT_PAGE_INIT?
They don't have to be. I just wrapped them since I figured it is better
to just strip the code if it isn't going to be used rather than leave
it floating around taking up space.
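To illustrate, roughly the shape of that guard. This is only a sketch
reconstructed from the commit message, not a quote of the actual hunk
(which is trimmed above), so the function name, prototype, and macro
body below are assumptions:

#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
/* Only compiled in when deferred struct page init can actually use it. */
void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
				  unsigned long *out_spfn,
				  unsigned long *out_epfn);

#define for_each_free_mem_pfn_range_in_zone(i, zone, p_start, p_end)	\
	for (i = 0,							\
	     __next_mem_pfn_range_in_zone(&i, zone, p_start, p_end);	\
	     i != (u64)ULLONG_MAX;					\
	     __next_mem_pfn_range_in_zone(&i, zone, p_start, p_end))
#endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */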
Thanks.
- Alex
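For readers skimming the quoted hunk above, a minimal usage sketch of the
new for_each_mem_range_from iterator. The parameters follow the kernel-doc
quoted above; the choice of memblock types, flags, and the printed output
here are only illustrative:

	u64 i;
	phys_addr_t start, end;

	/* Walk ranges present in memblock.memory but not in memblock.reserved. */
	for_each_mem_range_from(i, &memblock.memory, &memblock.reserved,
				NUMA_NO_NODE, MEMBLOCK_NONE,
				&start, &end, NULL)
		pr_info("free range: %pa..%pa\n", &start, &end);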