From: David Hildenbrand <david@redhat.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Dave Hansen <dave.hansen@intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
Hillf Danton <hdanton@sina.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Vlastimil Babka <vbabka@suse.cz>,
Michal Hocko <mhocko@kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>, "Tang, Feng" <feng.tang@intel.com>
Subject: Re: [PATCH 0/6 v2] Calculate pcp->high based on zone sizes and active CPUs
Date: Fri, 28 May 2021 11:52:53 +0200
Message-ID: <6c189def-11cc-80db-0fde-56aa506cfdea@redhat.com>
In-Reply-To: <20210528094949.GL30378@techsingularity.net>
On 28.05.21 11:49, Mel Gorman wrote:
> On Fri, May 28, 2021 at 11:08:01AM +0200, David Hildenbrand wrote:
>> On 28.05.21 11:03, David Hildenbrand wrote:
>>> On 28.05.21 10:55, Mel Gorman wrote:
>>>> On Thu, May 27, 2021 at 12:36:21PM -0700, Dave Hansen wrote:
>>>>> Hi Mel,
>>>>>
>>>>> Feng Tang tossed these on a "Cascade Lake" system with 96 threads and
>>>>> ~512G of persistent memory and 128G of DRAM. The PMEM is in "volatile
>>>>> use" mode and being managed via the buddy just like the normal RAM.
>>>>>
>>>>> The PMEM zones are big ones:
>>>>>
>>>>> present 65011712 pages = 248 GB
>>>>> high 134595 pages = 525 MB
>>>>>
>>>>> The PMEM nodes, of course, don't have any CPUs in them.
>>>>>
>>>>> With your series, the pcp->high value per-cpu is 69584 pages or about
>>>>> 270MB per CPU. Scaled up by the 96 CPU threads, that's ~26GB of
>>>>> worst-case memory in the pcps per zone, or roughly 10% of the size of
>>>>> the zone.
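(For reference, the arithmetic behind those figures, assuming 4 KiB base
pages:

	69584 pages/CPU * 4 KiB/page ~= 272 MiB per CPU
	272 MiB * 96 CPUs ~= 25.5 GiB, i.e. ~10% of the 248 GiB zone)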
>>>
>>> Reading about such large amounts of free memory theoretically stuck
>>> on PCP lists, I guess we really want to start draining the PCP in
>>> alloc_contig_range(), just as we do for memory hotunplug when offlining.
>>>
>>
>> Correction: we already drain the pcp; we just don't temporarily disable
>> it, so a race like the one described in offline_pages() could still apply:
>>
>> "Disable pcplists so that page isolation cannot race with freeing
>> in a way that pages from isolated pageblock are left on pcplists."
>>
>> Guess we'd then want to move the draining before start_isolate_page_range()
>> in alloc_contig_range().
>>
>
> Or instead of draining, validate that the PFN range in alloc_contig_range
> lies within a single zone and, if so, call zone_pcp_disable() before
> start_isolate_page_range() and re-enable after __alloc_contig_migrate_range().
>
We require the caller to only pass a range within a single zone, so that
should be fine.
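Something like the following, then (a rough, untested sketch of that
suggestion against alloc_contig_range() in mm/page_alloc.c, mirroring the
pattern offline_pages() already uses; error handling elided, so the
failure path of start_isolate_page_range() would still need to re-enable):

	/*
	 * Sketch only: disable pcplists before isolating so that freeing
	 * cannot race with isolation, and re-enable once migration is done.
	 */
	zone_pcp_disable(cc.zone);				/* new */

	ret = start_isolate_page_range(pfn_max_align_down(start),
				       pfn_max_align_up(end), migratetype, 0);
	...
	ret = __alloc_contig_migrate_range(&cc, start, end);

	zone_pcp_enable(cc.zone);				/* new */

As a side effect, the existing drain_all_pages(cc.zone) call could
presumably be dropped, since zone_pcp_disable() already drains.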
The only ugly thing about zone_pcp_disable() is the
mutex_lock(&pcp_batch_high_lock), which would serialize all
alloc_contig_range() callers and even serialize them against
offline_pages().
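For context, zone_pcp_disable()/zone_pcp_enable() look roughly like this
(paraphrased from mm/page_alloc.c around this time; treat it as a sketch
rather than a verbatim quote):

	void zone_pcp_disable(struct zone *zone)
	{
		mutex_lock(&pcp_batch_high_lock);
		__zone_set_pageset_high_and_batch(zone, 0, 1);
		__drain_all_pages(zone, true);
	}

	void zone_pcp_enable(struct zone *zone)
	{
		__zone_set_pageset_high_and_batch(zone, zone->pageset_high,
						  zone->pageset_batch);
		mutex_unlock(&pcp_batch_high_lock);
	}

The mutex is held across the whole disabled section, which is what would
serialize concurrent alloc_contig_range() callers.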
--
Thanks,
David / dhildenb