linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	akpm@linux-foundation.org, vbabka@suse.cz, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages
Date: Thu, 15 Jun 2023 10:41:30 +0200	[thread overview]
Message-ID: <dbc29be7-554e-3ec6-fcef-c75c7bc4f80d@redhat.com> (raw)
In-Reply-To: <87bkhhf7d2.fsf@yhuang6-desk2.ccr.corp.intel.com>

On 15.06.23 10:38, Huang, Ying wrote:
> David Hildenbrand <david@redhat.com> writes:
> 
>> On 15.06.23 09:22, Huang, Ying wrote:
>>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>>>
>>>> On 6/15/2023 11:22 AM, Huang, Ying wrote:
>>>>> Hi, Mel,
>>>>> Mel Gorman <mgorman@techsingularity.net> writes:
>>>>>
>>>>>> On Tue, Jun 13, 2023 at 04:55:04PM +0800, Baolin Wang wrote:
>>>>>>> On some machines, the normal zone can have a large memory hole like
>>>>>>> the memory layout below, where the range from 0x100000000 to
>>>>>>> 0x1800000000 is a hole. When isolating migratable pages, the scanner
>>>>>>> can hit this hole and take a long time to skip over it. From my
>>>>>>> measurements, the isolation scanner takes 80us ~ 100us to skip the
>>>>>>> large hole [0x100000000 - 0x1800000000].
>>>>>>>
>>>>>>> So add a new helper that quickly searches for the next online memory
>>>>>>> section, which lets the scanner skip the large hole and find the next
>>>>>>> suitable pageblock efficiently. With this patch, scanning across the
>>>>>>> large hole takes < 1us.
>>>>>>>
>>>>>>> [    0.000000] Zone ranges:
>>>>>>> [    0.000000]   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
>>>>>>> [    0.000000]   DMA32    empty
>>>>>>> [    0.000000]   Normal   [mem 0x0000000100000000-0x0000001fa7ffffff]
>>>>>>> [    0.000000] Movable zone start for each node
>>>>>>> [    0.000000] Early memory node ranges
>>>>>>> [    0.000000]   node   0: [mem 0x0000000040000000-0x0000000fffffffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001800000000-0x0000001fa3c7ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa3c80000-0x0000001fa3ffffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa4000000-0x0000001fa402ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa4030000-0x0000001fa40effff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa40f0000-0x0000001fa73cffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa73d0000-0x0000001fa745ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7460000-0x0000001fa746ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7470000-0x0000001fa758ffff]
>>>>>>> [    0.000000]   node   0: [mem 0x0000001fa7590000-0x0000001fa7ffffff]
>>>>>>>
>>>>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
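
[For illustration, here is a minimal sketch of the kind of helper the patch
describes: fast-forwarding a PFN to the start of the next online memory
section using the SPARSEMEM section helpers (pfn_to_section_nr(),
online_section_nr(), section_nr_to_pfn()). The function name and exact
bounds are illustrative, not necessarily what the posted patch uses, and
CONFIG_SPARSEMEM is assumed.]

#include <linux/mmzone.h>

/*
 * Sketch only: return the first PFN of the next online section after
 * start_pfn, or 0 if start_pfn is already in an online section or no
 * further online section exists.
 */
static unsigned long skip_offline_sections(unsigned long start_pfn)
{
	unsigned long section_nr = pfn_to_section_nr(start_pfn);

	/* Already inside an online section: nothing to skip. */
	if (online_section_nr(section_nr))
		return 0;

	/* Walk section numbers until an online one is found. */
	while (++section_nr < NR_MEM_SECTIONS) {
		if (online_section_nr(section_nr))
			return section_nr_to_pfn(section_nr);
	}

	return 0;
}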
>>>>>>
>>>>>> This may only be necessary for non-contiguous zones so a check for
>>>>>> zone_contiguous could be made but I suspect the saving, if any, would be
>>>>>> marginal.
>>>>>>
>>>>>> However, it's subtle that block_end_pfn can end up in an arbitrary location
>>>>>> past the end of the zone or past cc->free_pfn. As the "continue" will update
>>>>>> cc->migrate_pfn, that might lead to errors in the future. It would be a
>>>>>> lot safer to pass in cc->free_pfn and do two things with the value. First,
>>>>>> there is no point scanning for a valid online section past cc->free_pfn so
>>>>>> terminating after cc->free_pfn may save some cycles. Second, cc->migrate_pfn
>>>>>> does not end up with an arbitrary value which is a more defensive approach
>>>>>> to any future programming errors.
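
[A rough sketch of the bounded variant Mel is describing, again with
illustrative names rather than the posted patch: the free scanner's
position is passed in as an upper bound, so the section walk terminates
once it reaches cc->free_pfn and the returned PFN can never land past it.]

#include <linux/minmax.h>
#include <linux/mmzone.h>

/* Sketch only: as above, but never search or return past limit_pfn. */
static unsigned long skip_offline_sections(unsigned long start_pfn,
					   unsigned long limit_pfn)
{
	unsigned long section_nr = pfn_to_section_nr(start_pfn);
	unsigned long limit_nr = pfn_to_section_nr(limit_pfn);

	if (online_section_nr(section_nr))
		return 0;

	/* Stop the walk at the free scanner's section. */
	while (++section_nr <= limit_nr) {
		if (online_section_nr(section_nr))
			return min(section_nr_to_pfn(section_nr), limit_pfn);
	}

	return 0;
}

/* Call-site shape: skip_offline_sections(block_start_pfn, cc->free_pfn) */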
>>>>> I have thought about this before.  Originally, I thought we were safe
>>>>> because cc->free_pfn should be in an online section and block_end_pfn
>>>>> should reach cc->free_pfn before the end of the zone.  But after
>>>>> checking more code and thinking about it again, I found that the
>>>>> underlying sections may go offline under us during compaction, so
>>>>> cc->free_pfn may end up in an offline section or past the end of the
>>>>> zone.  So you are right, we need to consider the range of
>>>>> block_end_pfn.
>>>>> But if we think this way (memory can go online/offline at any time),
>>>>> it appears that we need to check whether the underlying section has
>>>>> been offlined.  For example, is it safe to use "pfn_to_page()" in
>>>>> "isolate_migratepages_block()"?  Is it possible for the underlying
>>>>> section to be offlined under us?
>>>>
>>>> It is possible. There is a previous discussion[1] about the race
>>>> between pfn_to_online_page() and memory offline.
>>>>
>>>> [1]
>>>> https://lore.kernel.org/lkml/87zgc6buoq.fsf@nvidia.com/T/#m642d91bcc726437e1848b295bc57ce249c7ca399
>>> Thank you very much for sharing!  That answers my questions
>>> directly!
>>
>> I remember another discussion (but can't find it) regarding why memory
>> compaction can get away without pfn_to_online_page() all over the
>> place; its use is limited to __reset_isolation_pfn().
> 
> Per my understanding, isolate_migratepages() -> pageblock_pfn_to_page()
> will check whether the pageblock is online.  So if the pageblock isn't
> offlined afterwards, we can use pfn_to_page().

Oh, indeed, that was the magic bit, thanks!
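
[For anyone following along: the check being referred to lives in
__pageblock_pfn_to_page() in mm/page_alloc.c. Below is a simplified,
illustrative sketch of its shape; the helper name is made up here and the
real function does a bit more.]

#include <linux/memory_hotplug.h>
#include <linux/mm.h>

/*
 * Sketch only: a pageblock is handed to the migration scanner only if
 * its boundaries map to online memory in the right zone, so a plain
 * pfn_to_page() inside the block is safe as long as the section is not
 * offlined underneath us.
 */
static struct page *pageblock_page_if_online(unsigned long start_pfn,
					     unsigned long end_pfn,
					     struct zone *zone)
{
	struct page *start_page, *end_page;

	/* Returns NULL if the PFN is invalid or the section is offline. */
	start_page = pfn_to_online_page(start_pfn);
	if (!start_page)
		return NULL;

	if (!pfn_valid(end_pfn - 1))
		return NULL;
	end_page = pfn_to_page(end_pfn - 1);

	/* Both ends must belong to the zone being compacted. */
	if (page_zone(start_page) != zone || page_zone(end_page) != zone)
		return NULL;

	return start_page;
}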

-- 
Cheers,

David / dhildenb



Thread overview: 13+ messages
2023-06-13  8:55 [PATCH v2] mm: compaction: skip memory hole rapidly when isolating migratable pages Baolin Wang
2023-06-13  9:56 ` David Hildenbrand
2023-06-13 11:13   ` Baolin Wang
2023-06-13 12:36     ` David Hildenbrand
2023-06-14  1:08       ` Huang, Ying
2023-06-14  9:55 ` Mel Gorman
2023-06-14 12:22   ` Baolin Wang
2023-06-15  3:22   ` Huang, Ying
2023-06-15  3:59     ` Baolin Wang
2023-06-15  7:22       ` Huang, Ying
2023-06-15  7:46         ` David Hildenbrand
2023-06-15  8:38           ` Huang, Ying
2023-06-15  8:41             ` David Hildenbrand [this message]
