From: David Hildenbrand <david@redhat.com>
To: Mike Kravetz <mike.kravetz@oracle.com>,
Oscar Salvador <osalvador@suse.de>,
linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, mhocko@suse.com
Subject: Re: [RFC PATCH] mm,memory_hotplug: Unlock 1GB-hugetlb on x86_64
Date: Thu, 28 Feb 2019 08:38:34 +0100 [thread overview]
Message-ID: <bb71b68e-dc1b-a4d3-d842-b311535b92a8@redhat.com> (raw)
In-Reply-To: <201cc8d8-953f-f198-bbfe-96470136db68@oracle.com>
On 27.02.19 23:00, Mike Kravetz wrote:
> On 2/27/19 1:51 PM, Oscar Salvador wrote:
>> On Thu, Feb 21, 2019 at 10:42:12AM +0100, Oscar Salvador wrote:
>>> [1] https://lore.kernel.org/patchwork/patch/998796/
>>>
>>> Signed-off-by: Oscar Salvador <osalvador@suse.de>
>>
>> Any further comments on this?
>> I do have a "concern" I would like to sort out before dropping the RFC:
>>
>> It is the fact that unless we have spare gigantic pages on other nodes, the
>> offlining operation will loop forever (until the customer cancels it).
>> While I do not really like that, I do think that memory offlining should be done
>> with some sanity, and the administrator should know in advance whether the system
>> will be able to keep up with the memory pressure, i.e.: make sure we have what we
>> need for the offlining operation to succeed.
>> That translates to making sure that we have spare gigantic pages and that other
>> nodes can take them.
>>
>> That being said, another thing I thought about is that we could check whether we
>> have spare gigantic pages at has_unmovable_pages() time.
>> Something like checking "h->free_huge_pages - h->resv_huge_pages > 0", and if it
>> turns out that we do not have spare gigantic pages anywhere, just return as if we
>> had non-movable pages.
>
> Of course, that check would be racy. Even if there is an available gigantic
> page at has_unmovable_pages() time, there is no guarantee it will still be there
> when we want to allocate/use it. But you would at least catch 'most' cases of
> looping forever.
>
>> But I would rather not convolute has_unmovable_pages() with such checks and
>> "trust" the administrator.
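
For illustration only, here is a rough sketch of the kind of check Oscar
describes above. The helper name and its placement are invented for the
sketch, locking is omitted, and (as Mike points out) the result can be stale
by the time a destination page is actually needed:

  /*
   * Hypothetical helper, not from the patch: does this hstate have any
   * free pages that are not already claimed by reservations?  Locking is
   * omitted, and the answer can be stale a moment later.
   */
  static bool has_spare_huge_pages(struct hstate *h)
  {
          return h->free_huge_pages > h->resv_huge_pages;
  }

  /* ...and one possible use inside has_unmovable_pages(): */
  if (PageHuge(page)) {
          struct hstate *h = page_hstate(compound_head(page));

          /* No spare gigantic pages anywhere: treat the range as unmovable. */
          if (hstate_is_gigantic(h) && !has_spare_huge_pages(h))
                  return true;
  }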

I think we already have the exact same issue with huge/ordinary pages if we
are low on memory: we could loop forever.

In the long run we should properly detect such situations and abort instead
of looping forever, I guess. But as we all know, error handling in the whole
offlining path is still far from perfect ...
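
Just to illustrate the direction (not existing code; all names below are
invented for the sketch): the offlining retry loop could be bounded so that a
persistent migration failure is reported instead of being retried forever.

  /*
   * Hypothetical shape of a bounded retry loop in the offlining path:
   * give up with -EBUSY after a fixed number of failed migration passes
   * instead of spinning until the operation is cancelled.
   */
  int retries = 0, ret;

  do {
          if (signal_pending(current))
                  return -EINTR;                  /* cancelled by the admin */

          ret = migrate_range(start_pfn, end_pfn);    /* invented name */
          if (!ret)
                  break;                          /* everything moved away */

          if (++retries > MAX_OFFLINE_RETRIES)    /* invented limit */
                  return -EBUSY;                  /* abort, don't loop forever */

          cond_resched();
  } while (ret);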
--
Thanks,
David / dhildenb
Thread overview: 15+ messages
2019-02-21 9:42 [RFC PATCH] mm,memory_hotplug: Unlock 1GB-hugetlb on x86_64 Oscar Salvador
2019-02-21 22:12 ` Mike Kravetz
2019-02-22 8:24 ` Oscar Salvador
2019-02-27 21:51 ` Oscar Salvador
2019-02-27 22:00 ` Mike Kravetz
2019-02-28 7:38 ` David Hildenbrand [this message]
2019-02-28 9:16 ` Michal Hocko
2019-02-28 9:21 ` Michal Hocko
2019-02-28 9:41 ` Oscar Salvador
2019-02-28 9:55 ` Michal Hocko
2019-02-28 10:19 ` Oscar Salvador
2019-02-28 12:11 ` Michal Hocko
2019-02-28 13:40 ` Oscar Salvador
2019-02-28 14:08 ` Michal Hocko
2019-02-28 21:01 ` Oscar Salvador