From: David Hildenbrand <david@redhat.com>
To: "Miaohe Lin" <linmiaohe@huawei.com>,
"HORIGUCHI NAOYA(堀口 直也)" <naoya.horiguchi@nec.com>,
"Oscar Salvador" <osalvador@suse.de>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
Andrew Morton <akpm@linux-foundation.org>,
Mike Kravetz <mike.kravetz@oracle.com>,
Yang Shi <shy828301@gmail.com>,
Muchun Song <songmuchun@bytedance.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH v1 0/4] mm, hwpoison: improve handling workload related to hugetlb and memory_hotplug
Date: Thu, 12 May 2022 14:59:53 +0200 [thread overview]
Message-ID: <5b43d8f7-3477-a2c2-028e-e31d40ac932c@redhat.com> (raw)
In-Reply-To: <04781d15-9d87-1763-02fe-e353679c50d7@huawei.com>
On 12.05.22 13:13, Miaohe Lin wrote:
> On 2022/5/12 15:28, David Hildenbrand wrote:
>>>>>>
>>>>>> Once the problematic DIMM would actually get unplugged, the memory block devices
>>>>>> would get removed as well. So when hotplugging a new DIMM in the same
>>>>>> location, we could online that memory again.
>>>>>
>>>>> What about the PG_hwpoison flags? Are the struct pages also freed and
>>>>> reallocated during the actual DIMM replacement?
>>>>
>>>> Once memory is offline, the memmap is stale and no longer
>>>> trustworthy. It gets reinitialized during memory onlining -- so any
>>>> previous PG_hwpoison is overridden at least there. In some setups, we
>>>> even poison the whole memmap via page_init_poison() during memory offlining.
>>>>
>>>> Apart from that, we should be freeing the memmap in all relevant cases
>>>> when removing memory. I remember there are a couple of corner cases, but
>>>> we don't really have to care about that.
>>>
>>> OK, so there seems to be no need to manipulate struct pages for
>>> hwpoison in all relevant cases.
>>
>> Right. When offlining a memory block, all we have to do is remember
>> whether we stumbled over a hwpoisoned page, and record that inside the
>> memory block. Rejecting to online it again is then easy.
>
> BTW: How should we deal with the below race window:
>
> CPU A                       CPU B                     CPU C
> accesses page while
> holding page refcnt
>                             memory_failure() happens
>                             on the page
>                                                       offline_pages()
>                                                       page can be offlined because
>                                                       the page refcnt is ignored
>                                                       when PG_hwpoison is set
> can still access the
> struct page ...
>
> Any in-use page (with its page refcnt incremented) might be offlined while its content, e.g. flags, private ...,
> can still be accessed if the above race happens. Is this possible? Or am I missing something?
> Any suggestion on how to fix it? I can't figure out a way yet. :(
I assume you mean that test_pages_isolated() essentially only checks for
PageHWPoison() and doesn't care about the refcount?
That part is very dodgy and it's part of my motivation to question that
whole handling in the first place.
In do_migrate_range(), there is a comment:
"
HWPoison pages have elevated reference counts so the migration would
fail on them. It also doesn't make any sense to migrate them in the
first place. Still try to unmap such a page in case it is still mapped
(e.g. current hwpoison implementation doesn't unmap KSM pages but keep
the unmap as the catch all safety net).
"
My assumption would be: if there are any unexpected references to a
hwpoisoned page, we must fail offlining. Ripping out the page might be
more harmful than just leaving it in place and failing offlining for the
time being.
I am no expert on PageHWPoison(). Which guarantees do we have regarding
the page count?
If we succeed in unmapping the page, there shouldn't be any references
from the page tables. We might still have GUP references to such pages,
and it would be fair enough to fail offlining. I remember we try
removing the page from the pagecache etc. to free up these references.
So which additional references do we have that the comment in the
offlining code talks about? A single additional one from the hwpoison code?
Once we figure that out, we might tweak test_pages_isolated() to also
consider the page count and not rip out random pages that are still
referenced in the system.
--
Thanks,
David / dhildenb