public inbox for linux-mm@kvack.org
From: Miaohe Lin <linmiaohe@huawei.com>
To: "HORIGUCHI NAOYA(堀口 直也)" <naoya.horiguchi@nec.com>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>, Xu Yu <xuyu@linux.alibaba.com>,
	Oscar Salvador <osalvador@suse.de>
Subject: Re: [PATCH] mm/memory-failure.c: bail out early if huge zero page
Date: Wed, 13 Apr 2022 17:03:18 +0800	[thread overview]
Message-ID: <094eb114-7c7d-72d8-e64d-ad36952813d7@huawei.com> (raw)
In-Reply-To: <20220413083620.GA3278735@hori.linux.bs1.fc.nec.co.jp>

On 2022/4/13 16:36, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Apr 12, 2022 at 07:08:45PM +0800, Miaohe Lin wrote:
> ...
>>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>>> index 9b76222ee237..771fb4fc626c 100644
>>> --- a/mm/memory-failure.c
>>> +++ b/mm/memory-failure.c
>>> @@ -1852,6 +1852,12 @@ int memory_failure(unsigned long pfn, int flags)
>>>     }
>>>
>>>     if (PageTransHuge(hpage)) {
>>> +           if (is_huge_zero_page(hpage)) {
>>> +                   action_result(pfn, MF_MSG_KERNEL_HIGH_ORDER, MF_IGNORED);
>>> +                   res = -EBUSY;
>>> +                   goto unlock_mutex;
>>> +           }
>>> +
>>
>> It seems that huge_zero_page could be handled simply by zapping the
>> corresponding page table entries, without losing any user data.
> 
> Yes, zapping all page table entries mapping huge_zero_page is OK, and I
> think that huge_zero_page should also be set to NULL.  The broken
> huge_zero_page holds no user data, but it could contain corrupted data
> (with unexpected non-zero bits), so it's safer to replace it with a new
> zero page.  And get_huge_zero_page() seems to allocate a new huge zero
> page if huge_zero_page is NULL when called, so it would be gracefully
> switched to a new one on the first later access.

Agree.
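To make that lifecycle concrete, here is a minimal user-space C sketch of the pattern being discussed: a shared zero page allocated lazily on first use, detached when a (simulated) memory error hits it, and transparently replaced on the next access. This is an illustrative model only; the function names mirror the kernel's get_huge_zero_page() for readability, but none of this is the actual kernel implementation.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

#define PAGE_BYTES 4096

/* Shared lazily-allocated zero page; NULL means "not allocated yet". */
static _Atomic(unsigned char *) huge_zero_page = NULL;

/* Return the shared zero page, allocating a fresh zero-filled one if
 * none exists.  A compare-and-swap resolves allocation races. */
static unsigned char *get_huge_zero_page(void)
{
	unsigned char *page = atomic_load(&huge_zero_page);
	if (page)
		return page;

	unsigned char *fresh = calloc(1, PAGE_BYTES);	/* zero-filled */
	unsigned char *expected = NULL;
	if (atomic_compare_exchange_strong(&huge_zero_page, &expected, fresh))
		return fresh;
	free(fresh);			/* lost the race; use the winner's page */
	return expected;
}

/* Model of the error path: detach the broken page so later callers of
 * get_huge_zero_page() allocate a clean replacement.  (In the kernel
 * the poisoned page would be isolated, not freed for reuse.) */
static void memory_failure_on_zero_page(void)
{
	unsigned char *broken = atomic_exchange(&huge_zero_page, NULL);
	free(broken);
}
```

The key property is that the error path only has to reset the shared pointer; no caller needs to know the old page died, because the next get_huge_zero_page() call rebuilds it.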

> 
>> Should we also try to handle this kind of page? Or just bail out as it's rare?
> 
> We should handle it if it's worth doing.  I think that memory errors on
> zero pages might be rare events (because they occupy a small portion of
> physical memory).  But if zero pages could be used by many processes,
> the impact of the error might be non-negligible.

Yes, when this becomes non-negligible, we could handle it. :)
Thanks.

> 
> Thanks,
> Naoya Horiguchi
> 




Thread overview: 15+ messages
2022-04-10 15:22 [PATCH] mm/memory-failure.c: bail out early if huge zero page Xu Yu
2022-04-11  2:18 ` Miaohe Lin
2022-04-12  9:09   ` Naoya Horiguchi
2022-04-12  9:45     ` Yu Xu
2022-04-12 10:00       ` Yu Xu
2022-04-12 11:11         ` HORIGUCHI NAOYA(堀口 直也)
2022-04-12 11:08     ` Miaohe Lin
2022-04-13  8:36       ` HORIGUCHI NAOYA(堀口 直也)
2022-04-13  9:03         ` Miaohe Lin [this message]
2022-04-12  8:31 ` Oscar Salvador
2022-04-12  9:25   ` Miaohe Lin
2022-04-12  9:30     ` Oscar Salvador
2022-04-12  9:47       ` Yu Xu
2022-04-12 10:58       ` Miaohe Lin
2022-04-12  8:59 ` Yu Xu
