From: Naoya Horiguchi
To: linux-mm@kvack.org
Cc: Andrew Morton, Linus Torvalds, Oscar Salvador, Michal Hocko, Tony Luck, "Aneesh Kumar K.V", Naoya Horiguchi, linux-kernel@vger.kernel.org
Subject: [PATCH v3] mm/hwpoison: do not lock page again when me_huge_page() successfully recovers
Date: Wed, 9 Jun 2021 16:20:29 +0900
Message-Id: <20210609072029.74645-1-nao.horiguchi@gmail.com>

From: Naoya Horiguchi

Currently me_huge_page() temporarily unlocks the page to perform some actions, then locks it again later.  My test case (which calls hard-offline on some tail page in a hugetlb page, then accesses an address in the hugetlb range) showed that the page allocation code detects this stale page lock on a buddy page and prints a "BUG: Bad page state" message.  check_new_page_bad() does not consider a page with __PG_HWPOISON set to be a bad page, so this flag works as a kind of filter, but the filtering does not work in this case because the "bad page" is not the actual hwpoisoned page.  So stop locking the page again.
Which action to take depends on the page type of the error, so it is natural to have the page unlocking done in the ->action() callbacks.  Make that the assumed convention and update all existing callbacks accordingly.

Fixes: 78bb920344b8 ("mm: hwpoison: dissolve in-use hugepage in unrecoverable memory error")
Signed-off-by: Naoya Horiguchi
Cc: stable@vger.kernel.org
---
ChangeLog v3:
- switch to "unlock page in ->action()" approach based on suggestion from Linus
  https://lore.kernel.org/linux-mm/CAHk-=wht_gk_d9k+NZs7eJvBeLOQT4xGcykgaCRHuiQ+LbReRw@mail.gmail.com/
---
 mm/memory-failure.c | 44 ++++++++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git v5.13-rc2/mm/memory-failure.c v5.13-rc2_patched/mm/memory-failure.c
index 763955dd6431..e5a1531f7f4e 100644
--- v5.13-rc2/mm/memory-failure.c
+++ v5.13-rc2_patched/mm/memory-failure.c
@@ -801,6 +801,7 @@ static int truncate_error_page(struct page *p, unsigned long pfn,
  */
 static int me_kernel(struct page *p, unsigned long pfn)
 {
+	unlock_page(p);
 	return MF_IGNORED;
 }
 
@@ -810,6 +811,7 @@ static int me_kernel(struct page *p, unsigned long pfn)
 static int me_unknown(struct page *p, unsigned long pfn)
 {
 	pr_err("Memory failure: %#lx: Unknown page state\n", pfn);
+	unlock_page(p);
 	return MF_FAILED;
 }
 
@@ -818,6 +820,7 @@ static int me_unknown(struct page *p, unsigned long pfn)
  */
 static int me_pagecache_clean(struct page *p, unsigned long pfn)
 {
+	int ret;
 	struct address_space *mapping;
 
 	delete_from_lru_cache(p);
@@ -826,8 +829,10 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
 	 * For anonymous pages we're done the only reference left
 	 * should be the one m_f() holds.
 	 */
-	if (PageAnon(p))
-		return MF_RECOVERED;
+	if (PageAnon(p)) {
+		ret = MF_RECOVERED;
+		goto out;
+	}
 
 	/*
 	 * Now truncate the page in the page cache. This is really
@@ -841,7 +846,8 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
 		/*
 		 * Page has been teared down in the meanwhile
 		 */
-		return MF_FAILED;
+		ret = MF_FAILED;
+		goto out;
 	}
 
 	/*
@@ -849,7 +855,10 @@ static int me_pagecache_clean(struct page *p, unsigned long pfn)
 	 *
 	 * Open: to take i_mutex or not for this? Right now we don't.
 	 */
-	return truncate_error_page(p, pfn, mapping);
+	ret = truncate_error_page(p, pfn, mapping);
+out:
+	unlock_page(p);
+	return ret;
 }
 
 /*
@@ -925,24 +934,26 @@ static int me_pagecache_dirty(struct page *p, unsigned long pfn)
  */
 static int me_swapcache_dirty(struct page *p, unsigned long pfn)
 {
+	int ret;
+
 	ClearPageDirty(p);
 	/* Trigger EIO in shmem: */
 	ClearPageUptodate(p);
 
-	if (!delete_from_lru_cache(p))
-		return MF_DELAYED;
-	else
-		return MF_FAILED;
+	ret = delete_from_lru_cache(p) ? MF_FAILED : MF_DELAYED;
+	unlock_page(p);
+	return ret;
 }
 
 static int me_swapcache_clean(struct page *p, unsigned long pfn)
 {
+	int ret;
+
 	delete_from_swap_cache(p);
 
-	if (!delete_from_lru_cache(p))
-		return MF_RECOVERED;
-	else
-		return MF_FAILED;
+	ret = delete_from_lru_cache(p) ? MF_FAILED : MF_RECOVERED;
+	unlock_page(p);
+	return ret;
 }
 
 /*
@@ -963,6 +974,7 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 	mapping = page_mapping(hpage);
 	if (mapping) {
 		res = truncate_error_page(hpage, pfn, mapping);
+		unlock_page(hpage);
 	} else {
 		res = MF_FAILED;
 		unlock_page(hpage);
@@ -977,7 +989,6 @@ static int me_huge_page(struct page *p, unsigned long pfn)
 			page_ref_inc(p);
 			res = MF_RECOVERED;
 		}
-		lock_page(hpage);
 	}
 
 	return res;
@@ -1009,6 +1020,8 @@ static struct page_state {
 	unsigned long mask;
 	unsigned long res;
 	enum mf_action_page_type type;
+
+	/* Callback ->action() has to unlock the relevant page inside it. */
 	int (*action)(struct page *p, unsigned long pfn);
 } error_states[] = {
 	{ reserved,	reserved,	MF_MSG_KERNEL,	me_kernel },
@@ -1072,6 +1085,7 @@ static int page_action(struct page_state *ps, struct page *p,
 	int result;
 	int count;
 
+	/* page p should be unlocked after returning from ps->action(). */
 	result = ps->action(p, pfn);
 
 	count = page_count(p) - 1;
@@ -1476,7 +1490,7 @@ static int memory_failure_hugetlb(unsigned long pfn, int flags)
 		goto out;
 	}
 
-	res = identify_page_state(pfn, p, page_flags);
+	return identify_page_state(pfn, p, page_flags);
 out:
 	unlock_page(head);
 	return res;
@@ -1768,6 +1782,8 @@ int memory_failure(unsigned long pfn, int flags)
 
 identify_page_state:
 	res = identify_page_state(pfn, p, page_flags);
+	mutex_unlock(&mf_mutex);
+	return res;
 unlock_page:
 	unlock_page(p);
 unlock_mutex:
-- 
2.25.1