From mboxrd@z Thu Jan 1 00:00:00 1970
From: zhenwei pi <pizhenwei@bytedance.com>
To: akpm@linux-foundation.org, naoya.horiguchi@nec.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	zhenwei pi <pizhenwei@bytedance.com>
Subject: [PATCH v2 1/5] mm/memory-failure.c: move clear_hwpoisoned_pages
Date: Mon, 9 May 2022 18:56:37 +0800
Message-Id: <20220509105641.491313-2-pizhenwei@bytedance.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220509105641.491313-1-pizhenwei@bytedance.com>
References: <20220509105641.491313-1-pizhenwei@bytedance.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

clear_hwpoisoned_pages() clears the HWPoison flag and decreases the
number of poisoned pages; it is really part of memory-failure handling.
Move the function from sparse.c to memory-failure.c, so that sparse.c no
longer needs any CONFIG_MEMORY_FAILURE guards.

Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
---
 mm/internal.h       | 11 +++++++++++
 mm/memory-failure.c | 21 +++++++++++++++++++++
 mm/sparse.c         | 27 ---------------------------
 3 files changed, 32 insertions(+), 27 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index cf16280ce132..84dd6aa7ba97 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -634,6 +634,9 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
 }
 #endif
 
+/*
+ * mm/memory-failure.c
+ */
 extern int hwpoison_filter(struct page *p);
 
 extern u32 hwpoison_filter_dev_major;
@@ -643,6 +646,14 @@ extern u64 hwpoison_filter_flags_value;
 extern u64 hwpoison_filter_memcg;
 extern u32 hwpoison_filter_enable;
 
+#ifdef CONFIG_MEMORY_FAILURE
+void clear_hwpoisoned_pages(struct page *memmap, int nr_pages);
+#else
+static inline void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
+{
+}
+#endif
+
 extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
         unsigned long, unsigned long,
         unsigned long, unsigned long);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 27760c19bad7..46d9fb612dcc 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2401,3 +2401,24 @@ int soft_offline_page(unsigned long pfn, int flags)
 
 	return ret;
 }
+
+void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
+{
+	int i;
+
+	/*
+	 * A further optimization is to have per section refcounted
+	 * num_poisoned_pages. But that would need more space per memmap, so
+	 * for now just do a quick global check to speed up this routine in the
+	 * absence of bad pages.
+	 */
+	if (atomic_long_read(&num_poisoned_pages) == 0)
+		return;
+
+	for (i = 0; i < nr_pages; i++) {
+		if (PageHWPoison(&memmap[i])) {
+			num_poisoned_pages_dec();
+			ClearPageHWPoison(&memmap[i]);
+		}
+	}
+}
diff --git a/mm/sparse.c b/mm/sparse.c
index 952f06d8f373..e983c38fac8f 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -916,33 +916,6 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
 	return 0;
 }
 
-#ifdef CONFIG_MEMORY_FAILURE
-static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
-{
-	int i;
-
-	/*
-	 * A further optimization is to have per section refcounted
-	 * num_poisoned_pages. But that would need more space per memmap, so
-	 * for now just do a quick global check to speed up this routine in the
-	 * absence of bad pages.
-	 */
-	if (atomic_long_read(&num_poisoned_pages) == 0)
-		return;
-
-	for (i = 0; i < nr_pages; i++) {
-		if (PageHWPoison(&memmap[i])) {
-			num_poisoned_pages_dec();
-			ClearPageHWPoison(&memmap[i]);
-		}
-	}
-}
-#else
-static inline void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
-{
-}
-#endif
-
 void sparse_remove_section(struct mem_section *ms, unsigned long pfn,
 		unsigned long nr_pages, unsigned long map_offset,
 		struct vmem_altmap *altmap)
-- 
2.20.1
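As context for reviewers, the behavior of the moved routine, including the fast-path check on the global poison counter, can be modeled in plain userspace C. This is a simplified sketch, not kernel code: `struct fake_page`, `num_poisoned`, `HWPOISON_FLAG`, and the helper names below are stand-ins for `struct page`, `num_poisoned_pages`, and the real page-flag accessors.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define HWPOISON_FLAG 0x1UL

/* Stand-in for the global num_poisoned_pages counter. */
static atomic_long num_poisoned = 0;

/* Stand-in for struct page: just a flags word per page. */
struct fake_page {
	unsigned long flags;
};

static bool page_hwpoison(const struct fake_page *p)
{
	return p->flags & HWPOISON_FLAG;
}

static void clear_hwpoisoned_pages_model(struct fake_page *memmap, int nr_pages)
{
	int i;

	/* Fast path: no poisoned pages anywhere, so skip the scan entirely. */
	if (atomic_load(&num_poisoned) == 0)
		return;

	for (i = 0; i < nr_pages; i++) {
		if (page_hwpoison(&memmap[i])) {
			atomic_fetch_sub(&num_poisoned, 1);
			memmap[i].flags &= ~HWPOISON_FLAG;
		}
	}
}
```

The point of the global check is that section removal is common and poisoned pages are rare, so most calls return without touching the (potentially large) memmap at all.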