From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Michal Simek, Mike Rapoport, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 1/4] mm/page_alloc: always initialize memory map for the holes
Date: Wed, 14 Jul 2021 15:37:36 +0300
Message-Id: <20210714123739.16493-2-rppt@kernel.org>
In-Reply-To: <20210714123739.16493-1-rppt@kernel.org>
References: <20210714123739.16493-1-rppt@kernel.org>
MIME-Version: 1.0

From: Mike Rapoport

Currently, the memory map for holes is initialized only when the SPARSEMEM memory model is used. Yet, even with FLATMEM there can be holes in the physical memory layout that have memory map entries.
For instance, memory reserved using the e820 API on i386, or "reserved-memory" nodes in the device tree, would not appear in memblock.memory, and hence the struct pages for such holes will be skipped during memory map initialization. These struct pages will be zeroed because the memory map for FLATMEM systems is allocated with memblock_alloc_node(), which clears the allocated memory.

While zeroed struct pages do not cause immediate problems, the correct behaviour is to initialize every page using __init_single_page(). Besides, enabling page poisoning for the FLATMEM case will trigger PF_POISONED_CHECK() unless the memory map is properly initialized.

Make sure init_unavailable_range() is called for both SPARSEMEM and FLATMEM, so that struct pages representing memory holes appear as PG_Reserved with any memory layout.

Signed-off-by: Mike Rapoport
---
 mm/page_alloc.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3b97e17806be..878d7af4403d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6624,7 +6624,6 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 	}
 }
 
-#if !defined(CONFIG_FLATMEM)
 /*
  * Only struct pages that correspond to ranges defined by memblock.memory
  * are zeroed and initialized by going through __init_single_page() during
@@ -6669,13 +6668,6 @@ static void __init init_unavailable_range(unsigned long spfn,
 	pr_info("On node %d, zone %s: %lld pages in unavailable ranges",
 		node, zone_names[zone], pgcnt);
 }
-#else
-static inline void init_unavailable_range(unsigned long spfn,
-					  unsigned long epfn,
-					  int zone, int node)
-{
-}
-#endif
 
 static void __init memmap_init_zone_range(struct zone *zone,
 		unsigned long start_pfn,
-- 
2.28.0