From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Andrew Morton, Kefeng Wang, Mike Rapoport, Mike Rapoport, Russell King,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/3] arm: extend pfn_valid to take into account freed memory map alignment
Date: Tue, 18 May 2021 12:06:13 +0300
Message-Id: <20210518090613.21519-4-rppt@kernel.org>
In-Reply-To: <20210518090613.21519-1-rppt@kernel.org>
References: <20210518090613.21519-1-rppt@kernel.org>
From: Mike Rapoport

When the unused memory map is freed, the preserved part of the memory map is
extended to match pageblock boundaries, because lots of core mm functionality
relies on homogeneity of the memory map within pageblock boundaries.

Since pfn_valid() is used to check whether there is a valid memory map entry
for a PFN, make it return true also for PFNs that have memory map entries
even if there is no actual memory populated there.

Signed-off-by: Mike Rapoport
---
 arch/arm/mm/init.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 9d4744a632c6..bb678c0ba143 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -125,11 +125,24 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
 int pfn_valid(unsigned long pfn)
 {
 	phys_addr_t addr = __pfn_to_phys(pfn);
+	unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;
 
 	if (__phys_to_pfn(addr) != pfn)
 		return 0;
 
-	return memblock_is_map_memory(addr);
+	if (memblock_is_map_memory(addr))
+		return 1;
+
+	/*
+	 * If an address is less than pageblock_size bytes away from a present
+	 * memory chunk there will still be a memory map entry for it, because
+	 * we round the freed memory map to pageblock boundaries.
+	 */
+	if (memblock_is_map_memory(ALIGN(addr + 1, pageblock_size)) ||
+	    memblock_is_map_memory(ALIGN_DOWN(addr, pageblock_size)))
+		return 1;
+
+	return 0;
 }
 EXPORT_SYMBOL(pfn_valid);
 #endif
-- 
2.28.0
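
[Editor's note] For readers unfamiliar with the pageblock rounding of the freed
memory map that the patch relies on, the following stand-alone sketch mirrors
the effect of the two new checks. It is plain user-space C, not kernel code:
is_map_memory(), the example memory chunk addresses and the 2M pageblock size
are hypothetical stand-ins for memblock_is_map_memory() and the real ARM
configuration, chosen only to make the boundary behaviour easy to see.

/*
 * Minimal user-space illustration of the pageblock-boundary checks added
 * in the patch above. All addresses and sizes are example values.
 */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE		4096UL
#define PAGEBLOCK_NR_PAGES	512UL			/* hypothetical: 2M pageblocks */
#define PAGEBLOCK_SIZE		(PAGE_SIZE * PAGEBLOCK_NR_PAGES)

#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

/* Stand-in for memblock_is_map_memory(): one example chunk of mapped RAM */
static const unsigned long chunk_start = 0x80000000UL;
static const unsigned long chunk_end   = 0x80100000UL;	/* exclusive */

static bool is_map_memory(unsigned long addr)
{
	return addr >= chunk_start && addr < chunk_end;
}

/* Mirrors the extended pfn_valid() logic from the patch */
static bool has_memmap_entry(unsigned long addr)
{
	if (is_map_memory(addr))
		return true;

	/*
	 * The freed memory map is rounded to pageblock boundaries, so an
	 * address that shares a pageblock with mapped memory still has a
	 * memory map entry behind it.
	 */
	return is_map_memory(ALIGN(addr + 1, PAGEBLOCK_SIZE)) ||
	       is_map_memory(ALIGN_DOWN(addr, PAGEBLOCK_SIZE));
}

int main(void)
{
	/* Just past the chunk, but in the same 2M pageblock: still covered */
	printf("%d\n", (int)has_memmap_entry(0x80100000UL));	/* prints 1 */
	/* A full pageblock away from the chunk: not covered */
	printf("%d\n", (int)has_memmap_entry(0x80400000UL));	/* prints 0 */
	return 0;
}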