From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 18 May 2021 13:53:46 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "Russell King (Oracle)"
Cc: linux-arm-kernel@lists.infradead.org, Andrew Morton, Kefeng Wang,
	Mike Rapoport, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 3/3] arm: extend pfn_valid to take into account freed
	memory map alignment
References: <20210518090613.21519-1-rppt@kernel.org>
	<20210518090613.21519-4-rppt@kernel.org>
	<20210518094427.GR12395@shell.armlinux.org.uk>
In-Reply-To: <20210518094427.GR12395@shell.armlinux.org.uk>

On Tue, May 18, 2021 at 10:44:27AM +0100, Russell King (Oracle) wrote:
> On Tue, May 18, 2021 at 12:06:13PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > When the unused memory map is freed, the preserved part of the memory map
> > is extended to match pageblock boundaries, because lots of core mm
> > functionality relies on homogeneity of the memory map within pageblock
> > boundaries.
> >
> > Since pfn_valid() is used to check whether there is a valid memory map
> > entry for a PFN, make it return true also for PFNs that have memory map
> > entries even if there is no actual memory populated there.
>
> I thought pfn_valid() was a particularly hot path... do we really want
> to be doing multiple lookups here? Is there no better solution?

It is hot, but for more, hmm, straightforward memory layouts it will take the

	if (memblock_is_map_memory(addr))
		return 1;

branch, I think. Most mm operations are on pages that are fed into the buddy
allocator, and if there are no holes with weird alignment, pfn_valid() will
return 1 right away.

Now thinking about it, with the patch that marks NOMAP areas reserved in the
memory map [1], we could also use

	memblock_overlaps_region(&memblock.memory,
				 ALIGN_DOWN(addr, pageblock_size),
				 ALIGN(addr + 1, pageblock_size))

to have only one lookup.

A completely different approach would be to simply stop freeing the memory
map with SPARSEMEM. For systems like the one Kefeng is using, it would waste
less than 2M out of 1.5G. It is worse, of course, for old systems with small
memories; the worst case is mach-ep93xx with a section size of 256M, where I
presume 16M per section would be normal for such machines.

[1] https://lore.kernel.org/lkml/20210511100550.28178-3-rppt@kernel.org

-- 
Sincerely yours,
Mike.
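
For illustration, a minimal sketch of how the single-lookup variant of arm's
pfn_valid() discussed above could look. It assumes the NOMAP change from [1]
is applied; pageblock_size is a local shorthand for
PAGE_SIZE * pageblock_nr_pages, and passing pageblock_size as the length
argument of memblock_overlaps_region() (which takes a base and a size) is an
assumption, not the posted patch:

/* Sketch only: the freed memory map is rounded to pageblock boundaries,
 * so one memblock lookup per pageblock is enough to tell whether a
 * memory map entry exists for the pfn.
 */
int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = __pfn_to_phys(pfn);
	unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;

	if (__phys_to_pfn(addr) != pfn)
		return 0;

	/* Check the whole pageblock containing addr with a single lookup:
	 * if it intersects memblock.memory, its memory map was preserved.
	 */
	return memblock_overlaps_region(&memblock.memory,
					ALIGN_DOWN(addr, pageblock_size),
					pageblock_size);
}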
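
To put rough numbers on the SPARSEMEM trade-off mentioned above (assuming 4K
pages and a 32-byte struct page on 32-bit; these are estimates, not figures
from the thread):

	memory map for a full 256M section:   256M / 4K * 32 bytes = 2M
	memory map for 16M of populated RAM:   16M / 4K * 32 bytes = 128K

Keeping the whole section's memory map on a 16M ep93xx-class machine would
therefore leave roughly 1.9M of it unused, which is why the waste matters far
more there than on a 1.5G system.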