Date: Mon, 3 May 2021 09:26:52 +0300
From: Mike Rapoport
To: Kefeng Wang
Cc: linux-arm-kernel@lists.infradead.org, Andrew Morton,
    Anshuman Khandual, Ard Biesheuvel, Catalin Marinas,
    David Hildenbrand, Marc Zyngier, Mark Rutland, Mike Rapoport,
    Will Deacon, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: arm32: panic in move_freepages (Was [PATCH v2 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid())
References: <2d879629-3059-fd42-428f-4b7c2a73d698@huawei.com>
 <259d14df-a713-72e7-4ccb-c06a8ee31e13@huawei.com>
 <6ad2956c-70ae-c423-ed7d-88e94c88060f@huawei.com>
 <0cb013e4-1157-f2fa-96ec-e69e60833f72@huawei.com>

On Fri, Apr 30, 2021 at 07:24:37PM +0800, Kefeng Wang wrote:
> On 2021/4/30 17:51, Mike Rapoport wrote:
> > On Thu, Apr 29, 2021 at 06:22:55PM +0800, Kefeng Wang wrote:
> > > On 2021/4/29 14:57, Mike Rapoport wrote:
> > > > > > Do you use SPARSEMEM? If yes, what is your section size?
> > > > > > What is the value of CONFIG_FORCE_MAX_ZONEORDER in your
> > > > > > configuration?
> > > > >
> > > > > Yes,
> > > > >
> > > > > CONFIG_SPARSEMEM=y
> > > > > CONFIG_SPARSEMEM_STATIC=y
> > > > > CONFIG_FORCE_MAX_ZONEORDER=11
> > > > > CONFIG_PAGE_OFFSET=0xC0000000
> > > > > CONFIG_HAVE_ARCH_PFN_VALID=y
> > > > > CONFIG_HIGHMEM=y
> > > > > #define SECTION_SIZE_BITS 26
> > > > > #define MAX_PHYSADDR_BITS 32
> > > > > #define MAX_PHYSMEM_BITS  32
> > >
> > > With the patch, the addr is aligned, but the panic still occurred,
> >
> > Is this the same panic at move_freepages() for range [de600, de7ff]?
> >
> > Do you enable CONFIG_ARM_LPAE?
> no, the CONFIG_ARM_LPAE is not set, and yes with same panic at
> move_freepages at
>
> start_pfn/end_pfn [de600, de7ff], [de600000, de7ff000]: pfn = de600,
> page = ef3cc000, page-flags = ffffffff, pfn2phy = de600000
>
> > > __free_memory_core, range: 0xb0200000 - 0xc0000000, pfn: b0200 - b0200
> > > __free_memory_core, range: 0xcc000000 - 0xdca00000, pfn: cc000 - b0200
> > > __free_memory_core, range: 0xde700000 - 0xdea00000, pfn: de700 - b0200

Hmm, [de600, de7ff] is not added to the free lists, which is correct. But
then it's unclear how the page for de600 gets to move_freepages()...

Can't say I have any bright ideas to try here...

> the __free_memory_core will check the start pfn and end pfn:
>
> 	if (start_pfn >= end_pfn)
> 		return 0;
>
> 	__free_pages_memory(start_pfn, end_pfn);
>
> so the memory will not be freed to buddy, confused...

It's a check for range validity, all valid ranges are added.

> > > __free_memory_core, range: 0xe0800000 - 0xe0c00000, pfn: e0800 - b0200
> > > __free_memory_core, range: 0xf4b00000 - 0xf7000000, pfn: f4b00 - b0200
> > > __free_memory_core, range: 0xfda00000 - 0xffffffff, pfn: fda00 - b0200
> > > >
> > > > It seems that with SPARSEMEM we don't align the freed parts on
> > > > pageblock boundaries.
> > > >
> > > > Can you try the patch below:
> > > >
> > > > diff --git a/mm/memblock.c b/mm/memblock.c
> > > > index afaefa8fc6ab..1926369b52ec 100644
> > > > --- a/mm/memblock.c
> > > > +++ b/mm/memblock.c
> > > > @@ -1941,14 +1941,13 @@ static void __init free_unused_memmap(void)
> > > >  	 * due to SPARSEMEM sections which aren't present.
> > > >  	 */
> > > >  	start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
> > > > -#else
> > > > +#endif
> > > >  	/*
> > > >  	 * Align down here since the VM subsystem insists that the
> > > >  	 * memmap entries are valid from the bank start aligned to
> > > >  	 * MAX_ORDER_NR_PAGES.
> > > >  	 */
> > > >  	start = round_down(start, MAX_ORDER_NR_PAGES);
> > > > -#endif
> > > >  	/*
> > > >  	 * If we had a previous bank, and there is a space

-- 
Sincerely yours,
Mike.