Date: Sun, 25 Apr 2021 09:59:20 +0300
From: Mike Rapoport
To: Kefeng Wang
Cc: linux-arm-kernel@lists.infradead.org, Andrew Morton, Anshuman Khandual,
 Ard Biesheuvel, Catalin Marinas, David Hildenbrand, Marc Zyngier,
 Mark Rutland, Mike Rapoport, Will Deacon, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid()
References: <20210421065108.1987-1-rppt@kernel.org>
 <9aa68d26-d736-3b75-4828-f148964eb7f0@huawei.com>
 <33fa74c2-f32d-f224-eb30-acdb717179ff@huawei.com>
In-Reply-To: <33fa74c2-f32d-f224-eb30-acdb717179ff@huawei.com>

On Thu, Apr 22, 2021 at 11:28:24PM +0800, Kefeng Wang wrote:
> 
> On 2021/4/22 15:29, Mike Rapoport wrote:
> > On Thu, Apr 22, 2021 at 03:00:20PM +0800, Kefeng
Wang wrote:
> > > On 2021/4/21 14:51, Mike Rapoport wrote:
> > > > From: Mike Rapoport
> > > > 
> > > > Hi,
> > > > 
> > > > These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
> > > > pfn_valid_within() to 1.
> > > > 
> > > > The idea is to mark NOMAP pages as reserved in the memory map and restore
> > > > the intended semantics of pfn_valid() to designate availability of struct
> > > > page for a pfn.
> > > > 
> > > > With this the core mm will be able to cope with the fact that it cannot use
> > > > NOMAP pages, and the holes created by NOMAP ranges within MAX_ORDER blocks
> > > > will be treated correctly even without the need for pfn_valid_within().
> > > > 
> > > > The patches are only boot tested on qemu-system-aarch64, so I'd really
> > > > appreciate memory stress tests on real hardware.
> > > > 
> > > > If this actually works, we'll be one step closer to dropping the custom
> > > > pfn_valid() on arm64 altogether.
> > > Hi Mike, I have a question. Without HOLES_IN_ZONE, the pfn_valid_within() in
> > > move_freepages_block()->move_freepages() will be optimized out. If there are
> > > holes in a zone, the 'struct page' (memory map) for the pfn range of the hole
> > > will be freed by free_memmap(), and then the page traversal in the zone (with
> > > holes) from move_freepages() will hit the wrong page; it could then panic at
> > > the PageLRU(page) test, see link [1].
> > First, the HOLES_IN_ZONE name is hugely misleading: this configuration option
> > has nothing to do with memory holes, but rather it is there to deal with
> > holes or undefined struct pages in the memory map, when these holes can be
> > inside a MAX_ORDER_NR_PAGES region.
> > 
> > In general pfn walkers use pfn_valid() and pfn_valid_within() to avoid
> > accessing *missing* struct pages, like those that are freed at
> > free_memmap().
> > But on arm64 these tests also filter out the nomap entries
> > because their struct pages are not initialized.
> > 
> > The panic you refer to happened because there was an uninitialized struct
> > page in the middle of a MAX_ORDER_NR_PAGES region, because it corresponded
> > to nomap memory.
> > 
> > With these changes I make sure that such pages will be properly initialized
> > as PageReserved and the pfn walkers will be able to rely on the memory map.
> > 
> > Note also that free_memmap() aligns the parts being freed on MAX_ORDER
> > boundaries, so there will be no missing parts in the memory map within a
> > MAX_ORDER_NR_PAGES region.

Ok, thanks. We hit the same panic as in the link on arm32 (without
HOLES_IN_ZONE).

The scheme for arm64 could also suit arm32, right?

In general yes. You just need to make sure that usage of pfn_valid() in
arch/arm does not presume that it tests something beyond availability of
struct page for a pfn.

> I will try the patchset with some changes on arm32 and give some
> feedback.
> 
> Again, a possibly stupid question: where will the memblock region be marked
> with the MEMBLOCK_NOMAP flag?

Not sure I understand the question. The memory regions with the "nomap"
property in the device tree will be marked MEMBLOCK_NOMAP.

> > > "The idea is to mark NOMAP pages as reserved in the memory map". I see that
> > > patch 2 checks memblock_is_nomap() on the memory regions of memblock, but it
> > > seems that memblock_mark_nomap() is not called (maybe I missed it), so
> > > memmap_init_reserved_pages() won't work. Should HOLES_IN_ZONE still be
> > > needed for generic mm code?
> > > 
> > > [1] https://lore.kernel.org/linux-arm-kernel/541193a6-2bce-f042-5bb2-88913d5f1047@arm.com/

-- 
Sincerely yours,
Mike.