From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 5 Nov 2020 13:31:54 +0200
From: Mike Rapoport
To: Vlastimil Babka
Cc: Andrew Morton, Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, "David S. Miller", Dave Hansen, David Hildenbrand,
	David Rientjes, "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens,
	Ingo Molnar, Joonsoo Kim, "Kirill A. Shutemov", Len Brown,
	Michael Ellerman, Mike Rapoport, Palmer Dabbelt, Paul Mackerras,
	Paul Walmsley, Pavel Machek, Pekka Enberg, Peter Zijlstra,
Wysocki" , Thomas Gleixner , Vasily Gorbik , Will Deacon , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: Re: [PATCH v4 1/4] mm: introduce debug_pagealloc_map_pages() helper Message-ID: <20201105113154.GX4879@kernel.org> References: <20201103162057.22916-1-rppt@kernel.org> <20201103162057.22916-2-rppt@kernel.org> <971e9638-2395-daf4-d19e-fe3cf5d34b98@suse.cz> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <971e9638-2395-daf4-d19e-fe3cf5d34b98@suse.cz> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Wed, Nov 04, 2020 at 06:35:50PM +0100, Vlastimil Babka wrote: > On 11/3/20 5:20 PM, Mike Rapoport wrote: > > From: Mike Rapoport > > > > When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel > > direct mapping after free_pages(). The pages than need to be mapped back > > before they could be used. Theese mapping operations use > > __kernel_map_pages() guarded with with debug_pagealloc_enabled(). > > > > The only place that calls __kernel_map_pages() without checking whether > > DEBUG_PAGEALLOC is enabled is the hibernation code that presumes > > availability of this function when ARCH_HAS_SET_DIRECT_MAP is set. > > Still, on arm64, __kernel_map_pages() will bail out when DEBUG_PAGEALLOC is > > not enabled but set_direct_map_invalid_noflush() may render some pages not > > present in the direct map and hibernation code won't be able to save such > > pages. > > > > To make page allocation debugging and hibernation interaction more robust, > > the dependency on DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP has to be made > > more explicit. > > > > Start with combining the guard condition and the call to > > __kernel_map_pages() into a single debug_pagealloc_map_pages() function to > > emphasize that __kernel_map_pages() should not be called without > > DEBUG_PAGEALLOC and use this new function to map/unmap pages when page > > allocation debug is enabled. > > > > Signed-off-by: Mike Rapoport > > Reviewed-by: David Hildenbrand > > Acked-by: Kirill A. Shutemov > > Acked-by: Vlastimil Babka > > But, the "enable" param is hideous. I would rather have map and unmap > variants (and just did the same split for page poisoning) and this seems to > be a good opportunity. 
> If David didn't propose it already, I'm surprised ;)

I'm ok with map and unmap, and no, David didn't propose it already :)
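
Roughly, the split could look like this (just a quick sketch on top of this
patch, not even compile-tested, and the debug_pagealloc_unmap_pages() name is
only a straw man):

/* map pages back into the direct map when page allocation debugging is on */
static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
{
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, 1);
}

/* remove pages from the direct map when page allocation debugging is on */
static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
{
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, 0);
}

so the call sites lose the 0/1 argument, e.g. free_pages_prepare() would call
debug_pagealloc_unmap_pages(page, 1 << order) and post_alloc_hook() would call
debug_pagealloc_map_pages(page, 1 << order).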
> > ---
> >  include/linux/mm.h  | 10 ++++++++++
> >  mm/memory_hotplug.c |  3 +--
> >  mm/page_alloc.c     |  6 ++----
> >  mm/slab.c           |  8 +++-----
> >  4 files changed, 16 insertions(+), 11 deletions(-)
> > 
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index ef360fe70aaf..1fc0609056dc 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2936,12 +2936,22 @@ kernel_map_pages(struct page *page, int numpages, int enable)
> >  {
> >  	__kernel_map_pages(page, numpages, enable);
> >  }
> > +
> > +static inline void debug_pagealloc_map_pages(struct page *page,
> > +					     int numpages, int enable)
> > +{
> > +	if (debug_pagealloc_enabled_static())
> > +		__kernel_map_pages(page, numpages, enable);
> > +}
> > +
> >  #ifdef CONFIG_HIBERNATION
> >  extern bool kernel_page_present(struct page *page);
> >  #endif /* CONFIG_HIBERNATION */
> >  #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
> >  static inline void
> >  kernel_map_pages(struct page *page, int numpages, int enable) {}
> > +static inline void debug_pagealloc_map_pages(struct page *page,
> > +					     int numpages, int enable) {}
> >  #ifdef CONFIG_HIBERNATION
> >  static inline bool kernel_page_present(struct page *page) { return true; }
> >  #endif /* CONFIG_HIBERNATION */
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index b44d4c7ba73b..e2b6043a4428 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -614,8 +614,7 @@ void generic_online_page(struct page *page, unsigned int order)
> >  	 * so we should map it first. This is better than introducing a special
> >  	 * case in page freeing fast path.
> >  	 */
> > -	if (debug_pagealloc_enabled_static())
> > -		kernel_map_pages(page, 1 << order, 1);
> > +	debug_pagealloc_map_pages(page, 1 << order, 1);
> >  	__free_pages_core(page, order);
> >  	totalram_pages_add(1UL << order);
> >  #ifdef CONFIG_HIGHMEM
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 23f5066bd4a5..9a66a1ff9193 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1272,8 +1272,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> >  	 */
> >  	arch_free_page(page, order);
> > -	if (debug_pagealloc_enabled_static())
> > -		kernel_map_pages(page, 1 << order, 0);
> > +	debug_pagealloc_map_pages(page, 1 << order, 0);
> >  	kasan_free_nondeferred_pages(page, order);
> > @@ -2270,8 +2269,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> >  	set_page_refcounted(page);
> >  	arch_alloc_page(page, order);
> > -	if (debug_pagealloc_enabled_static())
> > -		kernel_map_pages(page, 1 << order, 1);
> > +	debug_pagealloc_map_pages(page, 1 << order, 1);
> >  	kasan_alloc_pages(page, order);
> >  	kernel_poison_pages(page, 1 << order, 1);
> >  	set_page_owner(page, order, gfp_flags);
> > diff --git a/mm/slab.c b/mm/slab.c
> > index b1113561b98b..340db0ce74c4 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -1431,10 +1431,8 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
> >  #ifdef CONFIG_DEBUG_PAGEALLOC
> >  static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
> >  {
> > -	if (!is_debug_pagealloc_cache(cachep))
> > -		return;
> > -
> > -	kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
> > +	debug_pagealloc_map_pages(virt_to_page(objp),
> > +				  cachep->size / PAGE_SIZE, map);
> >  }
> >  #else
> > @@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
> >  #if DEBUG
> >  	/*
> > -	 * If we're going to use the generic kernel_map_pages()
> > +	 * If we're going to use the generic debug_pagealloc_map_pages()
> >  	 * poisoning, then it's going to smash the contents of
> >  	 * the redzone and userword anyhow, so switch them off.
> >  	 */
> 
-- 
Sincerely yours,
Mike.