From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mx2.suse.de ([195.135.220.15]:54204 "EHLO mx2.suse.de"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S1729891AbgKILdy (ORCPT );
        Mon, 9 Nov 2020 06:33:54 -0500
Subject: Re: [PATCH v5 1/5] mm: introduce debug_pagealloc_{map,unmap}_pages()
 helpers
References: <20201108065758.1815-1-rppt@kernel.org>
 <20201108065758.1815-2-rppt@kernel.org>
From: Vlastimil Babka
Message-ID: <4bd5ae2b-4fc6-73dc-b83b-e71826990946@suse.cz>
Date: Mon, 9 Nov 2020 12:33:46 +0100
MIME-Version: 1.0
In-Reply-To: <20201108065758.1815-2-rppt@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
List-ID:
To: Mike Rapoport , Andrew Morton
Cc: Albert Ou , Andy Lutomirski , Benjamin Herrenschmidt ,
 Borislav Petkov , Catalin Marinas , Christian Borntraeger ,
 Christoph Lameter , "David S. Miller" , Dave Hansen ,
 David Hildenbrand , David Rientjes , "Edgecombe, Rick P" ,
 "H. Peter Anvin" , Heiko Carstens , Ingo Molnar , Joonsoo Kim ,
 "Kirill A . Shutemov" , "Kirill A. Shutemov" , Len Brown ,
 Michael Ellerman , Mike Rapoport , Palmer Dabbelt ,
 Paul Mackerras , Paul Walmsley , Pavel Machek , Pekka Enberg ,
 Peter Zijlstra , "Rafael J. Wysocki" , Thomas Gleixner ,
 Vasily Gorbik , Will Deacon ,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org,
 x86@kernel.org

On 11/8/20 7:57 AM, Mike Rapoport wrote:
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
>  	return false;
>  }
>  
> -#ifdef CONFIG_DEBUG_PAGEALLOC
>  static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
>  {
>  	if (!is_debug_pagealloc_cache(cachep))
>  		return;

Hmm, I didn't notice earlier, sorry.
The is_debug_pagealloc_cache() above includes a
debug_pagealloc_enabled_static() check, so it should be fine to use
__kernel_map_pages() directly below. Otherwise we generate two static key
checks for the same key needlessly.

>  
> -	kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
> +	if (map)
> +		debug_pagealloc_map_pages(virt_to_page(objp),
> +					  cachep->size / PAGE_SIZE);
> +	else
> +		debug_pagealloc_unmap_pages(virt_to_page(objp),
> +					    cachep->size / PAGE_SIZE);
>  }
>  
> -#else
> -static inline void slab_kernel_map(struct kmem_cache *cachep, void *objp,
> -				   int map) {}
> -
> -#endif
> -
>  static void poison_obj(struct kmem_cache *cachep, void *addr, unsigned char val)
>  {
>  	int size = cachep->object_size;
> @@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
>  
>  #if DEBUG
>  	/*
> -	 * If we're going to use the generic kernel_map_pages()
> +	 * If we're going to use the generic debug_pagealloc_map_pages()
>  	 * poisoning, then it's going to smash the contents of
>  	 * the redzone and userword anyhow, so switch them off.
>  	 */
> 