Date: Fri, 5 Nov 2021 21:49:01 +0200
From: Mike Rapoport
To: Qian Cai
Cc: Catalin Marinas, Will Deacon, Andrew Morton, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Russell King,
	kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] arm64: Track no early_pgtable_alloc() for kmemleak
References: <20211105150509.7826-1-quic_qiancai@quicinc.com>
In-Reply-To: <20211105150509.7826-1-quic_qiancai@quicinc.com>

On Fri, Nov 05, 2021 at 11:05:09AM -0400, Qian Cai wrote:
> After switching the page size from 64KB to 4KB on several arm64 servers
> here, kmemleak starts to run out of the early memory pool due to a huge
> number of those early_pgtable_alloc() calls:
>
>   kmemleak_alloc_phys()
>   memblock_alloc_range_nid()
>   memblock_phys_alloc_range()
>   early_pgtable_alloc()
>   init_pmd()
>   alloc_init_pud()
>   __create_pgd_mapping()
>   __map_memblock()
>   paging_init()
>   setup_arch()
>   start_kernel()
>
> Even increasing the default value of DEBUG_KMEMLEAK_MEM_POOL_SIZE by 4
> times won't be enough for a server with 200GB+ of memory. There isn't
> much interest in checking memory leaks for those early page tables, and
> those early memory mappings should not reference other memory. Hence,
> there are no kmemleak false positives, and we can safely skip tracking
> those early allocations from kmemleak, like we did in commit
> fed84c785270 ("mm/memblock.c: skip kmemleak for kasan_init()"), without
> needing to introduce complications to automatically scale the value
> depending on the runtime memory size, etc. After the patch, the default
> value of DEBUG_KMEMLEAK_MEM_POOL_SIZE becomes sufficient again.
>
> Signed-off-by: Qian Cai

Reviewed-by: Mike Rapoport

> ---
> v2:
> Rename MEMBLOCK_ALLOC_KASAN to MEMBLOCK_ALLOC_NOLEAKTRACE to deal with
> those situations in general.
>
>  arch/arm/mm/kasan_init.c   | 2 +-
>  arch/arm64/mm/kasan_init.c | 5 +++--
>  arch/arm64/mm/mmu.c        | 3 ++-
>  include/linux/memblock.h   | 2 +-
>  mm/memblock.c              | 9 ++++++---
>  5 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
> index 4b1619584b23..5ad0d6c56d56 100644
> --- a/arch/arm/mm/kasan_init.c
> +++ b/arch/arm/mm/kasan_init.c
> @@ -32,7 +32,7 @@ pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
>  static __init void *kasan_alloc_block(size_t size)
>  {
>  	return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
> -				      MEMBLOCK_ALLOC_KASAN, NUMA_NO_NODE);
> +				      MEMBLOCK_ALLOC_NOLEAKTRACE, NUMA_NO_NODE);
>  }
>
>  static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 6f5a6fe8edd7..c12cd700598f 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -36,7 +36,7 @@ static phys_addr_t __init kasan_alloc_zeroed_page(int node)
>  {
>  	void *p = memblock_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
>  					 __pa(MAX_DMA_ADDRESS),
> -					 MEMBLOCK_ALLOC_KASAN, node);
> +					 MEMBLOCK_ALLOC_NOLEAKTRACE, node);
>  	if (!p)
>  		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
>  		      __func__, PAGE_SIZE, PAGE_SIZE, node,
> @@ -49,7 +49,8 @@ static phys_addr_t __init kasan_alloc_raw_page(int node)
>  {
>  	void *p = memblock_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE,
>  					     __pa(MAX_DMA_ADDRESS),
> -					     MEMBLOCK_ALLOC_KASAN, node);
> +					     MEMBLOCK_ALLOC_NOLEAKTRACE,
> +					     node);
>  	if (!p)
>  		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
>  		      __func__, PAGE_SIZE, PAGE_SIZE, node,
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index d77bf06d6a6d..acfae9b41cc8 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -96,7 +96,8 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
>  	phys_addr_t phys;
>  	void *ptr;
>
> -	phys = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
> +	phys = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
> +					 MEMBLOCK_ALLOC_NOLEAKTRACE);
>  	if (!phys)
>  		panic("Failed to allocate page table page\n");
>
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 7df557b16c1e..8adcf1fa8096 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -389,7 +389,7 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
>  /* Flags for memblock allocation APIs */
>  #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
>  #define MEMBLOCK_ALLOC_ACCESSIBLE	0
> -#define MEMBLOCK_ALLOC_KASAN		1
> +#define MEMBLOCK_ALLOC_NOLEAKTRACE	1
>
>  /* We are using top down, so it is safe to use 0 here */
>  #define MEMBLOCK_LOW_LIMIT 0
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 659bf0ffb086..1018e50566f3 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -287,7 +287,7 @@ static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
>  {
>  	/* pump up @end */
>  	if (end == MEMBLOCK_ALLOC_ACCESSIBLE ||
> -	    end == MEMBLOCK_ALLOC_KASAN)
> +	    end == MEMBLOCK_ALLOC_NOLEAKTRACE)
>  		end = memblock.current_limit;
>
>  	/* avoid allocating the first page */
> @@ -1387,8 +1387,11 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
>  		return 0;
>
>  done:
> -	/* Skip kmemleak for kasan_init() due to high volume. */
> -	if (end != MEMBLOCK_ALLOC_KASAN)
> +	/*
> +	 * Skip kmemleak for those places like kasan_init() and
> +	 * early_pgtable_alloc() due to high volume.
> +	 */
> +	if (end != MEMBLOCK_ALLOC_NOLEAKTRACE)
>  		/*
>  		 * The min_count is set to 0 so that memblock allocated
>  		 * blocks are never reported as leaks. This is because many
> --
> 2.30.2

--
Sincerely yours,
Mike.