From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 2 Jan 2025 17:15:43 +0000
From: Catalin Marinas
To: Guo Weikang
Cc: Mike Rapoport, Andrew Morton, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/memmap: Prevent double scanning of memmap by kmemleak
References: <20250102065704.647693-1-guoweikang.kernel@gmail.com>
In-Reply-To: <20250102065704.647693-1-guoweikang.kernel@gmail.com>

On Thu, Jan 02, 2025 at 02:57:03PM +0800, Guo Weikang wrote:
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 673d5cae7c81..b0483c534ef7 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -375,7 +375,13 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
>  }
>  #endif /* CONFIG_NUMA */
>  
> -/* Flags for memblock allocation APIs */
> +/*
> + * Flags for memblock allocation APIs
> + * MEMBLOCK_ALLOC_ANYWHERE and MEMBLOCK_ALLOC_ACCESSIBLE
> + * indicates wheather the allocation is limited by memblock.current_limit.
> + * MEMBLOCK_ALLOC_NOLEAKTRACE not only indicates that it does not need to
> + * be scanned by kmemleak, but also implies MEMBLOCK_ALLOC_ACCESSIBLE
> + */

I'd keep the comment short here, something like:

/*
 * MEMBLOCK_ALLOC_NOLEAKTRACE avoids kmemleak tracing. It implies
 * MEMBLOCK_ALLOC_ACCESSIBLE.
 */

>  #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
>  #define MEMBLOCK_ALLOC_ACCESSIBLE	0
>  #define MEMBLOCK_ALLOC_NOLEAKTRACE	1
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 24b68b425afb..71b58f5f2492 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1580,6 +1580,10 @@ static void __init free_area_init_core(struct pglist_data *pgdat)
>  	}
>  }
>  
> +/*
> + * Kmemleak will explicitly scan mem_map by traversing all valid `struct *page`,
> + * so memblock does not need to be added to the scan list.
> + */
>  void __init *memmap_alloc(phys_addr_t size, phys_addr_t align,
>  			  phys_addr_t min_addr, int nid, bool exact_nid)
>  {
> @@ -1587,11 +1591,11 @@ void __init *memmap_alloc(phys_addr_t size, phys_addr_t align,
>  
>  	if (exact_nid)
>  		ptr = memblock_alloc_exact_nid_raw(size, align, min_addr,
> -						   MEMBLOCK_ALLOC_ACCESSIBLE,
> +						   MEMBLOCK_ALLOC_NOLEAKTRACE,
>  						   nid);
>  	else
>  		ptr = memblock_alloc_try_nid_raw(size, align, min_addr,
> -						 MEMBLOCK_ALLOC_ACCESSIBLE,
> +						 MEMBLOCK_ALLOC_NOLEAKTRACE,
>  						 nid);
>  
>  	if (ptr && size > 0)
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index cec67c5f37d8..b6ac9b1d4ff7 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -27,25 +27,10 @@
>  #include
>  #include
>  #include
> -
> +#include "internal.h"
>  #include
>  #include
>  
> -/*
> - * Allocate a block of memory to be used to back the virtual memory map
> - * or to back the page tables that are used to create the mapping.
> - * Uses the main allocators if they are available, else bootmem.
> - */
> -
> -static void * __ref __earlyonly_bootmem_alloc(int node,
> -				unsigned long size,
> -				unsigned long align,
> -				unsigned long goal)
> -{
> -	return memblock_alloc_try_nid_raw(size, align, goal,
> -					  MEMBLOCK_ALLOC_ACCESSIBLE, node);
> -}
> -
>  void * __meminit vmemmap_alloc_block(unsigned long size, int node)
>  {
>  	/* If the main allocator is up use that, fallback to bootmem. */
> @@ -66,8 +51,7 @@ void * __meminit vmemmap_alloc_block(unsigned long size, int node)
>  		}
>  		return NULL;
>  	} else
> -		return __earlyonly_bootmem_alloc(node, size, size,
> -				__pa(MAX_DMA_ADDRESS));
> +		return memmap_alloc(size, size, __pa(MAX_DMA_ADDRESS), node, false);
>  }

As the kernel test robot reported, the __ref annotation for
__earlyonly_bootmem_alloc() is still needed, otherwise you get a warning
that a __meminit function (vmemmap_alloc_block()) is calling an __init
one (memmap_alloc()). So I think it's better if you keep this function.
Maybe get it to call memmap_alloc() instead of
memblock_alloc_try_nid_raw().

-- 
Catalin
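For illustration, a minimal sketch of what keeping the wrapper might look
like (hypothetical, not part of the posted patch):

```c
/*
 * Hypothetical sketch of the suggestion above: keep the __ref wrapper
 * so modpost does not warn about a __meminit caller reaching the
 * __init memmap_alloc(), while still routing the allocation through
 * memmap_alloc() so it uses MEMBLOCK_ALLOC_NOLEAKTRACE.
 */
static void * __ref __earlyonly_bootmem_alloc(int node,
				unsigned long size,
				unsigned long align,
				unsigned long goal)
{
	return memmap_alloc(size, align, goal, node, false);
}
```

The __ref annotation only suppresses the section-mismatch check; the
wrapper is still safe because it is only reached before the main
allocator is up, i.e. during early boot.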