Date: Thu, 15 Jul 2021 09:10:30 +0300
From: Mike Rapoport
To: Andrew Morton
Cc: Michal Simek, Mike Rapoport, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 3/4] mm: introduce memmap_alloc() to unify memory map allocation
References: <20210714123739.16493-1-rppt@kernel.org> <20210714123739.16493-4-rppt@kernel.org> <20210714153208.ef96cfc7c6bac360598101ed@linux-foundation.org>
In-Reply-To: <20210714153208.ef96cfc7c6bac360598101ed@linux-foundation.org>
On Wed, Jul 14, 2021 at 03:32:08PM -0700, Andrew Morton wrote:
> On Wed, 14 Jul 2021 15:37:38 +0300 Mike Rapoport wrote:
> 
> > From: Mike Rapoport
> > 
> > There are several places that allocate memory for the memory map:
> > alloc_node_mem_map() for FLATMEM, sparse_buffer_init() and
> > __populate_section_memmap() for SPARSEMEM.
> > 
> > The memory allocated in the FLATMEM case is zeroed and it is never
> > poisoned, regardless of CONFIG_PAGE_POISON setting.
> > 
> > The memory allocated in the SPARSEMEM cases is not zeroed and it is
> > implicitly poisoned inside memblock if CONFIG_PAGE_POISON is set.
> > 
> > Introduce memmap_alloc() wrapper for memblock allocators that will be used
> > for both FLATMEM and SPARSEMEM cases and will make memory map zeroing and
> > poisoning consistent for different memory models.
> > 
> > ...
> > 
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -6730,6 +6730,26 @@ static void __init memmap_init(void)
> >  		init_unavailable_range(hole_pfn, end_pfn, zone_id, nid);
> >  }
> >  
> > +void __init *memmap_alloc(phys_addr_t size, phys_addr_t align,
> > +			  phys_addr_t min_addr, int nid, bool exact_nid)
> > +{
> > +	void *ptr;
> > +
> > +	if (exact_nid)
> > +		ptr = memblock_alloc_exact_nid_raw(size, align, min_addr,
> > +						   MEMBLOCK_ALLOC_ACCESSIBLE,
> > +						   nid);
> > +	else
> > +		ptr = memblock_alloc_try_nid_raw(size, align, min_addr,
> > +						 MEMBLOCK_ALLOC_ACCESSIBLE,
> > +						 nid);
> > +
> > +	if (ptr && size > 0)
> > +		page_init_poison(ptr, size);
> > +
> > +	return ptr;
> > +}
> > +
> >  static int zone_batchsize(struct zone *zone)
> >  {
> >  #ifdef CONFIG_MMU
> > @@ -7501,8 +7521,8 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
> >  		end = pgdat_end_pfn(pgdat);
> >  		end = ALIGN(end, MAX_ORDER_NR_PAGES);
> >  		size = (end - start) * sizeof(struct page);
> > -		map = memblock_alloc_node(size, SMP_CACHE_BYTES,
> > -					  pgdat->node_id);
> > +		map = memmap_alloc(size, SMP_CACHE_BYTES, MEMBLOCK_LOW_LIMIT,
> > +				   pgdat->node_id, false);
> 
> Mostly offtopic, but...  Why is alloc_node_mem_map() marked __ref?

Once free_area_init_node() was __meminit, I stopped digging at that point.

> afaict it can be __init?

Yes.

-- 
Sincerely yours,
Mike.
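
For context, a minimal sketch of how the SPARSEMEM side could switch to the
same helper. The sparse_buffer_init() caller and its exact arguments below are
an assumption based on the commit message, not part of the hunk quoted above:

	/*
	 * Illustrative only: replace the direct memblock call in the
	 * SPARSEMEM buffer setup with memmap_alloc().  Passing
	 * exact_nid = true routes the allocation through
	 * memblock_alloc_exact_nid_raw(), i.e. memory is requested
	 * strictly from the given node.
	 */
	static void __init sparse_buffer_init(unsigned long size, int nid)
	{
		phys_addr_t addr = __pa(MAX_DMA_ADDRESS);

		sparsemap_buf = memmap_alloc(size, section_map_size(),
					     addr, nid, true);
		sparsemap_buf_end = sparsemap_buf + size;
	}

With that, both the FLATMEM path (alloc_node_mem_map() above) and the
SPARSEMEM path would get the same zeroing/poisoning behaviour from one place.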