Date: Sun, 1 Aug 2021 08:53:13 -0700
From: Catalin Marinas
To: Kefeng Wang
Cc: Will Deacon, Andrey Ryabinin, Andrey Konovalov, Dmitry Vyukov,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, linux-mm@kvack.org, Greg Kroah-Hartman
Subject: Re: [PATCH v2 2/3] arm64: Support page mapping percpu first chunk allocator
Message-ID: <20210801155302.GA29188@arm.com>
References: <20210720025105.103680-1-wangkefeng.wang@huawei.com>
 <20210720025105.103680-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20210720025105.103680-3-wangkefeng.wang@huawei.com>

On Tue, Jul 20, 2021 at 10:51:04AM +0800, Kefeng Wang wrote:
> Percpu embedded first chunk allocator is the firstly option, but it
> could fails on ARM64, eg,
> "percpu: max_distance=0x5fcfdc640000 too large for
> vmalloc space 0x781fefff0000"
> "percpu: max_distance=0x600000540000 too large for vmalloc space 0x7dffb7ff0000"
> "percpu: max_distance=0x5fff9adb0000 too large for vmalloc space 0x5dffb7ff0000"
>
> then we could meet "WARNING: CPU: 15 PID: 461 at vmalloc.c:3087
> pcpu_get_vm_areas+0x488/0x838", even the system could not boot
> successfully.
>
> Let's implement page mapping percpu first chunk allocator as a fallback
> to the embedding allocator to increase the robustness of the system.

It looks like x86, powerpc and sparc implement their own
setup_per_cpu_areas(). I had a quick look at finding some commonalities
but I think it's a lot more hassle to make a generic version out of them
(powerpc looks the simplest though). I think we could add a generic
variant with the arm64 support and later migrate the other architectures
to it if possible.

The patch looks OK to me otherwise but I'd need an ack from Greg as it
touches drivers/.

BTW, do we need something similar for the non-NUMA
setup_per_cpu_areas()? I can see this patch only enables
NEED_PER_CPU_PAGE_FIRST_CHUNK if NUMA.

Leaving the rest of the patch below for Greg.

> Signed-off-by: Kefeng Wang
> ---
>  arch/arm64/Kconfig       |  4 ++
>  drivers/base/arch_numa.c | 82 +++++++++++++++++++++++++++++++++++-----
>  2 files changed, 76 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index b5b13a932561..eacb5873ded1 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1045,6 +1045,10 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
>  	def_bool y
>  	depends on NUMA
>
> +config NEED_PER_CPU_PAGE_FIRST_CHUNK
> +	def_bool y
> +	depends on NUMA
> +
>  source "kernel/Kconfig.hz"
>
>  config ARCH_SPARSEMEM_ENABLE
> diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
> index 4cc4e117727d..563b2013b75a 100644
> --- a/drivers/base/arch_numa.c
> +++ b/drivers/base/arch_numa.c
> @@ -14,6 +14,7 @@
>  #include
>
>  #include
> +#include
>
>  struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
>  EXPORT_SYMBOL(node_data);
> @@ -168,22 +169,83 @@ static void __init pcpu_fc_free(void *ptr, size_t size)
>  	memblock_free_early(__pa(ptr), size);
>  }
>
> +#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
> +static void __init pcpu_populate_pte(unsigned long addr)
> +{
> +	pgd_t *pgd = pgd_offset_k(addr);
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	p4d = p4d_offset(pgd, addr);
> +	if (p4d_none(*p4d)) {
> +		pud_t *new;
> +
> +		new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +		if (!new)
> +			goto err_alloc;
> +		p4d_populate(&init_mm, p4d, new);
> +	}
> +
> +	pud = pud_offset(p4d, addr);
> +	if (pud_none(*pud)) {
> +		pmd_t *new;
> +
> +		new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +		if (!new)
> +			goto err_alloc;
> +		pud_populate(&init_mm, pud, new);
> +	}
> +
> +	pmd = pmd_offset(pud, addr);
> +	if (!pmd_present(*pmd)) {
> +		pte_t *new;
> +
> +		new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +		if (!new)
> +			goto err_alloc;
> +		pmd_populate_kernel(&init_mm, pmd, new);
> +	}
> +
> +	return;
> +
> +err_alloc:
> +	panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
> +	      __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
> +}
> +#endif
> +
>  void __init setup_per_cpu_areas(void)
>  {
>  	unsigned long delta;
>  	unsigned int cpu;
> -	int rc;
> +	int rc = -EINVAL;
> +
> +	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
> +		/*
> +		 * Always reserve area for module percpu variables. That's
> +		 * what the legacy allocator did.
> +		 */
> +		rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
> +					    PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
> +					    pcpu_cpu_distance,
> +					    pcpu_fc_alloc, pcpu_fc_free);
> +#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
> +		if (rc < 0)
> +			pr_warn("PERCPU: %s allocator failed (%d), falling back to page size\n",
> +				pcpu_fc_names[pcpu_chosen_fc], rc);
> +#endif
> +	}
>
> -	/*
> -	 * Always reserve area for module percpu variables. That's
> -	 * what the legacy allocator did.
> -	 */
> -	rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
> -				    PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
> -				    pcpu_cpu_distance,
> -				    pcpu_fc_alloc, pcpu_fc_free);
> +#ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK
> +	if (rc < 0)
> +		rc = pcpu_page_first_chunk(PERCPU_MODULE_RESERVE,
> +					   pcpu_fc_alloc,
> +					   pcpu_fc_free,
> +					   pcpu_populate_pte);
> +#endif
>  	if (rc < 0)
> -		panic("Failed to initialize percpu areas.");
> +		panic("Failed to initialize percpu areas (err=%d).", rc);
>
>  	delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
>  	for_each_possible_cpu(cpu)
> --
> 2.26.2
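
For illustration only, a generic variant along the lines suggested above
could look roughly like the sketch below. It reuses the existing
pcpu_embed_first_chunk()/pcpu_page_first_chunk() interfaces as they stand
in this patch; the arch_pcpu_* callback names are placeholders made up
for the sketch rather than existing kernel symbols, and the NUMA/Kconfig
#ifdefs are omitted for brevity.

/*
 * Sketch only: a __weak generic setup_per_cpu_areas() that tries the
 * embed first chunk allocator and falls back to page mapping.  The
 * arch_pcpu_* hooks are hypothetical arch-provided callbacks.
 */
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <asm/sections.h>

void __init __weak setup_per_cpu_areas(void)
{
	unsigned long delta;
	unsigned int cpu;
	int rc = -EINVAL;

	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
		/* Reserve space for module percpu variables, as before. */
		rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
					    PERCPU_DYNAMIC_RESERVE, PAGE_SIZE,
					    arch_pcpu_cpu_distance,
					    arch_pcpu_fc_alloc,
					    arch_pcpu_fc_free);
		if (rc < 0)
			pr_warn("PERCPU: %s allocator failed (%d), falling back to page size\n",
				pcpu_fc_names[pcpu_chosen_fc], rc);
	}

	if (rc < 0)
		rc = pcpu_page_first_chunk(PERCPU_MODULE_RESERVE,
					   arch_pcpu_fc_alloc,
					   arch_pcpu_fc_free,
					   arch_pcpu_populate_pte);
	if (rc < 0)
		panic("Failed to initialize percpu areas (err=%d).", rc);

	/* Record each CPU's offset from the per-CPU template section. */
	delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
	for_each_possible_cpu(cpu)
		__per_cpu_offset[cpu] = delta + pcpu_unit_offsets[cpu];
}

Migrating an architecture to something like this would then mean
providing the hooks rather than carrying its own copy of the whole
function.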