Message-ID: <113f1d02-69df-b28e-edb9-514d284c6b29@huawei.com>
Date: Thu, 5 Feb 2026 11:15:40 +0800
Subject: Re: [PATCH v3 1/3] crash: Exclude crash kernel memory in crash core
From: Jinjie Ruan <ruanjinjie@huawei.com>
To: Sourabh Jain, linuxppc-dev@lists.ozlabs.org, ...
References: <20260204093728.1447527-1-ruanjinjie@huawei.com>
 <20260204093728.1447527-2-ruanjinjie@huawei.com>
 <4dc944c7-20ad-4e92-b05e-28a9e0c5a2b8@linux.ibm.com>
In-Reply-To: <4dc944c7-20ad-4e92-b05e-28a9e0c5a2b8@linux.ibm.com>
Content-Type: text/plain; charset="UTF-8"

On 2026/2/4 20:32, Sourabh Jain wrote:
>
>
> On 04/02/26 15:07, Jinjie Ruan wrote:
>> The exclusion of the crashk_res, crashk_low_res and crashk_cma memory
>> regions is almost identical across architectures, so handling it in
>> the crash core eliminates a lot of duplication. Move it into the
>> common code.
>>
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
>>  arch/arm64/kernel/machine_kexec_file.c     | 12 -------
>>  arch/loongarch/kernel/machine_kexec_file.c | 12 -------
>>  arch/powerpc/kexec/ranges.c                | 16 ++-------
>>  arch/riscv/kernel/machine_kexec_file.c     |  5 +--
>>  arch/x86/kernel/crash.c                    | 39 ++--------------------
>>  kernel/crash_core.c                        | 28 ++++++++++++++++
>>  6 files changed, 34 insertions(+), 78 deletions(-)
>> [...]
>> -static int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
>> -                       unsigned long long mstart,
>> -                       unsigned long long mend)
>> +static int crash_realloc_mem_range_guarded(struct crash_mem **mem_ranges)
>>  {
>>      struct crash_mem *tmem = *mem_ranges;
>>
>> @@ -566,7 +564,7 @@ static int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
>>              return -ENOMEM;
>>      }
>>
>> -    return crash_exclude_mem_range(tmem, mstart, mend);
>> +    return 0;
>>  }
>>
>>  /**
>> @@ -604,18 +602,10 @@ int get_crash_memory_ranges(struct crash_mem **mem_ranges)
>>              sort_memory_ranges(*mem_ranges, true);
>>      }
>>
>> -    /* Exclude crashkernel region */
>> -    ret = crash_exclude_mem_range_guarded(mem_ranges, crashk_res.start, crashk_res.end);
>> +    ret = crash_realloc_mem_range_guarded(mem_ranges);
>
> What if max_nr_ranges - nr_ranges == 1? Then no realloc will happen
> here, and in elf_header_exclude_ranges() we may not have enough space
> to store the additional memory ranges needed while excluding one or
> more CMA ranges.

You're right: if max_nr_ranges - nr_ranges == 1 the realloc is skipped
here, yet elf_header_exclude_ranges() can need more than one extra
slot. Thanks for catching this.

Jinjie

>
>>      if (ret)
>>          goto out;
>>
>> -    for (i = 0; i < crashk_cma_cnt; ++i) {
>> -        ret = crash_exclude_mem_range_guarded(mem_ranges, crashk_cma_ranges[i].start,
>> -                          crashk_cma_ranges[i].end);
>> -        if (ret)
>> -            goto out;
>> -    }
>> -
>>      /*
>>       * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL
>>       *        regions are exported to save their context at the time of
>> [...]
>> +static int crash_exclude_mem_ranges(struct crash_mem *cmem)
>> +{
>> +    int ret, i;
>> +
>> +    /* Exclude crashkernel region */
>> +    ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
>> +    if (ret)
>> +        return ret;
>> +
>> +    if (crashk_low_res.end) {
>> +        ret = crash_exclude_mem_range(cmem, crashk_low_res.start,
>> +                          crashk_low_res.end);
>> +        if (ret)
>> +            return ret;
>> +    }
>> +
>> +    for (i = 0; i < crashk_cma_cnt; ++i) {
>> +        ret = crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
>> +                          crashk_cma_ranges[i].end);
>> +        if (ret)
>> +            return ret;
>> +    }
>> +
>> +    return ret;
>> +}
>>
>>  int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
>>                void **addr, unsigned long *sz)
>> @@ -174,6 +197,11 @@ int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
>>      unsigned int cpu, i;
>>      unsigned long long notes_addr;
>>      unsigned long mstart, mend;
>> +    int ret;
>> +
>> +    ret = crash_exclude_mem_ranges(mem);
>
> I think the assumption here is that mem should have enough space
> to hold the extra ranges created while excluding crash memory ranges.
> Right now, this is not happening on powerpc for the case I mentioned
> in the above comment.

Yes, as you mentioned above.

>
> Also, if crashk_cma_cnt changes in the future, or if a new type of
> crash memory is added, then every architecture would need to adjust
> the mem allocation accordingly. Instead, could we handle this in
> generic code rather than in architecture-specific code, so that we
> always ensure mem has enough space?

I agree: hard-coding the worst-case count in every architecture is a
maintenance trap.
Let's move the size calculation (and the realloc, if needed) into the
generic crash core so that:

- new CMA regions or future crash-memory types are automatically
  accounted for;
- each architecture no longer has to play whack-a-mole with its private
  array size.

Thanks for the suggestion.

>
>> +    if (ret)
>> +        return ret;
>>
>>      /* extra phdr for vmcoreinfo ELF note */
>>      nr_phdr = nr_cpus + 1;
>