public inbox for linux-kernel@vger.kernel.org
From: David Hildenbrand <david@redhat.com>
To: Zhongkun He <hezhongkun.hzk@bytedance.com>,
	jbohac@suse.cz, Baoquan He <bhe@redhat.com>
Cc: kas@kernel.org, riel@surriel.com, vbabka@suse.cz,
	nphamcs@gmail.com, Vivek Goyal <vgoyal@redhat.com>,
	Dave Young <dyoung@redhat.com>,
	kexec@lists.infradead.org, akpm@linux-foundation.org,
	Philipp Rudo <prudo@redhat.com>,
	Donald Dutile <ddutile@redhat.com>,
	Pingfan Liu <piliu@redhat.com>, Tao Liu <ltao@redhat.com>,
	linux-kernel@vger.kernel.org, Michal Hocko <mhocko@suse.cz>,
	Muchun Song <muchun.song@linux.dev>
Subject: Re: [External] Re: [PATCH v5 0/5] kdump: crashkernel reservation from CMA
Date: Mon, 13 Oct 2025 10:00:32 +0200	[thread overview]
Message-ID: <e28a62ae-5482-4bda-bb00-fd8a5083fb31@redhat.com> (raw)
In-Reply-To: <CACSyD1N0fb1H3_ssEyaAMh=2shQy-64gG_t3MqkedwfOLEExEA@mail.gmail.com>

On 13.10.25 06:03, Zhongkun He wrote:
> Hi folks,
> 
> We’re encountering the same issue that this patch series aims to address,
> and we also planned to leverage CMA to solve it. However, some implementation
> details on our side may differ, so we’d like to discuss the approach we
> have tried in this thread.
> 
> 1. Register a dedicated CMA area for kexec kernel use
> Introduce a dedicated CMA region (e.g., kexec_cma) and allocate the control
> code page and crash segments from this area via cma_alloc. Pages for a
> normal kexec kernel can also be allocated from this region [1].
> 
> 2. Keep crashkernel=xx unchanged (register CMA)
> We introduced a flag in the kexec syscall to dynamically enable
> or disable memory reuse without a system reboot. For example, with
> crashkernel=500M (a 500M CMA region), cma_alloc may use 100M for the
> kernel, initrd, and other data. This region could then be reused by the
> current kernel if the reuse flag is set in the syscall, or left unused
> for dumping user pages in case of a crash.
> 
> 3. Keep this CMA region inactive by default
> The CMA region should remain inactive until kexec is enabled with the
> reuse flag (or fully reused when the kdump service is not enabled). It
> can then be activated for use by the current kernel.
> 
> 4. Introduce a new migratetype KEXEC_CMA
> Similar to the existing CMA migratetype, this would be used to:
> 1) Prevent page allocation from this region for get_user_pages (GUP).
> 2) Handle page migration correctly when pages are pinned after allocation.
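
[Editor's note: to make points 1 and 2 of the proposal concrete, they could
look roughly like the following kernel-side sketch. This is a hypothetical
illustration only: the "kexec_cma" area and its size are placeholders from
the proposal (nothing here exists in mainline), and the
cma_declare_contiguous()/cma_alloc() call signatures are assumed to match
current kernels.]

```c
/* Hypothetical sketch only -- not mainline code. */
static struct cma *kexec_cma;

static int __init kexec_cma_reserve(void)
{
	/*
	 * Point 1: carve out a dedicated CMA area at early boot.
	 * Base/limit of 0 let the allocator pick the placement;
	 * SZ_512M is an arbitrary illustrative size.
	 */
	return cma_declare_contiguous(0, SZ_512M, 0, 0, 0, false,
				      "kexec_cma", &kexec_cma);
}

/*
 * Point 2: the control code page and crash segments would be
 * allocated from that area instead of the regular buddy allocator.
 */
static struct page *kexec_cma_alloc_pages(unsigned long nr_pages)
{
	return cma_alloc(kexec_cma, nr_pages, 0, false);
}
```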

It will be hard to get something like that in for the purpose of kdump. 
Further, I'm afraid it might open up a can of worms of "migration 
temporarily failed" -> GUP failed issues for some workloads.

So I assume this might currently not be the best way to move forward.

One alternative would be using GCMA [1] in the current design. The CMA 
memory would not be exposed to the buddy, but can still be used as a 
cache for clean file pages. Pinning etc. is not a problem in that context.

Of course, the more we limit the usage of that region, the less 
versatile it is.

[1] https://lkml.kernel.org/r/20251010011951.2136980-1-surenb@google.com
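
[Editor's note: for context, the series under discussion takes a
command-line route to the same sizing question: patch 1 adds an optional
",cma" suffix to the existing crashkernel= option, so the CMA-backed part
of the reservation is declared at boot rather than via a new syscall flag.
A hypothetical kernel command line (the size is illustrative; exact
semantics are described in the series' documentation patch):

```
crashkernel=512M,cma
```
]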

-- 
Cheers

David / dhildenb



Thread overview: 25+ messages
2025-06-12 10:11 [PATCH v5 0/5] kdump: crashkernel reservation from CMA Jiri Bohac
2025-06-12 10:13 ` [PATCH v5 1/5] Add a new optional ",cma" suffix to the crashkernel= command line option Jiri Bohac
2025-06-12 10:16 ` [PATCH v5 2/5] kdump: implement reserve_crashkernel_cma Jiri Bohac
2025-06-12 10:17 ` [PATCH v5 3/5] kdump, documentation: describe craskernel CMA reservation Jiri Bohac
2025-06-12 10:18 ` [PATCH v5 4/5] kdump: wait for DMA to finish when using CMA Jiri Bohac
2025-06-12 23:47   ` Andrew Morton
2025-06-13  9:19     ` David Hildenbrand
2025-06-14  2:41       ` Baoquan He
2025-06-19 12:46       ` Jiri Bohac
2025-06-12 10:20 ` [PATCH v5 5/5] x86: implement crashkernel cma reservation Jiri Bohac
2025-08-20 15:46 ` [PATCH v5 0/5] kdump: crashkernel reservation from CMA Breno Leitao
2025-08-20 16:20   ` Jiri Bohac
2025-08-21  8:35     ` Breno Leitao
2025-08-22 19:45       ` Jiri Bohac
2025-10-03 15:51 ` Breno Leitao
2025-10-06  8:16   ` David Hildenbrand
2025-10-06 16:25     ` Breno Leitao
2025-10-06 16:45       ` David Hildenbrand
2025-10-06 23:34         ` Tao Liu
2025-10-07  3:55         ` Baoquan He
2025-10-07  9:11           ` Jiri Bohac
2025-10-08 10:42           ` Breno Leitao
2025-10-13  4:03             ` [External] " Zhongkun He
2025-10-13  8:00               ` David Hildenbrand [this message]
2025-10-14  7:36                 ` Zhongkun He
