From: <dan.j.williams@intel.com>
To: Davidlohr Bueso <dave@stgolabs.net>,
<dave.hansen@linux.intel.com>, <peterz@infradead.org>,
<bp@alien8.de>
Cc: <hpa@zytor.com>, <dan.j.williams@intel.com>,
<Jonathan.Cameron@huawei.com>, <dave.jiang@intel.com>,
<dave@stgolabs.net>, <linux-kernel@vger.kernel.org>,
<x86@kernel.org>, <linux-cxl@vger.kernel.org>
Subject: Re: [PATCH] x86, memregion: Avoid big hammer from cpu_cache_invalidate_memregion()
Date: Fri, 23 Jan 2026 16:16:40 -0800
Message-ID: <69740f683fc0a_3095100fe@dwillia2-mobl4.notmuch>
In-Reply-To: <20260122015825.873904-1-dave@stgolabs.net>
Hi Davidlohr,
Davidlohr Bueso wrote:
> The reason for getting away with wbinvd_on_all_cpus() was originally
> that the users at the time were a one-time occurrence at boot, which
> mitigated a lot of the system-wide disruptiveness and cache
> destruction. This has now changed with users such as provisioning
> memory through CXL Dynamic Capacity Devices.
Except the kernel does not support CXL Dynamic Capacity yet.
> Let's instead use clflushopt and only invalidate the range in question.
> Performance of course scales poorly with the region size, but this
> approach is ultimately less invasive.
>
> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
> ---
> arch/x86/mm/pat/set_memory.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 6c6eb486f7a6..4a1c4f6bec17 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -372,6 +372,19 @@ int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len)
> {
> if (WARN_ON_ONCE(!cpu_cache_has_invalidate_memregion()))
> return -ENXIO;
> +
> + if (static_cpu_has(X86_FEATURE_CLFLUSHOPT)) {
> + void *vaddr = memremap(start, len, MEMREMAP_WB);
How much of the cost is in the mapping management?
I was not expecting that virtual-address-based flushing would be
reasonable to call from all the places where
cpu_cache_invalidate_memregion() is called to do physical flushing. If
the concern is the increased frequency of flushing due to dynamic
capacity, and dynamic capacity updates have a chance to be finer
grained, then I would expect some kind of tie into memory hotplug that
can invalidate the cache using the direct map.
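
As a rough, untested sketch of what I mean (the function name is made
up, and it assumes the range has already been added to the linear map
by memory hotplug so that phys_to_virt() is valid for it, which is not
a given for an unmapped CXL window), something along these lines would
avoid setting up a new memremap() on every invalidation:

	/*
	 * Untested sketch: flush by cache line through the direct map,
	 * assuming phys_to_virt() is valid for the range, i.e. the memory
	 * has already been added to the linear map by memory hotplug.
	 */
	static void flush_memregion_direct_map(phys_addr_t start, size_t len)
	{
		void *vaddr = phys_to_virt(start);

		/*
		 * clflush_cache_range() walks the range a cache line at a
		 * time (clflushopt where available) with the necessary
		 * barriers; a real implementation would need to chunk this
		 * since the size parameter is an unsigned int.
		 */
		clflush_cache_range(vaddr, len);
	}

The cost still scales with the region size, but it at least skips the
mapping setup and teardown on each call.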