From: <dan.j.williams@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Peter Zijlstra <peterz@infradead.org>, <linuxarm@huawei.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>,
Catalin Marinas <catalin.marinas@arm.com>, <james.morse@arm.com>,
<linux-cxl@vger.kernel.org>,
<linux-arm-kernel@lists.infradead.org>,
<linux-acpi@vger.kernel.org>, <linux-arch@vger.kernel.org>,
<linux-mm@kvack.org>, <gregkh@linuxfoundation.org>,
Will Deacon <will@kernel.org>,
Dan Williams <dan.j.williams@intel.com>,
Davidlohr Bueso <dave@stgolabs.net>,
Yicong Yang <yangyicong@huawei.com>,
Yushan Wang <wangyushan12@huawei.com>,
Lorenzo Pieralisi <lpieralisi@kernel.org>,
"Mark Rutland" <mark.rutland@arm.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
<x86@kernel.org>, Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH v2 0/8] Cache coherency management subsystem
Date: Wed, 9 Jul 2025 22:32:16 -0700 [thread overview]
Message-ID: <686f506020726_1d3d10069@dwillia2-xfh.jf.intel.com.notmuch> (raw)
In-Reply-To: <20250626105530.000010be@huawei.com>

Jonathan Cameron wrote:
> On Wed, 25 Jun 2025 18:03:43 +0100
> Jonathan Cameron <Jonathan.Cameron@huawei.com> wrote:
>
> > On Wed, 25 Jun 2025 11:31:52 +0200
> > Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > > On Wed, Jun 25, 2025 at 02:12:39AM -0700, H. Peter Anvin wrote:
> > > > On June 25, 2025 1:52:04 AM PDT, Peter Zijlstra <peterz@infradead.org> wrote:
> > > > >On Tue, Jun 24, 2025 at 04:47:56PM +0100, Jonathan Cameron wrote:
> > > > >
> > > > >> On x86 there is the much loved WBINVD instruction that causes a write back
> > > > >> and invalidate of all caches in the system. It is expensive but it is
> > > > >
> > > > >Expensive is not the only problem. It actively interferes with things
> > > > >like Cache-Allocation-Technology (RDT-CAT for the intel folks). Doing
> > > > >WBINVD utterly destroys the cache subsystem for everybody on the
> > > > >machine.
> > > > >
> > > > >> necessary in a few corner cases.
> > > > >
> > > > >Don't we have things like CLFLUSH/CLFLUSHOPT/CLWB exactly so that we can
> > > > >avoid doing dumb things like WBINVD ?!?
> > > > >
> > > > >> These are cases where the contents of
> > > > >> Physical Memory may change without any writes from the host. Whilst there
> > > > >> are a few reasons this might happen, the one I care about here is when
> > > > >> we are adding or removing mappings on CXL. So typically going from
> > > > >> there being actual memory at a host Physical Address to nothing there
> > > > >> (reads as zero, writes dropped) or vice versa.
> > > > >
> > > > >> The
> > > > >> thing that makes it very hard to handle with CPU flushes is that the
> > > > >> instructions are normally VA based and not guaranteed to reach beyond
> > > > >> the Point of Coherence or similar. You might be able to (ab)use
> > > > >> various flush operations intended for persistent memory, but
> > > > >> in general they don't work either.
> > > > >
> > > > >Urgh so this. Dan, Dave, are we getting new instructions to deal with
> > > > >this? I'm really not keen on having WBINVD in active use.
> > > > >
> > > >
> > > > WBINVD is the nuclear weapon to use when you have lost all notion of
> > > > where the problematic data can be, and amounts to a full reset of the
> > > > cache system.
> > > >
> > > > WBINVD can block interrupts for many *milliseconds*, system wide, and
> > > > so is really only useful for once-per-boot type events, like MTRR
> > > > initialization.
> > >
> > > Right this... But that CXL thing sounds like that's semi 'regular' to
> > > the point that providing some infrastructure around it makes sense. This
> > > should not be.
> >
> > I'm fully on board with the WBINVD issues (and hope for something new for
> > the X86 world). However, this particular infrastructure (for those systems
> > that can do so) is about pushing the problem and information to where it
> > can be handled in a lot less disruptive fashion. It can take 'a while' but
> > we are flushing only cache entries in the requested PA range. Other than
> > some potential excess snoop traffic if the coherency tracking isn't precise,
> > there should be a limited effect on the rest of the system.
> >
> > So, for the systems I particularly care about, the CXL case isn't that bad.
> >
> > Just for giggles, if you want some horror stories the (dropped) ARM PSCI
> > spec provides for approaches that require synchronization of calls across
> > all CPUs.
> >
> > "CPU Rendezvous" in the attributes of CLEAN_INV_MEMREGION requires all
> > CPUs to make a call within an impdef (discoverable) timeout.
> > https://developer.arm.com/documentation/den0022/falp1/?lang=en
> >
> > I gather no one actually needs that on 'real' systems - that is systems
> > where we actually need to do these flushes! The ACPI 'RFC' doesn't support
> > that delight.
>
> Seems I introduced some confusion. Let me try summarizing:
>
> 1. x86 has a potential feature gap. From a CXL ecosystem point of view I'd
> like to see that gap closed. (Inappropriate for me to make any proposals
> on how to do it on that architecture).
I disagree that this is an x86 feature gap. This is CXL exporting complexity
to Linux. Linux is better served in the long term by CXL cleaning up
that problem than by Linux deploying more software mitigations.
> 2. This patch set has nothing to do with x86 (beyond modifying a function
> signature). The hardware it is targeting avoids many of the issues around
> WBINVD. The solution is not specific to ARM64, though the implementation
> I care about is on an ARM64 implementation.
>
> Right now, on x86 we have a functionally correct solution; this patch set
> adds infrastructure and two implementations to provide similar support for
> other architectures.
Theoretically there could be a threshold at which a CLFLUSHOPT loop is a
better option, but I would rather it be the case* that software CXL
cache management is a stop-gap for early-generation CXL platforms.
* personal kernel developer opinion, not necessarily opinion of $employer