public inbox for linux-cxl@vger.kernel.org
From: <dan.j.williams@intel.com>
To: Jonathan Cameron <jonathan.cameron@huawei.com>,
	<dan.j.williams@intel.com>
Cc: <dave.jiang@intel.com>, <linux-cxl@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>,
	<Smita.KoralahalliChannabasappa@amd.com>,
	<alison.schofield@intel.com>, <terry.bowman@amd.com>,
	<alejandro.lucero-palau@amd.com>, <linux-pci@vger.kernel.org>,
	Ben Cheatham <benjamin.cheatham@amd.com>,
	Alejandro Lucero <alucerop@amd.com>
Subject: Re: [PATCH v2 6/6] cxl/mem: Introduce cxl_memdev_attach for CXL-dependent operation
Date: Thu, 18 Dec 2025 11:50:10 -0800	[thread overview]
Message-ID: <69445af227f36_1cee1009e@dwillia2-mobl4.notmuch> (raw)
In-Reply-To: <20251218144608.0000238c@huawei.com>

Jonathan Cameron wrote:
> On Wed, 17 Dec 2025 08:27:00 -0800
> dan.j.williams@intel.com wrote:
> 
> > Jonathan Cameron wrote:
> > > On Mon, 15 Dec 2025 16:56:16 -0800
> > > Dan Williams <dan.j.williams@intel.com> wrote:
> > >   
> > > > Unlike the cxl_pci class driver that opportunistically enables memory
> > > > expansion with no other dependent functionality, CXL accelerator drivers
> > > > have distinct PCIe-only and CXL-enhanced operation states. If CXL is
> > > > available some additional coherent memory/cache operations can be enabled,
> > > > otherwise traditional DMA+MMIO over PCIe/CXL.io is a fallback.
> > > > 
> > > > Allow for a driver to pass a routine to be called in cxl_mem_probe()
> > > > context. This ability is inspired by and mirrors the semantics of
> > > > faux_device_create(). It allows for the caller to run CXL-topology
> > > > attach-dependent logic on behalf of the caller.  
> > > 
> > > This seems confusing.   
> > 
> > Is faux_device_create() confusing?
> 
> Just to be clear this question is all about the word 'caller' being repeated
> in that sentence. Not about the code itself or anything else in the explanation
> or flow. 

Oh, sorry, I took it as the design was confusing.

> This comment reads very oddly to me, and I think it means something very
> different from what is going on here.
> 
> > 
> > > The caller is running logic for the caller?  It can do that whenever
> > > it likes!  One of those is presumably callee  
> > 
> > No, it cannot execute CXL topology attach-dependent functionality in
> > the device's initial probe context synchronous with the device-attach
> > event "whenever it likes".
> 
> I'm still lost. Why 'caller to run' ... 'on behalf of the caller.'  In this case
> caller is in both cases the function calling cxl_memdev_alloc()?
> 
> Maybe something like
> 
> "This arranges for CXL-topology attach-dependent logic to be run later, on behalf of
> the caller."
> 
> Though that kind of repeats what follows, so maybe just drop the sentence.

How about this reworked changelog?

---

Unlike the cxl_pci class driver that opportunistically enables memory
expansion with no other dependent functionality, CXL accelerator drivers
have distinct PCIe-only and CXL-enhanced operation states. If CXL is
available some additional coherent memory/cache operations can be enabled,
otherwise traditional DMA+MMIO over PCIe/CXL.io is a fallback.

This constitutes a new mode of operation where the caller of
devm_cxl_add_memdev() wants to make a "go/no-go" decision about running
in CXL accelerated mode or falling back to PCIe-only operation. Part of
that decision making process likely also includes additional
CXL-acceleration-specific resource setup. Encapsulate both of those
requirements into 'struct cxl_memdev_attach' that provides a ->probe()
callback. The probe callback runs in cxl_mem_probe() context, after the
port topology is successfully attached for the given memdev. It supports
a contract where, upon successful return from devm_cxl_add_memdev(),
everything needed for CXL accelerated operation has been enabled.

Additionally, the presence of @cxlmd->attach indicates that the accelerator
driver should be detached when CXL operation ends. This conceptually makes a CXL
link loss event mirror a PCIe link loss event which results in triggering
the ->remove() callback of affected devices+drivers. A driver can re-attach
to recover back to PCIe-only operation. Live recovery, i.e. without a
->remove()/->probe() cycle, is left as a future consideration.

---

> > > > @@ -1081,6 +1093,18 @@ static struct cxl_memdev *cxl_memdev_autoremove(struct cxl_memdev *cxlmd)
> > > >  {
> > > >  	int rc;
> > > >  
> > > > +	/*  
> > > 
> > > The general approach is fine but is the function name appropriate for this
> > > new stuff?  Naturally I'm not suggesting the bikeshed should be any particular
> > > alternative color, just maybe not the current blue.  
> > 
> > The _autoremove() verb appears multiple times in the subsystem, not sure
> > why it is raising bikeshed concerns now. Please send a new proposal if
> > "autoremove" is not jibing.
> 
> It felt like a stretch given the additional code that is not about registering
> autoremove for later, but doing it now under some circumstances.  Ah well I don't
> care that much what it's called.

It is the same semantic as devm_add_action_or_reset() in that if the
"add_action" fails then the "reset" is triggered.

Yes, there is additional code that validates that the device to be
registered for removal is attached to its driver. That organization
supports having a single handoff from scope-based cleanup to devm-based
cleanup.

If you can think of a better organization and name I am open to hearing
options, but nothing better is immediately jumping out at me.


Thread overview: 16+ messages
2025-12-16  0:56 [PATCH v2 0/6] cxl: Initialization reworks to support Soft Reserve Recovery and Accelerator Memory Dan Williams
2025-12-16  0:56 ` [PATCH v2 1/6] cxl/mem: Fix devm_cxl_memdev_edac_release() confusion Dan Williams
2025-12-16  0:56 ` [PATCH v2 2/6] cxl/mem: Arrange for always-synchronous memdev attach Dan Williams
2025-12-16  0:56 ` [PATCH v2 3/6] cxl/port: Arrange for always synchronous endpoint attach Dan Williams
2025-12-16  0:56 ` [PATCH v2 4/6] cxl/mem: Convert devm_cxl_add_memdev() to scope-based-cleanup Dan Williams
2025-12-17 15:48   ` Jonathan Cameron
2025-12-16  0:56 ` [PATCH v2 5/6] cxl/mem: Drop @host argument to devm_cxl_add_memdev() Dan Williams
2025-12-16  0:56 ` [PATCH v2 6/6] cxl/mem: Introduce cxl_memdev_attach for CXL-dependent operation Dan Williams
2025-12-17 15:57   ` Jonathan Cameron
2025-12-17 16:27     ` dan.j.williams
2025-12-18 14:46       ` Jonathan Cameron
2025-12-18 19:50         ` dan.j.williams [this message]
2025-12-19 10:26           ` Jonathan Cameron
2025-12-20  7:31   ` Alejandro Lucero Palau
2026-02-10  5:19   ` Gregory Price
2026-02-10 15:05     ` Dave Jiang
