Linux PCI subsystem development
From: "Dan Williams (nvidia)" <djbw@kernel.org>
To: smadhavan@nvidia.com,  bhelgaas@google.com,
	 dan.j.williams@intel.com,  dave.jiang@intel.com,
	 jonathan.cameron@huawei.com,  ira.weiny@intel.com,
	 vishal.l.verma@intel.com,  alison.schofield@intel.com,
	 dave@stgolabs.net
Cc: alwilliamson@nvidia.com,  jeshuas@nvidia.com,  vsethi@nvidia.com,
	 skancherla@nvidia.com,  vaslot@nvidia.com,
	 sdonthineni@nvidia.com,  mhonap@nvidia.com,  vidyas@nvidia.com,
	 jan@nvidia.com,  mochs@nvidia.com,  dschumacher@nvidia.com,
	 linux-cxl@vger.kernel.org,  linux-pci@vger.kernel.org,
	 linux-kernel@vger.kernel.org,
	 Srirangan Madhavan <smadhavan@nvidia.com>
Subject: Re: [PATCH v5 5/7] cxl: Add CXL DVSEC reset sequence and flow orchestration
Date: Tue, 12 May 2026 19:45:20 -0700	[thread overview]
Message-ID: <6a03e5c099d98_123a100e3@djbw-dev.notmuch> (raw)
In-Reply-To: <20260306092322.148765-6-smadhavan@nvidia.com>

smadhavan@ wrote:
> From: Srirangan Madhavan <smadhavan@nvidia.com>

Hi Srirangan, some comments below:

> cxl_dev_reset() implements the hardware reset sequence:
> optionally enable memory clear, initiate reset via
> CTRL2, wait for completion, and re-enable caching.
> 
> cxl_do_reset() orchestrates the full reset flow:
>   1. CXL pre-reset: mem offlining and cache flush (when memdev present)
>   2. PCI save/disable: pci_dev_save_and_disable() automatically saves
>      CXL DVSEC and HDM decoder state via PCI core hooks
>   3. Sibling coordination: save/disable CXL.cachemem sibling functions
>   4. Execute CXL DVSEC reset
>   5. Sibling restore: always runs to re-enable sibling functions
>   6. PCI restore: pci_dev_restore() automatically restores CXL state
> 
> The CXL-specific DVSEC and HDM save/restore is handled
> by the PCI core's CXL save/restore infrastructure (drivers/pci/cxl.c).

I think the PCI core rules for pci_dev_save_and_disable() really only
work for a device that can be reliably quarantined off the bus simply by
writing to its command register. I am also uncomfortable saving new MMIO
data with new iomap mappings at pci_save_state() time, remapping MMIO
space, and rewriting the command register at pci_restore_state() time.
That feels like it is stretching the PCI state save model, and runs
counter to other MMIO restore scenarios.

Let's start by assuming similar behavior to MSI, where restoration of
MMIO is driven from other cached configuration, not PCI state save. For
example, for MSI there is nothing to do at restore time if a driver has
not first enabled interrupts.

The similarity for CXL would be that if nothing has ever registered a
CXL memdev then it is indeterminate whether Linux has ever successfully
parsed the CXL configuration and "enabled CXL".

The downside of this is losing the ability to save firmware initiated
CXL state from PCI reset paths. That loss is mitigated somewhat by SBR
masking and the fact that FLR does not reset CXL state. So those resets
do not destroy CXL HDM state by default.

Otherwise, if cxl_reset moves to a cxl_memdev sysfs attribute then the
PCI ABI stays free of CXL concerns. More below:

> Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
> ---
>  drivers/cxl/core/pci.c | 181 ++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 179 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
> index b6f10a2cb404..c758b3f1b3f9 100644
> --- a/drivers/cxl/core/pci.c
> +++ b/drivers/cxl/core/pci.c
> @@ -1078,7 +1078,7 @@ static int cxl_reset_collect_sibling(struct pci_dev *func, void *data)
>  	return 0;
>  }
> 
> -static void __maybe_unused cxl_pci_functions_reset_prepare(struct cxl_reset_context *ctx)
> +static void cxl_pci_functions_reset_prepare(struct cxl_reset_context *ctx)
>  {
>  	struct pci_dev *pdev = ctx->target;
>  	struct cxl_reset_walk_ctx wctx;
> @@ -1103,7 +1103,7 @@ static void __maybe_unused cxl_pci_functions_reset_prepare(struct cxl_reset_cont
>  	}
>  }
> 
> -static void __maybe_unused cxl_pci_functions_reset_done(struct cxl_reset_context *ctx)
> +static void cxl_pci_functions_reset_done(struct cxl_reset_context *ctx)
>  {
>  	int i;
> 
> @@ -1116,3 +1116,180 @@ static void __maybe_unused cxl_pci_functions_reset_done(struct cxl_reset_context
>  	ctx->pci_functions = NULL;
>  	ctx->pci_func_count = 0;
>  }
> +
> +/*
> + * CXL device reset execution
> + */
> +static int cxl_dev_reset(struct pci_dev *pdev, int dvsec)
> +{
> +	static const u32 reset_timeout_ms[] = { 10, 100, 1000, 10000, 100000 };
> +	u16 cap, ctrl2, status2;
> +	u32 timeout_ms;
> +	int rc, idx;
> +
> +	if (!pci_wait_for_pending_transaction(pdev))
> +		pci_err(pdev, "timed out waiting for pending transactions\n");
> +
> +	rc = pci_read_config_word(pdev, dvsec + PCI_DVSEC_CXL_CAP, &cap);
> +	if (rc)
> +		return rc;
> +
> +	rc = pci_read_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2, &ctrl2);
> +	if (rc)
> +		return rc;
> +
> +	/*
> +	 * Disable caching and initiate cache writeback+invalidation if the
> +	 * device supports it. Poll for completion.
> +	 * Per CXL r3.2 section 9.6, software may use the cache size from
> +	 * DVSEC CXL Capability2 to compute a suitable timeout; we use a
> +	 * default of 10ms.
> +	 */
> +	if (cap & PCI_DVSEC_CXL_CACHE_WBI_CAPABLE) {
> +		u32 wbi_poll_us = 100;
> +		s32 wbi_remaining_us = 10000;
> +
> +		ctrl2 |= PCI_DVSEC_CXL_DISABLE_CACHING;
> +		rc = pci_write_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2,
> +					   ctrl2);
> +		if (rc)
> +			return rc;
> +
> +		ctrl2 |= PCI_DVSEC_CXL_INIT_CACHE_WBI;
> +		rc = pci_write_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2,
> +					   ctrl2);
> +		if (rc)
> +			return rc;
> +
> +		do {
> +			usleep_range(wbi_poll_us, wbi_poll_us + 1);
> +			wbi_remaining_us -= wbi_poll_us;
> +			rc = pci_read_config_word(pdev,
> +						  dvsec + PCI_DVSEC_CXL_STATUS2,
> +						  &status2);
> +			if (rc)
> +				return rc;
> +		} while (!(status2 & PCI_DVSEC_CXL_CACHE_INV) &&
> +			 wbi_remaining_us > 0);
> +
> +		if (!(status2 & PCI_DVSEC_CXL_CACHE_INV)) {
> +			pci_err(pdev, "CXL cache WB+I timed out\n");
> +			return -ETIMEDOUT;
> +		}
> +	} else if (cap & PCI_DVSEC_CXL_CACHE_CAPABLE) {
> +		ctrl2 |= PCI_DVSEC_CXL_DISABLE_CACHING;
> +		rc = pci_write_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2,
> +					   ctrl2);
> +		if (rc)
> +			return rc;
> +	}
> +
> +	if (cap & PCI_DVSEC_CXL_RST_MEM_CLR_CAPABLE) {
> +		rc = pci_read_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2,
> +					  &ctrl2);
> +		if (rc)
> +			return rc;
> +
> +		ctrl2 |= PCI_DVSEC_CXL_RST_MEM_CLR_EN;
> +		rc = pci_write_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2,
> +					   ctrl2);
> +		if (rc)
> +			return rc;
> +	}
> +
> +	idx = FIELD_GET(PCI_DVSEC_CXL_RST_TIMEOUT, cap);
> +	if (idx >= ARRAY_SIZE(reset_timeout_ms))
> +		idx = ARRAY_SIZE(reset_timeout_ms) - 1;
> +	timeout_ms = reset_timeout_ms[idx];
> +
> +	rc = pci_read_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2, &ctrl2);
> +	if (rc)
> +		return rc;
> +
> +	ctrl2 |= PCI_DVSEC_CXL_INIT_CXL_RST;
> +	rc = pci_write_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2, ctrl2);
> +	if (rc)
> +		return rc;
> +
> +	msleep(timeout_ms);

Should this be msleep(max(100, timeout_ms)) just to honor the
specification note:

"9.7.3 CXL Reset and Request Retry Status (RRS) ...The device behavior
in response to Configuration Space access to the device within 100 ms of
initiating a CXL Reset is undefined"

...not sure why the specification allowed for devices to specify less
than 100ms timeout values.

> +
> +	rc = pci_read_config_word(pdev, dvsec + PCI_DVSEC_CXL_STATUS2,
> +				  &status2);
> +	if (rc)
> +		return rc;
> +
> +	if (status2 & PCI_DVSEC_CXL_RST_ERR) {
> +		pci_err(pdev, "CXL reset error\n");
> +		return -EIO;
> +	}
> +
> +	if (!(status2 & PCI_DVSEC_CXL_RST_DONE)) {
> +		pci_err(pdev, "CXL reset timeout\n");
> +		return -ETIMEDOUT;
> +	}
> +
> +	rc = pci_read_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2, &ctrl2);
> +	if (rc)
> +		return rc;
> +
> +	ctrl2 &= ~PCI_DVSEC_CXL_DISABLE_CACHING;
> +	rc = pci_write_config_word(pdev, dvsec + PCI_DVSEC_CXL_CTRL2, ctrl2);
> +	if (rc)
> +		return rc;
> +
> +	return 0;
> +}
> +
> +static int match_memdev_by_parent(struct device *dev, const void *parent)
> +{
> +	return is_cxl_memdev(dev) && dev->parent == parent;
> +}
> +
> +static int cxl_do_reset(struct pci_dev *pdev)
> +{
> +	struct cxl_reset_context ctx = { .target = pdev };
> +	struct cxl_memdev *cxlmd = NULL;
> +	struct device *memdev = NULL;
> +	int dvsec, rc;
> +
> +	dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL,
> +					  PCI_DVSEC_CXL_DEVICE);
> +	if (!dvsec)
> +		return -ENODEV;
> +
> +	memdev = bus_find_device(&cxl_bus_type, NULL, &pdev->dev,
> +				 match_memdev_by_parent);
> +	if (memdev) {
> +		cxlmd = to_cxl_memdev(memdev);
> +		guard(device)(&cxlmd->dev);

The guard's scope ends at this closing brace, so it is a nop.

> +	}
> +
> +	mutex_lock(&cxl_reset_mutex);
> +	pci_dev_lock(pdev);

Given that CXL drivers create memdevs the lock ordering needs to be
pci_dev_lock(), then guard(device)(&cxlmd->dev).

> +	if (cxlmd) {
> +		rc = cxl_reset_prepare_memdev(cxlmd);

Ok, so per the lead in comment, a cxl_memdev becomes a requirement, not
a trigger for opportunistic safety.

I do think it ultimately begs the question of whether the PCI core
should one day automatically register cxl devices for this purpose. For
now the assumption is a driver, like vfio-pci, has registered a memdev
to get a memX/cxl_reset interface published in sysfs.

The presence of a cxl_memdev allows coordinating not only region
discovery, but also reconfiguration locking. With the memdev we have a
chance to ask the questions:

1/ are any cacheable mappings of this device established?
2/ is that answer durable for the duration of the reset?
3/ can the HDM configuration be recovered from the cxl_memdev to
   cxl_root decoder association?

In the short term the fastest answer to those questions is just to walk
the memdev and cxl_root; in the longer term I would be open to building
more CXL awareness into the core so that CXL reset does not need a
driver loaded.

As for the safety guarantees, I do not think cxl_reset_prepare_memdev()
should be trying to hot unplug memory. Either the caller has arranged
for CXL to be idle or the reset fails.

With cxl_acpi and cxl_mem loaded you can pin down the HPA space for the
duration of the reset. That approach has a chance of being compatible
with all CXL reset use cases.

