Linux CXL
From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: <linux-cxl@vger.kernel.org>,
	Dan Carpenter <dan.carpenter@oracle.com>,
	Ariel Sibley <ariel.sibley@microchip.com>, <ira.weiny@intel.com>
Subject: Re: [PATCH v2 14/14] cxl/port: Enable HDM Capability after validating DVSEC Ranges
Date: Wed, 18 May 2022 18:17:10 +0100	[thread overview]
Message-ID: <20220518181710.000077fb@Huawei.com> (raw)
In-Reply-To: <165283418817.1033989.11273676872054815459.stgit@dwillia2-xfh>

On Tue, 17 May 2022 17:38:10 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> CXL memory expanders that support the CXL 2.0 memory device class code
> include an "HDM Decoder Capability" mechanism to supplant the "CXL DVSEC
> Range" mechanism originally defined in CXL 1.1. Both mechanisms depend
> on a "mem_enable" bit being set in configuration space before either
> mechanism activates. When the HDM Decoder Capability is enabled the CXL
> DVSEC Range settings are ignored.
> 
> Previously, the cxl_mem driver was relying on platform-firmware to set
> "mem_enable". That is an invalid assumption as there is no requirement
> that platform-firmware sets the bit before the driver sees a device,
> especially in hot-plug scenarios. Additionally, ACPI-platforms that
> support CXL 2.0 devices also support the ACPI CEDT (CXL Early Discovery
> Table). That table outlines the platform permissible address ranges for
> CXL operation. So, there is a need for the driver to set "mem_enable",
> and there is information available to determine the validity of the CXL
> DVSEC Ranges. While DVSEC Ranges are expected to be at least
> 256M in size, the specification (CXL 2.0 Section 8.1.3.8.4 DVSEC CXL
> Range 1 Base Low) allows for the possibility of devices smaller than
> 256M. So the range [0, 256M) is considered active even if Memory_size
> is 0.
> 
> Arrange for the driver to optionally enable the HDM Decoder Capability
> if "mem_enable" was not set by platform firmware, or the CXL DVSEC Range
> configuration was invalid. Be careful to only disable memory decode if
> the kernel was the one to enable it. In other words, if CXL is backing
> all of kernel memory at boot the device needs to maintain "mem_enable"
> and "HDM Decoder enable" all the way up to handoff back to platform
> firmware (e.g. ACPI S5 state entry may require CXL memory to stay
> active).
> 
> Fixes: 560f78559006 ("cxl/pci: Retrieve CXL DVSEC memory info")
> Cc: Dan Carpenter <dan.carpenter@oracle.com>
> [dan: fix early termination of range-allowed loop]
> Cc: Ariel Sibley <ariel.sibley@microchip.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
> Changes since v1:
> - Fix range-allowed loop termination (Smatch / Dan)
That had me confused before I saw v2 :)

I'm not keen on the trick of building "disallowed" out of a "%sallowed"
format in the debug message...

Other than ongoing discussion around the range being always allowed
(or not) this looks good to me.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

> - Clean up changelog wording around why [0, 256M) is considered always
>   active (Ariel)
> 
>  drivers/cxl/core/pci.c |  163 ++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 151 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
> index a697c48fc830..528430da0e77 100644
> --- a/drivers/cxl/core/pci.c
> +++ b/drivers/cxl/core/pci.c
> @@ -175,30 +175,164 @@ static int wait_for_valid(struct cxl_dev_state *cxlds)
>  	return -ETIMEDOUT;
>  }
>  
> +static int cxl_set_mem_enable(struct cxl_dev_state *cxlds, u16 val)
> +{
> +	struct pci_dev *pdev = to_pci_dev(cxlds->dev);
> +	int d = cxlds->cxl_dvsec;
> +	u16 ctrl;
> +	int rc;
> +
> +	rc = pci_read_config_word(pdev, d + CXL_DVSEC_CTRL_OFFSET, &ctrl);
> +	if (rc < 0)
> +		return rc;
> +
> +	if ((ctrl & CXL_DVSEC_MEM_ENABLE) == val)
> +		return 1;
> +	ctrl &= ~CXL_DVSEC_MEM_ENABLE;
> +	ctrl |= val;
> +
> +	rc = pci_write_config_word(pdev, d + CXL_DVSEC_CTRL_OFFSET, ctrl);
> +	if (rc < 0)
> +		return rc;
> +
> +	return 0;
> +}
> +
> +static void clear_mem_enable(void *cxlds)
> +{
> +	cxl_set_mem_enable(cxlds, 0);
> +}
> +
> +static int devm_cxl_enable_mem(struct device *host, struct cxl_dev_state *cxlds)
> +{
> +	int rc;
> +
> +	rc = cxl_set_mem_enable(cxlds, CXL_DVSEC_MEM_ENABLE);
> +	if (rc < 0)
> +		return rc;
> +	if (rc > 0)
> +		return 0;
> +	return devm_add_action_or_reset(host, clear_mem_enable, cxlds);
> +}
> +
> +static bool range_contains(struct range *r1, struct range *r2)
> +{
> +	return r1->start <= r2->start && r1->end >= r2->end;
> +}
> +
> +/* require dvsec ranges to be covered by a locked platform window */
> +static int dvsec_range_allowed(struct device *dev, void *arg)
> +{
> +	struct range *dev_range = arg;
> +	struct cxl_decoder *cxld;
> +	struct range root_range;
> +
> +	if (!is_root_decoder(dev))
> +		return 0;
> +
> +	cxld = to_cxl_decoder(dev);
> +
> +	if (!(cxld->flags & CXL_DECODER_F_LOCK))
> +		return 0;
> +	if (!(cxld->flags & CXL_DECODER_F_RAM))
> +		return 0;
> +
> +	root_range = (struct range) {
> +		.start = cxld->platform_res.start,
> +		.end = cxld->platform_res.end,
> +	};
> +
> +	return range_contains(&root_range, dev_range);
> +}
> +
> +static void disable_hdm(void *_cxlhdm)
> +{
> +	u32 global_ctrl;
> +	struct cxl_hdm *cxlhdm = _cxlhdm;
> +	void __iomem *hdm = cxlhdm->regs.hdm_decoder;
> +
> +	global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET);
> +	writel(global_ctrl & ~CXL_HDM_DECODER_ENABLE,
> +	       hdm + CXL_HDM_DECODER_CTRL_OFFSET);
> +}
> +
> +static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm)
> +{
> +	void __iomem *hdm = cxlhdm->regs.hdm_decoder;
> +	u32 global_ctrl;
> +
> +	global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET);
> +	writel(global_ctrl | CXL_HDM_DECODER_ENABLE,
> +	       hdm + CXL_HDM_DECODER_CTRL_OFFSET);
> +
> +	return devm_add_action_or_reset(host, disable_hdm, cxlhdm);
> +}
> +
>  static bool __cxl_hdm_decode_init(struct cxl_dev_state *cxlds,
>  				  struct cxl_hdm *cxlhdm,
>  				  struct cxl_endpoint_dvsec_info *info)
>  {
>  	void __iomem *hdm = cxlhdm->regs.hdm_decoder;
> -	bool global_enable;
> +	struct cxl_port *port = cxlhdm->port;
> +	struct device *dev = cxlds->dev;
> +	struct cxl_port *root;
> +	int i, rc, allowed;
>  	u32 global_ctrl;
>  
>  	global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET);
> -	global_enable = global_ctrl & CXL_HDM_DECODER_ENABLE;
>  
> -	if (!global_enable && info->mem_enabled)
> +	/*
> +	 * If the HDM Decoder Capability is already enabled then assume
> +	 * that some other agent like platform firmware set it up.
> +	 */
> +	if (global_ctrl & CXL_HDM_DECODER_ENABLE) {
> +		rc = devm_cxl_enable_mem(&port->dev, cxlds);
> +		if (rc)
> +			return false;
> +		return true;
> +	}
> +
> +	root = to_cxl_port(port->dev.parent);
> +	while (!is_cxl_root(root) && is_cxl_port(root->dev.parent))
> +		root = to_cxl_port(root->dev.parent);
> +	if (!is_cxl_root(root)) {
> +		dev_err(dev, "Failed to acquire root port for HDM enable\n");
>  		return false;
> +	}
> +
> +	for (i = 0, allowed = 0; info->mem_enabled && i < info->ranges; i++) {
> +		struct device *cxld_dev;
> +
> +		cxld_dev = device_find_child(&root->dev, &info->dvsec_range[i],
> +					     dvsec_range_allowed);
> +		dev_dbg(dev, "DVSEC Range%d %sallowed by platform\n", i,
> +			cxld_dev ? "" : "dis");

Ouch.  Not worth doing that to save a few chars. Makes the message
harder to grep for.

> +		if (!cxld_dev)
> +			continue;
> +		put_device(cxld_dev);
> +		allowed++;
> +	}
> +	put_device(&root->dev);
> +
> +	if (!allowed) {
> +		cxl_set_mem_enable(cxlds, 0);
> +		info->mem_enabled = 0;
> +	}
