From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dave Jiang <dave.jiang@intel.com>
Cc: <linux-cxl@vger.kernel.org>, <dan.j.williams@intel.com>,
<ira.weiny@intel.com>, <vishal.l.verma@intel.com>,
<alison.schofield@intel.com>
Subject: Re: [PATCH v5 09/14] cxl: Wait Memory_Info_Valid before access memory related info
Date: Fri, 12 May 2023 16:16:29 +0100 [thread overview]
Message-ID: <20230512161629.00000251@Huawei.com> (raw)
In-Reply-To: <168357886796.2756219.4806167633587850772.stgit@djiang5-mobl3>
On Mon, 08 May 2023 13:47:47 -0700
Dave Jiang <dave.jiang@intel.com> wrote:
> CXL rev3.0 8.1.3.8.2 Memory_Info_valid field
>
> The Memory_Info_Valid bit indicates that the CXL Range Size High and Size
> Low registers are valid. The bit must be set within 1 second of reset
> deassertion to the device. Check valid bit before we check the
> Memory_Active bit when waiting for cxl_await_media_ready() to ensure that
> the memory info is valid for consumption.
>
> Fixes: 2e4ba0ec9783 ("cxl/pci: Move cxl_await_media_ready() to the core")
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Since this is a fix, it should be the first patch in the series (to make it
obvious that backporting is easy, etc.).
This one is hard to review because the diff came out unfriendly, but I
'think' it's fine.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
> ---
> v2:
> - Check both ranges. (Jonathan)
> ---
> drivers/cxl/core/pci.c | 83 +++++++++++++++++++++++++++++++++++++++++++-----
> drivers/cxl/cxlpci.h | 2 +
> 2 files changed, 77 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
> index 6c99a964eb54..536672d469a1 100644
> --- a/drivers/cxl/core/pci.c
> +++ b/drivers/cxl/core/pci.c
> @@ -102,21 +102,55 @@ int devm_cxl_port_enumerate_dports(struct cxl_port *port)
> }
> EXPORT_SYMBOL_NS_GPL(devm_cxl_port_enumerate_dports, CXL);
>
> -/*
> - * Wait up to @media_ready_timeout for the device to report memory
> - * active.
> - */
> -int cxl_await_media_ready(struct cxl_dev_state *cxlds)
> +static int cxl_dvsec_mem_range_valid(struct cxl_dev_state *cxlds, int id)
> +{
> + struct pci_dev *pdev = to_pci_dev(cxlds->dev);
> + int d = cxlds->cxl_dvsec;
> + bool valid = false;
> + int rc, i;
> + u32 temp;
> +
> + if (id > CXL_DVSEC_RANGE_MAX)
> + return -EINVAL;
> +
> + /* Check MEM INFO VALID bit first, give up after 1s */
> + i = 1;
> + do {
> + rc = pci_read_config_dword(pdev,
> + d + CXL_DVSEC_RANGE_SIZE_LOW(id),
> + &temp);
> + if (rc)
> + return rc;
> +
> + valid = FIELD_GET(CXL_DVSEC_MEM_INFO_VALID, temp);
> + if (valid)
> + break;
> + msleep(1000);
> + } while (i--);
> +
> + if (!valid) {
> + dev_err(&pdev->dev,
> + "Timeout awaiting memory range %d valid after 1s.\n",
> + id);
> + return -ETIMEDOUT;
> + }
> +
> + return 0;
> +}
> +
> +static int cxl_dvsec_mem_range_active(struct cxl_dev_state *cxlds, int id)
> {
> struct pci_dev *pdev = to_pci_dev(cxlds->dev);
> int d = cxlds->cxl_dvsec;
> bool active = false;
> - u64 md_status;
> int rc, i;
> + u32 temp;
>
> - for (i = media_ready_timeout; i; i--) {
> - u32 temp;
> + if (id > CXL_DVSEC_RANGE_MAX)
> + return -EINVAL;
>
> + /* Check MEM ACTIVE bit, up to 60s timeout by default */
> + for (i = media_ready_timeout; i; i--) {
> rc = pci_read_config_dword(
> pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(id), &temp);
> if (rc)
> @@ -135,6 +169,39 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds)
> return -ETIMEDOUT;
> }
>
> + return 0;
> +}
> +
> +/*
> + * Wait up to @media_ready_timeout for the device to report memory
> + * active.
> + */
> +int cxl_await_media_ready(struct cxl_dev_state *cxlds)
> +{
> + struct pci_dev *pdev = to_pci_dev(cxlds->dev);
> + int d = cxlds->cxl_dvsec;
> + int rc, i, hdm_count;
> + u64 md_status;
> + u16 cap;
> +
> + rc = pci_read_config_word(pdev,
> + d + CXL_DVSEC_CAP_OFFSET, &cap);
> + if (rc)
> + return rc;
> +
> + hdm_count = FIELD_GET(CXL_DVSEC_HDM_COUNT_MASK, cap);
> + for (i = 0; i < hdm_count; i++) {
> + rc = cxl_dvsec_mem_range_valid(cxlds, i);
> + if (rc)
> + return rc;
> + }
> +
> + for (i = 0; i < hdm_count; i++) {
> + rc = cxl_dvsec_mem_range_active(cxlds, i);
> + if (rc)
> + return rc;
> + }
> +
> md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET);
> if (!CXLMDEV_READY(md_status))
> return -EIO;
> diff --git a/drivers/cxl/cxlpci.h b/drivers/cxl/cxlpci.h
> index 84c9f73c7d92..1772cd226108 100644
> --- a/drivers/cxl/cxlpci.h
> +++ b/drivers/cxl/cxlpci.h
> @@ -31,6 +31,8 @@
> #define CXL_DVSEC_RANGE_BASE_LOW(i) (0x24 + (i * 0x10))
> #define CXL_DVSEC_MEM_BASE_LOW_MASK GENMASK(31, 28)
>
> +#define CXL_DVSEC_RANGE_MAX 2
> +
> /* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */
> #define CXL_DVSEC_FUNCTION_MAP 2
>
>
>
Thread overview: 36+ messages
2023-05-08 20:46 [PATCH v5 00/14] cxl: Add support for QTG ID retrieval for CXL subsystem Dave Jiang
2023-05-08 20:46 ` [PATCH v5 01/14] cxl: Add callback to parse the DSMAS subtables from CDAT Dave Jiang
2023-05-12 14:26 ` Jonathan Cameron
2023-05-08 20:47 ` [PATCH v5 02/14] cxl: Add callback to parse the DSLBIS subtable " Dave Jiang
2023-05-12 14:33 ` Jonathan Cameron
2023-05-08 20:47 ` [PATCH v5 03/14] cxl: Add callback to parse the SSLBIS " Dave Jiang
2023-05-12 14:41 ` Jonathan Cameron
2023-05-16 17:49 ` Dave Jiang
2023-05-08 20:47 ` [PATCH v5 04/14] cxl: Add support for _DSM Function for retrieving QTG ID Dave Jiang
2023-05-12 14:50 ` Jonathan Cameron
2023-05-08 20:47 ` [PATCH v5 05/14] cxl: Calculate and store PCI link latency for the downstream ports Dave Jiang
2023-05-08 20:47 ` [PATCH v5 06/14] cxl: Store the access coordinates for the generic ports Dave Jiang
2023-05-12 14:59 ` Jonathan Cameron
2023-05-16 20:58 ` Dave Jiang
2023-05-16 21:13 ` Dan Williams
2023-05-16 21:52 ` Dave Jiang
2023-05-08 20:47 ` [PATCH v5 07/14] cxl: Add helper function that calculate performance data for downstream ports Dave Jiang
2023-05-12 15:05 ` Jonathan Cameron
2023-05-08 20:47 ` [PATCH v5 08/14] cxl: Compute the entire CXL path latency and bandwidth data Dave Jiang
2023-05-12 15:09 ` Jonathan Cameron
2023-05-08 20:47 ` [PATCH v5 09/14] cxl: Wait Memory_Info_Valid before access memory related info Dave Jiang
2023-05-12 15:16 ` Jonathan Cameron [this message]
2023-05-08 20:47 ` [PATCH v5 10/14] cxl: Move identify and partition query from pci probe to port probe Dave Jiang
2023-05-12 15:17 ` Jonathan Cameron
2023-05-08 20:47 ` [PATCH v5 11/14] cxl: Move read_cdat_data() to after media is ready Dave Jiang
2023-05-12 15:18 ` Jonathan Cameron
2023-05-08 20:48 ` [PATCH v5 12/14] cxl: Store QTG IDs and related info to the CXL memory device context Dave Jiang
2023-05-12 15:30 ` Jonathan Cameron
2023-05-08 20:48 ` [PATCH v5 13/14] cxl: Export sysfs attributes for memory device QoS class Dave Jiang
2023-05-12 15:33 ` Jonathan Cameron
2023-05-08 20:48 ` [PATCH v5 14/14] cxl/mem: Add debugfs output for QTG related data Dave Jiang
2023-05-12 15:36 ` Jonathan Cameron
2023-05-17 22:28 ` Dave Jiang
2023-05-12 15:28 ` [PATCH v5 00/14] cxl: Add support for QTG ID retrieval for CXL subsystem Jonathan Cameron
2023-05-16 21:49 ` Dan Williams
2023-05-17 8:50 ` Jonathan Cameron