From: Ben Widawsky <ben.widawsky@intel.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: linux-cxl@vger.kernel.org, linux-pci@vger.kernel.org,
nvdimm@lists.linux.dev
Subject: Re: [PATCH v3 22/40] cxl/core/hdm: Add CXL standard decoder enumeration to the core
Date: Mon, 31 Jan 2022 16:24:35 -0800 [thread overview]
Message-ID: <20220201002435.oodbf3xuhb7xknus@intel.com> (raw)
In-Reply-To: <164298423561.3018233.8938479363856921038.stgit@dwillia2-desk3.amr.corp.intel.com>
On 22-01-23 16:30:35, Dan Williams wrote:
> Unlike the decoder enumeration for "root decoders" described by platform
> firmware, standard coders can be enumerated from the component registers
^ decoders
> space once the base address has been identified (via PCI, ACPI, or
> another mechanism).
>
> Add common infrastructure for HDM (Host-managed-Device-Memory) Decoder
> enumeration and share it between host-bridge, upstream switch port, and
> cxl_test defined decoders.
>
> The locking model for switch level decoders is to hold the port lock
> over the enumeration. This facilitates moving the dport and decoder
> enumeration to a 'port' driver. For now, the only enumerator of decoder
> resources is the cxl_acpi root driver.
>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
I authored some parts of this patch, though I'm not sure what percentage. If
dropping my authorship tag was intentional, that's fine - just checking.
Some comments below.
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
> ---
> drivers/cxl/acpi.c | 43 ++-----
> drivers/cxl/core/Makefile | 1
> drivers/cxl/core/core.h | 2
> drivers/cxl/core/hdm.c | 247 +++++++++++++++++++++++++++++++++++++++++
> drivers/cxl/core/port.c | 65 ++++++++---
> drivers/cxl/core/regs.c | 5 -
> drivers/cxl/cxl.h | 33 ++++-
> drivers/cxl/cxlmem.h | 8 +
> tools/testing/cxl/Kbuild | 4 +
> tools/testing/cxl/test/cxl.c | 29 +++++
> tools/testing/cxl/test/mock.c | 50 ++++++++
> tools/testing/cxl/test/mock.h | 3
> 12 files changed, 436 insertions(+), 54 deletions(-)
> create mode 100644 drivers/cxl/core/hdm.c
>
> diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c
> index 259441245687..8c2ced91518b 100644
> --- a/drivers/cxl/acpi.c
> +++ b/drivers/cxl/acpi.c
> @@ -168,10 +168,10 @@ static int add_host_bridge_uport(struct device *match, void *arg)
> struct device *host = root_port->dev.parent;
> struct acpi_device *bridge = to_cxl_host_bridge(host, match);
> struct acpi_pci_root *pci_root;
> - int single_port_map[1], rc;
> - struct cxl_decoder *cxld;
> struct cxl_dport *dport;
> + struct cxl_hdm *cxlhdm;
> struct cxl_port *port;
> + int rc;
>
> if (!bridge)
> return 0;
> @@ -200,37 +200,24 @@ static int add_host_bridge_uport(struct device *match, void *arg)
> rc = devm_cxl_port_enumerate_dports(host, port);
> if (rc < 0)
> return rc;
> - if (rc > 1)
> - return 0;
> -
> - /* TODO: Scan CHBCR for HDM Decoder resources */
> -
> - /*
> - * Per the CXL specification (8.2.5.12 CXL HDM Decoder Capability
> - * Structure) single ported host-bridges need not publish a decoder
> - * capability when a passthrough decode can be assumed, i.e. all
> - * transactions that the uport sees are claimed and passed to the single
> - * dport. Disable the range until the first CXL region is enumerated /
> - * activated.
> - */
> - cxld = cxl_switch_decoder_alloc(port, 1);
> - if (IS_ERR(cxld))
> - return PTR_ERR(cxl);
> -
> cxl_device_lock(&port->dev);
> - dport = list_first_entry(&port->dports, typeof(*dport), list);
> - cxl_device_unlock(&port->dev);
> + if (rc == 1) {
> + rc = devm_cxl_add_passthrough_decoder(host, port);
> + goto out;
> + }
>
> - single_port_map[0] = dport->port_id;
> + cxlhdm = devm_cxl_setup_hdm(host, port);
> + if (IS_ERR(cxlhdm)) {
> + rc = PTR_ERR(cxlhdm);
> + goto out;
> + }
>
> - rc = cxl_decoder_add(cxld, single_port_map);
> + rc = devm_cxl_enumerate_decoders(host, cxlhdm);
> if (rc)
> - put_device(&cxld->dev);
> - else
> - rc = cxl_decoder_autoremove(host, cxld);
> + dev_err(&port->dev, "Couldn't enumerate decoders (%d)\n", rc);
>
> - if (rc == 0)
> - dev_dbg(host, "add: %s\n", dev_name(&cxld->dev));
> +out:
> + cxl_device_unlock(&port->dev);
> return rc;
> }
>
> diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
> index 91057f0ec763..6d37cd78b151 100644
> --- a/drivers/cxl/core/Makefile
> +++ b/drivers/cxl/core/Makefile
> @@ -8,3 +8,4 @@ cxl_core-y += regs.o
> cxl_core-y += memdev.o
> cxl_core-y += mbox.o
> cxl_core-y += pci.o
> +cxl_core-y += hdm.o
> diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
> index e0c9aacc4e9c..1a50c0fc399c 100644
> --- a/drivers/cxl/core/core.h
> +++ b/drivers/cxl/core/core.h
> @@ -14,6 +14,8 @@ struct cxl_mem_query_commands;
> int cxl_query_cmd(struct cxl_memdev *cxlmd,
> struct cxl_mem_query_commands __user *q);
> int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s);
> +void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr,
> + resource_size_t length);
>
> int cxl_memdev_init(void);
> void cxl_memdev_exit(void);
> diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
> new file mode 100644
> index 000000000000..802048dc2046
> --- /dev/null
> +++ b/drivers/cxl/core/hdm.c
> @@ -0,0 +1,247 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
> +#include <linux/io-64-nonatomic-hi-lo.h>
> +#include <linux/device.h>
> +#include <linux/delay.h>
> +
> +#include "cxlmem.h"
> +#include "core.h"
> +
> +/**
> + * DOC: cxl core hdm
> + *
> + * Compute Express Link Host Managed Device Memory, starting with the
> + * CXL 2.0 specification, is managed by an array of HDM Decoder register
> + * instances per CXL port and per CXL endpoint. Define common helpers
> + * for enumerating these registers and capabilities.
> + */
> +
> +static int add_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld,
> + int *target_map)
> +{
> + int rc;
> +
> + rc = cxl_decoder_add_locked(cxld, target_map);
> + if (rc) {
> + put_device(&cxld->dev);
> + dev_err(&port->dev, "Failed to add decoder\n");
> + return rc;
> + }
> +
> + rc = cxl_decoder_autoremove(&port->dev, cxld);
> + if (rc)
> + return rc;
> +
> + dev_dbg(&cxld->dev, "Added to port %s\n", dev_name(&port->dev));
> +
> + return 0;
> +}
> +
> +/*
> + * Per the CXL specification (8.2.5.12 CXL HDM Decoder Capability Structure)
> + * single ported host-bridges need not publish a decoder capability when a
> + * passthrough decode can be assumed, i.e. all transactions that the uport sees
> + * are claimed and passed to the single dport. Disable the range until the first
> + * CXL region is enumerated / activated.
> + */
> +int devm_cxl_add_passthrough_decoder(struct device *host, struct cxl_port *port)
> +{
> + struct cxl_decoder *cxld;
> + struct cxl_dport *dport;
> + int single_port_map[1];
> +
> + cxld = cxl_switch_decoder_alloc(port, 1);
> + if (IS_ERR(cxld))
> + return PTR_ERR(cxld);
> +
> + device_lock_assert(&port->dev);
> +
> + dport = list_first_entry(&port->dports, typeof(*dport), list);
> + single_port_map[0] = dport->port_id;
> +
> + return add_hdm_decoder(port, cxld, single_port_map);
> +}
> +EXPORT_SYMBOL_NS_GPL(devm_cxl_add_passthrough_decoder, CXL);
Hmm, this makes me realize I need to modify the region driver to not care about
finding decoder resources for a passthrough decoder.
> +
> +static void parse_hdm_decoder_caps(struct cxl_hdm *cxlhdm)
> +{
> + u32 hdm_cap;
> +
> + hdm_cap = readl(cxlhdm->regs.hdm_decoder + CXL_HDM_DECODER_CAP_OFFSET);
> + cxlhdm->decoder_count = cxl_hdm_decoder_count(hdm_cap);
> + cxlhdm->target_count =
> + FIELD_GET(CXL_HDM_DECODER_TARGET_COUNT_MASK, hdm_cap);
> + if (FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_11_8, hdm_cap))
> + cxlhdm->interleave_mask |= GENMASK(11, 8);
> + if (FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_14_12, hdm_cap))
> + cxlhdm->interleave_mask |= GENMASK(14, 12);
> +}
> +
> +static void __iomem *map_hdm_decoder_regs(struct cxl_port *port,
> + void __iomem *crb)
> +{
> + struct cxl_register_map map;
> + struct cxl_component_reg_map *comp_map = &map.component_map;
> +
> + cxl_probe_component_regs(&port->dev, crb, comp_map);
> + if (!comp_map->hdm_decoder.valid) {
> + dev_err(&port->dev, "HDM decoder registers invalid\n");
> + return IOMEM_ERR_PTR(-ENXIO);
> + }
> +
> + return crb + comp_map->hdm_decoder.offset;
> +}
> +
> +/**
> + * devm_cxl_setup_hdm - map HDM decoder component registers
> + * @port: cxl_port to map
> + */
This kernel-doc got messed up in the fixup. You need both @host and @port
documented at this point. It'd be pretty cool if we could skip straight to not
needing the @host arg at all.
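For reference, something like this would match the current signature (assuming
@host sticks around for this revision):

```
/**
 * devm_cxl_setup_hdm - map HDM decoder component registers
 * @host: devm context for allocations and the register mapping
 * @port: cxl_port to map
 */
```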
> +struct cxl_hdm *devm_cxl_setup_hdm(struct device *host, struct cxl_port *port)
> +{
> + void __iomem *crb, __iomem *hdm;
> + struct device *dev = &port->dev;
> + struct cxl_hdm *cxlhdm;
> +
> + cxlhdm = devm_kzalloc(host, sizeof(*cxlhdm), GFP_KERNEL);
> + if (!cxlhdm)
> + return ERR_PTR(-ENOMEM);
> +
> + cxlhdm->port = port;
> + crb = devm_cxl_iomap_block(host, port->component_reg_phys,
> + CXL_COMPONENT_REG_BLOCK_SIZE);
> + if (!crb) {
> + dev_err(dev, "No component registers mapped\n");
> + return ERR_PTR(-ENXIO);
> + }
Does this work if the port is operating in passthrough decoder mode? Is the idea
to just not call this thing if so?
> +
> + hdm = map_hdm_decoder_regs(port, crb);
> + if (IS_ERR(hdm))
> + return ERR_CAST(hdm);
> + cxlhdm->regs.hdm_decoder = hdm;
> +
> + parse_hdm_decoder_caps(cxlhdm);
> + if (cxlhdm->decoder_count == 0) {
> + dev_err(dev, "Spec violation. Caps invalid\n");
> + return ERR_PTR(-ENXIO);
> + }
> +
> + return cxlhdm;
> +}
> +EXPORT_SYMBOL_NS_GPL(devm_cxl_setup_hdm, CXL);
> +
> +static int to_interleave_granularity(u32 ctrl)
> +{
> + int val = FIELD_GET(CXL_HDM_DECODER0_CTRL_IG_MASK, ctrl);
> +
> + return 256 << val;
> +}
> +
> +static int to_interleave_ways(u32 ctrl)
> +{
> + int val = FIELD_GET(CXL_HDM_DECODER0_CTRL_IW_MASK, ctrl);
> +
> + switch (val) {
> + case 0 ... 4:
> + return 1 << val;
> + case 8 ... 10:
> + return 3 << (val - 8);
> + default:
> + return 0;
> + }
> +}
> +
> +static void init_hdm_decoder(struct cxl_decoder *cxld, int *target_map,
> + void __iomem *hdm, int which)
> +{
> + u64 size, base;
> + u32 ctrl;
> + int i;
> + union {
> + u64 value;
> + unsigned char target_id[8];
> + } target_list;
> +
> + ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(which));
> + base = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(which));
> + size = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(which));
> +
> + if (!(ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED))
> + size = 0;
> +
> + cxld->decoder_range = (struct range) {
> + .start = base,
> + .end = base + size - 1,
> + };
> +
> + /* switch decoders are always enabled if committed */
> + if (ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED) {
> + cxld->flags |= CXL_DECODER_F_ENABLE;
> + if (ctrl & CXL_HDM_DECODER0_CTRL_LOCK)
> + cxld->flags |= CXL_DECODER_F_LOCK;
> + }
> + cxld->interleave_ways = to_interleave_ways(ctrl);
> + cxld->interleave_granularity = to_interleave_granularity(ctrl);
> +
> + if (FIELD_GET(CXL_HDM_DECODER0_CTRL_TYPE, ctrl))
> + cxld->target_type = CXL_DECODER_EXPANDER;
> + else
> + cxld->target_type = CXL_DECODER_ACCELERATOR;
> +
> + target_list.value =
> + ioread64_hi_lo(hdm + CXL_HDM_DECODER0_TL_LOW(which));
> + for (i = 0; i < cxld->interleave_ways; i++)
> + target_map[i] = target_list.target_id[i];
> +}
> +
> +/**
> + * devm_cxl_enumerate_decoders - add decoder objects per HDM register set
> + * @port: cxl_port HDM capability to scan
> + */
> +int devm_cxl_enumerate_decoders(struct device *host, struct cxl_hdm *cxlhdm)
> +{
> + void __iomem *hdm = cxlhdm->regs.hdm_decoder;
> + struct cxl_port *port = cxlhdm->port;
> + int i, committed;
> + u32 ctrl;
> +
> + /*
> + * Since the register resource was recently claimed via request_region()
> + * be careful about trusting the "not-committed" status until the commit
> + * timeout has elapsed. The commit timeout is 10ms (CXL 2.0
> + * 8.2.5.12.20), but double it to be tolerant of any clock skew between
> + * host and target.
> + */
> + for (i = 0, committed = 0; i < cxlhdm->decoder_count; i++) {
> + ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(i));
> + if (ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED)
> + committed++;
> + }
> +
> + /* ensure that future checks of committed can be trusted */
> + if (committed != cxlhdm->decoder_count)
> + msleep(20);
> +
> + for (i = 0; i < cxlhdm->decoder_count; i++) {
> + int target_map[CXL_DECODER_MAX_INTERLEAVE] = { 0 };
> + int rc, target_count = cxlhdm->target_count;
> + struct cxl_decoder *cxld;
> +
> + cxld = cxl_switch_decoder_alloc(port, target_count);
> + if (IS_ERR(cxld)) {
> + dev_warn(&port->dev,
> + "Failed to allocate the decoder\n");
> + return PTR_ERR(cxld);
> + }
> +
> + init_hdm_decoder(cxld, target_map, cxlhdm->regs.hdm_decoder, i);
> + rc = add_hdm_decoder(port, cxld, target_map);
> + if (rc) {
> + dev_warn(&port->dev,
> + "Failed to add decoder to switch port\n");
> + return rc;
> + }
> + }
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_NS_GPL(devm_cxl_enumerate_decoders, CXL);
> diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
> index 777de6d91dde..72633865b386 100644
> --- a/drivers/cxl/core/port.c
> +++ b/drivers/cxl/core/port.c
> @@ -591,33 +591,27 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_dport, CXL);
> static int decoder_populate_targets(struct cxl_decoder *cxld,
> struct cxl_port *port, int *target_map)
> {
> - int rc = 0, i;
> + int i;
>
> if (!target_map)
> return 0;
>
> - cxl_device_lock(&port->dev);
> - if (list_empty(&port->dports)) {
> - rc = -EINVAL;
> - goto out_unlock;
> - }
> + device_lock_assert(&port->dev);
> +
> + if (list_empty(&port->dports))
> + return -EINVAL;
>
> write_seqlock(&cxld->target_lock);
> for (i = 0; i < cxld->nr_targets; i++) {
> struct cxl_dport *dport = find_dport(port, target_map[i]);
>
> - if (!dport) {
> - rc = -ENXIO;
> - goto out_unlock;
> - }
> + if (!dport)
> + return -ENXIO;
> cxld->target[i] = dport;
> }
> write_sequnlock(&cxld->target_lock);
>
> -out_unlock:
> - cxl_device_unlock(&port->dev);
> -
> - return rc;
> + return 0;
> }
>
> /**
> @@ -713,7 +707,7 @@ struct cxl_decoder *cxl_switch_decoder_alloc(struct cxl_port *port,
> EXPORT_SYMBOL_NS_GPL(cxl_switch_decoder_alloc, CXL);
>
> /**
> - * cxl_decoder_add - Add a decoder with targets
> + * cxl_decoder_add_locked - Add a decoder with targets
> * @cxld: The cxl decoder allocated by cxl_decoder_alloc()
> * @target_map: A list of downstream ports that this decoder can direct memory
> * traffic to. These numbers should correspond with the port number
> @@ -723,12 +717,15 @@ EXPORT_SYMBOL_NS_GPL(cxl_switch_decoder_alloc, CXL);
> * is an endpoint device. A more awkward example is a hostbridge whose root
> * ports get hot added (technically possible, though unlikely).
> *
> - * Context: Process context. Takes and releases the cxld's device lock.
> + * This is the locked variant of cxl_decoder_add().
> + *
> + * Context: Process context. Expects the device lock of the port that owns the
> + * @cxld to be held.
> *
> * Return: Negative error code if the decoder wasn't properly configured; else
> * returns 0.
> */
> -int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
> +int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map)
> {
> struct cxl_port *port;
> struct device *dev;
> @@ -762,6 +759,40 @@ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
>
> return device_add(dev);
> }
> +EXPORT_SYMBOL_NS_GPL(cxl_decoder_add_locked, CXL);
> +
> +/**
> + * cxl_decoder_add - Add a decoder with targets
> + * @cxld: The cxl decoder allocated by cxl_decoder_alloc()
> + * @target_map: A list of downstream ports that this decoder can direct memory
> + * traffic to. These numbers should correspond with the port number
> + * in the PCIe Link Capabilities structure.
> + *
> + * This is the unlocked variant of cxl_decoder_add_locked().
> + * See cxl_decoder_add_locked().
> + *
> + * Context: Process context. Takes and releases the device lock of the port that
> + * owns the @cxld.
> + */
> +int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
> +{
> + struct cxl_port *port;
> + int rc;
> +
> + if (WARN_ON_ONCE(!cxld))
> + return -EINVAL;
> +
> + if (WARN_ON_ONCE(IS_ERR(cxld)))
> + return PTR_ERR(cxld);
> +
> + port = to_cxl_port(cxld->dev.parent);
> +
> + cxl_device_lock(&port->dev);
> + rc = cxl_decoder_add_locked(cxld, target_map);
> + cxl_device_unlock(&port->dev);
> +
> + return rc;
> +}
> EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, CXL);
>
> static void cxld_unregister(void *dev)
> diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
> index 65d7f5880671..718b6b0ae4b3 100644
> --- a/drivers/cxl/core/regs.c
> +++ b/drivers/cxl/core/regs.c
> @@ -159,9 +159,8 @@ void cxl_probe_device_regs(struct device *dev, void __iomem *base,
> }
> EXPORT_SYMBOL_NS_GPL(cxl_probe_device_regs, CXL);
>
> -static void __iomem *devm_cxl_iomap_block(struct device *dev,
> - resource_size_t addr,
> - resource_size_t length)
> +void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr,
> + resource_size_t length)
> {
> void __iomem *ret_val;
> struct resource *res;
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index 7de9504bc995..ca3777061181 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -17,6 +17,9 @@
> * (port-driver, region-driver, nvdimm object-drivers... etc).
> */
>
> +/* CXL 2.0 8.2.4 CXL Component Register Layout and Definition */
> +#define CXL_COMPONENT_REG_BLOCK_SIZE SZ_64K
> +
> /* CXL 2.0 8.2.5 CXL.cache and CXL.mem Registers*/
> #define CXL_CM_OFFSET 0x1000
> #define CXL_CM_CAP_HDR_OFFSET 0x0
> @@ -36,11 +39,23 @@
> #define CXL_HDM_DECODER_CAP_OFFSET 0x0
> #define CXL_HDM_DECODER_COUNT_MASK GENMASK(3, 0)
> #define CXL_HDM_DECODER_TARGET_COUNT_MASK GENMASK(7, 4)
> -#define CXL_HDM_DECODER0_BASE_LOW_OFFSET 0x10
> -#define CXL_HDM_DECODER0_BASE_HIGH_OFFSET 0x14
> -#define CXL_HDM_DECODER0_SIZE_LOW_OFFSET 0x18
> -#define CXL_HDM_DECODER0_SIZE_HIGH_OFFSET 0x1c
> -#define CXL_HDM_DECODER0_CTRL_OFFSET 0x20
> +#define CXL_HDM_DECODER_INTERLEAVE_11_8 BIT(8)
> +#define CXL_HDM_DECODER_INTERLEAVE_14_12 BIT(9)
> +#define CXL_HDM_DECODER_CTRL_OFFSET 0x4
> +#define CXL_HDM_DECODER_ENABLE BIT(1)
> +#define CXL_HDM_DECODER0_BASE_LOW_OFFSET(i) (0x20 * (i) + 0x10)
> +#define CXL_HDM_DECODER0_BASE_HIGH_OFFSET(i) (0x20 * (i) + 0x14)
> +#define CXL_HDM_DECODER0_SIZE_LOW_OFFSET(i) (0x20 * (i) + 0x18)
> +#define CXL_HDM_DECODER0_SIZE_HIGH_OFFSET(i) (0x20 * (i) + 0x1c)
> +#define CXL_HDM_DECODER0_CTRL_OFFSET(i) (0x20 * (i) + 0x20)
> +#define CXL_HDM_DECODER0_CTRL_IG_MASK GENMASK(3, 0)
> +#define CXL_HDM_DECODER0_CTRL_IW_MASK GENMASK(7, 4)
> +#define CXL_HDM_DECODER0_CTRL_LOCK BIT(8)
> +#define CXL_HDM_DECODER0_CTRL_COMMIT BIT(9)
> +#define CXL_HDM_DECODER0_CTRL_COMMITTED BIT(10)
> +#define CXL_HDM_DECODER0_CTRL_TYPE BIT(12)
> +#define CXL_HDM_DECODER0_TL_LOW(i) (0x20 * (i) + 0x24)
> +#define CXL_HDM_DECODER0_TL_HIGH(i) (0x20 * (i) + 0x28)
>
> static inline int cxl_hdm_decoder_count(u32 cap_hdr)
> {
> @@ -162,7 +177,8 @@ int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type,
> #define CXL_DECODER_F_TYPE2 BIT(2)
> #define CXL_DECODER_F_TYPE3 BIT(3)
> #define CXL_DECODER_F_LOCK BIT(4)
> -#define CXL_DECODER_F_MASK GENMASK(4, 0)
> +#define CXL_DECODER_F_ENABLE BIT(5)
> +#define CXL_DECODER_F_MASK GENMASK(5, 0)
>
> enum cxl_decoder_type {
> CXL_DECODER_ACCELERATOR = 2,
> @@ -300,7 +316,12 @@ struct cxl_decoder *cxl_root_decoder_alloc(struct cxl_port *port,
> struct cxl_decoder *cxl_switch_decoder_alloc(struct cxl_port *port,
> unsigned int nr_targets);
> int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map);
> +int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map);
> int cxl_decoder_autoremove(struct device *host, struct cxl_decoder *cxld);
> +struct cxl_hdm;
> +struct cxl_hdm *devm_cxl_setup_hdm(struct device *host, struct cxl_port *port);
> +int devm_cxl_enumerate_decoders(struct device *host, struct cxl_hdm *cxlhdm);
> +int devm_cxl_add_passthrough_decoder(struct device *host, struct cxl_port *port);
>
> extern struct bus_type cxl_bus_type;
>
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 8d96d009ad90..fca2d1b5f6ff 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -264,4 +264,12 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds);
> struct cxl_dev_state *cxl_dev_state_create(struct device *dev);
> void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds);
> void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds);
> +
> +struct cxl_hdm {
> + struct cxl_component_regs regs;
> + unsigned int decoder_count;
> + unsigned int target_count;
> + unsigned int interleave_mask;
> + struct cxl_port *port;
> +};
> #endif /* __CXL_MEM_H__ */
> diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
> index 61123544aa49..3045d7cba0db 100644
> --- a/tools/testing/cxl/Kbuild
> +++ b/tools/testing/cxl/Kbuild
> @@ -5,6 +5,9 @@ ldflags-y += --wrap=acpi_evaluate_integer
> ldflags-y += --wrap=acpi_pci_find_root
> ldflags-y += --wrap=nvdimm_bus_register
> ldflags-y += --wrap=devm_cxl_port_enumerate_dports
> +ldflags-y += --wrap=devm_cxl_setup_hdm
> +ldflags-y += --wrap=devm_cxl_add_passthrough_decoder
> +ldflags-y += --wrap=devm_cxl_enumerate_decoders
>
> DRIVERS := ../../../drivers
> CXL_SRC := $(DRIVERS)/cxl
> @@ -31,6 +34,7 @@ cxl_core-y += $(CXL_CORE_SRC)/regs.o
> cxl_core-y += $(CXL_CORE_SRC)/memdev.o
> cxl_core-y += $(CXL_CORE_SRC)/mbox.o
> cxl_core-y += $(CXL_CORE_SRC)/pci.o
> +cxl_core-y += $(CXL_CORE_SRC)/hdm.o
> cxl_core-y += config_check.o
>
> obj-m += test/
> diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
> index ef002e909d38..81c09380c537 100644
> --- a/tools/testing/cxl/test/cxl.c
> +++ b/tools/testing/cxl/test/cxl.c
> @@ -8,6 +8,7 @@
> #include <linux/acpi.h>
> #include <linux/pci.h>
> #include <linux/mm.h>
> +#include <cxlmem.h>
> #include "mock.h"
>
> #define NR_CXL_HOST_BRIDGES 4
> @@ -398,6 +399,31 @@ static struct acpi_pci_root *mock_acpi_pci_find_root(acpi_handle handle)
> return &mock_pci_root[host_bridge_index(adev)];
> }
>
> +static struct cxl_hdm *mock_cxl_setup_hdm(struct device *host,
> + struct cxl_port *port)
> +{
> + struct cxl_hdm *cxlhdm = devm_kzalloc(&port->dev, sizeof(*cxlhdm), GFP_KERNEL);
> +
> + if (!cxlhdm)
> + return ERR_PTR(-ENOMEM);
> +
> + cxlhdm->port = port;
> + return cxlhdm;
> +}
> +
> +static int mock_cxl_add_passthrough_decoder(struct device *host,
> + struct cxl_port *port)
> +{
> + dev_err(&port->dev, "unexpected passthrough decoder for cxl_test\n");
> + return -EOPNOTSUPP;
> +}
> +
> +static int mock_cxl_enumerate_decoders(struct device *host,
> + struct cxl_hdm *cxlhdm)
> +{
> + return 0;
> +}
> +
> static int mock_cxl_port_enumerate_dports(struct device *host,
> struct cxl_port *port)
> {
> @@ -439,6 +465,9 @@ static struct cxl_mock_ops cxl_mock_ops = {
> .acpi_evaluate_integer = mock_acpi_evaluate_integer,
> .acpi_pci_find_root = mock_acpi_pci_find_root,
> .devm_cxl_port_enumerate_dports = mock_cxl_port_enumerate_dports,
> + .devm_cxl_setup_hdm = mock_cxl_setup_hdm,
> + .devm_cxl_add_passthrough_decoder = mock_cxl_add_passthrough_decoder,
> + .devm_cxl_enumerate_decoders = mock_cxl_enumerate_decoders,
> .list = LIST_HEAD_INIT(cxl_mock_ops.list),
> };
>
> diff --git a/tools/testing/cxl/test/mock.c b/tools/testing/cxl/test/mock.c
> index 56b4b7d734bc..18d3b65e2a9b 100644
> --- a/tools/testing/cxl/test/mock.c
> +++ b/tools/testing/cxl/test/mock.c
> @@ -131,6 +131,56 @@ __wrap_nvdimm_bus_register(struct device *dev,
> }
> EXPORT_SYMBOL_GPL(__wrap_nvdimm_bus_register);
>
> +struct cxl_hdm *__wrap_devm_cxl_setup_hdm(struct device *host,
> + struct cxl_port *port)
> +{
> + int index;
> + struct cxl_hdm *cxlhdm;
> + struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +
> + if (ops && ops->is_mock_port(port->uport))
> + cxlhdm = ops->devm_cxl_setup_hdm(host, port);
> + else
> + cxlhdm = devm_cxl_setup_hdm(host, port);
> + put_cxl_mock_ops(index);
> +
> + return cxlhdm;
> +}
> +EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_setup_hdm, CXL);
> +
> +int __wrap_devm_cxl_add_passthrough_decoder(struct device *host,
> + struct cxl_port *port)
> +{
> + int rc, index;
> + struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +
> + if (ops && ops->is_mock_port(port->uport))
> + rc = ops->devm_cxl_add_passthrough_decoder(host, port);
> + else
> + rc = devm_cxl_add_passthrough_decoder(host, port);
> + put_cxl_mock_ops(index);
> +
> + return rc;
> +}
> +EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_add_passthrough_decoder, CXL);
> +
> +int __wrap_devm_cxl_enumerate_decoders(struct device *host,
> + struct cxl_hdm *cxlhdm)
> +{
> + int rc, index;
> + struct cxl_port *port = cxlhdm->port;
> + struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
> +
> + if (ops && ops->is_mock_port(port->uport))
> + rc = ops->devm_cxl_enumerate_decoders(host, cxlhdm);
> + else
> + rc = devm_cxl_enumerate_decoders(host, cxlhdm);
> + put_cxl_mock_ops(index);
> +
> + return rc;
> +}
> +EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_enumerate_decoders, CXL);
> +
> int __wrap_devm_cxl_port_enumerate_dports(struct device *host,
> struct cxl_port *port)
> {
> diff --git a/tools/testing/cxl/test/mock.h b/tools/testing/cxl/test/mock.h
> index 99e7ff38090d..15e48063ea4b 100644
> --- a/tools/testing/cxl/test/mock.h
> +++ b/tools/testing/cxl/test/mock.h
> @@ -21,6 +21,9 @@ struct cxl_mock_ops {
> bool (*is_mock_dev)(struct device *dev);
> int (*devm_cxl_port_enumerate_dports)(struct device *host,
> struct cxl_port *port);
> + struct cxl_hdm *(*devm_cxl_setup_hdm)(struct device *host, struct cxl_port *port);
> + int (*devm_cxl_add_passthrough_decoder)(struct device *host, struct cxl_port *port);
> + int (*devm_cxl_enumerate_decoders)(struct device *host, struct cxl_hdm *hdm);
> };
>
> void register_cxl_mock_ops(struct cxl_mock_ops *ops);
>