From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: <linux-cxl@vger.kernel.org>, <ben.widawsky@intel.com>,
<vishal.l.verma@intel.com>, <alison.schofield@intel.com>,
<ira.weiny@intel.com>, <linux-pci@vger.kernel.org>
Subject: Re: [PATCH 6/8] cxl/pci: Prepare for mapping RAS Capability Structure
Date: Thu, 17 Mar 2022 10:56:59 +0000 [thread overview]
Message-ID: <20220317105659.000075fc@Huawei.com> (raw)
In-Reply-To: <164740405408.3912056.16337643017370667205.stgit@dwillia2-desk3.amr.corp.intel.com>
On Tue, 15 Mar 2022 21:14:14 -0700
Dan Williams <dan.j.williams@intel.com> wrote:
> The RAS Capability Structure is a CXL Component register capability
> block. Unlike the HDM Decoder Capability, it will be referenced by the
> cxl_pci driver in response to PCIe AER events. Due to this it is no
> longer the case that cxl_map_component_regs() can assume that it should
> map all component registers. Plumb a bitmask of capability ids to map
> through cxl_map_component_regs().
>
> For symmetry cxl_probe_device_regs() is updated to populate @id in
> 'struct cxl_reg_map' even though cxl_map_device_regs() does not have a
> need to map a subset of the device registers per caller.
I guess it doesn't hurt to add that, though without the mask also being
passed into the corresponding device-register functions it's a bit
pointless. What we have now is perhaps only 'half symmetric'.
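For the record, 'full symmetry' would presumably just mean growing that
prototype the same way, e.g. (untested sketch, not asking for it in this
series):

	int cxl_map_device_regs(struct device *dev, struct cxl_device_regs *regs,
				struct cxl_register_map *map, unsigned long map_mask);

though IIRC the memory device capability ID is 0x4000, so a plain BIT()
mask wouldn't even cover the device-side IDs - which feeds into the
bitmap comment further down.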
>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
A few trivial comments inline, but basically looks good to me.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> drivers/cxl/core/hdm.c | 3 ++-
> drivers/cxl/core/regs.c | 36 ++++++++++++++++++++++++++----------
> drivers/cxl/cxl.h | 7 ++++---
> 3 files changed, 32 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
> index 09221afca309..b348217ab704 100644
> --- a/drivers/cxl/core/hdm.c
> +++ b/drivers/cxl/core/hdm.c
> @@ -92,7 +92,8 @@ static int map_hdm_decoder_regs(struct cxl_port *port, void __iomem *crb,
> return -ENXIO;
> }
>
> - return cxl_map_component_regs(&port->dev, regs, &map);
> + return cxl_map_component_regs(&port->dev, regs, &map,
> + BIT(CXL_CM_CAP_CAP_ID_HDM));
> }
>
> /**
> diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
> index 219c7d0e43e2..c022c8937dfc 100644
> --- a/drivers/cxl/core/regs.c
> +++ b/drivers/cxl/core/regs.c
> @@ -92,6 +92,7 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
> if (!rmap)
> continue;
> rmap->valid = true;
> + rmap->id = cap_id;
> rmap->offset = CXL_CM_OFFSET + offset;
> rmap->size = length;
> }
> @@ -159,6 +160,7 @@ void cxl_probe_device_regs(struct device *dev, void __iomem *base,
> if (!rmap)
> continue;
> rmap->valid = true;
> + rmap->id = cap_id;
> rmap->offset = offset;
> rmap->size = length;
> }
> @@ -187,17 +189,31 @@ void __iomem *devm_cxl_iomap_block(struct device *dev, resource_size_t addr,
> }
>
> int cxl_map_component_regs(struct device *dev, struct cxl_component_regs *regs,
> - struct cxl_register_map *map)
> + struct cxl_register_map *map, unsigned long map_mask)
Maybe pass an unsigned long *map_mask (i.e. a bitmap) from the start, for
the inevitable day when the capability IDs outgrow the bits in a long?
The disadvantage being that you'd need to roll a local bitmap to pass in
at all the call sites.
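i.e. roughly this at the call site (untested, and CXL_CM_CAP_CAP_ID_MAX
is a made-up bound just for illustration):

	DECLARE_BITMAP(map_mask, CXL_CM_CAP_CAP_ID_MAX);

	bitmap_zero(map_mask, CXL_CM_CAP_CAP_ID_MAX);
	set_bit(CXL_CM_CAP_CAP_ID_HDM, map_mask);
	return cxl_map_component_regs(&port->dev, regs, &map, map_mask);

with the test in the loop below becoming test_bit(mi->rmap->id, map_mask).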
> {
> - resource_size_t phys_addr;
> - resource_size_t length;
> -
> - phys_addr = map->resource;
> - phys_addr += map->component_map.hdm_decoder.offset;
> - length = map->component_map.hdm_decoder.size;
> - regs->hdm_decoder = devm_cxl_iomap_block(dev, phys_addr, length);
> - if (!regs->hdm_decoder)
> - return -ENOMEM;
> + struct mapinfo {
> + struct cxl_reg_map *rmap;
> + void __iomem **addr;
> + } mapinfo[] = {
> + { .rmap = &map->component_map.hdm_decoder, &regs->hdm_decoder },
As in the previous instance, mixing the two initializer styles seems odd.
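i.e. I'd have expected:

	{ .rmap = &map->component_map.hdm_decoder, .addr = &regs->hdm_decoder },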
> + };
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(mapinfo); i++) {
> + struct mapinfo *mi = &mapinfo[i];
> + resource_size_t phys_addr;
> + resource_size_t length;
> +
> + if (!mi->rmap->valid)
> + continue;
> + if (!test_bit(mi->rmap->id, &map_mask))
> + continue;
> + phys_addr = map->resource + mi->rmap->offset;
> + length = mi->rmap->size;
> + *(mi->addr) = devm_cxl_iomap_block(dev, phys_addr, length);
> + if (!*(mi->addr))
> + return -ENOMEM;
> + }
>
> return 0;
> }
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index 2080a75c61fe..52bd77d8e22a 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -115,6 +115,7 @@ struct cxl_regs {
>
> struct cxl_reg_map {
> bool valid;
> + int id;
> unsigned long offset;
> unsigned long size;
> };
> @@ -153,9 +154,9 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
> struct cxl_component_reg_map *map);
> void cxl_probe_device_regs(struct device *dev, void __iomem *base,
> struct cxl_device_reg_map *map);
> -int cxl_map_component_regs(struct device *dev,
> - struct cxl_component_regs *regs,
> - struct cxl_register_map *map);
> +int cxl_map_component_regs(struct device *dev, struct cxl_component_regs *regs,
Worth the rewrapping and the extra diff that results? (I think the new first line is precisely 80 chars.)
> + struct cxl_register_map *map,
> + unsigned long map_mask);
> int cxl_map_device_regs(struct device *dev,
> struct cxl_device_regs *regs,
> struct cxl_register_map *map);
>