public inbox for linux-btrfs@vger.kernel.org
From: Ira Weiny <ira.weiny@intel.com>
To: "Jørgen Hansen" <Jorgen.Hansen@wdc.com>,
	"ira.weiny@intel.com" <ira.weiny@intel.com>,
	"Dave Jiang" <dave.jiang@intel.com>,
	"Fan Ni" <fan.ni@samsung.com>,
	"Jonathan Cameron" <Jonathan.Cameron@huawei.com>,
	"Navneet Singh" <navneet.singh@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Alison Schofield <alison.schofield@intel.com>,
	"Vishal Verma" <vishal.l.verma@intel.com>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
	"linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 14/26] cxl/region: Read existing extents on region creation
Date: Tue, 9 Apr 2024 23:29:02 -0700	[thread overview]
Message-ID: <661631ae232b3_e9f9f2947@iweiny-mobl.notmuch> (raw)
In-Reply-To: <33489603-3da4-498a-ac0f-8021df2150e4@wdc.com>

Jørgen Hansen wrote:
> On 3/25/24 00:18, ira.weiny@intel.com wrote:
> > From: Navneet Singh <navneet.singh@intel.com>
> > 
> > Dynamic capacity device extents may be left in an accepted state on a
> > device due to an unexpected host crash.  In this case creation of a new
> > region on top of the DC partition (region) is expected to expose those
> > extents for continued use.
> > 
> > Once all endpoint decoders are part of a region and the region is being
> > realized read the device extent list.  For ease of review, this patch
> > stops after reading the extent list and leaves realization of the region
> > extents to a future patch.
> > 
> > Signed-off-by: Navneet Singh <navneet.singh@intel.com>
> > Co-developed-by: Ira Weiny <ira.weiny@intel.com>
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> > 
> > ---
> > Changes for v1:
> > [iweiny: remove extent list xarray]
> > [iweiny: Update spec references to 3.1]
> > [iweiny: use struct range in extents]
> > [iweiny: remove all reference tracking and let regions track extents
> >           through the extent devices.]
> > [djbw/Jonathan/Fan: move extent tracking to endpoint decoders]
> > ---
> >   drivers/cxl/core/core.h   |   9 +++
> >   drivers/cxl/core/mbox.c   | 192 ++++++++++++++++++++++++++++++++++++++++++++++
> >   drivers/cxl/core/region.c |  29 +++++++
> >   drivers/cxl/cxlmem.h      |  49 ++++++++++++
> >   4 files changed, 279 insertions(+)
> > 
> > diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
> > index 91abeffbe985..119b12362977 100644
> > --- a/drivers/cxl/core/core.h
> > +++ b/drivers/cxl/core/core.h
> > @@ -4,6 +4,8 @@
> >   #ifndef __CXL_CORE_H__
> >   #define __CXL_CORE_H__
> > 
> > +#include <cxlmem.h>
> > +
> >   extern const struct device_type cxl_nvdimm_bridge_type;
> >   extern const struct device_type cxl_nvdimm_type;
> >   extern const struct device_type cxl_pmu_type;
> > @@ -28,6 +30,8 @@ void cxl_decoder_kill_region(struct cxl_endpoint_decoder *cxled);
> >   int cxl_region_init(void);
> >   void cxl_region_exit(void);
> >   int cxl_get_poison_by_endpoint(struct cxl_port *port);
> > +int cxl_ed_add_one_extent(struct cxl_endpoint_decoder *cxled,
> > +                         struct cxl_dc_extent *dc_extent);
> >   #else
> >   static inline int cxl_get_poison_by_endpoint(struct cxl_port *port)
> >   {
> > @@ -43,6 +47,11 @@ static inline int cxl_region_init(void)
> >   static inline void cxl_region_exit(void)
> >   {
> >   }
> > +static inline int cxl_ed_add_one_extent(struct cxl_endpoint_decoder *cxled,
> > +                                       struct cxl_dc_extent *dc_extent)
> > +{
> > +       return 0;
> > +}
> >   #define CXL_REGION_ATTR(x) NULL
> >   #define CXL_REGION_TYPE(x) NULL
> >   #define SET_CXL_REGION_ATTR(x)
> > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> > index 58b31fa47b93..9e33a0976828 100644
> > --- a/drivers/cxl/core/mbox.c
> > +++ b/drivers/cxl/core/mbox.c
> > @@ -870,6 +870,53 @@ int cxl_enumerate_cmds(struct cxl_memdev_state *mds)
> >   }
> >   EXPORT_SYMBOL_NS_GPL(cxl_enumerate_cmds, CXL);
> > 
> > +static int cxl_validate_extent(struct cxl_memdev_state *mds,
> > +                              struct cxl_dc_extent *dc_extent)
> > +{
> > +       struct device *dev = mds->cxlds.dev;
> > +       uint64_t start, len;
> > +
> > +       start = le64_to_cpu(dc_extent->start_dpa);
> > +       len = le64_to_cpu(dc_extent->length);
> > +
> > +       /* Extents must not cross region boundaries */
> > +       for (int i = 0; i < mds->nr_dc_region; i++) {
> > +               struct cxl_dc_region_info *dcr = &mds->dc_region[i];
> > +
> > +               if (dcr->base <= start &&
> > +                   (start + len) <= (dcr->base + dcr->decode_len)) {
> > +                       dev_dbg(dev, "DC extent DPA %#llx - %#llx (DCR:%d:%#llx)\n",
> > +                               start, start + len - 1, i, start - dcr->base);
> > +                       return 0;
> > +               }
> > +       }
> > +
> > +       dev_err_ratelimited(dev,
> > +                           "DC extent DPA %#llx - %#llx is not in any DC region\n",
> > +                           start, start + len - 1);
> > +       return -EINVAL;
> > +}
> > +
> > +static bool cxl_dc_extent_in_ed(struct cxl_endpoint_decoder *cxled,
> > +                               struct cxl_dc_extent *extent)
> > +{
> > +       uint64_t start = le64_to_cpu(extent->start_dpa);
> > +       uint64_t length = le64_to_cpu(extent->length);
> > +       struct range ext_range = (struct range){
> > +               .start = start,
> > +               .end = start + length - 1,
> > +       };
> > +       struct range ed_range = (struct range) {
> > +               .start = cxled->dpa_res->start,
> > +               .end = cxled->dpa_res->end,
> > +       };
> > +
> > +       dev_dbg(&cxled->cxld.dev, "Checking ED (%pr) for extent DPA:%#llx LEN:%#llx\n",
> > +               cxled->dpa_res, start, length);
> > +
> > +       return range_contains(&ed_range, &ext_range);
> > +}
> > +
> >   void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
> >                              enum cxl_event_log_type type,
> >                              enum cxl_event_type event_type,
> > @@ -973,6 +1020,15 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
> >          return rc;
> >   }
> > 
> > +static struct cxl_memdev_state *
> > +cxled_to_mds(struct cxl_endpoint_decoder *cxled)
> > +{
> > +       struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
> > +       struct cxl_dev_state *cxlds = cxlmd->cxlds;
> > +
> > +       return container_of(cxlds, struct cxl_memdev_state, cxlds);
> > +}
> > +
> >   static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
> >                                      enum cxl_event_log_type type)
> >   {
> > @@ -1406,6 +1462,142 @@ int cxl_dev_dynamic_capacity_identify(struct cxl_memdev_state *mds)
> >   }
> >   EXPORT_SYMBOL_NS_GPL(cxl_dev_dynamic_capacity_identify, CXL);
> > 
> > +static int cxl_dev_get_dc_extent_cnt(struct cxl_memdev_state *mds,
> > +                                    unsigned int *extent_gen_num)
> > +{
> > +       struct cxl_mbox_get_dc_extent_in get_dc_extent;
> > +       struct cxl_mbox_get_dc_extent_out dc_extents;
> > +       struct cxl_mbox_cmd mbox_cmd;
> > +       unsigned int count;
> > +       int rc;
> > +
> > +       get_dc_extent = (struct cxl_mbox_get_dc_extent_in) {
> > +               .extent_cnt = cpu_to_le32(0),
> > +               .start_extent_index = cpu_to_le32(0),
> > +       };
> > +
> > +       mbox_cmd = (struct cxl_mbox_cmd) {
> > +               .opcode = CXL_MBOX_OP_GET_DC_EXTENT_LIST,
> > +               .payload_in = &get_dc_extent,
> > +               .size_in = sizeof(get_dc_extent),
> > +               .size_out = sizeof(dc_extents),
> > +               .payload_out = &dc_extents,
> > +               .min_out = 1,
> > +       };
> > +
> > +       rc = cxl_internal_send_cmd(mds, &mbox_cmd);
> > +       if (rc < 0)
> > +               return rc;
> > +
> > +       count = le32_to_cpu(dc_extents.total_extent_cnt);
> > +       *extent_gen_num = le32_to_cpu(dc_extents.extent_list_num);
> > +
> > +       return count;
> > +}
> > +
> > +static int cxl_dev_get_dc_extents(struct cxl_endpoint_decoder *cxled,
> > +                                 unsigned int start_gen_num,
> > +                                 unsigned int exp_cnt)
> > +{
> > +       struct cxl_memdev_state *mds = cxled_to_mds(cxled);
> > +       unsigned int start_index, total_read;
> > +       struct device *dev = mds->cxlds.dev;
> > +       struct cxl_mbox_cmd mbox_cmd;
> > +
> > +       struct cxl_mbox_get_dc_extent_out *dc_extents __free(kfree) =
> > +                               kvmalloc(mds->payload_size, GFP_KERNEL);
> > +       if (!dc_extents)
> > +               return -ENOMEM;
> > +
> > +       total_read = 0;
> > +       start_index = 0;
> > +       do {
> > +               unsigned int nr_ext, total_extent_cnt, gen_num;
> > +               struct cxl_mbox_get_dc_extent_in get_dc_extent;
> > +               int rc;
> > +
> > +               get_dc_extent = (struct cxl_mbox_get_dc_extent_in) {
> > +                       .extent_cnt = cpu_to_le32(exp_cnt - start_index),
> > +                       .start_extent_index = cpu_to_le32(start_index),
> > +               };
> > +
> > +               mbox_cmd = (struct cxl_mbox_cmd) {
> > +                       .opcode = CXL_MBOX_OP_GET_DC_EXTENT_LIST,
> > +                       .payload_in = &get_dc_extent,
> > +                       .size_in = sizeof(get_dc_extent),
> > +                       .size_out = mds->payload_size,
> > +                       .payload_out = dc_extents,
> > +                       .min_out = 1,
> > +               };
> > +
> > +               rc = cxl_internal_send_cmd(mds, &mbox_cmd);
> > +               if (rc < 0)
> > +                       return rc;
> > +
> > +               nr_ext = le32_to_cpu(dc_extents->ret_extent_cnt);
> > +               total_read += nr_ext;
> > +               total_extent_cnt = le32_to_cpu(dc_extents->total_extent_cnt);
> > +               gen_num = le32_to_cpu(dc_extents->extent_list_num);
> > +
> > +               dev_dbg(dev, "Get extent list count:%d generation Num:%d\n",
> > +                       total_extent_cnt, gen_num);
> > +
> > +               if (gen_num != start_gen_num || exp_cnt != total_extent_cnt) {
> > +                       dev_err(dev, "Possible incomplete extent list; gen %u != %u : cnt %u != %u\n",
> > +                               gen_num, start_gen_num, exp_cnt, total_extent_cnt);
> > +                       return -EIO;
> > +               }
> > +
> > +               for (int i = 0; i < nr_ext ; i++) {
> > +                       dev_dbg(dev, "Processing extent %d/%d\n",
> > +                               start_index + i, exp_cnt);
> > +                       rc = cxl_validate_extent(mds, &dc_extents->extent[i]);
> > +                       if (rc)
> > +                               continue;
> > +                       if (!cxl_dc_extent_in_ed(cxled, &dc_extents->extent[i]))
> > +                               continue;
> > +                       rc = cxl_ed_add_one_extent(cxled, &dc_extents->extent[i]);
> > +                       if (rc)
> > +                               return rc;
> > +               }
> > +
> > +               start_index += nr_ext;
> > +       } while (exp_cnt > total_read);
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * cxl_read_dc_extents() - Read any existing extents
> > + * @cxled: Endpoint decoder which is part of a region
> > + *
> > + * Issue the Get Dynamic Capacity Extent List command to the device
> > + * and add any existing extents found which belong to this decoder.
> > + *
> > + * Return: 0 if command was executed successfully, -ERRNO on error.
> > + */
> > +int cxl_read_dc_extents(struct cxl_endpoint_decoder *cxled)
> > +{
> > +       struct cxl_memdev_state *mds = cxled_to_mds(cxled);
> > +       struct device *dev = mds->cxlds.dev;
> > +       unsigned int extent_gen_num;
> > +       int rc;
> > +
> > +       if (!cxl_dcd_supported(mds)) {
> > +               dev_dbg(dev, "DCD unsupported\n");
> > +               return 0;
> > +       }
> > +
> > +       rc = cxl_dev_get_dc_extent_cnt(mds, &extent_gen_num);
> > +       dev_dbg(mds->cxlds.dev, "Extent count: %d Generation Num: %d\n",
> > +               rc, extent_gen_num);
> > +       if (rc <= 0) /* 0 == no records found */
> > +               return rc;
> > +
> > +       return cxl_dev_get_dc_extents(cxled, extent_gen_num, rc);
> 
> Is it necessary to spend a device interaction to get the generation 
> number?

Not completely necessary, no.

> Couldn't cxl_dev_get_dc_extents obtain that as part of the first 
> call to the device, and then use it to ensure the consistency of any 
> remaining calls, if any are necessary?

... However, this is not a critical path, and the extra query to hardware makes
the code a bit easier to follow IMO.  There are two distinct steps:

	1) get expected number of extents and the current generation number
	2) query for that number whilst checking that the gen number is stable

Doing what you suggest results in special-casing the first query within the
loop, which is kind of ugly IMO.

That said, with the new retry requirement Fan pointed out, I'll reconsider this
in the context of that new algorithm.

Ira
