Linux CXL
From: Dave Jiang <dave.jiang@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
Cc: <linux-cxl@vger.kernel.org>, <dan.j.williams@intel.com>,
	<ira.weiny@intel.com>, <vishal.l.verma@intel.com>,
	<alison.schofield@intel.com>, <dave@stgolabs.net>
Subject: Re: [PATCH v10 20/22] cxl: Store QTG IDs and related info to the CXL memory device context
Date: Wed, 11 Oct 2023 09:04:45 -0700	[thread overview]
Message-ID: <bb95f77c-fac0-4279-bd4d-dd4fef238ef0@intel.com> (raw)
In-Reply-To: <20231011141933.000028fb@Huawei.com>



On 10/11/23 06:19, Jonathan Cameron wrote:
> On Tue, 10 Oct 2023 18:06:59 -0700
> Dave Jiang <dave.jiang@intel.com> wrote:
> 
>> Once the QTG ID _DSM is executed successfully, the QTG ID is retrieved from
>> the return package. Create a list of entries in the cxl_memdev context and
>> store the QTG ID as qos_class token and the associated DPA range. This
>> information can be exposed to user space via sysfs in order to help region
>> setup for hot-plugged CXL memory devices.
>>
>> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
>>
>> ---
>> v10:
>> - Store single qos_class value. (Dan)
> 
> I'm fine with doing this, but I'd like a print if more than one is
> provided by the DSMAS (e.g. multiple DSMAS entries for a given region),
> mostly so we have something to let us know if anyone is shipping such a
> device.
> 
> Otherwise a couple of minor comments inline.
> 
> Jonathan
> 
> 
>> - Rename cxl_memdev_set_qtg() to cxl_memdev_set_qos_class()
>> - Removed Jonathan's review tag due to code changes.
> 
>> diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c
>> index 99a619360bc5..049a16b7eb1f 100644
>> --- a/drivers/cxl/port.c
>> +++ b/drivers/cxl/port.c
>> @@ -105,6 +105,40 @@ static int cxl_port_perf_data_calculate(struct cxl_port *port,
>>  	return 0;
>>  }
>>  
>> +static void cxl_memdev_set_qos_class(struct cxl_dev_state *cxlds,
>> +				     struct list_head *dsmas_list)
>> +{
>> +	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>> +	struct range pmem_range = {
>> +		.start = cxlds->pmem_res.start,
>> +		.end = cxlds->pmem_res.end,
>> +	};
>> +	struct range ram_range = {
>> +		.start = cxlds->ram_res.start,
>> +		.end = cxlds->ram_res.end,
>> +	};
>> +	struct perf_prop_entry *perf;
>> +	struct dsmas_entry *dent;
>> +
>> +	list_for_each_entry(dent, dsmas_list, list) {
>> +		perf = devm_kzalloc(cxlds->dev, sizeof(*perf), GFP_KERNEL);
>> +		if (!perf)
>> +			return;
>> +
>> +		perf->dpa_range = dent->dpa_range;
>> +		perf->coord = dent->coord;
>> +		perf->qos_class = dent->qos_class;
>> +		list_add_tail(&perf->list, &mds->perf_list);
>> +
>> +		if (resource_size(&cxlds->ram_res) &&
>> +		    range_contains(&ram_range, &dent->dpa_range))
>> +			mds->ram_qos_class = perf->qos_class;
> 
> So this assumes one DSMAS per memory type.
> Not an unreasonable starting place, but I think this should
> print something to the log if it does see more than one.

I'll add that. It also looks like I dropped the earlier check for multiple entries that kept us from clobbering a previously set value; I'll restore it.

> 
>> +		else if (resource_size(&cxlds->pmem_res) &&
>> +			 range_contains(&pmem_range, &dent->dpa_range))
>> +			mds->pmem_qos_class = perf->qos_class;
>> +	}
>> +}
>> +
>>  static int cxl_switch_port_probe(struct cxl_port *port)
>>  {
>>  	struct cxl_hdm *cxlhdm;
>> @@ -201,6 +235,8 @@ static int cxl_endpoint_port_probe(struct cxl_port *port)
>>  			if (rc)
>>  				dev_dbg(&port->dev,
>>  					"Failed to do perf coord calculations.\n");
>> +			else
>> +				cxl_memdev_set_qos_class(cxlds, &dsmas_list);
> 
> This is getting a bit deeply nested.  Perhaps a follow-up patch to factor
> it out makes sense so we can use a goto to clean up the dsmas_list without
> that label being nested as well.

I'll just fix it in place. It doesn't look too bad.

> 
>>  		}
>>  
>>  		cxl_cdat_dsmas_list_destroy(&dsmas_list);
>>
>>
> 

