public inbox for devicetree@vger.kernel.org
From: Reinette Chatre <reinette.chatre@intel.com>
To: "Drew Fustini" <fustini@kernel.org>,
	"Paul Walmsley" <pjw@kernel.org>,
	"Palmer Dabbelt" <palmer@dabbelt.com>,
	"Albert Ou" <aou@eecs.berkeley.edu>,
	"Alexandre Ghiti" <alex@ghiti.fr>,
	"Radim Krčmář" <rkrcmar@ventanamicro.com>,
	"Samuel Holland" <samuel.holland@sifive.com>,
	"Adrien Ricciardi" <aricciardi@baylibre.com>,
	"Nicolas Pitre" <npitre@baylibre.com>,
	"Kornel Dulęba" <mindal@semihalf.com>,
	"Atish Patra" <atish.patra@linux.dev>,
	"Atish Kumar Patra" <atishp@rivosinc.com>,
	"Vasudevan Srinivasan" <vasu@rivosinc.com>,
	"Ved Shanbhogue" <ved@rivosinc.com>,
	"Conor Dooley" <conor.dooley@microchip.com>,
	"yunhui cui" <cuiyunhui@bytedance.com>,
	"Chen Pei" <cp0613@linux.alibaba.com>,
	"Liu Zhiwei" <zhiwei_liu@linux.alibaba.com>,
	"Weiwei Li" <liwei1518@gmail.com>,
	guo.wenjia23@zte.com.cn,
	"Gong Shuai" <gong.shuai@sanechips.com.cn>,
	"Gong Shuai" <gsh517@gmail.com>,
	liu.qingtao2@zte.com.cn, "Tony Luck" <tony.luck@intel.com>,
	"Babu Moger" <babu.moger@amd.com>,
	"Peter Newman" <peternewman@google.com>,
	"Fenghua Yu" <fenghua.yu@intel.com>,
	"James Morse" <james.morse@arm.com>,
	"Ben Horgan" <ben.horgan@arm.com>,
	"Dave Martin" <Dave.Martin@arm.com>,
	"Rob Herring" <robh@kernel.org>,
	"Conor Dooley" <conor+dt@kernel.org>,
	"Krzysztof Kozlowski" <krzk+dt@kernel.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	"Len Brown" <lenb@kernel.org>,
	"Robert Moore" <robert.moore@intel.com>,
	"Sunil V L" <sunilvl@ventanamicro.com>
Cc: <linux-kernel@vger.kernel.org>, <linux-riscv@lists.infradead.org>,
	<x86@kernel.org>, <linux-acpi@vger.kernel.org>,
	<acpica-devel@lists.linux.dev>, <devicetree@vger.kernel.org>,
	Paul Walmsley <paul.walmsley@sifive.com>
Subject: Re: [PATCH RFC v3 06/11] RISC-V: QoS: add resctrl setup and domain management
Date: Thu, 30 Apr 2026 16:20:09 -0700	[thread overview]
Message-ID: <9a8860a5-f63f-497c-ade9-6f64286abff0@intel.com> (raw)
In-Reply-To: <20260414-ssqosid-cbqri-rqsc-v7-0-v3-6-b3b2e7e9847a@kernel.org>

Hi Drew,

On 4/14/26 6:54 PM, Drew Fustini wrote:
> +
> +static int qos_resctrl_add_controller_domain(struct cbqri_controller *ctrl)
> +{
> +	struct rdt_ctrl_domain *domain;
> +	struct cbqri_resctrl_res *cbqri_res = NULL;
> +	struct rdt_resource *res = NULL;
> +	struct list_head *pos = NULL;
> +	int err;
> +
> +	domain = qos_new_domain(ctrl);
> +	if (!domain)
> +		return -ENOSPC;
> +
> +	switch (ctrl->type) {
> +	case CBQRI_CONTROLLER_TYPE_CAPACITY:
> +		cpumask_copy(&domain->hdr.cpu_mask, &ctrl->cache.cpu_mask);

Looking at patch #10, ctrl->cache.cpu_mask contains all CPUs associated with the cache,
even those that are offline. This is not what resctrl expects. The expectation is that
a domain exists and is online (hence "resctrl_online_ctrl_domain()") only if at least one
CPU belonging to that domain is online, and that domain->hdr.cpu_mask lists only the
*online* CPUs associated with that domain.
This is why resctrl always takes the CPU hotplug lock when traversing the domain
lists.

I thus expected this initialization to be split: an early initialization of the
resource capabilities, with the domain initialization done as part of the CPU
online/offline handlers.

> +		domain->hdr.id = ctrl->cache.cache_id;
> +
> +		if (ctrl->cache.cache_level == 2) {
> +			cbqri_res = &cbqri_resctrl_resources[RDT_RESOURCE_L2];
> +			err = qos_init_cache_resource(ctrl, cbqri_res,
> +						      RDT_RESOURCE_L2, "L2",
> +						      RESCTRL_L2_CACHE);
> +		} else if (ctrl->cache.cache_level == 3) {
> +			cbqri_res = &cbqri_resctrl_resources[RDT_RESOURCE_L3];
> +			err = qos_init_cache_resource(ctrl, cbqri_res,
> +						      RDT_RESOURCE_L3, "L3",
> +						      RESCTRL_L3_CACHE);
> +		} else {
> +			pr_err("unknown cache level %d\n", ctrl->cache.cache_level);
> +			err = -ENODEV;
> +		}
> +		if (err)
> +			goto err_free_domain;
> +		res = &cbqri_res->resctrl_res;
> +		break;
> +
> +	case CBQRI_CONTROLLER_TYPE_BANDWIDTH:
> +		cpumask_copy(&domain->hdr.cpu_mask, &ctrl->mem.cpu_mask);
> +		domain->hdr.id = ctrl->mem.prox_dom;
> +		if (ctrl->alloc_capable) {
> +			cbqri_res = &cbqri_resctrl_resources[RDT_RESOURCE_MBA];
> +			err = qos_init_membw_resource(ctrl, cbqri_res);
> +			if (err)
> +				goto err_free_domain;
> +			res = &cbqri_res->resctrl_res;
> +		}
> +		break;
> +
> +	default:
> +		pr_err("unknown controller type %d\n", ctrl->type);
> +		err = -ENODEV;
> +		goto err_free_domain;
> +	}
> +
> +	if (!res)
> +		goto out;
> +
> +	err = qos_init_domain_ctrlval(res, domain);
> +	if (err)
> +		goto err_free_domain;
> +
> +	if (resctrl_find_domain(&res->ctrl_domains, domain->hdr.id, &pos)) {
> +		pr_err("duplicate domain id %d for resource %s\n",
> +		       domain->hdr.id, res->name);
> +		err = -EEXIST;
> +		goto err_free_domain;
> +	}
> +	if (pos)
> +		list_add_tail(&domain->hdr.list, pos);
> +	else
> +		list_add_tail(&domain->hdr.list, &res->ctrl_domains);

resctrl_find_domain() returns NULL if it cannot find an existing domain; in that
case it initializes "pos" to support adding a new domain to a sorted list.
The expectation is that domains are managed as part of the CPU hotplug handlers.
When a CPU comes online, the handler can check whether the domain it belongs to
already exists. If it does, the CPU can simply be added to that domain's cpu_mask;
if it does not, a new domain is created and inserted at the appropriate spot in
the list of domains, which is kept sorted by domain ID.


> +
> +	err = resctrl_online_ctrl_domain(res, domain);
> +	if (err) {
> +		pr_err("failed to online domain %d\n", domain->hdr.id);
> +		list_del(&domain->hdr.list);
> +		goto err_free_domain;
> +	}
> +
> +out:
> +	return 0;
> +
> +err_free_domain:
> +	kfree(container_of(domain, struct cbqri_resctrl_dom, resctrl_ctrl_dom));
> +	return err;
> +}
> +
> +int qos_resctrl_setup(void)
> +{
> +	struct rdt_ctrl_domain *domain, *domain_temp;
> +	struct cbqri_controller *ctrl;
> +	struct cbqri_resctrl_res *res;
> +	int err = 0;
> +	int i = 0;
> +
> +	max_rmid = U32_MAX;
> +
> +	for (i = 0; i < RDT_NUM_RESOURCES; i++) {
> +		res = &cbqri_resctrl_resources[i];
> +		INIT_LIST_HEAD(&res->resctrl_res.ctrl_domains);
> +		INIT_LIST_HEAD(&res->resctrl_res.mon_domains);
> +		res->resctrl_res.rid = i;
> +	}
> +
> +	list_for_each_entry(ctrl, &cbqri_controllers, list) {
> +		err = cbqri_probe_controller(ctrl);
> +		if (err) {
> +			pr_err("%s(): failed (%d)\n", __func__, err);
> +			goto err_free_controllers_list;
> +		}
> +
> +		err = qos_resctrl_add_controller_domain(ctrl);
> +		if (err) {
> +			pr_err("%s(): failed to add controller domain (%d)\n", __func__, err);
> +			goto err_free_controllers_list;
> +		}
> +
> +		/*
> +		 * CDP (code data prioritization) on x86 is similar to
> +		 * the AT (access type) field in CBQRI. CDP only supports
> +		 * caches so this must be a CBQRI capacity controller.
> +		 */
> +		if (ctrl->type == CBQRI_CONTROLLER_TYPE_CAPACITY &&
> +		    ctrl->cc.supports_alloc_at_code) {
> +			if (ctrl->cache.cache_level == 2)
> +				exposed_cdp_l2_capable = true;
> +			else
> +				exposed_cdp_l3_capable = true;
> +		}
> +	}
> +	pr_debug("alloc=%d cdp_l2=%d cdp_l3=%d\n",
> +		 exposed_alloc_capable,
> +		 exposed_cdp_l2_capable, exposed_cdp_l3_capable);
> +
> +	err = resctrl_init();
> +	if (err)
> +		goto err_free_controllers_list;
> +
> +	return 0;
> +
> +err_free_controllers_list:
> +	for (i = 0; i < RDT_NUM_RESOURCES; i++) {
> +		res = &cbqri_resctrl_resources[i];
> +		list_for_each_entry_safe(domain, domain_temp, &res->resctrl_res.ctrl_domains,
> +					 hdr.list) {
> +			resctrl_offline_ctrl_domain(&res->resctrl_res, domain);
> +			list_del(&domain->hdr.list);
> +			kfree(container_of(domain, struct cbqri_resctrl_dom, resctrl_ctrl_dom));
> +		}
> +	}
> +
> +	list_for_each_entry(ctrl, &cbqri_controllers, list) {
> +		if (!ctrl->base)
> +			break;
> +		iounmap(ctrl->base);
> +		ctrl->base = NULL;
> +		release_mem_region(ctrl->addr, ctrl->size);
> +	}
> +
> +	return err;
> +}
> +
> +int qos_resctrl_online_cpu(unsigned int cpu)
> +{
> +	resctrl_online_cpu(cpu);

This is where a domain is expected to be added when its first CPU comes online.

> +	return 0;
> +}
> +
> +int qos_resctrl_offline_cpu(unsigned int cpu)
> +{
> +	resctrl_offline_cpu(cpu);

This is where a domain is expected to be removed when its last CPU goes offline.

> +	return 0;
> +}
> 

Reinette


Thread overview: 23+ messages
2026-04-15  1:53 [PATCH RFC v3 00/11] RISC-V: QoS: add CBQRI resctrl interface Drew Fustini
2026-04-15  1:53 ` [PATCH RFC v3 01/11] dt-bindings: riscv: Add Ssqosid extension description Drew Fustini
2026-04-15  1:53 ` [PATCH RFC v3 02/11] RISC-V: Detect the Ssqosid extension Drew Fustini
2026-04-15  1:53 ` [PATCH RFC v3 03/11] RISC-V: Add support for srmcfg CSR from " Drew Fustini
2026-04-15  1:53 ` [PATCH RFC v3 04/11] RISC-V: QoS: add CBQRI hardware interface Drew Fustini
2026-04-30 23:15   ` Reinette Chatre
2026-05-01  4:45     ` Drew Fustini
2026-04-15  1:53 ` [PATCH RFC v3 05/11] RISC-V: QoS: add resctrl arch callbacks for CBQRI controllers Drew Fustini
2026-04-30 23:17   ` Reinette Chatre
2026-04-30 23:37     ` Drew Fustini
2026-04-30 23:52       ` Reinette Chatre
2026-04-15  1:54 ` [PATCH RFC v3 06/11] RISC-V: QoS: add resctrl setup and domain management Drew Fustini
2026-04-17 10:52   ` guo.wenjia23
2026-04-18 22:01     ` Drew Fustini
2026-04-30 23:20   ` Reinette Chatre [this message]
2026-05-01 22:56     ` Drew Fustini
2026-04-15  1:54 ` [PATCH RFC v3 07/11] RISC-V: QoS: enable resctrl support for Ssqosid Drew Fustini
2026-04-15  1:54 ` [PATCH RFC v3 08/11] ACPI: PPTT: Add acpi_pptt_get_cache_size_from_id helper Drew Fustini
2026-04-30 23:20   ` Reinette Chatre
2026-05-01 16:53     ` Drew Fustini
2026-04-15  1:54 ` [PATCH RFC v3 09/11] DO NOT MERGE: include: acpi: actbl2: Add structs for RQSC table Drew Fustini
2026-04-15  1:54 ` [PATCH RFC v3 10/11] ACPI: RISC-V: Parse RISC-V Quality of Service Controller (RQSC) table Drew Fustini
2026-04-15  1:54 ` [PATCH RFC v3 11/11] ACPI: RISC-V: Add support for RISC-V Quality of Service Controller (RQSC) Drew Fustini
