From: Reinette Chatre <reinette.chatre@intel.com>
To: Babu Moger <babu.moger@amd.com>, <corbet@lwn.net>,
	<tony.luck@intel.com>, <Dave.Martin@arm.com>,
	<james.morse@arm.com>, <tglx@linutronix.de>, <mingo@redhat.com>,
	<bp@alien8.de>, <dave.hansen@linux.intel.com>
Cc: <x86@kernel.org>, <hpa@zytor.com>, <akpm@linux-foundation.org>,
	<paulmck@kernel.org>, <rostedt@goodmis.org>,
	<Neeraj.Upadhyay@amd.com>, <david@redhat.com>, <arnd@arndb.de>,
	<fvdl@google.com>, <seanjc@google.com>, <thomas.lendacky@amd.com>,
	<pawan.kumar.gupta@linux.intel.com>, <yosry.ahmed@linux.dev>,
	<sohil.mehta@intel.com>, <xin@zytor.com>, <kai.huang@intel.com>,
	<xiaoyao.li@intel.com>, <peterz@infradead.org>, <me@mixaill.net>,
	<mario.limonciello@amd.com>, <xin3.li@intel.com>,
	<ebiggers@google.com>, <ak@linux.intel.com>,
	<chang.seok.bae@intel.com>, <andrew.cooper3@citrix.com>,
	<perry.yuan@amd.com>, <linux-doc@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <manali.shukla@amd.com>,
	<gautham.shenoy@amd.com>
Subject: Re: [PATCH v8 08/10] fs/resctrl: Modify rdt_parse_data to pass mode and CLOSID
Date: Thu, 7 Aug 2025 18:52:09 -0700	[thread overview]
Message-ID: <b08c1d64-8d19-48e7-853a-21947d61ff98@intel.com> (raw)
In-Reply-To: <330b69d50c0161b7ac61986447a9867db7221f93.1754436586.git.babu.moger@amd.com>

Hi Babu,

On 8/5/25 4:30 PM, Babu Moger wrote:
> parse_cbm() and parse_bw() require mode and CLOSID to validate the Capacity

Again [1], parse_bw() does not validate any CBMs.

To be more specific: "mode" -> "resource group mode"?

> Bit Mask (CBM). It is passed via struct rdtgroup in struct rdt_parse_data.
> 
> The io_alloc feature also uses CBMs to indicate which portions of cache are
> allocated for I/O traffic. The CBMs are provided by user space and need to
> be validated the same as CBMs provided for general (CPU) cache allocation.
> parse_cbm() cannot be used as-is since io_alloc does not have rdtgroup
> context.
> 
> Pass the mode and CLOSID directly to parse_cbm() via struct rdt_parse_data
> instead of through the rdtgroup struct to facilitate calling parse_cbm() to
> verify the CBM of the io_alloc feature.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
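
To confirm my understanding: after this change struct rdt_parse_data
presumably carries the mode and CLOSID directly, roughly like the sketch
below (member names taken from the hunk further down, exact layout
assumed):

	struct rdt_parse_data {
		enum rdtgrp_mode	mode;	/* replaces the rdtgroup pointer */
		u32			closid;
		char			*buf;
	};

which is what allows io_alloc to call parse_cbm() without an rdtgroup.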

...

> @@ -156,9 +157,10 @@ static bool cbm_validate(char *buf, u32 *data, struct rdt_resource *r)
>  static int parse_cbm(struct rdt_parse_data *data, struct resctrl_schema *s,
>  		     struct rdt_ctrl_domain *d)
>  {
> -	struct rdtgroup *rdtgrp = data->rdtgrp;
> +	enum rdtgrp_mode mode = data->mode;
>  	struct resctrl_staged_config *cfg;
>  	struct rdt_resource *r = s->res;
> +	u32 closid = data->closid;
>  	u32 cbm_val;
>  
>  	cfg = &d->staged_config[s->conf_type];
> @@ -171,7 +173,7 @@ static int parse_cbm(struct rdt_parse_data *data, struct resctrl_schema *s,
>  	 * Cannot set up more than one pseudo-locked region in a cache
>  	 * hierarchy.
>  	 */
> -	if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP &&
> +	if (mode == RDT_MODE_PSEUDO_LOCKSETUP &&
>  	    rdtgroup_pseudo_locked_in_hierarchy(d)) {
>  		rdt_last_cmd_puts("Pseudo-locked region in hierarchy\n");
>  		return -EINVAL;
> @@ -180,8 +182,8 @@ static int parse_cbm(struct rdt_parse_data *data, struct resctrl_schema *s,
>  	if (!cbm_validate(data->buf, &cbm_val, r))
>  		return -EINVAL;
>  
> -	if ((rdtgrp->mode == RDT_MODE_EXCLUSIVE ||
> -	     rdtgrp->mode == RDT_MODE_SHAREABLE) &&
> +	if ((mode == RDT_MODE_EXCLUSIVE ||
> +	     mode == RDT_MODE_SHAREABLE) &&

With "mode" now a local variable, this condition can fit on one line?
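
That is, something along these lines (assuming it still fits the line
length limit):

	if ((mode == RDT_MODE_EXCLUSIVE || mode == RDT_MODE_SHAREABLE) &&
	    rdtgroup_cbm_overlaps_pseudo_locked(d, cbm_val)) {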

>  	    rdtgroup_cbm_overlaps_pseudo_locked(d, cbm_val)) {
>  		rdt_last_cmd_puts("CBM overlaps with pseudo-locked region\n");
>  		return -EINVAL;

Reinette

[1] https://lore.kernel.org/lkml/798ba4db-3ac2-44a9-9e0d-e9cbb0dbff45@intel.com/

Thread overview: 34+ messages
2025-08-05 23:30 [PATCH v8 00/10] x86,fs/resctrl: Support L3 Smart Data Cache Injection Allocation Enforcement (SDCIAE) Babu Moger
2025-08-05 23:30 ` [PATCH v8 01/10] x86/cpufeatures: Add support for L3 Smart Data Cache Injection Allocation Enforcement Babu Moger
2025-08-05 23:30 ` [PATCH v8 02/10] x86/resctrl: Add SDCIAE feature in the command line options Babu Moger
2025-08-08  1:44   ` Reinette Chatre
2025-08-22 22:07     ` Moger, Babu
2025-08-05 23:30 ` [PATCH v8 03/10] x86,fs/resctrl: Detect io_alloc feature Babu Moger
2025-08-05 23:30 ` [PATCH v8 04/10] x86,fs/resctrl: Implement "io_alloc" enable/disable handlers Babu Moger
2025-08-08  1:47   ` Reinette Chatre
2025-08-22 22:10     ` Moger, Babu
2025-08-05 23:30 ` [PATCH v8 05/10] fs/resctrl: Introduce interface to display "io_alloc" support Babu Moger
2025-08-08  1:48   ` Reinette Chatre
2025-08-22 22:12     ` Moger, Babu
2025-08-05 23:30 ` [PATCH v8 06/10] fs/resctrl: Add user interface to enable/disable io_alloc feature Babu Moger
2025-08-08  1:49   ` Reinette Chatre
2025-08-22 22:53     ` Moger, Babu
2025-08-27 20:39       ` Moger, Babu
2025-08-29  2:47         ` Reinette Chatre
2025-09-02 16:20           ` Moger, Babu
2025-08-21  5:02   ` Gautham R. Shenoy
2025-08-22 23:10     ` Moger, Babu
2025-08-05 23:30 ` [PATCH v8 07/10] fs/resctrl: Introduce interface to display io_alloc CBMs Babu Moger
2025-08-08  1:51   ` Reinette Chatre
2025-08-26 18:33     ` Moger, Babu
2025-08-05 23:30 ` [PATCH v8 08/10] fs/resctrl: Modify rdt_parse_data to pass mode and CLOSID Babu Moger
2025-08-08  1:52   ` Reinette Chatre [this message]
2025-08-26 18:40     ` Moger, Babu
2025-08-05 23:30 ` [PATCH v8 09/10] fs/resctrl: Introduce interface to modify io_alloc Capacity Bit Masks Babu Moger
2025-08-08  1:53   ` Reinette Chatre
2025-08-26 18:53     ` Moger, Babu
2025-08-05 23:30 ` [PATCH v8 10/10] fs/resctrl: Update bit_usage to reflect io_alloc Babu Moger
2025-08-08  1:54   ` Reinette Chatre
2025-08-26 22:51     ` Moger, Babu
2025-08-29  3:11       ` Reinette Chatre
2025-09-02 16:32         ` Moger, Babu
