From: "Moger, Babu" <babu.moger@amd.com>
To: Reinette Chatre <reinette.chatre@intel.com>,
corbet@lwn.net, tony.luck@intel.com, Dave.Martin@arm.com,
james.morse@arm.com, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, dave.hansen@linux.intel.com
Cc: x86@kernel.org, hpa@zytor.com, akpm@linux-foundation.org,
paulmck@kernel.org, rostedt@goodmis.org, Neeraj.Upadhyay@amd.com,
david@redhat.com, arnd@arndb.de, fvdl@google.com,
seanjc@google.com, thomas.lendacky@amd.com,
pawan.kumar.gupta@linux.intel.com, yosry.ahmed@linux.dev,
sohil.mehta@intel.com, xin@zytor.com, kai.huang@intel.com,
xiaoyao.li@intel.com, peterz@infradead.org, me@mixaill.net,
mario.limonciello@amd.com, xin3.li@intel.com,
ebiggers@google.com, ak@linux.intel.com,
chang.seok.bae@intel.com, andrew.cooper3@citrix.com,
perry.yuan@amd.com, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, manali.shukla@amd.com,
gautham.shenoy@amd.com
Subject: Re: [PATCH v8 06/10] fs/resctrl: Add user interface to enable/disable io_alloc feature
Date: Wed, 27 Aug 2025 15:39:43 -0500
Message-ID: <d11e20c1-1162-422f-8915-97efa69644c7@amd.com>
In-Reply-To: <d5438a53-c803-4704-84db-1da019f50a3d@amd.com>
On 8/22/25 17:53, Moger, Babu wrote:
> Hi Reinette,
>
> On 8/7/2025 8:49 PM, Reinette Chatre wrote:
>> Hi Babu,
>>
>> On 8/5/25 4:30 PM, Babu Moger wrote:
>>> "io_alloc" feature in resctrl enables direct insertion of data from I/O
>>> devices into the cache.
>>>
>>> On AMD systems, when io_alloc is enabled, the highest CLOSID is reserved
>>> exclusively for I/O allocation traffic and is no longer available for
>>> general CPU cache allocation. Users are encouraged to enable it only when
>>> running workloads that can benefit from this functionality.
>>>
>>> Since CLOSIDs are managed by resctrl fs, it is least invasive to make the
>>> "io_alloc is supported by maximum supported CLOSID" part of the initial
>>> resctrl fs support for io_alloc. Take care not to expose this use of
>>> CLOSID
>>> for io_alloc to user space so that this is not required from other
>>> architectures that may support io_alloc differently in the future.
>>>
>>> Introduce user interface to enable/disable io_alloc feature.
>> Please include a high-level overview of what this patch does to enable
>> and disable io_alloc. Doing so will help connect why the changelog contains
>> information about CLOSID management.
>
>
> Sure.
>
>>
>>> diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
>>> index d495a5d5c9d5..bf982eab7b18 100644
>>> --- a/fs/resctrl/ctrlmondata.c
>>> +++ b/fs/resctrl/ctrlmondata.c
>>> @@ -685,3 +685,140 @@ int resctrl_io_alloc_show(struct kernfs_open_file *of, struct seq_file *seq, void *v)
>>> return 0;
>>> }
>>> +
>>> +/*
>>> + * resctrl_io_alloc_closid_supported() - io_alloc feature utilizes the
>>> + * highest CLOSID value to direct I/O traffic. Ensure that io_alloc_closid
>>> + * is in the supported range.
>>> + */
>>> +static bool resctrl_io_alloc_closid_supported(u32 io_alloc_closid)
>>> +{
>>> + return io_alloc_closid < closids_supported();
>>> +}
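
(Side note, as a reviewer aid rather than part of the hunk above: the
io_alloc CLOSID being validated here is the highest CLOSID the resource
supports. A minimal sketch of how it could be derived, assuming a helper
named resctrl_io_alloc_closid() that is not shown in this hunk:

        static u32 resctrl_io_alloc_closid(struct rdt_resource *r)
        {
                /* io_alloc claims the highest CLOSID the hardware supports */
                return resctrl_arch_get_num_closid(r) - 1;
        }

With CDP enabled the number of CLOSIDs usable by resctrl is halved, which
is exactly the case the range check above is meant to catch.)
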
>>> +
>>> +static struct resctrl_schema *resctrl_get_schema(enum resctrl_conf_type type)
>>> +{
>>> + struct resctrl_schema *schema;
>>> +
>>> + list_for_each_entry(schema, &resctrl_schema_all, list) {
>>> + if (schema->conf_type == type)
>>> + return schema;
>> This does not look right. More than one resource can have the same
>> configuration type, no?
>> Think about L2 and L3 having CDP enabled ...
>> Looks like this is missing a resource type as parameter and a check for
>> the resource ...
>> but is this function even necessary (more below)?
>
> May not be required. Comments below.
>
>>
>>> + }
>>> +
>>> + return NULL;
>>> +}
>>> +
>>> +/*
>>> + * Initialize io_alloc CLOSID cache resource CBM with all usable (shared
>>> + * and unused) cache portions.
>>> + */
>>> +static int resctrl_io_alloc_init_cbm(struct resctrl_schema *s, u32 closid)
>>> +{
>>> + struct rdt_resource *r = s->res;
>> Needs reverse fir.
>
>
> Sure.
>
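(For context: "reverse fir", also known as reverse Christmas tree
ordering, means sorting the local variable declarations from the longest
line to the shortest. For the declarations in this function that would
look roughly like:

        enum resctrl_conf_type peer_type;
        struct rdt_resource *r = s->res;
        struct resctrl_schema *peer_s;
        int ret;

Only the declaration order changes; the initialization of 'r' stays.)
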
>>
>>> + enum resctrl_conf_type peer_type;
>>> + struct resctrl_schema *peer_s;
>>> + int ret;
>>> +
>>> + rdt_staged_configs_clear();
>>> +
>>> + ret = rdtgroup_init_cat(s, closid);
>>> + if (ret < 0)
>>> + goto out;
>>> +
>>> + /* Initialize schema for both CDP_DATA and CDP_CODE when CDP is enabled */
>>> + if (resctrl_arch_get_cdp_enabled(r->rid)) {
>>> + peer_type = resctrl_peer_type(s->conf_type);
>>> + peer_s = resctrl_get_schema(peer_type);
>>> + if (peer_s) {
>>> + ret = rdtgroup_init_cat(peer_s, closid);
>> This is unexpected. In v7 I suggested that when parsing the CBM of one
>> of the CDP resources it is not necessary to do so again for the peer.
>> The CBM can be parsed *once* and the configuration just copied over. See:
>> https://lore.kernel.org/lkml/82045638-2b26-4682-9374-1c3e400a580a@intel.com/
>
> Let me try to understand.
>
> So, rdtgroup_init_cat() sets up the staged_config for the specific CDP
> type for all the domains.
>
> We need to apply those staged_configs to its peer type on all the domains.
>
> Something like this?
>
>         /* Initialize staged_config of the peer type when CDP is enabled */
>         if (resctrl_arch_get_cdp_enabled(r->rid)) {
>                 list_for_each_entry(d, &s->res->ctrl_domains, hdr.list) {
>                         cfg = &d->staged_config[s->conf_type];
>                         cfg_peer = &d->staged_config[peer_type];
>                         cfg_peer->new_ctrl = cfg->new_ctrl;
>                         cfg_peer->have_new_ctrl = cfg->have_new_ctrl;
>                 }
>         }
>
Replaced with the following snippet:

        /* Initialize schema for both CDP_DATA and CDP_CODE when CDP is enabled */
        if (resctrl_arch_get_cdp_enabled(r->rid)) {
                peer_type = resctrl_peer_type(s->conf_type);
                list_for_each_entry(d, &s->res->ctrl_domains, hdr.list)
                        memcpy(&d->staged_config[peer_type],
                               &d->staged_config[s->conf_type],
                               sizeof(*d->staged_config));
        }
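
One note on the memcpy sizing, in case it helps review:
sizeof(*d->staged_config) is the size of a single struct
resctrl_staged_config element, so only the entry parsed for s->conf_type
is copied into its CDP peer slot. It is equivalent to a plain struct
assignment, roughly:

        d->staged_config[peer_type] = d->staged_config[s->conf_type];
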
>
>>
>> Generally when feedback is provided it is good to check all places in
>> series where
>> it is relevant. oh ... but looking ahead you ignored the feedback in the
>> patch
>> it was given also :(
>
>
> My bad.
>
> I will address that.
>
> Thanks
>
> Babu
>
>
--
Thanks
Babu Moger