From: "Moger, Babu" <babu.moger@amd.com>
To: "Gautham R. Shenoy" <gautham.shenoy@amd.com>
Cc: corbet@lwn.net, tony.luck@intel.com, reinette.chatre@intel.com,
Dave.Martin@arm.com, james.morse@arm.com, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
x86@kernel.org, hpa@zytor.com, akpm@linux-foundation.org,
paulmck@kernel.org, rostedt@goodmis.org, Neeraj.Upadhyay@amd.com,
david@redhat.com, arnd@arndb.de, fvdl@google.com,
seanjc@google.com, thomas.lendacky@amd.com,
pawan.kumar.gupta@linux.intel.com, yosry.ahmed@linux.dev,
sohil.mehta@intel.com, xin@zytor.com, kai.huang@intel.com,
xiaoyao.li@intel.com, peterz@infradead.org, me@mixaill.net,
mario.limonciello@amd.com, xin3.li@intel.com,
ebiggers@google.com, ak@linux.intel.com,
chang.seok.bae@intel.com, andrew.cooper3@citrix.com,
perry.yuan@amd.com, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, manali.shukla@amd.com
Subject: Re: [PATCH v8 06/10] fs/resctrl: Add user interface to enable/disable io_alloc feature
Date: Fri, 22 Aug 2025 18:10:26 -0500 [thread overview]
Message-ID: <ea4eb63e-c174-40fa-ab7a-0a1a08b6542f@amd.com> (raw)
In-Reply-To: <aKaoYYm1ixYkVtyV@BLRRASHENOY1.amd.com>
Hi Gautham,
On 8/21/2025 12:02 AM, Gautham R. Shenoy wrote:
> Hello Babu,
>
> On Tue, Aug 05, 2025 at 06:30:26PM -0500, Babu Moger wrote:
>> "io_alloc" feature in resctrl enables direct insertion of data from I/O
>> devices into the cache.
>>
>> On AMD systems, when io_alloc is enabled, the highest CLOSID is reserved
>> exclusively for I/O allocation traffic and is no longer available for
>> general CPU cache allocation. Users are encouraged to enable it only when
>> running workloads that can benefit from this functionality.
>>
>> Since CLOSIDs are managed by resctrl fs, it is least invasive to make
>> "io_alloc is supported by the maximum supported CLOSID" part of the
>> initial resctrl fs support for io_alloc. Take care not to expose this use
>> of a CLOSID for io_alloc to user space, so that it is not required of
>> other architectures that may support io_alloc differently in the future.
>>
>> Introduce user interface to enable/disable io_alloc feature.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
> [..snip..]
>
>
>> +ssize_t resctrl_io_alloc_write(struct kernfs_open_file *of, char *buf,
>> + size_t nbytes, loff_t off)
>> +{
>> + struct resctrl_schema *s = rdt_kn_parent_priv(of->kn);
>> + struct rdt_resource *r = s->res;
>> + char const *grp_name;
>> + u32 io_alloc_closid;
>> + bool enable;
>> + int ret;
>> +
>> + ret = kstrtobool(buf, &enable);
>> + if (ret)
>> + return ret;
>> +
>> + cpus_read_lock();
>> + mutex_lock(&rdtgroup_mutex);
>> +
>> + rdt_last_cmd_clear();
>> +
>> + if (!r->cache.io_alloc_capable) {
>> + rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
>> + ret = -ENODEV;
>> + goto out_unlock;
>> + }
>> +
>> + io_alloc_closid = resctrl_io_alloc_closid(r);
>> + if (!resctrl_io_alloc_closid_supported(io_alloc_closid)) {
>> + rdt_last_cmd_printf("io_alloc CLOSID (ctrl_hw_id) %d is not available\n",
>> + io_alloc_closid);
>> + ret = -EINVAL;
>> + goto out_unlock;
>> + }
>> +
>> + /* If the feature is already up to date, no action is needed. */
>> + if (resctrl_arch_get_io_alloc_enabled(r) == enable)
>> + goto out_unlock;
> Does it make sense to move this check before calling resctrl_io_alloc_closid(r)?
Sure. We can do that.
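Something along these lines (just a sketch, reusing the checks quoted
above, untested):

	if (!r->cache.io_alloc_capable) {
		rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
		ret = -ENODEV;
		goto out_unlock;
	}

	/* If the feature is already up to date, no action is needed. */
	if (resctrl_arch_get_io_alloc_enabled(r) == enable)
		goto out_unlock;

	io_alloc_closid = resctrl_io_alloc_closid(r);
	if (!resctrl_io_alloc_closid_supported(io_alloc_closid)) {
		rdt_last_cmd_printf("io_alloc CLOSID (ctrl_hw_id) %d is not available\n",
				    io_alloc_closid);
		ret = -EINVAL;
		goto out_unlock;
	}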
Thanks
Babu
>
>
>> +
>> + if (enable) {
>> + if (!closid_alloc_fixed(io_alloc_closid)) {
>> + grp_name = rdtgroup_name_by_closid(io_alloc_closid);
>> + WARN_ON_ONCE(!grp_name);
>> + rdt_last_cmd_printf("CLOSID (ctrl_hw_id) %d for io_alloc is used by %s group\n",
>> + io_alloc_closid, grp_name ? grp_name : "another");
>> + ret = -ENOSPC;
>> + goto out_unlock;
>> + }
>> +
>> + ret = resctrl_io_alloc_init_cbm(s, io_alloc_closid);
>> + if (ret) {
>> + rdt_last_cmd_puts("Failed to initialize io_alloc allocations\n");
>> + closid_free(io_alloc_closid);
>> + goto out_unlock;
>> + }
>> + } else {
>> + closid_free(io_alloc_closid);
>> + }
>> +
>> + ret = resctrl_arch_io_alloc_enable(r, enable);
>> +
>> +out_unlock:
>> + mutex_unlock(&rdtgroup_mutex);
>> + cpus_read_unlock();
>> +
>> + return ret ?: nbytes;
>> +}
> [..snip..]
>