From: Reinette Chatre <reinette.chatre@intel.com>
To: Babu Moger <babu.moger@amd.com>, <corbet@lwn.net>,
<tony.luck@intel.com>, <Dave.Martin@arm.com>,
<james.morse@arm.com>, <tglx@linutronix.de>, <mingo@redhat.com>,
<bp@alien8.de>, <dave.hansen@linux.intel.com>
Cc: <x86@kernel.org>, <hpa@zytor.com>, <kas@kernel.org>,
<rick.p.edgecombe@intel.com>, <akpm@linux-foundation.org>,
<paulmck@kernel.org>, <pmladek@suse.com>,
<pawan.kumar.gupta@linux.intel.com>, <rostedt@goodmis.org>,
<kees@kernel.org>, <arnd@arndb.de>, <fvdl@google.com>,
<seanjc@google.com>, <thomas.lendacky@amd.com>,
<manali.shukla@amd.com>, <perry.yuan@amd.com>,
<sohil.mehta@intel.com>, <xin@zytor.com>, <peterz@infradead.org>,
<mario.limonciello@amd.com>, <gautham.shenoy@amd.com>,
<nikunj@amd.com>, <dapeng1.mi@linux.intel.com>,
<ak@linux.intel.com>, <chang.seok.bae@intel.com>,
<ebiggers@google.com>, <linux-doc@vger.kernel.org>,
<linux-kernel@vger.kernel.org>, <linux-coco@lists.linux.dev>,
<kvm@vger.kernel.org>
Subject: Re: [PATCH v9 06/10] fs/resctrl: Add user interface to enable/disable io_alloc feature
Date: Wed, 17 Sep 2025 22:37:33 -0700 [thread overview]
Message-ID: <d18dc408-0a05-47b4-9126-19a7bd5fff6b@intel.com> (raw)
In-Reply-To: <2cc1e83ba1b232ff9e763111241863672b45d3ea.1756851697.git.babu.moger@amd.com>
Hi Babu,
On 9/2/25 3:41 PM, Babu Moger wrote:
> "io_alloc" feature in resctrl enables direct insertion of data from I/O
> devices into the cache.
(repetition)
>
> On AMD systems, when io_alloc is enabled, the highest CLOSID is reserved
> exclusively for I/O allocation traffic and is no longer available for
> general CPU cache allocation. Users are encouraged to enable it only when
> running workloads that can benefit from this functionality.
>
> Since CLOSIDs are managed by resctrl fs, it is least invasive to make the
> "io_alloc is supported by maximum supported CLOSID" part of the initial
> resctrl fs support for io_alloc. Take care not to expose this use of CLOSID
> for io_alloc to user space so that this is not required from other
> architectures that may support io_alloc differently in the future.
>
> Introduce user interface to enable/disable io_alloc feature. Check to
> verify the availability of CLOSID reserved for io_alloc, and initialize
> the CLOSID with a usable CBMs across all the domains.
I think the flow will improve if the above two paragraphs are swapped. This is
also missing the non-obvious support for CDP. As mentioned in the previous patch, if
the related doc change is moved from patch 5 to here it can be handled together.
Trying to put it all together, please feel free to improve:
AMD's SDCIAE forces all SDCI lines to be placed into the L3 cache portions
identified by the highest-supported L3_MASK_n register, where n is the maximum
supported CLOSID.
To support AMD's SDCIAE, when the io_alloc resctrl feature is enabled, reserve the
highest CLOSID exclusively for I/O allocation traffic, making it no longer available
for general CPU cache allocation.
Introduce a user interface to enable/disable the io_alloc feature and encourage users
to enable io_alloc only when running workloads that can benefit from this
functionality. On enable, initialize the io_alloc CLOSID with all usable CBMs
across all the domains.
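To make sure we mean the same thing by the enable flow above, here is a toy
sketch of it in plain Python. This is not kernel code; every name in it is
made up for illustration:

```python
# Toy model of the io_alloc enable flow: reserve the highest CLOSID for
# I/O allocation traffic and give it every usable CBM bit.
# All names are hypothetical; this is not kernel code.

def full_cbm(cbm_len: int) -> int:
    """All-ones capacity bitmask for a cache with cbm_len mask bits."""
    return (1 << cbm_len) - 1

def enable_io_alloc(num_closids: int, cbm_len: int, used_closids: set):
    """Reserve the highest-supported CLOSID for io_alloc.

    Fails if a control group already owns that CLOSID; on success the
    CLOSID is marked used (no longer available for CPU cache allocation)
    and its CBM is initialized to all usable bits.
    Returns (io_alloc_closid, cbm).
    """
    io_alloc_closid = num_closids - 1        # highest-supported CLOSID
    if io_alloc_closid in used_closids:
        raise RuntimeError("io_alloc CLOSID is in use by a control group")
    used_closids.add(io_alloc_closid)        # reserved for I/O traffic now
    return io_alloc_closid, full_cbm(cbm_len)
```

With 16 CLOSIDs and an 11-bit CBM this reserves CLOSID 15 with mask 0x7ff,
and a second enable attempt while a group holds CLOSID 15 fails, mirroring
the -ENOSPC path in the patch.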
Since CLOSIDs are managed by resctrl fs, it is least invasive to make
"io_alloc is supported by the maximum supported CLOSID" part of the initial
resctrl fs support for io_alloc. Take care to only minimally (in error messages)
expose this use of CLOSID for io_alloc to user space so that this is
not required from other architectures that may support io_alloc differently in the future.
When resctrl is mounted with "-o cdp" to enable code/data prioritization
there are two L3 resources that can support I/O allocation: L3CODE and L3DATA.
From the resctrl fs perspective the two resources share a CLOSID and the
architecture's available CLOSIDs are halved to support this.
The architecture's underlying CLOSID used by SDCIAE when CDP is enabled is
the CLOSID associated with the L3CODE resource, but from resctrl's perspective
there is only one CLOSID for both L3CODE and L3DATA. L3DATA is thus not usable
for general (CPU) cache allocation nor I/O allocation. Keep the L3CODE and
L3DATA io_alloc status in sync to avoid any confusion to user space. That
is, enabling io_alloc on L3CODE does so on L3DATA and vice versa, and
keep the I/O allocation CBMs of L3CODE and L3DATA in sync.
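The CDP halving above can be modeled in a few lines (again a toy sketch in
plain Python, not kernel code; names are hypothetical):

```python
# Toy model: with CDP enabled, resctrl sees half the hardware CLOSIDs
# because each resctrl CLOSID maps to a code/data pair of hardware
# CLOSIDs. The io_alloc CLOSID is the highest one resctrl can see.
# All names are hypothetical; this is not kernel code.

def resctrl_num_closids(hw_closids: int, cdp_enabled: bool) -> int:
    """CLOSIDs visible to resctrl fs; halved when CDP is enabled."""
    return hw_closids // 2 if cdp_enabled else hw_closids

def io_alloc_closid(hw_closids: int, cdp_enabled: bool) -> int:
    """Highest CLOSID from the resctrl fs point of view."""
    return resctrl_num_closids(hw_closids, cdp_enabled) - 1
```

So on hardware with 16 CLOSIDs, io_alloc uses CLOSID 15 without CDP but
CLOSID 7 (shared by L3CODE and L3DATA) once resctrl is mounted with "-o cdp".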
>
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
...
> +ssize_t resctrl_io_alloc_write(struct kernfs_open_file *of, char *buf,
> + size_t nbytes, loff_t off)
> +{
> + struct resctrl_schema *s = rdt_kn_parent_priv(of->kn);
> + struct rdt_resource *r = s->res;
> + char const *grp_name;
> + u32 io_alloc_closid;
> + bool enable;
> + int ret;
> +
> + ret = kstrtobool(buf, &enable);
> + if (ret)
> + return ret;
> +
> + cpus_read_lock();
> + mutex_lock(&rdtgroup_mutex);
> +
> + rdt_last_cmd_clear();
> +
> + if (!r->cache.io_alloc_capable) {
> + rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
> + ret = -ENODEV;
> + goto out_unlock;
> + }
> +
> + /* If the feature is already up to date, no action is needed. */
> + if (resctrl_arch_get_io_alloc_enabled(r) == enable)
> + goto out_unlock;
> +
> + io_alloc_closid = resctrl_io_alloc_closid(r);
> + if (!resctrl_io_alloc_closid_supported(io_alloc_closid)) {
> + rdt_last_cmd_printf("io_alloc CLOSID (ctrl_hw_id) %d is not available\n",
%d -> %u ?
> + io_alloc_closid);
> + ret = -EINVAL;
> + goto out_unlock;
> + }
> +
> + if (enable) {
> + if (!closid_alloc_fixed(io_alloc_closid)) {
> + grp_name = rdtgroup_name_by_closid(io_alloc_closid);
> + WARN_ON_ONCE(!grp_name);
> + rdt_last_cmd_printf("CLOSID (ctrl_hw_id) %d for io_alloc is used by %s group\n",
%d -> %u ?
> + io_alloc_closid, grp_name ? grp_name : "another");
> + ret = -ENOSPC;
> + goto out_unlock;
> + }
> +
> + ret = resctrl_io_alloc_init_cbm(s, io_alloc_closid);
> + if (ret) {
> + rdt_last_cmd_puts("Failed to initialize io_alloc allocations\n");
> + closid_free(io_alloc_closid);
> + goto out_unlock;
> + }
> + } else {
> + closid_free(io_alloc_closid);
> + }
> +
> + ret = resctrl_arch_io_alloc_enable(r, enable);
> +
> +out_unlock:
> + mutex_unlock(&rdtgroup_mutex);
> + cpus_read_unlock();
> +
> + return ret ?: nbytes;
> +}
Reinette