From: Reinette Chatre <reinette.chatre@intel.com>
To: "Luck, Tony" <tony.luck@intel.com>,
"Moger, Babu" <bmoger@amd.com>,
"Dave Martin" <Dave.Martin@arm.com>,
Babu Moger <babu.moger@amd.com>
Cc: "tglx@linutronix.de" <tglx@linutronix.de>,
"mingo@redhat.com" <mingo@redhat.com>,
"bp@alien8.de" <bp@alien8.de>,
"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
"corbet@lwn.net" <corbet@lwn.net>,
"james.morse@arm.com" <james.morse@arm.com>,
"x86@kernel.org" <x86@kernel.org>,
"hpa@zytor.com" <hpa@zytor.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"paulmck@kernel.org" <paulmck@kernel.org>,
"rdunlap@infradead.org" <rdunlap@infradead.org>,
"pmladek@suse.com" <pmladek@suse.com>,
"kees@kernel.org" <kees@kernel.org>,
"arnd@arndb.de" <arnd@arndb.de>,
"fvdl@google.com" <fvdl@google.com>,
"seanjc@google.com" <seanjc@google.com>,
"pawan.kumar.gupta@linux.intel.com"
<pawan.kumar.gupta@linux.intel.com>,
"xin@zytor.com" <xin@zytor.com>,
"thomas.lendacky@amd.com" <thomas.lendacky@amd.com>,
"Mehta, Sohil" <sohil.mehta@intel.com>,
"jarkko@kernel.org" <jarkko@kernel.org>,
"Bae, Chang Seok" <chang.seok.bae@intel.com>,
"ebiggers@google.com" <ebiggers@google.com>,
"Reshetova, Elena" <elena.reshetova@intel.com>,
"ak@linux.intel.com" <ak@linux.intel.com>,
"mario.limonciello@amd.com" <mario.limonciello@amd.com>,
"perry.yuan@amd.com" <perry.yuan@amd.com>,
"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"peternewman@google.com" <peternewman@google.com>,
"feng.tang@linux.alibaba.com" <feng.tang@linux.alibaba.com>
Subject: Re: [PATCH v11 06/10] fs/resctrl: Add user interface to enable/disable io_alloc feature
Date: Wed, 5 Nov 2025 16:48:51 -0800 [thread overview]
Message-ID: <5bf8e5f6-d515-4cd0-a2d7-c0eb9a305c5d@intel.com> (raw)
In-Reply-To: <SJ1PR11MB608391142B594331922876EFFCC5A@SJ1PR11MB6083.namprd11.prod.outlook.com>
Hi Dave and Tony,
On 11/5/25 10:25 AM, Luck, Tony wrote:
>> But in AMD systems it's the highest CLOSID (15). Also, this CLOSID usage
>> is not visible to the user. There is no update of the PQR_ASSOC register
>> during the context switch. Hardware internally routes the traffic using
>> CLOSID 15's limits.
>
> Things are even more complex for Intel IO as described in the RDT architecture
> specification. There can be separate IO caches from the CPU caches. When
> this happens the RMIDs and CLOSIDs for IO are in a separate space from
> those for CPU. I.e. you can assign RMID=1 CLOSID=1 to some tasks and
> those will measure and control traffic from a CPU L3 cache instance.
> IO devices may also use RMID=1, CLOSID=1 ... but those measure and
> control traffic from an IO cache instance.
>
> This looks like the multi-socket case where RMID=1 on
> socket 0 (and thus L3 cache 0) is independent from RMID=1 on
> socket 1. But resctrl partially hides this by making RMID allocation
> global and just providing separate event files for each L3 cache
> instance.
>
> I don't think this maps to CPU vs IO. As Babu notes above, there's
> no update for IO CLOSID/RMID on process context switch. So it
> makes no sense to allocate IO RMIDs from the same pool as CPU
> RMIDs.
>
> I haven't come up with any concrete plans for how to implement this
> version of IO RDT into resctrl. The earlier implementation on Granite Rapids
> didn't have IO caches independent from CPU caches.
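The separate-namespace point above can be made concrete with a toy model
(not resctrl code; all names here are made up for illustration). If IO and
CPU RMIDs index independent cache instances, each namespace needs its own
allocator, and the same numeric RMID can legitimately be in use in both:

```python
# Hypothetical model of two independent RMID namespaces, as described
# for the IO-cache case. This is not actual resctrl code.

class RmidPool:
    """Simple free-list allocator for one RMID namespace."""
    def __init__(self, num_rmids):
        self.free = list(range(num_rmids))

    def alloc(self):
        # Hand out the lowest free RMID, or None if exhausted.
        return self.free.pop(0) if self.free else None

    def release(self, rmid):
        self.free.append(rmid)

# Separate pools: CPU RMID 1 and IO RMID 1 can coexist because they
# index different cache instances (a CPU L3 instance vs an IO cache).
cpu_pool = RmidPool(num_rmids=4)
io_pool = RmidPool(num_rmids=4)

cpu_rmid = cpu_pool.alloc()  # 0
io_rmid = io_pool.alloc()    # 0 -- same value, different namespace
```

A single global pool, by contrast, would burn one ID from both spaces for
every allocation, which is why sharing the pool makes no sense here.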
It seems to be common now that we need to build support for new features, and
ideally any new interface can be designed with some gateway to enable future
enhancements. SDCIAE/io_alloc is a global IO alloc feature, while the ones
mentioned here for Arm and Intel seem to be a better match for being managed
per resource group. I did try to think about how to "keep the door open" for
future enhancements: hypothetically, "/sys/fs/resctrl/info/L3/io_alloc" could
in the future return a new value that implies "managed_in_resource_group",
which opens the door to creating new interfaces in the resource groups to
manage IO alloc there. The "io_alloc" documentation does seem high level
enough to support such an enhancement.
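To sketch what that could look like (the paths are from this series; the
"managed_in_resource_group" value and the per-group file are speculative,
not part of v11):

```
# Today: io_alloc is a global per-resource toggle.
cat /sys/fs/resctrl/info/L3/io_alloc        # "enabled" / "disabled" / "not supported"
echo 1 > /sys/fs/resctrl/info/L3/io_alloc   # enable io_alloc (SDCIAE on AMD)

# Hypothetical future: the same file reports that IO alloc is instead
# configured inside each resource group.
cat /sys/fs/resctrl/info/L3/io_alloc        # "managed_in_resource_group"
cat /sys/fs/resctrl/group1/io_alloc_cbm     # hypothetical per-group control
```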
Do you see any additional changes we can make to the interface or documentation
to set resctrl up for success in supporting these features in the future?
Reinette
2025-10-30 17:15 [PATCH v11 00/10] x86,fs/resctrl: Support L3 Smart Data Cache Injection Allocation Enforcement (SDCIAE) Babu Moger
2025-10-30 17:15 ` [PATCH v11 01/10] x86/cpufeatures: Add support for L3 Smart Data Cache Injection Allocation Enforcement Babu Moger
2025-10-30 17:15 ` [PATCH v11 02/10] x86/resctrl: Add SDCIAE feature in the command line options Babu Moger
2025-10-30 17:15 ` [PATCH v11 03/10] x86,fs/resctrl: Detect io_alloc feature Babu Moger
2025-10-30 17:15 ` [PATCH v11 04/10] x86,fs/resctrl: Implement "io_alloc" enable/disable handlers Babu Moger
2025-10-30 17:15 ` [PATCH v11 05/10] fs/resctrl: Introduce interface to display "io_alloc" support Babu Moger
2025-10-30 17:15 ` [PATCH v11 06/10] fs/resctrl: Add user interface to enable/disable io_alloc feature Babu Moger
2025-11-03 19:05 ` Reinette Chatre
2025-11-03 21:36 ` Babu Moger
2025-11-05 16:46 ` Dave Martin
2025-11-05 17:31 ` Moger, Babu
2025-11-05 18:25 ` Luck, Tony
2025-11-06 0:48 ` Reinette Chatre [this message]
2025-10-30 17:15 ` [PATCH v11 07/10] fs/resctrl: Introduce interface to display io_alloc CBMs Babu Moger
2025-10-30 17:15 ` [PATCH v11 08/10] fs/resctrl: Modify struct rdt_parse_data to pass mode and CLOSID Babu Moger
2025-10-30 17:15 ` [PATCH v11 09/10] fs/resctrl: Introduce interface to modify io_alloc capacity bitmasks Babu Moger
2025-10-30 17:15 ` [PATCH v11 10/10] fs/resctrl: Update bit_usage to reflect io_alloc Babu Moger