public inbox for linux-kernel@vger.kernel.org
From: Aaron Tomlin <atomlin@atomlin.com>
To: tony.luck@intel.com, reinette.chatre@intel.com,
	Dave.Martin@arm.com, james.morse@arm.com, babu.moger@amd.com,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com
Cc: dave.martin@arm.com, atomlin@atomlin.com, sean@ashe.io,
	linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] fs/resctrl: Add helpers to check io_alloc support and enabled state
Date: Wed, 26 Nov 2025 12:16:50 -0500	[thread overview]
Message-ID: <20251126171653.1004321-2-atomlin@atomlin.com> (raw)
In-Reply-To: <20251126171653.1004321-1-atomlin@atomlin.com>

Introduce two helpers, check_io_alloc_support() and
check_io_alloc_enabled(), to validate io_alloc support and whether it
is enabled, respectively. This reduces code duplication, clarifies
intent, and preserves the existing semantics and error messages.

Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
---
 fs/resctrl/ctrlmondata.c | 74 +++++++++++++++++++++++++++++-----------
 1 file changed, 54 insertions(+), 20 deletions(-)

diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
index b2d178d3556e..5f6f96d70e4a 100644
--- a/fs/resctrl/ctrlmondata.c
+++ b/fs/resctrl/ctrlmondata.c
@@ -758,6 +758,50 @@ u32 resctrl_io_alloc_closid(struct rdt_resource *r)
 		return resctrl_arch_get_num_closid(r) - 1;
 }
 
+/*
+ * check_io_alloc_support() - Establish whether io_alloc is supported
+ *
+ * @s: resctrl resource schema
+ *
+ * This function must be called under the CPU hotplug lock and the
+ * rdtgroup mutex.
+ *
+ * Return: 0 if io_alloc is supported, -ENODEV otherwise.
+ */
+static int check_io_alloc_support(struct resctrl_schema *s)
+{
+	struct rdt_resource *r = s->res;
+
+	if (!r->cache.io_alloc_capable) {
+		rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+/*
+ * check_io_alloc_enabled() - Establish whether io_alloc is enabled
+ *
+ * @s: resctrl resource schema
+ *
+ * This function must be called under the CPU hotplug lock and the
+ * rdtgroup mutex.
+ *
+ * Return: 0 if io_alloc is enabled, -EINVAL otherwise.
+ */
+static int check_io_alloc_enabled(struct resctrl_schema *s)
+{
+	struct rdt_resource *r = s->res;
+
+	if (!resctrl_arch_get_io_alloc_enabled(r)) {
+		rdt_last_cmd_printf("io_alloc is not enabled on %s\n", s->name);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 ssize_t resctrl_io_alloc_write(struct kernfs_open_file *of, char *buf,
 			       size_t nbytes, loff_t off)
 {
@@ -777,11 +821,9 @@ ssize_t resctrl_io_alloc_write(struct kernfs_open_file *of, char *buf,
 
 	rdt_last_cmd_clear();
 
-	if (!r->cache.io_alloc_capable) {
-		rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
-		ret = -ENODEV;
+	ret = check_io_alloc_support(s);
+	if (ret)
 		goto out_unlock;
-	}
 
 	/* If the feature is already up to date, no action is needed. */
 	if (resctrl_arch_get_io_alloc_enabled(r) == enable)
@@ -839,17 +881,13 @@ int resctrl_io_alloc_cbm_show(struct kernfs_open_file *of, struct seq_file *seq,
 
 	rdt_last_cmd_clear();
 
-	if (!r->cache.io_alloc_capable) {
-		rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
-		ret = -ENODEV;
+	ret = check_io_alloc_support(s);
+	if (ret)
 		goto out_unlock;
-	}
 
-	if (!resctrl_arch_get_io_alloc_enabled(r)) {
-		rdt_last_cmd_printf("io_alloc is not enabled on %s\n", s->name);
-		ret = -EINVAL;
+	ret = check_io_alloc_enabled(s);
+	if (ret)
 		goto out_unlock;
-	}
 
 	/*
 	 * When CDP is enabled, the CBMs of the highest CLOSID of CDP_CODE and
@@ -928,17 +966,13 @@ ssize_t resctrl_io_alloc_cbm_write(struct kernfs_open_file *of, char *buf,
 	mutex_lock(&rdtgroup_mutex);
 	rdt_last_cmd_clear();
 
-	if (!r->cache.io_alloc_capable) {
-		rdt_last_cmd_printf("io_alloc is not supported on %s\n", s->name);
-		ret = -ENODEV;
+	ret = check_io_alloc_support(s);
+	if (ret)
 		goto out_unlock;
-	}
 
-	if (!resctrl_arch_get_io_alloc_enabled(r)) {
-		rdt_last_cmd_printf("io_alloc is not enabled on %s\n", s->name);
-		ret = -EINVAL;
+	ret = check_io_alloc_enabled(s);
+	if (ret)
 		goto out_unlock;
-	}
 
 	io_alloc_closid = resctrl_io_alloc_closid(r);
 
-- 
2.51.0



Thread overview: 8+ messages
2025-11-26 17:16 [PATCH 0/3] x86/resctrl: Add "*" shorthand to set minimum io_alloc CBM for all domains Aaron Tomlin
2025-11-26 17:16 ` Aaron Tomlin [this message]
2025-12-05 19:35   ` [PATCH 1/3] fs/resctrl: Add helpers to check io_alloc support and enabled state Moger, Babu
2025-12-15 22:27     ` Aaron Tomlin
2025-11-26 17:16 ` [PATCH 2/3] fs/resctrl: Return -EINVAL for a missing seq_show implementation Aaron Tomlin
2025-11-26 17:16 ` [PATCH 3/3] x86/resctrl: Add "*" shorthand to set minimum io_alloc CBM for all domains Aaron Tomlin
2025-12-05 19:30   ` Moger, Babu
2025-12-15 22:50     ` Aaron Tomlin
