From: Kunal Joshi <kunal1.joshi@intel.com>
To: intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: imre.deak@intel.com, jani.nikula@intel.com,
Kunal Joshi <kunal1.joshi@intel.com>
Subject: [RFC 3/7] drm/display/dp_tunnel: Add bw_limit debugfs cap for BW pressure injection
Date: Mon, 11 May 2026 11:10:24 +0530
Message-ID: <20260511054028.1310995-4-kunal1.joshi@intel.com>
In-Reply-To: <20260511054028.1310995-1-kunal1.joshi@intel.com>

IGT needs to inject deterministic BW pressure to validate mode
filtering and fallback paths without requiring a real sink that
consumes a specific amount of bandwidth. Add a writable 'bw_limit'
file (in kB/s) under each tunnel's debugfs subdir that caps the
value reported by drm_dp_tunnel_available_bw(). Writing 0 clears
the cap.

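For example, a test could cap and then clear the reported bandwidth from the
shell. The debugfs path below is illustrative only; the actual location of the
per-tunnel "dp_tunnel" directory depends on where the driver registers it:

```shell
# Path is a stand-in for the real .../dp_tunnel/bw_limit debugfs file.
BW_LIMIT=${BW_LIMIT:-/tmp/bw_limit}

echo 100000 > "$BW_LIMIT"   # cap drm_dp_tunnel_available_bw() at 100000 kB/s
cat "$BW_LIMIT"             # read back the current cap
echo 0 > "$BW_LIMIT"        # writing 0 clears the cap
```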
Cc: Imre Deak <imre.deak@intel.com>
Assisted-by: Copilot:claude-sonnet-4-6
Signed-off-by: Kunal Joshi <kunal1.joshi@intel.com>
---
drivers/gpu/drm/display/drm_dp_tunnel.c | 76 ++++++++++++++++++++++++-
 1 file changed, 75 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c b/drivers/gpu/drm/display/drm_dp_tunnel.c
index b29dd59263ae2..c16b36d3bcf8a 100644
--- a/drivers/gpu/drm/display/drm_dp_tunnel.c
+++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
@@ -154,6 +154,7 @@ struct drm_dp_tunnel {
#ifdef CONFIG_DEBUG_FS
struct list_head debugfs_dirs;
+ int bw_limit;
#endif
};
@@ -1445,10 +1446,26 @@ EXPORT_SYMBOL(drm_dp_tunnel_max_dprx_lane_count);
* Returns the @tunnel group's estimated total available bandwidth in kB/s
* units, or -1 if the available BW isn't valid (the BW allocation mode is
* not enabled or the tunnel's state hasn't been updated).
+ *
+ * If a debug BW cap has been set via the "dp_tunnel/bw_limit" debugfs
+ * file, the returned value is min(group->available_bw, bw_limit). The
+ * cap defaults to 0 (no cap) and is only available when CONFIG_DEBUG_FS
+ * is enabled.
*/
int drm_dp_tunnel_available_bw(const struct drm_dp_tunnel *tunnel)
{
- return tunnel->group->available_bw;
+ int bw = tunnel->group->available_bw;
+
+#ifdef CONFIG_DEBUG_FS
+ {
+ int limit = READ_ONCE(tunnel->bw_limit);
+
+ if (bw > 0 && limit > 0)
+ bw = min(bw, limit);
+ }
+#endif
+
+ return bw;
}
EXPORT_SYMBOL(drm_dp_tunnel_available_bw);
@@ -2088,6 +2105,61 @@ static const struct file_operations tunnel_bw_alloc_enable_fops = {
.write = tunnel_bw_alloc_enable_write,
};
+static int tunnel_bw_limit_show(struct seq_file *m, void *data)
+{
+ struct drm_dp_tunnel *tunnel = m->private;
+
+ seq_printf(m, "%d\n", READ_ONCE(tunnel->bw_limit));
+
+ return 0;
+}
+
+static ssize_t tunnel_bw_limit_write(struct file *file,
+ const char __user *ubuf,
+ size_t len, loff_t *offp)
+{
+ struct seq_file *m = file->private_data;
+ struct drm_dp_tunnel *tunnel = m->private;
+ int limit;
+ int ret;
+
+ ret = kstrtoint_from_user(ubuf, len, 0, &limit);
+ if (ret)
+ return ret;
+
+ if (limit < 0)
+ return -EINVAL;
+
+ mutex_lock(&tunnel->group->mgr->debugfs_lock);
+
+ if (tunnel->destroyed) {
+ ret = -ENODEV;
+ goto unlock;
+ }
+
+ WRITE_ONCE(tunnel->bw_limit, limit);
+ ret = 0;
+
+unlock:
+ mutex_unlock(&tunnel->group->mgr->debugfs_lock);
+
+ return ret < 0 ? ret : len;
+}
+
+static int tunnel_bw_limit_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, tunnel_bw_limit_show, inode->i_private);
+}
+
+static const struct file_operations tunnel_bw_limit_fops = {
+ .owner = THIS_MODULE,
+ .open = tunnel_bw_limit_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+ .write = tunnel_bw_limit_write,
+};
+
/**
* drm_dp_tunnel_debugfs_add - Add DP tunnel debugfs entries
* @tunnel: Tunnel object the entries are registered for
@@ -2150,6 +2222,8 @@ void drm_dp_tunnel_debugfs_add(struct drm_dp_tunnel *tunnel, struct dentry *root
debugfs_create_file("info", 0444, dir, tunnel, &tunnel_info_fops);
debugfs_create_file("bw_alloc_enable", 0644, dir, tunnel,
&tunnel_bw_alloc_enable_fops);
+ debugfs_create_file("bw_limit", 0644, dir, tunnel,
+ &tunnel_bw_limit_fops);
unlock:
mutex_unlock(&tunnel->group->mgr->debugfs_lock);
--
2.25.1
2026-05-11 5:40 [RFC 0/7] drm/display/dp_tunnel: Add debugfs surface for BWA validation Kunal Joshi
2026-05-11 5:40 ` [RFC 1/7] drm/display/dp_tunnel: Add debugfs interface with info file Kunal Joshi
2026-05-11 5:40 ` [RFC 2/7] drm/display/dp_tunnel: Add bw_alloc_enable debugfs knob Kunal Joshi
2026-05-11 5:40 ` Kunal Joshi [this message]
2026-05-11 5:40 ` [RFC 4/7] drm/i915/dp_tunnel: Wire up DP tunnel debugfs from DRM core Kunal Joshi
2026-05-11 5:40 ` [RFC 5/7] drm/i915/display: Expose DP tunnel debugfs under each connector Kunal Joshi
2026-05-11 5:40 ` [RFC 6/7] drm/display/dp_tunnel: Sync SW allocated_bw after enabling BW alloc Kunal Joshi
2026-05-11 5:40 ` [RFC 7/7] drm/i915/dp_tunnel: Re-attach dp_tunnel debugfs to MST children on re-detect Kunal Joshi
2026-05-11 14:54 ` ✓ i915.CI.BAT: success for drm/display/dp_tunnel: Add debugfs surface for BWA validation Patchwork
2026-05-11 20:10 ` ✗ i915.CI.Full: failure " Patchwork