* [PATCH v2 0/3] drm/msm/gpu: Devfreq fixes+tuning
@ 2023-01-10 23:14 Rob Clark
2023-01-10 23:14 ` [PATCH v2 1/3] drm/msm/gpu: Add devfreq tuning debugfs Rob Clark
0 siblings, 4 replies; 7+ messages in thread
From: Rob Clark @ 2023-01-10 23:14 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Akhil P Oommen, Chia-I Wu,
Dmitry Baryshkov, Douglas Anderson, Konrad Dybcio, open list,
Sean Paul
From: Rob Clark <robdclark@chromium.org>
Rob Clark (3):
drm/msm/gpu: Add devfreq tuning debugfs
drm/msm/gpu: Bypass PM QoS constraint for idle clamp
drm/msm/gpu: Add default devfreq thresholds
drivers/gpu/drm/msm/Kconfig | 1 +
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/gpu/drm/msm/msm_debugfs.c | 12 +++
drivers/gpu/drm/msm/msm_drv.h | 9 ++
drivers/gpu/drm/msm/msm_gpu.h | 15 ++-
drivers/gpu/drm/msm/msm_gpu_devfreq.c | 148 ++++++++++++--------------
6 files changed, 100 insertions(+), 87 deletions(-)
--
2.38.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH v2 1/3] drm/msm/gpu: Add devfreq tuning debugfs
2023-01-10 23:14 [PATCH v2 0/3] drm/msm/gpu: Devfreq fixes+tuning Rob Clark
@ 2023-01-10 23:14 ` Rob Clark
2023-01-23 12:38 ` Krzysztof Kozlowski
2023-01-10 23:14 ` [PATCH v2 2/3] drm/msm/gpu: Bypass PM QoS constraint for idle clamp Rob Clark
3 siblings, 1 reply; 7+ messages in thread
From: Rob Clark @ 2023-01-10 23:14 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
Akhil P Oommen, Chia-I Wu, Douglas Anderson, Konrad Dybcio,
open list
From: Rob Clark <robdclark@chromium.org>
Make the handful of tuning knobs visible via debugfs.
v2: select DEVFREQ_GOV_SIMPLE_ONDEMAND because for some reason
struct devfreq_simple_ondemand_data depends on this
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/Kconfig | 1 +
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/gpu/drm/msm/msm_debugfs.c | 12 ++++++++++++
drivers/gpu/drm/msm/msm_drv.h | 9 +++++++++
drivers/gpu/drm/msm/msm_gpu.h | 3 ---
drivers/gpu/drm/msm/msm_gpu_devfreq.c | 6 ++++--
6 files changed, 27 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index 3c9dfdb0b328..f7abacb4b221 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -23,6 +23,7 @@ config DRM_MSM
select SHMEM
select TMPFS
select QCOM_SCM
+ select DEVFREQ_GOV_SIMPLE_ONDEMAND
select WANT_DEV_COREDUMP
select SND_SOC_HDMI_CODEC if SND_SOC
select SYNC_FILE
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 36c8fb699b56..6f7401f2acda 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2021,7 +2021,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
* to cause power supply issues:
*/
if (adreno_is_a618(adreno_gpu) || adreno_is_7c3(adreno_gpu))
- gpu->clamp_to_idle = true;
+ priv->gpu_clamp_to_idle = true;
/* Check if there is a GMU phandle and set it up */
node = of_parse_phandle(pdev->dev.of_node, "qcom,gmu", 0);
diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
index 95f4374ae21c..d6ecff0ab618 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -305,6 +305,7 @@ void msm_debugfs_init(struct drm_minor *minor)
{
struct drm_device *dev = minor->dev;
struct msm_drm_private *priv = dev->dev_private;
+ struct dentry *gpu_devfreq;
drm_debugfs_create_files(msm_debugfs_list,
ARRAY_SIZE(msm_debugfs_list),
@@ -325,6 +326,17 @@ void msm_debugfs_init(struct drm_minor *minor)
debugfs_create_file("shrink", S_IRWXU, minor->debugfs_root,
dev, &shrink_fops);
+ gpu_devfreq = debugfs_create_dir("devfreq", minor->debugfs_root);
+
+ debugfs_create_bool("idle_clamp", 0600, gpu_devfreq,
+ &priv->gpu_clamp_to_idle);
+
+ debugfs_create_u32("upthreshold", 0600, gpu_devfreq,
+ &priv->gpu_devfreq_config.upthreshold);
+
+ debugfs_create_u32("downdifferential", 0600, gpu_devfreq,
+ &priv->gpu_devfreq_config.downdifferential);
+
if (priv->kms && priv->kms->funcs->debugfs_init)
priv->kms->funcs->debugfs_init(priv->kms, minor);
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 876d8d5eec2f..6cb1c6d230e8 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -11,6 +11,7 @@
#include <linux/kernel.h>
#include <linux/clk.h>
#include <linux/cpufreq.h>
+#include <linux/devfreq.h>
#include <linux/module.h>
#include <linux/component.h>
#include <linux/platform_device.h>
@@ -234,6 +235,14 @@ struct msm_drm_private {
*/
unsigned int hangcheck_period;
+ /** gpu_devfreq_config: Devfreq tuning config for the GPU. */
+ struct devfreq_simple_ondemand_data gpu_devfreq_config;
+
+ /**
+ * gpu_clamp_to_idle: Enable clamping to idle freq when inactive
+ */
+ bool gpu_clamp_to_idle;
+
/**
* disable_err_irq:
*
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 732295e25683..ab110c377916 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -275,9 +275,6 @@ struct msm_gpu {
struct msm_gpu_state *crashstate;
- /* Enable clamping to idle freq when inactive: */
- bool clamp_to_idle;
-
/* True if the hardware supports expanded apriv (a650 and newer) */
bool hw_apriv;
diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index 025940eb08d1..0d7ff7ddc029 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -183,6 +183,7 @@ static bool has_devfreq(struct msm_gpu *gpu)
void msm_devfreq_init(struct msm_gpu *gpu)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
+ struct msm_drm_private *priv = gpu->dev->dev_private;
/* We need target support to do devfreq */
if (!gpu->funcs->gpu_busy)
@@ -209,7 +210,7 @@ void msm_devfreq_init(struct msm_gpu *gpu)
df->devfreq = devm_devfreq_add_device(&gpu->pdev->dev,
&msm_devfreq_profile, DEVFREQ_GOV_SIMPLE_ONDEMAND,
- NULL);
+ &priv->gpu_devfreq_config);
if (IS_ERR(df->devfreq)) {
DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize GPU devfreq\n");
@@ -358,10 +359,11 @@ static void msm_devfreq_idle_work(struct kthread_work *work)
struct msm_gpu_devfreq *df = container_of(work,
struct msm_gpu_devfreq, idle_work.work);
struct msm_gpu *gpu = container_of(df, struct msm_gpu, devfreq);
+ struct msm_drm_private *priv = gpu->dev->dev_private;
df->idle_time = ktime_get();
- if (gpu->clamp_to_idle)
+ if (priv->gpu_clamp_to_idle)
dev_pm_qos_update_request(&df->idle_freq, 0);
}
--
2.38.1
* [PATCH v2 2/3] drm/msm/gpu: Bypass PM QoS constraint for idle clamp
2023-01-10 23:14 [PATCH v2 0/3] drm/msm/gpu: Devfreq fixes+tuning Rob Clark
2023-01-10 23:14 ` [PATCH v2 1/3] drm/msm/gpu: Add devfreq tuning debugfs Rob Clark
@ 2023-01-10 23:14 ` Rob Clark
2023-01-10 23:14 ` [PATCH v2 3/3] drm/msm/gpu: Add default devfreq thresholds Rob Clark
2023-01-13 21:34 ` [PATCH v2 0/3] drm/msm/gpu: Devfreq fixes+tuning Chia-I Wu
3 siblings, 0 replies; 7+ messages in thread
From: Rob Clark @ 2023-01-10 23:14 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
open list
From: Rob Clark <robdclark@chromium.org>
Change idle freq clamping back to the direct method, bypassing PM QoS
requests. The problem with using PM QoS requests is that they
(indirectly) call the governor's ->get_target_freq(), which goes
through a get_dev_status() cycle. When the GPU becomes active again
and we remove the idle-clamp request, we go through another
get_dev_status() cycle covering the period that the GPU has been idle,
which triggers the governor to lower the target freq excessively.
This partially reverts commit 7c0ffcd40b16 ("drm/msm/gpu: Respect PM QoS
constraints"), but preserves the use of the boost QoS request, so that
it will continue to play nicely with other QoS requests such as a
cooling device. It also mostly undoes commit 78f815c1cf8f ("drm/msm:
return the average load over the polling period").
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gpu.h | 12 ++-
drivers/gpu/drm/msm/msm_gpu_devfreq.c | 135 +++++++++++---------------
2 files changed, 65 insertions(+), 82 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index ab110c377916..fc1c0d8611a8 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -109,11 +109,15 @@ struct msm_gpu_devfreq {
struct mutex lock;
/**
- * idle_constraint:
+ * idle_freq:
*
- * A PM QoS constraint to limit max freq while the GPU is idle.
+ * Shadow frequency used while the GPU is idle. From the PoV of
+ * the devfreq governor, we are continuing to sample busyness and
+ * adjust frequency while the GPU is idle, but we use this shadow
+ * value as the GPU is actually clamped to minimum frequency while
+ * it is inactive.
*/
- struct dev_pm_qos_request idle_freq;
+ unsigned long idle_freq;
/**
* boost_constraint:
@@ -135,8 +139,6 @@ struct msm_gpu_devfreq {
/** idle_time: Time of last transition to idle: */
ktime_t idle_time;
- struct devfreq_dev_status average_status;
-
/**
* idle_work:
*
diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index 0d7ff7ddc029..e578d74d402f 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -33,6 +33,16 @@ static int msm_devfreq_target(struct device *dev, unsigned long *freq,
trace_msm_gpu_freq_change(dev_pm_opp_get_freq(opp));
+ /*
+ * If the GPU is idle, devfreq is not aware, so just stash
+ * the new target freq (to use when we return to active)
+ */
+ if (df->idle_freq) {
+ df->idle_freq = *freq;
+ dev_pm_opp_put(opp);
+ return 0;
+ }
+
if (gpu->funcs->gpu_set_freq) {
mutex_lock(&df->lock);
gpu->funcs->gpu_set_freq(gpu, opp, df->suspended);
@@ -48,15 +58,26 @@ static int msm_devfreq_target(struct device *dev, unsigned long *freq,
static unsigned long get_freq(struct msm_gpu *gpu)
{
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+
+ /*
+ * If the GPU is idle, use the shadow/saved freq to avoid
+ * confusing devfreq (which is unaware that we are switching
+ * to lowest freq until the device is active again)
+ */
+ if (df->idle_freq)
+ return df->idle_freq;
+
if (gpu->funcs->gpu_get_freq)
return gpu->funcs->gpu_get_freq(gpu);
return clk_get_rate(gpu->core_clk);
}
-static void get_raw_dev_status(struct msm_gpu *gpu,
+static int msm_devfreq_get_dev_status(struct device *dev,
struct devfreq_dev_status *status)
{
+ struct msm_gpu *gpu = dev_to_gpu(dev);
struct msm_gpu_devfreq *df = &gpu->devfreq;
u64 busy_cycles, busy_time;
unsigned long sample_rate;
@@ -72,7 +93,7 @@ static void get_raw_dev_status(struct msm_gpu *gpu,
if (df->suspended) {
mutex_unlock(&df->lock);
status->busy_time = 0;
- return;
+ return 0;
}
busy_cycles = gpu->funcs->gpu_busy(gpu, &sample_rate);
@@ -87,71 +108,6 @@ static void get_raw_dev_status(struct msm_gpu *gpu,
busy_time = ~0LU;
status->busy_time = busy_time;
-}
-
-static void update_average_dev_status(struct msm_gpu *gpu,
- const struct devfreq_dev_status *raw)
-{
- struct msm_gpu_devfreq *df = &gpu->devfreq;
- const u32 polling_ms = df->devfreq->profile->polling_ms;
- const u32 max_history_ms = polling_ms * 11 / 10;
- struct devfreq_dev_status *avg = &df->average_status;
- u64 avg_freq;
-
- /* simple_ondemand governor interacts poorly with gpu->clamp_to_idle.
- * When we enforce the constraint on idle, it calls get_dev_status
- * which would normally reset the stats. When we remove the
- * constraint on active, it calls get_dev_status again where busy_time
- * would be 0.
- *
- * To remedy this, we always return the average load over the past
- * polling_ms.
- */
-
- /* raw is longer than polling_ms or avg has no history */
- if (div_u64(raw->total_time, USEC_PER_MSEC) >= polling_ms ||
- !avg->total_time) {
- *avg = *raw;
- return;
- }
-
- /* Truncate the oldest history first.
- *
- * Because we keep the history with a single devfreq_dev_status,
- * rather than a list of devfreq_dev_status, we have to assume freq
- * and load are the same over avg->total_time. We can scale down
- * avg->busy_time and avg->total_time by the same factor to drop
- * history.
- */
- if (div_u64(avg->total_time + raw->total_time, USEC_PER_MSEC) >=
- max_history_ms) {
- const u32 new_total_time = polling_ms * USEC_PER_MSEC -
- raw->total_time;
- avg->busy_time = div_u64(
- mul_u32_u32(avg->busy_time, new_total_time),
- avg->total_time);
- avg->total_time = new_total_time;
- }
-
- /* compute the average freq over avg->total_time + raw->total_time */
- avg_freq = mul_u32_u32(avg->current_frequency, avg->total_time);
- avg_freq += mul_u32_u32(raw->current_frequency, raw->total_time);
- do_div(avg_freq, avg->total_time + raw->total_time);
-
- avg->current_frequency = avg_freq;
- avg->busy_time += raw->busy_time;
- avg->total_time += raw->total_time;
-}
-
-static int msm_devfreq_get_dev_status(struct device *dev,
- struct devfreq_dev_status *status)
-{
- struct msm_gpu *gpu = dev_to_gpu(dev);
- struct devfreq_dev_status raw;
-
- get_raw_dev_status(gpu, &raw);
- update_average_dev_status(gpu, &raw);
- *status = gpu->devfreq.average_status;
return 0;
}
@@ -191,9 +147,6 @@ void msm_devfreq_init(struct msm_gpu *gpu)
mutex_init(&df->lock);
- dev_pm_qos_add_request(&gpu->pdev->dev, &df->idle_freq,
- DEV_PM_QOS_MAX_FREQUENCY,
- PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE);
dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
DEV_PM_QOS_MIN_FREQUENCY, 0);
@@ -214,7 +167,6 @@ void msm_devfreq_init(struct msm_gpu *gpu)
if (IS_ERR(df->devfreq)) {
DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize GPU devfreq\n");
- dev_pm_qos_remove_request(&df->idle_freq);
dev_pm_qos_remove_request(&df->boost_freq);
df->devfreq = NULL;
return;
@@ -256,7 +208,6 @@ void msm_devfreq_cleanup(struct msm_gpu *gpu)
devfreq_cooling_unregister(gpu->cooling);
dev_pm_qos_remove_request(&df->boost_freq);
- dev_pm_qos_remove_request(&df->idle_freq);
}
void msm_devfreq_resume(struct msm_gpu *gpu)
@@ -329,6 +280,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
unsigned int idle_time;
+ unsigned long target_freq;
if (!has_devfreq(gpu))
return;
@@ -338,8 +290,28 @@ void msm_devfreq_active(struct msm_gpu *gpu)
*/
cancel_idle_work(df);
+ /*
+ * Hold devfreq lock to synchronize with get_dev_status()/
+ * target() callbacks
+ */
+ mutex_lock(&df->devfreq->lock);
+
+ target_freq = df->idle_freq;
+
idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
+ df->idle_freq = 0;
+
+ /*
+ * We could have become active again before the idle work had a
+ * chance to run, in which case the df->idle_freq would have
+ * still been zero. In this case, no need to change freq.
+ */
+ if (target_freq)
+ msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
+
+ mutex_unlock(&df->devfreq->lock);
+
/*
* If we've been idle for a significant fraction of a polling
* interval, then we won't meet the threshold of busyness for
@@ -348,9 +320,6 @@ void msm_devfreq_active(struct msm_gpu *gpu)
if (idle_time > msm_devfreq_profile.polling_ms) {
msm_devfreq_boost(gpu, 2);
}
-
- dev_pm_qos_update_request(&df->idle_freq,
- PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE);
}
@@ -360,11 +329,23 @@ static void msm_devfreq_idle_work(struct kthread_work *work)
struct msm_gpu_devfreq, idle_work.work);
struct msm_gpu *gpu = container_of(df, struct msm_gpu, devfreq);
struct msm_drm_private *priv = gpu->dev->dev_private;
+ unsigned long idle_freq, target_freq = 0;
- df->idle_time = ktime_get();
+ /*
+ * Hold devfreq lock to synchronize with get_dev_status()/
+ * target() callbacks
+ */
+ mutex_lock(&df->devfreq->lock);
+
+ idle_freq = get_freq(gpu);
if (priv->gpu_clamp_to_idle)
- dev_pm_qos_update_request(&df->idle_freq, 0);
+ msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
+
+ df->idle_time = ktime_get();
+ df->idle_freq = idle_freq;
+
+ mutex_unlock(&df->devfreq->lock);
}
void msm_devfreq_idle(struct msm_gpu *gpu)
--
2.38.1
* [PATCH v2 3/3] drm/msm/gpu: Add default devfreq thresholds
2023-01-10 23:14 [PATCH v2 0/3] drm/msm/gpu: Devfreq fixes+tuning Rob Clark
2023-01-10 23:14 ` [PATCH v2 1/3] drm/msm/gpu: Add devfreq tuning debugfs Rob Clark
2023-01-10 23:14 ` [PATCH v2 2/3] drm/msm/gpu: Bypass PM QoS constraint for idle clamp Rob Clark
@ 2023-01-10 23:14 ` Rob Clark
2023-01-13 21:34 ` [PATCH v2 0/3] drm/msm/gpu: Devfreq fixes+tuning Chia-I Wu
3 siblings, 0 replies; 7+ messages in thread
From: Rob Clark @ 2023-01-10 23:14 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
open list
From: Rob Clark <robdclark@chromium.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gpu_devfreq.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index e578d74d402f..1f31e72ca0cf 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -145,6 +145,15 @@ void msm_devfreq_init(struct msm_gpu *gpu)
if (!gpu->funcs->gpu_busy)
return;
+ /*
+ * Setup default values for simple_ondemand governor tuning. We
+ * want to throttle up at 50% load for the double-buffer case,
+ * where due to stalling waiting for vblank we could get stuck
+ * at (for ex) 30fps at 50% utilization.
+ */
+ priv->gpu_devfreq_config.upthreshold = 50;
+ priv->gpu_devfreq_config.downdifferential = 10;
+
mutex_init(&df->lock);
dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
--
2.38.1
* Re: [PATCH v2 0/3] drm/msm/gpu: Devfreq fixes+tuning
2023-01-10 23:14 [PATCH v2 0/3] drm/msm/gpu: Devfreq fixes+tuning Rob Clark
2023-01-10 23:14 ` [PATCH v2 3/3] drm/msm/gpu: Add default devfreq thresholds Rob Clark
@ 2023-01-13 21:34 ` Chia-I Wu
3 siblings, 0 replies; 7+ messages in thread
From: Chia-I Wu @ 2023-01-13 21:34 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, freedreno, linux-arm-msm, Rob Clark, Akhil P Oommen,
Dmitry Baryshkov, Douglas Anderson, Konrad Dybcio, open list,
Sean Paul
Series is
Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
On Tue, Jan 10, 2023 at 3:14 PM Rob Clark <robdclark@gmail.com> wrote:
>
> From: Rob Clark <robdclark@chromium.org>
>
> Rob Clark (3):
> drm/msm/gpu: Add devfreq tuning debugfs
> drm/msm/gpu: Bypass PM QoS constraint for idle clamp
> drm/msm/gpu: Add default devfreq thresholds
>
> drivers/gpu/drm/msm/Kconfig | 1 +
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
> drivers/gpu/drm/msm/msm_debugfs.c | 12 +++
> drivers/gpu/drm/msm/msm_drv.h | 9 ++
> drivers/gpu/drm/msm/msm_gpu.h | 15 ++-
> drivers/gpu/drm/msm/msm_gpu_devfreq.c | 148 ++++++++++++--------------
> 6 files changed, 100 insertions(+), 87 deletions(-)
>
> --
> 2.38.1
>
* Re: [PATCH v2 1/3] drm/msm/gpu: Add devfreq tuning debugfs
2023-01-10 23:14 ` [PATCH v2 1/3] drm/msm/gpu: Add devfreq tuning debugfs Rob Clark
@ 2023-01-23 12:38 ` Krzysztof Kozlowski
2023-01-23 15:38 ` Rob Clark
0 siblings, 1 reply; 7+ messages in thread
From: Krzysztof Kozlowski @ 2023-01-23 12:38 UTC (permalink / raw)
To: Rob Clark, dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
Akhil P Oommen, Chia-I Wu, Douglas Anderson, Konrad Dybcio,
open list, Bjorn Andersson
On 11/01/2023 00:14, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Make the handful of tuning knobs visible via debugfs.
>
> v2: select DEVFREQ_GOV_SIMPLE_ONDEMAND because for some reason
> struct devfreq_simple_ondemand_data depends on this
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
For some reason this was merged even though kbuild had earlier reported
a build failure. This breaks linux-next and the qcom defconfig. Please
drop the patch from linux-next.
See earlier build issues reported:
https://lore.kernel.org/all/202301130108.fslQjvJ8-lkp@intel.com/
Best regards,
Krzysztof
* Re: [PATCH v2 1/3] drm/msm/gpu: Add devfreq tuning debugfs
2023-01-23 12:38 ` Krzysztof Kozlowski
@ 2023-01-23 15:38 ` Rob Clark
0 siblings, 0 replies; 7+ messages in thread
From: Rob Clark @ 2023-01-23 15:38 UTC (permalink / raw)
To: Krzysztof Kozlowski
Cc: Rob Clark, dri-devel, freedreno, linux-arm-msm, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
Akhil P Oommen, Chia-I Wu, Douglas Anderson, Konrad Dybcio,
open list, Bjorn Andersson
On Mon, Jan 23, 2023 at 4:38 AM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>
> On 11/01/2023 00:14, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Make the handful of tuning knobs visible via debugfs.
> >
> > v2: select DEVFREQ_GOV_SIMPLE_ONDEMAND because for some reason
> > struct devfreq_simple_ondemand_data depends on this
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
>
> For some reason this was merged even though earlier kbuild reported
> build failure. This breaks linux next and qcom defconfig. Please drop
> the patch from the linux next.
>
> See earlier build issues reported:
> https://lore.kernel.org/all/202301130108.fslQjvJ8-lkp@intel.com/
>
This will fix it: https://patchwork.freedesktop.org/series/113232/
BR,
-R