* [PATCH V3 0/2] firmware: arm_scmi: Register and handle limits change notification @ 2024-02-27 18:16 Sibi Sankar 2024-02-27 18:16 ` [PATCH V3 1/2] cpufreq: Export cpufreq_update_pressure Sibi Sankar 2024-02-27 18:16 ` [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications Sibi Sankar 0 siblings, 2 replies; 16+ messages in thread From: Sibi Sankar @ 2024-02-27 18:16 UTC (permalink / raw) To: sudeep.holla, cristian.marussi, rafael, viresh.kumar, morten.rasmussen, dietmar.eggemann, lukasz.luba, pierre.gondois Cc: linux-arm-kernel, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm, Sibi Sankar This series registers for scmi limits change notifications to determine the throttled frequency and apply HW pressure. V3: * Sanitize range_max received from the notifier. [Pierre] * Drop patches 1/2 from v2. [Cristian] * Update commit message in patch 2. V2: * Rename opp_xlate -> freq_xlate [Viresh] * Export cpufreq_update_pressure and use it directly [Lukasz] Depends on: HW pressure v5: https://patchwork.kernel.org/project/linux-arm-msm/cover/20240220145947.1107937-1-vincent.guittot@linaro.org/ boost frequency support: https://patchwork.kernel.org/project/linux-arm-msm/cover/20240227173434.650334-1-quic_sibis@quicinc.com/ Sibi Sankar (2): cpufreq: Export cpufreq_update_pressure cpufreq: scmi: Register for limit change notifications drivers/cpufreq/cpufreq.c | 3 ++- drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- include/linux/cpufreq.h | 2 ++ 3 files changed, 32 insertions(+), 2 deletions(-) -- 2.34.1 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH V3 1/2] cpufreq: Export cpufreq_update_pressure 2024-02-27 18:16 [PATCH V3 0/2] firmware: arm_scmi: Register and handle limits change notification Sibi Sankar @ 2024-02-27 18:16 ` Sibi Sankar 2024-02-27 19:32 ` Trilok Soni 2024-02-27 18:16 ` [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications Sibi Sankar 1 sibling, 1 reply; 16+ messages in thread From: Sibi Sankar @ 2024-02-27 18:16 UTC (permalink / raw) To: sudeep.holla, cristian.marussi, rafael, viresh.kumar, morten.rasmussen, dietmar.eggemann, lukasz.luba, pierre.gondois Cc: linux-arm-kernel, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm, Sibi Sankar The SCMI cpufreq driver doesn't require any additional signal smoothing provided by arch_update_hw_pressure interface, export cpufreq_update_pressure so that it can be called upon directly instead. Suggested-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> --- drivers/cpufreq/cpufreq.c | 3 ++- include/linux/cpufreq.h | 2 ++ 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 76002aa3d12d..bdec2dfd77eb 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -2573,7 +2573,7 @@ DEFINE_PER_CPU(unsigned long, cpufreq_pressure); * * Update the value of cpufreq pressure for all @cpus in the policy. */ -static void cpufreq_update_pressure(struct cpufreq_policy *policy) +void cpufreq_update_pressure(struct cpufreq_policy *policy) { unsigned long max_capacity, capped_freq, pressure; u32 max_freq; @@ -2598,6 +2598,7 @@ static void cpufreq_update_pressure(struct cpufreq_policy *policy) for_each_cpu(cpu, policy->related_cpus) WRITE_ONCE(per_cpu(cpufreq_pressure, cpu), pressure); } +EXPORT_SYMBOL(cpufreq_update_pressure); /** * cpufreq_set_policy - Modify cpufreq policy parameters. diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 414bfc976b30..957bf8e4ca0d 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -241,6 +241,7 @@ struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy); void cpufreq_enable_fast_switch(struct cpufreq_policy *policy); void cpufreq_disable_fast_switch(struct cpufreq_policy *policy); bool has_target_index(void); +void cpufreq_update_pressure(struct cpufreq_policy *policy); DECLARE_PER_CPU(unsigned long, cpufreq_pressure); static inline unsigned long cpufreq_get_pressure(int cpu) @@ -270,6 +271,7 @@ static inline bool cpufreq_supports_freq_invariance(void) } static inline void disable_cpufreq(void) { } static inline void cpufreq_update_limits(unsigned int cpu) { } +static inline void cpufreq_update_pressure(struct cpufreq_policy *policy) { } static inline unsigned long cpufreq_get_pressure(int cpu) { return 0; -- 2.34.1 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH V3 1/2] cpufreq: Export cpufreq_update_pressure 2024-02-27 18:16 ` [PATCH V3 1/2] cpufreq: Export cpufreq_update_pressure Sibi Sankar @ 2024-02-27 19:32 ` Trilok Soni 2024-02-28 5:16 ` Sibi Sankar 0 siblings, 1 reply; 16+ messages in thread From: Trilok Soni @ 2024-02-27 19:32 UTC (permalink / raw) To: Sibi Sankar, sudeep.holla, cristian.marussi, rafael, viresh.kumar, morten.rasmussen, dietmar.eggemann, lukasz.luba, pierre.gondois Cc: linux-arm-kernel, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/27/2024 10:16 AM, Sibi Sankar wrote: > The SCMI cpufreq driver doesn't require any additional signal > smoothing provided by arch_update_hw_pressure interface, export > cpufreq_update_pressure so that it can be called upon directly > instead. > > Suggested-by: Lukasz Luba <lukasz.luba@arm.com> > Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> > --- > drivers/cpufreq/cpufreq.c | 3 ++- > include/linux/cpufreq.h | 2 ++ > 2 files changed, 4 insertions(+), 1 deletion(-) > > diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c > index 76002aa3d12d..bdec2dfd77eb 100644 > --- a/drivers/cpufreq/cpufreq.c > +++ b/drivers/cpufreq/cpufreq.c > @@ -2573,7 +2573,7 @@ DEFINE_PER_CPU(unsigned long, cpufreq_pressure); > * > * Update the value of cpufreq pressure for all @cpus in the policy. > */ > -static void cpufreq_update_pressure(struct cpufreq_policy *policy) > +void cpufreq_update_pressure(struct cpufreq_policy *policy) > { > unsigned long max_capacity, capped_freq, pressure; > u32 max_freq; > @@ -2598,6 +2598,7 @@ static void cpufreq_update_pressure(struct cpufreq_policy *policy) > for_each_cpu(cpu, policy->related_cpus) > WRITE_ONCE(per_cpu(cpufreq_pressure, cpu), pressure); > } > +EXPORT_SYMBOL(cpufreq_update_pressure); EXPORT_SYMBOL_GPL please. Other symbols in this file are _GPL as well. > > /** > * cpufreq_set_policy - Modify cpufreq policy parameters. > diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h > index 414bfc976b30..957bf8e4ca0d 100644 > --- a/include/linux/cpufreq.h > +++ b/include/linux/cpufreq.h > @@ -241,6 +241,7 @@ struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy); > void cpufreq_enable_fast_switch(struct cpufreq_policy *policy); > void cpufreq_disable_fast_switch(struct cpufreq_policy *policy); > bool has_target_index(void); > +void cpufreq_update_pressure(struct cpufreq_policy *policy); > > DECLARE_PER_CPU(unsigned long, cpufreq_pressure); > static inline unsigned long cpufreq_get_pressure(int cpu) > @@ -270,6 +271,7 @@ static inline bool cpufreq_supports_freq_invariance(void) > } > static inline void disable_cpufreq(void) { } > static inline void cpufreq_update_limits(unsigned int cpu) { } > +static inline void cpufreq_update_pressure(struct cpufreq_policy *policy) { } > static inline unsigned long cpufreq_get_pressure(int cpu) > { > return 0; -- ---Trilok Soni _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH V3 1/2] cpufreq: Export cpufreq_update_pressure 2024-02-27 19:32 ` Trilok Soni @ 2024-02-28 5:16 ` Sibi Sankar 0 siblings, 0 replies; 16+ messages in thread From: Sibi Sankar @ 2024-02-28 5:16 UTC (permalink / raw) To: Trilok Soni, sudeep.holla, cristian.marussi, rafael, viresh.kumar, morten.rasmussen, dietmar.eggemann, lukasz.luba, pierre.gondois Cc: linux-arm-kernel, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/28/24 01:02, Trilok Soni wrote: > On 2/27/2024 10:16 AM, Sibi Sankar wrote: >> The SCMI cpufreq driver doesn't require any additional signal >> smoothing provided by arch_update_hw_pressure interface, export >> cpufreq_update_pressure so that it can be called upon directly >> instead. >> >> Suggested-by: Lukasz Luba <lukasz.luba@arm.com> >> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >> --- >> drivers/cpufreq/cpufreq.c | 3 ++- >> include/linux/cpufreq.h | 2 ++ >> 2 files changed, 4 insertions(+), 1 deletion(-) >> >> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c >> index 76002aa3d12d..bdec2dfd77eb 100644 >> --- a/drivers/cpufreq/cpufreq.c >> +++ b/drivers/cpufreq/cpufreq.c >> @@ -2573,7 +2573,7 @@ DEFINE_PER_CPU(unsigned long, cpufreq_pressure); >> * >> * Update the value of cpufreq pressure for all @cpus in the policy. >> */ >> -static void cpufreq_update_pressure(struct cpufreq_policy *policy) >> +void cpufreq_update_pressure(struct cpufreq_policy *policy) >> { >> unsigned long max_capacity, capped_freq, pressure; >> u32 max_freq; >> @@ -2598,6 +2598,7 @@ static void cpufreq_update_pressure(struct cpufreq_policy *policy) >> for_each_cpu(cpu, policy->related_cpus) >> WRITE_ONCE(per_cpu(cpufreq_pressure, cpu), pressure); >> } >> +EXPORT_SYMBOL(cpufreq_update_pressure); > > EXPORT_SYMBOL_GPL please. Other symbols in this file are _GPL as well. Hey Trilok, Thanks for catching this. Will fix it in the re-spin. -Sibi > >> >> /** >> * cpufreq_set_policy - Modify cpufreq policy parameters. >> diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h >> index 414bfc976b30..957bf8e4ca0d 100644 >> --- a/include/linux/cpufreq.h >> +++ b/include/linux/cpufreq.h >> @@ -241,6 +241,7 @@ struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy); >> void cpufreq_enable_fast_switch(struct cpufreq_policy *policy); >> void cpufreq_disable_fast_switch(struct cpufreq_policy *policy); >> bool has_target_index(void); >> +void cpufreq_update_pressure(struct cpufreq_policy *policy); >> >> DECLARE_PER_CPU(unsigned long, cpufreq_pressure); >> static inline unsigned long cpufreq_get_pressure(int cpu) >> @@ -270,6 +271,7 @@ static inline bool cpufreq_supports_freq_invariance(void) >> } >> static inline void disable_cpufreq(void) { } >> static inline void cpufreq_update_limits(unsigned int cpu) { } >> +static inline void cpufreq_update_pressure(struct cpufreq_policy *policy) { } >> static inline unsigned long cpufreq_get_pressure(int cpu) >> { >> return 0; _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
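For reference, the re-spin agreed above would presumably just flip the export macro; a minimal sketch of that one-line change against the hunk quoted earlier (illustrative only, not an actual posted patch):

 	for_each_cpu(cpu, policy->related_cpus)
 		WRITE_ONCE(per_cpu(cpufreq_pressure, cpu), pressure);
 }
-EXPORT_SYMBOL(cpufreq_update_pressure);
+EXPORT_SYMBOL_GPL(cpufreq_update_pressure);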
* [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-27 18:16 [PATCH V3 0/2] firmware: arm_scmi: Register and handle limits change notification Sibi Sankar 2024-02-27 18:16 ` [PATCH V3 1/2] cpufreq: Export cpufreq_update_pressure Sibi Sankar @ 2024-02-27 18:16 ` Sibi Sankar 2024-02-28 13:24 ` Lukasz Luba 1 sibling, 1 reply; 16+ messages in thread From: Sibi Sankar @ 2024-02-27 18:16 UTC (permalink / raw) To: sudeep.holla, cristian.marussi, rafael, viresh.kumar, morten.rasmussen, dietmar.eggemann, lukasz.luba, pierre.gondois Cc: linux-arm-kernel, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm, Sibi Sankar Register for limit change notifications if supported and use the throttled frequency from the notification to apply HW pressure. Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> --- v3: * Sanitize range_max received from the notifier. [Pierre] * Update commit message. drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- 1 file changed, 28 insertions(+), 1 deletion(-) diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c index 76a0ddbd9d24..78b87b72962d 100644 --- a/drivers/cpufreq/scmi-cpufreq.c +++ b/drivers/cpufreq/scmi-cpufreq.c @@ -25,9 +25,13 @@ struct scmi_data { int domain_id; int nr_opp; struct device *cpu_dev; + struct cpufreq_policy *policy; cpumask_var_t opp_shared_cpus; + struct notifier_block limit_notify_nb; }; +const struct scmi_handle *handle; +static struct scmi_device *scmi_dev; static struct scmi_protocol_handle *ph; static const struct scmi_perf_proto_ops *perf_ops; static struct cpufreq_driver scmi_cpufreq_driver; @@ -151,6 +155,20 @@ static struct freq_attr *scmi_cpufreq_hw_attr[] = { NULL, }; +static int scmi_limit_notify_cb(struct notifier_block *nb, unsigned long event, void *data) +{ + struct scmi_data *priv = container_of(nb, struct scmi_data, limit_notify_nb); + struct scmi_perf_limits_report *limit_notify = data; + struct cpufreq_policy *policy = priv->policy; + + policy->max = clamp(limit_notify->range_max_freq/HZ_PER_KHZ, policy->cpuinfo.min_freq, + policy->cpuinfo.max_freq); + + cpufreq_update_pressure(policy); + + return NOTIFY_OK; +} + static int scmi_cpufreq_init(struct cpufreq_policy *policy) { int ret, nr_opp, domain; @@ -269,6 +287,15 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy) } } + priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb; + ret = handle->notify_ops->devm_event_notifier_register(scmi_dev, SCMI_PROTOCOL_PERF, + SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED, + &domain, + &priv->limit_notify_nb); + if (ret) + dev_warn(cpu_dev, + "failed to register for limits change notifier for domain %d\n", domain); + priv->policy = policy; return 0; @@ -342,8 +369,8 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev) { int ret; struct device *dev = &sdev->dev; - const struct scmi_handle *handle; + scmi_dev = sdev; handle = sdev->handle; if (!handle) -- 2.34.1 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-27 18:16 ` [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications Sibi Sankar @ 2024-02-28 13:24 ` Lukasz Luba 2024-02-28 17:00 ` Sibi Sankar 0 siblings, 1 reply; 16+ messages in thread From: Lukasz Luba @ 2024-02-28 13:24 UTC (permalink / raw) To: Sibi Sankar Cc: linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, cristian.marussi, sudeep.holla, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/27/24 18:16, Sibi Sankar wrote: > Register for limit change notifications if supported and use the throttled > frequency from the notification to apply HW pressure. > > Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> > --- > > v3: > * Sanitize range_max received from the notifier. [Pierre] > * Update commit message. > > drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- > 1 file changed, 28 insertions(+), 1 deletion(-) > > diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c > index 76a0ddbd9d24..78b87b72962d 100644 > --- a/drivers/cpufreq/scmi-cpufreq.c > +++ b/drivers/cpufreq/scmi-cpufreq.c > @@ -25,9 +25,13 @@ struct scmi_data { > int domain_id; > int nr_opp; > struct device *cpu_dev; > + struct cpufreq_policy *policy; > cpumask_var_t opp_shared_cpus; > + struct notifier_block limit_notify_nb; > }; > > +const struct scmi_handle *handle; > +static struct scmi_device *scmi_dev; > static struct scmi_protocol_handle *ph; > static const struct scmi_perf_proto_ops *perf_ops; > static struct cpufreq_driver scmi_cpufreq_driver; > @@ -151,6 +155,20 @@ static struct freq_attr *scmi_cpufreq_hw_attr[] = { > NULL, > }; > > +static int scmi_limit_notify_cb(struct notifier_block *nb, unsigned long event, void *data) > +{ > + struct scmi_data *priv = container_of(nb, struct scmi_data, limit_notify_nb); > + struct scmi_perf_limits_report *limit_notify = data; > + struct cpufreq_policy *policy = priv->policy; > + > + policy->max = clamp(limit_notify->range_max_freq/HZ_PER_KHZ, policy->cpuinfo.min_freq, > + policy->cpuinfo.max_freq); Please take the division operation out of this clamp() call, somewhere above. Currently it 'blurs' these stuff, while it's important convertion to khz. You can call it e.g.: limit_freq_khz = limit_notify->range_max_freq / HZ_PER_KHZ; then use in clamp(limit_freq_khz, ...) > + > + cpufreq_update_pressure(policy); > + > + return NOTIFY_OK; > +} > + > static int scmi_cpufreq_init(struct cpufreq_policy *policy) > { > int ret, nr_opp, domain; > @@ -269,6 +287,15 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy) > } > } > > + priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb; > + ret = handle->notify_ops->devm_event_notifier_register(scmi_dev, SCMI_PROTOCOL_PERF, > + SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED, > + &domain, > + &priv->limit_notify_nb); > + if (ret) > + dev_warn(cpu_dev, > + "failed to register for limits change notifier for domain %d\n", domain); > + > priv->policy = policy; > > return 0; > @@ -342,8 +369,8 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev) > { > int ret; > struct device *dev = &sdev->dev; > - const struct scmi_handle *handle; It should be a compilation error... > > + scmi_dev = sdev; > handle = sdev->handle; due to usage here, wasn't it? 
> > if (!handle) _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
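For illustration, a minimal sketch of the callback with the kHz conversion pulled out of clamp() as suggested above; the local variable name and its type are assumptions about the eventual re-spin, not code from the posted series:

static int scmi_limit_notify_cb(struct notifier_block *nb, unsigned long event, void *data)
{
	struct scmi_data *priv = container_of(nb, struct scmi_data, limit_notify_nb);
	struct scmi_perf_limits_report *limit_notify = data;
	struct cpufreq_policy *policy = priv->policy;
	unsigned int limit_freq_khz;

	/* Make the Hz -> kHz conversion explicit before sanitizing the value. */
	limit_freq_khz = limit_notify->range_max_freq / HZ_PER_KHZ;

	/* Keep the new cap within the frequencies this policy knows about. */
	policy->max = clamp(limit_freq_khz, policy->cpuinfo.min_freq,
			    policy->cpuinfo.max_freq);

	cpufreq_update_pressure(policy);

	return NOTIFY_OK;
}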
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-28 13:24 ` Lukasz Luba @ 2024-02-28 17:00 ` Sibi Sankar 2024-02-29 9:59 ` Lukasz Luba 0 siblings, 1 reply; 16+ messages in thread From: Sibi Sankar @ 2024-02-28 17:00 UTC (permalink / raw) To: Lukasz Luba Cc: linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, cristian.marussi, sudeep.holla, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/28/24 18:54, Lukasz Luba wrote: > > > On 2/27/24 18:16, Sibi Sankar wrote: >> Register for limit change notifications if supported and use the >> throttled >> frequency from the notification to apply HW pressure. Lukasz, Thanks for taking time to review the series! >> >> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >> --- >> >> v3: >> * Sanitize range_max received from the notifier. [Pierre] >> * Update commit message. >> >> drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- >> 1 file changed, 28 insertions(+), 1 deletion(-) >> >> diff --git a/drivers/cpufreq/scmi-cpufreq.c >> b/drivers/cpufreq/scmi-cpufreq.c >> index 76a0ddbd9d24..78b87b72962d 100644 >> --- a/drivers/cpufreq/scmi-cpufreq.c >> +++ b/drivers/cpufreq/scmi-cpufreq.c >> @@ -25,9 +25,13 @@ struct scmi_data { >> int domain_id; >> int nr_opp; >> struct device *cpu_dev; >> + struct cpufreq_policy *policy; >> cpumask_var_t opp_shared_cpus; >> + struct notifier_block limit_notify_nb; >> }; >> +const struct scmi_handle *handle; >> +static struct scmi_device *scmi_dev; >> static struct scmi_protocol_handle *ph; >> static const struct scmi_perf_proto_ops *perf_ops; >> static struct cpufreq_driver scmi_cpufreq_driver; >> @@ -151,6 +155,20 @@ static struct freq_attr *scmi_cpufreq_hw_attr[] = { >> NULL, >> }; >> +static int scmi_limit_notify_cb(struct notifier_block *nb, unsigned >> long event, void *data) >> +{ >> + struct scmi_data *priv = container_of(nb, struct scmi_data, >> limit_notify_nb); >> + struct scmi_perf_limits_report *limit_notify = data; >> + struct cpufreq_policy *policy = priv->policy; >> + >> + policy->max = clamp(limit_notify->range_max_freq/HZ_PER_KHZ, >> policy->cpuinfo.min_freq, >> + policy->cpuinfo.max_freq); > > Please take the division operation out of this clamp() call, somewhere > above. Currently it 'blurs' these stuff, while it's important convertion > to khz. You can call it e.g.: > > limit_freq_khz = limit_notify->range_max_freq / HZ_PER_KHZ; > > then use in clamp(limit_freq_khz, ...) ack > >> + >> + cpufreq_update_pressure(policy); >> + >> + return NOTIFY_OK; >> +} >> + >> static int scmi_cpufreq_init(struct cpufreq_policy *policy) >> { >> int ret, nr_opp, domain; >> @@ -269,6 +287,15 @@ static int scmi_cpufreq_init(struct >> cpufreq_policy *policy) >> } >> } >> + priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb; >> + ret = handle->notify_ops->devm_event_notifier_register(scmi_dev, >> SCMI_PROTOCOL_PERF, >> + SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED, >> + &domain, >> + &priv->limit_notify_nb); >> + if (ret) >> + dev_warn(cpu_dev, >> + "failed to register for limits change notifier for >> domain %d\n", domain); >> + >> priv->policy = policy; >> return 0; >> @@ -342,8 +369,8 @@ static int scmi_cpufreq_probe(struct scmi_device >> *sdev) >> { >> int ret; >> struct device *dev = &sdev->dev; >> - const struct scmi_handle *handle; > > It should be a compilation error... > >> + scmi_dev = sdev; >> handle = sdev->handle; > > due to usage here, wasn't it? Not really, isn't it getting the first initialization here? 
Are there any compiler options that I need to turn on to catch these? -Sibi > >> if (!handle) _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-28 17:00 ` Sibi Sankar @ 2024-02-29 9:59 ` Lukasz Luba 2024-02-29 10:22 ` Lukasz Luba 0 siblings, 1 reply; 16+ messages in thread From: Lukasz Luba @ 2024-02-29 9:59 UTC (permalink / raw) To: Sibi Sankar Cc: linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, cristian.marussi, sudeep.holla, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/28/24 17:00, Sibi Sankar wrote: > > > On 2/28/24 18:54, Lukasz Luba wrote: >> >> >> On 2/27/24 18:16, Sibi Sankar wrote: >>> Register for limit change notifications if supported and use the >>> throttled >>> frequency from the notification to apply HW pressure. > > Lukasz, > > Thanks for taking time to review the series! > >>> >>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >>> --- >>> >>> v3: >>> * Sanitize range_max received from the notifier. [Pierre] >>> * Update commit message. >>> >>> drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- >>> 1 file changed, 28 insertions(+), 1 deletion(-) >>> >>> diff --git a/drivers/cpufreq/scmi-cpufreq.c >>> b/drivers/cpufreq/scmi-cpufreq.c >>> index 76a0ddbd9d24..78b87b72962d 100644 >>> --- a/drivers/cpufreq/scmi-cpufreq.c >>> +++ b/drivers/cpufreq/scmi-cpufreq.c >>> @@ -25,9 +25,13 @@ struct scmi_data { >>> int domain_id; >>> int nr_opp; >>> struct device *cpu_dev; >>> + struct cpufreq_policy *policy; >>> cpumask_var_t opp_shared_cpus; >>> + struct notifier_block limit_notify_nb; >>> }; >>> +const struct scmi_handle *handle; I've missed this bit here. >>> +static struct scmi_device *scmi_dev; >>> static struct scmi_protocol_handle *ph; >>> static const struct scmi_perf_proto_ops *perf_ops; >>> static struct cpufreq_driver scmi_cpufreq_driver; >>> @@ -151,6 +155,20 @@ static struct freq_attr *scmi_cpufreq_hw_attr[] = { >>> NULL, >>> }; >>> +static int scmi_limit_notify_cb(struct notifier_block *nb, unsigned >>> long event, void *data) >>> +{ >>> + struct scmi_data *priv = container_of(nb, struct scmi_data, >>> limit_notify_nb); >>> + struct scmi_perf_limits_report *limit_notify = data; >>> + struct cpufreq_policy *policy = priv->policy; >>> + >>> + policy->max = clamp(limit_notify->range_max_freq/HZ_PER_KHZ, >>> policy->cpuinfo.min_freq, >>> + policy->cpuinfo.max_freq); >> >> Please take the division operation out of this clamp() call, somewhere >> above. Currently it 'blurs' these stuff, while it's important convertion >> to khz. You can call it e.g.: >> >> limit_freq_khz = limit_notify->range_max_freq / HZ_PER_KHZ; >> >> then use in clamp(limit_freq_khz, ...) 
> > ack > >> >>> + >>> + cpufreq_update_pressure(policy); >>> + >>> + return NOTIFY_OK; >>> +} >>> + >>> static int scmi_cpufreq_init(struct cpufreq_policy *policy) >>> { >>> int ret, nr_opp, domain; >>> @@ -269,6 +287,15 @@ static int scmi_cpufreq_init(struct >>> cpufreq_policy *policy) >>> } >>> } >>> + priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb; >>> + ret = handle->notify_ops->devm_event_notifier_register(scmi_dev, >>> SCMI_PROTOCOL_PERF, >>> + SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED, >>> + &domain, >>> + &priv->limit_notify_nb); >>> + if (ret) >>> + dev_warn(cpu_dev, >>> + "failed to register for limits change notifier for >>> domain %d\n", domain); >>> + >>> priv->policy = policy; >>> return 0; >>> @@ -342,8 +369,8 @@ static int scmi_cpufreq_probe(struct scmi_device >>> *sdev) >>> { >>> int ret; >>> struct device *dev = &sdev->dev; >>> - const struct scmi_handle *handle; >> >> It should be a compilation error... >> >>> + scmi_dev = sdev; >>> handle = sdev->handle; >> >> due to usage here, wasn't it? > > Not really, isn't it getting the first initialization here? > Are there any compiler options that I need to turn on to > catch these? Yes, you're right, my apologies for confusion. I couldn't apply the series due issues in two patch sets in your dependency list. Now when I have been manually applying the changes I spotted it. _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-29 9:59 ` Lukasz Luba @ 2024-02-29 10:22 ` Lukasz Luba 2024-02-29 11:28 ` Cristian Marussi 0 siblings, 1 reply; 16+ messages in thread From: Lukasz Luba @ 2024-02-29 10:22 UTC (permalink / raw) To: Sibi Sankar, sudeep.holla, cristian.marussi Cc: linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/29/24 09:59, Lukasz Luba wrote: > > > On 2/28/24 17:00, Sibi Sankar wrote: >> >> >> On 2/28/24 18:54, Lukasz Luba wrote: >>> >>> >>> On 2/27/24 18:16, Sibi Sankar wrote: >>>> Register for limit change notifications if supported and use the >>>> throttled >>>> frequency from the notification to apply HW pressure. >> >> Lukasz, >> >> Thanks for taking time to review the series! >> >>>> >>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >>>> --- >>>> >>>> v3: >>>> * Sanitize range_max received from the notifier. [Pierre] >>>> * Update commit message. >>>> >>>> drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- >>>> 1 file changed, 28 insertions(+), 1 deletion(-) >>>> >>>> diff --git a/drivers/cpufreq/scmi-cpufreq.c >>>> b/drivers/cpufreq/scmi-cpufreq.c >>>> index 76a0ddbd9d24..78b87b72962d 100644 >>>> --- a/drivers/cpufreq/scmi-cpufreq.c >>>> +++ b/drivers/cpufreq/scmi-cpufreq.c >>>> @@ -25,9 +25,13 @@ struct scmi_data { >>>> int domain_id; >>>> int nr_opp; >>>> struct device *cpu_dev; >>>> + struct cpufreq_policy *policy; >>>> cpumask_var_t opp_shared_cpus; >>>> + struct notifier_block limit_notify_nb; >>>> }; >>>> +const struct scmi_handle *handle; > > I've missed this bit here. So for this change we actually have to ask Cristian or Sudeep because I'm not sure if we have only one 'handle' instance for all cpufreq devices. If we have different 'handle' we cannot move it to the global single pointer. Sudeep, Cristian what do you think? _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-29 10:22 ` Lukasz Luba @ 2024-02-29 11:28 ` Cristian Marussi 2024-02-29 11:45 ` Lukasz Luba 0 siblings, 1 reply; 16+ messages in thread From: Cristian Marussi @ 2024-02-29 11:28 UTC (permalink / raw) To: Lukasz Luba Cc: Sibi Sankar, sudeep.holla, linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote: > > > On 2/29/24 09:59, Lukasz Luba wrote: > > > > > > On 2/28/24 17:00, Sibi Sankar wrote: > > > > > > > > > On 2/28/24 18:54, Lukasz Luba wrote: > > > > > > > > > > > > On 2/27/24 18:16, Sibi Sankar wrote: > > > > > Register for limit change notifications if supported and use > > > > > the throttled > > > > > frequency from the notification to apply HW pressure. > > > > > > Lukasz, > > > > > > Thanks for taking time to review the series! > > > > > > > > > > > > > Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> > > > > > --- > > > > > > > > > > v3: > > > > > * Sanitize range_max received from the notifier. [Pierre] > > > > > * Update commit message. > > > > > > > > > > drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- > > > > > 1 file changed, 28 insertions(+), 1 deletion(-) > > > > > > > > > > diff --git a/drivers/cpufreq/scmi-cpufreq.c > > > > > b/drivers/cpufreq/scmi-cpufreq.c > > > > > index 76a0ddbd9d24..78b87b72962d 100644 > > > > > --- a/drivers/cpufreq/scmi-cpufreq.c > > > > > +++ b/drivers/cpufreq/scmi-cpufreq.c > > > > > @@ -25,9 +25,13 @@ struct scmi_data { > > > > > int domain_id; > > > > > int nr_opp; > > > > > struct device *cpu_dev; > > > > > + struct cpufreq_policy *policy; > > > > > cpumask_var_t opp_shared_cpus; > > > > > + struct notifier_block limit_notify_nb; > > > > > }; > > > > > +const struct scmi_handle *handle; > > > > I've missed this bit here. > > So for this change we actually have to ask Cristian or Sudeep > because I'm not sure if we have only one 'handle' instance > for all cpufreq devices. > > If we have different 'handle' we cannot move it to the > global single pointer. > > Sudeep, Cristian what do you think? I was just replying noticing this :D .... since SCMI drivers can be probed multiple times IF you defined multiple scmi top nodes in your DT containing the same protocol nodes, they receive a distinct sdev/handle/ph for each probe...so any attempt to globalize these wont work...BUT... ...this is a bit of a weird setup BUT it is not against the spec and it can be used to parallelize more the SCMI accesses to disjont set of resources within the same protocol (a long story here...) AND this type of setup is something that it is already used by some other colleagues of Sibi working on a different line of products (AFAIK)... So, for these reasons, usually, all the other SCMI drivers have per-instance non-global references to handle/sdev/ph.... ...having said that, thought, looking at the structure of CPUFReq drivers, I am not sure that they can stand such a similar setup where multiple instances of this same driver are probed .... indeed the existent *ph refs above is already global....so it wont already work anyway in case of multiple instances now... ...and if I look at how CPUFreq expects the signature of scmi_cpufreq_get_rate() to be annd how it is implemented now using the global *ph reference, it is clearly already not working cleanly on a multi-instance setup... 
...now...I can imagine how to (maybe) fix the above removing the globals and fixing this, BUT the question, more generally, is CPUFreq supposed to work at all in this multi-probed mode of operation? Does it even make sense to be able to support this in CPUFREQ? (as an example in cpufreq.c there is a static global cpufreq_driver pointing to the arch-specific configured driver BUT that also holds some .driver_data AND that clearly won't be instance specific if you probe multiple times and register with CPUFreq multiple times...) More questions than answers here :D Thanks, Cristian _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
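Purely as an illustration of the "remove the globals" direction mentioned above (not part of the posted series; the field names are assumptions), the per-instance SCMI references could live in the per-policy private data so that scmi_cpufreq_get_rate() no longer reaches for file-scope state:

struct scmi_data {
	int domain_id;
	int nr_opp;
	struct device *cpu_dev;
	struct cpufreq_policy *policy;
	cpumask_var_t opp_shared_cpus;
	struct notifier_block limit_notify_nb;
	/* Previously file-scope globals, carried per instance instead: */
	const struct scmi_handle *handle;
	const struct scmi_perf_proto_ops *perf_ops;
	struct scmi_protocol_handle *ph;
};

static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
{
	struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
	struct scmi_data *priv = policy->driver_data;
	unsigned long rate;
	int ret;

	ret = priv->perf_ops->freq_get(priv->ph, priv->domain_id, &rate, false);
	if (ret)
		return 0;

	return rate / 1000;
}

Whether cpufreq itself can cope with the same driver being registered more than once is the open question raised above, so a sketch like this only addresses the driver-local globals.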
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-29 11:28 ` Cristian Marussi @ 2024-02-29 11:45 ` Lukasz Luba 2024-02-29 12:11 ` Cristian Marussi 0 siblings, 1 reply; 16+ messages in thread From: Lukasz Luba @ 2024-02-29 11:45 UTC (permalink / raw) To: Cristian Marussi Cc: Sibi Sankar, sudeep.holla, linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/29/24 11:28, Cristian Marussi wrote: > On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote: >> >> >> On 2/29/24 09:59, Lukasz Luba wrote: >>> >>> >>> On 2/28/24 17:00, Sibi Sankar wrote: >>>> >>>> >>>> On 2/28/24 18:54, Lukasz Luba wrote: >>>>> >>>>> >>>>> On 2/27/24 18:16, Sibi Sankar wrote: >>>>>> Register for limit change notifications if supported and use >>>>>> the throttled >>>>>> frequency from the notification to apply HW pressure. >>>> >>>> Lukasz, >>>> >>>> Thanks for taking time to review the series! >>>> >>>>>> >>>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >>>>>> --- >>>>>> >>>>>> v3: >>>>>> * Sanitize range_max received from the notifier. [Pierre] >>>>>> * Update commit message. >>>>>> >>>>>> � drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- >>>>>> � 1 file changed, 28 insertions(+), 1 deletion(-) >>>>>> >>>>>> diff --git a/drivers/cpufreq/scmi-cpufreq.c >>>>>> b/drivers/cpufreq/scmi-cpufreq.c >>>>>> index 76a0ddbd9d24..78b87b72962d 100644 >>>>>> --- a/drivers/cpufreq/scmi-cpufreq.c >>>>>> +++ b/drivers/cpufreq/scmi-cpufreq.c >>>>>> @@ -25,9 +25,13 @@ struct scmi_data { >>>>>> ����� int domain_id; >>>>>> ����� int nr_opp; >>>>>> ����� struct device *cpu_dev; >>>>>> +��� struct cpufreq_policy *policy; >>>>>> ����� cpumask_var_t opp_shared_cpus; >>>>>> +��� struct notifier_block limit_notify_nb; >>>>>> � }; >>>>>> +const struct scmi_handle *handle; >>> >>> I've missed this bit here. >> >> So for this change we actually have to ask Cristian or Sudeep >> because I'm not sure if we have only one 'handle' instance >> for all cpufreq devices. >> >> If we have different 'handle' we cannot move it to the >> global single pointer. >> >> Sudeep, Cristian what do you think? > > I was just replying noticing this :D .... since SCMI drivers can be > probed multiple times IF you defined multiple scmi top nodes in your DT > containing the same protocol nodes, they receive a distinct sdev/handle/ph > for each probe...so any attempt to globalize these wont work...BUT... > > ...this is a bit of a weird setup BUT it is not against the spec and it can > be used to parallelize more the SCMI accesses to disjont set of resources > within the same protocol (a long story here...) AND this type of setup is > something that it is already used by some other colleagues of Sibi working > on a different line of products (AFAIK)... > > So, for these reasons, usually, all the other SCMI drivers have per-instance > non-global references to handle/sdev/ph.... > > ...having said that, thought, looking at the structure of CPUFReq > drivers, I am not sure that they can stand such a similar setup > where multiple instances of this same driver are probed > > .... indeed the existent *ph refs above is already global....so it wont already > work anyway in case of multiple instances now... 
> > ...and if I look at how CPUFreq expects the signature of scmi_cpufreq_get_rate() > to be annd how it is implemented now using the global *ph reference, it is > clearly already not working cleanly on a multi-instance setup... > > ...now...I can imagine how to (maybe) fix the above removing the globals and > fixing this, BUT the question, more generally, is CPUFreq supposed to work at all in > this multi-probed mode of operation ? > Does it even make sense to be able to support this in CPUFREQ ? > > (as an example in cpufreq,c there is static global cpufreq_driver > pointing to the arch-specific configured driver BUT that also holds > some .driver_data AND that cleraly wont be instance specific if you > probe multiple times and register with CPUFreq multiple times...) > > More questions than answers here :D > Thanks Cristian for instant response. Yes, indeed now we have more questions :) (which is good). But that's good description of the situation. So lets consider a few option what we could do now: 1. Let Sibi add another global state the 'handle' but add a BUG_ON() or WARN_ON() in the probe path if the next 'handle' instance is different than already set in global. This would simply mean that we don't support (yet) such configuration in a platform. As you said, we already have the *ph global, so maybe such platforms with multiple instances for this particular cpufreq and performance protocol don't exist yet. 2. Ask Sibi to wait with this change, till we refactor the exiting driver such that it could support easily those multiple instances. Then pick up this patch set. Although, we would also like to have those notifications from our Juno SCP reference FW, so the feature is useful. 3. Ask Sibi to refactor his patch to somehow get the 'handle' in different way, using exiting code and not introduce this global. IHMO we could do this in steps: 1. and then 2. When we create some mock platform to test this refactoring we can start cleaning it. What do you think? Regards, Lukasz _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-29 11:45 ` Lukasz Luba @ 2024-02-29 12:11 ` Cristian Marussi 2024-02-29 14:15 ` Lukasz Luba 0 siblings, 1 reply; 16+ messages in thread From: Cristian Marussi @ 2024-02-29 12:11 UTC (permalink / raw) To: Lukasz Luba Cc: Sibi Sankar, sudeep.holla, linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On Thu, Feb 29, 2024 at 11:45:41AM +0000, Lukasz Luba wrote: > > > On 2/29/24 11:28, Cristian Marussi wrote: > > On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote: > > > > > > > > > On 2/29/24 09:59, Lukasz Luba wrote: > > > > > > > > > > > > On 2/28/24 17:00, Sibi Sankar wrote: > > > > > > > > > > > > > > > On 2/28/24 18:54, Lukasz Luba wrote: > > > > > > > > > > > > > > > > > > On 2/27/24 18:16, Sibi Sankar wrote: > > > > > > > Register for limit change notifications if supported and use > > > > > > > the throttled > > > > > > > frequency from the notification to apply HW pressure. > > > > > > > > > > Lukasz, > > > > > > > > > > Thanks for taking time to review the series! > > > > > > > > > > > > > > > > > > > Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> > > > > > > > --- > > > > > > > > > > > > > > v3: > > > > > > > * Sanitize range_max received from the notifier. [Pierre] > > > > > > > * Update commit message. > > > > > > > > > > > > > > � drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- > > > > > > > � 1 file changed, 28 insertions(+), 1 deletion(-) > > > > > > > > > > > > > > diff --git a/drivers/cpufreq/scmi-cpufreq.c > > > > > > > b/drivers/cpufreq/scmi-cpufreq.c > > > > > > > index 76a0ddbd9d24..78b87b72962d 100644 > > > > > > > --- a/drivers/cpufreq/scmi-cpufreq.c > > > > > > > +++ b/drivers/cpufreq/scmi-cpufreq.c > > > > > > > @@ -25,9 +25,13 @@ struct scmi_data { > > > > > > > ����� int domain_id; > > > > > > > ����� int nr_opp; > > > > > > > ����� struct device *cpu_dev; > > > > > > > +��� struct cpufreq_policy *policy; > > > > > > > ����� cpumask_var_t opp_shared_cpus; > > > > > > > +��� struct notifier_block limit_notify_nb; > > > > > > > � }; > > > > > > > +const struct scmi_handle *handle; > > > > > > > > I've missed this bit here. > > > > > > So for this change we actually have to ask Cristian or Sudeep > > > because I'm not sure if we have only one 'handle' instance > > > for all cpufreq devices. > > > > > > If we have different 'handle' we cannot move it to the > > > global single pointer. > > > > > > Sudeep, Cristian what do you think? > > > > I was just replying noticing this :D .... since SCMI drivers can be > > probed multiple times IF you defined multiple scmi top nodes in your DT > > containing the same protocol nodes, they receive a distinct sdev/handle/ph > > for each probe...so any attempt to globalize these wont work...BUT... > > > > ...this is a bit of a weird setup BUT it is not against the spec and it can > > be used to parallelize more the SCMI accesses to disjont set of resources > > within the same protocol (a long story here...) AND this type of setup is > > something that it is already used by some other colleagues of Sibi working > > on a different line of products (AFAIK)... > > > > So, for these reasons, usually, all the other SCMI drivers have per-instance > > non-global references to handle/sdev/ph.... 
> > > > ...having said that, thought, looking at the structure of CPUFReq > > drivers, I am not sure that they can stand such a similar setup > > where multiple instances of this same driver are probed > > > > .... indeed the existent *ph refs above is already global....so it wont already > > work anyway in case of multiple instances now... > > > > ...and if I look at how CPUFreq expects the signature of scmi_cpufreq_get_rate() > > to be annd how it is implemented now using the global *ph reference, it is > > clearly already not working cleanly on a multi-instance setup... > > > > ...now...I can imagine how to (maybe) fix the above removing the globals and > > fixing this, BUT the question, more generally, is CPUFreq supposed to work at all in > > this multi-probed mode of operation ? > > Does it even make sense to be able to support this in CPUFREQ ? > > > > (as an example in cpufreq,c there is static global cpufreq_driver > > pointing to the arch-specific configured driver BUT that also holds > > some .driver_data AND that cleraly wont be instance specific if you > > probe multiple times and register with CPUFreq multiple times...) > > > > More questions than answers here :D > > > > Thanks Cristian for instant response. Yes, indeed now we have more > questions :) (which is good). But that's good description of the > situation. > > So lets consider a few option what we could do now: > 1. Let Sibi add another global state the 'handle' but add > a BUG_ON() or WARN_ON() in the probe path if the next > 'handle' instance is different than already set in global. > This would simply mean that we don't support (yet) > such configuration in a platform. As you said, we > already have the *ph global, so maybe such platforms > with multiple instances for this particular cpufreq and > performance protocol don't exist yet. Yes this is the quickst way (and a WARN_ON() is better I'd say) but there are similar issues of "unicity" currently already with another vendor SCMI drivers and custom protocol currently under review, so I was thinking to add a new common mechanism in SCMI to handle this ... not thought about this really in depth and I want to chat with Sudeep about this... > 2. Ask Sibi to wait with this change, till we refactor the > exiting driver such that it could support easily those > multiple instances. Then pick up this patch set. > Although, we would also like to have those notifications from our > Juno SCP reference FW, so the feature is useful. > 3. Ask Sibi to refactor his patch to somehow get the 'handle' > in different way, using exiting code and not introduce this global. > > IHMO we could do this in steps: 1. and then 2. When > we create some mock platform to test this refactoring we can > start cleaning it. > Both of these options really beg an answer to my original previous q question...if we somehow enable this multi-probe support in the scmi-cpufreq.c driver by avoiding glbals refs, does this work at all in the context of CPUFreq ? ...or it is just that CPUFreq cannot handle such a configuration (and maybe dont want to) and so the only solution here is just 1. at first and then a common refined mechanism (as mentioned above) to ensure this "unicity" of the probes for some drivers ? I'm not familiar enough to grasp if this "multi-probed" mode of operation is allowed/supported by CPUFreq and, more important, if it makes any sense at all to be a supported mode... 
Thanks, Cristian _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
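As a sketch of "option 1" above combined with the WARN_ON() preference (the helper name and placement are assumptions, not a posted patch), the probe path could latch the first handle it sees and refuse a second, different instance:

static bool scmi_cpufreq_claim_instance(const struct scmi_handle *h)
{
	/* First probe wins: remember which SCMI instance we are bound to. */
	if (!handle) {
		handle = h;
		return true;
	}

	/*
	 * The driver's globals can only describe one platform instance
	 * today, so flag any further instance instead of silently
	 * overwriting them.
	 */
	return !WARN_ON(handle != h);
}

scmi_cpufreq_probe() would call this before touching scmi_dev/ph and return something like -EBUSY when it fails.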
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-29 12:11 ` Cristian Marussi @ 2024-02-29 14:15 ` Lukasz Luba 2024-03-01 5:31 ` Sibi Sankar 0 siblings, 1 reply; 16+ messages in thread From: Lukasz Luba @ 2024-02-29 14:15 UTC (permalink / raw) To: Cristian Marussi, Sibi Sankar Cc: sudeep.holla, linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/29/24 12:11, Cristian Marussi wrote: > On Thu, Feb 29, 2024 at 11:45:41AM +0000, Lukasz Luba wrote: >> >> >> On 2/29/24 11:28, Cristian Marussi wrote: >>> On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote: >>>> >>>> >>>> On 2/29/24 09:59, Lukasz Luba wrote: >>>>> >>>>> >>>>> On 2/28/24 17:00, Sibi Sankar wrote: >>>>>> >>>>>> >>>>>> On 2/28/24 18:54, Lukasz Luba wrote: >>>>>>> >>>>>>> >>>>>>> On 2/27/24 18:16, Sibi Sankar wrote: >>>>>>>> Register for limit change notifications if supported and use >>>>>>>> the throttled >>>>>>>> frequency from the notification to apply HW pressure. >>>>>> >>>>>> Lukasz, >>>>>> >>>>>> Thanks for taking time to review the series! >>>>>> >>>>>>>> >>>>>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >>>>>>>> --- >>>>>>>> >>>>>>>> v3: >>>>>>>> * Sanitize range_max received from the notifier. [Pierre] >>>>>>>> * Update commit message. >>>>>>>> >>>>>>>> � drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++- >>>>>>>> � 1 file changed, 28 insertions(+), 1 deletion(-) >>>>>>>> >>>>>>>> diff --git a/drivers/cpufreq/scmi-cpufreq.c >>>>>>>> b/drivers/cpufreq/scmi-cpufreq.c >>>>>>>> index 76a0ddbd9d24..78b87b72962d 100644 >>>>>>>> --- a/drivers/cpufreq/scmi-cpufreq.c >>>>>>>> +++ b/drivers/cpufreq/scmi-cpufreq.c >>>>>>>> @@ -25,9 +25,13 @@ struct scmi_data { >>>>>>>> ����� int domain_id; >>>>>>>> ����� int nr_opp; >>>>>>>> ����� struct device *cpu_dev; >>>>>>>> +��� struct cpufreq_policy *policy; >>>>>>>> ����� cpumask_var_t opp_shared_cpus; >>>>>>>> +��� struct notifier_block limit_notify_nb; >>>>>>>> � }; >>>>>>>> +const struct scmi_handle *handle; >>>>> >>>>> I've missed this bit here. >>>> >>>> So for this change we actually have to ask Cristian or Sudeep >>>> because I'm not sure if we have only one 'handle' instance >>>> for all cpufreq devices. >>>> >>>> If we have different 'handle' we cannot move it to the >>>> global single pointer. >>>> >>>> Sudeep, Cristian what do you think? >>> >>> I was just replying noticing this :D .... since SCMI drivers can be >>> probed multiple times IF you defined multiple scmi top nodes in your DT >>> containing the same protocol nodes, they receive a distinct sdev/handle/ph >>> for each probe...so any attempt to globalize these wont work...BUT... >>> >>> ...this is a bit of a weird setup BUT it is not against the spec and it can >>> be used to parallelize more the SCMI accesses to disjont set of resources >>> within the same protocol (a long story here...) AND this type of setup is >>> something that it is already used by some other colleagues of Sibi working >>> on a different line of products (AFAIK)... >>> >>> So, for these reasons, usually, all the other SCMI drivers have per-instance >>> non-global references to handle/sdev/ph.... >>> >>> ...having said that, thought, looking at the structure of CPUFReq >>> drivers, I am not sure that they can stand such a similar setup >>> where multiple instances of this same driver are probed >>> >>> .... 
indeed the existent *ph refs above is already global....so it wont already >>> work anyway in case of multiple instances now... >>> >>> ...and if I look at how CPUFreq expects the signature of scmi_cpufreq_get_rate() >>> to be annd how it is implemented now using the global *ph reference, it is >>> clearly already not working cleanly on a multi-instance setup... >>> >>> ...now...I can imagine how to (maybe) fix the above removing the globals and >>> fixing this, BUT the question, more generally, is CPUFreq supposed to work at all in >>> this multi-probed mode of operation ? >>> Does it even make sense to be able to support this in CPUFREQ ? >>> >>> (as an example in cpufreq,c there is static global cpufreq_driver >>> pointing to the arch-specific configured driver BUT that also holds >>> some .driver_data AND that cleraly wont be instance specific if you >>> probe multiple times and register with CPUFreq multiple times...) >>> >>> More questions than answers here :D >>> >> >> Thanks Cristian for instant response. Yes, indeed now we have more >> questions :) (which is good). But that's good description of the >> situation. >> >> So lets consider a few option what we could do now: >> 1. Let Sibi add another global state the 'handle' but add >> a BUG_ON() or WARN_ON() in the probe path if the next >> 'handle' instance is different than already set in global. >> This would simply mean that we don't support (yet) >> such configuration in a platform. As you said, we >> already have the *ph global, so maybe such platforms >> with multiple instances for this particular cpufreq and >> performance protocol don't exist yet. > > Yes this is the quickst way (and a WARN_ON() is better I'd say) but there > are similar issues of "unicity" currently already with another vendor SCMI > drivers and custom protocol currently under review, so I was thinking to > add a new common mechanism in SCMI to handle this ... not thought about > this really in depth and I want to chat with Sudeep about this... > >> 2. Ask Sibi to wait with this change, till we refactor the >> exiting driver such that it could support easily those >> multiple instances. Then pick up this patch set. >> Although, we would also like to have those notifications from our >> Juno SCP reference FW, so the feature is useful. >> 3. Ask Sibi to refactor his patch to somehow get the 'handle' >> in different way, using exiting code and not introduce this global. >> > >> IHMO we could do this in steps: 1. and then 2. When >> we create some mock platform to test this refactoring we can >> start cleaning it. >> > > Both of these options really beg an answer to my original previous q > question...if we somehow enable this multi-probe support in the > scmi-cpufreq.c driver by avoiding glbals refs, does this work at all in > the context of CPUFreq ? I don't know yet. > > ...or it is just that CPUFreq cannot handle such a configuration (and > maybe dont want to) and so the only solution here is just 1. at first and > then a common refined mechanism (as mentioned above) to ensure this "unicity" > of the probes for some drivers ? This sounds reasonable. > > I'm not familiar enough to grasp if this "multi-probed" mode of operation is > allowed/supported by CPUFreq and, more important, if it makes any sense > at all to be a supported mode... > OK, let me check some stuff in the code and think for a while on that. Thanks Cristian! Sibi, please give me a few days. In the meantime you can continue on the 'boost' patch set probably. 
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-02-29 14:15 ` Lukasz Luba @ 2024-03-01 5:31 ` Sibi Sankar 2024-03-22 10:45 ` Lukasz Luba 0 siblings, 1 reply; 16+ messages in thread From: Sibi Sankar @ 2024-03-01 5:31 UTC (permalink / raw) To: Lukasz Luba, Cristian Marussi Cc: sudeep.holla, linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 2/29/24 19:45, Lukasz Luba wrote: > > > On 2/29/24 12:11, Cristian Marussi wrote: >> On Thu, Feb 29, 2024 at 11:45:41AM +0000, Lukasz Luba wrote: >>> >>> >>> On 2/29/24 11:28, Cristian Marussi wrote: >>>> On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote: >>>>> >>>>> >>>>> On 2/29/24 09:59, Lukasz Luba wrote: >>>>>> >>>>>> >>>>>> On 2/28/24 17:00, Sibi Sankar wrote: >>>>>>> >>>>>>> >>>>>>> On 2/28/24 18:54, Lukasz Luba wrote: >>>>>>>> >>>>>>>> >>>>>>>> On 2/27/24 18:16, Sibi Sankar wrote: >>>>>>>>> Register for limit change notifications if supported and use >>>>>>>>> the throttled >>>>>>>>> frequency from the notification to apply HW pressure. >>>>>>> >>>>>>> Lukasz, >>>>>>> >>>>>>> Thanks for taking time to review the series! >>>>>>> >>>>>>>>> >>>>>>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >>>>>>>>> --- >>>>>>>>> >>>>>>>>> v3: >>>>>>>>> * Sanitize range_max received from the notifier. [Pierre] >>>>>>>>> * Update commit message. >>>>>>>>> >>>>>>>>> � drivers/cpufreq/scmi-cpufreq.c | 29 >>>>>>>>> ++++++++++++++++++++++++++++- >>>>>>>>> � 1 file changed, 28 insertions(+), 1 deletion(-) >>>>>>>>> >>>>>>>>> diff --git a/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>> b/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>> index 76a0ddbd9d24..78b87b72962d 100644 >>>>>>>>> --- a/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>> +++ b/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>> @@ -25,9 +25,13 @@ struct scmi_data { >>>>>>>>> ����� int domain_id; >>>>>>>>> ����� int nr_opp; >>>>>>>>> ����� struct device *cpu_dev; >>>>>>>>> +��� struct cpufreq_policy *policy; >>>>>>>>> ����� cpumask_var_t opp_shared_cpus; >>>>>>>>> +��� struct notifier_block limit_notify_nb; >>>>>>>>> � }; >>>>>>>>> +const struct scmi_handle *handle; >>>>>> >>>>>> I've missed this bit here. >>>>> >>>>> So for this change we actually have to ask Cristian or Sudeep >>>>> because I'm not sure if we have only one 'handle' instance >>>>> for all cpufreq devices. >>>>> >>>>> If we have different 'handle' we cannot move it to the >>>>> global single pointer. >>>>> >>>>> Sudeep, Cristian what do you think? >>>> >>>> I was just replying noticing this :D .... since SCMI drivers can be >>>> probed multiple times IF you defined multiple scmi top nodes in your DT >>>> containing the same protocol nodes, they receive a distinct >>>> sdev/handle/ph >>>> for each probe...so any attempt to globalize these wont work...BUT... >>>> >>>> ...this is a bit of a weird setup BUT it is not against the spec and >>>> it can >>>> be used to parallelize more the SCMI accesses to disjont set of >>>> resources >>>> within the same protocol (a long story here...) AND this type of >>>> setup is >>>> something that it is already used by some other colleagues of Sibi >>>> working >>>> on a different line of products (AFAIK)... >>>> >>>> So, for these reasons, usually, all the other SCMI drivers have >>>> per-instance >>>> non-global references to handle/sdev/ph.... 
>>>> >>>> ...having said that, thought, looking at the structure of CPUFReq >>>> drivers, I am not sure that they can stand such a similar setup >>>> where multiple instances of this same driver are probed >>>> >>>> .... indeed the existent *ph refs above is already global....so it >>>> wont already >>>> work anyway in case of multiple instances now... >>>> >>>> ...and if I look at how CPUFreq expects the signature of >>>> scmi_cpufreq_get_rate() >>>> to be annd how it is implemented now using the global *ph reference, >>>> it is >>>> clearly already not working cleanly on a multi-instance setup... >>>> >>>> ...now...I can imagine how to (maybe) fix the above removing the >>>> globals and >>>> fixing this, BUT the question, more generally, is CPUFreq supposed >>>> to work at all in >>>> this multi-probed mode of operation ? >>>> Does it even make sense to be able to support this in CPUFREQ ? >>>> >>>> (as an example in cpufreq,c there is static global cpufreq_driver >>>> pointing to the arch-specific configured driver BUT that also holds >>>> some .driver_data AND that cleraly wont be instance specific if you >>>> probe multiple times and register with CPUFreq multiple times...) >>>> >>>> More questions than answers here :D >>>> >>> >>> Thanks Cristian for instant response. Yes, indeed now we have more >>> questions :) (which is good). But that's good description of the >>> situation. >>> >>> So lets consider a few option what we could do now: >>> 1. Let Sibi add another global state the 'handle' but add >>> a BUG_ON() or WARN_ON() in the probe path if the next >>> 'handle' instance is different than already set in global. >>> This would simply mean that we don't support (yet) >>> such configuration in a platform. As you said, we >>> already have the *ph global, so maybe such platforms >>> with multiple instances for this particular cpufreq and >>> performance protocol don't exist yet. >> >> Yes this is the quickst way (and a WARN_ON() is better I'd say) but there >> are similar issues of "unicity" currently already with another vendor >> SCMI >> drivers and custom protocol currently under review, so I was thinking to >> add a new common mechanism in SCMI to handle this ... not thought about >> this really in depth and I want to chat with Sudeep about this... >> >>> 2. Ask Sibi to wait with this change, till we refactor the >>> exiting driver such that it could support easily those >>> multiple instances. Then pick up this patch set. >>> Although, we would also like to have those notifications from our >>> Juno SCP reference FW, so the feature is useful. >>> 3. Ask Sibi to refactor his patch to somehow get the 'handle' >>> in different way, using exiting code and not introduce this global. >>> >> >>> IHMO we could do this in steps: 1. and then 2. When >>> we create some mock platform to test this refactoring we can >>> start cleaning it. I should be able to volunteer a platform to test against when we have things ready. >>> >> >> Both of these options really beg an answer to my original previous q >> question...if we somehow enable this multi-probe support in the >> scmi-cpufreq.c driver by avoiding glbals refs, does this work at all in >> the context of CPUFreq ? > > I don't know yet. > >> >> ...or it is just that CPUFreq cannot handle such a configuration (and >> maybe dont want to) and so the only solution here is just 1. at first and >> then a common refined mechanism (as mentioned above) to ensure this >> "unicity" >> of the probes for some drivers ? > > This sounds reasonable. 
> >> I'm not familiar enough to grasp if this "multi-probed" mode of >> operation is >> allowed/supported by CPUFreq and, more important, if it makes any sense >> at all to be a supported mode... >> > > OK, let me check some stuff in the code and think for a while on that. > Thanks Cristian! > > Sibi, please give me a few days. In the meantime you can continue > on the 'boost' patch set probably. Sure, thanks. I've plenty of things to send out, so no hurry ;) -Sibi _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
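For context on why the cpufreq ->init() path wants a 'handle' reference at all: the SCMI notifier ops hang off the handle obtained in probe. Below is a minimal sketch of the registration being discussed, assuming the handle-based event_notifier_register() op and only the struct members quoted from the patch; the exact helper, names and error handling in the posted hunk may differ.

	/*
	 * Illustrative sketch, in scmi_cpufreq_init() after priv and the
	 * OPPs are set up (not a verbatim hunk from the posted series).
	 */
	priv->policy = policy;
	priv->limit_notify_nb.notifier_call = scmi_limit_notify_cb;
	ret = handle->notify_ops->event_notifier_register(handle,
				SCMI_PROTOCOL_PERF,
				SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
				&priv->domain_id,
				&priv->limit_notify_nb);
	if (ret)
		dev_warn(cpu_dev,
			 "failed to register for limits change notifications\n");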
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-03-01 5:31 ` Sibi Sankar @ 2024-03-22 10:45 ` Lukasz Luba 2024-03-26 5:25 ` Sibi Sankar 0 siblings, 1 reply; 16+ messages in thread From: Lukasz Luba @ 2024-03-22 10:45 UTC (permalink / raw) To: Sibi Sankar Cc: sudeep.holla, Cristian Marussi, linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm Hi Sibi, On 3/1/24 05:31, Sibi Sankar wrote: > > > On 2/29/24 19:45, Lukasz Luba wrote: >> >> >> On 2/29/24 12:11, Cristian Marussi wrote: >>> On Thu, Feb 29, 2024 at 11:45:41AM +0000, Lukasz Luba wrote: >>>> >>>> >>>> On 2/29/24 11:28, Cristian Marussi wrote: >>>>> On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote: >>>>>> >>>>>> >>>>>> On 2/29/24 09:59, Lukasz Luba wrote: >>>>>>> >>>>>>> >>>>>>> On 2/28/24 17:00, Sibi Sankar wrote: >>>>>>>> >>>>>>>> >>>>>>>> On 2/28/24 18:54, Lukasz Luba wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On 2/27/24 18:16, Sibi Sankar wrote: >>>>>>>>>> Register for limit change notifications if supported and use >>>>>>>>>> the throttled >>>>>>>>>> frequency from the notification to apply HW pressure. >>>>>>>> >>>>>>>> Lukasz, >>>>>>>> >>>>>>>> Thanks for taking time to review the series! >>>>>>>> >>>>>>>>>> >>>>>>>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >>>>>>>>>> --- >>>>>>>>>> >>>>>>>>>> v3: >>>>>>>>>> * Sanitize range_max received from the notifier. [Pierre] >>>>>>>>>> * Update commit message. >>>>>>>>>> >>>>>>>>>> � drivers/cpufreq/scmi-cpufreq.c | 29 >>>>>>>>>> ++++++++++++++++++++++++++++- >>>>>>>>>> � 1 file changed, 28 insertions(+), 1 deletion(-) >>>>>>>>>> >>>>>>>>>> diff --git a/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>>> b/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>>> index 76a0ddbd9d24..78b87b72962d 100644 >>>>>>>>>> --- a/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>>> +++ b/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>>> @@ -25,9 +25,13 @@ struct scmi_data { >>>>>>>>>> ����� int domain_id; >>>>>>>>>> ����� int nr_opp; >>>>>>>>>> ����� struct device *cpu_dev; >>>>>>>>>> +��� struct cpufreq_policy *policy; >>>>>>>>>> ����� cpumask_var_t opp_shared_cpus; >>>>>>>>>> +��� struct notifier_block limit_notify_nb; >>>>>>>>>> � }; >>>>>>>>>> +const struct scmi_handle *handle; >>>>>>> >>>>>>> I've missed this bit here. >>>>>> >>>>>> So for this change we actually have to ask Cristian or Sudeep >>>>>> because I'm not sure if we have only one 'handle' instance >>>>>> for all cpufreq devices. >>>>>> >>>>>> If we have different 'handle' we cannot move it to the >>>>>> global single pointer. >>>>>> >>>>>> Sudeep, Cristian what do you think? >>>>> >>>>> I was just replying noticing this :D .... since SCMI drivers can be >>>>> probed multiple times IF you defined multiple scmi top nodes in >>>>> your DT >>>>> containing the same protocol nodes, they receive a distinct >>>>> sdev/handle/ph >>>>> for each probe...so any attempt to globalize these wont work...BUT... >>>>> >>>>> ...this is a bit of a weird setup BUT it is not against the spec >>>>> and it can >>>>> be used to parallelize more the SCMI accesses to disjont set of >>>>> resources >>>>> within the same protocol (a long story here...) AND this type of >>>>> setup is >>>>> something that it is already used by some other colleagues of Sibi >>>>> working >>>>> on a different line of products (AFAIK)... >>>>> >>>>> So, for these reasons, usually, all the other SCMI drivers have >>>>> per-instance >>>>> non-global references to handle/sdev/ph.... 
>>>>> >>>>> ...having said that, thought, looking at the structure of CPUFReq >>>>> drivers, I am not sure that they can stand such a similar setup >>>>> where multiple instances of this same driver are probed >>>>> >>>>> .... indeed the existent *ph refs above is already global....so it >>>>> wont already >>>>> work anyway in case of multiple instances now... >>>>> >>>>> ...and if I look at how CPUFreq expects the signature of >>>>> scmi_cpufreq_get_rate() >>>>> to be annd how it is implemented now using the global *ph >>>>> reference, it is >>>>> clearly already not working cleanly on a multi-instance setup... >>>>> >>>>> ...now...I can imagine how to (maybe) fix the above removing the >>>>> globals and >>>>> fixing this, BUT the question, more generally, is CPUFreq supposed >>>>> to work at all in >>>>> this multi-probed mode of operation ? >>>>> Does it even make sense to be able to support this in CPUFREQ ? >>>>> >>>>> (as an example in cpufreq,c there is static global cpufreq_driver >>>>> pointing to the arch-specific configured driver BUT that also holds >>>>> some .driver_data AND that cleraly wont be instance specific if you >>>>> probe multiple times and register with CPUFreq multiple times...) >>>>> >>>>> More questions than answers here :D >>>>> >>>> >>>> Thanks Cristian for instant response. Yes, indeed now we have more >>>> questions :) (which is good). But that's good description of the >>>> situation. >>>> >>>> So lets consider a few option what we could do now: >>>> 1. Let Sibi add another global state the 'handle' but add >>>> a BUG_ON() or WARN_ON() in the probe path if the next >>>> 'handle' instance is different than already set in global. >>>> This would simply mean that we don't support (yet) >>>> such configuration in a platform. As you said, we >>>> already have the *ph global, so maybe such platforms >>>> with multiple instances for this particular cpufreq and >>>> performance protocol don't exist yet. >>> >>> Yes this is the quickst way (and a WARN_ON() is better I'd say) but >>> there >>> are similar issues of "unicity" currently already with another vendor >>> SCMI >>> drivers and custom protocol currently under review, so I was thinking to >>> add a new common mechanism in SCMI to handle this ... not thought about >>> this really in depth and I want to chat with Sudeep about this... >>> >>>> 2. Ask Sibi to wait with this change, till we refactor the >>>> exiting driver such that it could support easily those >>>> multiple instances. Then pick up this patch set. >>>> Although, we would also like to have those notifications from our >>>> Juno SCP reference FW, so the feature is useful. >>>> 3. Ask Sibi to refactor his patch to somehow get the 'handle' >>>> in different way, using exiting code and not introduce this global. >>>> >>> >>>> IHMO we could do this in steps: 1. and then 2. When >>>> we create some mock platform to test this refactoring we can >>>> start cleaning it. > > I should be able to volunteer a platform to test against when > we have things ready. > >>>> >>> >>> Both of these options really beg an answer to my original previous q >>> question...if we somehow enable this multi-probe support in the >>> scmi-cpufreq.c driver by avoiding glbals refs, does this work at all in >>> the context of CPUFreq ? >> >> I don't know yet. >> >>> >>> ...or it is just that CPUFreq cannot handle such a configuration (and >>> maybe dont want to) and so the only solution here is just 1. 
at first >>> and >>> then a common refined mechanism (as mentioned above) to ensure this >>> "unicity" >>> of the probes for some drivers ? >> >> This sounds reasonable. >> >>> >>> I'm not familiar enough to grasp if this "multi-probed" mode of >>> operation is >>> allowed/supported by CPUFreq and, more important, if it makes any sense >>> at all to be a supported mode... >>> >> >> OK, let me check some stuff in the code and think for a while on that. >> Thanks Cristian! >> >> Sibi, please give me a few days. In the meantime you can continue >> on the 'boost' patch set probably. > > sure, thanks. I've plenty things to send out so no hurry ;) > > -Sibi > I've gone through the cpufreq code. It's quite complicated how those policies, CPUs and drivers are set up. That said, IMHO we should be safe with your current proposal in this patch. As we discussed with Cristian, we can take that approach further. Therefore, you can add: Reviewed-by: Lukasz Luba <lukasz.luba@arm.com> Regards, Lukasz _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
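To make the reviewed flow concrete: the notifier clamps whatever limit the platform reports to the policy bounds and then lets cpufreq apply HW pressure through the helper exported in patch 1. A sketch of that callback follows; the field name range_max_freq and the use of HZ_PER_KHZ (from linux/units.h) are assumptions here, not a verbatim copy of the posted hunk.

static int scmi_limit_notify_cb(struct notifier_block *nb,
				unsigned long event, void *data)
{
	struct scmi_data *priv = container_of(nb, struct scmi_data,
					      limit_notify_nb);
	struct scmi_perf_limits_report *limit_notify = data;
	struct cpufreq_policy *policy = priv->policy;
	unsigned int limit_freq_khz;

	/* Sanitize range_max: never trust a value above cpuinfo.max_freq. */
	limit_freq_khz = limit_notify->range_max_freq / HZ_PER_KHZ;
	policy->max = clamp(limit_freq_khz, policy->cpuinfo.min_freq,
			    policy->cpuinfo.max_freq);

	/* Apply HW pressure to every CPU in the policy. */
	cpufreq_update_pressure(policy);

	return NOTIFY_OK;
}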
* Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications 2024-03-22 10:45 ` Lukasz Luba @ 2024-03-26 5:25 ` Sibi Sankar 0 siblings, 0 replies; 16+ messages in thread From: Sibi Sankar @ 2024-03-26 5:25 UTC (permalink / raw) To: Lukasz Luba Cc: sudeep.holla, Cristian Marussi, linux-arm-kernel, pierre.gondois, dietmar.eggemann, morten.rasmussen, viresh.kumar, rafael, linux-pm, linux-kernel, quic_mdtipton, linux-arm-msm On 3/22/24 16:15, Lukasz Luba wrote: > Hi Sibi, > > On 3/1/24 05:31, Sibi Sankar wrote: >> >> >> On 2/29/24 19:45, Lukasz Luba wrote: >>> >>> >>> On 2/29/24 12:11, Cristian Marussi wrote: >>>> On Thu, Feb 29, 2024 at 11:45:41AM +0000, Lukasz Luba wrote: >>>>> >>>>> >>>>> On 2/29/24 11:28, Cristian Marussi wrote: >>>>>> On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote: >>>>>>> >>>>>>> >>>>>>> On 2/29/24 09:59, Lukasz Luba wrote: >>>>>>>> >>>>>>>> >>>>>>>> On 2/28/24 17:00, Sibi Sankar wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On 2/28/24 18:54, Lukasz Luba wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 2/27/24 18:16, Sibi Sankar wrote: >>>>>>>>>>> Register for limit change notifications if supported and use >>>>>>>>>>> the throttled >>>>>>>>>>> frequency from the notification to apply HW pressure. >>>>>>>>> >>>>>>>>> Lukasz, >>>>>>>>> >>>>>>>>> Thanks for taking time to review the series! >>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com> >>>>>>>>>>> --- >>>>>>>>>>> >>>>>>>>>>> v3: >>>>>>>>>>> * Sanitize range_max received from the notifier. [Pierre] >>>>>>>>>>> * Update commit message. >>>>>>>>>>> >>>>>>>>>>> � drivers/cpufreq/scmi-cpufreq.c | 29 >>>>>>>>>>> ++++++++++++++++++++++++++++- >>>>>>>>>>> � 1 file changed, 28 insertions(+), 1 deletion(-) >>>>>>>>>>> >>>>>>>>>>> diff --git a/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>>>> b/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>>>> index 76a0ddbd9d24..78b87b72962d 100644 >>>>>>>>>>> --- a/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>>>> +++ b/drivers/cpufreq/scmi-cpufreq.c >>>>>>>>>>> @@ -25,9 +25,13 @@ struct scmi_data { >>>>>>>>>>> ����� int domain_id; >>>>>>>>>>> ����� int nr_opp; >>>>>>>>>>> ����� struct device *cpu_dev; >>>>>>>>>>> +��� struct cpufreq_policy *policy; >>>>>>>>>>> ����� cpumask_var_t opp_shared_cpus; >>>>>>>>>>> +��� struct notifier_block limit_notify_nb; >>>>>>>>>>> � }; >>>>>>>>>>> +const struct scmi_handle *handle; >>>>>>>> >>>>>>>> I've missed this bit here. >>>>>>> >>>>>>> So for this change we actually have to ask Cristian or Sudeep >>>>>>> because I'm not sure if we have only one 'handle' instance >>>>>>> for all cpufreq devices. >>>>>>> >>>>>>> If we have different 'handle' we cannot move it to the >>>>>>> global single pointer. >>>>>>> >>>>>>> Sudeep, Cristian what do you think? >>>>>> >>>>>> I was just replying noticing this :D .... since SCMI drivers can be >>>>>> probed multiple times IF you defined multiple scmi top nodes in >>>>>> your DT >>>>>> containing the same protocol nodes, they receive a distinct >>>>>> sdev/handle/ph >>>>>> for each probe...so any attempt to globalize these wont work...BUT... >>>>>> >>>>>> ...this is a bit of a weird setup BUT it is not against the spec >>>>>> and it can >>>>>> be used to parallelize more the SCMI accesses to disjont set of >>>>>> resources >>>>>> within the same protocol (a long story here...) AND this type of >>>>>> setup is >>>>>> something that it is already used by some other colleagues of Sibi >>>>>> working >>>>>> on a different line of products (AFAIK)... 
>>>>>> >>>>>> So, for these reasons, usually, all the other SCMI drivers have >>>>>> per-instance >>>>>> non-global references to handle/sdev/ph.... >>>>>> >>>>>> ...having said that, thought, looking at the structure of CPUFReq >>>>>> drivers, I am not sure that they can stand such a similar setup >>>>>> where multiple instances of this same driver are probed >>>>>> >>>>>> .... indeed the existent *ph refs above is already global....so it >>>>>> wont already >>>>>> work anyway in case of multiple instances now... >>>>>> >>>>>> ...and if I look at how CPUFreq expects the signature of >>>>>> scmi_cpufreq_get_rate() >>>>>> to be annd how it is implemented now using the global *ph >>>>>> reference, it is >>>>>> clearly already not working cleanly on a multi-instance setup... >>>>>> >>>>>> ...now...I can imagine how to (maybe) fix the above removing the >>>>>> globals and >>>>>> fixing this, BUT the question, more generally, is CPUFreq supposed >>>>>> to work at all in >>>>>> this multi-probed mode of operation ? >>>>>> Does it even make sense to be able to support this in CPUFREQ ? >>>>>> >>>>>> (as an example in cpufreq,c there is static global cpufreq_driver >>>>>> pointing to the arch-specific configured driver BUT that also >>>>>> holds >>>>>> some .driver_data AND that cleraly wont be instance specific if >>>>>> you >>>>>> probe multiple times and register with CPUFreq multiple times...) >>>>>> >>>>>> More questions than answers here :D >>>>>> >>>>> >>>>> Thanks Cristian for instant response. Yes, indeed now we have more >>>>> questions :) (which is good). But that's good description of the >>>>> situation. >>>>> >>>>> So lets consider a few option what we could do now: >>>>> 1. Let Sibi add another global state the 'handle' but add >>>>> a BUG_ON() or WARN_ON() in the probe path if the next >>>>> 'handle' instance is different than already set in global. >>>>> This would simply mean that we don't support (yet) >>>>> such configuration in a platform. As you said, we >>>>> already have the *ph global, so maybe such platforms >>>>> with multiple instances for this particular cpufreq and >>>>> performance protocol don't exist yet. >>>> >>>> Yes this is the quickst way (and a WARN_ON() is better I'd say) but >>>> there >>>> are similar issues of "unicity" currently already with another >>>> vendor SCMI >>>> drivers and custom protocol currently under review, so I was >>>> thinking to >>>> add a new common mechanism in SCMI to handle this ... not thought about >>>> this really in depth and I want to chat with Sudeep about this... >>>> >>>>> 2. Ask Sibi to wait with this change, till we refactor the >>>>> exiting driver such that it could support easily those >>>>> multiple instances. Then pick up this patch set. >>>>> Although, we would also like to have those notifications from our >>>>> Juno SCP reference FW, so the feature is useful. >>>>> 3. Ask Sibi to refactor his patch to somehow get the 'handle' >>>>> in different way, using exiting code and not introduce this >>>>> global. >>>>> >>>> >>>>> IHMO we could do this in steps: 1. and then 2. When >>>>> we create some mock platform to test this refactoring we can >>>>> start cleaning it. >> >> I should be able to volunteer a platform to test against when >> we have things ready. >> >>>>> >>>> >>>> Both of these options really beg an answer to my original previous q >>>> question...if we somehow enable this multi-probe support in the >>>> scmi-cpufreq.c driver by avoiding glbals refs, does this work at all in >>>> the context of CPUFreq ? 
>>> >>> I don't know yet. >>> >>>> >>>> ...or it is just that CPUFreq cannot handle such a configuration (and >>>> maybe dont want to) and so the only solution here is just 1. at >>>> first and >>>> then a common refined mechanism (as mentioned above) to ensure this >>>> "unicity" >>>> of the probes for some drivers ? >>> >>> This sounds reasonable. >>> >>>> >>>> I'm not familiar enough to grasp if this "multi-probed" mode of >>>> operation is >>>> allowed/supported by CPUFreq and, more important, if it makes any sense >>>> at all to be a supported mode... >>>> >>> >>> OK, let me check some stuff in the code and think for a while on that. >>> Thanks Cristian! >>> >>> Sibi, please give me a few days. In the meantime you can continue >>> on the 'boost' patch set probably. >> >> sure, thanks. I've plenty things to send out so no hurry ;) >> >> -Sibi >> > > I've went through the cpufreq. It's quite complicated how those > policies, cpus, drivers are setup. Although, IHMO we should be > safe with you current proposal in this patch. > > As we discussed with Cristian, we can take that approach further. > > Therefore, you can add: > > Reviewed-by: Lukasz Luba <lukasz.luba@arm.com> Thanks, I'll re-spin the series with a WARN_ON() in the interim. -Sibi > > Regards, > Lukasz _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 16+ messages in thread
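The interim WARN_ON() mentioned above could be as small as the following guard in the probe path. This is an illustrative sketch against the existing driver rather than a quote from the respun series; the error code and exact placement are assumptions.

static const struct scmi_handle *handle;

static int scmi_cpufreq_probe(struct scmi_device *sdev)
{
	/* ... existing probe body acquiring perf_ops, ph, etc. ... */

	/*
	 * Multi-instance SCMI setups are not supported here yet: warn and
	 * bail out if a second instance arrives with a different handle
	 * instead of silently overwriting the first one.
	 */
	if (WARN_ON(handle && handle != sdev->handle))
		return -EBUSY;

	handle = sdev->handle;

	/* ... rest of probe, cpufreq_register_driver(), etc. ... */
	return 0;
}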
end of thread, other threads: [~2024-03-26 5:26 UTC | newest]

Thread overview: 16+ messages
2024-02-27 18:16 [PATCH V3 0/2] firmware: arm_scmi: Register and handle limits change notification Sibi Sankar
2024-02-27 18:16 ` [PATCH V3 1/2] cpufreq: Export cpufreq_update_pressure Sibi Sankar
2024-02-27 19:32 ` Trilok Soni
2024-02-28 5:16 ` Sibi Sankar
2024-02-27 18:16 ` [PATCH V3 2/2] cpufreq: scmi: Register for limit change notifications Sibi Sankar
2024-02-28 13:24 ` Lukasz Luba
2024-02-28 17:00 ` Sibi Sankar
2024-02-29 9:59 ` Lukasz Luba
2024-02-29 10:22 ` Lukasz Luba
2024-02-29 11:28 ` Cristian Marussi
2024-02-29 11:45 ` Lukasz Luba
2024-02-29 12:11 ` Cristian Marussi
2024-02-29 14:15 ` Lukasz Luba
2024-03-01 5:31 ` Sibi Sankar
2024-03-22 10:45 ` Lukasz Luba
2024-03-26 5:25 ` Sibi Sankar