* Re: [PATCH] mmc: sdhc: Add PM QoS support for mmc driver
  [not found] <20241101024421.26679-1-quic_msana@quicinc.com>

From: Adrian Hunter @ 2024-11-08 14:43 UTC
To: Madhusudhan Sana, Ulf Hansson
Cc: quic_cang, quic_nguyenb, quic_bhaskarv, quic_mapa, quic_narepall,
    quic_nitirawa, quic_rampraka, quic_sachgupt, quic_sartgarg,
    Linux PM list, Rafael J. Wysocki

On 1/11/24 04:44, Madhusudhan Sana wrote:
> Register the mmc driver with the CPU latency PM QoS framework to
> improve mmc device I/O performance.

Not sure host controller drivers should really be manipulating
cpu_latency_qos in order to squeeze a bit more I/O performance.

>
> Signed-off-by: Madhusudhan Sana <quic_msana@quicinc.com>
> ---
>  drivers/mmc/host/sdhci.c | 47 ++++++++++++++++++++++++++++++++++++++++
>  drivers/mmc/host/sdhci.h |  4 ++++
>  2 files changed, 51 insertions(+)
>
> diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
> index f4a7733a8ad2..ffcc9544a3df 100644
> --- a/drivers/mmc/host/sdhci.c
> +++ b/drivers/mmc/host/sdhci.c
> @@ -359,6 +359,46 @@ static void sdhci_config_dma(struct sdhci_host *host)
>  	sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
>  }
>
> +/*
> + * sdhci_pm_qos_init - initialize PM QoS request
> + */
> +void sdhci_pm_qos_init(struct sdhci_host *host)
> +{
> +	if (host->pm_qos_enable)
> +		return;
> +
> +	cpu_latency_qos_add_request(&host->pm_qos_req, PM_QOS_DEFAULT_VALUE);
> +
> +	if (cpu_latency_qos_request_active(&host->pm_qos_req))
> +		host->pm_qos_enable = true;
> +}
> +
> +/*
> + * sdhci_pm_qos_exit - remove request from PM QoS
> + */
> +void sdhci_pm_qos_exit(struct sdhci_host *host)
> +{
> +	if (!host->pm_qos_enable)
> +		return;
> +
> +	cpu_latency_qos_remove_request(&host->pm_qos_req);
> +	host->pm_qos_enable = false;
> +}
> +
> +/*
> + * sdhci_pm_qos_update - update PM QoS request
> + * @on - True, vote for perf PM QoS mode
> + *     - False, vote for power save mode.
> + */
> +static void sdhci_pm_qos_update(struct sdhci_host *host, bool on)
> +{
> +	if (!host->pm_qos_enable)
> +		return;
> +
> +	cpu_latency_qos_update_request(&host->pm_qos_req, on ?
> +				       0 : PM_QOS_DEFAULT_VALUE);
> +}
> +
>  static void sdhci_init(struct sdhci_host *host, int soft)
>  {
>  	struct mmc_host *mmc = host->mmc;
> @@ -384,6 +424,9 @@ static void sdhci_init(struct sdhci_host *host, int soft)
>  		host->reinit_uhs = true;
>  		mmc->ops->set_ios(mmc, &mmc->ios);
>  	}
> +
> +	sdhci_pm_qos_init(host);
> +	sdhci_pm_qos_update(host, true);
>  }
>
>  static void sdhci_reinit(struct sdhci_host *host)
> @@ -2072,6 +2115,7 @@ void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
>
>  	clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
>  	sdhci_enable_clk(host, clk);
> +	sdhci_pm_qos_update(host, true);
>  }
>  EXPORT_SYMBOL_GPL(sdhci_set_clock);
>
> @@ -3811,6 +3855,7 @@ int sdhci_suspend_host(struct sdhci_host *host)
>  		sdhci_writel(host, 0, SDHCI_SIGNAL_ENABLE);
>  		free_irq(host->irq, host);
>  	}
> +	sdhci_pm_qos_update(host, false);
>
>  	return 0;
>  }
> @@ -3873,6 +3918,7 @@ int sdhci_runtime_suspend_host(struct sdhci_host *host)
>  	spin_lock_irqsave(&host->lock, flags);
>  	host->runtime_suspended = true;
>  	spin_unlock_irqrestore(&host->lock, flags);
> +	sdhci_pm_qos_update(host, false);
>
>  	return 0;
>  }
> @@ -4987,6 +5033,7 @@ void sdhci_remove_host(struct sdhci_host *host, int dead)
>  	if (host->use_external_dma)
>  		sdhci_external_dma_release(host);
>
> +	sdhci_pm_qos_exit(host);
>  	host->adma_table = NULL;
>  	host->align_buffer = NULL;
>  }
> diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
> index cd0e35a80542..685036ed888b 100644
> --- a/drivers/mmc/host/sdhci.h
> +++ b/drivers/mmc/host/sdhci.h
> @@ -16,6 +16,7 @@
>  #include <linux/io.h>
>  #include <linux/leds.h>
>  #include <linux/interrupt.h>
> +#include <linux/pm_qos.h>
>
>  #include <linux/mmc/host.h>
>
> @@ -675,6 +676,9 @@ struct sdhci_host {
>
>  	u64 data_timeout;
>
> +	struct pm_qos_request pm_qos_req;	/* PM QoS request handle */
> +	bool pm_qos_enable;	/* flag to check whether PM QoS is enabled */
> +
>  	unsigned long private[] ____cacheline_aligned;
>  };
* Re: [PATCH] mmc: sdhc: Add PM QoS support for mmc driver

From: Ulf Hansson @ 2024-11-12 15:08 UTC
To: Madhusudhan Sana, Adrian Hunter
Cc: quic_cang, quic_nguyenb, quic_bhaskarv, quic_mapa, quic_narepall,
    quic_nitirawa, quic_rampraka, quic_sachgupt, quic_sartgarg,
    Linux PM list, Rafael J. Wysocki

On Fri, 8 Nov 2024 at 15:43, Adrian Hunter <adrian.hunter@intel.com> wrote:
>
> On 1/11/24 04:44, Madhusudhan Sana wrote:
> > Register the mmc driver with the CPU latency PM QoS framework to
> > improve mmc device I/O performance.
>
> Not sure host controller drivers should really be manipulating
> cpu_latency_qos in order to squeeze a bit more I/O performance.

I fully agree, this type of boosting doesn't belong in a low-level
storage driver, as it is simply not capable of understanding the
use-case. Note that cpu_latency_qos can also be managed from
user-space.

Moreover, I guess there are use-cases where it would make sense to
have some in-kernel governor to boost for some conditions too. But as
far as I can tell, we don't have a common way to do this, but rather
platform-specific hacks, via devfreq drivers for example.

Kind regards
Uffe

[...]
* Re: [PATCH] mmc: sdhc: Add PM QoS support for mmc driver

From: Ram Prakash Gupta @ 2025-05-14 3:00 UTC
To: Ulf Hansson, Madhusudhan Sana, Adrian Hunter
Cc: quic_cang, quic_nguyenb, quic_bhaskarv, quic_mapa, quic_narepall,
    quic_nitirawa, quic_sachgupt, quic_sartgarg,
    Linux PM list, Rafael J. Wysocki

Thanks a lot, Adrian and Uffe, for your review and comments. The
Qualcomm engineer who initiated this work is no longer working on it,
and I am taking over responsibility for continuing it.

On 11/12/2024 8:38 PM, Ulf Hansson wrote:
> On Fri, 8 Nov 2024 at 15:43, Adrian Hunter <adrian.hunter@intel.com> wrote:
>> On 1/11/24 04:44, Madhusudhan Sana wrote:
>>> Register the mmc driver with the CPU latency PM QoS framework to
>>> improve mmc device I/O performance.
>> Not sure host controller drivers should really be manipulating
>> cpu_latency_qos in order to squeeze a bit more I/O performance.
> I fully agree, this type of boosting doesn't belong in a low-level
> storage driver, as it is simply not capable of understanding the
> use-case. Note that cpu_latency_qos can also be managed from
> user-space.
>
> Moreover, I guess there are use-cases where it would make sense to
> have some in-kernel governor to boost for some conditions too. But as
> far as I can tell, we don't have a common way to do this, but rather
> platform-specific hacks, via devfreq drivers for example.
>
> Kind regards
> Uffe

Hi Uffe/Adrian,

In my opinion, many use-case owners might not opt to control QoS from
user-space, and so would not use QoS to gain better performance.
Similar work was done in another driver, e.g.:

https://patchwork.kernel.org/project/linux-mediatek/patch/20231219123706.6463-2-quic_mnaresh@quicinc.com/

Earlier this was done in a Qualcomm-specific file, but the community
later suggested moving it into the core driver so that it applies to
everyone. With that in mind, here too it was made part of the core
driver. If this still doesn't fit in the mmc context, I would like to
refactor it and move it into the Qualcomm-specific file sdhci-msm.c.
Please share your opinion.

Thanks,
Ram

[...]
* Re: [PATCH] mmc: sdhc: Add PM QoS support for mmc driver

From: Ulf Hansson @ 2025-05-19 14:00 UTC
To: Ram Prakash Gupta
Cc: Madhusudhan Sana, Adrian Hunter, quic_cang, quic_nguyenb,
    quic_bhaskarv, quic_mapa, quic_narepall, quic_nitirawa,
    quic_sachgupt, quic_sartgarg, Linux PM list, Rafael J. Wysocki

On Wed, 14 May 2025 at 05:01, Ram Prakash Gupta
<quic_rampraka@quicinc.com> wrote:
>
> Thanks a lot, Adrian and Uffe, for your review and comments. The
> Qualcomm engineer who initiated this work is no longer working on it,
> and I am taking over responsibility for continuing it.
>
[...]
>
> In my opinion, many use-case owners might not opt to control QoS from
> user-space, and so would not use QoS to gain better performance.
> Similar work was done in another driver, e.g.:
>
> https://patchwork.kernel.org/project/linux-mediatek/patch/20231219123706.6463-2-quic_mnaresh@quicinc.com/
>
> Earlier this was done in a Qualcomm-specific file, but the community
> later suggested moving it into the core driver so that it applies to
> everyone. With that in mind, here too it was made part of the core
> driver. If this still doesn't fit in the mmc context, I would like to
> refactor it and move it into the Qualcomm-specific file sdhci-msm.c.
> Please share your opinion.

Well, I think you need to look at the broader picture. I understand
what you want to achieve, but there is more than one use-case and
platform to keep in mind.

The UFS change that you point at above is an indication that there is
indeed a common problem - but the problem is not even limited to UFS.
For example, how do we make sure that the QoS constraint is the
correct one? Depending on the use-case and the platform, things may
not necessarily get better, but worse, wasting energy for no good
reason.

Don't get me wrong, I think this deserves to be discussed. My
suggestion is that you post an "RFD" to the linux-block mailing list,
make sure to describe the problem from a top-level point of view, and
ask for people's opinions/suggestions. I would be happy to be cc:ed
too.

[...]

Kind regards
Uffe