linux-pm.vger.kernel.org archive mirror
From: "Rafael J. Wysocki" <rjw@sisk.pl>
To: markgross@thegnar.org
Cc: Venkatesh Pallipadi <venki@google.com>,
	Linux PM mailing list <linux-pm@lists.linux-foundation.org>,
	Jean Pihet <j-pihet@ti.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] PM / QoS: unconditionally build the per-device constraints feature
Date: Sun, 12 Feb 2012 22:27:39 +0100	[thread overview]
Message-ID: <201202122227.39949.rjw@sisk.pl> (raw)
In-Reply-To: <20120212020757.GF18742@gs62>

On Sunday, February 12, 2012, mark gross wrote:
> why are these two patches?
> 
> I think you could squash them into one and it would be just fine.

Well, what sense does _device_ PM QoS make without CONFIG_PM?

Rafael


> On Tue, Feb 07, 2012 at 09:34:06AM +0100, Jean Pihet wrote:
> > The per-device PM QoS feature depends on CONFIG_PM which depends
> > on PM_SLEEP || PM_RUNTIME. This breaks CPU C-states with kernels
> > not having these CONFIGs.
> > 
> > This patch allows the feature in all cases.
> > 
> > Signed-off-by: Jean Pihet <j-pihet@ti.com>
> > Cc: Rafael J. Wysocki <rjw@sisk.pl>
> > Cc: Mark Gross <markgross@thegnar.org>
> > ---
> >  drivers/base/power/Makefile |    3 ++-
> >  include/linux/pm_qos.h      |   39 ---------------------------------------
> >  2 files changed, 2 insertions(+), 40 deletions(-)
> > 
> > diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
> > index 2e58ebb..312eb65 100644
> > --- a/drivers/base/power/Makefile
> > +++ b/drivers/base/power/Makefile
> > @@ -1,4 +1,5 @@
> > -obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o
> > +obj-y			+= qos.o
> > +obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o
> >  obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
> >  obj-$(CONFIG_PM_RUNTIME)	+= runtime.o
> >  obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
> > diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
> > index 105be69..37b017a 100644
> > --- a/include/linux/pm_qos.h
> > +++ b/include/linux/pm_qos.h
> > @@ -77,7 +77,6 @@ int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
> >  int pm_qos_request_active(struct pm_qos_request *req);
> >  s32 pm_qos_read_value(struct pm_qos_constraints *c);
> >  
> > -#ifdef CONFIG_PM
> >  s32 __dev_pm_qos_read_value(struct device *dev);
> >  s32 dev_pm_qos_read_value(struct device *dev);
> >  int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
> > @@ -94,43 +93,5 @@ void dev_pm_qos_constraints_init(struct device *dev);
> >  void dev_pm_qos_constraints_destroy(struct device *dev);
> >  int dev_pm_qos_add_ancestor_request(struct device *dev,
> >  				    struct dev_pm_qos_request *req, s32 value);
> > -#else
> > -static inline s32 __dev_pm_qos_read_value(struct device *dev)
> > -			{ return 0; }
> > -static inline s32 dev_pm_qos_read_value(struct device *dev)
> > -			{ return 0; }
> > -static inline int dev_pm_qos_add_request(struct device *dev,
> > -					 struct dev_pm_qos_request *req,
> > -					 s32 value)
> > -			{ return 0; }
> > -static inline int dev_pm_qos_update_request(struct dev_pm_qos_request *req,
> > -					    s32 new_value)
> > -			{ return 0; }
> > -static inline int dev_pm_qos_remove_request(struct dev_pm_qos_request *req)
> > -			{ return 0; }
> > -static inline int dev_pm_qos_add_notifier(struct device *dev,
> > -					  struct notifier_block *notifier)
> > -			{ return 0; }
> > -static inline int dev_pm_qos_remove_notifier(struct device *dev,
> > -					     struct notifier_block *notifier)
> > -			{ return 0; }
> > -static inline int dev_pm_qos_add_global_notifier(
> > -					struct notifier_block *notifier)
> > -			{ return 0; }
> > -static inline int dev_pm_qos_remove_global_notifier(
> > -					struct notifier_block *notifier)
> > -			{ return 0; }
> > -static inline void dev_pm_qos_constraints_init(struct device *dev)
> > -{
> > -	dev->power.power_state = PMSG_ON;
> > -}
> > -static inline void dev_pm_qos_constraints_destroy(struct device *dev)
> > -{
> > -	dev->power.power_state = PMSG_INVALID;
> > -}
> > -static inline int dev_pm_qos_add_ancestor_request(struct device *dev,
> > -				    struct dev_pm_qos_request *req, s32 value)
> > -			{ return 0; }
> > -#endif
> >  
> >  #endif
> 
> 


Thread overview: 13+ messages
2012-02-07  8:34 [PATCH 0/2] PM / QoS: unconditionally build the feature Jean Pihet
2012-02-07  8:34 ` [PATCH 1/2] " Jean Pihet
2012-02-12  2:06   ` mark gross
2012-02-12 21:37   ` Rafael J. Wysocki
2012-02-13 13:50   ` [PATCH] " Jean Pihet
2012-02-13 15:41     ` Rafael J. Wysocki
2012-02-13 15:40       ` Jean Pihet
2012-02-17 19:27       ` Pavel Machek
2012-02-17 20:48         ` Rafael J. Wysocki
2012-02-07  8:34 ` [PATCH 2/2] PM / QoS: unconditionally build the per-device constraints feature Jean Pihet
2012-02-12  2:07   ` mark gross
2012-02-12 21:27     ` Rafael J. Wysocki [this message]
2012-02-15 15:33       ` mark gross
