public inbox for cpufreq@vger.kernel.org
From: "Rafael J. Wysocki" <rjw@rjwysocki.net>
To: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Cc: "Bjørn Mork" <bjorn@mork.no>,
	"Viresh Kumar" <viresh.kumar@linaro.org>,
	"Linux PM mailing list" <linux-pm@vger.kernel.org>,
	cpufreq <cpufreq@vger.kernel.org>
Subject: Re: [PATCH] cpufreq: fix garbage kobj on errors during suspend/resume
Date: Sat, 07 Dec 2013 02:17:50 +0100	[thread overview]
Message-ID: <7074288.NJO5GpSg1d@vostro.rjw.lan> (raw)
In-Reply-To: <52A15F44.1030807@linux.vnet.ibm.com>

On Friday, December 06, 2013 10:53:16 AM Srivatsa S. Bhat wrote:
> On 12/06/2013 03:59 AM, Rafael J. Wysocki wrote:
> > On Thursday, December 05, 2013 06:11:19 PM Srivatsa S. Bhat wrote:
> >> On 12/05/2013 12:27 PM, Bjørn Mork wrote:
> >>> Viresh Kumar <viresh.kumar@linaro.org> writes:
> >>>
> >>>> Sending from my phone (HTML mail), so I left the list off.
> >>>>
> >>>> Here is the old thread where we discussed this.. see if this helps..
> >>>>
> >>>> http://marc.info/?t=136845016900003&r=1&w=2
> >>>
> >>> Thanks.  That helped a lot.
> >>>
> >>> Unless I miss something, it looks like the permission problem *started*
> >>> with fallout from special suspend code - surprising the user by not
> >>> creating any offline/online event on suspend/resume.  Quoting from
> >>> http://marc.info/?l=linux-pm&m=136847781510358 :
> >>>
> >>>   (And yes, even code-wise, we use a slightly different
> >>>    path in the S3 code, to initiate hotplug. That's why the uevents
> >>>    are by-passed.)
> >>>
> >>
> >> I hope you didn't miss the main idea I was trying to convey in that
> >> reply: 
> >> "IMHO, using CPU hotplug (offline/online of CPUs) in the S3 path is
> >> supposed to be totally internal to the suspend/resume code. It is not
> >> intended for userspace to know that we are internally offlining/
> >> onlining CPUs."
> > 
> > By the way, in the meantime I discussed this with Viresh in the context of
> > a different (although related) fix and I suggested a different approach.
> > 
> > Namely, to split the CPU offline/online code into "core" and "add-ons"
> > parts, where the core part will do just whatever is needed to offline/online
> > CPU cores cleanly and the "add-ons" part will carry out the rest (e.g.
> > removal/addition of sysfs attributes and so on).
> > 
> > Then, the system suspend/resume code path will only run the "core" part
> > and whatever else CPU handling is needed during suspend/resume will be
> > carried out by the device suspend/resume code (via driver callbacks or
> > stuff similar to cpufreq_suspend() and cpufreq_resume() recently proposed
> > by Viresh).
> > 
> > In turn, the "runtime" CPU offline/online will carry out both the core
> > and the add-ons parts as it does today.
> > 
> > In my view this should address the problems we have with sysfs attributes,
> > governors start/stop etc. during system suspend/resume.
> > 
> 
> Hmm, yes that sounds like a good idea. Are you suggesting this "core" and
> "add-on" split only for the cpufreq parts of CPU hotplug or for everything
> involved in CPU hotplug code? IIUC you are suggesting the latter, which is
> likely to be a significant undertaking, but very well worth it in the long
> run, since it gives us an elegant solution for all these problems.

Yeah, and I'd like that to be done.

> I guess the *_FROZEN CPU hotplug notifications were originally introduced
> to provide us an infrastructure to do something like this, but obviously
> that hasn't worked out very well.

That's been a workaround to be honest.

> So I agree that a fundamental restructuring is in order, to cure all the
> innumerable problems.

Yes.

Thanks!

-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.


Thread overview: 22+ messages
2013-12-03 11:14 [PATCH] cpufreq: fix garbage kobj on errors during suspend/resume Bjørn Mork
2013-12-03 21:45 ` Rafael J. Wysocki
2013-12-04  6:23 ` Srivatsa S. Bhat
2013-12-24  9:46   ` Jarzmik, Robert
2013-12-04 10:32 ` Viresh Kumar
2013-12-04 12:08   ` Bjørn Mork
2013-12-04 14:41     ` Viresh Kumar
2013-12-04 15:41       ` Bjørn Mork
     [not found]         ` <CAKohponu3Fu=WaBHXP1iBJM87V9g=+hDPe=M168U_weODenZdQ@mail.gmail.com>
     [not found]           ` <878uvzyecg.fsf@nemi.mork.no>
2013-12-05 12:41             ` Srivatsa S. Bhat
2013-12-05 13:21               ` Bjørn Mork
2013-12-05 22:29               ` Rafael J. Wysocki
2013-12-06  5:23                 ` Srivatsa S. Bhat
2013-12-07  1:17                   ` Rafael J. Wysocki [this message]
2013-12-05  1:29   ` Rafael J. Wysocki
2013-12-09  2:59     ` Lan Tianyu
2013-12-09  6:48       ` Srivatsa S. Bhat
2013-12-09 10:04         ` Bjørn Mork
2013-12-12  1:59           ` Rafael J. Wysocki
2013-12-12  8:52             ` Bjørn Mork
2013-12-09 11:24         ` Martin Ziegler
2013-12-09 11:53           ` Bjørn Mork
2013-12-10 16:02         ` Martin Ziegler
