From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Alan Stern <stern@rowland.harvard.edu>,
	mingo@kernel.org, peterz@infradead.org, rusty@rustcorp.com.au,
	paulmck@linux.vnet.ibm.com, namhyung@kernel.org, tj@kernel.org,
	rjw@sisk.pl, nikunj@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Arjan van de Ven <arjan@infradead.org>
Subject: Re: [RFC PATCH 0/6] CPU hotplug: Reverse invocation of notifiers during CPU hotplug
Date: Wed, 25 Jul 2012 22:21:48 +0530	[thread overview]
Message-ID: <50102424.5010301@linux.vnet.ibm.com> (raw)
In-Reply-To: <alpine.LFD.2.02.1207251758340.32033@ionos>

On 07/25/2012 10:00 PM, Thomas Gleixner wrote:
> On Wed, 25 Jul 2012, Srivatsa S. Bhat wrote:
>> On 07/25/2012 08:27 PM, Alan Stern wrote:
>> One of the other ideas to improve the hotplug notifier stuff that came up during some
>> of the discussions was to implement explicit dependency tracking between the notifiers
>> and perhaps get rid of the priority numbers that are currently being used to provide
>> some sort of ordering between the callbacks. Links to some of the related discussions
>> are provided below.
> 
> The current code which brings a CPU up/down (mostly
> architecture-specific code) is completely asymmetric.
> 
> We really want a fully symmetric state machine here, which also gives
> us the proper invocation points for the other subsystems' callbacks.
> 

Right..

> While I thought about having a full dependency tracking system, I'm
> quite convinced by now that hotplug is a rather linear sequence which
> does not provide much room for parallel setup/teardown.
>

Pretty much, when considering hotplug of a single CPU.

(When considering boot, though, Arjan had proposed, while discussing his asynchronous
booting patch, that it would be good to split the physical and logical parts of
booting/hotplug, so that the physical part can happen in parallel across CPUs while
the logical online is done serially, much later. That is slightly off-topic here,
since we are mainly talking about hotplug of a single CPU; I mention it only because
we are discussing a hotplug redesign anyway.)
 
> At least we should start with a simple linear chain.
> 
> The problem with the current notifiers is, that we only have ordering
> for a few specific callbacks, but we don't have the faintest idea in
> which order all other random stuff is brought up and torn down.
> 

Right. Moreover, there is an implicit ordering between the events themselves:
CPU_DOWN_PREPARE always runs before CPU_DEAD, for example, no matter what priority
you give to your callback. IIRC, some callbacks seem to miss this point.

> So I started experimenting with the following:
> 
> struct hotplug_event {
>        int (*bring_up)(unsigned int cpu);
>        int (*tear_down)(unsigned int cpu);
> };
> 
> enum hotplug_events {
>      CPU_HOTPLUG_START,
>      CPU_HOTPLUG_CREATE_THREADS,
>      CPU_HOTPLUG_INIT_TIMERS,
>      ...
>      CPU_HOTPLUG_KICK_CPU,
>      ...
>      CPU_HOTPLUG_START_THREADS,
>      ...
>      CPU_HOTPLUG_SET_ONLINE,
>      ...
>      CPU_HOTPLUG_MAX_EVENTS,
> };
> 
> Now I have two arrays:
> 
> struct hotplug_event hotplug_events_bp[CPU_HOTPLUG_MAX_EVENTS];
> struct hotplug_event hotplug_events_ap[CPU_HOTPLUG_MAX_EVENTS];
>    
> The _bp one is the list of events which are executed on the active cpu
> and the _ap ones are those executed on the hotplugged cpu.
> 
> The core code advances the events in sync steps, so both BP and AP can
> issue a stop on the process and cause a rollback.
> 

Looks like a nice design!

> Most of the callbacks can be added to the arrays at compile time, just
> the stuff which is in modules requires an register/unregister
> interface.
> 
> Though in any case the enum gives us a very explicit ordering of
> setup/teardown, so rollback or partial online/offline should be simple
> to achieve.
> 
> The only drawback is that it will prevent out-of-tree modules from
> using the hotplug infrastructure, but I really couldn't care less.
> 

Heh ;-)

Regards,
Srivatsa S. Bhat


Thread overview: 24+ messages
2012-07-25 11:53 [RFC PATCH 0/6] CPU hotplug: Reverse invocation of notifiers during CPU hotplug Srivatsa S. Bhat
2012-07-25 11:53 ` [RFC PATCH 1/6] list, rcu: Introduce rcu version of reverse list traversal Srivatsa S. Bhat
2012-07-25 11:53 ` [RFC PATCH 2/6] notifiers: Convert notifier chain to circular doubly linked-list Srivatsa S. Bhat
2012-07-25 11:54 ` [RFC PATCH 3/6] notifiers: Add support for reverse invocation of notifier chains Srivatsa S. Bhat
2012-07-25 11:54 ` [RFC PATCH 4/6] sched, cpuset: Prepare scheduler and cpuset CPU hotplug callbacks for reverse invocation Srivatsa S. Bhat
2012-07-25 11:54 ` [RFC PATCH 5/6] sched, perf: Prepare migration and perf " Srivatsa S. Bhat
2012-07-25 11:55 ` [RFC PATCH 6/6] CPU hotplug: Invoke CPU offline notifiers in reverse order Srivatsa S. Bhat
2012-07-25 16:43   ` Tejun Heo
2012-07-25 14:57 ` [RFC PATCH 0/6] CPU hotplug: Reverse invocation of notifiers during CPU hotplug Alan Stern
2012-07-25 15:56   ` Srivatsa S. Bhat
2012-07-25 16:10     ` Alan Stern
2012-07-25 16:37       ` Srivatsa S. Bhat
2012-07-25 16:43       ` Paul E. McKenney
2012-07-25 19:44         ` Alan Stern
2012-07-25 16:30     ` Thomas Gleixner
2012-07-25 16:51       ` Srivatsa S. Bhat [this message]
2012-07-26 11:02         ` Thomas Gleixner
2012-07-26 11:11           ` Srivatsa S. Bhat
2012-07-25 18:22       ` Srivatsa S. Bhat
2012-07-26 10:55         ` Thomas Gleixner
2012-07-26 11:13           ` Srivatsa S. Bhat
2012-07-26 11:22       ` Srivatsa S. Bhat
2012-07-27  7:40       ` Rusty Russell
2012-08-01  7:10         ` Thomas Gleixner
