From: Oleg Nesterov <oleg@tv-sign.ru>
To: Gautham R Shenoy <ego@in.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org,
	Srivatsa Vaddagiri <vatsa@in.ibm.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Dipankar Sarma <dipankar@in.ibm.com>, Ingo Molnar <mingo@elte.hu>,
	Paul E McKenney <paulmck@us.ibm.com>
Subject: Re: [RFC PATCH 3/4] Replace per-subsystem mutexes with get_online_cpus
Date: Sun, 21 Oct 2007 15:39:17 +0400
Message-ID: <20071021113917.GA80@tv-sign.ru>
In-Reply-To: <20071016103637.GC16570@in.ibm.com>

On 10/16, Gautham R Shenoy wrote:
>
> This patch converts the known per-subsystem cpu_hotplug mutexes to
> get_online_cpus()/put_online_cpus().
> It also eliminates the CPU_LOCK_ACQUIRE and CPU_LOCK_RELEASE hotplug
> notification events.

Personally, I like the changes in workqueue.c very much; a couple of
minor nits below.
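
For context, the conversion pattern throughout the patch boils down to
this (a sketch; "subsys_cpu_mutex" and do_something() are made-up names
for illustration):

	/* before: each subsystem serializes against hotplug itself */
	mutex_lock(&subsys_cpu_mutex);
	for_each_online_cpu(cpu)
		do_something(cpu);
	mutex_unlock(&subsys_cpu_mutex);

	/* after: pin the online map via the refcounted hotplug lock */
	get_online_cpus();
	for_each_online_cpu(cpu)
		do_something(cpu);
	put_online_cpus();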

> --- linux-2.6.23.orig/kernel/workqueue.c
> +++ linux-2.6.23/kernel/workqueue.c
> @@ -592,8 +592,6 @@ EXPORT_SYMBOL(schedule_delayed_work_on);
>   * Returns zero on success.
>   * Returns -ve errno on failure.
>   *
> - * Appears to be racy against CPU hotplug.
> - *

see below,

>   * schedule_on_each_cpu() is very slow.
>   */
>  int schedule_on_each_cpu(work_func_t func)
> @@ -605,7 +603,7 @@ int schedule_on_each_cpu(work_func_t fun
>  	if (!works)
>  		return -ENOMEM;
>
> -	preempt_disable();		/* CPU hotplug */
> +	get_online_cpus();		/* CPU hotplug */
>  	for_each_online_cpu(cpu) {
>  		struct work_struct *work = per_cpu_ptr(works, cpu);
>
> @@ -613,7 +611,7 @@ int schedule_on_each_cpu(work_func_t fun
>  		set_bit(WORK_STRUCT_PENDING, work_data_bits(work));
>  		__queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu), work);
>  	}
> -	preempt_enable();
> +	put_online_cpus();
>  	flush_workqueue(keventd_wq);

Still racy: a CPU can go offline after put_online_cpus() but before
flush_workqueue() gets to it, leaving the work we queued on that CPU
unflushed. To kill the race, please move flush_workqueue() up, before
put_online_cpus().
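
IOW, something like this (a sketch, with the surrounding code as in
2.6.23's schedule_on_each_cpu()):

	get_online_cpus();		/* CPU hotplug */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		set_bit(WORK_STRUCT_PENDING, work_data_bits(work));
		__queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu), work);
	}
	/* flush while the online map is still pinned, so no CPU can
	 * go away with our work still queued on it */
	flush_workqueue(keventd_wq);
	put_online_cpus();
	free_percpu(works);
	return 0;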

> @@ -809,6 +809,7 @@ void destroy_workqueue(struct workqueue_
>  	struct cpu_workqueue_struct *cwq;
>  	int cpu;
>
> +	get_online_cpus();
>  	mutex_lock(&workqueue_mutex);
>  	list_del(&wq->list);
>  	mutex_unlock(&workqueue_mutex);
> @@ -817,6 +818,7 @@ void destroy_workqueue(struct workqueue_
>  		cwq = per_cpu_ptr(wq->cpu_wq, cpu);
>  		cleanup_workqueue_thread(cwq, cpu);
>  	}
> +	put_online_cpus();

Correct, but I'd suggest doing put_online_cpus() earlier, right after
mutex_unlock(&workqueue_mutex).
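
That is (a sketch; the cleanup loop as in 2.6.23, with *cpu_map from
wq_cpu_map()):

	get_online_cpus();
	mutex_lock(&workqueue_mutex);
	list_del(&wq->list);
	mutex_unlock(&workqueue_mutex);
	put_online_cpus();

	/* safe without the hotplug lock: the wq is off the list, so
	 * the hotplug callback can't see it any more, and
	 * cpu_populated_map never shrinks */
	for_each_cpu_mask(cpu, *cpu_map) {
		cwq = per_cpu_ptr(wq->cpu_wq, cpu);
		cleanup_workqueue_thread(cwq, cpu);
	}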

> @@ -830,22 +832,17 @@ static int __devinit workqueue_cpu_callb
>  	unsigned int cpu = (unsigned long)hcpu;
>  	struct cpu_workqueue_struct *cwq;
>  	struct workqueue_struct *wq;
> +	int ret = NOTIFY_OK;
>
>  	action &= ~CPU_TASKS_FROZEN;
>
>  	switch (action) {
> -	case CPU_LOCK_ACQUIRE:
> -		mutex_lock(&workqueue_mutex);
> -		return NOTIFY_OK;
> -
> -	case CPU_LOCK_RELEASE:
> -		mutex_unlock(&workqueue_mutex);
> -		return NOTIFY_OK;
>

please remove this empty line

>  	case CPU_UP_PREPARE:
>  		cpu_set(cpu, cpu_populated_map);
>  	}
>
> +	mutex_lock(&workqueue_mutex);

We don't need workqueue_mutex here. With your patch, workqueue_mutex
protects only the list of workqueues, nothing more. And all other users
(create/destroy) have to take get_online_cpus() anyway. This means we
can convert workqueue_mutex to a spinlock_t.
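
Something like this (a sketch; "workqueue_lock" is just a suggested
name):

	static DEFINE_SPINLOCK(workqueue_lock);

	/* __create_workqueue() */
	spin_lock(&workqueue_lock);
	list_add(&wq->list, &workqueues);
	spin_unlock(&workqueue_lock);

	/* destroy_workqueue() */
	spin_lock(&workqueue_lock);
	list_del(&wq->list);
	spin_unlock(&workqueue_lock);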

Oleg.

