public inbox for linux-kernel@vger.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	menage@google.com, miaox@cn.fujitsu.com, maxk@qualcomm.com,
	linux-kernel@vger.kernel.org,
	Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: Re: [PATCH 1/3] cgroup: convert open-coded mutex_lock(&cgroup_mutex) calls into cgroup_lock() calls
Date: Sun, 18 Jan 2009 10:10:38 +0100	[thread overview]
Message-ID: <20090118091038.GC27144@elte.hu> (raw)
In-Reply-To: <4972E2FD.1010902@cn.fujitsu.com>


* Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> Convert open-coded mutex_lock(&cgroup_mutex) calls into cgroup_lock()
> calls and convert mutex_unlock(&cgroup_mutex) calls into cgroup_unlock()
> calls.
> 
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> Cc: Max Krasnyansky <maxk@qualcomm.com>
> Cc: Miao Xie <miaox@cn.fujitsu.com>
> ---

(please include diffstat output in patches, so that the general source 
code impact can be seen at a glance.)

> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index c298310..75a352b 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -616,7 +688,7 @@ static void cgroup_diput(struct dentry *dentry, struct inode *inode)
>  		 * agent */
>  		synchronize_rcu();
>  
> -		mutex_lock(&cgroup_mutex);
> +		cgroup_lock();

this just converts a plain mutex call into a wrapped lock/unlock 
sequence that has higher overhead in the common case.

We should do the exact opposite: change this opaque API:

 void cgroup_lock(void)
 {
         mutex_lock(&cgroup_mutex);
 }

To something more explicit (and more maintainable) like:

  cgroup_mutex_lock(&cgroup_mutex);
  cgroup_mutex_unlock(&cgroup_mutex);

Which is a NOP in the !CGROUPS case and maps to mutex_lock/unlock in the 
CGROUPS=y case.

	Ingo


Thread overview: 17+ messages
2009-01-16  2:24 [PATCH] cpuset: fix possible deadlock in async_rebuild_sched_domains Miao Xie
2009-01-16  3:33 ` Lai Jiangshan
2009-01-16 20:57   ` Andrew Morton
2009-01-18  8:06     ` [PATCH 1/3] cgroup: convert open-coded mutex_lock(&cgroup_mutex) calls into cgroup_lock() calls Lai Jiangshan
2009-01-18  9:10       ` Ingo Molnar [this message]
2009-01-19  1:37         ` Paul Menage
2009-01-19  1:41           ` Ingo Molnar
2009-01-20  1:28             ` Paul Menage
2009-01-20 18:22               ` Peter Zijlstra
2009-01-20  1:18       ` Paul Menage
2009-01-18  8:06     ` [PATCH 2/3] cgroup: introduce cgroup_queue_deferred_work() Lai Jiangshan
2009-01-18  9:04       ` Ingo Molnar
2009-01-19  1:55         ` Lai Jiangshan
2009-01-20  1:26       ` Paul Menage
2009-01-18  8:06     ` [PATCH 3/3] cpuset: fix possible deadlock in async_rebuild_sched_domains Lai Jiangshan
2009-01-18  9:06       ` Ingo Molnar
2009-01-19  1:40         ` Lai Jiangshan
