public inbox for linux-kernel@vger.kernel.org
From: Andrew Morton <akpm@linux-foundation.org>
To: Li Zefan <lizf@cn.fujitsu.com>
Cc: Ingo Molnar <mingo@elte.hu>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Mike Travis <travis@sgi.com>, Paul Menage <menage@google.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 3/6] cpuset: convert cpuset_attach() to use cpumask_var_t
Date: Mon, 5 Jan 2009 01:14:14 -0800	[thread overview]
Message-ID: <20090105011414.176a5ee3.akpm@linux-foundation.org> (raw)
In-Reply-To: <4961CD36.7070507@cn.fujitsu.com>

On Mon, 05 Jan 2009 17:04:54 +0800 Li Zefan <lizf@cn.fujitsu.com> wrote:

> Andrew Morton wrote:
> > On Mon, 05 Jan 2009 16:47:21 +0800 Li Zefan <lizf@cn.fujitsu.com> wrote:
> > 
> >> Allocate a global cpumask_var_t at boot, and use it in cpuset_attach(), so
> >> we won't fail cpuset_attach().
> >>
> >> Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
> >> Acked-by: Mike Travis <travis@sgi.com>
> >> ---
> >>  kernel/cpuset.c |   14 +++++++++-----
> >>  1 files changed, 9 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> >> index afa29cf..1e32e6b 100644
> >> --- a/kernel/cpuset.c
> >> +++ b/kernel/cpuset.c
> >> @@ -1306,6 +1306,9 @@ static int fmeter_getrate(struct fmeter *fmp)
> >>  	return val;
> >>  }
> >>  
> >> +/* Protected by cgroup_lock */
> >> +static cpumask_var_t cpus_attach;
> >> +
> >>  /* Called by cgroups to determine if a cpuset is usable; cgroup_mutex held */
> >>  static int cpuset_can_attach(struct cgroup_subsys *ss,
> >>  			     struct cgroup *cont, struct task_struct *tsk)
> >> @@ -1330,7 +1333,6 @@ static void cpuset_attach(struct cgroup_subsys *ss,
> >>  			  struct cgroup *cont, struct cgroup *oldcont,
> >>  			  struct task_struct *tsk)
> >>  {
> >> -	cpumask_t cpus;
> >>  	nodemask_t from, to;
> >>  	struct mm_struct *mm;
> >>  	struct cpuset *cs = cgroup_cs(cont);
> >> @@ -1338,13 +1340,13 @@ static void cpuset_attach(struct cgroup_subsys *ss,
> >>  	int err;
> >>  
> >>  	if (cs == &top_cpuset) {
> >> -		cpus = cpu_possible_map;
> >> +		cpumask_copy(cpus_attach, cpu_possible_mask);
> >>  	} else {
> >>  		mutex_lock(&callback_mutex);
> >> -		guarantee_online_cpus(cs, &cpus);
> >> +		guarantee_online_cpus(cs, cpus_attach);
> >>  		mutex_unlock(&callback_mutex);
> >>  	}
> >> -	err = set_cpus_allowed_ptr(tsk, &cpus);
> >> +	err = set_cpus_allowed_ptr(tsk, cpus_attach);
> >>  	if (err)
> >>  		return;
> >>  
> >> @@ -1357,7 +1359,6 @@ static void cpuset_attach(struct cgroup_subsys *ss,
> >>  			cpuset_migrate_mm(mm, &from, &to);
> >>  		mmput(mm);
> >>  	}
> >> -
> >>  }
> >>  
> >>  /* The various types of files and directories in a cpuset file system */
> >> @@ -1838,6 +1839,9 @@ int __init cpuset_init(void)
> >>  	if (err < 0)
> >>  		return err;
> >>  
> >> +	if (!alloc_cpumask_var(&cpus_attach, GFP_KERNEL))
> >> +		BUG();
> >> +
> >>  	number_of_cpusets = 1;
> >>  	return 0;
> >>  }
> > 
> > OK, that works.
> > 
> > Do we need to dynamically allocate cpus_attach?  Can we just do
> > 
> > static cpumask_t cpus_attach;
> > 
> > ?
> > 
> 
> Yes, it's used only by cpuset_attach(), and cpuset_attach() is called with
> cgroup_lock() held, so two threads can never access cpus_attach
> concurrently.

You misunderstand my question.  I think.

Can we allocate cpus_attach at compile time?  Completely, not
partially.  By doing

static cpumask_t cpus_attach;

instead of

static cpumask_var_t cpus_attach;
...
	alloc_cpumask_var(&cpus_attach, GFP_KERNEL);

?


Thread overview: 19+ messages
2008-12-31  8:34 [PATCH 0/6] cpuset: convert to new cpumask API Li Zefan
2008-12-31  8:35 ` [PATCH 1/6] cpuset: remove on stack cpumask_t in cpuset_sprintf_cpulist() Li Zefan
2008-12-31  8:35   ` [PATCH 2/6] cpuset: remove on stack cpumask_t in cpuset_can_attach() Li Zefan
2008-12-31  8:36     ` [PATCH 3/6] cpuset: convert cpuset_attach() to use cpumask_var_t Li Zefan
2008-12-31  8:36       ` [PATCH 4/6] cpuset: don't allocate trial cpuset on stack Li Zefan
2008-12-31  8:37         ` [PATCH 5/6] cpuset: convert cpuset->cpus_allowed to cpumask_var_t Li Zefan
2008-12-31  8:37           ` [PATCH 6/6] cpuset: remove remaining pointers to cpumask_t Li Zefan
2009-01-05  7:46         ` [PATCH 4/6] cpuset: don't allocate trial cpuset on stack Andrew Morton
2009-01-05  9:13           ` Li Zefan
2009-01-05  7:38       ` [PATCH 3/6] cpuset: convert cpuset_attach() to use cpumask_var_t Andrew Morton
2009-01-05  8:47         ` Li Zefan
2009-01-05  9:01           ` Andrew Morton
2009-01-05  9:04             ` Li Zefan
2009-01-05  9:14               ` Andrew Morton [this message]
2009-01-05  9:21                 ` Li Zefan
2009-01-07  2:04             ` Rusty Russell
2009-01-07 16:39           ` Paul Menage
2008-12-31 11:56 ` [PATCH 0/6] cpuset: convert to new cpumask API Mike Travis
2008-12-31 13:26 ` Rusty Russell
