From: Rusty Russell <rusty@rustcorp.com.au>
To: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>,
x86@kernel.org, LKML <linux-kernel@vger.kernel.org>,
anton@samba.org, Arnd Bergmann <arnd@arndb.de>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Mike Travis <travis@sgi.com>,
Thomas Gleixner <tglx@linutronix.de>,
Linus Torvalds <torvalds@linux-foundation.org>,
Al Viro <viro@zeniv.linux.org.uk>,
kosaki.motohiro@gmail.com
Subject: Re: [PULL] cpumask: finally make them variable size w/ CPUMASK_OFFSTACK.
Date: Thu, 10 May 2012 11:46:23 +0930 [thread overview]
Message-ID: <871umsdbm0.fsf@rustcorp.com.au> (raw)
In-Reply-To: <4FAB1AC9.1050306@gmail.com>
On Wed, 09 May 2012 21:32:57 -0400, KOSAKI Motohiro <kosaki.motohiro@gmail.com> wrote:
> (5/9/12 2:10 AM), Rusty Russell wrote:
> > Hi Ingo,
> >
> > I finally rebased this on top of your tip tree, and tested it
> > locally. Some more old-style cpumask usages have crept in, but it's a
> > fairly simple series.
> >
> > The final result is that if you enable CONFIG_CPUMASK_OFFSTACK, then
> > 'struct cpumask' becomes an undefined type. You can't accidentally take
> > the size of it, assign it, or pass it by value. And thus it's safe for
> > us to make it smaller if nr_cpu_ids < NR_CPUS, as the final patch does.
> >
> > It unfortunately requires the lglock cleanup patch, which Al already has
> > queued, so I've included it here.
>
> Hi
>
> Thanks for this effort. This is much cleaner than I expected.
> However, I have to NAK the following patch, sorry, because lru-drain is
> called from the memory reclaim context. That means the additional allocation
> may not work there. Please just use a bare NR_CPUS bitmap instead; the wasted
> space is a minor issue compared to that.
But if the allocation fails, that's fine: we just send a few more
IPIs to every CPU:
+ if (!zalloc_cpumask_var(&cpus_with_pcps, GFP_KERNEL)) {
+ on_each_cpu(drain_local_pages, NULL, 1);
+ return;
+ }
We can do it the other way, but it sets a bad example, and after we get
rid of cpumask, it becomes:
	static DECLARE_BITMAP(cpus_with_pcps, NR_CPUS);
	......
		if (has_pcps)
			cpumask_set_cpu(cpu, to_cpumask(cpus_with_pcps));
		else
			cpumask_clear_cpu(cpu, to_cpumask(cpus_with_pcps));
	}
	on_each_cpu_mask(to_cpumask(cpus_with_pcps), drain_local_pages, NULL, 1);
Or is there a reason we shouldn't even try to allocate here?
Thanks,
Rusty.