From: Tejun Heo <tj@kernel.org>
To: linux-kernel@vger.kernel.org, x86@kernel.org,
linux-arch@vger.kernel.org, mingo@elte.hu, kyle@mcmartin.ca,
cl@linux-foundation.org, Jesper.Nilsson@axis.com,
benh@kernel.crashing.org
Cc: Tejun Heo <tj@kernel.org>, Nick Piggin <npiggin@suse.de>
Subject: [PATCH 1/9] percpu: fix too lazy vunmap cache flushing
Date: Wed, 17 Jun 2009 12:40:52 +0900 [thread overview]
Message-ID: <1245210060-24344-2-git-send-email-tj@kernel.org> (raw)
In-Reply-To: <1245210060-24344-1-git-send-email-tj@kernel.org>
In pcpu_unmap(), flushing the virtual cache on vunmap can't be delayed,
as the pages are about to be returned to the page allocator. Only the
TLB flush can be put off, so that the vmalloc code can handle it
lazily. Fix it.
[ Impact: fix subtle virtual cache flush bug ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
---
mm/percpu.c | 11 +++++------
1 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index c0b2c1a..d06f474 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -549,14 +549,14 @@ static void pcpu_free_area(struct pcpu_chunk *chunk, int freeme)
* @chunk: chunk of interest
* @page_start: page index of the first page to unmap
* @page_end: page index of the last page to unmap + 1
- * @flush: whether to flush cache and tlb or not
+ * @flush_tlb: whether to flush tlb or not
*
* For each cpu, unmap pages [@page_start,@page_end) out of @chunk.
* If @flush is true, vcache is flushed before unmapping and tlb
* after.
*/
static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
- bool flush)
+ bool flush_tlb)
{
unsigned int last = num_possible_cpus() - 1;
unsigned int cpu;
@@ -569,9 +569,8 @@ static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
* the whole region at once rather than doing it for each cpu.
* This could be an overkill but is more scalable.
*/
- if (flush)
- flush_cache_vunmap(pcpu_chunk_addr(chunk, 0, page_start),
- pcpu_chunk_addr(chunk, last, page_end));
+ flush_cache_vunmap(pcpu_chunk_addr(chunk, 0, page_start),
+ pcpu_chunk_addr(chunk, last, page_end));
for_each_possible_cpu(cpu)
unmap_kernel_range_noflush(
@@ -579,7 +578,7 @@ static void pcpu_unmap(struct pcpu_chunk *chunk, int page_start, int page_end,
(page_end - page_start) << PAGE_SHIFT);
/* ditto as flush_cache_vunmap() */
- if (flush)
+ if (flush_tlb)
flush_tlb_kernel_range(pcpu_chunk_addr(chunk, 0, page_start),
pcpu_chunk_addr(chunk, last, page_end));
}
--
1.6.0.2