From: Ingo Molnar <mingo@elte.hu>
To: Ravikiran G Thirumalai <kiran@scalex86.org>
Cc: Frederik Deweerdt <frederik.deweerdt@xprog.eu>,
andi@firstfloor.org, tglx@linutronix.de, hpa@zytor.com,
linux-kernel@vger.kernel.org
Subject: Re: [patch] tlb flush_data: replace per_cpu with an array
Date: Tue, 13 Jan 2009 00:00:52 +0100
Message-ID: <20090112230052.GB18771@elte.hu>
In-Reply-To: <20090112223421.GA20594@localdomain>
* Ravikiran G Thirumalai <kiran@scalex86.org> wrote:
> Hi Frederik,
>
> On Mon, Jan 12, 2009 at 10:35:42PM +0100, Frederik Deweerdt wrote:
> >
> >Signed-off-by: Frederik Deweerdt <frederik.deweerdt@xprog.eu>
> >
> >diff --git a/arch/x86/kernel/tlb_64.c b/arch/x86/kernel/tlb_64.c
> >index f8be6f1..c177a1f 100644
> >--- a/arch/x86/kernel/tlb_64.c
> >+++ b/arch/x86/kernel/tlb_64.c
> >@@ -33,7 +33,7 @@
> > * To avoid global state use 8 different call vectors.
> > * Each CPU uses a specific vector to trigger flushes on other
> > * CPUs. Depending on the received vector the target CPUs look into
> >- * the right per cpu variable for the flush data.
> >+ * the right array slot for the flush data.
> > *
> > * With more than 8 CPUs they are hashed to the 8 available
> > * vectors. The limited global vector space forces us to this right now.
> >@@ -54,7 +54,7 @@ union smp_flush_state {
> > /* State is put into the per CPU data section, but padded
> > to a full cache line because other CPUs can access it and we don't
> > want false sharing in the per cpu data segment. */
> >-static DEFINE_PER_CPU(union smp_flush_state, flush_state);
> >+static union smp_flush_state flush_state[NUM_INVALIDATE_TLB_VECTORS];
>
> Since flush_state has a spinlock, an array of flush_states would end up
> having multiple spinlocks on the same L1 cacheline. However, I see
> that the union smp_flush_state is already padded to SMP_CACHE_BYTES.
Correct.
> With per-cpu areas, locks belonging to different array elements did not
> end up on the same internode cacheline due to per-node allocation of
> per-cpu areas, but with a simple array, two locks could end up on the
> same internode cacheline.
>
> Hence, can you please change the padding in smp_flush_state to
> INTERNODE_CACHE_BYTES (you have to derive this off
> INTERNODE_CACHE_SHIFT) and align it using
> ____cacheline_internodealigned_in_smp?
Yes, that matters to vsmp, which has ... 4K node cache-lines.
Note that there's already CONFIG_X86_INTERNODE_CACHE_BYTES which is set
properly on vsmp.
So ... something like the commit below?
Ingo
--------------->
From 23d9dc8bffc759c131b09a48b5215cc2b37a5ac3 Mon Sep 17 00:00:00 2001
From: Frederik Deweerdt <frederik.deweerdt@xprog.eu>
Date: Mon, 12 Jan 2009 22:35:42 +0100
Subject: [PATCH] x86, tlb flush_data: replace per_cpu with an array
Impact: micro-optimization, memory reduction
On x86_64 flush tlb data is stored in per_cpu variables. This is
unnecessary because only the first NUM_INVALIDATE_TLB_VECTORS entries
are accessed.
This patch aims at making the code less confusing (there's nothing
really "per_cpu" about it) by using a plain array. It would also save
some memory on most distros out there (Ubuntu x86_64 has NR_CPUS=64 by
default).
[ Ravikiran G Thirumalai also pointed out that the correct alignment
is ____cacheline_internodealigned_in_smp, so that there's no
bouncing on vsmp. ]
Signed-off-by: Frederik Deweerdt <frederik.deweerdt@xprog.eu>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
arch/x86/kernel/tlb_64.c | 16 ++++++++--------
1 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kernel/tlb_64.c b/arch/x86/kernel/tlb_64.c
index f8be6f1..c5a6c6f 100644
--- a/arch/x86/kernel/tlb_64.c
+++ b/arch/x86/kernel/tlb_64.c
@@ -33,7 +33,7 @@
* To avoid global state use 8 different call vectors.
* Each CPU uses a specific vector to trigger flushes on other
* CPUs. Depending on the received vector the target CPUs look into
- * the right per cpu variable for the flush data.
+ * the right array slot for the flush data.
*
* With more than 8 CPUs they are hashed to the 8 available
* vectors. The limited global vector space forces us to this right now.
@@ -48,13 +48,13 @@ union smp_flush_state {
unsigned long flush_va;
spinlock_t tlbstate_lock;
};
- char pad[SMP_CACHE_BYTES];
-} ____cacheline_aligned;
+ char pad[X86_INTERNODE_CACHE_BYTES];
+} ____cacheline_internodealigned_in_smp;
/* State is put into the per CPU data section, but padded
to a full cache line because other CPUs can access it and we don't
want false sharing in the per cpu data segment. */
-static DEFINE_PER_CPU(union smp_flush_state, flush_state);
+static union smp_flush_state flush_state[NUM_INVALIDATE_TLB_VECTORS];
/*
* We cannot call mmdrop() because we are in interrupt context,
@@ -129,7 +129,7 @@ asmlinkage void smp_invalidate_interrupt(struct pt_regs *regs)
* Use that to determine where the sender put the data.
*/
sender = ~regs->orig_ax - INVALIDATE_TLB_VECTOR_START;
- f = &per_cpu(flush_state, sender);
+ f = &flush_state[sender];
if (!cpu_isset(cpu, f->flush_cpumask))
goto out;
@@ -169,7 +169,7 @@ void native_flush_tlb_others(const cpumask_t *cpumaskp, struct mm_struct *mm,
/* Caller has disabled preemption */
sender = smp_processor_id() % NUM_INVALIDATE_TLB_VECTORS;
- f = &per_cpu(flush_state, sender);
+ f = &flush_state[sender];
/*
* Could avoid this lock when
@@ -205,8 +205,8 @@ static int __cpuinit init_smp_flush(void)
{
int i;
- for_each_possible_cpu(i)
- spin_lock_init(&per_cpu(flush_state, i).tlbstate_lock);
+ for (i = 0; i < ARRAY_SIZE(flush_state); i++)
+ spin_lock_init(&flush_state[i].tlbstate_lock);
return 0;
}