From: Tejun Heo <tj@kernel.org>
To: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>, Brian Gerst <brgerst@gmail.com>,
ebiederm@xmission.com, cl@linux-foundation.org,
rusty@rustcorp.com.au, travis@sgi.com,
linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
steiner@sgi.com, hugh@veritas.com
Subject: Re: [patch] add optimized generic percpu accessors
Date: Thu, 15 Jan 2009 22:04:43 +0900
Message-ID: <496F346B.6020804@kernel.org>
In-Reply-To: <20090115122611.GJ22850@elte.hu>
Ingo Molnar wrote:
>> Yeah, I don't think we can do much better than those ops. I have two
>> issues tho.
>>
>> 1. percpu_and() is missing. I added it for completeness's sake.
>
> Sure - it would be commonly used as well. Perhaps we don't need
> percpu_xor() at all? (OR and AND ops already give a complete algebra)
Maybe... I don't know.
>> 2. The generic percpu_op() should be local to the cpu, so it should
>> expand to...
>>
>> do { get_cpu_var(var) OP (val); put_cpu_var(var) } while (0)
>>
>> as the original x86_OP_percpu() did. Right?
>>
>> Thanks.
>
> hm, that removes much of its appeal - a preempt off+on sequence is quite
> bloaty. Most percpu usage sites are already within critical sections.
>
> I think they are more analogous to per_cpu(var, cpu), which does not
> disable preemption either. There's no 'get/put' in them, which signals
> that they don't auto-disable preemption.
Yeah, that's exactly why x86 has those specific ops in the first place:
if they can be made a single instruction, there's no need to mess with
preemption at all, which reduces the cost of the operations greatly.
But the generic implementation has to be guarded by preemption
disable/enable to keep the same semantics (being preempted between the
load and store instructions implementing +=, for example, can be
disastrous).
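To illustrate (nr_foo below is just a made-up per-cpu counter, not
anything in this patch), the generic read-modify-write is only safe if
preemption stays off across both the load and the store:

	/* what percpu_add(nr_foo, 1) boils down to without protection */
	unsigned long tmp;

	tmp = __get_cpu_var(nr_foo);		/* load this CPU's copy */
	/* preempted here: we may resume on another CPU, so the store
	   below writes a stale value into the wrong CPU's copy, or
	   silently drops an update made by whoever ran in between */
	__get_cpu_var(nr_foo) = tmp + 1;	/* store */

	/* hence the generic fallback brackets the whole operation */
	do {
		get_cpu_var(nr_foo) += 1;	/* preemption off across the op */
		put_cpu_var(nr_foo);
	} while (0);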
From the comment I added on top of the generic ops...
/*
* Optional methods for optimized non-lvalue per-cpu variable access.
*
* @var can be a percpu variable or a field of it and its size should
* equal char, int or long. percpu_read() evaluates to a lvalue and
* all others to void.
*
* These operations are guaranteed to be atomic w.r.t. preemption.
* The generic versions use plain get/put_cpu_var(). Archs are
* encouraged to implement single-instruction alternatives which don't
* require preemption protection.
*/
And the updated patch looks like...
---
From: Ingo Molnar <mingo@elte.hu>
Subject: percpu: add optimized generic percpu accessors
It is an optimization and a cleanup, and adds the following new
generic percpu methods:
percpu_read()
percpu_write()
percpu_add()
percpu_sub()
percpu_or()
percpu_xor()
and implements support for them on x86. (Other architectures will fall
back to a default implementation.)
The advantage is that, for example, reading a local percpu variable
currently requires this sequence:
return __get_cpu_var(var);
ffffffff8102ca2b: 48 8b 14 fd 80 09 74 mov -0x7e8bf680(,%rdi,8),%rdx
ffffffff8102ca32: 81
ffffffff8102ca33: 48 c7 c0 d8 59 00 00 mov $0x59d8,%rax
ffffffff8102ca3a: 48 8b 04 10 mov (%rax,%rdx,1),%rax
We can get a single instruction by using the optimized variants:
return percpu_read(var);
ffffffff8102ca3f: 65 48 8b 05 91 8f fd mov %gs:0x7efd8f91(%rip),%rax
I also cleaned up the x86-specific APIs and made the x86 code use
these new generic percpu primitives.
tj: * fixed generic percpu_sub() definition as Roel Kluin pointed out
* added percpu_and() for completeness's sake
* made generic percpu ops atomic against preemption
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Roel Kluin <roel.kluin@gmail.com>
---
arch/x86/include/asm/current.h | 2 -
arch/x86/include/asm/irq_regs_32.h | 4 +-
arch/x86/include/asm/mmu_context_32.h | 12 +++----
arch/x86/include/asm/pda.h | 10 +++---
arch/x86/include/asm/percpu.h | 24 ++++++++-------
arch/x86/include/asm/smp.h | 2 -
arch/x86/kernel/process_32.c | 2 -
arch/x86/kernel/tlb_32.c | 10 +++---
arch/x86/mach-voyager/voyager_smp.c | 4 +-
arch/x86/xen/enlighten.c | 14 ++++-----
arch/x86/xen/irq.c | 8 ++---
arch/x86/xen/mmu.c | 2 -
arch/x86/xen/multicalls.h | 2 -
arch/x86/xen/smp.c | 2 -
include/asm-generic/percpu.h | 52 ++++++++++++++++++++++++++++++++++
15 files changed, 102 insertions(+), 48 deletions(-)
Index: work/arch/x86/include/asm/current.h
===================================================================
--- work.orig/arch/x86/include/asm/current.h
+++ work/arch/x86/include/asm/current.h
@@ -10,7 +10,7 @@ struct task_struct;
DECLARE_PER_CPU(struct task_struct *, current_task);
static __always_inline struct task_struct *get_current(void)
{
- return x86_read_percpu(current_task);
+ return percpu_read(current_task);
}
#else /* X86_32 */
Index: work/arch/x86/include/asm/irq_regs_32.h
===================================================================
--- work.orig/arch/x86/include/asm/irq_regs_32.h
+++ work/arch/x86/include/asm/irq_regs_32.h
@@ -15,7 +15,7 @@ DECLARE_PER_CPU(struct pt_regs *, irq_re
static inline struct pt_regs *get_irq_regs(void)
{
- return x86_read_percpu(irq_regs);
+ return percpu_read(irq_regs);
}
static inline struct pt_regs *set_irq_regs(struct pt_regs *new_regs)
@@ -23,7 +23,7 @@ static inline struct pt_regs *set_irq_re
struct pt_regs *old_regs;
old_regs = get_irq_regs();
- x86_write_percpu(irq_regs, new_regs);
+ percpu_write(irq_regs, new_regs);
return old_regs;
}
Index: work/arch/x86/include/asm/mmu_context_32.h
===================================================================
--- work.orig/arch/x86/include/asm/mmu_context_32.h
+++ work/arch/x86/include/asm/mmu_context_32.h
@@ -4,8 +4,8 @@
static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
#ifdef CONFIG_SMP
- if (x86_read_percpu(cpu_tlbstate.state) == TLBSTATE_OK)
- x86_write_percpu(cpu_tlbstate.state, TLBSTATE_LAZY);
+ if (percpu_read(cpu_tlbstate.state) == TLBSTATE_OK)
+ percpu_write(cpu_tlbstate.state, TLBSTATE_LAZY);
#endif
}
@@ -19,8 +19,8 @@ static inline void switch_mm(struct mm_s
/* stop flush ipis for the previous mm */
cpu_clear(cpu, prev->cpu_vm_mask);
#ifdef CONFIG_SMP
- x86_write_percpu(cpu_tlbstate.state, TLBSTATE_OK);
- x86_write_percpu(cpu_tlbstate.active_mm, next);
+ percpu_write(cpu_tlbstate.state, TLBSTATE_OK);
+ percpu_write(cpu_tlbstate.active_mm, next);
#endif
cpu_set(cpu, next->cpu_vm_mask);
@@ -35,8 +35,8 @@ static inline void switch_mm(struct mm_s
}
#ifdef CONFIG_SMP
else {
- x86_write_percpu(cpu_tlbstate.state, TLBSTATE_OK);
- BUG_ON(x86_read_percpu(cpu_tlbstate.active_mm) != next);
+ percpu_write(cpu_tlbstate.state, TLBSTATE_OK);
+ BUG_ON(percpu_read(cpu_tlbstate.active_mm) != next);
if (!cpu_test_and_set(cpu, next->cpu_vm_mask)) {
/* We were in lazy tlb mode and leave_mm disabled
Index: work/arch/x86/include/asm/pda.h
===================================================================
--- work.orig/arch/x86/include/asm/pda.h
+++ work/arch/x86/include/asm/pda.h
@@ -45,11 +45,11 @@ extern void pda_init(int);
#define cpu_pda(cpu) (&per_cpu(__pda, cpu))
-#define read_pda(field) x86_read_percpu(__pda.field)
-#define write_pda(field, val) x86_write_percpu(__pda.field, val)
-#define add_pda(field, val) x86_add_percpu(__pda.field, val)
-#define sub_pda(field, val) x86_sub_percpu(__pda.field, val)
-#define or_pda(field, val) x86_or_percpu(__pda.field, val)
+#define read_pda(field) percpu_read(__pda.field)
+#define write_pda(field, val) percpu_write(__pda.field, val)
+#define add_pda(field, val) percpu_add(__pda.field, val)
+#define sub_pda(field, val) percpu_sub(__pda.field, val)
+#define or_pda(field, val) percpu_or(__pda.field, val)
/* This is not atomic against other CPUs -- CPU preemption needs to be off */
#define test_and_clear_bit_pda(bit, field) \
Index: work/arch/x86/include/asm/percpu.h
===================================================================
--- work.orig/arch/x86/include/asm/percpu.h
+++ work/arch/x86/include/asm/percpu.h
@@ -40,16 +40,11 @@
#ifdef CONFIG_SMP
#define __percpu_seg_str "%%"__stringify(__percpu_seg)":"
-#define __my_cpu_offset x86_read_percpu(this_cpu_off)
+#define __my_cpu_offset percpu_read(this_cpu_off)
#else
#define __percpu_seg_str
#endif
-#include <asm-generic/percpu.h>
-
-/* We can use this directly for local CPU (faster). */
-DECLARE_PER_CPU(unsigned long, this_cpu_off);
-
/* For arch-specific code, we can use direct single-insn ops (they
* don't give an lvalue though). */
extern void __bad_percpu_size(void);
@@ -115,11 +110,13 @@ do { \
ret__; \
})
-#define x86_read_percpu(var) percpu_from_op("mov", per_cpu__##var)
-#define x86_write_percpu(var, val) percpu_to_op("mov", per_cpu__##var, val)
-#define x86_add_percpu(var, val) percpu_to_op("add", per_cpu__##var, val)
-#define x86_sub_percpu(var, val) percpu_to_op("sub", per_cpu__##var, val)
-#define x86_or_percpu(var, val) percpu_to_op("or", per_cpu__##var, val)
+#define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
+#define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
+#define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
+#define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
+#define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
+#define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
+#define percpu_xor(var, val) percpu_to_op("xor", per_cpu__##var, val)
/* This is not atomic against other CPUs -- CPU preemption needs to be off */
#define x86_test_and_clear_bit_percpu(bit, var) \
@@ -131,6 +128,11 @@ do { \
old__; \
})
+#include <asm-generic/percpu.h>
+
+/* We can use this directly for local CPU (faster). */
+DECLARE_PER_CPU(unsigned long, this_cpu_off);
+
#ifdef CONFIG_X86_64
extern void load_pda_offset(int cpu);
#else
Index: work/arch/x86/include/asm/smp.h
===================================================================
--- work.orig/arch/x86/include/asm/smp.h
+++ work/arch/x86/include/asm/smp.h
@@ -163,7 +163,7 @@ extern unsigned disabled_cpus __cpuinitd
* from the initial startup. We map APIC_BASE very early in page_setup(),
* so this is correct in the x86 case.
*/
-#define raw_smp_processor_id() (x86_read_percpu(cpu_number))
+#define raw_smp_processor_id() (percpu_read(cpu_number))
extern int safe_smp_processor_id(void);
#elif defined(CONFIG_X86_64_SMP)
Index: work/arch/x86/kernel/process_32.c
===================================================================
--- work.orig/arch/x86/kernel/process_32.c
+++ work/arch/x86/kernel/process_32.c
@@ -592,7 +592,7 @@ __switch_to(struct task_struct *prev_p,
if (prev->gs | next->gs)
loadsegment(gs, next->gs);
- x86_write_percpu(current_task, next_p);
+ percpu_write(current_task, next_p);
return prev_p;
}
Index: work/arch/x86/kernel/tlb_32.c
===================================================================
--- work.orig/arch/x86/kernel/tlb_32.c
+++ work/arch/x86/kernel/tlb_32.c
@@ -34,8 +34,8 @@ static DEFINE_SPINLOCK(tlbstate_lock);
*/
void leave_mm(int cpu)
{
- BUG_ON(x86_read_percpu(cpu_tlbstate.state) == TLBSTATE_OK);
- cpu_clear(cpu, x86_read_percpu(cpu_tlbstate.active_mm)->cpu_vm_mask);
+ BUG_ON(percpu_read(cpu_tlbstate.state) == TLBSTATE_OK);
+ cpu_clear(cpu, percpu_read(cpu_tlbstate.active_mm)->cpu_vm_mask);
load_cr3(swapper_pg_dir);
}
EXPORT_SYMBOL_GPL(leave_mm);
@@ -103,8 +103,8 @@ void smp_invalidate_interrupt(struct pt_
* BUG();
*/
- if (flush_mm == x86_read_percpu(cpu_tlbstate.active_mm)) {
- if (x86_read_percpu(cpu_tlbstate.state) == TLBSTATE_OK) {
+ if (flush_mm == percpu_read(cpu_tlbstate.active_mm)) {
+ if (percpu_read(cpu_tlbstate.state) == TLBSTATE_OK) {
if (flush_va == TLB_FLUSH_ALL)
local_flush_tlb();
else
@@ -237,7 +237,7 @@ static void do_flush_tlb_all(void *info)
unsigned long cpu = smp_processor_id();
__flush_tlb_all();
- if (x86_read_percpu(cpu_tlbstate.state) == TLBSTATE_LAZY)
+ if (percpu_read(cpu_tlbstate.state) == TLBSTATE_LAZY)
leave_mm(cpu);
}
Index: work/arch/x86/mach-voyager/voyager_smp.c
===================================================================
--- work.orig/arch/x86/mach-voyager/voyager_smp.c
+++ work/arch/x86/mach-voyager/voyager_smp.c
@@ -410,7 +410,7 @@ void __init find_smp_config(void)
VOYAGER_SUS_IN_CONTROL_PORT);
current_thread_info()->cpu = boot_cpu_id;
- x86_write_percpu(cpu_number, boot_cpu_id);
+ percpu_write(cpu_number, boot_cpu_id);
}
/*
@@ -1790,7 +1790,7 @@ static void __init voyager_smp_cpus_done
void __init smp_setup_processor_id(void)
{
current_thread_info()->cpu = hard_smp_processor_id();
- x86_write_percpu(cpu_number, hard_smp_processor_id());
+ percpu_write(cpu_number, hard_smp_processor_id());
}
static void voyager_send_call_func(cpumask_t callmask)
Index: work/arch/x86/xen/enlighten.c
===================================================================
--- work.orig/arch/x86/xen/enlighten.c
+++ work/arch/x86/xen/enlighten.c
@@ -702,17 +702,17 @@ static void xen_write_cr0(unsigned long
static void xen_write_cr2(unsigned long cr2)
{
- x86_read_percpu(xen_vcpu)->arch.cr2 = cr2;
+ percpu_read(xen_vcpu)->arch.cr2 = cr2;
}
static unsigned long xen_read_cr2(void)
{
- return x86_read_percpu(xen_vcpu)->arch.cr2;
+ return percpu_read(xen_vcpu)->arch.cr2;
}
static unsigned long xen_read_cr2_direct(void)
{
- return x86_read_percpu(xen_vcpu_info.arch.cr2);
+ return percpu_read(xen_vcpu_info.arch.cr2);
}
static void xen_write_cr4(unsigned long cr4)
@@ -725,12 +725,12 @@ static void xen_write_cr4(unsigned long
static unsigned long xen_read_cr3(void)
{
- return x86_read_percpu(xen_cr3);
+ return percpu_read(xen_cr3);
}
static void set_current_cr3(void *v)
{
- x86_write_percpu(xen_current_cr3, (unsigned long)v);
+ percpu_write(xen_current_cr3, (unsigned long)v);
}
static void __xen_write_cr3(bool kernel, unsigned long cr3)
@@ -755,7 +755,7 @@ static void __xen_write_cr3(bool kernel,
MULTI_mmuext_op(mcs.mc, op, 1, NULL, DOMID_SELF);
if (kernel) {
- x86_write_percpu(xen_cr3, cr3);
+ percpu_write(xen_cr3, cr3);
/* Update xen_current_cr3 once the batch has actually
been submitted. */
@@ -771,7 +771,7 @@ static void xen_write_cr3(unsigned long
/* Update while interrupts are disabled, so its atomic with
respect to ipis */
- x86_write_percpu(xen_cr3, cr3);
+ percpu_write(xen_cr3, cr3);
__xen_write_cr3(true, cr3);
Index: work/arch/x86/xen/irq.c
===================================================================
--- work.orig/arch/x86/xen/irq.c
+++ work/arch/x86/xen/irq.c
@@ -39,7 +39,7 @@ static unsigned long xen_save_fl(void)
struct vcpu_info *vcpu;
unsigned long flags;
- vcpu = x86_read_percpu(xen_vcpu);
+ vcpu = percpu_read(xen_vcpu);
/* flag has opposite sense of mask */
flags = !vcpu->evtchn_upcall_mask;
@@ -62,7 +62,7 @@ static void xen_restore_fl(unsigned long
make sure we're don't switch CPUs between getting the vcpu
pointer and updating the mask. */
preempt_disable();
- vcpu = x86_read_percpu(xen_vcpu);
+ vcpu = percpu_read(xen_vcpu);
vcpu->evtchn_upcall_mask = flags;
preempt_enable_no_resched();
@@ -83,7 +83,7 @@ static void xen_irq_disable(void)
make sure we're don't switch CPUs between getting the vcpu
pointer and updating the mask. */
preempt_disable();
- x86_read_percpu(xen_vcpu)->evtchn_upcall_mask = 1;
+ percpu_read(xen_vcpu)->evtchn_upcall_mask = 1;
preempt_enable_no_resched();
}
@@ -96,7 +96,7 @@ static void xen_irq_enable(void)
the caller is confused and is trying to re-enable interrupts
on an indeterminate processor. */
- vcpu = x86_read_percpu(xen_vcpu);
+ vcpu = percpu_read(xen_vcpu);
vcpu->evtchn_upcall_mask = 0;
/* Doesn't matter if we get preempted here, because any
Index: work/arch/x86/xen/mmu.c
===================================================================
--- work.orig/arch/x86/xen/mmu.c
+++ work/arch/x86/xen/mmu.c
@@ -1074,7 +1074,7 @@ static void drop_other_mm_ref(void *info
/* If this cpu still has a stale cr3 reference, then make sure
it has been flushed. */
- if (x86_read_percpu(xen_current_cr3) == __pa(mm->pgd)) {
+ if (percpu_read(xen_current_cr3) == __pa(mm->pgd)) {
load_cr3(swapper_pg_dir);
arch_flush_lazy_cpu_mode();
}
Index: work/arch/x86/xen/multicalls.h
===================================================================
--- work.orig/arch/x86/xen/multicalls.h
+++ work/arch/x86/xen/multicalls.h
@@ -39,7 +39,7 @@ static inline void xen_mc_issue(unsigned
xen_mc_flush();
/* restore flags saved in xen_mc_batch */
- local_irq_restore(x86_read_percpu(xen_mc_irq_flags));
+ local_irq_restore(percpu_read(xen_mc_irq_flags));
}
/* Set up a callback to be called when the current batch is flushed */
Index: work/arch/x86/xen/smp.c
===================================================================
--- work.orig/arch/x86/xen/smp.c
+++ work/arch/x86/xen/smp.c
@@ -78,7 +78,7 @@ static __cpuinit void cpu_bringup(void)
xen_setup_cpu_clockevents();
cpu_set(cpu, cpu_online_map);
- x86_write_percpu(cpu_state, CPU_ONLINE);
+ percpu_write(cpu_state, CPU_ONLINE);
wmb();
/* We can take interrupts now: we're officially "up". */
Index: work/include/asm-generic/percpu.h
===================================================================
--- work.orig/include/asm-generic/percpu.h
+++ work/include/asm-generic/percpu.h
@@ -80,4 +80,56 @@ extern void setup_per_cpu_areas(void);
#define DECLARE_PER_CPU(type, name) extern PER_CPU_ATTRIBUTES \
__typeof__(type) per_cpu_var(name)
+/*
+ * Optional methods for optimized non-lvalue per-cpu variable access.
+ *
+ * @var can be a percpu variable or a field of it and its size should
+ * equal char, int or long. percpu_read() evaluates to a lvalue and
+ * all others to void.
+ *
+ * These operations are guaranteed to be atomic w.r.t. preemption.
+ * The generic versions use plain get/put_cpu_var(). Archs are
+ * encouraged to implement single-instruction alternatives which don't
+ * require preemption protection.
+ */
+#ifndef percpu_read
+# define percpu_read(var) \
+ ({ \
+ typeof(per_cpu_var(var)) __tmp_var__; \
+ __tmp_var__ = get_cpu_var(var); \
+ put_cpu_var(var); \
+ __tmp_var__; \
+ })
+#endif
+
+#define __percpu_generic_to_op(var, val, op) \
+do { \
+ get_cpu_var(var) op val; \
+ put_cpu_var(var); \
+} while (0)
+
+#ifndef percpu_write
+# define percpu_write(var, val) __percpu_generic_to_op(var, (val), =)
+#endif
+
+#ifndef percpu_add
+# define percpu_add(var, val) __percpu_generic_to_op(var, (val), +=)
+#endif
+
+#ifndef percpu_sub
+# define percpu_sub(var, val) __percpu_generic_to_op(var, (val), -=)
+#endif
+
+#ifndef percpu_and
+# define percpu_and(var, val) __percpu_generic_to_op(var, (val), &=)
+#endif
+
+#ifndef percpu_or
+# define percpu_or(var, val) __percpu_generic_to_op(var, (val), |=)
+#endif
+
+#ifndef percpu_xor
+# define percpu_xor(var, val) __percpu_generic_to_op(var, (val), ^=)
+#endif
+
#endif /* _ASM_GENERIC_PERCPU_H_ */