From: Christoph Lameter <clameter@sgi.com>
To: akpm@linux-foundation.org
Cc: linux-arch@vger.kernel.org, Tony.Luck@intel.com,
linux-kernel@vger.kernel.org, David Miller <davem@davemloft.net>,
Eric Dumazet <dada1@cosmosbay.com>,
Peter Zijlstra <peterz@infradead.org>,
Rusty Russell <rusty@rustcorp.com.au>,
Mike Travis <travis@sgi.com>
Subject: [patch 35/41] Support for CPU ops
Date: Thu, 29 May 2008 20:56:55 -0700 [thread overview]
Message-ID: <20080530040022.786507220@sgi.com> (raw)
In-Reply-To: 20080530035620.587204923@sgi.com
[-- Attachment #1: ia64_cpu_ops --]
[-- Type: text/plain, Size: 10388 bytes --]
IA64 has no efficient atomic operations. But we can get rid of the need to
add my_percpu_offset(): the address of a per cpu variable can be used directly
on IA64, since it is mapped to a per-processor area.
This also allows us to kill off the __ia64_get_cpu_var() macro. It is nothing
but per_cpu_var().
Cc: Tony.Luck@intel.com
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
arch/ia64/Kconfig | 3
arch/ia64/kernel/perfmon.c | 2
arch/ia64/kernel/setup.c | 2
arch/ia64/kernel/smp.c | 4 -
arch/ia64/sn/kernel/setup.c | 4 -
include/asm-ia64/mmu_context.h | 6 -
include/asm-ia64/percpu.h | 133 ++++++++++++++++++++++++++++++++++++++---
include/asm-ia64/processor.h | 2
include/asm-ia64/sn/pda.h | 2
9 files changed, 138 insertions(+), 20 deletions(-)
Index: linux-2.6/include/asm-ia64/percpu.h
===================================================================
--- linux-2.6.orig/include/asm-ia64/percpu.h 2008-05-29 19:35:10.000000000 -0700
+++ linux-2.6/include/asm-ia64/percpu.h 2008-05-29 19:35:11.000000000 -0700
@@ -19,7 +19,7 @@
# define PER_CPU_ATTRIBUTES __attribute__((__model__ (__small__)))
#endif
-#define __my_cpu_offset __ia64_per_cpu_var(local_per_cpu_offset)
+#define __my_cpu_offset CPU_READ(per_cpu_var(local_per_cpu_offset))
extern void *per_cpu_init(void);
@@ -31,14 +31,6 @@ extern void *per_cpu_init(void);
#endif /* SMP */
-/*
- * Be extremely careful when taking the address of this variable! Due to virtual
- * remapping, it is different from the canonical address returned by __get_cpu_var(var)!
- * On the positive side, using __ia64_per_cpu_var() instead of __get_cpu_var() is slightly
- * more efficient.
- */
-#define __ia64_per_cpu_var(var) per_cpu__##var
-
#include <asm-generic/percpu.h>
/* Equal to __per_cpu_offset[smp_processor_id()], but faster to access: */
@@ -46,4 +38,127 @@ DECLARE_PER_CPU(unsigned long, local_per
#endif /* !__ASSEMBLY__ */
+/*
+ * Per cpu ops.
+ *
+ * IA64 has no instructions that would allow lightweight RMW operations.
+ *
+ * However, the canonical address of a per cpu variable is mapped via
+ * a processor specific TLB entry to the per cpu area of the respective
+ * processor. The THIS_CPU() macro is therefore not necessary here,
+ * since the canonical address of the per cpu variable already gives
+ * access to the instance of the per cpu variable for the current
+ * processor.
+ *
+ * Sadly we cannot simply define THIS_CPU() to return an address in
+ * the per processor mapping space, since the address acquired via
+ * THIS_CPU() may be passed to another processor.
+ */
+#define __CPU_READ(var) \
+({ \
+ (var); \
+})
+
+#define __CPU_WRITE(var, value) \
+({ \
+ (var) = (value); \
+})
+
+#define __CPU_ADD(var, value) \
+({ \
+ (var) += (value); \
+})
+
+#define __CPU_INC(var) __CPU_ADD((var), 1)
+#define __CPU_DEC(var) __CPU_ADD((var), -1)
+#define __CPU_SUB(var, value) __CPU_ADD((var), -(value))
+
+#define __CPU_CMPXCHG(var, old, new) \
+({ \
+	typeof(var) x;						\
+	typeof(var) *p = &(var);				\
+ x = *p; \
+ if (x == (old)) \
+ *p = (new); \
+ (x); \
+})
+
+#define __CPU_XCHG(obj, new) \
+({ \
+ typeof(obj) x; \
+ typeof(obj) *p = &(obj); \
+ x = *p; \
+ *p = (new); \
+ (x); \
+})
+
+#define _CPU_READ __CPU_READ
+#define _CPU_WRITE __CPU_WRITE
+
+#define _CPU_ADD(var, value) \
+({ \
+ preempt_disable(); \
+ __CPU_ADD((var), (value)); \
+ preempt_enable(); \
+})
+
+#define _CPU_INC(var) _CPU_ADD((var), 1)
+#define _CPU_DEC(var) _CPU_ADD((var), -1)
+#define _CPU_SUB(var, value) _CPU_ADD((var), -(value))
+
+#define _CPU_CMPXCHG(var, old, new) \
+({ \
+	typeof(var) x;						\
+ preempt_disable(); \
+ x = __CPU_CMPXCHG((var), (old), (new)); \
+ preempt_enable(); \
+ (x); \
+})
+
+#define _CPU_XCHG(var, new) \
+({ \
+ typeof(var) x; \
+ preempt_disable(); \
+ x = __CPU_XCHG((var), (new)); \
+ preempt_enable(); \
+ (x); \
+})
+
+/*
+ * Third group: Interrupt safe CPU functions
+ */
+#define CPU_READ __CPU_READ
+#define CPU_WRITE __CPU_WRITE
+
+#define CPU_ADD(var, value) \
+({ \
+ unsigned long flags; \
+ local_irq_save(flags); \
+ __CPU_ADD((var), (value)); \
+ local_irq_restore(flags); \
+})
+
+#define CPU_INC(var) CPU_ADD((var), 1)
+#define CPU_DEC(var) CPU_ADD((var), -1)
+#define CPU_SUB(var, value) CPU_ADD((var), -(value))
+
+#define CPU_CMPXCHG(var, old, new) \
+({ \
+ unsigned long flags; \
+ typeof(var) x; \
+ local_irq_save(flags); \
+ x = __CPU_CMPXCHG((var), (old), (new)); \
+ local_irq_restore(flags); \
+ (x); \
+})
+
+#define CPU_XCHG(var, new) \
+({ \
+ unsigned long flags; \
+ typeof(var) x; \
+ local_irq_save(flags); \
+ x = __CPU_XCHG((var), (new)); \
+ local_irq_restore(flags); \
+ (x); \
+})
+
#endif /* _ASM_IA64_PERCPU_H */
Index: linux-2.6/arch/ia64/kernel/perfmon.c
===================================================================
--- linux-2.6.orig/arch/ia64/kernel/perfmon.c 2008-05-29 19:35:09.000000000 -0700
+++ linux-2.6/arch/ia64/kernel/perfmon.c 2008-05-29 19:35:13.000000000 -0700
@@ -576,7 +576,7 @@ static struct ctl_table_header *pfm_sysc
static int pfm_context_unload(pfm_context_t *ctx, void *arg, int count, struct pt_regs *regs);
-#define pfm_get_cpu_var(v) __ia64_per_cpu_var(v)
+#define pfm_get_cpu_var(v) per_cpu_var(v)
#define pfm_get_cpu_data(a,b) per_cpu(a, b)
static inline void
Index: linux-2.6/arch/ia64/kernel/setup.c
===================================================================
--- linux-2.6.orig/arch/ia64/kernel/setup.c 2008-05-29 19:35:09.000000000 -0700
+++ linux-2.6/arch/ia64/kernel/setup.c 2008-05-29 19:35:11.000000000 -0700
@@ -925,7 +925,7 @@ cpu_init (void)
* depends on the data returned by identify_cpu(). We break the dependency by
* accessing cpu_data() through the canonical per-CPU address.
*/
- cpu_info = cpu_data + ((char *) &__ia64_per_cpu_var(cpu_info) - __per_cpu_start);
+ cpu_info = cpu_data + ((char *)&per_cpu_var(cpu_info) - __per_cpu_start);
identify_cpu(cpu_info);
#ifdef CONFIG_MCKINLEY
Index: linux-2.6/arch/ia64/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/ia64/kernel/smp.c 2008-05-29 19:35:09.000000000 -0700
+++ linux-2.6/arch/ia64/kernel/smp.c 2008-05-29 19:35:11.000000000 -0700
@@ -150,7 +150,7 @@ irqreturn_t
handle_IPI (int irq, void *dev_id)
{
int this_cpu = get_cpu();
- unsigned long *pending_ipis = &__ia64_per_cpu_var(ipi_operation);
+ unsigned long *pending_ipis = &per_cpu_var(ipi_operation);
unsigned long ops;
mb(); /* Order interrupt and bit testing. */
@@ -303,7 +303,7 @@ smp_local_flush_tlb(void)
void
smp_flush_tlb_cpumask(cpumask_t xcpumask)
{
- unsigned int *counts = __ia64_per_cpu_var(shadow_flush_counts);
+ unsigned int *counts = per_cpu_var(shadow_flush_counts);
cpumask_t cpumask = xcpumask;
int mycpu, cpu, flush_mycpu = 0;
Index: linux-2.6/arch/ia64/sn/kernel/setup.c
===================================================================
--- linux-2.6.orig/arch/ia64/sn/kernel/setup.c 2008-05-29 19:35:09.000000000 -0700
+++ linux-2.6/arch/ia64/sn/kernel/setup.c 2008-05-29 19:35:11.000000000 -0700
@@ -645,7 +645,7 @@ void __cpuinit sn_cpu_init(void)
/* copy cpu 0's sn_cnodeid_to_nasid table to this cpu's */
memcpy(sn_cnodeid_to_nasid,
(&per_cpu(__sn_cnodeid_to_nasid, 0)),
- sizeof(__ia64_per_cpu_var(__sn_cnodeid_to_nasid)));
+ sizeof(per_cpu_var(__sn_cnodeid_to_nasid)));
}
/*
@@ -706,7 +706,7 @@ void __init build_cnode_tables(void)
memset(physical_node_map, -1, sizeof(physical_node_map));
memset(sn_cnodeid_to_nasid, -1,
- sizeof(__ia64_per_cpu_var(__sn_cnodeid_to_nasid)));
+ sizeof(per_cpu_var(__sn_cnodeid_to_nasid)));
/*
* First populate the tables with C/M bricks. This ensures that
Index: linux-2.6/include/asm-ia64/mmu_context.h
===================================================================
--- linux-2.6.orig/include/asm-ia64/mmu_context.h 2008-05-29 19:35:10.000000000 -0700
+++ linux-2.6/include/asm-ia64/mmu_context.h 2008-05-29 19:35:13.000000000 -0700
@@ -64,11 +64,11 @@ delayed_tlb_flush (void)
extern void local_flush_tlb_all (void);
unsigned long flags;
- if (unlikely(__ia64_per_cpu_var(ia64_need_tlb_flush))) {
+ if (unlikely(CPU_READ(per_cpu_var(ia64_need_tlb_flush)))) {
spin_lock_irqsave(&ia64_ctx.lock, flags);
- if (__ia64_per_cpu_var(ia64_need_tlb_flush)) {
+ if (CPU_READ(per_cpu_var(ia64_need_tlb_flush))) {
local_flush_tlb_all();
- __ia64_per_cpu_var(ia64_need_tlb_flush) = 0;
+ CPU_WRITE(per_cpu_var(ia64_need_tlb_flush), 0);
}
spin_unlock_irqrestore(&ia64_ctx.lock, flags);
}
Index: linux-2.6/include/asm-ia64/processor.h
===================================================================
--- linux-2.6.orig/include/asm-ia64/processor.h 2008-05-29 19:35:09.000000000 -0700
+++ linux-2.6/include/asm-ia64/processor.h 2008-05-29 19:35:11.000000000 -0700
@@ -237,7 +237,7 @@ DECLARE_PER_CPU(struct cpuinfo_ia64, cpu
* Do not use the address of local_cpu_data, since it will be different from
* cpu_data(smp_processor_id())!
*/
-#define local_cpu_data (&__ia64_per_cpu_var(cpu_info))
+#define local_cpu_data (&per_cpu_var(cpu_info))
#define cpu_data(cpu) (&per_cpu(cpu_info, cpu))
extern void print_cpu_info (struct cpuinfo_ia64 *);
Index: linux-2.6/include/asm-ia64/sn/pda.h
===================================================================
--- linux-2.6.orig/include/asm-ia64/sn/pda.h 2008-05-29 19:35:10.000000000 -0700
+++ linux-2.6/include/asm-ia64/sn/pda.h 2008-05-29 19:35:11.000000000 -0700
@@ -62,7 +62,7 @@ typedef struct pda_s {
*/
DECLARE_PER_CPU(struct pda_s, pda_percpu);
-#define pda (&__ia64_per_cpu_var(pda_percpu))
+#define pda (&per_cpu_var(pda_percpu))
#define pdacpu(cpu) (&per_cpu(pda_percpu, cpu))
Index: linux-2.6/arch/ia64/Kconfig
===================================================================
--- linux-2.6.orig/arch/ia64/Kconfig 2008-05-29 19:35:09.000000000 -0700
+++ linux-2.6/arch/ia64/Kconfig 2008-05-29 19:35:11.000000000 -0700
@@ -92,6 +92,9 @@ config GENERIC_TIME_VSYSCALL
config HAVE_SETUP_PER_CPU_AREA
def_bool y
+config HAVE_CPU_OPS
+ def_bool y
+
config DMI
bool
default y
--