From: Christoph Lameter <clameter@sgi.com>
To: ak@suse.de
Cc: akpm@linux-foundation.org
Cc: travis@sgi.com
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: linux-kernel@vger.kernel.org
Subject: [rfc 02/45] cpu alloc: Simple version of the allocator (static allocations)
Date: Mon, 19 Nov 2007 17:11:34 -0800
Message-ID: <20071120011332.182511436@sgi.com>
In-Reply-To: 20071120011132.143632442@sgi.com
The core portion of the cpu allocator.
The per cpu allocator allows dynamic allocation of memory on all
processors simultaneously. A bitmap is used to track used areas.
The allocator packs objects tightly to reduce the cache footprint
and increase speed, since cacheline contention is typically not a concern
for memory mainly used by a single cpu. Small objects fill up the gaps
left by larger allocations that required alignment.
This is a limited version of the cpu allocator that only performs a
static allocation of a single page for each processor. This is enough
for the use of the cpu allocator in the slab and page allocators for most
of the common configurations. The configuration is useful for
embedded systems to reduce memory requirements. However, there is a hard limit
on the size of the per cpu structures, so the default configuration of an
order 0 allocation can only support up to 150 slab caches (most systems
I have seen use about 70) and probably not more than 16 or so NUMA nodes. The
size of the statically configured area can be changed via make menuconfig etc.
The cpu allocator virtualization patch is needed in order to support
dynamically extendable per cpu areas.
V1->V2:
- Split off the dynamically extendable cpu area feature to make it clear that it exists.
- Remove useless variables.
- Add boot_cpu_alloc for boot-time cpu area reservations (allows the folding in of
per cpu areas and other arch specific per cpu stuff during boot).
Signed-off-by: Christoph Lameter <clameter@sgi.com>
---
include/linux/percpu.h | 78 +++++++++++++++++++
include/linux/vmstat.h | 2
mm/Kconfig | 7 +
mm/Makefile | 2
mm/cpu_alloc.c | 192 +++++++++++++++++++++++++++++++++++++++++++++++++
mm/vmstat.c | 1
6 files changed, 280 insertions(+), 2 deletions(-)
create mode 100644 include/linux/cpu_alloc.h
create mode 100644 mm/cpu_alloc.c
Index: linux-2.6/include/linux/vmstat.h
===================================================================
--- linux-2.6.orig/include/linux/vmstat.h 2007-11-18 22:07:35.588274285 -0800
+++ linux-2.6/include/linux/vmstat.h 2007-11-18 22:07:49.864273686 -0800
@@ -36,7 +36,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PS
FOR_ALL_ZONES(PGSCAN_KSWAPD),
FOR_ALL_ZONES(PGSCAN_DIRECT),
PGINODESTEAL, SLABS_SCANNED, KSWAPD_STEAL, KSWAPD_INODESTEAL,
- PAGEOUTRUN, ALLOCSTALL, PGROTATED,
+ PAGEOUTRUN, ALLOCSTALL, PGROTATED, CPU_BYTES,
NR_VM_EVENT_ITEMS
};
Index: linux-2.6/mm/Kconfig
===================================================================
--- linux-2.6.orig/mm/Kconfig 2007-11-18 22:07:35.600273725 -0800
+++ linux-2.6/mm/Kconfig 2007-11-18 22:13:51.405773802 -0800
@@ -194,3 +194,10 @@ config NR_QUICK
config VIRT_TO_BUS
def_bool y
depends on !ARCH_NO_VIRT_TO_BUS
+
+config CPU_AREA_ORDER
+ int "Maximum size (order) of CPU area"
+ default "3"
+ help
+	  Sets the maximum amount of memory that can be allocated via cpu_alloc.
+	  The size is given as a page order: 0 = PAGE_SIZE, 1 = PAGE_SIZE << 1, etc.
Index: linux-2.6/mm/Makefile
===================================================================
--- linux-2.6.orig/mm/Makefile 2007-11-18 22:07:35.608273792 -0800
+++ linux-2.6/mm/Makefile 2007-11-18 22:13:44.924523941 -0800
@@ -11,7 +11,7 @@ obj-y := bootmem.o filemap.o mempool.o
page_alloc.o page-writeback.o pdflush.o \
readahead.o swap.o truncate.o vmscan.o \
prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \
- page_isolation.o $(mmu-y)
+ page_isolation.o cpu_alloc.o $(mmu-y)
obj-$(CONFIG_BOUNCE) += bounce.o
obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o thrash.o
Index: linux-2.6/mm/cpu_alloc.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6/mm/cpu_alloc.c 2007-11-18 22:15:04.453743317 -0800
@@ -0,0 +1,192 @@
+/*
+ * Cpu allocator - Manage objects allocated for each processor
+ *
+ * (C) 2007 SGI, Christoph Lameter <clameter@sgi.com>
+ * Basic implementation with allocation and free from a dedicated per
+ * cpu area.
+ *
+ * The per cpu allocator allows dynamic allocation of memory on all
+ * processors simultaneously. A bitmap is used to track used areas.
+ * The allocator packs objects tightly to reduce the cache footprint
+ * and increase speed, since cacheline contention is typically not a
+ * concern for memory mainly used by a single cpu. Small objects fill
+ * up the gaps left by larger allocations that required alignment.
+ */
+#include <linux/mm.h>
+#include <linux/mmzone.h>
+#include <linux/module.h>
+#include <linux/percpu.h>
+#include <linux/bitmap.h>
+
+/*
+ * Basic allocation unit. A bit map is created to track the use of each
+ * UNIT_SIZE element in the cpu area.
+ */
+
+#define UNIT_SIZE sizeof(int)
+#define UNITS (ALLOC_SIZE / UNIT_SIZE)
+
+/*
+ * How many units are needed for an object of a given size
+ */
+static int size_to_units(unsigned long size)
+{
+ return DIV_ROUND_UP(size, UNIT_SIZE);
+}
+
+/*
+ * Lock to protect the bitmap and the meta data for the cpu allocator.
+ */
+static DEFINE_SPINLOCK(cpu_alloc_map_lock);
+static unsigned long units_reserved; /* Units reserved by boot allocations */
+
+/*
+ * Static configuration. The cpu areas are of a fixed size and
+ * cannot be extended. Such configurations are mainly useful on
+ * machines that do not have MMU support. Note that we have to use
+ * bss space for the static declarations. The combination of a large number
+ * of processors and a large cpu area may cause problems with the size
+ * of the bss segment.
+ */
+#define ALLOC_SIZE (1UL << (CONFIG_CPU_AREA_ORDER + PAGE_SHIFT))
+
+char cpu_area[NR_CPUS * ALLOC_SIZE];
+static DECLARE_BITMAP(cpu_alloc_map, UNITS);
+
+void * __init boot_cpu_alloc(unsigned long size)
+{
+ unsigned long x = units_reserved;
+
+ units_reserved += size_to_units(size);
+ BUG_ON(units_reserved > UNITS);
+ return (void *)(x * UNIT_SIZE);
+}
+
+static int first_free; /* First known free unit */
+EXPORT_SYMBOL(cpu_area);
+
+/*
+ * Mark an object as used in the cpu_alloc_map
+ *
+ * Must hold cpu_alloc_map_lock
+ */
+static void set_map(int start, int length)
+{
+ while (length-- > 0)
+ __set_bit(start++, cpu_alloc_map);
+}
+
+/*
+ * Mark an area as freed.
+ *
+ * Must hold cpu_alloc_map_lock
+ */
+static void clear_map(int start, int length)
+{
+ while (length-- > 0)
+ __clear_bit(start++, cpu_alloc_map);
+}
+
+/*
+ * Allocate an object of a certain size
+ *
+ * Returns a special pointer that can be used with CPU_PTR to find the
+ * address of the object for a certain cpu.
+ */
+void *cpu_alloc(unsigned long size, gfp_t gfpflags, unsigned long align)
+{
+ unsigned long start;
+ int units = size_to_units(size);
+ void *ptr;
+ int first;
+ unsigned long flags;
+
+ BUG_ON(gfpflags & ~(GFP_RECLAIM_MASK | __GFP_ZERO));
+
+ spin_lock_irqsave(&cpu_alloc_map_lock, flags);
+
+ if (!units_reserved)
+ /*
+ * No boot time allocations. Must have at least one
+ * reserved unit to avoid returning a NULL pointer
+ */
+ units_reserved = 1;
+
+ first = 1;
+ start = first_free;
+
+ for ( ; ; ) {
+
+		start = find_next_zero_bit(cpu_alloc_map, UNITS, start);
+ if (start >= UNITS - units_reserved)
+ goto out_of_memory;
+
+ if (first)
+ first_free = start;
+
+ /*
+ * Check alignment and that there is enough space after
+ * the starting unit.
+ */
+		if ((start + units_reserved) * UNIT_SIZE % align == 0 &&
+			find_next_bit(cpu_alloc_map, UNITS, start + 1)
+ >= start + units)
+ break;
+ start++;
+ first = 0;
+ }
+
+ if (first)
+ first_free = start + units;
+
+ if (start + units > UNITS - units_reserved)
+ goto out_of_memory;
+
+ set_map(start, units);
+ __count_vm_events(CPU_BYTES, units * UNIT_SIZE);
+
+ spin_unlock_irqrestore(&cpu_alloc_map_lock, flags);
+
+ ptr = (void *)((start + units_reserved) * UNIT_SIZE);
+
+ if (gfpflags & __GFP_ZERO) {
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ memset(CPU_PTR(ptr, cpu), 0, size);
+ }
+
+ return ptr;
+
+out_of_memory:
+ spin_unlock_irqrestore(&cpu_alloc_map_lock, flags);
+ return NULL;
+}
+EXPORT_SYMBOL(cpu_alloc);
+
+/*
+ * Free an object. The pointer must be a cpu pointer allocated
+ * via cpu_alloc.
+ */
+void cpu_free(void *start, unsigned long size)
+{
+ int units = size_to_units(size);
+ int index;
+ unsigned long p = (unsigned long)start;
+ unsigned long flags;
+
+ BUG_ON(p < units_reserved * UNIT_SIZE);
+ index = p / UNIT_SIZE - units_reserved;
+ BUG_ON(!test_bit(index, cpu_alloc_map) ||
+ index >= UNITS - units_reserved);
+
+ spin_lock_irqsave(&cpu_alloc_map_lock, flags);
+
+ clear_map(index, units);
+ __count_vm_events(CPU_BYTES, -units * UNIT_SIZE);
+ if (index < first_free)
+ first_free = index;
+
+ spin_unlock_irqrestore(&cpu_alloc_map_lock, flags);
+}
+EXPORT_SYMBOL(cpu_free);
Index: linux-2.6/mm/vmstat.c
===================================================================
--- linux-2.6.orig/mm/vmstat.c 2007-11-18 22:07:49.784273594 -0800
+++ linux-2.6/mm/vmstat.c 2007-11-18 22:13:51.538023840 -0800
@@ -639,6 +639,7 @@ static const char * const vmstat_text[]
"allocstall",
"pgrotated",
+ "cpu_bytes",
#endif
};
Index: linux-2.6/include/linux/percpu.h
===================================================================
--- linux-2.6.orig/include/linux/percpu.h 2007-11-18 22:07:49.729023738 -0800
+++ linux-2.6/include/linux/percpu.h 2007-11-18 22:13:51.773274119 -0800
@@ -112,4 +112,82 @@ static inline void percpu_free(void *__p
#define free_percpu(ptr) percpu_free((ptr))
#define per_cpu_ptr(ptr, cpu) percpu_ptr((ptr), (cpu))
+
+/*
+ * cpu allocator definitions
+ *
+ * The cpu allocator allows allocating an array of objects on all processors.
+ * A single pointer can then be used to access the instance of the object
+ * on a particular processor.
+ *
+ * Cpu objects are typically small. The allocator packs them tightly
+ * to increase the chance that a per cpu object is already cached on
+ * each access. Alignments may be specified, but the intent is to
+ * satisfy cpu alignment constraints, not to avoid cacheline
+ * contention. Any holes left by aligning objects are filled up with
+ * smaller objects that are allocated later.
+ *
+ * Cpu data can be allocated using CPU_ALLOC. The resulting pointer
+ * points to the instance of the variable on cpu 0. It is generally an
+ * error to use the pointer directly unless running on cpu 0, so direct
+ * use is valid during boot, for example.
+ *
+ * The GFP flags have their usual function: __GFP_ZERO zeroes the object
+ * and other flags may be used to control reclaim behavior if the cpu
+ * areas have to be extended. However, zones cannot be selected nor
+ * can locality constraint flags be used.
+ *
+ * CPU_PTR() may be used to calculate the pointer for a specific processor.
+ * CPU_PTR is highly scalable since it simply adds the shifted value of
+ * smp_processor_id() to the base.
+ *
+ * Note: Synchronization is up to caller. If preemption is disabled then
+ * it is generally safe to access cpu variables (unless they are also
+ * handled from an interrupt context).
+ */
+
+#define SHIFT_PTR(__p, __offset) ((__typeof__(__p))((void *)(__p) \
+ + (__offset)))
+extern char cpu_area[];
+
+static inline unsigned long __cpu_offset(unsigned long cpu)
+{
+ int shift = CONFIG_CPU_AREA_ORDER + PAGE_SHIFT;
+
+ return (unsigned long)cpu_area + (cpu << shift);
+}
+
+static inline unsigned long cpu_offset(unsigned long cpu)
+{
+#ifdef CONFIG_DEBUG_VM
+ if (system_state == SYSTEM_RUNNING) {
+ BUG_ON(!cpu_isset(cpu, cpu_possible_map));
+ WARN_ON(!cpu_isset(cpu, cpu_online_map));
+ }
+#endif
+ return __cpu_offset(cpu);
+}
+
+#define CPU_PTR(__p, __cpu) SHIFT_PTR(__p, cpu_offset(__cpu))
+#define __CPU_PTR(__p, __cpu) SHIFT_PTR(__p, __cpu_offset(__cpu))
+
+#define CPU_ALLOC(type, flags) cpu_alloc(sizeof(type), flags, \
+ __alignof__(type))
+#define CPU_FREE(pointer) cpu_free(pointer, sizeof(*(pointer)))
+
+#define THIS_CPU(__p) CPU_PTR(__p, smp_processor_id())
+#define __THIS_CPU(__p) CPU_PTR(__p, raw_smp_processor_id())
+
+/*
+ * Raw calls
+ */
+void *cpu_alloc(unsigned long size, gfp_t gfp, unsigned long align);
+void cpu_free(void *cpu_pointer, unsigned long size);
+
+/*
+ * Early boot allocator for per_cpu variables and special per cpu areas.
+ * Allocations are not tracked and cannot be freed.
+ */
+void *boot_cpu_alloc(unsigned long size);
+
#endif /* __LINUX_PERCPU_H */
--