* [RFC Patch V1 17/30] mm, intel_powerclamp: Use cpu_to_mem()/numa_mem_id() to support memoryless node
From: Jiang Liu @ 2014-07-11 7:37 UTC
To: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Zhang Rui, Eduardo Valentin
Cc: Jiang Liu, Tony Luck, linux-mm, linux-hotplug, linux-kernel,
linux-pm
When CONFIG_HAVE_MEMORYLESS_NODES is enabled, cpu_to_node()/numa_node_id()
may return a node without memory, and later cause a system failure/panic
when calling kmalloc_node() and friends with the returned node id.
So use cpu_to_mem()/numa_mem_id() instead to get the nearest node with
memory for the current CPU.
If CONFIG_HAVE_MEMORYLESS_NODES is disabled, cpu_to_mem()/numa_mem_id()
are the same as cpu_to_node()/numa_node_id().
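For reference, a simplified sketch of how the two sets of accessors
relate, modeled on include/linux/topology.h of that era (not verbatim
kernel source):

/* Sketch only; modeled on include/linux/topology.h. */
#ifdef CONFIG_HAVE_MEMORYLESS_NODES
/* _numa_mem_ caches the nearest node with memory for each CPU. */
DECLARE_PER_CPU(int, _numa_mem_);

static inline int numa_mem_id(void)
{
	return __this_cpu_read(_numa_mem_);
}

static inline int cpu_to_mem(int cpu)
{
	return per_cpu(_numa_mem_, cpu);
}
#else
/* Without memoryless-node support, fall back to the plain node id. */
static inline int numa_mem_id(void)
{
	return numa_node_id();
}

static inline int cpu_to_mem(int cpu)
{
	return cpu_to_node(cpu);
}
#endif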
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
drivers/thermal/intel_powerclamp.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/thermal/intel_powerclamp.c b/drivers/thermal/intel_powerclamp.c
index 95cb7fc20e17..9d9be8cd1b50 100644
--- a/drivers/thermal/intel_powerclamp.c
+++ b/drivers/thermal/intel_powerclamp.c
@@ -531,7 +531,7 @@ static int start_power_clamp(void)
thread = kthread_create_on_node(clamp_thread,
(void *) cpu,
- cpu_to_node(cpu),
+ cpu_to_mem(cpu),
"kidle_inject/%ld", cpu);
/* bind to cpu here */
if (likely(!IS_ERR(thread))) {
@@ -582,7 +582,7 @@ static int powerclamp_cpu_callback(struct notifier_block *nfb,
case CPU_ONLINE:
thread = kthread_create_on_node(clamp_thread,
(void *) cpu,
- cpu_to_node(cpu),
+ cpu_to_mem(cpu),
"kidle_inject/%lu", cpu);
if (likely(!IS_ERR(thread))) {
kthread_bind(thread, cpu);
--
1.7.10.4
* [RFC Patch V1 29/30] mm, x86: Enable memoryless node support to better support CPU/memory hotplug
From: Jiang Liu @ 2014-07-11 7:37 UTC
To: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Thomas Gleixner, Ingo Molnar,
H. Peter Anvin, x86, Rafael J. Wysocki, Len Brown, Pavel Machek,
Toshi Kani, Igor Mammedov, Borislav Petkov, Paul Gortmaker,
Tang Chen, Zhang Yanfei, Jiang Liu, Lans Zhang
Cc: Tony Luck, linux-mm, linux-hotplug, linux-kernel, Ingo Molnar,
linux-pm
With the current implementation, all CPUs within a NUMA node will be
associated with another NUMA node if the node has no memory installed.
For example, on a four-node system, CPUs on nodes 2 and 3 are associated
with node 0 when no memory is installed on nodes 2 and 3, which may
confuse users.
root@bkd01sdp:~# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119
node 0 size: 15602 MB
node 0 free: 15014 MB
node 1 cpus: 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
node 1 size: 15985 MB
node 1 free: 15686 MB
node distances:
node 0 1
0: 10 21
1: 21 10
Worse still, the CPU affinity relationship won't get fixed even after
memory has been added to those nodes. After memory hot-addition to
node 2, CPUs on node 2 are still associated with node 0. This may cause
sub-optimal performance.
root@bkd01sdp:/sys/devices/system/node/node2# numactl --hardware
available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119
node 0 size: 15602 MB
node 0 free: 14743 MB
node 1 cpus: 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
node 1 size: 15985 MB
node 1 free: 15715 MB
node 2 cpus:
node 2 size: 128 MB
node 2 free: 128 MB
node distances:
node 0 1 2
0: 10 21 21
1: 21 10 21
2: 21 21 10
With memoryless node support enabled, the system correctly reports the
hardware topology for nodes without memory installed.
root@bkd01sdp:~# numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
node 0 size: 15725 MB
node 0 free: 15129 MB
node 1 cpus: 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
node 1 size: 15862 MB
node 1 free: 15627 MB
node 2 cpus: 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104
node 2 size: 0 MB
node 2 free: 0 MB
node 3 cpus: 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119
node 3 size: 0 MB
node 3 free: 0 MB
node distances:
node 0 1 2 3
0: 10 21 21 21
1: 21 10 21 21
2: 21 21 10 21
3: 21 21 21 10
With memoryless node support enabled, CPUs are correctly associated with
node 2 after memory hot-addition to node 2.
root@bkd01sdp:/sys/devices/system/node/node2# numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
node 0 size: 15725 MB
node 0 free: 14872 MB
node 1 cpus: 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
node 1 size: 15862 MB
node 1 free: 15641 MB
node 2 cpus: 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104
node 2 size: 128 MB
node 2 free: 127 MB
node 3 cpus: 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119
node 3 size: 0 MB
node 3 free: 0 MB
node distances:
node 0 1 2 3
0: 10 21 21 21
1: 21 10 21 21
2: 21 21 10 21
3: 21 21 21 10
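The diff below points each CPU's cpu_to_mem() entry at
local_memory_node(nid), which maps a node id to the nearest node that
actually has memory. For reference, a simplified sketch of that helper,
modeled on mm/page_alloc.c of that era (not verbatim):

/* Sketch only; modeled on mm/page_alloc.c. */
#ifdef CONFIG_HAVE_MEMORYLESS_NODES
int local_memory_node(int node)
{
	struct zone *zone;

	/*
	 * Walk the node's GFP_KERNEL zonelist; the first zone found
	 * belongs to the nearest node that has memory.
	 */
	(void)first_zones_zonelist(node_zonelist(node, GFP_KERNEL),
				   gfp_zone(GFP_KERNEL), NULL, &zone);
	return zone->node;
}
#endif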
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
arch/x86/Kconfig | 3 +++
arch/x86/kernel/acpi/boot.c | 5 ++++-
arch/x86/kernel/smpboot.c | 2 ++
arch/x86/mm/numa.c | 42 +++++++++++++++++++++++++++++++++++-------
4 files changed, 44 insertions(+), 8 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a8f749ef0fdc..f35b25b88625 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1887,6 +1887,9 @@ config USE_PERCPU_NUMA_NODE_ID
def_bool y
depends on NUMA
+config HAVE_MEMORYLESS_NODES
+ def_bool NUMA
+
config ARCH_ENABLE_SPLIT_PMD_PTLOCK
def_bool y
depends on X86_64 || X86_PAE
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 86281ffb96d6..3b5641703a49 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -612,6 +612,8 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
if (nid != -1) {
set_apicid_to_node(physid, nid);
numa_set_node(cpu, nid);
+ if (node_online(nid))
+ set_cpu_numa_mem(cpu, local_memory_node(nid));
}
#endif
}
@@ -644,9 +646,10 @@ int acpi_unmap_lsapic(int cpu)
{
#ifdef CONFIG_ACPI_NUMA
set_apicid_to_node(per_cpu(x86_cpu_to_apicid, cpu), NUMA_NO_NODE);
+ set_cpu_numa_mem(cpu, NUMA_NO_NODE);
#endif
- per_cpu(x86_cpu_to_apicid, cpu) = -1;
+ per_cpu(x86_cpu_to_apicid, cpu) = BAD_APICID;
set_cpu_present(cpu, false);
num_processors--;
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 5492798930ef..4a5437989ffe 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -162,6 +162,8 @@ static void smp_callin(void)
__func__, cpuid);
}
+ set_numa_mem(local_memory_node(cpu_to_node(cpuid)));
+
/*
* the boot CPU has finished the init stage and is spinning
* on callin_map until we finish. We are free to set up this
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index eec4f6c322bb..0d17c05480d2 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -22,6 +22,7 @@
int __initdata numa_off;
nodemask_t numa_nodes_parsed __initdata;
+static nodemask_t numa_nodes_empty __initdata;
struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
EXPORT_SYMBOL(node_data);
@@ -523,8 +524,12 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
end = max(mi->blk[i].end, end);
}
- if (start < end)
+ if (start < end) {
setup_node_data(nid, start, end);
+ } else if (IS_ENABLED(CONFIG_HAVE_MEMORYLESS_NODES)) {
+ setup_node_data(nid, 0, 0);
+ node_set(nid, numa_nodes_empty);
+ }
}
/* Dump memblock with node info and return. */
@@ -541,14 +546,18 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
*/
static void __init numa_init_array(void)
{
- int rr, i;
+ int i, rr = MAX_NUMNODES;
- rr = first_node(node_online_map);
for (i = 0; i < nr_cpu_ids; i++) {
+ /* Search for an onlined node with memory */
+ do {
+ if (rr != MAX_NUMNODES)
+ rr = next_node(rr, node_online_map);
+ if (rr == MAX_NUMNODES)
+ rr = first_node(node_online_map);
+ } while (!node_spanned_pages(rr));
+
numa_set_node(i, rr);
- rr = next_node(rr, node_online_map);
- if (rr == MAX_NUMNODES)
- rr = first_node(node_online_map);
}
}
@@ -694,9 +703,12 @@ static __init int find_near_online_node(int node)
{
int n, val;
int min_val = INT_MAX;
- int best_node = -1;
+ int best_node = NUMA_NO_NODE;
for_each_online_node(n) {
+ if (!node_spanned_pages(n))
+ continue;
+
val = node_distance(node, n);
if (val < min_val) {
@@ -737,6 +749,22 @@ void __init init_cpu_to_node(void)
if (!node_online(node))
node = find_near_online_node(node);
numa_set_node(cpu, node);
+ if (node_spanned_pages(node))
+ set_cpu_numa_mem(cpu, node);
+ if (IS_ENABLED(CONFIG_HAVE_MEMORYLESS_NODES))
+ node_clear(node, numa_nodes_empty);
+ }
+
+ /* Destroy empty nodes */
+ if (IS_ENABLED(CONFIG_HAVE_MEMORYLESS_NODES)) {
+ int nid;
+ const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE);
+
+ for_each_node_mask(nid, numa_nodes_empty) {
+ node_set_offline(nid);
+ memblock_free(__pa(node_data[nid]), nd_size);
+ node_data[nid] = NULL;
+ }
}
}
--
1.7.10.4
* [RFC Patch V1 30/30] x86, NUMA: Online node earlier when doing CPU hot-addition
From: Jiang Liu @ 2014-07-11 7:37 UTC
To: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Rafael J. Wysocki, Len Brown,
Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86
Cc: Jiang Liu, Tony Luck, linux-mm, linux-hotplug, linux-kernel,
linux-pm
With the typical CPU hot-addition flow on x86, PCI host bridges embedded
in a physical processor are always associated with NUMA_NO_NODE, which
may cause sub-optimal performance.
1) Handle CPU hot-addition notification
acpi_processor_add()
acpi_processor_get_info()
acpi_processor_hotadd_init()
acpi_map_lsapic()
1.a) acpi_map_cpu2node()
2) Handle PCI host bridge hot-addition notification
acpi_pci_root_add()
pci_acpi_scan_root()
2.a) if (node != NUMA_NO_NODE && !node_online(node)) node = NUMA_NO_NODE;
3) Handle memory hot-addition notification
acpi_memory_device_add()
acpi_memory_enable_device()
add_memory()
3.a) node_set_online();
4) Online CPUs through sysfs interfaces
cpu_subsys_online()
cpu_up()
try_online_node()
4.a) node_set_online();
So the associated node is always in offline state, because it is not
onlined until step 3.a or 4.a.
We could improve performance by onlining the node at step 1.a. This change
also makes the code symmetric: nodes are always created when handling
CPU/memory hot-addition events, instead of when handling user requests from
sysfs interfaces, and are destroyed when handling CPU/memory hot-removal
events.
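For reference, a simplified sketch of what try_online_node(), the helper
invoked at step 4.a and moved up to step 1.a by this patch, actually
does. It is modeled on mm/memory_hotplug.c of that era, with locking and
zonelist rebuilding elided:

/* Sketch only; modeled on mm/memory_hotplug.c. */
int try_online_node(int nid)
{
	pg_data_t *pgdat;

	if (node_online(nid))
		return 0;	/* nothing to do */

	pgdat = hotadd_new_pgdat(nid, 0);	/* allocate the pg_data_t */
	if (!pgdat)
		return -ENOMEM;

	node_set_online(nid);
	return register_one_node(nid);	/* create the sysfs node object */
}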
It also closes a race window caused by kmalloc_node(cpu_to_node(cpu)),
which may cause a system panic like the one below.
[ 3663.324476] BUG: unable to handle kernel paging request at 0000000000001f08
[ 3663.332348] IP: [<ffffffff81172219>] __alloc_pages_nodemask+0xb9/0x2d0
[ 3663.339719] PGD 82fe10067 PUD 82ebef067 PMD 0
[ 3663.344773] Oops: 0000 [#1] SMP
[ 3663.348455] Modules linked in: shpchp gpio_ich x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd microcode joydev sb_edac edac_core lpc_ich ipmi_si tpm_tis ipmi_msghandler ioatdma wmi acpi_pad mac_hid lp parport ixgbe isci mpt2sas dca ahci ptp libsas libahci raid_class pps_core scsi_transport_sas mdio hid_generic usbhid hid
[ 3663.394393] CPU: 61 PID: 2416 Comm: cron Tainted: G W 3.14.0-rc5+ #21
[ 3663.402643] Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRIVTIN1.86B.0047.F03.1403031049 03/03/2014
[ 3663.414299] task: ffff88082fe54b00 ti: ffff880845fba000 task.ti: ffff880845fba000
[ 3663.422741] RIP: 0010:[<ffffffff81172219>] [<ffffffff81172219>] __alloc_pages_nodemask+0xb9/0x2d0
[ 3663.432857] RSP: 0018:ffff880845fbbcd0 EFLAGS: 00010246
[ 3663.439265] RAX: 0000000000001f00 RBX: 0000000000000000 RCX: 0000000000000000
[ 3663.447291] RDX: 0000000000000000 RSI: 0000000000000a8d RDI: ffffffff81a8d950
[ 3663.455318] RBP: ffff880845fbbd58 R08: ffff880823293400 R09: 0000000000000001
[ 3663.463345] R10: 0000000000000001 R11: 0000000000000000 R12: 00000000002052d0
[ 3663.471363] R13: ffff880854c07600 R14: 0000000000000002 R15: 0000000000000000
[ 3663.479389] FS: 00007f2e8b99e800(0000) GS:ffff88105a400000(0000) knlGS:0000000000000000
[ 3663.488514] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3663.495018] CR2: 0000000000001f08 CR3: 00000008237b1000 CR4: 00000000001407e0
[ 3663.503476] Stack:
[ 3663.505757] ffffffff811bd74d ffff880854c01d98 ffff880854c01df0 ffff880854c01dd0
[ 3663.514167] 00000003208ca420 000000075a5d84d0 ffff88082fe54b00 ffffffff811bb35f
[ 3663.522567] ffff880854c07600 0000000000000003 0000000000001f00 ffff880845fbbd48
[ 3663.530976] Call Trace:
[ 3663.533753] [<ffffffff811bd74d>] ? deactivate_slab+0x41d/0x4f0
[ 3663.540421] [<ffffffff811bb35f>] ? new_slab+0x3f/0x2d0
[ 3663.546307] [<ffffffff811bb3c5>] new_slab+0xa5/0x2d0
[ 3663.552001] [<ffffffff81768c97>] __slab_alloc+0x35d/0x54a
[ 3663.558185] [<ffffffff810a4845>] ? local_clock+0x25/0x30
[ 3663.564686] [<ffffffff8177a34c>] ? __do_page_fault+0x4ec/0x5e0
[ 3663.571356] [<ffffffff810b0054>] ? alloc_fair_sched_group+0xc4/0x190
[ 3663.578609] [<ffffffff810c77f1>] ? __raw_spin_lock_init+0x21/0x60
[ 3663.585570] [<ffffffff811be476>] kmem_cache_alloc_node_trace+0xa6/0x1d0
[ 3663.593112] [<ffffffff810b0054>] ? alloc_fair_sched_group+0xc4/0x190
[ 3663.600363] [<ffffffff810b0054>] alloc_fair_sched_group+0xc4/0x190
[ 3663.607423] [<ffffffff810a359f>] sched_create_group+0x3f/0x80
[ 3663.613994] [<ffffffff810b611f>] sched_autogroup_create_attach+0x3f/0x1b0
[ 3663.621732] [<ffffffff8108258a>] sys_setsid+0xea/0x110
[ 3663.628020] [<ffffffff8177f42d>] system_call_fastpath+0x1a/0x1f
[ 3663.634780] Code: 00 44 89 e7 e8 b9 f8 f4 ff 41 f6 c4 10 74 18 31 d2 be 8d 0a 00 00 48 c7 c7 50 d9 a8 81 e8 70 6a f2 ff e8 db dd 5f 00 48 8b 45 c8 <48> 83 78 08 00 0f 84 b5 01 00 00 48 83 c0 08 44 89 75 c0 4d 89
[ 3663.657032] RIP [<ffffffff81172219>] __alloc_pages_nodemask+0xb9/0x2d0
[ 3663.664491] RSP <ffff880845fbbcd0>
[ 3663.668429] CR2: 0000000000001f08
[ 3663.672659] ---[ end trace df13f08ed9de18ad ]---
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
arch/x86/kernel/acpi/boot.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 3b5641703a49..00c2ed507460 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -611,6 +611,7 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
nid = acpi_get_node(handle);
if (nid != -1) {
set_apicid_to_node(physid, nid);
+ try_online_node(nid);
numa_set_node(cpu, nid);
if (node_online(nid))
set_cpu_numa_mem(cpu, local_memory_node(nid));
--
1.7.10.4
* Re: [RFC Patch V1 17/30] mm, intel_powerclamp: Use cpu_to_mem()/numa_mem_id() to support memoryless node
From: Nishanth Aravamudan @ 2014-07-21 17:38 UTC
To: Jiang Liu
Cc: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Zhang Rui, Eduardo Valentin,
Tony Luck, linux-mm, linux-hotplug, linux-kernel, linux-pm
On 11.07.2014 [15:37:34 +0800], Jiang Liu wrote:
> When CONFIG_HAVE_MEMORYLESS_NODES is enabled, cpu_to_node()/numa_node_id()
> may return a node without memory, and later cause a system failure/panic
> when calling kmalloc_node() and friends with the returned node id.
> So use cpu_to_mem()/numa_mem_id() instead to get the nearest node with
> memory for the current CPU.
You used the same changelog for all of the patches, it seems. But the
interface below (kthread_create_on_node) doesn't go into kmalloc_node?
kthread_create_on_node() eventually sets the value used by
tsk_fork_get_node(), which is used by alloc_task_struct_node() and
alloc_thread_info_node(). The first uses kmem_cache_alloc_node() and the
second, depending on the relative sizes of THREAD_SIZE and PAGE_SIZE,
uses either alloc_kmem_pages_node() or kmem_cache_alloc_node().
kmem_cache_alloc_node() goes into the appropriate slab allocator, which,
on SLUB for instance, goes down into __alloc_pages_nodemask(). But no
failure occurs when memoryless nodes are present; you just get memory
that is remote from the node specified? Similarly,
alloc_kmem_pages_node() calls into __alloc_pages() with an appropriate
node_zonelist, which should provide for the correct fallback based upon
the NUMA topology?
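For reference, roughly how that plumbing looks, paraphrased from
kernel/kthread.c and kernel/fork.c of that era (not verbatim):

/* Paraphrased sketch, not verbatim kernel source. */
static inline int tsk_fork_get_node(struct task_struct *tsk)
{
#ifdef CONFIG_NUMA
	/* kthreadd honors the node passed to kthread_create_on_node() */
	if (tsk == kthreadd_task)
		return tsk->pref_node_fork;
#endif
	return numa_node_id();
}

/* In dup_task_struct(), roughly: */
node = tsk_fork_get_node(orig);
tsk = alloc_task_struct_node(node);	 /* kmem_cache_alloc_node() */
ti = alloc_thread_info_node(tsk, node);	 /* kmem_cache_alloc_node() or
					    alloc_kmem_pages_node() */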
What system failure/panic did you see that is resolved by this patch?
> If CONFIG_HAVE_MEMORYLESS_NODES is disabled, cpu_to_mem()/numa_mem_id()
> are the same as cpu_to_node()/numa_node_id().
>
> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
> ---
> drivers/thermal/intel_powerclamp.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/thermal/intel_powerclamp.c b/drivers/thermal/intel_powerclamp.c
> index 95cb7fc20e17..9d9be8cd1b50 100644
> --- a/drivers/thermal/intel_powerclamp.c
> +++ b/drivers/thermal/intel_powerclamp.c
> @@ -531,7 +531,7 @@ static int start_power_clamp(void)
>
> thread = kthread_create_on_node(clamp_thread,
> (void *) cpu,
> - cpu_to_node(cpu),
> + cpu_to_mem(cpu),
As Tejun has pointed out elsewhere, we lose context here about the
original node we were running on. That information is relevant for a few
reasons:
1) In the underlying allocator, we might not have memory *right now* to
satisfy a request, which, say, causes us to deactivate a slab
(CONFIG_SLUB). But that condition may be relieved in the future and we
want to use the correct node again then.
2) For topologies that are symmetrical around a memoryless node, we
could lose the correct fallback information when we specify a nearest
neighbor with memory.
Thanks,
Nish
* Re: [RFC Patch V1 29/30] mm, x86: Enable memoryless node support to better support CPU/memory hotplug
From: Nishanth Aravamudan @ 2014-07-24 23:26 UTC
To: Jiang Liu
Cc: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Thomas Gleixner, Ingo Molnar,
H. Peter Anvin, x86, Rafael J. Wysocki, Len Brown, Pavel Machek,
Toshi Kani, Igor Mammedov, Borislav Petkov, Paul Gortmaker,
Tang Chen, Zhang Yanfei, Lans Zhang, Tony Luck, linux-mm
On 11.07.2014 [15:37:46 +0800], Jiang Liu wrote:
> With the current implementation, all CPUs within a NUMA node will be
> associated with another NUMA node if the node has no memory installed.
<snip>
> ---
> arch/x86/Kconfig | 3 +++
> arch/x86/kernel/acpi/boot.c | 5 ++++-
> arch/x86/kernel/smpboot.c | 2 ++
> arch/x86/mm/numa.c | 42 +++++++++++++++++++++++++++++++++++-------
> 4 files changed, 44 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index a8f749ef0fdc..f35b25b88625 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1887,6 +1887,9 @@ config USE_PERCPU_NUMA_NODE_ID
> def_bool y
> depends on NUMA
>
> +config HAVE_MEMORYLESS_NODES
> + def_bool NUMA
> +
> config ARCH_ENABLE_SPLIT_PMD_PTLOCK
> def_bool y
> depends on X86_64 || X86_PAE
> diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
> index 86281ffb96d6..3b5641703a49 100644
> --- a/arch/x86/kernel/acpi/boot.c
> +++ b/arch/x86/kernel/acpi/boot.c
> @@ -612,6 +612,8 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
> if (nid != -1) {
> set_apicid_to_node(physid, nid);
> numa_set_node(cpu, nid);
> + if (node_online(nid))
> + set_cpu_numa_mem(cpu, local_memory_node(nid));
How common is it for this method to be called for a CPU on an offline
node? Aren't you fixing this in the next patch (so maybe the order
should be changed)?
> }
> #endif
> }
> @@ -644,9 +646,10 @@ int acpi_unmap_lsapic(int cpu)
> {
> #ifdef CONFIG_ACPI_NUMA
> set_apicid_to_node(per_cpu(x86_cpu_to_apicid, cpu), NUMA_NO_NODE);
> + set_cpu_numa_mem(cpu, NUMA_NO_NODE);
> #endif
>
> - per_cpu(x86_cpu_to_apicid, cpu) = -1;
> + per_cpu(x86_cpu_to_apicid, cpu) = BAD_APICID;
I think this is an unrelated change?
> set_cpu_present(cpu, false);
> num_processors--;
>
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 5492798930ef..4a5437989ffe 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -162,6 +162,8 @@ static void smp_callin(void)
> __func__, cpuid);
> }
>
> + set_numa_mem(local_memory_node(cpu_to_node(cpuid)));
> +
Note that you might hit the same issue I reported on powerpc if
smp_callin() is part of smp_init(). The waitqueue initialization code
depends on cpu_to_node() [and eventually cpu_to_mem()] being initialized
quite early.
Thanks,
Nish
* Re: [RFC Patch V1 30/30] x86, NUMA: Online node earlier when doing CPU hot-addition
From: Nishanth Aravamudan @ 2014-07-24 23:30 UTC
To: Jiang Liu
Cc: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Rafael J. Wysocki, Len Brown,
Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
Tony Luck, linux-mm, linux-hotplug, linux-kernel, linux-pm
On 11.07.2014 [15:37:47 +0800], Jiang Liu wrote:
> With the typical CPU hot-addition flow on x86, PCI host bridges embedded
> in a physical processor are always associated with NUMA_NO_NODE, which
> may cause sub-optimal performance.
> 1) Handle CPU hot-addition notification
> acpi_processor_add()
> acpi_processor_get_info()
> acpi_processor_hotadd_init()
> acpi_map_lsapic()
> 1.a) acpi_map_cpu2node()
>
> 2) Handle PCI host bridge hot-addition notification
> acpi_pci_root_add()
> pci_acpi_scan_root()
> 2.a) if (node != NUMA_NO_NODE && !node_online(node)) node = NUMA_NO_NODE;
>
> 3) Handle memory hot-addition notification
> acpi_memory_device_add()
> acpi_memory_enable_device()
> add_memory()
> 3.a) node_set_online();
>
> 4) Online CPUs through sysfs interfaces
> cpu_subsys_online()
> cpu_up()
> try_online_node()
> 4.a) node_set_online();
>
> So the associated node is always in offline state, because it is not
> onlined until step 3.a or 4.a.
>
> We could improve performance by onlining the node at step 1.a. This change
> also makes the code symmetric: nodes are always created when handling
> CPU/memory hot-addition events, instead of when handling user requests from
> sysfs interfaces, and are destroyed when handling CPU/memory hot-removal
> events.
It seems like this patch has little to nothing to do with the rest of
the series and can be sent on its own?
> It also closes a race window caused by kmalloc_node(cpu_to_node(cpu)),
To be clear, the race is that on some x86 platforms, there is a period
of time where a node ID returned by cpu_to_node() is offline.
<snip>
> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
> ---
> arch/x86/kernel/acpi/boot.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
> index 3b5641703a49..00c2ed507460 100644
> --- a/arch/x86/kernel/acpi/boot.c
> +++ b/arch/x86/kernel/acpi/boot.c
> @@ -611,6 +611,7 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
> nid = acpi_get_node(handle);
> if (nid != -1) {
> set_apicid_to_node(physid, nid);
> + try_online_node(nid);
try_online_node() seems like it can fail? I assume it's a pretty rare
case, but should the return code be checked?
If it does fail, it seems like there are pretty serious problems and we
shouldn't be onlining this CPU, etc.?
> numa_set_node(cpu, nid);
> if (node_online(nid))
> set_cpu_numa_mem(cpu, local_memory_node(nid));
Which means you can remove this check presuming try_online_node()
returned 0.
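Something like this minimal sketch, perhaps (hypothetical, just to
illustrate the suggestion; not the code that was eventually merged):

nid = acpi_get_node(handle);
if (nid != -1) {
	set_apicid_to_node(physid, nid);
	if (try_online_node(nid)) {
		/* Hypothetical: don't bind the CPU to a node that
		 * cannot be brought online. */
		set_apicid_to_node(physid, NUMA_NO_NODE);
		return;
	}
	numa_set_node(cpu, nid);
	/* nid is known to be online here, so the node_online(nid)
	 * check around set_cpu_numa_mem() can be dropped. */
	set_cpu_numa_mem(cpu, local_memory_node(nid));
}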
Thanks,
Nish
* Re: [RFC Patch V1 29/30] mm, x86: Enable memoryless node support to better support CPU/memory hotplug
From: Jiang Liu @ 2014-07-25 1:41 UTC
To: Nishanth Aravamudan
Cc: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Thomas Gleixner, Ingo Molnar,
H. Peter Anvin, x86, Rafael J. Wysocki, Len Brown, Pavel Machek,
Toshi Kani, Igor Mammedov, Borislav Petkov, Paul Gortmaker,
Tang Chen, Zhang Yanfei, Lans Zhang, Tony Luck, linux-mm,
linux-hotplug, linux-kernel, Ingo Molnar
On 2014/7/25 7:26, Nishanth Aravamudan wrote:
> On 11.07.2014 [15:37:46 +0800], Jiang Liu wrote:
>> With the current implementation, all CPUs within a NUMA node will be
>> associated with another NUMA node if the node has no memory installed.
>
> <snip>
>
>> ---
>> arch/x86/Kconfig | 3 +++
>> arch/x86/kernel/acpi/boot.c | 5 ++++-
>> arch/x86/kernel/smpboot.c | 2 ++
>> arch/x86/mm/numa.c | 42 +++++++++++++++++++++++++++++++++++-------
>> 4 files changed, 44 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>> index a8f749ef0fdc..f35b25b88625 100644
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -1887,6 +1887,9 @@ config USE_PERCPU_NUMA_NODE_ID
>> def_bool y
>> depends on NUMA
>>
>> +config HAVE_MEMORYLESS_NODES
>> + def_bool NUMA
>> +
>> config ARCH_ENABLE_SPLIT_PMD_PTLOCK
>> def_bool y
>> depends on X86_64 || X86_PAE
>> diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
>> index 86281ffb96d6..3b5641703a49 100644
>> --- a/arch/x86/kernel/acpi/boot.c
>> +++ b/arch/x86/kernel/acpi/boot.c
>> @@ -612,6 +612,8 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
>> if (nid != -1) {
>> set_apicid_to_node(physid, nid);
>> numa_set_node(cpu, nid);
>> + if (node_online(nid))
>> + set_cpu_numa_mem(cpu, local_memory_node(nid));
>
> How common is it for this method to be called for a CPU on an offline
> node? Aren't you fixing this in the next patch (so maybe the order
> should be changed?)?
Hi Nishanth,
For physical CPU hot-addition, as opposed to logical CPU onlining
through sysfs, the node is always in offline state.
In v2, I have reordered the patch set so that patch 30 goes first.
>
>> }
>> #endif
>> }
>> @@ -644,9 +646,10 @@ int acpi_unmap_lsapic(int cpu)
>> {
>> #ifdef CONFIG_ACPI_NUMA
>> set_apicid_to_node(per_cpu(x86_cpu_to_apicid, cpu), NUMA_NO_NODE);
>> + set_cpu_numa_mem(cpu, NUMA_NO_NODE);
>> #endif
>>
>> - per_cpu(x86_cpu_to_apicid, cpu) = -1;
>> + per_cpu(x86_cpu_to_apicid, cpu) = BAD_APICID;
>
> I think this is an unrelated change?
Thanks for the reminder; it's unrelated to memoryless node support.
>
>> set_cpu_present(cpu, false);
>> num_processors--;
>>
>> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
>> index 5492798930ef..4a5437989ffe 100644
>> --- a/arch/x86/kernel/smpboot.c
>> +++ b/arch/x86/kernel/smpboot.c
>> @@ -162,6 +162,8 @@ static void smp_callin(void)
>> __func__, cpuid);
>> }
>>
>> + set_numa_mem(local_memory_node(cpu_to_node(cpuid)));
>> +
>
> Note that you might hit the same issue I reported on powerpc, if
> smp_callin() is part of smp_init(). The waitqueue initialization code
> depends on cpu_to_node() [and eventually cpu_to_mem()] to be initialized
> quite early.
Thanks for the reminder. Patches 29 and 30 together set up the
cpu_to_mem() array when enumerating CPUs for hot-add events, so it
should be ready for use when onlining those CPUs.
Regards!
Gerry
>
> Thanks,
> Nish
>
* Re: [RFC Patch V1 30/30] x86, NUMA: Online node earlier when doing CPU hot-addition
From: Jiang Liu @ 2014-07-25 1:43 UTC
To: Nishanth Aravamudan
Cc: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Rafael J. Wysocki, Len Brown,
Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
Tony Luck, linux-mm, linux-hotplug, linux-kernel, linux-pm
On 2014/7/25 7:30, Nishanth Aravamudan wrote:
> On 11.07.2014 [15:37:47 +0800], Jiang Liu wrote:
>> With the typical CPU hot-addition flow on x86, PCI host bridges embedded
>> in a physical processor are always associated with NUMA_NO_NODE, which
>> may cause sub-optimal performance.
>> 1) Handle CPU hot-addition notification
>> acpi_processor_add()
>> acpi_processor_get_info()
>> acpi_processor_hotadd_init()
>> acpi_map_lsapic()
>> 1.a) acpi_map_cpu2node()
>>
>> 2) Handle PCI host bridge hot-addition notification
>> acpi_pci_root_add()
>> pci_acpi_scan_root()
>> 2.a) if (node != NUMA_NO_NODE && !node_online(node)) node = NUMA_NO_NODE;
>>
>> 3) Handle memory hot-addition notification
>> acpi_memory_device_add()
>> acpi_memory_enable_device()
>> add_memory()
>> 3.a) node_set_online();
>>
>> 4) Online CPUs through sysfs interfaces
>> cpu_subsys_online()
>> cpu_up()
>> try_online_node()
>> 4.a) node_set_online();
>>
>> So the associated node is always in offline state, because it is not
>> onlined until step 3.a or 4.a.
>>
>> We could improve performance by onlining the node at step 1.a. This change
>> also makes the code symmetric: nodes are always created when handling
>> CPU/memory hot-addition events, instead of when handling user requests from
>> sysfs interfaces, and are destroyed when handling CPU/memory hot-removal
>> events.
>
> It seems like this patch has little to nothing to do with the rest of
> the series and can be sent on its own?
>
>> It also closes a race window caused by kmalloc_node(cpu_to_node(cpu)),
>
> To be clear, the race is that on some x86 platforms, there is a period
> of time where a node ID returned by cpu_to_node() is offline.
>
> <snip>
>
>> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
>> ---
>> arch/x86/kernel/acpi/boot.c | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
>> index 3b5641703a49..00c2ed507460 100644
>> --- a/arch/x86/kernel/acpi/boot.c
>> +++ b/arch/x86/kernel/acpi/boot.c
>> @@ -611,6 +611,7 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
>> nid = acpi_get_node(handle);
>> if (nid != -1) {
>> set_apicid_to_node(physid, nid);
>> + try_online_node(nid);
>
> try_online_node() seems like it can fail? I assume it's a pretty rare
> case, but should the return code be checked?
Good suggestion, I should split out this patch to fix the crash.
>
> If it does fail, it seems like there are pretty serious problems and we
> shouldn't be onlining this CPU, etc.?
>
>> numa_set_node(cpu, nid);
>> if (node_online(nid))
>> set_cpu_numa_mem(cpu, local_memory_node(nid));
>
> Which means you can remove this check presuming try_online_node()
> returned 0.
Yes, that's true.
>
> Thanks,
> Nish
>
* Re: [RFC Patch V1 30/30] x86, NUMA: Online node earlier when doing CPU hot-addition
From: Jiang Liu @ 2014-07-25 1:44 UTC
To: Nishanth Aravamudan
Cc: Andrew Morton, Mel Gorman, David Rientjes, Mike Galbraith,
Peter Zijlstra, Rafael J . Wysocki, Rafael J. Wysocki, Len Brown,
Pavel Machek, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
Tony Luck, linux-mm, linux-hotplug, linux-kernel, linux-pm
On 2014/7/25 7:30, Nishanth Aravamudan wrote:
> On 11.07.2014 [15:37:47 +0800], Jiang Liu wrote:
>> With the typical CPU hot-addition flow on x86, PCI host bridges embedded
>> in a physical processor are always associated with NUMA_NO_NODE, which
>> may cause sub-optimal performance.
>> 1) Handle CPU hot-addition notification
>> acpi_processor_add()
>> acpi_processor_get_info()
>> acpi_processor_hotadd_init()
>> acpi_map_lsapic()
>> 1.a) acpi_map_cpu2node()
>>
>> 2) Handle PCI host bridge hot-addition notification
>> acpi_pci_root_add()
>> pci_acpi_scan_root()
>> 2.a) if (node != NUMA_NO_NODE && !node_online(node)) node = NUMA_NO_NODE;
>>
>> 3) Handle memory hot-addition notification
>> acpi_memory_device_add()
>> acpi_memory_enable_device()
>> add_memory()
>> 3.a) node_set_online();
>>
>> 4) Online CPUs through sysfs interfaces
>> cpu_subsys_online()
>> cpu_up()
>> try_online_node()
>> 4.a) node_set_online();
>>
>> So the associated node is always in offline state, because it is not
>> onlined until step 3.a or 4.a.
>>
>> We could improve performance by onlining the node at step 1.a. This change
>> also makes the code symmetric: nodes are always created when handling
>> CPU/memory hot-addition events, instead of when handling user requests from
>> sysfs interfaces, and are destroyed when handling CPU/memory hot-removal
>> events.
>
> It seems like this patch has little to nothing to do with the rest of
> the series and can be sent on its own?
>
>> It also closes a race window caused by kmalloc_node(cpu_to_node(cpu)),
>
> To be clear, the race is that on some x86 platforms, there is a period
> of time where a node ID returned by cpu_to_node() is offline.
>
> <snip>
>
>> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
>> ---
>> arch/x86/kernel/acpi/boot.c | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
>> index 3b5641703a49..00c2ed507460 100644
>> --- a/arch/x86/kernel/acpi/boot.c
>> +++ b/arch/x86/kernel/acpi/boot.c
>> @@ -611,6 +611,7 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
>> nid = acpi_get_node(handle);
>> if (nid != -1) {
>> set_apicid_to_node(physid, nid);
>> + try_online_node(nid);
>
> try_online_node() seems like it can fail? I assume it's a pretty rare
> case, but should the return code be checked?
>
> If it does fail, it seems like there are pretty serious problems and we
> shouldn't be onlining this CPU, etc.?
>
>> numa_set_node(cpu, nid);
>> if (node_online(nid))
>> set_cpu_numa_mem(cpu, local_memory_node(nid));
>
> Which means you can remove this check presuming try_online_node()
> returned 0.
Good suggestion, will try to enhance the error handling path.
>
> Thanks,
> Nish
>