* [PATCH 0/4] slab: common kmem_cache_cpu functions V1
@ 2014-05-30 18:27 Christoph Lameter
2014-05-30 18:27 ` [PATCH 1/4] slab common: Add functions for kmem_cache_node access Christoph Lameter
` (3 more replies)
0 siblings, 4 replies; 13+ messages in thread
From: Christoph Lameter @ 2014-05-30 18:27 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm@kvack.org, Andrew Morton, David Rientjes
The patchset provides two new functions in mm/slab.h and modifies SLAB and SLUB to use these.
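For quick reference, the two helpers introduced in patch 1 (their full
definitions appear in that patch) are get_node(), an accessor:
	static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node);
and for_each_kmem_cache_node(s, node, n), an iterator macro that visits
only those nodes for which a kmem_cache_node structure has been allocated.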
* [PATCH 1/4] slab common: Add functions for kmem_cache_node access
2014-05-30 18:27 [PATCH 0/4] slab: common kmem_cache_cpu functions V1 Christoph Lameter
@ 2014-05-30 18:27 ` Christoph Lameter
2014-05-30 18:27 ` [PATCH 2/4] slub: Use new node functions Christoph Lameter
` (2 subsequent siblings)
3 siblings, 0 replies; 13+ messages in thread
From: Christoph Lameter @ 2014-05-30 18:27 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm@kvack.org, Andrew Morton, David Rientjes
These functions make it possible to eliminate code that is repeated in
both SLAB and SLUB, and also allow debugging code to be inserted where
needed during development.
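For illustration, this is the shape of the conversion the later patches
perform (the loop bodies below are placeholders, not actual kernel code):
	/* before: each caller open-codes the lookup and the NULL check */
	for_each_online_node(node) {
		struct kmem_cache_node *n = get_node(s, node);
		if (!n)
			continue;
		/* ... work on n ... */
	}
	/* after: the iterator does the lookup and skips missing nodes */
	for_each_kmem_cache_node(s, node, n) {
		/* ... work on n ... */
	}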
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux/mm/slab.h
===================================================================
--- linux.orig/mm/slab.h 2014-05-30 13:12:01.444370238 -0500
+++ linux/mm/slab.h 2014-05-30 13:12:01.444370238 -0500
@@ -288,5 +288,14 @@ struct kmem_cache_node {
};
+static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
+{
+ return s->node[node];
+}
+
+#define for_each_kmem_cache_node(s, node, n) \
+ for (node = 0; n = get_node(s, node), node < nr_node_ids; node++) \
+ if (n)
+
void *slab_next(struct seq_file *m, void *p, loff_t *pos);
void slab_stop(struct seq_file *m, void *p);
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c 2014-05-30 13:10:55.000000000 -0500
+++ linux/mm/slub.c 2014-05-30 13:12:12.628022255 -0500
@@ -233,11 +233,6 @@ static inline void stat(const struct kme
* Core slab cache functions
*******************************************************************/
-static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
-{
- return s->node[node];
-}
-
/* Verify that a pointer has an address that is valid within a slab page */
static inline int check_valid_pointer(struct kmem_cache *s,
struct page *page, const void *object)
* [PATCH 2/4] slub: Use new node functions
2014-05-30 18:27 [PATCH 0/4] slab: common kmem_cache_cpu functions V1 Christoph Lameter
2014-05-30 18:27 ` [PATCH 1/4] slab common: Add functions for kmem_cache_node access Christoph Lameter
@ 2014-05-30 18:27 ` Christoph Lameter
2014-06-02 4:59 ` Joonsoo Kim
2014-05-30 18:27 ` [PATCH 3/4] slab: Use get_node function Christoph Lameter
2014-05-30 18:27 ` [PATCH 4/4] slab: Use for_each_kmem_cache_node function Christoph Lameter
3 siblings, 1 reply; 13+ messages in thread
From: Christoph Lameter @ 2014-05-30 18:27 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm@kvack.org, Andrew Morton, David Rientjes
Make use of the new node functions in mm/slab.h
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c 2014-05-30 13:15:30.541864121 -0500
+++ linux/mm/slub.c 2014-05-30 13:15:30.541864121 -0500
@@ -2148,6 +2148,7 @@ static noinline void
slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
{
int node;
+ struct kmem_cache_node *n;
printk(KERN_WARNING
"SLUB: Unable to allocate memory on node %d (gfp=0x%x)\n",
@@ -2160,15 +2161,11 @@ slab_out_of_memory(struct kmem_cache *s,
printk(KERN_WARNING " %s debugging increased min order, use "
"slub_debug=O to disable.\n", s->name);
- for_each_online_node(node) {
- struct kmem_cache_node *n = get_node(s, node);
+ for_each_kmem_cache_node(s, node, n) {
unsigned long nr_slabs;
unsigned long nr_objs;
unsigned long nr_free;
- if (!n)
- continue;
-
nr_free = count_partial(n, count_free);
nr_slabs = node_nr_slabs(n);
nr_objs = node_nr_objs(n);
@@ -4376,16 +4373,12 @@ static ssize_t show_slab_objects(struct
static int any_slab_objects(struct kmem_cache *s)
{
int node;
+ struct kmem_cache_node *n;
- for_each_online_node(node) {
- struct kmem_cache_node *n = get_node(s, node);
-
- if (!n)
- continue;
-
+ for_each_kmem_cache_node(s, node, n)
if (atomic_long_read(&n->total_objects))
return 1;
- }
+
return 0;
}
#endif
@@ -5340,12 +5333,9 @@ void get_slabinfo(struct kmem_cache *s,
unsigned long nr_objs = 0;
unsigned long nr_free = 0;
int node;
+ struct kmem_cache_node *n;
- for_each_online_node(node) {
- struct kmem_cache_node *n = get_node(s, node);
-
- if (!n)
- continue;
+ for_each_kmem_cache_node(s, node, n) {
nr_slabs += node_nr_slabs(n);
nr_objs += node_nr_objs(n);
* [PATCH 3/4] slab: Use get_node function
2014-05-30 18:27 [PATCH 0/4] slab: common kmem_cache_cpu functions V1 Christoph Lameter
2014-05-30 18:27 ` [PATCH 1/4] slab common: Add functions for kmem_cache_node access Christoph Lameter
2014-05-30 18:27 ` [PATCH 2/4] slub: Use new node functions Christoph Lameter
@ 2014-05-30 18:27 ` Christoph Lameter
2014-05-30 18:27 ` [PATCH 4/4] slab: Use for_each_kmem_cache_node function Christoph Lameter
3 siblings, 0 replies; 13+ messages in thread
From: Christoph Lameter @ 2014-05-30 18:27 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm@kvack.org, Andrew Morton, David Rientjes
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c 2014-05-30 13:07:17.313211059 -0500
+++ linux/mm/slab.c 2014-05-30 13:07:17.313211059 -0500
@@ -267,7 +267,7 @@ static void kmem_cache_node_init(struct
#define MAKE_LIST(cachep, listp, slab, nodeid) \
do { \
INIT_LIST_HEAD(listp); \
- list_splice(&(cachep->node[nodeid]->slab), listp); \
+ list_splice(&get_node(cachep, nodeid)->slab, listp); \
} while (0)
#define MAKE_ALL_LISTS(cachep, ptr, nodeid) \
@@ -461,7 +461,7 @@ static void slab_set_lock_classes(struct
struct kmem_cache_node *n;
int r;
- n = cachep->node[q];
+ n = get_node(cachep, q);
if (!n)
return;
@@ -509,7 +509,7 @@ static void init_node_lock_keys(int q)
if (!cache)
continue;
- n = cache->node[q];
+ n = get_node(cache, q);
if (!n || OFF_SLAB(cache))
continue;
@@ -520,7 +520,7 @@ static void init_node_lock_keys(int q)
static void on_slab_lock_classes_node(struct kmem_cache *cachep, int q)
{
- if (!cachep->node[q])
+ if (!get_node(cachep, q))
return;
slab_set_lock_classes(cachep, &on_slab_l3_key,
@@ -774,7 +774,7 @@ static inline bool is_slab_pfmemalloc(st
static void recheck_pfmemalloc_active(struct kmem_cache *cachep,
struct array_cache *ac)
{
- struct kmem_cache_node *n = cachep->node[numa_mem_id()];
+ struct kmem_cache_node *n = get_node(cachep, numa_mem_id());
struct page *page;
unsigned long flags;
@@ -829,7 +829,7 @@ static void *__ac_get_obj(struct kmem_ca
* If there are empty slabs on the slabs_free list and we are
* being forced to refill the cache, mark this one !pfmemalloc.
*/
- n = cachep->node[numa_mem_id()];
+ n = get_node(cachep, numa_mem_id());
if (!list_empty(&n->slabs_free) && force_refill) {
struct page *page = virt_to_head_page(objp);
ClearPageSlabPfmemalloc(page);
@@ -979,7 +979,7 @@ static void free_alien_cache(struct arra
static void __drain_alien_cache(struct kmem_cache *cachep,
struct array_cache *ac, int node)
{
- struct kmem_cache_node *n = cachep->node[node];
+ struct kmem_cache_node *n = get_node(cachep, node);
if (ac->avail) {
spin_lock(&n->list_lock);
@@ -1047,7 +1047,7 @@ static inline int cache_free_alien(struc
if (likely(nodeid == node))
return 0;
- n = cachep->node[node];
+ n = get_node(cachep, node);
STATS_INC_NODEFREES(cachep);
if (n->alien && n->alien[nodeid]) {
alien = n->alien[nodeid];
@@ -1059,9 +1059,9 @@ static inline int cache_free_alien(struc
ac_put_obj(cachep, alien, objp);
spin_unlock(&alien->lock);
} else {
- spin_lock(&(cachep->node[nodeid])->list_lock);
+ spin_lock(&get_node(cachep, nodeid)->list_lock);
free_block(cachep, &objp, 1, nodeid);
- spin_unlock(&(cachep->node[nodeid])->list_lock);
+ spin_unlock(&get_node(cachep, nodeid)->list_lock);
}
return 1;
}
@@ -1088,7 +1088,7 @@ static int init_cache_node_node(int node
* begin anything. Make sure some other cpu on this
* node has not already allocated this
*/
- if (!cachep->node[node]) {
+ if (!get_node(cachep, node)) {
n = kmalloc_node(memsize, GFP_KERNEL, node);
if (!n)
return -ENOMEM;
@@ -1104,11 +1104,11 @@ static int init_cache_node_node(int node
cachep->node[node] = n;
}
- spin_lock_irq(&cachep->node[node]->list_lock);
- cachep->node[node]->free_limit =
+ spin_lock_irq(&get_node(cachep, node)->list_lock);
+ get_node(cachep, node)->free_limit =
(1 + nr_cpus_node(node)) *
cachep->batchcount + cachep->num;
- spin_unlock_irq(&cachep->node[node]->list_lock);
+ spin_unlock_irq(&get_node(cachep, node)->list_lock);
}
return 0;
}
@@ -1134,7 +1134,7 @@ static void cpuup_canceled(long cpu)
/* cpu is dead; no one can alloc from it. */
nc = cachep->array[cpu];
cachep->array[cpu] = NULL;
- n = cachep->node[node];
+ n = get_node(cachep, node);
if (!n)
goto free_array_cache;
@@ -1177,7 +1177,7 @@ free_array_cache:
* shrink each nodelist to its limit.
*/
list_for_each_entry(cachep, &slab_caches, list) {
- n = cachep->node[node];
+ n = get_node(cachep, node);
if (!n)
continue;
drain_freelist(cachep, n, slabs_tofree(cachep, n));
@@ -1232,7 +1232,7 @@ static int cpuup_prepare(long cpu)
}
}
cachep->array[cpu] = nc;
- n = cachep->node[node];
+ n = get_node(cachep, node);
BUG_ON(!n);
spin_lock_irq(&n->list_lock);
@@ -1343,7 +1343,7 @@ static int __meminit drain_cache_node_no
list_for_each_entry(cachep, &slab_caches, list) {
struct kmem_cache_node *n;
- n = cachep->node[node];
+ n = get_node(cachep, node);
if (!n)
continue;
@@ -2371,7 +2371,7 @@ static void check_spinlock_acquired(stru
{
#ifdef CONFIG_SMP
check_irq_off();
- assert_spin_locked(&cachep->node[numa_mem_id()]->list_lock);
+ assert_spin_locked(&get_node(cachep, numa_mem_id())->list_lock);
#endif
}
@@ -2379,7 +2379,7 @@ static void check_spinlock_acquired_node
{
#ifdef CONFIG_SMP
check_irq_off();
- assert_spin_locked(&cachep->node[node]->list_lock);
+ assert_spin_locked(&get_node(cachep, node)->list_lock);
#endif
}
@@ -2402,9 +2402,9 @@ static void do_drain(void *arg)
check_irq_off();
ac = cpu_cache_get(cachep);
- spin_lock(&cachep->node[node]->list_lock);
+ spin_lock(&get_node(cachep, node)->list_lock);
free_block(cachep, ac->entry, ac->avail, node);
- spin_unlock(&cachep->node[node]->list_lock);
+ spin_unlock(&get_node(cachep, node)->list_lock);
ac->avail = 0;
}
@@ -2416,13 +2416,13 @@ static void drain_cpu_caches(struct kmem
on_each_cpu(do_drain, cachep, 1);
check_irq_on();
for_each_online_node(node) {
- n = cachep->node[node];
+ n = get_node(cachep, node);
if (n && n->alien)
drain_alien_cache(cachep, n->alien);
}
for_each_online_node(node) {
- n = cachep->node[node];
+ n = get_node(cachep, node);
if (n)
drain_array(cachep, n, n->shared, 1, node);
}
@@ -2479,7 +2479,7 @@ static int __cache_shrink(struct kmem_ca
check_irq_on();
for_each_online_node(i) {
- n = cachep->node[i];
+ n = get_node(cachep, i);
if (!n)
continue;
@@ -2526,7 +2526,7 @@ int __kmem_cache_shutdown(struct kmem_ca
/* NUMA: free the node structures */
for_each_online_node(i) {
- n = cachep->node[i];
+ n = get_node(cachep, i);
if (n) {
kfree(n->shared);
free_alien_cache(n->alien);
@@ -2709,7 +2709,7 @@ static int cache_grow(struct kmem_cache
/* Take the node list lock to change the colour_next on this node */
check_irq_off();
- n = cachep->node[nodeid];
+ n = get_node(cachep, nodeid);
spin_lock(&n->list_lock);
/* Get colour for the slab, and cal the next value. */
@@ -2877,7 +2877,7 @@ retry:
*/
batchcount = BATCHREFILL_LIMIT;
}
- n = cachep->node[node];
+ n = get_node(cachep, node);
BUG_ON(ac->avail > 0 || !n);
spin_lock(&n->list_lock);
@@ -3121,8 +3121,8 @@ retry:
nid = zone_to_nid(zone);
if (cpuset_zone_allowed_hardwall(zone, flags) &&
- cache->node[nid] &&
- cache->node[nid]->free_objects) {
+ get_node(cache, nid) &&
+ get_node(cache, nid)->free_objects) {
obj = ____cache_alloc_node(cache,
flags | GFP_THISNODE, nid);
if (obj)
@@ -3185,7 +3185,7 @@ static void *____cache_alloc_node(struct
int x;
VM_BUG_ON(nodeid > num_online_nodes());
- n = cachep->node[nodeid];
+ n = get_node(cachep, nodeid);
BUG_ON(!n);
retry:
@@ -3256,7 +3256,7 @@ slab_alloc_node(struct kmem_cache *cache
if (nodeid == NUMA_NO_NODE)
nodeid = slab_node;
- if (unlikely(!cachep->node[nodeid])) {
+ if (unlikely(!get_node(cachep, nodeid))) {
/* Node not bootstrapped yet */
ptr = fallback_alloc(cachep, flags);
goto out;
@@ -3372,7 +3372,7 @@ static void free_block(struct kmem_cache
objp = objpp[i];
page = virt_to_head_page(objp);
- n = cachep->node[node];
+ n = get_node(cachep, node);
list_del(&page->lru);
check_spinlock_acquired_node(cachep, node);
slab_put_obj(cachep, page, objp, node);
@@ -3414,7 +3414,7 @@ static void cache_flusharray(struct kmem
BUG_ON(!batchcount || batchcount > ac->avail);
#endif
check_irq_off();
- n = cachep->node[node];
+ n = get_node(cachep, node);
spin_lock(&n->list_lock);
if (n->shared) {
struct array_cache *shared_array = n->shared;
@@ -3727,7 +3727,7 @@ static int alloc_kmem_cache_node(struct
}
}
- n = cachep->node[node];
+ n = get_node(cachep, node);
if (n) {
struct array_cache *shared = n->shared;
@@ -3772,8 +3772,8 @@ fail:
/* Cache is not active yet. Roll back what we did */
node--;
while (node >= 0) {
- if (cachep->node[node]) {
- n = cachep->node[node];
+ if (get_node(cachep, node)) {
+ n = get_node(cachep, node);
kfree(n->shared);
free_alien_cache(n->alien);
@@ -3838,9 +3838,9 @@ static int __do_tune_cpucache(struct kme
struct array_cache *ccold = new->new[i];
if (!ccold)
continue;
- spin_lock_irq(&cachep->node[cpu_to_mem(i)]->list_lock);
+ spin_lock_irq(&get_node(cachep, cpu_to_mem(i))->list_lock);
free_block(cachep, ccold->entry, ccold->avail, cpu_to_mem(i));
- spin_unlock_irq(&cachep->node[cpu_to_mem(i)]->list_lock);
+ spin_unlock_irq(&get_node(cachep, cpu_to_mem(i))->list_lock);
kfree(ccold);
}
kfree(new);
@@ -4000,7 +4000,7 @@ static void cache_reap(struct work_struc
* have established with reasonable certainty that
* we can do some work if the lock was obtained.
*/
- n = searchp->node[node];
+ n = get_node(searchp, node);
reap_alien(searchp, n);
@@ -4053,7 +4053,7 @@ void get_slabinfo(struct kmem_cache *cac
active_objs = 0;
num_slabs = 0;
for_each_online_node(node) {
- n = cachep->node[node];
+ n = get_node(cachep, node);
if (!n)
continue;
@@ -4290,7 +4290,7 @@ static int leaks_show(struct seq_file *m
x[1] = 0;
for_each_online_node(node) {
- n = cachep->node[node];
+ n = get_node(cachep, node);
if (!n)
continue;
* [PATCH 4/4] slab: Use for_each_kmem_cache_node function
2014-05-30 18:27 [PATCH 0/4] slab: common kmem_cache_cpu functions V1 Christoph Lameter
` (2 preceding siblings ...)
2014-05-30 18:27 ` [PATCH 3/4] slab: Use get_node function Christoph Lameter
@ 2014-05-30 18:27 ` Christoph Lameter
2014-06-02 5:12 ` Joonsoo Kim
3 siblings, 1 reply; 13+ messages in thread
From: Christoph Lameter @ 2014-05-30 18:27 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm@kvack.org, Andrew Morton, David Rientjes
Reduce code somewhat by using the new for_each_kmem_cache_node() iterator.
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c 2014-05-30 13:08:32.986856450 -0500
+++ linux/mm/slab.c 2014-05-30 13:08:32.986856450 -0500
@@ -2415,17 +2415,12 @@ static void drain_cpu_caches(struct kmem
on_each_cpu(do_drain, cachep, 1);
check_irq_on();
- for_each_online_node(node) {
- n = get_node(cachep, node);
- if (n && n->alien)
+ for_each_kmem_cache_node(cachep, node, n)
+ if (n->alien)
drain_alien_cache(cachep, n->alien);
- }
- for_each_online_node(node) {
- n = get_node(cachep, node);
- if (n)
- drain_array(cachep, n, n->shared, 1, node);
- }
+ for_each_kmem_cache_node(cachep, node, n)
+ drain_array(cachep, n, n->shared, 1, node);
}
/*
@@ -2478,11 +2473,7 @@ static int __cache_shrink(struct kmem_ca
drain_cpu_caches(cachep);
check_irq_on();
- for_each_online_node(i) {
- n = get_node(cachep, i);
- if (!n)
- continue;
-
+ for_each_kmem_cache_node(cachep, i, n) {
drain_freelist(cachep, n, slabs_tofree(cachep, n));
ret += !list_empty(&n->slabs_full) ||
@@ -2525,13 +2516,10 @@ int __kmem_cache_shutdown(struct kmem_ca
kfree(cachep->array[i]);
/* NUMA: free the node structures */
- for_each_online_node(i) {
- n = get_node(cachep, i);
- if (n) {
- kfree(n->shared);
- free_alien_cache(n->alien);
- kfree(n);
- }
+ for_each_kmem_cache_node(cachep, i, n) {
+ kfree(n->shared);
+ free_alien_cache(n->alien);
+ kfree(n);
}
return 0;
}
* Re: [PATCH 2/4] slub: Use new node functions
2014-05-30 18:27 ` [PATCH 2/4] slub: Use new node functions Christoph Lameter
@ 2014-06-02 4:59 ` Joonsoo Kim
2014-06-02 15:42 ` Christoph Lameter
0 siblings, 1 reply; 13+ messages in thread
From: Joonsoo Kim @ 2014-06-02 4:59 UTC (permalink / raw)
To: Christoph Lameter
Cc: Pekka Enberg, linux-mm@kvack.org, Andrew Morton, David Rientjes
On Fri, May 30, 2014 at 01:27:55PM -0500, Christoph Lameter wrote:
> Make use of the new node functions in mm/slab.h
>
> Signed-off-by: Christoph Lameter <cl@linux.com>
>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c 2014-05-30 13:15:30.541864121 -0500
> +++ linux/mm/slub.c 2014-05-30 13:15:30.541864121 -0500
> @@ -2148,6 +2148,7 @@ static noinline void
> slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
> {
> int node;
> + struct kmem_cache_node *n;
>
> printk(KERN_WARNING
> "SLUB: Unable to allocate memory on node %d (gfp=0x%x)\n",
> @@ -2160,15 +2161,11 @@ slab_out_of_memory(struct kmem_cache *s,
> printk(KERN_WARNING " %s debugging increased min order, use "
> "slub_debug=O to disable.\n", s->name);
>
> - for_each_online_node(node) {
> - struct kmem_cache_node *n = get_node(s, node);
> + for_each_kmem_cache_node(s, node, n) {
> unsigned long nr_slabs;
> unsigned long nr_objs;
> unsigned long nr_free;
>
> - if (!n)
> - continue;
> -
> nr_free = count_partial(n, count_free);
> nr_slabs = node_nr_slabs(n);
> nr_objs = node_nr_objs(n);
> @@ -4376,16 +4373,12 @@ static ssize_t show_slab_objects(struct
> static int any_slab_objects(struct kmem_cache *s)
> {
> int node;
> + struct kmem_cache_node *n;
>
> - for_each_online_node(node) {
> - struct kmem_cache_node *n = get_node(s, node);
> -
> - if (!n)
> - continue;
> -
> + for_each_kmem_cache_node(s, node, n)
> if (atomic_long_read(&n->total_objects))
> return 1;
> - }
> +
> return 0;
> }
> #endif
> @@ -5340,12 +5333,9 @@ void get_slabinfo(struct kmem_cache *s,
> unsigned long nr_objs = 0;
> unsigned long nr_free = 0;
> int node;
> + struct kmem_cache_node *n;
>
> - for_each_online_node(node) {
> - struct kmem_cache_node *n = get_node(s, node);
> -
> - if (!n)
> - continue;
> + for_each_kmem_cache_node(s, node, n) {
>
> nr_slabs += node_nr_slabs(n);
> nr_objs += node_nr_objs(n);
Hello, Christoph.
I think that we can use for_each_kmem_cache_node() instead of
using for_each_node_state(node, N_NORMAL_MEMORY). Just one
exception is init_kmem_cache_nodes() which is responsible
for setting kmem_cache_node correctly.
Is there any reason not to use it for for_each_node_state()?
Thanks.
* Re: [PATCH 4/4] slab: Use for_each_kmem_cache_node function
2014-05-30 18:27 ` [PATCH 4/4] slab: Use for_each_kmem_cache_node function Christoph Lameter
@ 2014-06-02 5:12 ` Joonsoo Kim
2014-06-02 15:45 ` Christoph Lameter
2014-06-02 17:43 ` Christoph Lameter
0 siblings, 2 replies; 13+ messages in thread
From: Joonsoo Kim @ 2014-06-02 5:12 UTC (permalink / raw)
To: Christoph Lameter
Cc: Pekka Enberg, linux-mm@kvack.org, Andrew Morton, David Rientjes
On Fri, May 30, 2014 at 01:27:57PM -0500, Christoph Lameter wrote:
> Reduce code somewhat by using the new for_each_kmem_cache_node() iterator.
Hello,
There are some other places that we can replace, such as get_slabinfo(),
leaks_show(), etc. If you want to replace for_each_online_node()
with for_each_kmem_cache_node(), please also replace them.
Meanwhile, I think that this change is not good for readability. There
are many for_each_online_node() uses that we can't replace, so I don't
think this abstraction is a really helpful clean-up. Possibly, using
for_each_online_node() consistently would be more readable than this
change.
Thanks.
>
> Signed-off-by: Christoph Lameter <cl@linux.com>
>
> Index: linux/mm/slab.c
> ===================================================================
> --- linux.orig/mm/slab.c 2014-05-30 13:08:32.986856450 -0500
> +++ linux/mm/slab.c 2014-05-30 13:08:32.986856450 -0500
> @@ -2415,17 +2415,12 @@ static void drain_cpu_caches(struct kmem
>
> on_each_cpu(do_drain, cachep, 1);
> check_irq_on();
> - for_each_online_node(node) {
> - n = get_node(cachep, node);
> - if (n && n->alien)
> + for_each_kmem_cache_node(cachep, node, n)
> + if (n->alien)
> drain_alien_cache(cachep, n->alien);
> - }
>
> - for_each_online_node(node) {
> - n = get_node(cachep, node);
> - if (n)
> - drain_array(cachep, n, n->shared, 1, node);
> - }
> + for_each_kmem_cache_node(cachep, node, n)
> + drain_array(cachep, n, n->shared, 1, node);
> }
>
> /*
> @@ -2478,11 +2473,7 @@ static int __cache_shrink(struct kmem_ca
> drain_cpu_caches(cachep);
>
> check_irq_on();
> - for_each_online_node(i) {
> - n = get_node(cachep, i);
> - if (!n)
> - continue;
> -
> + for_each_kmem_cache_node(cachep, i, n) {
> drain_freelist(cachep, n, slabs_tofree(cachep, n));
>
> ret += !list_empty(&n->slabs_full) ||
> @@ -2525,13 +2516,10 @@ int __kmem_cache_shutdown(struct kmem_ca
> kfree(cachep->array[i]);
>
> /* NUMA: free the node structures */
> - for_each_online_node(i) {
> - n = get_node(cachep, i);
> - if (n) {
> - kfree(n->shared);
> - free_alien_cache(n->alien);
> - kfree(n);
> - }
> + for_each_kmem_cache_node(cachep, i, n) {
> + kfree(n->shared);
> + free_alien_cache(n->alien);
> + kfree(n);
> }
> return 0;
> }
>
* Re: [PATCH 2/4] slub: Use new node functions
2014-06-02 4:59 ` Joonsoo Kim
@ 2014-06-02 15:42 ` Christoph Lameter
2014-06-03 6:57 ` Joonsoo Kim
0 siblings, 1 reply; 13+ messages in thread
From: Christoph Lameter @ 2014-06-02 15:42 UTC (permalink / raw)
To: Joonsoo Kim
Cc: Pekka Enberg, linux-mm@kvack.org, Andrew Morton, David Rientjes
On Mon, 2 Jun 2014, Joonsoo Kim wrote:
> I think that we can use for_each_kmem_cache_node() instead of
> using for_each_node_state(node, N_NORMAL_MEMORY). Just one
> exception is init_kmem_cache_nodes() which is responsible
> for setting kmem_cache_node correctly.
Yup.
> Is there any reason not to use it for for_each_node_state()?
There are two cases in which it doesn't work: free_kmem_cache_nodes() and
init_kmem_cache_nodes(), as you noted before. And there is a case in the
statistics subsystem that needs to be handled a bit differently.
Here is a patch doing the additional modifications:
Subject: slub: Replace for_each_node_state with for_each_kmem_cache_node
More uses for the new function.
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c 2014-05-30 13:23:24.863105538 -0500
+++ linux/mm/slub.c 2014-06-02 10:39:50.218883865 -0500
@@ -3210,11 +3210,11 @@ static void free_partial(struct kmem_cac
static inline int kmem_cache_close(struct kmem_cache *s)
{
int node;
+ struct kmem_cache_node *n;
flush_all(s);
/* Attempt to free all objects */
- for_each_node_state(node, N_NORMAL_MEMORY) {
- struct kmem_cache_node *n = get_node(s, node);
+ for_each_kmem_cache_node(s, node, n) {
free_partial(s, n);
if (n->nr_partial || slabs_node(s, node))
@@ -3400,11 +3400,7 @@ int kmem_cache_shrink(struct kmem_cache
return -ENOMEM;
flush_all(s);
- for_each_node_state(node, N_NORMAL_MEMORY) {
- n = get_node(s, node);
-
- if (!n->nr_partial)
- continue;
+ for_each_kmem_cache_node(s, node, n) {
for (i = 0; i < objects; i++)
INIT_LIST_HEAD(slabs_by_inuse + i);
@@ -3575,6 +3571,7 @@ static struct kmem_cache * __init bootst
{
int node;
struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
+ struct kmem_cache_node *n;
memcpy(s, static_cache, kmem_cache->object_size);
@@ -3584,19 +3581,16 @@ static struct kmem_cache * __init bootst
* IPIs around.
*/
__flush_cpu_slab(s, smp_processor_id());
- for_each_node_state(node, N_NORMAL_MEMORY) {
- struct kmem_cache_node *n = get_node(s, node);
+ for_each_kmem_cache_node(s, node, n) {
struct page *p;
- if (n) {
- list_for_each_entry(p, &n->partial, lru)
- p->slab_cache = s;
+ list_for_each_entry(p, &n->partial, lru)
+ p->slab_cache = s;
#ifdef CONFIG_SLUB_DEBUG
- list_for_each_entry(p, &n->full, lru)
- p->slab_cache = s;
+ list_for_each_entry(p, &n->full, lru)
+ p->slab_cache = s;
#endif
- }
}
list_add(&s->list, &slab_caches);
return s;
@@ -3952,16 +3946,14 @@ static long validate_slab_cache(struct k
unsigned long count = 0;
unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
sizeof(unsigned long), GFP_KERNEL);
+ struct kmem_cache_node *n;
if (!map)
return -ENOMEM;
flush_all(s);
- for_each_node_state(node, N_NORMAL_MEMORY) {
- struct kmem_cache_node *n = get_node(s, node);
-
+ for_each_kmem_cache_node(s, node, n)
count += validate_slab_node(s, n, map);
- }
kfree(map);
return count;
}
@@ -4115,6 +4107,7 @@ static int list_locations(struct kmem_ca
int node;
unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
sizeof(unsigned long), GFP_KERNEL);
+ struct kmem_cache_node *n;
if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
GFP_TEMPORARY)) {
@@ -4124,8 +4117,7 @@ static int list_locations(struct kmem_ca
/* Push back cpu slabs */
flush_all(s);
- for_each_node_state(node, N_NORMAL_MEMORY) {
- struct kmem_cache_node *n = get_node(s, node);
+ for_each_kmem_cache_node(s, node, n) {
unsigned long flags;
struct page *page;
@@ -4327,8 +4319,9 @@ static ssize_t show_slab_objects(struct
lock_memory_hotplug();
#ifdef CONFIG_SLUB_DEBUG
if (flags & SO_ALL) {
- for_each_node_state(node, N_NORMAL_MEMORY) {
- struct kmem_cache_node *n = get_node(s, node);
+ struct kmem_cache_node *n;
+
+ for_each_kmem_cache_node(s, node, n) {
if (flags & SO_TOTAL)
x = atomic_long_read(&n->total_objects);
@@ -4344,8 +4337,9 @@ static ssize_t show_slab_objects(struct
} else
#endif
if (flags & SO_PARTIAL) {
- for_each_node_state(node, N_NORMAL_MEMORY) {
- struct kmem_cache_node *n = get_node(s, node);
+ struct kmem_cache_node *n;
+
+ for_each_kmem_cache_node(s, node, n) {
if (flags & SO_TOTAL)
x = count_partial(n, count_total);
@@ -4359,7 +4353,7 @@ static ssize_t show_slab_objects(struct
}
x = sprintf(buf, "%lu", total);
#ifdef CONFIG_NUMA
- for_each_node_state(node, N_NORMAL_MEMORY)
+ for(node = 0; node < nr_node_ids; node++)
if (nodes[node])
x += sprintf(buf + x, " N%d=%lu",
node, nodes[node]);
* Re: [PATCH 4/4] slab: Use for_each_kmem_cache_node function
2014-06-02 5:12 ` Joonsoo Kim
@ 2014-06-02 15:45 ` Christoph Lameter
2014-06-02 15:53 ` Christoph Lameter
2014-06-02 17:43 ` Christoph Lameter
1 sibling, 1 reply; 13+ messages in thread
From: Christoph Lameter @ 2014-06-02 15:45 UTC (permalink / raw)
To: Joonsoo Kim
Cc: Pekka Enberg, linux-mm@kvack.org, Andrew Morton, David Rientjes
On Mon, 2 Jun 2014, Joonsoo Kim wrote:
> There are some other places that we can replace, such as get_slabinfo(),
> leaks_show(), etc. If you want to replace for_each_online_node()
> with for_each_kmem_cache_node(), please also replace them.
Ok we can do that.
> Meanwhile, I think that this change is not good for readability. There
> are many for_each_online_node() uses that we can't replace, so I don't
> think this abstraction is a really helpful clean-up. Possibly, using
> for_each_online_node() consistently would be more readable than this
> change.
What really matters is that we have the kmem_cache_node management
structure for the relevant node. There are periods during bootstrap when
kmem_cache_node is not yet allocated. Using this function also avoids race
conditions during node bringup and teardown.
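Concretely, the iterator from patch 1 expands to
	for (node = 0; n = get_node(s, node), node < nr_node_ids; node++)
		if (n) {
			/* body runs only for nodes whose kmem_cache_node exists */
		}
so node ids whose kmem_cache_node pointer is still NULL (during bootstrap,
or while a node is coming up or going down) are skipped without any
explicit check in the caller.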
* Re: [PATCH 4/4] slab: Use for_each_kmem_cache_node function
2014-06-02 15:45 ` Christoph Lameter
@ 2014-06-02 15:53 ` Christoph Lameter
0 siblings, 0 replies; 13+ messages in thread
From: Christoph Lameter @ 2014-06-02 15:53 UTC (permalink / raw)
To: Joonsoo Kim
Cc: Pekka Enberg, linux-mm@kvack.org, Andrew Morton, David Rientjes
Additional use cases for kmem_cache_node:
Subject: slab: use for_each_kmem_cache_node instead of for_each_online_node
Some use cases. More work could be done to clean this up and use
for_each_kmem_cache_node() in more places, but the structure of some of
these functions may have to be changed a bit.
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c 2014-05-30 13:23:24.879105040 -0500
+++ linux/mm/slab.c 2014-06-02 10:50:26.631319986 -0500
@@ -1632,14 +1632,10 @@ slab_out_of_memory(struct kmem_cache *ca
printk(KERN_WARNING " cache: %s, object size: %d, order: %d\n",
cachep->name, cachep->size, cachep->gfporder);
- for_each_online_node(node) {
+ for_each_kmem_cache_node(cachep, node, n) {
unsigned long active_objs = 0, num_objs = 0, free_objects = 0;
unsigned long active_slabs = 0, num_slabs = 0;
- n = cachep->node[node];
- if (!n)
- continue;
-
spin_lock_irqsave(&n->list_lock, flags);
list_for_each_entry(page, &n->slabs_full, lru) {
active_objs += cachep->num;
@@ -4040,10 +4036,7 @@ void get_slabinfo(struct kmem_cache *cac
active_objs = 0;
num_slabs = 0;
- for_each_online_node(node) {
- n = get_node(cachep, node);
- if (!n)
- continue;
+ for_each_kmem_cache_node(cachep, node, n) {
check_irq_on();
spin_lock_irq(&n->list_lock);
@@ -4277,10 +4270,7 @@ static int leaks_show(struct seq_file *m
x[1] = 0;
- for_each_online_node(node) {
- n = get_node(cachep, node);
- if (!n)
- continue;
+ for_each_kmem_cache_node(cachep, node, n) {
check_irq_on();
spin_lock_irq(&n->list_lock);
* Re: [PATCH 4/4] slab: Use for_each_kmem_cache_node function
2014-06-02 5:12 ` Joonsoo Kim
2014-06-02 15:45 ` Christoph Lameter
@ 2014-06-02 17:43 ` Christoph Lameter
1 sibling, 0 replies; 13+ messages in thread
From: Christoph Lameter @ 2014-06-02 17:43 UTC (permalink / raw)
To: Joonsoo Kim
Cc: Pekka Enberg, linux-mm@kvack.org, Andrew Morton, David Rientjes
On Mon, 2 Jun 2014, Joonsoo Kim wrote:
> Meanwhile, I think that this change is not good for readability. There
> are many for_each_online_node() uses that we can't replace, so I don't
We can replace many of them if we do not pass "node" around but rather a
pointer to the node structure. Like here:
Subject: slab: Use for_each_kmem_cache_node by reworking the call chain for slab_set_lock_classes
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c 2014-06-02 10:50:26.631319986 -0500
+++ linux/mm/slab.c 2014-06-02 12:34:15.279952487 -0500
@@ -455,16 +455,11 @@ static struct lock_class_key debugobj_al
static void slab_set_lock_classes(struct kmem_cache *cachep,
struct lock_class_key *l3_key, struct lock_class_key *alc_key,
- int q)
+ struct kmem_cache_node *n)
{
struct array_cache **alc;
- struct kmem_cache_node *n;
int r;
- n = get_node(cachep, q);
- if (!n)
- return;
-
lockdep_set_class(&n->list_lock, l3_key);
alc = n->alien;
/*
@@ -482,17 +477,19 @@ static void slab_set_lock_classes(struct
}
}
-static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep, int node)
+static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep,
+ struct kmem_cache_node *n)
{
- slab_set_lock_classes(cachep, &debugobj_l3_key, &debugobj_alc_key, node);
+ slab_set_lock_classes(cachep, &debugobj_l3_key, &debugobj_alc_key, n);
}
static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
{
int node;
+ struct kmem_cache_node *n;
- for_each_online_node(node)
- slab_set_debugobj_lock_classes_node(cachep, node);
+ for_each_kmem_cache_node(cachep, node, n)
+ slab_set_debugobj_lock_classes_node(cachep, n);
}
static void init_node_lock_keys(int q)
* Re: [PATCH 2/4] slub: Use new node functions
2014-06-02 15:42 ` Christoph Lameter
@ 2014-06-03 6:57 ` Joonsoo Kim
2014-06-03 14:47 ` Christoph Lameter
0 siblings, 1 reply; 13+ messages in thread
From: Joonsoo Kim @ 2014-06-03 6:57 UTC (permalink / raw)
To: Christoph Lameter
Cc: Pekka Enberg, linux-mm@kvack.org, Andrew Morton, David Rientjes
On Mon, Jun 02, 2014 at 10:42:35AM -0500, Christoph Lameter wrote:
> On Mon, 2 Jun 2014, Joonsoo Kim wrote:
>
> > I think that we can use for_each_kmem_cache_node() instead of
> > using for_each_node_state(node, N_NORMAL_MEMORY). Just one
> > exception is init_kmem_cache_nodes() which is responsible
> > for setting kmem_cache_node correctly.
>
> Yup.
>
> > Is there any reason not to use it for for_each_node_state()?
>
> There are two cases in which it doesn't work: free_kmem_cache_nodes() and
> init_kmem_cache_nodes(), as you noted before. And there is a case in the
> statistics subsystem that needs to be handled a bit differently.
Hello,
I think that we can also replace for_each_node_state() in
free_kmem_cache_nodes(). What prevents it from being replaced?
>
> Here is a patch doing the additional modifications:
>
Seems good to me.
Thanks.
>
>
>
> Subject: slub: Replace for_each_node_state with for_each_kmem_cache_node
>
> More uses for the new function.
>
> Signed-off-by: Christoph Lameter <cl@linux.com>
>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c 2014-05-30 13:23:24.863105538 -0500
> +++ linux/mm/slub.c 2014-06-02 10:39:50.218883865 -0500
> @@ -3210,11 +3210,11 @@ static void free_partial(struct kmem_cac
> static inline int kmem_cache_close(struct kmem_cache *s)
> {
> int node;
> + struct kmem_cache_node *n;
>
> flush_all(s);
> /* Attempt to free all objects */
> - for_each_node_state(node, N_NORMAL_MEMORY) {
> - struct kmem_cache_node *n = get_node(s, node);
> + for_each_kmem_cache_node(s, node, n) {
>
> free_partial(s, n);
> if (n->nr_partial || slabs_node(s, node))
> @@ -3400,11 +3400,7 @@ int kmem_cache_shrink(struct kmem_cache
> return -ENOMEM;
>
> flush_all(s);
> - for_each_node_state(node, N_NORMAL_MEMORY) {
> - n = get_node(s, node);
> -
> - if (!n->nr_partial)
> - continue;
> + for_each_kmem_cache_node(s, node, n) {
>
> for (i = 0; i < objects; i++)
> INIT_LIST_HEAD(slabs_by_inuse + i);
> @@ -3575,6 +3571,7 @@ static struct kmem_cache * __init bootst
> {
> int node;
> struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
> + struct kmem_cache_node *n;
>
> memcpy(s, static_cache, kmem_cache->object_size);
>
> @@ -3584,19 +3581,16 @@ static struct kmem_cache * __init bootst
> * IPIs around.
> */
> __flush_cpu_slab(s, smp_processor_id());
> - for_each_node_state(node, N_NORMAL_MEMORY) {
> - struct kmem_cache_node *n = get_node(s, node);
> + for_each_kmem_cache_node(s, node, n) {
> struct page *p;
>
> - if (n) {
> - list_for_each_entry(p, &n->partial, lru)
> - p->slab_cache = s;
> + list_for_each_entry(p, &n->partial, lru)
> + p->slab_cache = s;
>
> #ifdef CONFIG_SLUB_DEBUG
> - list_for_each_entry(p, &n->full, lru)
> - p->slab_cache = s;
> + list_for_each_entry(p, &n->full, lru)
> + p->slab_cache = s;
> #endif
> - }
> }
> list_add(&s->list, &slab_caches);
> return s;
> @@ -3952,16 +3946,14 @@ static long validate_slab_cache(struct k
> unsigned long count = 0;
> unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
> sizeof(unsigned long), GFP_KERNEL);
> + struct kmem_cache_node *n;
>
> if (!map)
> return -ENOMEM;
>
> flush_all(s);
> - for_each_node_state(node, N_NORMAL_MEMORY) {
> - struct kmem_cache_node *n = get_node(s, node);
> -
> + for_each_kmem_cache_node(s, node, n)
> count += validate_slab_node(s, n, map);
> - }
> kfree(map);
> return count;
> }
> @@ -4115,6 +4107,7 @@ static int list_locations(struct kmem_ca
> int node;
> unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
> sizeof(unsigned long), GFP_KERNEL);
> + struct kmem_cache_node *n;
>
> if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
> GFP_TEMPORARY)) {
> @@ -4124,8 +4117,7 @@ static int list_locations(struct kmem_ca
> /* Push back cpu slabs */
> flush_all(s);
>
> - for_each_node_state(node, N_NORMAL_MEMORY) {
> - struct kmem_cache_node *n = get_node(s, node);
> + for_each_kmem_cache_node(s, node, n) {
> unsigned long flags;
> struct page *page;
>
> @@ -4327,8 +4319,9 @@ static ssize_t show_slab_objects(struct
> lock_memory_hotplug();
> #ifdef CONFIG_SLUB_DEBUG
> if (flags & SO_ALL) {
> - for_each_node_state(node, N_NORMAL_MEMORY) {
> - struct kmem_cache_node *n = get_node(s, node);
> + struct kmem_cache_node *n;
> +
> + for_each_kmem_cache_node(s, node, n) {
>
> if (flags & SO_TOTAL)
> x = atomic_long_read(&n->total_objects);
> @@ -4344,8 +4337,9 @@ static ssize_t show_slab_objects(struct
> } else
> #endif
> if (flags & SO_PARTIAL) {
> - for_each_node_state(node, N_NORMAL_MEMORY) {
> - struct kmem_cache_node *n = get_node(s, node);
> + struct kmem_cache_node *n;
> +
> + for_each_kmem_cache_node(s, node, n) {
>
> if (flags & SO_TOTAL)
> x = count_partial(n, count_total);
> @@ -4359,7 +4353,7 @@ static ssize_t show_slab_objects(struct
> }
> x = sprintf(buf, "%lu", total);
> #ifdef CONFIG_NUMA
> - for_each_node_state(node, N_NORMAL_MEMORY)
> + for(node = 0; node < nr_node_ids; node++)
> if (nodes[node])
> x += sprintf(buf + x, " N%d=%lu",
> node, nodes[node]);
>
* Re: [PATCH 2/4] slub: Use new node functions
2014-06-03 6:57 ` Joonsoo Kim
@ 2014-06-03 14:47 ` Christoph Lameter
0 siblings, 0 replies; 13+ messages in thread
From: Christoph Lameter @ 2014-06-03 14:47 UTC (permalink / raw)
To: Joonsoo Kim
Cc: Pekka Enberg, linux-mm@kvack.org, Andrew Morton, David Rientjes
On Tue, 3 Jun 2014, Joonsoo Kim wrote:
> I think that We can also replace for_each_node_state() in
> free_kmem_cache_nodes(). What prevent it from being replaced?
There is the problem that we are assigning NULL to s->node[node], which
would not be covered, so I thought I'd defer that until later when we deal
with the corner cases.
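For context, a rough sketch of the loop in question (the details here are
assumptions for illustration, not the exact kernel code):
	/* sketch of free_kmem_cache_nodes(); shape assumed, not verbatim */
	for_each_node_state(node, N_NORMAL_MEMORY) {
		struct kmem_cache_node *n = get_node(s, node);
		if (n)
			kmem_cache_free(kmem_cache_node, n);
		/*
		 * The filtered iterator would skip slots that are already
		 * NULL, so this assignment would not be covered for them.
		 */
		s->node[node] = NULL;
	}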
> >
> > Here is a patch doing the additional modifications:
> >
>
> Seems good to me.
Ok, who is queuing the patches?
Thread overview: 13+ messages
2014-05-30 18:27 [PATCH 0/4] slab: common kmem_cache_cpu functions V1 Christoph Lameter
2014-05-30 18:27 ` [PATCH 1/4] slab common: Add functions for kmem_cache_node access Christoph Lameter
2014-05-30 18:27 ` [PATCH 2/4] slub: Use new node functions Christoph Lameter
2014-06-02 4:59 ` Joonsoo Kim
2014-06-02 15:42 ` Christoph Lameter
2014-06-03 6:57 ` Joonsoo Kim
2014-06-03 14:47 ` Christoph Lameter
2014-05-30 18:27 ` [PATCH 3/4] slab: Use get_node function Christoph Lameter
2014-05-30 18:27 ` [PATCH 4/4] slab: Use for_each_kmem_cache_node function Christoph Lameter
2014-06-02 5:12 ` Joonsoo Kim
2014-06-02 15:45 ` Christoph Lameter
2014-06-02 15:53 ` Christoph Lameter
2014-06-02 17:43 ` Christoph Lameter