linux-mm.kvack.org archive mirror
* [PATCH RFC 0/3] kmemcg slab reparenting
From: Vladimir Davydov @ 2014-05-13 13:48 UTC
  To: hannes, mhocko, cl; +Cc: akpm, linux-kernel, linux-mm

Hi Johannes, Michal, Christoph,

Recently I posted my thoughts on how we can handle kmem caches of dead
memcgs:

https://lkml.org/lkml/2014/4/20/38

The only feedback I got then was from Johannes, who voted for
migrating slabs of such caches to the parent memcg's cache (so-called
reparenting), so in this RFC I'd like to propose a draft of a
possible implementation of slab reparenting. I'd appreciate it if you
could look through it and say whether it's worth developing in this
direction.
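
To illustrate the core idea, here is a minimal sketch of moving a
dead cache's per-node slabs over to the parent cache. The helper is
hypothetical and heavily simplified; the real patch also has to
switch the slab pages' cache pointers, drain per-cpu slabs, and get
the locking order right:

static void reparent_slabs_node(struct kmem_cache *s,
				struct kmem_cache *parent, int node)
{
	struct kmem_cache_node *n = get_node(s, node);
	struct kmem_cache_node *pn = get_node(parent, node);

	spin_lock_irq(&pn->list_lock);
	spin_lock_nested(&n->list_lock, SINGLE_DEPTH_NESTING);

	/* hand the dead cache's partial slabs over to the parent */
	list_splice_init(&n->partial, &pn->partial);
	pn->nr_partial += n->nr_partial;
	n->nr_partial = 0;

	/* patch 1 keeps full slabs on a list so they can move too */
	list_splice_init(&n->full, &pn->full);

	spin_unlock(&n->list_lock);
	spin_unlock_irq(&pn->list_lock);
}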

The implementation of reparenting is given in patch 3, which is the
most important part of this set. Patch 1 just makes slub keep full
slabs on a list, and patch 2 slightly extends the percpu-refcount
interface.
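
As for patch 2, the extension boils down to being able to take a
reference on a percpu ref that has already been killed, as long as
its count has not yet dropped to zero. Conceptually it could look
like the sketch below; the function name is made up and the actual
patch differs, this just shows the idea against the internals of the
time:

/*
 * Sketch only. After percpu_ref_kill() the ref operates in atomic
 * mode, so "getting a dead reference" amounts to incrementing the
 * atomic counter unless it has already reached zero, i.e. unless
 * the release function has already run.
 */
static inline bool percpu_ref_tryget_dead(struct percpu_ref *ref)
{
	return atomic_inc_not_zero(&ref->count);
}

Presumably this is what allows pinning a cache after its memcg has
gone offline.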

NOTE: the implementation is given only for slub, though it should be
easy to implement the same hack for slab.

Thanks,

Vladimir Davydov (3):
  slub: keep full slabs on list for per memcg caches
  percpu-refcount: allow to get dead reference
  slub: reparent memcg caches' slabs on memcg offline

 include/linux/memcontrol.h      |    4 +-
 include/linux/percpu-refcount.h |   11 +-
 include/linux/slab.h            |    7 +-
 mm/memcontrol.c                 |   54 ++++---
 mm/slab.h                       |    7 +-
 mm/slub.c                       |  299 ++++++++++++++++++++++++++++++++++-----
 6 files changed, 318 insertions(+), 64 deletions(-)

-- 
1.7.10.4

Thread overview: 31+ messages
2014-05-13 13:48 [PATCH RFC 0/3] kmemcg slab reparenting Vladimir Davydov
2014-05-13 13:48 ` [PATCH RFC 1/3] slub: keep full slabs on list for per memcg caches Vladimir Davydov
2014-05-14 16:16   ` Christoph Lameter
2014-05-15  6:34     ` Vladimir Davydov
2014-05-15 15:15       ` Christoph Lameter
2014-05-16 13:06         ` Vladimir Davydov
2014-05-16 15:05           ` Christoph Lameter
2014-05-13 13:48 ` [PATCH RFC 2/3] percpu-refcount: allow to get dead reference Vladimir Davydov
2014-05-13 13:48 ` [PATCH RFC 3/3] slub: reparent memcg caches' slabs on memcg offline Vladimir Davydov
2014-05-14 16:20   ` Christoph Lameter
2014-05-15  7:16     ` Vladimir Davydov
2014-05-15 15:16       ` Christoph Lameter
2014-05-16 13:22         ` Vladimir Davydov
2014-05-16 15:03           ` Christoph Lameter
2014-05-19 15:24             ` Vladimir Davydov
2014-05-19 16:03               ` Christoph Lameter
2014-05-19 18:27                 ` Vladimir Davydov
2014-05-21 13:58                   ` Vladimir Davydov
2014-05-21 14:45                     ` Christoph Lameter
2014-05-21 15:14                       ` Vladimir Davydov
2014-05-22  0:15                         ` Christoph Lameter
2014-05-22 14:07                           ` Vladimir Davydov
2014-05-21 14:41                   ` Christoph Lameter
2014-05-21 15:04                     ` Vladimir Davydov
2014-05-22  0:13                       ` Christoph Lameter
2014-05-22 13:47                         ` Vladimir Davydov
2014-05-22 19:25                           ` Christoph Lameter
2014-05-23 15:26                             ` Vladimir Davydov
2014-05-23 17:45                               ` Christoph Lameter
2014-05-23 19:57                                 ` Vladimir Davydov
2014-05-27 14:38                                   ` Christoph Lameter
