From: "Tobin C. Harding" <tobin@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Tobin C. Harding" <tobin@kernel.org>,
	Christopher Lameter <cl@linux.com>,
	Pekka Enberg <penberg@cs.helsinki.fi>,
	Matthew Wilcox <willy@infradead.org>,
	Tycho Andersen <tycho@tycho.ws>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 02/15] slub: Add isolate() and migrate() methods
Date: Fri,  8 Mar 2019 15:14:13 +1100
Message-ID: <20190308041426.16654-3-tobin@kernel.org>
In-Reply-To: <20190308041426.16654-1-tobin@kernel.org>

Add the two methods needed for moving objects and enable the display of
the callbacks via the /sys/kernel/slab interface.
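
For illustration only (hypothetical cache and callback names, not
output produced by this patch), the ops file would then read
something like:

  # cat /sys/kernel/slab/<cache>/ops
  ctor : foo_ctor+0x0/0x10
  isolate : foo_isolate+0x0/0x60
  migrate : foo_migrate+0x0/0x1a0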

Add documentation explaining the use of these methods and add the
prototypes to slab.h.  Add a function to set up the callbacks for a
slab cache.

Add empty functions for SLAB/SLOB.  The API is generic, so it could
theoretically be implemented for these allocators as well.
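
As an illustrative sketch only (not part of this patch): a subsystem
with a hypothetical refcounted 'struct foo' cache, created with a
constructor, might wire the callbacks up roughly as below.  foo_cache,
foo_copy(), foo_repoint_users() and foo_put() are placeholders, not
real kernel API.

static void *foo_isolate(struct kmem_cache *s, void **objs, int nr)
{
	int i;

	for (i = 0; i < nr; i++) {
		struct foo *f = objs[i];

		/* Pin the object; skip it if it is already being freed. */
		if (!refcount_inc_not_zero(&f->ref))
			objs[i] = NULL;
	}
	return NULL;	/* no private state needed in this sketch */
}

static void foo_migrate(struct kmem_cache *s, void **objs, int nr,
			int node, void *private)
{
	int i;

	for (i = 0; i < nr; i++) {
		struct foo *old = objs[i];
		struct foo *new;

		if (!old)
			continue;

		new = kmem_cache_alloc_node(s, GFP_KERNEL, node);
		if (new) {
			foo_copy(new, old);
			foo_repoint_users(old, new);
		}
		/* Drop the pin from foo_isolate(); the final put frees 'old'. */
		foo_put(old);
	}
}

static int __init foo_init(void)
{
	/* foo_cache must have been created with a ctor (required). */
	kmem_cache_setup_mobility(foo_cache, foo_isolate, foo_migrate);
	return 0;
}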

Co-developed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
---
 include/linux/slab.h     | 69 ++++++++++++++++++++++++++++++++++++++++
 include/linux/slub_def.h |  3 ++
 mm/slab_common.c         |  4 +++
 mm/slub.c                | 42 ++++++++++++++++++++++++
 4 files changed, 118 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11b45f7ae405..22e87c41b8a4 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -152,6 +152,75 @@ void memcg_create_kmem_cache(struct mem_cgroup *, struct kmem_cache *);
 void memcg_deactivate_kmem_caches(struct mem_cgroup *);
 void memcg_destroy_kmem_caches(struct mem_cgroup *);
 
+/*
+ * Function prototypes passed to kmem_cache_setup_mobility() to enable
+ * mobile objects and targeted reclaim in slab caches.
+ */
+
+/**
+ * typedef kmem_cache_isolate_func - Object isolation callback function.
+ * @s: The cache we are working on.
+ * @ptr: Pointer to an array of pointers to the objects to migrate.
+ * @nr: Number of objects in array.
+ *
+ * The purpose of kmem_cache_isolate_func() is to pin the objects so
+ * that they cannot be freed until kmem_cache_migrate_func() has
+ * processed them.  This may be accomplished by increasing the refcount
+ * or setting a flag.
+ *
+ * The object pointer array passed in is also passed to
+ * kmem_cache_migrate_func().  The function may remove objects from the
+ * array by setting pointers to NULL.  This is useful if it can be
+ * determined that an object is already being freed, for example
+ * because kmem_cache_isolate_func() ran while the subsystem was in the
+ * middle of a kmem_cache_free() call.  In that case it is not
+ * necessary to increase the refcount or specially mark the object,
+ * because releasing the slab lock will lead to its immediate freeing.
+ *
+ * Context: Called with locks held so that the slab objects cannot be
+ *          freed.  We are in an atomic context and no slab operations
+ *          may be performed.
+ * Return: A pointer that is passed to the migrate function.  If any
+ *         objects cannot be touched at this point then the pointer may
+ *         indicate a failure, in which case the migration function can
+ *         simply remove the references that were already obtained.  The
+ *         private data could be used to track the objects that were already pinned.
+ */
+typedef void *kmem_cache_isolate_func(struct kmem_cache *s, void **ptr, int nr);
+
+/**
+ * typedef kmem_cache_migrate_func - Object migration callback function.
+ * @s: The cache we are working on.
+ * @ptr: Pointer to an array of pointers to the objects to migrate.
+ * @nr: Number of objects in array.
+ * @node: The NUMA node where the object should be allocated.
+ * @private: The pointer returned by kmem_cache_isolate_func().
+ *
+ * This function is responsible for migrating objects.  Typically, for
+ * each object in the input array you will want to allocate a new
+ * object, copy the original object, update any pointers, and free the
+ * old object.
+ *
+ * After this function returns, all pointers to the old objects should
+ * point to the new objects.
+ *
+ * Context: Called with no locks held and interrupts enabled.  Sleeping
+ *          is possible.  Any operation may be performed.
+ */
+typedef void kmem_cache_migrate_func(struct kmem_cache *s, void **ptr,
+				     int nr, int node, void *private);
+
+/*
+ * kmem_cache_setup_mobility() is used to setup callbacks for a slab cache.
+ */
+#ifdef CONFIG_SLUB
+void kmem_cache_setup_mobility(struct kmem_cache *, kmem_cache_isolate_func,
+			       kmem_cache_migrate_func);
+#else
+static inline void kmem_cache_setup_mobility(struct kmem_cache *s,
+	kmem_cache_isolate_func isolate, kmem_cache_migrate_func migrate) {}
+#endif
+
 /*
  * Please use this macro to create slab caches. Simply specify the
  * name of the structure and maybe some flags that are listed above.
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 3a1a1dbc6f49..a7340a1ed5dc 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -99,6 +99,9 @@ struct kmem_cache {
 	gfp_t allocflags;	/* gfp flags to use on each alloc */
 	int refcount;		/* Refcount for slab cache destroy */
 	void (*ctor)(void *);
+	kmem_cache_isolate_func *isolate;
+	kmem_cache_migrate_func *migrate;
+
 	unsigned int inuse;		/* Offset to metadata */
 	unsigned int align;		/* Alignment */
 	unsigned int red_left_pad;	/* Left redzone padding size */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f9d89c1b5977..754acdb292e4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -298,6 +298,10 @@ int slab_unmergeable(struct kmem_cache *s)
 	if (!is_root_cache(s))
 		return 1;
 
+	/*
+	 * s->isolate and s->migrate imply s->ctor, so there is no need
+	 * to check them explicitly.
+	 */
 	if (s->ctor)
 		return 1;
 
diff --git a/mm/slub.c b/mm/slub.c
index 69164aa7cbbf..0133168d1089 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4325,6 +4325,34 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return err;
 }
 
+void kmem_cache_setup_mobility(struct kmem_cache *s,
+			       kmem_cache_isolate_func isolate,
+			       kmem_cache_migrate_func migrate)
+{
+	/*
+	 * Mobile objects must have a ctor; otherwise the object may be
+	 * in an undefined state on allocation.  Since the object may
+	 * need to be inspected by the migration function at any time
+	 * after allocation we must ensure that the object always has a
+	 * defined state.
+	 */
+	if (!s->ctor) {
+		pr_err("%s: cannot setup mobility without a constructor\n",
+		       s->name);
+		return;
+	}
+
+	s->isolate = isolate;
+	s->migrate = migrate;
+
+	/*
+	 * Sadly, serialization requirements currently mean that we have
+	 * to disable fast cmpxchg-based processing.
+	 */
+	s->flags &= ~__CMPXCHG_DOUBLE;
+}
+EXPORT_SYMBOL(kmem_cache_setup_mobility);
+
 void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 {
 	struct kmem_cache *s;
@@ -5018,6 +5046,20 @@ static ssize_t ops_show(struct kmem_cache *s, char *buf)
 
 	if (s->ctor)
 		x += sprintf(buf + x, "ctor : %pS\n", s->ctor);
+
+	if (s->isolate) {
+		x += sprintf(buf + x, "isolate : ");
+		x += sprint_symbol(buf + x,
+				(unsigned long)s->isolate);
+		x += sprintf(buf + x, "\n");
+	}
+
+	if (s->migrate) {
+		x += sprintf(buf + x, "migrate : ");
+		x += sprint_symbol(buf + x,
+				(unsigned long)s->migrate);
+		x += sprintf(buf + x, "\n");
+	}
 	return x;
 }
 SLAB_ATTR_RO(ops);
-- 
2.21.0


Thread overview: 41+ messages
2019-03-08  4:14 [RFC 00/15] mm: Implement Slab Movable Objects (SMO) Tobin C. Harding
2019-03-08  4:14 ` [RFC 01/15] slub: Create sysfs field /sys/slab/<cache>/ops Tobin C. Harding
2019-03-11 21:23   ` Roman Gushchin
2019-03-12  1:16     ` Tobin C. Harding
2019-03-08  4:14 ` Tobin C. Harding [this message]
2019-03-08 15:28   ` [RFC 02/15] slub: Add isolate() and migrate() methods Tycho Andersen
2019-03-08 16:15     ` Christopher Lameter
2019-03-08 16:22       ` Tycho Andersen
2019-03-08 19:53         ` Tobin C. Harding
2019-03-08 20:08           ` Tycho Andersen
2019-03-11 21:51   ` Roman Gushchin
2019-03-12  1:08     ` Tobin C. Harding
2019-03-12  4:35     ` Christopher Lameter
2019-03-12 18:47       ` Roman Gushchin
2019-03-08  4:14 ` [RFC 03/15] tools/vm/slabinfo: Add support for -C and -F options Tobin C. Harding
2019-03-11 21:54   ` Roman Gushchin
2019-03-12  1:20     ` Tobin C. Harding
2019-03-08  4:14 ` [RFC 04/15] slub: Enable Slab Movable Objects (SMO) Tobin C. Harding
2019-03-11 22:48   ` Roman Gushchin
2019-03-12  1:47     ` Tobin C. Harding
2019-03-12 18:00       ` Roman Gushchin
2019-03-12  4:39     ` Christopher Lameter
2019-03-08  4:14 ` [RFC 05/15] slub: Sort slab cache list Tobin C. Harding
2019-03-08  4:14 ` [RFC 06/15] tools/vm/slabinfo: Add remote node defrag ratio output Tobin C. Harding
2019-03-08  4:14 ` [RFC 07/15] slub: Add defrag_used_ratio field and sysfs support Tobin C. Harding
2019-03-08 16:01   ` Tycho Andersen
2019-03-11  6:04     ` Tobin C. Harding
2019-03-08  4:14 ` [RFC 08/15] tools/vm/slabinfo: Add defrag_used_ratio output Tobin C. Harding
2019-03-08  4:14 ` [RFC 09/15] slub: Enable slab defragmentation using SMO Tobin C. Harding
2019-03-11 23:35   ` Roman Gushchin
2019-03-12  1:49     ` Tobin C. Harding
2019-03-08  4:14 ` [RFC 10/15] tools/testing/slab: Add object migration test module Tobin C. Harding
2019-03-08  4:14 ` [RFC 11/15] tools/testing/slab: Add object migration test suite Tobin C. Harding
2019-03-08  4:14 ` [RFC 12/15] xarray: Implement migration function for objects Tobin C. Harding
2019-03-12  0:16   ` Roman Gushchin
2019-03-12  1:54     ` Tobin C. Harding
2019-03-08  4:14 ` [RFC 13/15] tools/testing/slab: Add XArray movable objects tests Tobin C. Harding
2019-03-08  4:14 ` [RFC 14/15] slub: Enable move _all_ objects to node Tobin C. Harding
2019-03-08  4:14 ` [RFC 15/15] slub: Enable balancing slab objects across nodes Tobin C. Harding
2019-03-12  0:09 ` [RFC 00/15] mm: Implement Slab Movable Objects (SMO) Roman Gushchin
2019-03-12  1:48   ` Tobin C. Harding
