public inbox for linux-kernel@vger.kernel.org
* [PATCH 0 of 3] SLOB patches for 2.6.25
@ 2008-01-17 17:57 Matt Mackall
  2008-01-17 17:57 ` [PATCH 1 of 3] slob: fix free block merging at head of subpage Matt Mackall
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Matt Mackall @ 2008-01-17 17:57 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

Here are some SLOB fragmentation reduction patches for 2.6.25, please apply.

^ permalink raw reply	[flat|nested] 4+ messages in thread

* [PATCH 1 of 3] slob: fix free block merging at head of subpage
  2008-01-17 17:57 [PATCH 0 of 3] SLOB patches for 2.6.25 Matt Mackall
@ 2008-01-17 17:57 ` Matt Mackall
  2008-01-17 17:57 ` [PATCH 2 of 3] slob: reduce external fragmentation by using three free lists Matt Mackall
  2008-01-17 17:57 ` [PATCH 3 of 3] slob: correct Kconfig description Matt Mackall
  2 siblings, 0 replies; 4+ messages in thread
From: Matt Mackall @ 2008-01-17 17:57 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

We weren't merging freed blocks at the beginning of the free list.
Fixing this showed a 2.5% efficiency improvement in a userspace test
harness.

Signed-off-by: Matt Mackall <mpm@selenic.com>

diff -r 408d4beddb6c -r 771a5ab2c6b7 mm/slob.c
--- a/mm/slob.c	Wed Jan 16 19:00:27 2008 +0000
+++ b/mm/slob.c	Wed Jan 16 18:14:29 2008 -0600
@@ -398,6 +398,10 @@
 	sp->units += units;
 
 	if (b < sp->free) {
+		if (b + units == sp->free) {
+			units += slob_units(sp->free);
+			sp->free = slob_next(sp->free);
+		}
 		set_slob(b, units, sp->free);
 		sp->free = b;
 	} else {

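To see what the hunk above does, here is a rough userspace model of the merge-at-head case the patch adds. The names (`struct blk`, `free_before_head`, `unit_size`) are illustrative only, not the kernel's actual slob_t layout, which packs the size and next-offset into the block itself:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for a SLOB free block: size in allocation
 * units plus a pointer to the next free block on the page. */
struct blk {
	size_t units;
	struct blk *next;
};

/*
 * Free a block that sits below the current free-list head, merging it
 * with the head when the two are physically contiguous -- the case the
 * patch adds. `unit_size` is the byte size of one allocation unit.
 * Returns the new list head.
 */
static struct blk *free_before_head(struct blk *b, size_t units,
				    struct blk *head, size_t unit_size)
{
	if ((char *)b + units * unit_size == (char *)head) {
		/* b ends exactly where the old head begins: coalesce. */
		units += head->units;
		head = head->next;
	}
	b->units = units;
	b->next = head;
	return b;
}
```

Without the contiguity check, two adjacent free blocks would sit on the list separately and a later allocation needing their combined size would fail to find them, which is the fragmentation the patch removes.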

* [PATCH 2 of 3] slob: reduce external fragmentation by using three free lists
  2008-01-17 17:57 [PATCH 0 of 3] SLOB patches for 2.6.25 Matt Mackall
  2008-01-17 17:57 ` [PATCH 1 of 3] slob: fix free block merging at head of subpage Matt Mackall
@ 2008-01-17 17:57 ` Matt Mackall
  2008-01-17 17:57 ` [PATCH 3 of 3] slob: correct Kconfig description Matt Mackall
  2 siblings, 0 replies; 4+ messages in thread
From: Matt Mackall @ 2008-01-17 17:57 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

By putting smaller objects on their own list, we greatly reduce
overall external fragmentation and increase repeatability. This
reduces total SLOB overhead from > 50% to ~6% on a simple boot test.

Signed-off-by: Matt Mackall <mpm@selenic.com>

diff -r 771a5ab2c6b7 -r eff994add207 mm/slob.c
--- a/mm/slob.c	Wed Jan 16 18:14:29 2008 -0600
+++ b/mm/slob.c	Wed Jan 16 18:14:29 2008 -0600
@@ -12,10 +12,17 @@
  * allocator is as little as 2 bytes, however typically most architectures
  * will require 4 bytes on 32-bit and 8 bytes on 64-bit.
  *
- * The slob heap is a linked list of pages from alloc_pages(), and
- * within each page, there is a singly-linked list of free blocks (slob_t).
- * The heap is grown on demand and allocation from the heap is currently
- * first-fit.
+ * The slob heap is a set of linked lists of pages from alloc_pages(),
+ * and within each page, there is a singly-linked list of free blocks
+ * (slob_t). The heap is grown on demand. To reduce fragmentation,
+ * heap pages are segregated into three lists, with objects less than
+ * 256 bytes, objects less than 1024 bytes, and all other objects.
+ *
+ * Allocation from the heap involves first searching for a page with
+ * sufficient free blocks (using a next-fit-like approach) followed by
+ * a first-fit scan of the page. Deallocation inserts objects back
+ * into the free list in address order, so this is effectively an
+ * address-ordered first fit.
  *
  * Above this is an implementation of kmalloc/kfree. Blocks returned
  * from kmalloc are prepended with a 4-byte header with the kmalloc size.
@@ -110,9 +117,13 @@
 }
 
 /*
- * All (partially) free slob pages go on this list.
+ * All partially free slob pages go on these lists.
  */
-static LIST_HEAD(free_slob_pages);
+#define SLOB_BREAK1 256
+#define SLOB_BREAK2 1024
+static LIST_HEAD(free_slob_small);
+static LIST_HEAD(free_slob_medium);
+static LIST_HEAD(free_slob_large);
 
 /*
  * slob_page: True for all slob pages (false for bigblock pages)
@@ -140,9 +151,9 @@
 	return test_bit(PG_private, &sp->flags);
 }
 
-static inline void set_slob_page_free(struct slob_page *sp)
+static void set_slob_page_free(struct slob_page *sp, struct list_head *list)
 {
-	list_add(&sp->list, &free_slob_pages);
+	list_add(&sp->list, list);
 	__set_bit(PG_private, &sp->flags);
 }
 
@@ -294,12 +305,20 @@
 {
 	struct slob_page *sp;
 	struct list_head *prev;
+	struct list_head *slob_list;
 	slob_t *b = NULL;
 	unsigned long flags;
 
+	if (size < SLOB_BREAK1)
+		slob_list = &free_slob_small;
+	else if (size < SLOB_BREAK2)
+		slob_list = &free_slob_medium;
+	else
+		slob_list = &free_slob_large;
+
 	spin_lock_irqsave(&slob_lock, flags);
 	/* Iterate through each partially free page, try to find room */
-	list_for_each_entry(sp, &free_slob_pages, list) {
+	list_for_each_entry(sp, slob_list, list) {
 #ifdef CONFIG_NUMA
 		/*
 		 * If there's a node specification, search for a partial
@@ -321,9 +340,9 @@
 		/* Improve fragment distribution and reduce our average
 		 * search time by starting our next search here. (see
 		 * Knuth vol 1, sec 2.5, pg 449) */
-		if (prev != free_slob_pages.prev &&
-				free_slob_pages.next != prev->next)
-			list_move_tail(&free_slob_pages, prev->next);
+		if (prev != slob_list->prev &&
+				slob_list->next != prev->next)
+			list_move_tail(slob_list, prev->next);
 		break;
 	}
 	spin_unlock_irqrestore(&slob_lock, flags);
@@ -341,7 +360,7 @@
 		sp->free = b;
 		INIT_LIST_HEAD(&sp->list);
 		set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
-		set_slob_page_free(sp);
+		set_slob_page_free(sp, slob_list);
 		b = slob_page_alloc(sp, size, align);
 		BUG_ON(!b);
 		spin_unlock_irqrestore(&slob_lock, flags);
@@ -387,7 +406,7 @@
 		set_slob(b, units,
 			(void *)((unsigned long)(b +
 					SLOB_UNITS(PAGE_SIZE)) & PAGE_MASK));
-		set_slob_page_free(sp);
+		set_slob_page_free(sp, &free_slob_small);
 		goto out;
 	}
 

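The list-selection logic added to slob_alloc above can be modeled in isolation. This is a userspace sketch with the same break points as the patch; the enum names are illustrative, the kernel version picks between the three list_head variables directly:

```c
#include <assert.h>
#include <stddef.h>

#define SLOB_BREAK1 256
#define SLOB_BREAK2 1024

enum slob_list { SLOB_SMALL, SLOB_MEDIUM, SLOB_LARGE };

/* Mirror of the patch's size-class selection: requests below 256
 * bytes go to the small list, below 1024 to the medium list, and
 * everything else to the large list. */
static enum slob_list pick_list(size_t size)
{
	if (size < SLOB_BREAK1)
		return SLOB_SMALL;
	else if (size < SLOB_BREAK2)
		return SLOB_MEDIUM;
	return SLOB_LARGE;
}
```

Note that the freeing path in the last hunk always returns a newly partially-free page to free_slob_small, since slob_free has no record of which size class the page originally served.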

* [PATCH 3 of 3] slob: correct Kconfig description
  2008-01-17 17:57 [PATCH 0 of 3] SLOB patches for 2.6.25 Matt Mackall
  2008-01-17 17:57 ` [PATCH 1 of 3] slob: fix free block merging at head of subpage Matt Mackall
  2008-01-17 17:57 ` [PATCH 2 of 3] slob: reduce external fragmentation by using three free lists Matt Mackall
@ 2008-01-17 17:57 ` Matt Mackall
  2 siblings, 0 replies; 4+ messages in thread
From: Matt Mackall @ 2008-01-17 17:57 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

Signed-off-by: Matt Mackall <mpm@selenic.com>

diff -r eff994add207 -r db1f4e74c5ab init/Kconfig
--- a/init/Kconfig	Wed Jan 16 18:14:29 2008 -0600
+++ b/init/Kconfig	Wed Jan 16 18:14:29 2008 -0600
@@ -648,11 +648,9 @@
 	depends on EMBEDDED
 	bool "SLOB (Simple Allocator)"
 	help
-	   SLOB replaces the SLAB allocator with a drastically simpler
-	   allocator.  SLOB is more space efficient than SLAB but does not
-	   scale well (single lock for all operations) and is also highly
-	   susceptible to fragmentation. SLUB can accomplish a higher object
-	   density. It is usually better to use SLUB instead of SLOB.
+	   SLOB replaces the stock allocator with a drastically simpler
+	   allocator. SLOB is generally more space efficient but
+	   does not perform as well on large systems.
 
 endchoice
 


end of thread [~2008-01-17 17:59 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-01-17 17:57 [PATCH 0 of 3] SLOB patches for 2.6.25 Matt Mackall
2008-01-17 17:57 ` [PATCH 1 of 3] slob: fix free block merging at head of subpage Matt Mackall
2008-01-17 17:57 ` [PATCH 2 of 3] slob: reduce external fragmentation by using three free lists Matt Mackall
2008-01-17 17:57 ` [PATCH 3 of 3] slob: correct Kconfig description Matt Mackall
