public inbox for linux-kernel@vger.kernel.org
* [patch 4/9] slab: cache_estimate cleanup
@ 2006-01-03 20:25 Pekka Enberg
  2006-01-04  1:00 ` Steven Rostedt
  0 siblings, 1 reply; 6+ messages in thread
From: Pekka Enberg @ 2006-01-03 20:25 UTC (permalink / raw)
  To: akpm; +Cc: linux-kernel, manfred, colpatch, rostedt, clameter, penberg

From: Steven Rostedt <rostedt@goodmis.org>

This patch cleans up cache_estimate in mm/slab.c and improves the algorithm
by taking an initial guess before executing the while loop. The optimization
was originally made by Balbir Singh with further improvements from Steven
Rostedt. Manfred Spraul provided further modifications: no loop at all for
the off-slab case and an explanation of the background.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
---

 mm/slab.c |   89 ++++++++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 64 insertions(+), 25 deletions(-)

Index: 2.6/mm/slab.c
===================================================================
--- 2.6.orig/mm/slab.c
+++ 2.6/mm/slab.c
@@ -700,32 +700,71 @@ kmem_cache_t *kmem_find_general_cachep(s
 }
 EXPORT_SYMBOL(kmem_find_general_cachep);
 
-/* Cal the num objs, wastage, and bytes left over for a given slab size. */
-static void cache_estimate(unsigned long gfporder, size_t size, size_t align,
-		 int flags, size_t *left_over, unsigned int *num)
+static size_t slab_mgmt_size(size_t nr_objs, size_t align)
 {
-	int i;
-	size_t wastage = PAGE_SIZE<<gfporder;
-	size_t extra = 0;
-	size_t base = 0;
-
-	if (!(flags & CFLGS_OFF_SLAB)) {
-		base = sizeof(struct slab);
-		extra = sizeof(kmem_bufctl_t);
-	}
-	i = 0;
-	while (i*size + ALIGN(base+i*extra, align) <= wastage)
-		i++;
-	if (i > 0)
-		i--;
-
-	if (i > SLAB_LIMIT)
-		i = SLAB_LIMIT;
-
-	*num = i;
-	wastage -= i*size;
-	wastage -= ALIGN(base+i*extra, align);
-	*left_over = wastage;
+	return ALIGN(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t), align);
+}
+
+/* Calculate the number of objects and left-over bytes for a given
+   object size. */
+static void cache_estimate(unsigned long gfporder, size_t obj_size,
+			   size_t align, int flags, size_t *left_over,
+			   unsigned int *num)
+{
+	int nr_objs;
+	size_t mgmt_size;
+	size_t slab_size = PAGE_SIZE << gfporder;
+
+	/*
+	 * The slab management structure can be either off the slab or
+	 * on it. For the latter case, the memory allocated for a
+	 * slab is used for:
+	 *
+	 * - The struct slab
+	 * - One kmem_bufctl_t for each object
+	 * - Padding, to achieve alignment of @align
+	 * - @obj_size bytes for each object
+	 */
+	if (flags & CFLGS_OFF_SLAB) {
+		mgmt_size = 0;
+		nr_objs = slab_size / obj_size;
+
+		if (nr_objs > SLAB_LIMIT)
+			nr_objs = SLAB_LIMIT;
+	} else {
+		/* Ignore padding for the initial guess.  The padding
+		 * is at most @align-1 bytes, and @size is at least
+		 * @align. In the worst case, this result will be one
+		 * greater than the number of objects that fit into
+		 * the memory allocation when taking the padding into
+		 * account.
+		 */
+		nr_objs = (slab_size - sizeof(struct slab)) /
+			  (obj_size + sizeof(kmem_bufctl_t));
+
+		/*
+		 * Now take the padding into account and increase the
+		 * number of objects/slab until it doesn't fit
+		 * anymore.
+		 */
+		while (slab_mgmt_size(nr_objs, align) + nr_objs*obj_size
+		       <= slab_size)
+			nr_objs++;
+
+		/*
+		 * Reduce it by one so that it is the maximum number of
+		 * objects that fit in the slab.
+		 */
+		if (nr_objs > 0)
+			nr_objs--;
+
+		if (nr_objs > SLAB_LIMIT)
+			nr_objs = SLAB_LIMIT;
+
+		mgmt_size = slab_mgmt_size(nr_objs, align);
+	}
+	*num = nr_objs;
+	*left_over = slab_size - nr_objs*obj_size - mgmt_size;
 }
 
 #define slab_error(cachep, msg) __slab_error(__FUNCTION__, cachep, msg)

--



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [patch 4/9] slab: cache_estimate cleanup
  2006-01-03 20:25 [patch 4/9] slab: cache_estimate cleanup Pekka Enberg
@ 2006-01-04  1:00 ` Steven Rostedt
  2006-01-04  1:47   ` Steven Rostedt
  0 siblings, 1 reply; 6+ messages in thread
From: Steven Rostedt @ 2006-01-04  1:00 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: akpm, linux-kernel, manfred, colpatch, clameter

[-- Attachment #1: Type: text/plain, Size: 3656 bytes --]

On Tue, 2006-01-03 at 22:25 +0200, Pekka Enberg wrote:
> From: Steven Rostedt <rostedt@goodmis.org>
> 
> This patch cleans up cache_estimate in mm/slab.c and improves the algorithm
> by taking an initial guess before executing the while loop. The optimization
> was originally made by Balbir Singh with further improvements from Steven
> Rostedt. Manfred Spraul provided further modifications: no loop at all for
> the off-slab case and an explanation of the background.

The "no loop at all" case probably needs a comment, since I don't see one.
[ see below ]

> 
> Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
> ---
> 
>  mm/slab.c |   89 ++++++++++++++++++++++++++++++++++++++++++++------------------
>  1 file changed, 64 insertions(+), 25 deletions(-)
> 
> Index: 2.6/mm/slab.c
> ===================================================================
> --- 2.6.orig/mm/slab.c
> +++ 2.6/mm/slab.c

[snip]

> +/* Calculate the number of objects and left-over bytes for a given
> +   object size. */
> +static void cache_estimate(unsigned long gfporder, size_t obj_size,
> +			   size_t align, int flags, size_t *left_over,
> +			   unsigned int *num)
> +{
> +	int nr_objs;
> +	size_t mgmt_size;
> +	size_t slab_size = PAGE_SIZE << gfporder;
> +
> +	/*
> +	 * The slab management structure can be either off the slab or
> +	 * on it. For the latter case, the memory allocated for a
> +	 * slab is used for:
> +	 *
> +	 * - The struct slab
> +	 * - One kmem_bufctl_t for each object
> +	 * - Padding, to achieve alignment of @align
> +	 * - @obj_size bytes for each object
> +	 */

Probably want a comment here that says something to the following:

If the slab management structure is off the slab, then the alignment
will already be calculated into the size. Because the slabs are all
pages aligned, the objects will be at the correct alignment when
allocated.

[more below]

> +	if (flags & CFLGS_OFF_SLAB) {
> +		mgmt_size = 0;
> +		nr_objs = slab_size / obj_size;
> +
> +		if (nr_objs > SLAB_LIMIT)
> +			nr_objs = SLAB_LIMIT;
> +	} else {
> +		/* Ignore padding for the initial guess.  The padding
> +		 * is at most @align-1 bytes, and @size is at least

should @size be @obj_size

> +		 * @align. In the worst case, this result will be one
> +		 * greater than the number of objects that fit into
> +		 * the memory allocation when taking the padding into
> +		 * account.
> +		 */
> +		nr_objs = (slab_size - sizeof(struct slab)) /
> +			  (obj_size + sizeof(kmem_bufctl_t));
> +
> +		/*
> +		 * Now take the padding into account and increase the
> +		 * number of objects/slab until it doesn't fit
> +		 * anymore.
> +		 */
> +		while (slab_mgmt_size(nr_objs, align) + nr_objs*obj_size
> +		       <= slab_size)
> +			nr_objs++;

I've also been looking at this and I've realized that we can even remove
the "while" and make it into an "if".  It works just as well. I created
the attached test program to see if there would be any difference,
testing all sizes from 8 to PAGE_SIZE >> 1 (where PAGE_SIZE = 1<<12)
and all alignments from 4 to size, and I tried this with two orders of
pages.  The "if" should make the assembly a little better.

-- Steve

> +
> +		/*
> +		 * Reduce it by one so that it is the maximum number of
> +		 * objects that fit in the slab.
> +		 */
> +		if (nr_objs > 0)
> +			nr_objs--;
> +
> +		if (nr_objs > SLAB_LIMIT)
> +			nr_objs = SLAB_LIMIT;
> +
> +		mgmt_size = slab_mgmt_size(nr_objs, align);
> +	}
> +	*num = nr_objs;
> +	*left_over = slab_size - nr_objs*obj_size - mgmt_size;
>  }
>  
>  #define slab_error(cachep, msg) __slab_error(__FUNCTION__, cachep, msg)
> 
> --
> 


[-- Attachment #2: align.c --]
[-- Type: text/x-csrc, Size: 1731 bytes --]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE (1<<12)
struct slab {
	int x[7];
};

typedef long kmem_bufctl_t;

#define ALIGN(x,a) (((x)+(a)-1)&~((a)-1))

static size_t slab_mgmt_size(size_t nr_objs, size_t align)
{
	return ALIGN(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t), align);
}

int estimate(int size, int align, int *leftover, int *hold1, int *hold2,
	     int *num, int order)
{
	int nr_objs;
	int slab_size = PAGE_SIZE << order;
	int nr1;
	int nr2;
	int mgmt_size;
	int nr_while;
	int nr_if;
	int ret = 0;

	nr_objs = (slab_size - sizeof(struct slab)) /
		(size + sizeof(kmem_bufctl_t));

	nr1 = nr_objs;
	/*
	 * Now take the padding into account and increase the
	 * number of objects/slab until it doesn't fit
	 * anymore.
	 */
	nr_if = nr_objs;

	if (slab_mgmt_size(nr_if, align) + nr_if*size
	       <= slab_size)
		nr_if++;

	nr_while = nr_objs;

	while (slab_mgmt_size(nr_while, align) + nr_while*size
	       <= slab_size)
		nr_while++;

	if (nr_if != nr_while)
		ret = 1;

	nr_objs = nr_while;
	nr2 = nr_objs;

	/*
	 * Reduce it by one so that it is the maximum number of
	 * objects that fit in the slab.
	 */
	if (nr_objs > 0)
		nr_objs--;

	mgmt_size = slab_mgmt_size(nr_objs, align);

	*num = nr_objs;
	*leftover = slab_size - nr_objs*size - mgmt_size;

	*hold1 = nr1;
	*hold2 = nr2;

	return ret;
}

int main (int argc, char **argv)
{
	int i;
	int x;
	int leftover;
	int hold1;
	int hold2;
	int num;
	int order;

	for (order=0; order <= 1; order++) {
		for (i=8; i < (PAGE_SIZE >> 1); i++) {
			for (x=4; x < i; x++) {
				if (estimate(i, x, &leftover, &hold1, &hold2, &num, order))
					printf("%8d%8d%8d%8d%8d%8d%8d%8d\n", i, x, leftover,
					       hold1, hold2, num, hold1-num, hold2-num);
			}
		}
	}
	return 0;
}


* Re: [patch 4/9] slab: cache_estimate cleanup
  2006-01-04  1:00 ` Steven Rostedt
@ 2006-01-04  1:47   ` Steven Rostedt
  2006-01-04  6:45     ` Balbir Singh
  2006-01-04 14:11     ` Steven Rostedt
  0 siblings, 2 replies; 6+ messages in thread
From: Steven Rostedt @ 2006-01-04  1:47 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: clameter, colpatch, manfred, linux-kernel, akpm

On Tue, 2006-01-03 at 20:00 -0500, Steven Rostedt wrote:
> > +	if (flags & CFLGS_OFF_SLAB) {
> > +		mgmt_size = 0;
> > +		nr_objs = slab_size / obj_size;
> > +
> > +		if (nr_objs > SLAB_LIMIT)
> > +			nr_objs = SLAB_LIMIT;
> > +	} else {
> > +		/* Ignore padding for the initial guess.  The padding
> > +		 * is at most @align-1 bytes, and @size is at least
> 
> should @size be @obj_size
> 
> > +		 * @align. In the worst case, this result will be one
> > +		 * greater than the number of objects that fit into
> > +		 * the memory allocation when taking the padding into
> > +		 * account.
> > +		 */
> > +		nr_objs = (slab_size - sizeof(struct slab)) /
> > +			  (obj_size + sizeof(kmem_bufctl_t));
> > +
> > +		/*
> > +		 * Now take the padding into account and increase the
> > +		 * number of objects/slab until it doesn't fit
> > +		 * anymore.
> > +		 */
> > +		while (slab_mgmt_size(nr_objs, align) + nr_objs*obj_size
> > +		       <= slab_size)
> > +			nr_objs++;
> 
> I've also been looking at this and I've realized that we can even remove
> the "while" and make it into an "if".  It works just as well. I created
> the attached test program to see if there would be any difference
> testing all sizes from 8 to PAGE_SIZE >> 1 (where PAGE_SIZE = 1<<12),
> and all alignments of 4 to size, and I tried this with two orders of
> pages.  The "if" should make the assembly a little better.
> 
> -- Steve
> 
> > +
> > +		/*
> > +		 * Reduce it by one so that it is the maximum number of
> > +		 * objects that fit in the slab.
> > +		 */
> > +		if (nr_objs > 0)
> > +			nr_objs--;

Looking even further into this, we can get rid of the while and this if,
and replace it with one if.

		nr_objs = (slab_size - sizeof(struct slab)) /
			  (obj_size + sizeof(kmem_bufctl_t));

		/*
		 * This calculated number will be either the right
		 * amount, or one greater than what we want.
		 */
		if (slab_mgmt_size(nr_objs, align) + nr_objs*obj_size
		       > slab_size)
			nr_objs--;

		if (nr_objs > SLAB_LIMIT)
			nr_objs = SLAB_LIMIT;

Note the change from <= to >.

Here's a new patch for that.

-- Steve

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>


From: Steven Rostedt <rostedt@goodmis.org>

This patch cleans up cache_estimate in mm/slab.c and improves the algorithm
by taking an initial guess before executing the while loop. The optimization
was originally made by Balbir Singh with further improvements from Steven
Rostedt. Manfred Spraul provided further modifications: no loop at all for
the off-slab case and an explanation of the background.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Index: linux-2.6.15/mm/slab.c
===================================================================
--- linux-2.6.15.orig/mm/slab.c	2006-01-03 20:39:24.000000000 -0500
+++ linux-2.6.15/mm/slab.c	2006-01-03 20:40:49.000000000 -0500
@@ -700,32 +700,63 @@
 }
 EXPORT_SYMBOL(kmem_find_general_cachep);
 
-/* Cal the num objs, wastage, and bytes left over for a given slab size. */
-static void cache_estimate(unsigned long gfporder, size_t size, size_t align,
-		 int flags, size_t *left_over, unsigned int *num)
+static size_t slab_mgmt_size(size_t nr_objs, size_t align)
 {
-	int i;
-	size_t wastage = PAGE_SIZE<<gfporder;
-	size_t extra = 0;
-	size_t base = 0;
-
-	if (!(flags & CFLGS_OFF_SLAB)) {
-		base = sizeof(struct slab);
-		extra = sizeof(kmem_bufctl_t);
-	}
-	i = 0;
-	while (i*size + ALIGN(base+i*extra, align) <= wastage)
-		i++;
-	if (i > 0)
-		i--;
-
-	if (i > SLAB_LIMIT)
-		i = SLAB_LIMIT;
-
-	*num = i;
-	wastage -= i*size;
-	wastage -= ALIGN(base+i*extra, align);
-	*left_over = wastage;
+	return ALIGN(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t), align);
+}
+
+/* Calculate the number of objects and left-over bytes for a given
+   object size. */
+static void cache_estimate(unsigned long gfporder, size_t obj_size,
+			   size_t align, int flags, size_t *left_over,
+			   unsigned int *num)
+{
+	int nr_objs;
+	size_t mgmt_size;
+	size_t slab_size = PAGE_SIZE << gfporder;
+
+	/*
+	 * The slab management structure can be either off the slab or
+	 * on it. For the latter case, the memory allocated for a
+	 * slab is used for:
+	 *
+	 * - The struct slab
+	 * - One kmem_bufctl_t for each object
+	 * - Padding, to achieve alignment of @align
+	 * - @obj_size bytes for each object
+	 */
+	if (flags & CFLGS_OFF_SLAB) {
+		mgmt_size = 0;
+		nr_objs = slab_size / obj_size;
+
+		if (nr_objs > SLAB_LIMIT)
+			nr_objs = SLAB_LIMIT;
+	} else {
+		/* Ignore padding for the initial guess.  The padding
+		 * is at most @align-1 bytes, and @obj_size is at least
+		 * @align. In the worst case, this result will be one
+		 * greater than the number of objects that fit into
+		 * the memory allocation when taking the padding into
+		 * account.
+		 */
+		nr_objs = (slab_size - sizeof(struct slab)) /
+			  (obj_size + sizeof(kmem_bufctl_t));
+
+		/*
+		 * This calculated number will be either the right
+		 * amount, or one greater than what we want.
+		 */
+		if (slab_mgmt_size(nr_objs, align) + nr_objs*obj_size
+		       > slab_size)
+			nr_objs--;
+
+		if (nr_objs > SLAB_LIMIT)
+			nr_objs = SLAB_LIMIT;
+
+		mgmt_size = slab_mgmt_size(nr_objs, align);
+	}
+	*num = nr_objs;
+	*left_over = slab_size - nr_objs*obj_size - mgmt_size;
 }
 
 #define slab_error(cachep, msg) __slab_error(__FUNCTION__, cachep, msg)




* Re: [patch 4/9] slab: cache_estimate cleanup
  2006-01-04  1:47   ` Steven Rostedt
@ 2006-01-04  6:45     ` Balbir Singh
  2006-01-04 14:11     ` Steven Rostedt
  1 sibling, 0 replies; 6+ messages in thread
From: Balbir Singh @ 2006-01-04  6:45 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Pekka Enberg, clameter, colpatch, manfred, linux-kernel, akpm

> This patch cleans up cache_estimate in mm/slab.c and improves the algorithm
> by taking an initial guess before executing the while loop. The optimization
> was originally made by Balbir Singh with further improvements from Steven
> Rostedt. Manfred Spraul provided further modifications: no loop at all for
> the off-slab case and an explanation of the background.
>
> Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>

I had prepared this patch about five years ago and completely forgot about it.

Acked-by: Balbir Singh <bsingharora@gmail.com>


* Re: [patch 4/9] slab: cache_estimate cleanup
  2006-01-04  1:47   ` Steven Rostedt
  2006-01-04  6:45     ` Balbir Singh
@ 2006-01-04 14:11     ` Steven Rostedt
  2006-01-04 15:22       ` Balbir Singh
  1 sibling, 1 reply; 6+ messages in thread
From: Steven Rostedt @ 2006-01-04 14:11 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: akpm, linux-kernel, manfred, colpatch, clameter

On Tue, 2006-01-03 at 20:47 -0500, Steven Rostedt wrote:

> > 
> > I've also been looking at this and I've realized that we can even remove
> > the "while" and make it into an "if".  It works just as well. I created
> > the attached test program to see if there would be any difference
> > testing all sizes from 8 to PAGE_SIZE >> 1 (where PAGE_SIZE = 1<<12),
> > and all alignments of 4 to size, and I tried this with two orders of
> > pages.  The "if" should make the assembly a little better.
> > 
> >
[...]
> +static size_t slab_mgmt_size(size_t nr_objs, size_t align)
>  {

[ deleted lines snipped ]

> +	return ALIGN(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t), align);
> +}
> +
> +/* Calculate the number of objects and left-over bytes for a given
> +   object size. */
> +static void cache_estimate(unsigned long gfporder, size_t obj_size,
> +			   size_t align, int flags, size_t *left_over,
> +			   unsigned int *num)
> +{
> +

[...]

> +		/* Ignore padding for the initial guess.  The padding
> +		 * is at most @align-1 bytes, and @size is at least
> +		 * @align. In the worst case, this result will be one
> +		 * greater than the number of objects that fit into
> +		 * the memory allocation when taking the padding into
> +		 * account.
> +		 */
> +		nr_objs = (slab_size - sizeof(struct slab)) /
> +			  (obj_size + sizeof(kmem_bufctl_t));
> +
> +		/*
> +		 * This calculated number will be either the right
> +		 * amount, or one greater than what we want.
> +		 */
> +		if (slab_mgmt_size(nr_objs, align) + nr_objs*obj_size
> +		       > slab_size)
> +			nr_objs--;

No while is needed!  slab_mgmt_size(nr_objs, align) will end up being:

(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t)+align-1)&~(align-1)

let's say:
  S = sizeof(struct slab)
  K = sizeof(kmem_bufctl_t)
  n = nr_objs
  z = slab_size
  o = objsize
  a = align

	nr_objs = (slab_size - sizeof(struct slab)) /
		(size + sizeof(kmem_bufctl_t));

will be  n = (z - S) / (o + K)

looking at the if:

	if (slab_mgmt_size(nr_objs, align) + nr_objs*size
	       > slab_size)

and slab_mgmt_size:

   static size_t slab_mgmt_size(size_t nr_objs, size_t align)
   {
	return ALIGN(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t), align);
   }

and ALIGN:

   #define ALIGN(x,a) (((x)+(a)-1)&~((a)-1))

slab_mgmt_size is the same as:

    ALIGN(S + nK, a)

so this will not be greater than:

S + nK + a - 1


add this to the if:

if (S + nK + a-1 + no > z)

proof by contradiction: can the left side be greater than z + o?

S + nK + a-1 + no = z+o+1 ?

S + (o+K)n + a-1 = z+o+1

n = (z - S) / (o + K) so:

S + (o+K)(z-S)/(o+K) + a-1 = z+o+1

S + (z-S) + a-1 = z+o+1

removing the z and S

a-1 = o+1

We know that a can be at most o so:

o-1 = o+1

and thus we get:

-1 = 1

So I believe this is the proof by contradiction.  Might want to check
this, since I just woke up ;)
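
For reference, the argument can be restated compactly (an editorial
recap in the same symbols, assuming a <= o and that n is the floor of
the division):

```latex
% Initial guess: since n is the integer quotient,
%   n(o + K) <= z - S, i.e. S + nK + no <= z.
\[
n = \left\lfloor \frac{z - S}{o + K} \right\rfloor
\quad\Longrightarrow\quad
S + nK + no \le z .
\]
% Any feasible count m satisfies ALIGN(S + mK, a) + mo <= z, and since
% ALIGN(S + mK, a) >= S + mK, also S + m(o + K) <= z, hence m <= n:
% the guess never undershoots.
\[
\mathrm{ALIGN}(S + mK,\, a) + mo \le z
\;\Longrightarrow\;
m \le \frac{z - S}{o + K}
\;\Longrightarrow\;
m \le n .
\]
% And n - 1 is always feasible, because ALIGN(x, a) <= x + a - 1:
\[
\mathrm{ALIGN}(S + (n-1)K,\, a) + (n-1)o
\le S + (n-1)(o + K) + a - 1
\le z - (o + K) + a - 1
\le z - K - 1
< z .
\]
% So the maximum is either n or n - 1, and a single test decides:
% no while loop is needed.
```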

-- Steve





* Re: [patch 4/9] slab: cache_estimate cleanup
  2006-01-04 14:11     ` Steven Rostedt
@ 2006-01-04 15:22       ` Balbir Singh
  0 siblings, 0 replies; 6+ messages in thread
From: Balbir Singh @ 2006-01-04 15:22 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Pekka Enberg, akpm, linux-kernel, manfred, colpatch, clameter

> No while is needed!  slab_mgmt_size(nr_objs, align) will end up being:
>
> (sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t)+align-1)&~(align-1)
>
> let's say:
>   S = sizeof(struct slab)
>   K = sizeof(kmem_bufctl_t)
>   n = nr_objs
>   z = slab_size
>   o = objsize
>   a = align
>
>         nr_objs = (slab_size - sizeof(struct slab)) /
>                 (size + sizeof(kmem_bufctl_t));
>
> will be  n = (z - S) / (o + K)
>
> looking at the if:
>
>         if (slab_mgmt_size(nr_objs, align) + nr_objs*size
>                > slab_size)
>
> and slab_mgmt_size:
>
>    static size_t slab_mgmt_size(size_t nr_objs, size_t align)
>    {
>         return ALIGN(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t), align);
>    }
>
> and ALIGN:
>
>    #define ALIGN(x,a) (((x)+(a)-1)&~((a)-1))
>
> slab_mgmt_size is the same as:
>
>     ALIGN(S + nK, a)
>
> so this will not be greater than:
>
> S + nK + a - 1
>
>
> add this to the if:
>
> if (S + nK + a-1 + no > z)
>
> proof by contradiction: can the left side be greater than z + o?
>
> S + nK + a-1 + no = z+o+1 ?
>
> S + (o+K)n + a-1 = z+o+1
>
> n = (z - S) / (o + K) so:
>
> S + (o+K)(z-S)/(o+K) + a-1 = z+o+1
>
> S + (z-S) + a-1 = z+o+1
>
> removing the z and S
>
> a-1 = o+1
>
> We know that a can be at most o so:
>
> o-1 = o+1
>
> and thus we get:
>
> -1 = 1
>
> So I believe this is the proof by contradiction.  Might want to check
> this, since I just woke up ;)
>
> -- Steve

Your analysis is very interesting and seems correct. My analysis is
similar, but a bit different.

Best case: S+nK is already aligned.
Worst case: S+nK is off by +1 byte from an alignment boundary.

Taking the worst case, we find a-1 bytes of space eaten up by the
alignment operation. We need to see if the a-1 bytes that we lost
could have accommodated another object of size o and a bufctl of
size K. If that is true, we need to reduce n.

The condition becomes:

	if o+K < a-1, reduce n

Since a is at most o (the assumption from your analysis), this leads to:

	if K < -1, reduce n

K is certainly positive, hence we never need to reduce n.
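
In the same notation, the condition above collapses to a contradiction
(an editorial recap, assuming as before that a <= o and K >= 1):

```latex
\[
o + K < a - 1
\;\wedge\;
a \le o
\;\Longrightarrow\;
K < -1 ,
\]
% which is impossible for K = sizeof(kmem_bufctl_t) >= 1, so the a - 1
% padding bytes can never hide a whole additional object plus bufctl,
% and n never needs a second decrement.
```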

Balbir
