From: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>, Mel Gorman <mel@csn.ul.ie>,
Matt Mackall <mpm@selenic.com>,
linux-mm@kvack.org
Subject: Re: [patch 8/9] slub: Make the order configurable for each slab cache
Date: Thu, 20 Mar 2008 13:53:29 +0800
Message-ID: <1205992409.14496.48.camel@ymzhang>
In-Reply-To: <20080317230529.701336582@sgi.com>
On Mon, 2008-03-17 at 16:05 -0700, Christoph Lameter wrote:
> plain text document attachment
> (0008-slub-Make-the-order-configurable-for-each-slab-cach.patch)
> Makes /sys/kernel/slab/<slabname>/order writable. The allocation
> order of a slab cache can then be changed dynamically during runtime.
> This can be used to override the objects per slab value established
> with the slub_min_objects setting that was manually specified or
> calculated on bootup.
>
> Signed-off-by: Christoph Lameter <clameter@sgi.com>
> ---
> mm/slub.c | 30 +++++++++++++++++++++++-------
> 1 file changed, 23 insertions(+), 7 deletions(-)
>
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c 2008-03-17 15:38:16.337702541 -0700
> +++ linux-2.6/mm/slub.c 2008-03-17 15:49:47.791302447 -0700
> @@ -2146,7 +2146,7 @@ static int init_kmem_cache_nodes(struct
> * calculate_sizes() determines the order and the distribution of data within
> * a slab object.
> */
> -static int calculate_sizes(struct kmem_cache *s)
> +static int calculate_sizes(struct kmem_cache *s, int forced_order)
Is there any race between calculate_sizes and allocate_slab?
calculate_sizes sets s->order and s->objects, while allocate_slab uses them.
For example, change order from 5 to 2.
Step | Thread 1 (allocate_slab)     | Thread 2 (calculate_sizes)
-----+------------------------------+----------------------------
  1  | fetches the old s->order (5) |
  2  |                              | changes s->order to 2
  3  |                              | changes s->objects to 8
  4  | fetches s->objects into      |
     | page->objects                |
Just before calculate_sizes changes s->order to a smaller value,
allocate_slab might fetch the old s->order and call alloc_pages successfully
with it. Then, before allocate_slab fetches s->objects, calculate_sizes changes
that to a smaller value as well, so page->objects no longer matches the order
the pages were actually allocated with.
This could be resolved by having allocate_slab fetch s->order first and
calculate page->objects from that order later, instead of fetching s->objects.
> {
> unsigned long flags = s->flags;
> unsigned long size = s->objsize;
> @@ -2235,7 +2235,11 @@ static int calculate_sizes(struct kmem_c
> size = ALIGN(size, align);
> s->size = size;
>
> - s->order = calculate_order(size);
> + if (forced_order >= 0)
> + s->order = forced_order;
> + else
> + s->order = calculate_order(size);
> +
> if (s->order < 0)
> return 0;
>
> @@ -2271,7 +2275,7 @@ static int kmem_cache_open(struct kmem_c
> s->align = align;
> s->flags = kmem_cache_flags(size, flags, name, ctor);
>
> - if (!calculate_sizes(s))
> + if (!calculate_sizes(s, -1))
> goto error;
>
> s->refcount = 1;
> @@ -3727,11 +3731,23 @@ static ssize_t objs_per_slab_show(struct
> }
> SLAB_ATTR_RO(objs_per_slab);
>
> +static ssize_t order_store(struct kmem_cache *s,
> + const char *buf, size_t length)
> +{
> + int order = simple_strtoul(buf, NULL, 10);
> +
> + if (order > slub_max_order || order < slub_min_order)
> + return -EINVAL;
> +
> + calculate_sizes(s, order);
> + return length;
> +}
> +
> static ssize_t order_show(struct kmem_cache *s, char *buf)
> {
> return sprintf(buf, "%d\n", s->order);
> }
> -SLAB_ATTR_RO(order);
> +SLAB_ATTR(order);
>
> static ssize_t ctor_show(struct kmem_cache *s, char *buf)
> {
> @@ -3865,7 +3881,7 @@ static ssize_t red_zone_store(struct kme
> s->flags &= ~SLAB_RED_ZONE;
> if (buf[0] == '1')
> s->flags |= SLAB_RED_ZONE;
> - calculate_sizes(s);
> + calculate_sizes(s, -1);
> return length;
> }
> SLAB_ATTR(red_zone);
> @@ -3884,7 +3900,7 @@ static ssize_t poison_store(struct kmem_
> s->flags &= ~SLAB_POISON;
> if (buf[0] == '1')
> s->flags |= SLAB_POISON;
> - calculate_sizes(s);
> + calculate_sizes(s, -1);
> return length;
> }
> SLAB_ATTR(poison);
> @@ -3903,7 +3919,7 @@ static ssize_t store_user_store(struct k
> s->flags &= ~SLAB_STORE_USER;
> if (buf[0] == '1')
> s->flags |= SLAB_STORE_USER;
> - calculate_sizes(s);
> + calculate_sizes(s, -1);
> return length;
> }
> SLAB_ATTR(store_user);
>