linux-mm.kvack.org archive mirror
From: Joonsoo Kim <js1304@gmail.com>
To: Pekka Enberg <penberg@kernel.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Joonsoo Kim <js1304@gmail.com>,
	Christoph Lameter <cl@linux-foundation.org>
Subject: [PATCH 1/2] slub: rename cpu_partial to max_cpu_object
Date: Sat, 25 Aug 2012 01:05:02 +0900	[thread overview]
Message-ID: <1345824303-30292-1-git-send-email-js1304@gmail.com> (raw)
In-Reply-To: <Yes>

The name cpu_partial in struct kmem_cache is a bit awkward.

It denotes the maximum number of objects kept in the per cpu slab
and the cpu partial lists of a processor. However, the current name
suggests that it counts only the objects kept in the cpu partial
lists. So, this patch renames it to max_cpu_object.

Signed-off-by: Joonsoo Kim <js1304@gmail.com>
Cc: Christoph Lameter <cl@linux-foundation.org>

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index df448ad..9130e6b 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -84,7 +84,7 @@ struct kmem_cache {
 	int size;		/* The size of an object including meta data */
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
-	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+	int max_cpu_object;	/* Number of per cpu objects to keep around */
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
diff --git a/mm/slub.c b/mm/slub.c
index c67bd0a..d597530 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1565,7 +1565,7 @@ static void *get_partial_node(struct kmem_cache *s,
 			available = put_cpu_partial(s, page, 0);
 			stat(s, CPU_PARTIAL_NODE);
 		}
-		if (kmem_cache_debug(s) || available > s->cpu_partial / 2)
+		if (kmem_cache_debug(s) || available > s->max_cpu_object / 2)
 			break;
 
 	}
@@ -1953,7 +1953,7 @@ int put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		if (oldpage) {
 			pobjects = oldpage->pobjects;
 			pages = oldpage->pages;
-			if (drain && pobjects > s->cpu_partial) {
+			if (drain && pobjects > s->max_cpu_object) {
 				unsigned long flags;
 				/*
 				 * partial array is full. Move the existing
@@ -3073,8 +3073,8 @@ static int kmem_cache_open(struct kmem_cache *s,
 	set_min_partial(s, ilog2(s->size) / 2);
 
 	/*
-	 * cpu_partial determined the maximum number of objects kept in the
-	 * per cpu partial lists of a processor.
+	 * max_cpu_object determines the maximum number of objects kept in the
+	 * per cpu slab and cpu partial lists of a processor.
 	 *
 	 * Per cpu partial lists mainly contain slabs that just have one
 	 * object freed. If they are used for allocation then they can be
@@ -3085,20 +3085,20 @@ static int kmem_cache_open(struct kmem_cache *s,
 	 *
 	 * A) The number of objects from per cpu partial slabs dumped to the
 	 *    per node list when we reach the limit.
-	 * B) The number of objects in cpu partial slabs to extract from the
-	 *    per node list when we run out of per cpu objects. We only fetch 50%
-	 *    to keep some capacity around for frees.
+	 * B) The number of objects in cpu slab and cpu partial lists to
+	 *    extract from the per node list when we run out of per cpu objects.
+	 *    We only fetch 50% to keep some capacity around for frees.
 	 */
 	if (kmem_cache_debug(s))
-		s->cpu_partial = 0;
+		s->max_cpu_object = 0;
 	else if (s->size >= PAGE_SIZE)
-		s->cpu_partial = 2;
+		s->max_cpu_object = 2;
 	else if (s->size >= 1024)
-		s->cpu_partial = 6;
+		s->max_cpu_object = 6;
 	else if (s->size >= 256)
-		s->cpu_partial = 13;
+		s->max_cpu_object = 13;
 	else
-		s->cpu_partial = 30;
+		s->max_cpu_object = 30;
 
 	s->refcount = 1;
 #ifdef CONFIG_NUMA
@@ -4677,12 +4677,12 @@ static ssize_t min_partial_store(struct kmem_cache *s, const char *buf,
 }
 SLAB_ATTR(min_partial);
 
-static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
+static ssize_t max_cpu_object_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%u\n", s->cpu_partial);
+	return sprintf(buf, "%u\n", s->max_cpu_object);
 }
 
-static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
+static ssize_t max_cpu_object_store(struct kmem_cache *s, const char *buf,
 				 size_t length)
 {
 	unsigned long objects;
@@ -4694,11 +4694,11 @@ static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 	if (objects && kmem_cache_debug(s))
 		return -EINVAL;
 
-	s->cpu_partial = objects;
+	s->max_cpu_object = objects;
 	flush_all(s);
 	return length;
 }
-SLAB_ATTR(cpu_partial);
+SLAB_ATTR(max_cpu_object);
 
 static ssize_t ctor_show(struct kmem_cache *s, char *buf)
 {
@@ -5103,7 +5103,7 @@ static struct attribute *slab_attrs[] = {
 	&objs_per_slab_attr.attr,
 	&order_attr.attr,
 	&min_partial_attr.attr,
-	&cpu_partial_attr.attr,
+	&max_cpu_object_attr.attr,
 	&objects_attr.attr,
 	&objects_partial_attr.attr,
 	&partial_attr.attr,
-- 
1.7.9.5


Thread overview: 95+ messages
     [not found] <Yes>
2012-07-16 16:14 ` [PATCH 1/3] mm: correct return value of migrate_pages() Joonsoo Kim
2012-07-16 16:14   ` [PATCH 2/3] mm: fix possible incorrect return value of migrate_pages() syscall Joonsoo Kim
2012-07-16 17:26     ` Christoph Lameter
2012-07-16 17:40     ` Michal Nazarewicz
2012-07-16 17:59       ` JoonSoo Kim
2012-07-17 13:02         ` Michal Nazarewicz
2012-07-16 16:14   ` [PATCH 3/3] mm: fix return value in __alloc_contig_migrate_range() Joonsoo Kim
2012-07-16 17:29     ` Christoph Lameter
2012-07-16 17:40     ` Michal Nazarewicz
2012-07-16 18:40       ` JoonSoo Kim
2012-07-17 13:16         ` Michal Nazarewicz
2012-07-16 17:14   ` [PATCH 4] mm: fix possible incorrect return value of move_pages() syscall Joonsoo Kim
2012-07-16 17:30     ` Christoph Lameter
2012-07-16 17:23   ` [PATCH 1/3] mm: correct return value of migrate_pages() Christoph Lameter
2012-07-16 17:32     ` JoonSoo Kim
2012-07-16 17:37       ` Christoph Lameter
2012-07-16 17:40   ` Michal Nazarewicz
2012-07-16 17:57     ` JoonSoo Kim
2012-07-16 18:05       ` Christoph Lameter
2012-07-17 12:33 ` [PATCH 1/4 v2] mm: correct return value of migrate_pages() and migrate_huge_pages() Joonsoo Kim
2012-07-17 12:33   ` [PATCH 2/4 v2] mm: fix possible incorrect return value of migrate_pages() syscall Joonsoo Kim
2012-07-17 14:28     ` Christoph Lameter
2012-07-17 15:41       ` JoonSoo Kim
2012-07-17 12:33   ` [PATCH 3/4 v2] mm: fix return value in __alloc_contig_migrate_range() Joonsoo Kim
2012-07-17 13:25     ` Michal Nazarewicz
2012-07-17 15:45       ` JoonSoo Kim
2012-07-17 15:49         ` [PATCH 3/4 v3] " Joonsoo Kim
2012-07-17 12:33   ` [PATCH 4/4 v2] mm: fix possible incorrect return value of move_pages() syscall Joonsoo Kim
2012-07-27 17:55 ` [RESEND PATCH 1/4 v3] mm: correct return value of migrate_pages() and migrate_huge_pages() Joonsoo Kim
2012-07-27 17:55   ` [RESEND PATCH 2/4 v3] mm: fix possible incorrect return value of migrate_pages() syscall Joonsoo Kim
2012-07-27 20:57     ` Christoph Lameter
2012-07-28  6:16       ` JoonSoo Kim
2012-07-30 19:30         ` Christoph Lameter
2012-07-27 17:55   ` [RESEND PATCH 3/4 v3] mm: fix return value in __alloc_contig_migrate_range() Joonsoo Kim
2012-07-27 17:55   ` [RESEND PATCH 4/4 v3] mm: fix possible incorrect return value of move_pages() syscall Joonsoo Kim
2012-07-27 20:54     ` Christoph Lameter
2012-07-28  6:09       ` JoonSoo Kim
2012-07-30 19:29         ` Christoph Lameter
2012-07-31  3:34           ` JoonSoo Kim
2012-07-31 14:04             ` Christoph Lameter
2012-08-01  5:15           ` Michael Kerrisk
2012-08-01 18:00             ` Christoph Lameter
2012-08-02  5:52               ` Michael Kerrisk
2012-08-24 16:05 ` Joonsoo Kim [this message]
2012-08-24 16:05   ` [PATCH 2/2] slub: correct the calculation of the number of cpu objects in get_partial_node Joonsoo Kim
2012-08-24 16:15     ` Christoph Lameter
2012-08-24 16:28       ` JoonSoo Kim
2012-08-24 16:31         ` Christoph Lameter
2012-08-24 16:40           ` JoonSoo Kim
2012-08-24 16:12   ` [PATCH 1/2] slub: rename cpu_partial to max_cpu_object Christoph Lameter
2012-08-25 14:11 ` [PATCH 1/2] slab: do ClearSlabPfmemalloc() for all pages of slab Joonsoo Kim
2012-08-25 14:11   ` [PATCH 2/2] slab: fix starting index for finding another object Joonsoo Kim
2012-09-03 10:08   ` [PATCH 1/2] slab: do ClearSlabPfmemalloc() for all pages of slab Mel Gorman
2012-10-20 15:48 ` [PATCH for-v3.7 1/2] slub: optimize poorly inlined kmalloc* functions Joonsoo Kim
2012-10-20 15:48   ` [PATCH for-v3.7 2/2] slub: optimize kmalloc* inlining for GFP_DMA Joonsoo Kim
2012-10-22 14:31     ` Christoph Lameter
2012-10-23  2:29       ` JoonSoo Kim
2012-10-23  6:16         ` Eric Dumazet
2012-10-23 16:12           ` JoonSoo Kim
2012-10-24  8:05   ` [PATCH for-v3.7 1/2] slub: optimize poorly inlined kmalloc* functions Pekka Enberg
2012-10-24 13:36     ` Christoph Lameter
2012-10-28 19:12 ` [PATCH 0/5] minor clean-up and optimize highmem related code Joonsoo Kim
2012-10-28 19:12   ` [PATCH 1/5] mm, highmem: use PKMAP_NR() to calculate an index of pkmap Joonsoo Kim
2012-10-29  1:48     ` Minchan Kim
2012-10-28 19:12   ` [PATCH 2/5] mm, highmem: remove useless pool_lock Joonsoo Kim
2012-10-29  1:52     ` Minchan Kim
2012-10-30 21:31     ` Andrew Morton
2012-10-31  5:14       ` Minchan Kim
2012-10-31 15:01       ` JoonSoo Kim
2012-10-28 19:12   ` [PATCH 3/5] mm, highmem: remove page_address_pool list Joonsoo Kim
2012-10-29  1:57     ` Minchan Kim
2012-10-28 19:12   ` [PATCH 4/5] mm, highmem: makes flush_all_zero_pkmaps() return index of last flushed entry Joonsoo Kim
2012-10-29  2:06     ` Minchan Kim
2012-10-29 13:12       ` JoonSoo Kim
2012-10-28 19:12   ` [PATCH 5/5] mm, highmem: get virtual address of the page using PKMAP_ADDR() Joonsoo Kim
2012-10-29  2:09     ` Minchan Kim
2012-10-29  2:12   ` [PATCH 0/5] minor clean-up and optimize highmem related code Minchan Kim
2012-10-29 13:15     ` JoonSoo Kim
2012-10-31 17:11       ` JoonSoo Kim
2012-10-31 16:56 ` [PATCH v2 " Joonsoo Kim
2012-10-31 16:56   ` [PATCH v2 1/5] mm, highmem: use PKMAP_NR() to calculate an index of pkmap Joonsoo Kim
2012-10-31 16:56   ` [PATCH v2 2/5] mm, highmem: remove useless pool_lock Joonsoo Kim
2012-10-31 16:56   ` [PATCH v2 3/5] mm, highmem: remove page_address_pool list Joonsoo Kim
2012-10-31 16:56   ` [PATCH v2 4/5] mm, highmem: makes flush_all_zero_pkmaps() return index of first flushed entry Joonsoo Kim
2012-11-01  5:03     ` Minchan Kim
2012-11-02 19:07       ` JoonSoo Kim
2012-11-02 22:42         ` Minchan Kim
2012-11-13  0:30           ` JoonSoo Kim
2012-11-13 12:49             ` Minchan Kim
2012-11-13 14:12               ` JoonSoo Kim
2012-11-13 15:01                 ` Minchan Kim
2012-11-14 17:09                   ` JoonSoo Kim
2012-11-19 23:46                     ` Minchan Kim
2012-11-27 15:01                       ` JoonSoo Kim
2012-10-31 16:56   ` [PATCH v2 5/5] mm, highmem: get virtual address of the page using PKMAP_ADDR() Joonsoo Kim
