linux-mm.kvack.org archive mirror
* [PATCH 0/3] Slub code refine
@ 2015-10-21  9:51 Wei Yang
  2015-10-21  9:51 ` [PATCH 1/3] mm/slub: correct the comment in calculate_order() Wei Yang
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Wei Yang @ 2015-10-21  9:51 UTC (permalink / raw)
  To: cl, penberg, rientjes, akpm; +Cc: linux-mm, Wei Yang

Here are three patches that refine the slub code.

Some of them are already acked/reviewed; this resend adds Andrew for comments.
There are no code changes from the previous posting.

Wei Yang (3):
  mm/slub: correct the comment in calculate_order()
  mm/slub: use get_order() instead of fls()
  mm/slub: calculate start order with reserved in consideration

 mm/slub.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

-- 
2.5.0

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/3] mm/slub: correct the comment in calculate_order()
  2015-10-21  9:51 [PATCH 0/3] Slub code refine Wei Yang
@ 2015-10-21  9:51 ` Wei Yang
  2015-10-21 14:18   ` Christoph Lameter
  2015-10-21  9:51 ` [PATCH 2/3] mm/slub: use get_order() instead of fls() Wei Yang
  2015-10-21  9:51 ` [PATCH 3/3] mm/slub: calculate start order with reserved in consideration Wei Yang
  2 siblings, 1 reply; 7+ messages in thread
From: Wei Yang @ 2015-10-21  9:51 UTC (permalink / raw)
  To: cl, penberg, rientjes, akpm; +Cc: linux-mm, Wei Yang

calculate_order() tries to find the best order by adjusting fraction and
min_objects. For each min_objects value, fraction iterates over 16, 8, 4,
which means the acceptable waste increases from 1/16 to 1/8 to 1/4 of the
slab size.

This patch corrects the comment according to the code.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index f68c0e5..e171b10 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2839,7 +2839,7 @@ static inline int calculate_order(int size, int reserved)
 	 * works by first attempting to generate a layout with
 	 * the best configuration and backing off gradually.
 	 *
-	 * First we reduce the acceptable waste in a slab. Then
+	 * First we increase the acceptable waste in a slab. Then
 	 * we reduce the minimum objects required in a slab.
 	 */
 	min_objects = slub_min_objects;
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 2/3] mm/slub: use get_order() instead of fls()
  2015-10-21  9:51 [PATCH 0/3] Slub code refine Wei Yang
  2015-10-21  9:51 ` [PATCH 1/3] mm/slub: correct the comment in calculate_order() Wei Yang
@ 2015-10-21  9:51 ` Wei Yang
  2015-10-21 14:19   ` Christoph Lameter
  2015-10-21  9:51 ` [PATCH 3/3] mm/slub: calculate start order with reserved in consideration Wei Yang
  2 siblings, 1 reply; 7+ messages in thread
From: Wei Yang @ 2015-10-21  9:51 UTC (permalink / raw)
  To: cl, penberg, rientjes, akpm; +Cc: linux-mm, Wei Yang

get_order() is easier to understand than the open-coded fls() expression.

This patch replaces the fls()-based calculation with get_order().

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Pekka Enberg <penberg@kernel.org>
---
 mm/slub.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e171b10..37552f8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2808,8 +2808,7 @@ static inline int slab_order(int size, int min_objects,
 	if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
 		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
 
-	for (order = max(min_order,
-				fls(min_objects * size - 1) - PAGE_SHIFT);
+	for (order = max(min_order, get_order(min_objects * size));
 			order <= max_order; order++) {
 
 		unsigned long slab_size = PAGE_SIZE << order;
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH 3/3] mm/slub: calculate start order with reserved in consideration
  2015-10-21  9:51 [PATCH 0/3] Slub code refine Wei Yang
  2015-10-21  9:51 ` [PATCH 1/3] mm/slub: correct the comment in calculate_order() Wei Yang
  2015-10-21  9:51 ` [PATCH 2/3] mm/slub: use get_order() instead of fls() Wei Yang
@ 2015-10-21  9:51 ` Wei Yang
  2 siblings, 0 replies; 7+ messages in thread
From: Wei Yang @ 2015-10-21  9:51 UTC (permalink / raw)
  To: cl, penberg, rientjes, akpm; +Cc: linux-mm, Wei Yang

In slab_order(), the order starts from max(min_order, get_order(min_objects *
size)). When (min_objects * size) and (min_objects * size + reserved) round
up to different orders, the check in the loop skips the start order anyway.

This patch optimizes this slightly by taking reserved into account when
calculating the start order, and removes the now-redundant check from the
loop.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Acked-by: Christoph Lameter <cl@linux.com>
---
 mm/slub.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 37552f8..62b228e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2808,19 +2808,15 @@ static inline int slab_order(int size, int min_objects,
 	if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
 		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
 
-	for (order = max(min_order, get_order(min_objects * size));
+	for (order = max(min_order, get_order(min_objects * size + reserved));
 			order <= max_order; order++) {
 
 		unsigned long slab_size = PAGE_SIZE << order;
 
-		if (slab_size < min_objects * size + reserved)
-			continue;
-
 		rem = (slab_size - reserved) % size;
 
 		if (rem <= slab_size / fract_leftover)
 			break;
-
 	}
 
 	return order;
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH 1/3] mm/slub: correct the comment in calculate_order()
  2015-10-21  9:51 ` [PATCH 1/3] mm/slub: correct the comment in calculate_order() Wei Yang
@ 2015-10-21 14:18   ` Christoph Lameter
  0 siblings, 0 replies; 7+ messages in thread
From: Christoph Lameter @ 2015-10-21 14:18 UTC (permalink / raw)
  To: Wei Yang; +Cc: penberg, rientjes, akpm, linux-mm

On Wed, 21 Oct 2015, Wei Yang wrote:

> calculate_order() tries to find the best order by adjusting fraction and
> min_objects. For each min_objects value, fraction iterates over 16, 8, 4,
> which means the acceptable waste increases from 1/16 to 1/8 to 1/4 of the
> slab size.

Acked-by: Christoph Lameter <cl@linux.com>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 2/3] mm/slub: use get_order() instead of fls()
  2015-10-21  9:51 ` [PATCH 2/3] mm/slub: use get_order() instead of fls() Wei Yang
@ 2015-10-21 14:19   ` Christoph Lameter
  2015-10-22  1:09     ` Wei Yang
  0 siblings, 1 reply; 7+ messages in thread
From: Christoph Lameter @ 2015-10-21 14:19 UTC (permalink / raw)
  To: Wei Yang; +Cc: penberg, rientjes, akpm, linux-mm

On Wed, 21 Oct 2015, Wei Yang wrote:

> Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
> Pekka Enberg <penberg@kernel.org>

Acked-by: ?


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 2/3] mm/slub: use get_order() instead of fls()
  2015-10-21 14:19   ` Christoph Lameter
@ 2015-10-22  1:09     ` Wei Yang
  0 siblings, 0 replies; 7+ messages in thread
From: Wei Yang @ 2015-10-22  1:09 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Wei Yang, penberg, rientjes, akpm, linux-mm

On Wed, Oct 21, 2015 at 09:19:09AM -0500, Christoph Lameter wrote:
>On Wed, 21 Oct 2015, Wei Yang wrote:
>
>> Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
>> Pekka Enberg <penberg@kernel.org>
>
>Acked-by: ?
>

Oh, I missed copying the tag.

Reviewed-by: Pekka Enberg <penberg@kernel.org>

-- 
Richard Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 7+ messages in thread
