* [PATCH v2] zsmalloc: use class->pages_per_zspage
@ 2015-07-16 0:10 Minchan Kim
2015-07-16 0:20 ` Sergey Senozhatsky
From: Minchan Kim @ 2015-07-16 0:10 UTC (permalink / raw)
To: Andrew Morton; +Cc: Sergey Senozhatsky, linux-kernel, linux-mm, Minchan Kim
There is no need to recalculate pages_per_zspage at runtime.
Just use class->pages_per_zspage to avoid the unnecessary
overhead.
* From v1
* fix up __zs_compact - Sergey
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
mm/zsmalloc.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 27b9661c8fa6..c9685bb2bb92 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1711,7 +1711,7 @@ static unsigned long zs_can_compact(struct size_class *class)
obj_wasted /= get_maxobj_per_zspage(class->size,
class->pages_per_zspage);
- return obj_wasted * get_pages_per_zspage(class->size);
+ return obj_wasted * class->pages_per_zspage;
}
static void __zs_compact(struct zs_pool *pool, struct size_class *class)
@@ -1749,8 +1749,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
putback_zspage(pool, class, dst_page);
if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
- pool->stats.pages_compacted +=
- get_pages_per_zspage(class->size);
+ pool->stats.pages_compacted += class->pages_per_zspage;
spin_unlock(&class->lock);
cond_resched();
spin_lock(&class->lock);
--
1.9.1
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* Re: [PATCH v2] zsmalloc: use class->pages_per_zspage
2015-07-16 0:10 [PATCH v2] zsmalloc: use class->pages_per_zspage Minchan Kim
@ 2015-07-16 0:20 ` Sergey Senozhatsky
From: Sergey Senozhatsky @ 2015-07-16 0:20 UTC (permalink / raw)
To: Minchan Kim; +Cc: Andrew Morton, Sergey Senozhatsky, linux-kernel, linux-mm
On (07/16/15 09:10), Minchan Kim wrote:
> There is no need to recalculate pages_per_zspage at runtime.
> Just use class->pages_per_zspage to avoid the unnecessary
> overhead.
>
> * From v1
> * fix up __zs_compact - Sergey
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
thanks.
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
-ss
> ---
> mm/zsmalloc.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 27b9661c8fa6..c9685bb2bb92 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1711,7 +1711,7 @@ static unsigned long zs_can_compact(struct size_class *class)
> obj_wasted /= get_maxobj_per_zspage(class->size,
> class->pages_per_zspage);
>
> - return obj_wasted * get_pages_per_zspage(class->size);
> + return obj_wasted * class->pages_per_zspage;
> }
>
> static void __zs_compact(struct zs_pool *pool, struct size_class *class)
> @@ -1749,8 +1749,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>
> putback_zspage(pool, class, dst_page);
> if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
> - pool->stats.pages_compacted +=
> - get_pages_per_zspage(class->size);
> + pool->stats.pages_compacted += class->pages_per_zspage;
> spin_unlock(&class->lock);
> cond_resched();
> spin_lock(&class->lock);