From mboxrd@z Thu Jan 1 00:00:00 1970
From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Minchan Kim
Subject: [PATCH v2] zsmalloc: use class->pages_per_zspage
Date: Thu, 16 Jul 2015 09:10:54 +0900
Message-Id: <1437005454-3338-1-git-send-email-minchan@kernel.org>

There is no need to recalculate pages_per_zspage at runtime. Just use the
cached class->pages_per_zspage to avoid unnecessary runtime overhead.
* From v1
  * fix up __zs_compact - Sergey

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 27b9661c8fa6..c9685bb2bb92 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1711,7 +1711,7 @@ static unsigned long zs_can_compact(struct size_class *class)
 	obj_wasted /= get_maxobj_per_zspage(class->size,
 			class->pages_per_zspage);
 
-	return obj_wasted * get_pages_per_zspage(class->size);
+	return obj_wasted * class->pages_per_zspage;
 }
 
 static void __zs_compact(struct zs_pool *pool, struct size_class *class)
@@ -1749,8 +1749,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 
 		putback_zspage(pool, class, dst_page);
 		if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
-			pool->stats.pages_compacted +=
-				get_pages_per_zspage(class->size);
+			pool->stats.pages_compacted += class->pages_per_zspage;
 		spin_unlock(&class->lock);
 		cond_resched();
 		spin_lock(&class->lock);
--
1.9.1