From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 22 Feb 2016 09:47:58 +0900
From: Sergey Senozhatsky
To: Minchan Kim
Cc: Sergey Senozhatsky, Andrew Morton, Joonsoo Kim, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: Re: [RFC][PATCH v2 3/3] mm/zsmalloc: increase ZS_MAX_PAGES_PER_ZSPAGE
Message-ID: <20160222004758.GB4958@swordfish>
References: <1456061274-20059-1-git-send-email-sergey.senozhatsky@gmail.com>
	<1456061274-20059-4-git-send-email-sergey.senozhatsky@gmail.com>
	<20160222002515.GB21710@bbox>
In-Reply-To: <20160222002515.GB21710@bbox>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Mailing-List: linux-kernel@vger.kernel.org

On (02/22/16 09:25), Minchan Kim wrote:
[..]
> I was tempted to do it several times, for the same reason you pointed
> out. But my worry was that if we increase ZS_MAX_ZSPAGE_ORDER, zram can
> consume more memory, because we need a chain of several pages to store
> just one object. And at that time we didn't have a compaction scheme,
> so object fragmentation within a zspage was a huge pain, wasting memory.

well, the thing is -- we end up requesting fewer pages after all, so
zsmalloc has better chances to survive.
for example, a gcc5 compilation test:

BASE
 class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage
   168  2720           0            1        115833     115831      77222                2
   190  3072           0            1        109708     109707      82281                3
   202  3264           0            5          1910       1895       1528                4
   254  4096           0            0        380174     380174     380174                1
 Total                44          285       1621495    1618234     891703

PATCHED
 class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage
   192  3104           1            0          3740       3737       2860               13
   194  3136           0            1          7215       7208       5550               10
   197  3184           1            0         11151      11150       8673                7
   199  3216           0            1          9310       9304       7315               11
   200  3232           0            1          4731       4717       3735               15
   202  3264           0            1          8400       8396       6720                4
   206  3328           0            1         22064      22051      17927               13
   207  3344           0            1          4884       4877       3996                9
   208  3360           0            1          4420       4415       3640               14
   211  3408           0            1         11250      11246       9375                5
   212  3424           1            0          3344       3343       2816               16
   214  3456           0            2          7345       7329       6215               11
   217  3504           0            1         10801      10797       9258                6
   219  3536           0            1          5295       5289       4589               13
   222  3584           0            0          6008       6008       5257                7
   223  3600           0            1          1530       1518       1350               15
   225  3632           0            1          3519       3514       3128                8
   228  3680           0            1          3990       3985       3591                9
   230  3712           0            2          2167       2151       1970               10
   232  3744           1            2          1848       1835       1694               11
   234  3776           0            2          1404       1384       1296               12
   235  3792           0            2           672        654        624               13
   236  3808           1            2           615        592        574               14
   238  3840           1            2          1120       1098       1050               15
   254  4096           0            0        241824     241824     241824                1
 Total               129          489       1627756    1618193     850147

that's 891703 - 850147 = 41556 fewer pages, or ~162MB less memory used.
41556 fewer pages means that zsmalloc had 41556 fewer chances to fail.

> Now we have a compaction facility, so object fragmentation might not
> be a severe problem, but it is still painful to allocate 16 pages to
> store a 3408-byte object. So, if we want to increase
> ZS_MAX_ZSPAGE_ORDER, first of all we should prepare dynamic creation
> of the sub-pages of a zspage, I think, and smarter compaction to
> minimize wasted memory.

well, I agree, but given that we allocate fewer pages, do we really
want to introduce this complexity at this point?

	-ss