From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 22 Feb 2016 11:34:32 +0900
From: Minchan Kim
To: Sergey Senozhatsky
CC: Sergey Senozhatsky, Andrew Morton, Joonsoo Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCH v2 3/3] mm/zsmalloc: increase ZS_MAX_PAGES_PER_ZSPAGE
Message-ID: <20160222023432.GC27829@bbox>
References: <1456061274-20059-1-git-send-email-sergey.senozhatsky@gmail.com> <1456061274-20059-4-git-send-email-sergey.senozhatsky@gmail.com> <20160222002515.GB21710@bbox> <20160222004758.GB4958@swordfish> <20160222013442.GB27829@bbox> <20160222020113.GB488@swordfish>
In-Reply-To: <20160222020113.GB488@swordfish>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Feb 22, 2016 at 11:01:13AM +0900, Sergey Senozhatsky wrote:
> On (02/22/16 10:34), Minchan Kim wrote:
> [..]
> > > > I was tempted several times, for the same reason you pointed out.
> > > > But my worry was that if we increase ZS_MAX_ZSPAGE_ORDER, zram can
> > > > consume more memory, because we need a chain of several pages to
> > > > store just one object. Also, at that time we didn't have a compaction
> > > > scheme, so object fragmentation within a zspage was a huge pain and
> > > > wasted a lot of memory.
> > >
> > > well, the thing is -- we end up requesting fewer pages after all, so
> > > zsmalloc has better chances to survive. for example, gcc5 compilation test
> >
> > Indeed. I saw your test result.
> >
> [..]
> > > Total     129     489     1627756     1618193     850147
> > >
> > > that's 891703 - 850147 = 41556 fewer pages, or 162MB less memory used.
> > > 41556 fewer pages means that zsmalloc had 41556 fewer chances to fail.
> >
> > Let's think about the swap case, which is more important for zram now.
> > As you know, most use cases are swap in the embedded world.
> > Do we really need a 16-page allocation just for an object smaller than
> > PAGE_SIZE, at a moment of really heavy memory pressure?
>
> I'll take a look at dynamic class page addition.

Thanks, Sergey.

Just a note: I am preparing zsmalloc migration now and it is almost done,
so I hope I can send it within two weeks. It changes a lot of things in
zsmalloc: page chaining, the use of struct page fields, the locking
scheme, and so on.

zsmalloc fragmentation/migration is really painful now, so we should solve
it first. I hope you can help review that, and then let's pursue dynamic
chaining after it lands, please. :)