Date: Mon, 22 Feb 2016 13:54:58 +0900
From: Minchan Kim <minchan@kernel.org>
To: Sergey Senozhatsky
Cc: Sergey Senozhatsky, Andrew Morton, Joonsoo Kim,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCH v2 2/3] zram: use zs_get_huge_class_size_watermark()
Message-ID: <20160222045458.GF27829@bbox>
References: <1456061274-20059-1-git-send-email-sergey.senozhatsky@gmail.com>
 <1456061274-20059-3-git-send-email-sergey.senozhatsky@gmail.com>
 <20160222000436.GA21710@bbox>
 <20160222004047.GA4958@swordfish>
 <20160222012758.GA27829@bbox>
 <20160222015912.GA488@swordfish>
 <20160222025709.GD27829@bbox>
 <20160222035448.GB11961@swordfish>
In-Reply-To: <20160222035448.GB11961@swordfish>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Feb 22, 2016 at 12:54:48PM +0900, Sergey Senozhatsky wrote:
> On (02/22/16 11:57), Minchan Kim wrote:
> [..]
> > > > Yes, I mean if we have backing storage, we could mitigate the
> > > > problem with the approach I mentioned. Otherwise, we should
> > > > solve it in the allocator itself; you suggested the idea and I
> > > > commented on a first step. What's the problem now?
> > >
> > > well, I didn't say I have problems.
> > > so you want a backing device that will keep only 'bad compression'
> > > objects, and use zsmalloc to keep only 'good compression' objects?
> > > IOW, no huge classes in zsmalloc at all? well, that can work out.
> > > it's a bit strange, though, that to solve zram-zsmalloc issues we
> > > would ask someone to create an additional device. it looks (at
> > > least for now) like we can address those issues in zram-zsmalloc
> > > entirely, w/o user intervention or a 3rd-party device.
> >
> > Agree. That's what I want. zram shouldn't be aware of the
> > allocator's internal implementation. IOW, zsmalloc should handle it
> > without exposing any internal limitation.
>
> well, at the same time zram must not dictate what to do. zram simply
> spoils zsmalloc; it does not offer guaranteed good compression, and it
> does not let zsmalloc do its job. zram has only excuses to be the way
> it is. the existing zram->zsmalloc dependency looks worse to me than a
> zsmalloc->zram one would.

I don't see why you think it's a zram->zsmalloc dependency. I already
explained; here it goes again.

A long time ago, zram (i.e., ramzswap) could fall back to a backing
device for an incompressible page, if one was present, and the
threshold was PAGE_SIZE / 2. IOW, if the compression ratio was worse
than 50%, zram passed the page on to the backing storage for memory
efficiency. If zram doesn't have a backing storage and compression
saves less than 25% (i.e., too little memory saving), it stores the
page uncompressed to avoid the extra *decompress* overhead.
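To make the policy concrete, here is a minimal userspace sketch of that
decision; the function, enum, and test values are made up for
illustration, this is not the historical ramzswap code:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096

enum action { STORE_COMPRESSED, STORE_UNCOMPRESSED, SEND_TO_BACKING_DEV };

/* comp_len is the compressed size of one PAGE_SIZE page */
static enum action pick_action(size_t comp_len, bool have_backing_dev)
{
        if (have_backing_dev) {
                /* saves less than 50%: punt the raw page to disk */
                if (comp_len >= PAGE_SIZE / 2)
                        return SEND_TO_BACKING_DEV;
        } else {
                /* saves less than 25%: store the page as-is, so a
                 * read skips a pointless decompression pass */
                if (comp_len >= PAGE_SIZE / 4 * 3)
                        return STORE_UNCOMPRESSED;
        }
        return STORE_COMPRESSED;
}

int main(void)
{
        printf("%d\n", pick_action(2200, true));  /* SEND_TO_BACKING_DEV */
        printf("%d\n", pick_action(3500, false)); /* STORE_UNCOMPRESSED */
        printf("%d\n", pick_action(1000, false)); /* STORE_COMPRESSED */
        return 0;
}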
Of course, the trade-off between memory efficiency and CPU consumption
is arguable, so we should handle that as a separate topic. What I want
to say here is that this is not a dependency between zram and zsmalloc;
it has been a zram policy for a long time. If the policy is not good,
we can fix it.

> > The backing device issue is orthogonal, but what I said was that it
> > could solve the issue, too, without exposing zsmalloc's limitation
> > to zram.
>
> well, a backing device would not reduce the number of pages we
> request, and that's the priority issue, especially if we are talking
> about an embedded system with little free memory. we would just move
> huge objects from zsmalloc to the backing device; other than that, we
> would still request 1000 (for example) pages to store 1000 objects.
> it's zsmalloc's "page sharing" that permits us to request fewer than
> 1000 pages to store 1000 objects.
>
> so yes, I agree, increasing ZS_MAX_ZSPAGE_ORDER and doing more tests
> is step #1.
>
> > Let me summarize my points here.
> >
> > Let's make zsmalloc smarter to reduce wasted space. One option is
> > dynamic page creation, which I agreed with.
> >
> > Before that feature, we should test how much bigger the memory
> > footprint gets without it when we increase ZS_MAX_ZSPAGE_ORDER.
> > If it's not big, we could go with your patch easily, without adding
> > more complex stuff (i.e., dynamic page creation).
>
> yes, agree. alloc_zspage()/init_zspage() and friends must be the last
> thing to touch, and only if an increased ZS_MAX_ZSPAGE_ORDER turns out
> not to be good enough.
>
> > Please check max_used_pages rather than mem_used_total to see the
> > memory footprint at a given moment, and test a heavily fragmented
> > scenario (creating files and freeing parts of them) rather than
> > just a full copy.
>
> sure, more tests will follow. Thanks.
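For reference, the "page sharing" mentioned above is zsmalloc linking
several order-0 pages into a single zspage so that objects may straddle
page boundaries. A userspace sketch of the pages-per-zspage choice,
modeled on get_pages_per_zspage() in mm/zsmalloc.c (the constants and
the example class size here are illustrative):

#include <stdio.h>

#define PAGE_SIZE               4096
#define ZS_MAX_ZSPAGE_ORDER     2       /* the value under discussion */
#define ZS_MAX_PAGES_PER_ZSPAGE (1 << ZS_MAX_ZSPAGE_ORDER)

/* Pick the zspage size (in pages) that wastes the least space for
 * objects of class_size bytes. A larger ZS_MAX_ZSPAGE_ORDER allows
 * bigger zspages, hence less waste for awkward class sizes. */
static int get_pages_per_zspage(int class_size)
{
        int i, max_usedpc = 0;
        int max_usedpc_order = 1;

        for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
                int zspage_size = i * PAGE_SIZE;
                int waste = zspage_size % class_size;
                int usedpc = (zspage_size - waste) * 100 / zspage_size;

                if (usedpc > max_usedpc) {
                        max_usedpc = usedpc;
                        max_usedpc_order = i;
                }
        }
        return max_usedpc_order;
}

int main(void)
{
        /* a 3264-byte class: one page wastes ~20%, four pages ~0.4% */
        printf("pages per zspage: %d\n", get_pages_per_zspage(3264));
        return 0;
}

This prints 4: linking four pages lets five 3264-byte objects share one
16KB zspage, which is how fewer than N pages can hold N objects.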