Date: Mon, 5 Sep 2016 14:51:33 +0900
From: Minchan Kim <minchan@kernel.org>
To: Hui Zhu
Cc: Sergey Senozhatsky, Hui Zhu, Hugh Dickins, Steven Rostedt,
	Ingo Molnar, Peter Zijlstra, Andrew Morton, Thomas Gleixner,
	Joe Perches, Rik van Riel, "linux-kernel@vger.kernel.org",
	Linux Memory Management List
Subject: Re: [RFC 0/4] ZRAM: make it just store the high compression rate page
Message-ID: <20160905055133.GA28514@bbox>
References: <1471854309-30414-1-git-send-email-zhuhui@xiaomi.com>
	<20160825060957.GA568@swordfish>
	<20160905021852.GB22701@bbox>

On Mon, Sep 05, 2016 at 01:12:05PM +0800, Hui Zhu wrote:
> On Mon, Sep 5, 2016 at 10:18 AM, Minchan Kim wrote:
> > On Thu, Aug 25, 2016 at 04:25:30PM +0800, Hui Zhu wrote:
> >> On Thu, Aug 25, 2016 at 2:09 PM, Sergey Senozhatsky wrote:
> >> > Hello,
> >> >
> >> > On (08/22/16 16:25), Hui Zhu wrote:
> >> >>
> >> >> Currently ZRAM stores every page it is given, even if the
> >> >> compression ratio of that page is really low, so the overall
> >> >> compression ratio of ZRAM is out of control while it runs.
> >> >> For my part, I did some testing and recording with ZRAM; the
> >> >> compression ratio was about 40%.
> >> >>
> >> >> This series of patches makes ZRAM store only pages whose
> >> >> compressed size is smaller than a configured value.
> >> >> With these patches, I set the value to 2048 and ran the same
> >> >> test as before. The compression ratio was about 20%, and the
> >> >> number of lowmemorykiller invocations also decreased.
> >> >
> >> > I haven't looked at the patches in detail yet. Can you educate
> >> > me a bit? Is your test stable? Why has the number of
> >> > lowmemorykill-s decreased? ... or am I reading "The times of
> >> > lowmemorykiller also decreased" wrong?
> >> >
> >> > Suppose you have X pages that result in a bad compressed size
> >> > (from zram's point of view). zram stores such pages
> >> > uncompressed, IOW we have no memory savings - the swapped-out
> >> > page lands in the zsmalloc PAGE_SIZE class. Now you don't try
> >> > to store those pages in zsmalloc, but keep them as unevictable.
> >> > So the page still occupies PAGE_SIZE; no memory saving again.
> >> > Why did it improve LMK?
> >>
> >> No, zram will not store such a page uncompressed with these
> >> patches. It will mark the page as non-swap and kick it back to
> >> shrink_page_list. shrink_page_list will remove the page from the
> >> swap cache and move it to the unevictable list.
> >> Then the page will not be swapped out again until it is written
> >> to. That is why most of the code is around vmscan.c.
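
Just to check my understanding of that flow, I imagine something like
the sketch below happening in shrink_page_list(). I haven't read the
patches closely, so the flag name and the exact putback path here are
my guesses, not necessarily what your series does:

	/*
	 * Hypothetical sketch: zram rejected the page because it
	 * compressed badly, and pageout() propagated that back.
	 * PageNonSwap() is a made-up predicate for whatever flag
	 * the series sets on such a page.
	 */
	if (PageNonSwap(page)) {
		if (PageSwapCache(page))
			try_to_free_swap(page);	/* drop swap slot + swap cache */
		unlock_page(page);
		/*
		 * Presumably page_evictable() now sees the flag, so
		 * putback moves the page to the unevictable LRU and
		 * we stop retrying the pointless pageout until a
		 * write dirties the page again.
		 */
		putback_lru_page(page);
		continue;
	}

Is that roughly right?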

> > If I understand Sergey's point right, he means there is no memory
> > saving between before and after.
> >
> > With your approach, you can prevent unnecessary pageout (i.e.,
> > swap-out of incompressible pages), but that doesn't mean you save
> > memory compared to the old behavior, so why does your patch
> > decrease the number of lowmemory killings?
> >
> > One thing I can imagine is that without this feature, zram could
> > fill up with incompressible pages, so well-compressible pages
> > cannot be swapped out. Hui, is this scenario right for your case?
>
> That is one reason. But it is not the principal one.
>
> Another reason is that when swap is pushing pages into zram, what
> the system wants is to get memory back. So the deal is: the system
> spends CPU time and memory in order to get memory. If zram accepts
> only pages with a high compression ratio, the system gets more
> memory back for the same amount of memory spent, which pulls it out
> of low-memory status earlier. (Maybe more CPU time, because of the
> compression-ratio checks. But maybe less, because fewer pages have
> to go through the whole path. That is the interesting part. :)
> I think that is why the number of LMK kills decreased.
>
> And yes, all of this depends on the number of highly compressible
> pages. So you cannot just set a non_swap limit on a system and get
> everything for free. You need to do a lot of testing around it to
> make sure the non_swap limit is good for your system.
>
> And I think using AOP_WRITEPAGE_ACTIVATE without kicking the page
> to a special list would make the CPU too busy sometimes.

Yes, and it would be the same with your patch if the newly arriving
write to a CoWed page is incompressible data.

> I did some tests before I kicked pages to a special list. The shrink
> task

What kinds of tests? Could you elaborate a bit more?

And "shrink task" - what does that mean?

> will be moved around, around and around, because low compression
> ratio pages just move from one list to another many times, again
> and again.
> And all these low compression ratio pages always stay together.

I cannot understand this without a more detailed description. :(
Could you explain more?
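
As an aside, my mental model of the zram-side policy in your series
is just a size check after compression - roughly the sketch below.
The names here (non_swap_limit, the -ENOSPC rejection) are mine, not
necessarily what the patches use:

	ret = zcomp_compress(zstrm, src, &comp_len);
	if (!ret && comp_len > non_swap_limit) {	/* e.g. 2048 in your test */
		/*
		 * Today zram falls back to storing such a page
		 * uncompressed in the PAGE_SIZE class; with the
		 * series the write is refused instead, so reclaim
		 * marks the page non-swap and keeps it in memory.
		 */
		ret = -ENOSPC;
	}

Please correct me if that is not the shape of it.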