From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 18 Mar 2016 10:25:24 +0900
From: Minchan Kim <minchan@kernel.org>
To: Sergey Senozhatsky
CC: Sergey Senozhatsky, Andrew Morton, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] zram: export the number of available comp streams
Message-ID: <20160318012524.GA10612@bbox>
References: <1453809839-21705-1-git-send-email-sergey.senozhatsky@gmail.com> <20160129072842.GA30072@bbox> <20160201010157.GA1033@swordfish> <20160318003236.GB2154@bbox> <20160318010937.GA572@swordfish>
In-Reply-To: <20160318010937.GA572@swordfish>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 18, 2016 at 10:09:37AM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> On (03/18/16 09:32), Minchan Kim wrote:
> [..]
> > > do I need 21? maybe not. do I need 18? if 18 streams are needed only 10%
> > > of the time (I can figure it out by doing a repetitive cat zramX/mm_stat),
> > > then I can set max_comp_streams to make 90% of applications happy, e.g.
> > > max_comp_streams to 10, and save some memory.
> >
> > Okay. Let's go back to the zcomp design days. As you remember, the reason
> > we separated the single- and multi-stream code was performance caused by the
> > locking scheme (i.e., mutex_lock in the single-stream model was much faster
> > than the sleep/wakeup model in the multi-stream one).
> > If we could have overcome that problem back then, we would have made the
> > multi-stream code the default.
>
> yes, IIRC I wanted to limit the number of streams by the number of
> online CPUs (or was it 2*num_online_cpus()?), and thus change the
> number of streams dynamically (because CPUs can go on and off line);
> and create at least num_online_cpus() streams during device
> initialization.
>
> the reason for a single-stream zram IIRC was setups in which
> zram is used as a swap device. streams require some memory, after
> all. and then we discovered that mutex spin-on-owner boosts
> single-stream zram significantly.
>
> > How about using *per-cpu* streams?
>
> OK. instead of a list of idle streams, use a per-cpu pointer and process
> CPU_FOO notifications. that can work, sounds good to me.
>
> > I remember you wanted to create the max number of comp streams statically.
> > I didn't want that at the time, but I have changed my mind.
> >
> > Let's allocate comp streams statically but remove the max_comp_streams
> > knob. Instead, by default, zram allocates a number of streams according
> > to the number of online CPUs.
>
> OK. removing `max_comp_streams' will take another 2 years. That's
> a major change, we can leave it for longer, just make it a nop.
>
> > So I think we can solve the locking scheme issue of the single stream,
> > guarantee the level of parallelism, and enhance performance with
> > no locking.
> >
> > The downside of the approach is reserving unnecessary memory space,
> > even though zram might be used only 1% of the system's running time. But we
> > should give that up for the other benefits
>
> aha, ok.
> > (ie, simple code, removing
> > the max_comp_streams knob, no need for this stat of yours, a guaranteed
> > level of parallelism, bounded memory consumption).
>
> I'll take a look and prepare some numbers (most likely next week).

Sounds great to me!

> > What do you think about it?
>
> so should I ask Andrew to drop this patch?

Yeb. Thanks!

> -ss