From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Oct 2016 15:08:09 +0900
From: Sergey Senozhatsky
To: Minchan Kim
Cc: Sergey Senozhatsky, Jens Axboe, Andrew Morton, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Sergey Senozhatsky
Subject: Re: [PATCH 2/3] zram: support page-based parallel write
Message-ID: <20161021060809.GB527@swordfish>
References: <1474526565-6676-1-git-send-email-minchan@kernel.org> <1474526565-6676-2-git-send-email-minchan@kernel.org> <20160929031831.GA1175@swordfish> <20160930055221.GA16293@bbox> <20161004044314.GA835@swordfish> <20161005020153.GA2988@bbox> <20161006082915.GA946@swordfish> <20161007063322.GA24554@bbox> <20161017050424.GA4591@blaptop>
In-Reply-To: <20161017050424.GA4591@blaptop>
User-Agent: Mutt/1.7.1 (2016-10-04)

Hello Minchan,

On (10/17/16 14:04), Minchan Kim wrote:
> Hi Sergey,
>
> On Fri, Oct 07, 2016 at 03:33:22PM +0900, Minchan Kim wrote:
> > < snip >
> >
> > > so the question is -- can we move this parallelization out of zram
> > > and instead flush bdi in more than one kthread? how bad would that
> > > be? can anyone else benefit from this?
> >
> > Isn't it blk-mq you mentioned? With blk-mq, I have some concerns:
> >
> > 1. read speed degradation
> > 2. does not work with rw_page
> > 3. bigger memory footprint due to bio/request queue allocation
> >
> > Having said that, it's worth looking into it in more detail.
> > I will have time to look at that approach and see what I can do
> > with it.
>
> queue_mode=2 bs=4096 nr_devices=1 submit_queues=4 hw_queue_depth=128
>
> Last week, I played with null_blk and blk-mq.c to get an idea of how
> blk-mq works, and I realized it is not a good fit for zram, because
> blk-mq aims to 1) solve the dispatch queue bottleneck and 2) provide
> cache-friendly IO completion through IRQs, so that 3) remote memory
> accesses are avoided.
>
> For zram, whose primary use case is embedded systems, the issues
> listed above are not severe. The most important thing is that there
> is no model to support a process queueing an IO request on *a* CPU
> while other CPUs issue the queued IO to the driver.
>
> Anyway, although blk-mq can support that model, it is a blk-layer
> thing. IOW, it is software support for fast IO delivery, but what we
> need is device parallelism in zram itself. So, even if we follow
> blk-mq, we still need multiple threads to compress in parallel, which
> is most of the code I wrote in this patchset.

yes. but at least the writeback can be multi-threaded. well, sort of.
seems like. sometimes.

> If I cannot get a huge benefit (e.g., removing a lot of zram-specific
> code needed to support such a model) with blk-mq, I don't feel like
> switching to the request model, given the reasons I stated above.

thanks. I'm looking at your patches.

	-ss