From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 13 May 2016 16:05:53 +0900
From: Sergey Senozhatsky
To: Sergey Senozhatsky
Cc: Minchan Kim, Sergey Senozhatsky, Andrew Morton,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] zram: introduce per-device debug_stat sysfs node
Message-ID: <20160513070553.GC615@swordfish>
References: <20160511134553.12655-1-sergey.senozhatsky@gmail.com>
 <20160512234143.GA27204@bbox>
 <20160513010929.GA615@swordfish>
 <20160513062303.GA21204@bbox>
 <20160513065805.GB615@swordfish>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160513065805.GB615@swordfish>
User-Agent: Mutt/1.6.1 (2016-04-27)
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On (05/13/16 15:58), Sergey Senozhatsky wrote:
> On (05/13/16 15:23), Minchan Kim wrote:
> [..]
> > @@ -737,12 +737,12 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >  		zcomp_strm_release(zram->comp, zstrm);
> >  		zstrm = NULL;
> >
> > -		atomic64_inc(&zram->stats.num_recompress);
> > -
> >  		handle = zs_malloc(meta->mem_pool, clen,
> > 				GFP_NOIO | __GFP_HIGHMEM);
> > -		if (handle)
> > +		if (handle) {
> > +			atomic64_inc(&zram->stats.num_recompress);
> >  			goto compress_again;
> > +		}
>
> not like a real concern...
>
> the main (and only) purpose of num_recompress is to match performance
> slowdowns with failed fast write paths (when the first zs_malloc() fails).
> this matching depends on a successful second zs_malloc(); if that one is
> also unsuccessful, we would only increase failed_writes, without
> increasing the failed fast write counter, while we actually would have
> had a failed fast write and an extra zs_malloc() [unaccounted in this
> case]. it's probably a bit unlikely to happen, but still. well, just
> saying.

here I assume that the biggest contributor to re-compress latency is
enabled preemption after zcomp_strm_release() and this second zs_malloc().
the compression itself of a PAGE_SIZE buffer should be fast enough.

so IOW we would pass down the slow path, but would not account it.

	-ss