linux-kernel.vger.kernel.org archive mirror
From: Jerome Marchand <jmarchan@redhat.com>
To: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Minchan Kim <minchan@kernel.org>, Nitin Gupta <ngupta@vflare.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCHv3 3/8] zram: remove good and bad compress stats
Date: Thu, 16 Jan 2014 14:44:05 +0100	[thread overview]
Message-ID: <52D7E225.7000105@redhat.com> (raw)
In-Reply-To: <1389877936-15543-4-git-send-email-sergey.senozhatsky@gmail.com>

On 01/16/2014 02:12 PM, Sergey Senozhatsky wrote:
> Remove `good' and `bad' compressed sub-request stats. A RW request may
> be split into a number of RW sub-requests. zram used to account `good'
> compressed sub-requests (compressed size less than 50% of the original
> size) and `bad' compressed sub-requests (compressed size greater than
> 75% of the original size), leaving sub-requests with a compressed size
> between 50% and 75% of the original size unaccounted and unreported.
> zram already accounts each sub-request's compressed size, so the real
> device compression ratio can be calculated.
> 
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
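
For anyone who still wants the ratio after this change, the counters the
patch keeps are sufficient. A minimal sketch of the derivation (PAGE_SIZE
of 4096 is assumed here, and the counter values in the comment are
hypothetical, not from a real device):

```c
/* Sketch: derive the device compression ratio from the two counters
 * zram still maintains after this patch: total compressed bytes
 * (stats.compr_size) and the number of stored pages
 * (stats.pages_stored). The dropped good/bad counters only bucketed
 * this same information into coarse ranges; the exact ratio is
 * recoverable as below. */
#define ZRAM_PAGE_SIZE 4096ULL

static double compr_ratio(unsigned long long compr_size,
                          unsigned long long pages_stored)
{
    /* original (uncompressed) size is one full page per stored page */
    unsigned long long orig_size = pages_stored * ZRAM_PAGE_SIZE;

    return orig_size ? (double)compr_size / (double)orig_size : 0.0;
}

/* e.g. 1024 pages stored in 1 MiB of compressed data:
 * compr_ratio(1ULL << 20, 1024) yields 0.25 */
```

So a tool reading the sysfs stats can report the precise ratio instead of
the two lossy buckets.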

> ---
>  drivers/block/zram/zram_drv.c | 11 -----------
>  drivers/block/zram/zram_drv.h |  2 --
>  2 files changed, 13 deletions(-)
> 
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index b0bcb7e..c7c7789 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -293,7 +293,6 @@ static void zram_free_page(struct zram *zram, size_t index)
>  {
>  	struct zram_meta *meta = zram->meta;
>  	unsigned long handle = meta->table[index].handle;
> -	u16 size = meta->table[index].size;
>  
>  	if (unlikely(!handle)) {
>  		/*
> @@ -307,14 +306,8 @@ static void zram_free_page(struct zram *zram, size_t index)
>  		return;
>  	}
>  
> -	if (unlikely(size > max_zpage_size))
> -		atomic_dec(&zram->stats.bad_compress);
> -
>  	zs_free(meta->mem_pool, handle);
>  
> -	if (size <= PAGE_SIZE / 2)
> -		atomic_dec(&zram->stats.good_compress);
> -
>  	atomic64_sub(meta->table[index].size, &zram->stats.compr_size);
>  	atomic_dec(&zram->stats.pages_stored);
>  
> @@ -478,7 +471,6 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>  	}
>  
>  	if (unlikely(clen > max_zpage_size)) {
> -		atomic_inc(&zram->stats.bad_compress);
>  		clen = PAGE_SIZE;
>  		src = NULL;
>  		if (is_partial_io(bvec))
> @@ -518,9 +510,6 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>  	/* Update stats */
>  	atomic64_add(clen, &zram->stats.compr_size);
>  	atomic_inc(&zram->stats.pages_stored);
> -	if (clen <= PAGE_SIZE / 2)
> -		atomic_inc(&zram->stats.good_compress);
> -
>  out:
>  	if (locked)
>  		mutex_unlock(&meta->buffer_lock);
> diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> index e81e9cd..2f173cb 100644
> --- a/drivers/block/zram/zram_drv.h
> +++ b/drivers/block/zram/zram_drv.h
> @@ -78,8 +78,6 @@ struct zram_stats {
>  	atomic64_t notify_free;	/* no. of swap slot free notifications */
>  	atomic_t pages_zero;		/* no. of zero filled pages */
>  	atomic_t pages_stored;	/* no. of pages currently stored */
> -	atomic_t good_compress;	/* % of pages with compression ratio<=50% */
> -	atomic_t bad_compress;	/* % of pages with compression ratio>=75% */
>  };
>  
>  struct zram_meta {
> 

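For context, the removed accounting amounted to the three-way
classification below, with the middle range never reported. This is a
sketch, assuming max_zpage_size = 3 * PAGE_SIZE / 4 as in this zram
version; the enum and function names are illustrative, not from the
driver:

```c
#include <stddef.h>

#define ZRAM_PAGE_SIZE 4096UL

/* threshold above which zram stores the page uncompressed */
static const size_t max_zpage_size = ZRAM_PAGE_SIZE / 4 * 3;

enum compress_class { GOOD, UNCOUNTED, BAD };

/* Bucket a sub-request by its compressed length clen, mirroring the
 * checks deleted by this patch. */
static enum compress_class classify(size_t clen)
{
    if (clen > max_zpage_size)
        return BAD;        /* >75%: was counted in bad_compress */
    if (clen <= ZRAM_PAGE_SIZE / 2)
        return GOOD;       /* <=50%: was counted in good_compress */
    return UNCOUNTED;      /* 50%..75%: never accounted or reported */
}
```

The UNCOUNTED gap is exactly the hole the commit message describes, which
is why dropping both buckets in favor of the exact compr_size accounting
loses nothing.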


Thread overview: 22+ messages
2014-01-16 13:12 [PATCHv3 0/8] zram stats rework and code cleanup Sergey Senozhatsky
2014-01-16 13:12 ` [PATCHv3 1/8] zram: drop `init_done' struct zram member Sergey Senozhatsky
2014-01-16 13:12 ` [PATCHv3 2/8] zram: do not pass rw argument to __zram_make_request() Sergey Senozhatsky
2014-01-16 13:12 ` [PATCHv3 3/8] zram: remove good and bad compress stats Sergey Senozhatsky
2014-01-16 13:44   ` Jerome Marchand [this message]
2014-01-17  6:48   ` Minchan Kim
2014-01-16 13:12 ` [PATCHv3 4/8] zram: use atomic64_t for all zram stats Sergey Senozhatsky
2014-01-16 13:58   ` Jerome Marchand
2014-01-17  6:54   ` Minchan Kim
2014-01-16 13:12 ` [PATCHv3 5/8] zram: remove zram stats code duplication Sergey Senozhatsky
2014-01-17  6:54   ` Minchan Kim
2014-01-16 13:12 ` [PATCHv3 6/8] zram: report failed read and write stats Sergey Senozhatsky
2014-02-04 22:18   ` Andrew Morton
2014-02-05  9:52     ` Sergey Senozhatsky
2014-01-16 13:12 ` [PATCHv3 7/8] zram: drop not used table `count' member Sergey Senozhatsky
2014-01-16 13:52   ` Jerome Marchand
2014-01-16 14:17     ` Sergey Senozhatsky
2014-01-17  7:03   ` Minchan Kim
2014-01-16 13:12 ` [PATCHv3 8/8] zram: move zram size warning to documentation Sergey Senozhatsky
2014-01-17  7:08   ` Minchan Kim
2014-01-17  7:09 ` [PATCHv3 0/8] zram stats rework and code cleanup Minchan Kim
2014-01-20  4:42   ` Minchan Kim
