linux-kernel.vger.kernel.org archive mirror
From: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
To: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Subject: Re: zram: per-cpu compression streams
Date: Wed, 27 Apr 2016 16:43:35 +0900
Message-ID: <20160427074335.GC7601@swordfish>
In-Reply-To: <20160427072954.GA29816@bbox>

Hello,

On (04/27/16 16:29), Minchan Kim wrote:
[..]
> > the test:
> > 
> > -- 4 GB x86_64 box
> > -- zram 3GB, lzo
> > -- mem-hogger pre-faults 3GB of pages before the fio test
> > -- fio test has been modified to have 11% compression ratio (to increase the
> >                                                   chances of re-compressions)
> 
> Could you test a concurrent mem-hogger running alongside fio, rather than a
> pre-fault before the fio test, in the next submit?

Unfortunately, this test will not prove anything. I did perform it, and it's
impossible to guarantee even remotely stable results: the mem-hogger process
can spend anywhere from 41 to 81 seconds on the pre-fault alone, so I'm quite
sceptical about the actual value of such a test.
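
For reference, the setup quoted above boils down to roughly the following
(a sketch, not the exact script/job file that was run; only the lzo/3G zram
configuration and buffer_compress_percentage=11 come from this thread, the
remaining device names and fio options are illustrative):

	# assumed reconstruction of the test setup, not the exact script
	modprobe zram num_devices=1
	echo lzo > /sys/block/zram0/comp_algorithm
	echo 3G > /sys/block/zram0/disksize
	fio --name=zram-write --filename=/dev/zram0 --rw=randwrite --bs=4k \
	    --size=3g --direct=1 --buffer_compress_percentage=11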

> > considering buffer_compress_percentage=11, the box was under somewhat
> > heavy pressure.
> > 
> > now, the results
> 
> Yep, even the re-compression case is faster than the old one, but I want to
> see a heavier memory pressure case and the ratio I mentioned above.

I did quite heavy testing over the last 7 days, with numerous OOM kills
and OOM panics.
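
To make the "re-compression" case above concrete, the per-cpu stream write
path is roughly the sketch below (helper names such as compress_page() and
alloc_handle() are made-up stand-ins; this is not the actual
drivers/block/zram/zram_drv.c code):

#include <linux/percpu.h>
#include <linux/gfp.h>
#include <linux/errno.h>

struct zstrm {
	void *buffer;		/* per-CPU scratch buffer for compressed data */
	void *workmem;		/* backend (e.g. lzo) working memory */
};

static DEFINE_PER_CPU(struct zstrm, zstrm);

/* made-up stand-ins for the real backend/allocator calls */
static unsigned int compress_page(struct zstrm *strm, void *page);
static unsigned long alloc_handle(unsigned int len, gfp_t gfp);
static void copy_to_handle(unsigned long handle, void *src, unsigned int len);

static int write_compressed_page(void *page, unsigned long *out)
{
	struct zstrm *strm;
	unsigned long handle = 0;
	unsigned int clen;

retry:
	strm = &get_cpu_var(zstrm);		/* disables preemption */
	clen = compress_page(strm, page);

	/* fast path: atomic allocation while we still own the stream */
	if (!handle)
		handle = alloc_handle(clen, GFP_NOWAIT | __GFP_NOWARN);
	if (!handle) {
		put_cpu_var(zstrm);		/* allow preemption before sleeping */
		handle = alloc_handle(clen, GFP_NOIO);
		if (!handle)
			return -ENOMEM;
		/*
		 * We slept and may have migrated to another CPU, so that
		 * stream's buffer is stale: compress the page again (same
		 * data and algorithm, so clen still fits the handle).
		 */
		goto retry;
	}

	copy_to_handle(handle, strm->buffer, clen);
	put_cpu_var(zstrm);
	*out = handle;
	return 0;
}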

	-ss


Thread overview: 22+ messages
2016-03-23  8:12 zram: per-cpu compression streams Sergey Senozhatsky
2016-03-24 23:41 ` Minchan Kim
2016-03-25  1:47   ` Sergey Senozhatsky
2016-03-28  3:21     ` Minchan Kim
2016-03-30  8:34       ` Sergey Senozhatsky
2016-03-30 22:12         ` Minchan Kim
2016-03-31  1:26           ` Sergey Senozhatsky
2016-03-31  5:53             ` Minchan Kim
2016-03-31  6:34               ` Sergey Senozhatsky
2016-04-01 15:38                 ` Sergey Senozhatsky
2016-04-04  0:27                   ` Minchan Kim
2016-04-04  1:17                     ` Sergey Senozhatsky
2016-04-18  7:57                       ` Sergey Senozhatsky
2016-04-19  8:00                         ` Minchan Kim
2016-04-19  8:08                           ` Sergey Senozhatsky
2016-04-26 11:23                           ` Sergey Senozhatsky
2016-04-27  7:29                             ` Minchan Kim
2016-04-27  7:43                               ` Sergey Senozhatsky [this message]
2016-04-27  7:55                                 ` Minchan Kim
2016-04-27  8:10                                   ` Sergey Senozhatsky
2016-04-27  8:54                               ` Sergey Senozhatsky
2016-04-27  9:01                                 ` Sergey Senozhatsky
