From: Minchan Kim <minchan@kernel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org
Subject: Re: zram: per-cpu compression streams
Date: Thu, 31 Mar 2016 07:12:33 +0900
Message-ID: <20160330221233.GA6736@bbox>
In-Reply-To: <20160330083419.GA2769@swordfish>
On Wed, Mar 30, 2016 at 05:34:19PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> sorry for the late reply.
>
> On (03/28/16 12:21), Minchan Kim wrote:
> [..]
> > group_reporting
> > buffer_compress_percentage=50
> > filename=/dev/zram0
> > loops=10
>
> I used a slightly different script: no `buffer_compress_percentage' option,
> because it provides "a mix of random data and zeroes"
Normally, zram's compression ratio is 2 or 3, so I used that option.
Hmm, isn't that closer to a real-world use case?
If we don't use buffer_compress_percentage, what's the content of the buffer?
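
For comparison, the two buffer-content modes under discussion look
roughly like this in a fio job file (a sketch only; the section names
and the 50% figure are illustrative, not taken from either of our
scripts):

    ; compressible writes: fio mixes zeroes into random data so the
    ; buffers compress to roughly the requested level
    [compressible]
    filename=/dev/zram0
    rw=write
    bs=4k
    buffer_compress_percentage=50

    ; default behaviour: buffers are filled with random data, so
    ; writes are effectively incompressible; scramble_buffers=0
    ; additionally stops fio from perturbing buffer contents
    ; between writes
    [incompressible]
    filename=/dev/zram0
    rw=write
    bs=4k
    scramble_buffers=0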
>
> buffer_compress_percentage=int
> If this is set, then fio will attempt to provide IO buffer content
> (on WRITEs) that compress to the specified level. Fio does this by
> providing a mix of random data and zeroes
>
> and I also used scramble_buffers=0. but default scramble_buffers is
> true, so
>
> scramble_buffers=bool
> If refill_buffers is too costly and the target is using data
> deduplication, then setting this option will slightly modify the IO
> buffer contents to defeat normal de-dupe attempts. This is not
> enough to defeat more clever block compression attempts, but it will
> stop naive dedupe of blocks. Default: true.
>
> hm, but I guess it's not enough; fio will probably generate different
> data (well, unless we ask it to zero-fill the buffers) for different
> tests, causing different zram->zsmalloc behaviour. I need to check
> that.
>
>
> > Hmm, could you retest to show how big the benefit is?
>
> sure. the results are:
>
> - seq-read
> - rand-read
> - seq-write
> - rand-write
> - mixed-seq (READ + WRITE)
> - mixed-rand (READ + WRITE)
>
> TEST 4 streams 8 streams per-cpu
>
> #jobs1
> READ: 2665.4MB/s 2515.2MB/s 2632.4MB/s
> READ: 2258.2MB/s 2055.2MB/s 2166.2MB/s
> WRITE: 933180KB/s 894260KB/s 898234KB/s
> WRITE: 765576KB/s 728154KB/s 746396KB/s
> READ: 563169KB/s 541004KB/s 551541KB/s
> WRITE: 562660KB/s 540515KB/s 551043KB/s
> READ: 493656KB/s 477990KB/s 488041KB/s
> WRITE: 493210KB/s 477558KB/s 487600KB/s
> #jobs2
> READ: 5116.7MB/s 4607.1MB/s 4401.5MB/s
> READ: 4401.5MB/s 3993.6MB/s 3831.6MB/s
> WRITE: 1539.9MB/s 1425.5MB/s 1600.0MB/s
> WRITE: 1311.1MB/s 1228.7MB/s 1380.6MB/s
> READ: 1001.8MB/s 960799KB/s 989.63MB/s
> WRITE: 998.31MB/s 957540KB/s 986.26MB/s
> READ: 921439KB/s 860387KB/s 899720KB/s
> WRITE: 918314KB/s 857469KB/s 896668KB/s
> #jobs3
> READ: 6670.9MB/s 6469.9MB/s 6548.8MB/s
> READ: 5743.4MB/s 5507.8MB/s 5608.4MB/s
> WRITE: 1923.8MB/s 1885.9MB/s 2191.9MB/s
> WRITE: 1622.4MB/s 1605.4MB/s 1842.2MB/s
> READ: 1277.3MB/s 1295.8MB/s 1395.2MB/s
> WRITE: 1276.9MB/s 1295.4MB/s 1394.7MB/s
> READ: 1152.6MB/s 1137.1MB/s 1216.6MB/s
> WRITE: 1152.2MB/s 1137.6MB/s 1216.2MB/s
> #jobs4
> READ: 8720.4MB/s 7301.7MB/s 7896.2MB/s
> READ: 7510.3MB/s 6690.1MB/s 6456.2MB/s
> WRITE: 2211.6MB/s 1930.8MB/s 2713.9MB/s
> WRITE: 2002.2MB/s 1629.8MB/s 2227.7MB/s
In your case it's a 40% win. That's huge, nice!
I tested following your setup (i.e., no buffer_compress_percentage,
scramble_buffers=0) but still see only a 10% improvement on my machine.
Hmm...
How about testing my fio job.file on your machine?
Is it still a 40% win?
Also, I want to test again with exactly the same configuration as
yours. Could you tell me your zram environment (i.e., disksize,
compression algorithm) and share your fio job.file?
Thanks.
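
For reference, reproducing such a setup takes only a few sysfs writes
plus the fio job file. A minimal sketch follows; the algorithm, stream
count, disksize, and job parameters are illustrative examples, not the
actual configuration being asked about:

    # comp_algorithm and max_comp_streams must be set before disksize,
    # since writing disksize initializes the device
    modprobe zram num_devices=1
    echo 8 > /sys/block/zram0/max_comp_streams
    echo lzo > /sys/block/zram0/comp_algorithm
    echo 8G > /sys/block/zram0/disksize

    # a minimal job file in the spirit of the tests listed above
    cat > job.file <<'EOF'
    [global]
    filename=/dev/zram0
    bs=4k
    loops=10
    group_reporting
    scramble_buffers=0

    [seq-write]
    rw=write
    EOF

    fio job.file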