From: Minchan Kim <minchan@kernel.org>
To: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Subject: Re: zram: per-cpu compression streams
Date: Mon, 4 Apr 2016 09:27:57 +0900	[thread overview]
Message-ID: <20160404002757.GC5833@bbox> (raw)
In-Reply-To: <20160401153829.GA1212@swordfish>

Hello Sergey,

On Sat, Apr 02, 2016 at 12:38:29AM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> 
> On (03/31/16 15:34), Sergey Senozhatsky wrote:
> > > I tested with your suggested parameters.
> > > On my side, the win is better compared to my previous test, but it
> > > seems your test is too fast. IOW, the filesize is small and loops is
> > > just 1. Please test with filesize=500m and loops=10 or 20.
> 
> fio
> - loops=10
> - buffer_pattern=0xbadc0ffee
> 
> zram 6G. no intel p-state, deadline IO scheduler, no lockdep (no lock debugging).
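
For reference, a minimal fio invocation matching the quoted parameters
might look like the sketch below. Only loops, buffer_pattern, and the
500m size come from the thread; the job name, target device, block
size, and numjobs are assumptions, and I am assuming the thread's
"filesize" maps to fio's --size option:

  fio --name=zram-test --filename=/dev/zram0 --direct=1 \
      --rw=write --bs=4k --numjobs=4 \
      --size=500m --loops=10 --buffer_pattern=0xbadc0ffee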

We are using rw_page, so the I/O scheduler is not involved.
Anyway, I configured my machine as you said but still see only a 10~20%
improvement. :(
Hmm, could you post your .config?
I want to investigate why such a difference happens between our machines.
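
To illustrate why the scheduler is out of the picture: zram registers a
->rw_page() handler in its block_device_operations, so page-sized I/O
is handed straight to the driver without building a bio or entering a
request queue. Roughly (a sketch, not the exact zram source):

	static const struct block_device_operations zram_devops = {
		.owner   = THIS_MODULE,
		/* the MM/swap layer calls this directly; no bio is
		 * queued, so the elevator (deadline, cfq, ...) never
		 * sees the I/O */
		.rw_page = zram_rw_page,
	};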

The reason I want to see such a *big improvement* on my machine is that,
as you know, with per-cpu streams zram's write path loses its blockable
section, which would make it hard to implement upcoming features.
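
A hypothetical sketch of the difference (identifiers are illustrative,
not the exact code): with the multi-stream scheme a writer may sleep
waiting for a free stream, whereas a per-cpu stream is taken with
preemption disabled, so nothing between get and put may block:

	/* multi-stream: finding a stream can wait_event() for an idle
	 * one, so the write path here is a blockable section */
	zstrm = zcomp_strm_find(comp);
	zcomp_compress(comp, zstrm, src, &comp_len);
	zcomp_strm_release(comp, zstrm);

	/* per-cpu: get_cpu_ptr() disables preemption, so no sleeping
	 * is allowed until put_cpu_ptr() */
	zstrm = get_cpu_ptr(comp->stream);
	zcomp_compress(comp, zstrm, src, &comp_len);
	put_cpu_ptr(comp->stream);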

We should also test it in a very-low-memory situation, so that every
write path has to retry (i.e., double compression). With that, I want
to see how much performance can drop.
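
In other words (a hypothetical sketch of the retry under discussion;
names and GFP flags are illustrative): if the atomic allocation fails
while the per-cpu stream is held, the stream has to be dropped, the
allocation retried with a sleeping GFP mask, and the page compressed
again:

	handle = 0;
compress_again:
	zstrm = get_cpu_ptr(comp->stream);	/* non-blockable from here */
	zcomp_compress(comp, zstrm, src, &comp_len);
	if (!handle)
		handle = zs_malloc(pool, comp_len,
				   __GFP_NOWARN);	/* atomic, may fail */
	put_cpu_ptr(comp->stream);		/* blockable again */
	if (!handle) {
		/* slow path: sleeping allocation, then compress the
		 * same page a second time - the "double compression" */
		handle = zs_malloc(pool, comp_len, GFP_NOIO);
		if (handle)
			goto compress_again;
		return -ENOMEM;
	}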

If both tests are fine (normal: huge win, low memory: small regression),
we can go with the per-cpu approach at the cost of giving up the
blockable section. :)

Thanks.


> 
> 
> test            8 streams        per-cpu
> 
> #jobs1                         	                
> READ:           4118.2MB/s	 4105.3MB/s
> READ:           3487.7MB/s	 3624.9MB/s
> WRITE:          2197.8MB/s	 2305.1MB/s
> WRITE:          1776.2MB/s	 1887.5MB/s
> READ:           736589KB/s	 745648KB/s
> WRITE:          736353KB/s	 745409KB/s
> READ:           679279KB/s	 686559KB/s
> WRITE:          679093KB/s	 686371KB/s
> #jobs2                         	                
> READ:           6924.6MB/s	 7160.2MB/s
> READ:           6213.2MB/s	 6247.1MB/s
> WRITE:          2510.3MB/s	 3680.1MB/s
> WRITE:          2286.2MB/s	 3153.9MB/s
> READ:           1163.1MB/s	 1333.7MB/s
> WRITE:          1163.4MB/s	 1332.2MB/s
> READ:           1122.9MB/s	 1240.3MB/s
> WRITE:          1121.9MB/s	 1239.2MB/s
> #jobs3                         	                
> READ:           10304MB/s	 10424MB/s
> READ:           9014.5MB/s	 9014.5MB/s
> WRITE:          3883.9MB/s	 5373.8MB/s
> WRITE:          3549.1MB/s	 4576.4MB/s
> READ:           1704.4MB/s	 1916.8MB/s
> WRITE:          1704.9MB/s	 1915.9MB/s
> READ:           1603.5MB/s	 1806.8MB/s
> WRITE:          1598.8MB/s	 1800.8MB/s
> #jobs4                         	                
> READ:           13509MB/s	 12792MB/s
> READ:           10899MB/s	 11434MB/s
> WRITE:          4027.2MB/s	 6272.8MB/s
> WRITE:          3902.1MB/s	 5389.2MB/s
> READ:           2090.9MB/s	 2344.4MB/s
> WRITE:          2085.2MB/s	 2337.1MB/s
> READ:           1968.1MB/s	 2185.9MB/s
> WRITE:          1969.5MB/s	 2186.4MB/s
> #jobs5                         	                
> READ:           12634MB/s	 11607MB/s
> READ:           9932.7MB/s	 9980.6MB/s
> WRITE:          4275.8MB/s	 5844.3MB/s
> WRITE:          4210.1MB/s	 5262.3MB/s
> READ:           1995.6MB/s	 2211.4MB/s
> WRITE:          1988.4MB/s	 2203.4MB/s
> READ:           1930.1MB/s	 2191.8MB/s
> WRITE:          1929.8MB/s	 2190.3MB/s
> #jobs6                         	                
> READ:           12270MB/s	 13012MB/s
> READ:           11221MB/s	 10815MB/s
> WRITE:          4643.4MB/s	 6090.9MB/s
> WRITE:          4373.6MB/s	 5772.8MB/s
> READ:           2232.6MB/s	 2358.4MB/s
> WRITE:          2233.4MB/s	 2359.2MB/s
> READ:           2082.6MB/s	 2285.8MB/s
> WRITE:          2075.9MB/s	 2278.1MB/s
> #jobs7                         	                
> READ:           13617MB/s	 14172MB/s
> READ:           12290MB/s	 11734MB/s
> WRITE:          5077.3MB/s	 6315.7MB/s
> WRITE:          4719.4MB/s	 5825.1MB/s
> READ:           2379.8MB/s	 2523.7MB/s
> WRITE:          2373.7MB/s	 2516.7MB/s
> READ:           2287.9MB/s	 2362.4MB/s
> WRITE:          2283.9MB/s	 2358.2MB/s
> #jobs8                         	                
> READ:           15130MB/s	 15533MB/s
> READ:           12952MB/s	 13077MB/s
> WRITE:          5586.6MB/s	 7108.2MB/s
> WRITE:          5233.5MB/s	 6591.3MB/s
> READ:           2541.2MB/s	 2709.2MB/s
> WRITE:          2544.6MB/s	 2713.2MB/s
> READ:           2450.6MB/s	 2590.7MB/s
> WRITE:          2449.4MB/s	 2589.3MB/s
> #jobs9                         	                
> READ:           13480MB/s	 13909MB/s
> READ:           12389MB/s	 12000MB/s
> WRITE:          5266.8MB/s	 6594.9MB/s
> WRITE:          4971.6MB/s	 6442.2MB/s
> READ:           2464.9MB/s	 2470.9MB/s
> WRITE:          2482.7MB/s	 2488.8MB/s
> READ:           2171.9MB/s	 2402.2MB/s
> WRITE:          2174.9MB/s	 2405.5MB/s
> #jobs10                        	                
> READ:           14647MB/s	 14667MB/s
> READ:           11765MB/s	 12032MB/s
> WRITE:          5248.7MB/s	 6740.4MB/s
> WRITE:          4779.8MB/s	 5822.8MB/s
> READ:           2448.8MB/s	 2585.3MB/s
> WRITE:          2449.4MB/s	 2585.9MB/s
> READ:           2290.5MB/s	 2409.1MB/s
> WRITE:          2290.2MB/s	 2409.7MB/s
> 
> 	-ss
