From: Seth Jennings <sjenning@linux.vnet.ibm.com>
To: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Nitin Gupta <ngupta@vflare.org>, Minchan Kim <minchan@kernel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Dan Magenheimer <dan.magenheimer@oracle.com>,
Robert Jennings <rcj@linux.vnet.ibm.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
devel@driverdev.osuosl.org
Subject: Re: [PATCH 0/4] promote zcache from staging
Date: Thu, 09 Aug 2012 13:50:39 -0500
Message-ID: <5024067F.3010602@linux.vnet.ibm.com>
In-Reply-To: <5021795A.5000509@linux.vnet.ibm.com>
On 08/07/2012 03:23 PM, Seth Jennings wrote:
> On 07/27/2012 01:18 PM, Seth Jennings wrote:
>> Some benchmarking numbers demonstrating the I/O saving that can be had
>> with zcache:
>>
>> https://lkml.org/lkml/2012/3/22/383
>
> There was concern that kernel changes external to zcache since v3.3 may
> have mitigated the benefit of zcache. So I re-ran my kernel building
> benchmark and confirmed that zcache is still providing I/O and runtime
> savings.
There was a request to test under even greater memory pressure, to find
the point, if any, at which zcache runs into real problems. So I
continued the runs out to 32 threads:
N=4..20 is the same data as before, except for the pswpin values. I
found a mistake in the way I computed pswpin that changed those values
slightly; it didn't change the overall trend, however. I also inverted
the %change fields, since they are percent changes versus the normal
case.
I/O (in pages)
                  normal                          zcache
 N   pswpin  pswpout  majflt  I/O sum   pswpin  pswpout  majflt  I/O sum  %I/O
 4        0        2    2116     2118        0        0    2125     2125    0%
 8        0      575    2244     2819        0        4    2219     2223  -21%
12     1979     4038    3226     9243     1269     2519    3871     7659  -17%
16    21568    47278    9426    78272     7770    15598    9372    32740  -58%
20    50307   127797   15039   193143    20224    40634   17975    78833  -59%
24   186278   364809   45052   596139    47406    90489   30877   168772  -72%
28   274734   777815   53112  1105661   134981   307346   63480   505807  -54%
32   988530  2002087  168662  3159279   324801   723385  140288  1188474  -62%
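For reference, a minimal sketch of the arithmetic behind the I/O sum and
%I/O columns (pswpin/pswpout come from /proc/vmstat deltas and majflt
from the fault counters; the helper below only reproduces the math,
using the N=32 row as input):

```python
# Sketch of the "I/O sum" and "%change" arithmetic used in the tables.
# Values are the N=32 row: pswpin + pswpout + majflt, in pages.

def pct_change(normal, zcache):
    # percent change of the zcache case versus the normal case
    # (negative means zcache did less I/O / took less time)
    return round(100.0 * (zcache - normal) / normal)

normal_io = 988530 + 2002087 + 168662   # pswpin + pswpout + majflt
zcache_io = 324801 + 723385 + 140288

print(normal_io)                         # 3159279 pages
print(zcache_io)                         # 1188474 pages
print(pct_change(normal_io, zcache_io))  # -62
```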
Runtime (in seconds)
 N   normal  zcache  %change
 4      126     127       1%
 8      124     124       0%
12      131     133       2%
16      189     156     -17%
20      261     235     -10%
24      513     288     -44%
28      556     434     -22%
32     1463     745     -49%
%CPU utilization (out of 400% on 4 cpus)
 N   normal  zcache  %change
 4      254     253       0%
 8      261     263       1%
12      250     248      -1%
16      173     211      22%
20      124     140      13%
24       64     114      78%
28       59      76      29%
32       23      45      96%
The ~60% I/O savings holds even out to 32 threads, at which point the
non-zcache case is doing 12GB of I/O and taking 12x longer to complete.
Additionally, the runtime savings increase significantly beyond 20
threads, even though the absolute runtimes suffer under the extreme
memory pressure.
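For the record, the conversion behind the 12GB and 12x figures above,
assuming 4KiB pages and reading the 12x as the N=32 normal runtime
versus the unloaded N=4 baseline (both assumptions on my part):

```python
PAGE_SIZE = 4096  # 4 KiB pages assumed

# N=32 normal-case I/O sum from the table, in pages
normal_pages = 3159279
gib = normal_pages * PAGE_SIZE / 2**30
print(round(gib, 1))      # 12.1 -> "12GB of I/O"

# N=32 normal runtime vs the N=4 normal baseline
print(round(1463 / 126))  # 12 -> "12x longer to complete"
```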
Seth
--