From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Nebojsa Trpkovic <trx.lists@gmail.com>, dan.magenheimer@oracle.com
Cc: linux-kernel@vger.kernel.org
Subject: Re: cleancache can lead to serious performance degradation
Date: Thu, 25 Aug 2011 00:12:12 -0400
Message-ID: <20110825041212.GA5014@dumpdata.com>
In-Reply-To: <4E4C395E.20000@gmail.com>

On Wed, Aug 17, 2011 at 11:57:50PM +0200, Nebojsa Trpkovic wrote:
> Hello.

I've put Dan on the CC since he is the author of it.

> 
> I've tried using cleancache on my file server and came to the conclusion
> that my Core2 Duo (4MB L2 cache, 2.33GHz) CPU cannot cope with the
> amount of data it needs to compress during heavy sequential IO when
> cleancache/zcache are enabled.
> 
> For example, with cleancache enabled I get 60-70MB/s from my RAID
> arrays and both CPU cores are saturated with system (kernel) time.
> Without cleancache, each RAID gives me more than 300MB/s of useful
> read throughput.
> 
> In the scenario of sequential reading, this drop of throughput seems
> completely normal:
> - a lot of data gets pulled in from disks
> - data is processed in some non-CPU-intensive way
> - page cache fills up quickly and cleancache starts compressing
> pages (a lot of "puts" in /sys/kernel/mm/cleancache/)
> - these compressed cleancache pages never get read because there are
> a whole lot of new pages coming in every second replacing the old ones
> (practically no "succ_gets" in /sys/kernel/mm/cleancache/; see the
> counter check sketched after this list)
> - CPU saturates doing useless compression, and even worse:
> - new disk read operations end up waiting for the CPU to finish
> compressing and free up some space in memory
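
A quick way to see whether those compressed pages are ever read back is to
compare the two counters you mention. Something like the minimal sketch
below should do it (assuming the /sys/kernel/mm/cleancache/puts and
succ_gets files from your report; adjust the paths if your kernel exposes
the counters elsewhere):

#include <stdio.h>

/* Read a single decimal counter from a sysfs file; -1 on failure. */
static long read_counter(const char *path)
{
        FILE *f = fopen(path, "r");
        long val = -1;

        if (f) {
                if (fscanf(f, "%ld", &val) != 1)
                        val = -1;
                fclose(f);
        }
        return val;
}

int main(void)
{
        long puts = read_counter("/sys/kernel/mm/cleancache/puts");
        long hits = read_counter("/sys/kernel/mm/cleancache/succ_gets");

        if (puts <= 0 || hits < 0) {
                fprintf(stderr, "cleancache counters not available\n");
                return 1;
        }

        /* A ratio near zero means the compression work is wasted. */
        printf("puts=%ld succ_gets=%ld hit ratio=%.2f%%\n",
               puts, hits, 100.0 * (double)hits / (double)puts);
        return 0;
}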
> 
> 
> So, using cleancache in scenarios with a lot of sequential (non-random)
> data throughput can lead to severe performance degradation.
> 
> 
> I guess a possible workaround could be to implement some kind of
> compression-throttling valve for cleancache/zcache:
> 
> - if there's available CPU time (idle cycles or so), then compress
> (maybe even with low CPU scheduler priority);
> 
> - if there's no available CPU time, just store (or throw away) the page
> to avoid IO waits; a rough sketch of this idea follows below;
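
Just to illustrate the shape such a valve could take (this is not existing
zcache code; cpu_is_busy() and compress_page() below are placeholders for
the scheduler's idle-time accounting and zcache's actual compressor):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Placeholder load check -- a real valve would look at per-CPU idle time. */
static bool cpu_is_busy(void)
{
        return false;   /* pretend the CPU is idle in this sketch */
}

/* Placeholder compressor -- pretend every page compresses 2:1. */
static size_t compress_page(const void *page, void *out)
{
        memcpy(out, page, PAGE_SIZE / 2);
        return PAGE_SIZE / 2;
}

/*
 * Called when a clean page is evicted from the page cache.  Returns true
 * if the page was compressed and kept, false if it was simply dropped so
 * the read path never has to wait for the CPU.
 */
static bool cleancache_put_throttled(const void *page, void *pool)
{
        if (cpu_is_busy())
                return false;   /* no spare cycles: drop it, don't stall I/O */

        /* Only keep pages that actually compress to less than a page. */
        return compress_page(page, pool) < PAGE_SIZE;
}

int main(void)
{
        static unsigned char page[PAGE_SIZE], pool[PAGE_SIZE];

        if (cleancache_put_throttled(page, pool))
                puts("page kept (compressed)");
        else
                puts("page dropped (CPU busy or incompressible)");
        return 0;
}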
> 
> 
> At least, there should be a warning in the kernel help text about this
> kind of situation.
> 
> 
> Regards,
> Nebojsa Trpkovic
