From: "J. R. Okajima" <hooanon05@yahoo.co.jp>
To: Phillip Lougher <phillip@lougher.demon.co.uk>
Cc: linux-fsdevel@vger.kernel.org
Subject: Re: Q. cache in squashfs?
Date: Sat, 10 Jul 2010 14:07:54 +0900
Message-ID: <7545.1278738474@jrobl>
In-Reply-To: <4C370017.4070604@lougher.demon.co.uk>
Phillip Lougher:
> You can determine which blocks are being repeatedly decompressed by
> printing out the value of cache->name in squashfs_cache_get().
>
> You should get one of "data", "fragment" and "metadata" for data
> blocks, fragment blocks and metadata respectively.
>
> This information will go a long way in showing where the problem lies.
Here is a patch to count them, and the results.
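The idea of the patch is simply to count the calls to
squashfs_cache_get() per cache type, keyed on the cache->name values
Phillip mentioned. A minimal sketch of that idea (the counter array and
its name are made up for illustration; this is not the actual patch):

/* fs/squashfs/cache.c -- illustrative counting sketch */
static atomic_t sqsh_cache_get_cnt[3];	/* "metadata", "fragment", "data" */

struct squashfs_cache_entry *squashfs_cache_get(struct super_block *sb,
	struct squashfs_cache *cache, u64 block, int length)
{
	if (!strcmp(cache->name, "metadata"))
		atomic_inc(&sqsh_cache_get_cnt[0]);
	else if (!strcmp(cache->name, "fragment"))
		atomic_inc(&sqsh_cache_get_cnt[1]);
	else				/* "data" */
		atomic_inc(&sqsh_cache_get_cnt[2]);
	/* ... original body unchanged ... */
}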
----------------------------------------------------------------------
frag(3, 100) x -no-fragments(with, without)
O: no-fragments x inner ext3
A: frag=3   x without -no-fragments
B: frag=3   x with    -no-fragments
C: frag=100 x without -no-fragments
-: frag=100 x with    -no-fragments

    cat10        cache_get         read         zlib
    (sec, cpu)   (meta,frag,data)  (meta,data)  (meta,data)
----------------------------------------------------------------------
O   .06,  35%       92,  -,  41    3,  44       2, 3557
A   .09, 113%    12359, 81,  22    4,  90       6, 6474
B   .07, 104%    12369,  -, 109    3, 100       5, 3484
C   .06, 112%    12381, 80,  35    4,  53       6, 3650
----------------------------------------------------------------------
- case O is b.img from my first mail, and case A is a.img.
- the "cat10" column is the result of the time command, as described in
  my first mail.
- all these numbers just show the trend; the small differences do not
  mean much.
- with the -no-fragments option (case B):
  + the number of zlib calls is reduced.
  + the CPU usage is not reduced much.
  + the number of cache_get calls for data increases.
  + the number of reads for data may increase too.
- even with compressed fragments, increasing
  CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE shows similar performance (case C;
  see the snippet after this list for where that value takes effect):
  + the number of zlib calls is reduced.
  + the CPU usage is not reduced much.
  + the number of cache_get calls for data may increase.
  + the number of reads for data may decrease.
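For reference, "frag=N" above means rebuilding the kernel with
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=N. As far as I read the source,
that value simply sets the number of fragment cache entries:

/* fs/squashfs/squashfs_fs.h */
#define SQUASHFS_CACHED_FRAGMENTS	CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE

/* fs/squashfs/super.c, squashfs_fill_super() */
msblk->fragment_cache = squashfs_cache_init("fragment",
	SQUASHFS_CACHED_FRAGMENTS, msblk->block_size);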
I am not sure whether the differences in cache_get/read for data
between the cases are very meaningful.
But the numbers surely show high CPU usage in squashfs, and I guess it
is caused by cache_get for metadata. The number of zlib decompressions
may not be strongly related to this CPU usage.
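To explain my guess: every squashfs_cache_get() call takes cache->lock
and linearly scans all the entries, so more than 12,000 calls for
metadata are not free even when they are cache hits (abridged from
fs/squashfs/cache.c):

	spin_lock(&cache->lock);
	while (1) {
		/* linear search over all cache entries on every call */
		for (i = 0; i < cache->entries; i++)
			if (cache->entry[i].block == block)
				break;
		...
	}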
J. R. Okajima