From: "J. R. Okajima" <hooanon05@yahoo.co.jp>
To: Phillip Lougher <phillip@lougher.demon.co.uk>
Cc: linux-fsdevel@vger.kernel.org
Subject: Re: Q. cache in squashfs?
Date: Fri, 09 Jul 2010 21:24:50 +0900 [thread overview]
Message-ID: <18172.1278678290@jrobl> (raw)
In-Reply-To: <4C36FAB1.6010506@lougher.demon.co.uk>
Phillip Lougher:
> > The -no-fragments shows better performance, but it is very small.
> > It doesn't seem that the number of fragment blocks is large on my test
> > environment.
>
> That is *very* surprising. How many fragments do you have?
Actually, -no-fragments did reduce the number of zlib_inflate() calls, as
expected, but the performance didn't improve much, particularly the CPU
usage.
So I removed the -no-fragments option again. This is what I forgot to write
in my previous mail; I hope that solves one of your big mysteries.
$ sq4.0.wcvs/squashfs/squashfs-tools/mksquashfs /bin /tmp/a.img -no-progress -noappend -keep-as-directory -comp gzip
Parallel mksquashfs: Using 2 processors
Creating 4.0 filesystem on /tmp/a.img, block size 131072.
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments
duplicates are removed
Filesystem size 2236.52 Kbytes (2.18 Mbytes)
47.19% of uncompressed filesystem size (4739.02 Kbytes)
Inode table size 1210 bytes (1.18 Kbytes)
36.87% of uncompressed inode table size (3282 bytes)
Directory table size 851 bytes (0.83 Kbytes)
63.70% of uncompressed directory table size (1336 bytes)
Number of duplicate files found 1
Number of inodes 98
Number of files 84
Number of fragments 28
Number of symbolic links 12
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 2
Number of ids (unique uids + gids) 2
Number of uids 2
	root (0)
	jro (1000)
Number of gids 2
	root (0)
	jro (1000)
> It is fragments and metadata blocks which show the potential for
> repeated re-reading on random access patterns.
OK, then I'd focus on metadata.
Increasing SQUASHFS_CACHED_BLKS to (8<<10) didn't help the performance in
my case.
Here is my thought.
squashfs_read_metadata() is called a great many times, from (every?) lookup
and file read. In squashfs_cache_get(), the search loop runs on every call
with a spinlock held. That is why I think the search itself, not the "100",
is the CPU eater.
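To make the point concrete, here is a minimal user-space sketch of the
access pattern I mean. This is not the kernel code: the struct and the
names are made up for illustration, and a pthread mutex stands in for the
spinlock. Every lookup walks the whole entry array while holding the lock,
so enlarging SQUASHFS_CACHED_BLKS, as I tried above, mostly adds scanning
work per metadata read.

/* cache-scan.c: illustrative only, not fs/squashfs code */
#include <pthread.h>
#include <stdio.h>

#define NR_ENTRIES	(8 << 10)	/* the value I tried for SQUASHFS_CACHED_BLKS */

struct cache_entry {
	long long block;		/* block position this slot caches */
	int refcount;
};

static struct cache_entry entries[NR_ENTRIES];
static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

/* Linear scan under the lock: O(NR_ENTRIES) work per metadata read. */
static struct cache_entry *cache_get(long long block)
{
	struct cache_entry *found = NULL;
	int i;

	pthread_mutex_lock(&cache_lock);
	for (i = 0; i < NR_ENTRIES; i++) {
		if (entries[i].block == block) {
			entries[i].refcount++;
			found = &entries[i];
			break;
		}
	}
	pthread_mutex_unlock(&cache_lock);
	return found;			/* NULL would mean "miss, read and inflate" */
}

int main(void)
{
	long hits = 0;
	int i;

	/* Pretend the cache is full, with the wanted block in the last slot. */
	for (i = 0; i < NR_ENTRIES; i++)
		entries[i].block = i;

	/* Each lookup scans up to NR_ENTRIES slots with the lock held. */
	for (i = 0; i < 100000; i++)
		hits += cache_get(NR_ENTRIES - 1) != NULL;

	printf("%ld lookups, each scanning up to %d slots under the lock\n",
	       hits, NR_ENTRIES);
	return 0;
}

If this guess is right, indexing the entries or shrinking the scan would
help the CPU usage more than a larger array.

J. R. Okajima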
Thread overview: 16+ messages
2010-06-24 2:37 Q. cache in squashfs? J. R. Okajima
2010-07-08 3:57 ` Phillip Lougher
2010-07-08 6:08 ` J. R. Okajima
2010-07-09 7:53 ` J. R. Okajima
2010-07-09 10:32 ` Phillip Lougher
2010-07-09 10:55 ` Phillip Lougher
2010-07-10 5:07 ` J. R. Okajima
2010-07-10 5:08 ` J. R. Okajima
2010-07-11 2:48 ` Phillip Lougher
2010-07-11 5:55 ` J. R. Okajima
2010-07-11 9:38 ` [RFC 0/2] squashfs parallel decompression J. R. Okajima
2011-02-22 19:41 ` Phillip Susi
2011-02-23 3:23 ` Phillip Lougher
2010-07-11 9:38 ` [RFC 1/2] squashfs parallel decompression, early wait_on_buffer J. R. Okajima
2010-07-11 9:38 ` [RFC 2/2] squashfs parallel decompression, z_stream per cpu J. R. Okajima
2010-07-09 12:24 ` J. R. Okajima [this message]