From: Minchan Kim <minchan@kernel.org>
To: Phillip Lougher <phillip@squashfs.org.uk>
Cc: linux-kernel@vger.kernel.org, ch0.han@lge.com, gunho.lee@lge.com,
	Minchan Kim <minchan@kernel.org>
Subject: [RFC 0/5] squashfs enhance
Date: Mon, 16 Sep 2013 16:08:34 +0900
Message-ID: <1379315319-7752-1-git-send-email-minchan@kernel.org>

Our product has used squashfs for its rootfs and it saves a few bucks
per device. Super thanks, Squashfs! It has been a perfect fit for us.
But unfortunately, our devices have started to become more complex, so
we sometimes need better throughput for sequential I/O, and the current
squashfs couldn't meet our use case.

When I dove into the code, I found some problems.

1) Too many memory copies
2) Only a single decompression stream buffer, so concurrent reads get stuck waiting on it
3) No readpages support

This patchset tries to solve the above problems.

The first two patches are just cleanups, so they shouldn't change any
behavior, and the functions they factor out are used by the later patches.
If they do change some behavior, it's not what I intended. :(

The 3rd patch removes cache usage for (non-fragmented, non-tail-end)
data pages so that we can avoid a memory copy.
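
To illustrate the idea (this is only a rough sketch, not the actual
patch; squashfs_decompress_into() is a hypothetical helper standing in
for the reworked squashfs_read_data() path, and for simplicity it
assumes a block covers a single page), the readpage path for a regular
data block could look roughly like this:

#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>

/*
 * Hypothetical helper standing in for the reworked read path: read the
 * compressed block at 'block' and decompress it straight into 'dst'.
 */
static int squashfs_decompress_into(struct super_block *sb, u64 block,
				    int bsize, void *dst, int dst_len);

/*
 * Sketch: decompress a data block directly into the page cache page
 * instead of decompressing into a squashfs cache entry and then
 * copying the result into the page.
 */
static int squashfs_readpage_nocache(struct page *page, u64 block, int bsize)
{
	void *dst = kmap(page);
	int err;

	err = squashfs_decompress_into(page->mapping->host->i_sb, block,
				       bsize, dst, PAGE_CACHE_SIZE);
	kunmap(page);

	if (!err)
		SetPageUptodate(page);
	else
		SetPageError(page);
	unlock_page(page);
	return err;
}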

The 4th patch supports multiple decompression stream buffers so that
concurrent reads can be handled at the same time. In my experiments,
it roughly halves the elapsed time.
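
As a rough illustration of the approach (the struct and function names
below are hypothetical, not the ones used in the patch), the single
mutex-protected stream can be replaced by a small pool that concurrent
readers borrow from:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct squashfs_stream {
	struct list_head	list;
	void			*strm;	/* decompressor-private state */
};

struct squashfs_stream_pool {
	struct list_head	idle;	/* streams not currently in use */
	spinlock_t		lock;
	wait_queue_head_t	wait;	/* readers waiting for a free stream */
};

/* Borrow an idle stream, sleeping until one is returned to the pool. */
static struct squashfs_stream *get_stream(struct squashfs_stream_pool *pool)
{
	struct squashfs_stream *stream;

	spin_lock(&pool->lock);
	while (list_empty(&pool->idle)) {
		spin_unlock(&pool->lock);
		wait_event(pool->wait, !list_empty(&pool->idle));
		spin_lock(&pool->lock);
	}
	stream = list_first_entry(&pool->idle, struct squashfs_stream, list);
	list_del(&stream->list);
	spin_unlock(&pool->lock);
	return stream;
}

/* Return a stream to the pool and wake up a waiting reader. */
static void put_stream(struct squashfs_stream_pool *pool,
		       struct squashfs_stream *stream)
{
	spin_lock(&pool->lock);
	list_add(&stream->list, &pool->idle);
	spin_unlock(&pool->lock);
	wake_up(&pool->wait);
}

With a pool like this, each reader decompresses with its own stream
instead of serializing on a single one.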

The 5th patch tries to implement an asynchronous readpages function;
I found it improves performance by about 35%, with a lot of I/O merging.
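
For reference, a minimal ->readpages() skeleton follows the usual
pattern below (squashfs_readpage_async() is a hypothetical helper; the
point is that the VFS hands over a whole batch of readahead pages, so
complete compressed blocks can be read at once and the block layer can
merge the resulting requests):

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Hypothetical helper: queue 'page' so that the whole compressed block
 * covering it is read and decompressed asynchronously.
 */
static void squashfs_readpage_async(struct page *page);

static int squashfs_readpages(struct file *file, struct address_space *mapping,
			      struct list_head *pages, unsigned nr_pages)
{
	unsigned i;

	for (i = 0; i < nr_pages; i++) {
		struct page *page = list_entry(pages->prev, struct page, lru);

		list_del(&page->lru);
		if (add_to_page_cache_lru(page, mapping, page->index,
					  GFP_KERNEL)) {
			/* Already in the page cache; just drop it. */
			page_cache_release(page);
			continue;
		}
		squashfs_readpage_async(page);
		page_cache_release(page);
	}
	return 0;
}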

Any comments are welcome.
Thanks.

Minchan Kim (5):
  squashfs: clean up squashfs_read_data
  squashfs: clean up squashfs_readpage
  squashfs: remove cache for normal data page
  squashfs: support multiple decompress stream buffer
  squashfs: support readpages

 fs/squashfs/block.c          |  245 +++++++++-----
 fs/squashfs/cache.c          |   16 +-
 fs/squashfs/decompressor.c   |  107 +++++-
 fs/squashfs/decompressor.h   |   27 +-
 fs/squashfs/file.c           |  738 ++++++++++++++++++++++++++++++++++++++----
 fs/squashfs/lzo_wrapper.c    |   12 +-
 fs/squashfs/squashfs.h       |   12 +-
 fs/squashfs/squashfs_fs_sb.h |   11 +-
 fs/squashfs/super.c          |   44 ++-
 fs/squashfs/xz_wrapper.c     |   20 +-
 fs/squashfs/zlib_wrapper.c   |   12 +-
 11 files changed, 1024 insertions(+), 220 deletions(-)

-- 
1.7.9.5



Thread overview: 8+ messages
2013-09-16  7:08 Minchan Kim [this message]
2013-09-16  7:08 ` [RFC 1/5] squashfs: clean up squashfs_read_data Minchan Kim
2013-09-16  7:08 ` [RFC 2/5] squashfs: clean up squashfs_readpage Minchan Kim
2013-09-16  7:08 ` [RFC 3/5] squashfs: remove cache for normal data page Minchan Kim
2013-09-16  7:08 ` [RFC 4/5] squashfs: support multiple decompress stream buffer Minchan Kim
2013-09-16  7:08 ` [RFC 5/5] squashfs: support readpages Minchan Kim
2013-09-17  1:52   ` Minchan Kim
2013-09-17  1:59     ` Minchan Kim
