From: "J. R. Okajima" <hooanon05@yahoo.co.jp>
To: Phillip Lougher <phillip@lougher.demon.co.uk>
Cc: linux-kernel@vger.kernel.org, Minchan Kim <minchan@kernel.org>,
Phillip Lougher <phillip@squashfs.org.uk>,
Stephen Hemminger <shemminger@vyatta.com>
Subject: Re: [PATCH 3/6] Squashfs: add multi-threaded decompression using percpu variables
Date: Mon, 18 Nov 2013 17:08:37 +0900 [thread overview]
Message-ID: <4087.1384762117@jrobl> (raw)
In-Reply-To: <528502A4.3030505@lougher.demon.co.uk>
Phillip Lougher:
> CCing Junjiro Okijima and Stephen Hemminger
Thank you for CCing, and sorry for my slow response.

> >> Using percpu variables has advantages and disadvantages over
> >> implementations which do not use percpu variables.
> >>
> >> Advantages: the nature of percpu variables ensures decompression is
> >> load-balanced across the multiple cores.
> >>
> >> Disadvantages: it limits decompression to one thread per core.
Honestly speaking, I don't remember the details of squashfs. It was a
long, long time ago that I read and modified squashfs.
Anyway, I will try to reply.

Percpu is a good approach. Obviously, as you mentioned as a
disadvantage, it depends on the balance between these two things:
- How many I/Os run in parallel?
- How much does the decompression cost?
My current guess is that the latter is heavier (for performance), so I
guess percpu is good.

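For clarity, here is a rough sketch of the percpu pattern as I imagine
it (purely illustrative; decomp_stream, streams and decompress_one are
names I made up, not taken from your patch):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/percpu.h>

struct decomp_stream {                  /* hypothetical per-cpu state */
        void *workspace;
};

static struct decomp_stream __percpu *streams;

static int __init decomp_init(void)
{
        /* one workspace per CPU, allocated up front at init time */
        streams = alloc_percpu(struct decomp_stream);
        return streams ? 0 : -ENOMEM;
}

static int decompress_one(void *src, int srclen, void *dst, int dstlen)
{
        /* pin to this CPU's stream; preemption is disabled from here */
        struct decomp_stream *s = get_cpu_ptr(streams);
        int err = 0;

        /* ... run the real decompressor using s->workspace ... */

        put_cpu_ptr(streams);           /* preemption enabled again */
        return err;
}

Because each CPU owns exactly one stream, the work is spread across the
cores for free, but no core can ever run two decompressions at once,
which is exactly the disadvantage you listed.
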
Is it guaranteed that the decompressor never requires any new resources?
Under heavy I/O and memory pressure, if the decompressor wants some
memory between get_cpu_ptr() and put_cpu_ptr(), and the decompressor is
running on all the other cores at the same time, does squashfs simply
return ENOMEM because the memory shrinker cannot run on any core?
If that is true, we may need a rule of "no new resources for
decompressing", since users may prefer a "slow but successful
decompression" to getting ENOMEM.

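To make my worry concrete, here is a continuation of the hypothetical
sketch above (again not your patch's code; decompress_needing_extra and
extra_len are made up):

#include <linux/slab.h>         /* kmalloc()/kfree(), plus the headers above */

static int decompress_needing_extra(void *src, int srclen, size_t extra_len)
{
        struct decomp_stream *s = get_cpu_ptr(streams); /* preemption off */
        void *tmp;
        int err = 0;

        /*
         * GFP_KERNEL could sleep and let the shrinker reclaim memory,
         * but sleeping is not allowed while preemption is disabled.
         * GFP_ATOMIC never waits for reclaim, so under memory pressure
         * it may return NULL and the whole read ends in -ENOMEM;
         * there is no "slow but successful" path here.
         */
        tmp = kmalloc(extra_len, GFP_ATOMIC);
        if (!tmp) {
                err = -ENOMEM;
                goto out;
        }

        /* ... decompress using s->workspace plus the temporary buffer ... */

        kfree(tmp);
out:
        put_cpu_ptr(streams);
        return err;
}

If all of the decompressor's state is really preallocated in the per-cpu
workspace, this path never exists and my question is moot.
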
If this mail is totally pointless, please ignore it.

J. R. Okajima
Thread overview: 15+ messages
2013-11-07 20:24 [PATCH 0/6] Squashfs performance improvements Phillip Lougher
2013-11-07 20:24 ` [PATCH 1/6] Squashfs: Refactor decompressor interface and code (V2) Phillip Lougher
2013-11-07 20:24 ` [PATCH 2/6] Squashfs: enhance parallel I/O Phillip Lougher
2013-11-07 20:24 ` [PATCH 3/6] Squashfs: add multi-threaded decompression using percpu variables Phillip Lougher
2013-11-08 2:42 ` Minchan Kim
2013-11-14 17:04 ` Phillip Lougher
2013-11-18 8:08 ` J. R. Okajima [this message]
2013-11-19 2:12 ` Minchan Kim
2013-11-07 20:24 ` [PATCH 4/6] Squashfs: Generalise paging handling in the decompressors (V2) Phillip Lougher
2013-11-08 5:29 ` Minchan Kim
2013-11-07 20:24 ` [PATCH 5/6] Squashfs: restructure squashfs_readpage() Phillip Lougher
2013-11-08 5:55 ` Minchan Kim
2013-11-08 6:02 ` Minchan Kim
2013-11-07 20:24 ` [PATCH 6/6] Squashfs: Directly decompress into the page cache for file data (V2) Phillip Lougher
2013-11-08 8:23 ` Minchan Kim