From: Eric Nelson <eric@nelint.com>
To: u-boot@lists.denx.de
Subject: [U-Boot] ext4 and caching
Date: Wed, 16 Mar 2016 11:42:55 -0700 [thread overview]
Message-ID: <56E9A92F.5000205@nelint.com> (raw)
Hi all,
I've been seeing the same sort of issues reported by Ionut
and as addressed by this patch:
http://lists.denx.de/pipermail/u-boot/2014-January/171459.html
That patch was added in commit fc0fc50 and reverted in commit 715b56f.
It no longer applies cleanly, and when I tried to resurrect it,
I saw errors traversing directories and perhaps something went
wrong with my merge.
Ionut, do you have a current version of this patch?
When I looked a little further, I found that a read of a ~150 MiB
file was around 30x slower than typical, and that the **same** 8 blocks
accounted for the majority of the reads.
In fact, these same blocks were read back-to-back.
The following is a quick picture of the output from a simple
printf in mmc_bread of the block number and count during a
load of the problem file.
~$ uniq -c < mmc_bread.log | sort -n
1 mmc_bread: 0/1
1 mmc_bread: 2293760/1
1 mmc_bread: 2293762/2
1 mmc_bread: 2293768/1
1 mmc_bread: 2293768/1
1 mmc_bread: 2293768/1
1 mmc_bread: 2293768/1
1 mmc_bread: 2295264/1
1 mmc_bread: 2295270/1
1 mmc_bread: 2295290/1
1 mmc_bread: 2295645/1
1 mmc_bread: 7536640/131072
1 mmc_bread: 7667712/32768
1 mmc_bread: 7700480/16384
1 mmc_bread: 7729152/65536
1 mmc_bread: 7798784/130722
1 mmc_bread: 7929506/1
1 mmc_init: 0, time 129
2 mmc_bread: 0/1
6 mmc_bread: 2359120/1
10 mmc_bread: 2358752/1
34 mmc_bread: 2358808/1
2048 mmc_bread: 2557936/8
4096 mmc_bread: 2557936/8
8193 mmc_bread: 2557936/8
16340 mmc_bread: 2557936/8
16384 mmc_bread: 2557936/8
~$ sort < mmc_bread.log | uniq -c | sort -n | tail -n 1
47061 mmc_bread: 2557936/8
In English, the 8 blocks starting at 2557936 are read
47061 times, back-to-back in large (2k/4k/16k) bunches,
so a very simple **single**-block cache of the last read
will fix the speed issue; I hacked something up to
verify that.
Is anybody else working on things in this area?
I think this is something that's probably easier to fix
at the block device level rather than within the ext4
filesystem code.
That said, the 2k/4k/16k bunches above may also indicate
a simpler fix in the ext4 code.
Please chime in with your thoughts.
Regards,
Eric Nelson
Thread overview: 20+ messages
2016-03-16 18:42 Eric Nelson [this message]
2016-03-16 21:40 ` [U-Boot] [RFC PATCH 0/2] simple cache layer for block devices Eric Nelson
2016-03-16 21:40 ` [U-Boot] [RFC PATCH 1/2] add block device cache Eric Nelson
2016-03-17 21:16 ` Stephen Warren
2016-03-17 21:33 ` Eric Nelson
2016-03-17 21:41 ` Stephen Warren
2016-03-20 22:13 ` Tom Rini
2016-03-20 22:51 ` Eric Nelson
2016-03-16 21:40 ` [U-Boot] [RFC PATCH 2/2] mmc: add support for " Eric Nelson
2016-03-17 21:23 ` Stephen Warren
2016-03-20 19:35 ` Eric Nelson
2016-03-20 22:13 ` Tom Rini
2016-03-20 22:54 ` Eric Nelson
2016-03-21 18:31 ` Eric Nelson
2016-03-26 0:11 ` Eric Nelson
2016-04-09 17:55 ` Simon Glass
2016-04-10 14:31 ` Eric Nelson
2016-03-21 14:27 ` Eric Nelson
2016-03-19 15:42 ` [U-Boot] ext4 and caching Ioan Nicu
2016-03-20 15:02 ` Eric Nelson