* Centralized summary as a way to speed up "ls" time
@ 2007-04-10 16:53 akorolev
2007-04-10 17:24 ` Artem Bityutskiy
2007-04-12 12:16 ` Ferenc Havasi
0 siblings, 2 replies; 3+ messages in thread
From: akorolev @ 2007-04-10 16:53 UTC (permalink / raw)
To: havasi; +Cc: linux-mtd
Hello Ferenc,
While using JFFS2 on large NAND devices with many files, I ran into
very slow "ls" performance.
On a folder containing ~40 files (~80MB total), "ls" could take ~25 seconds!
I am thinking about how to resolve this issue.
The reasons for the slow performance are more or less clear - JFFS2
scans the NAND to fill the jffs2_node_frag and jffs2_full_dnode structures.
After the first "ls" on a directory this data is populated, and
subsequent calls are fast.
But if RAM is scarce the cache may be released, and the next "ls" will
again take a long time.
My question is: would it be possible to extend the Centralized Summary
functionality a little to store these nodes as well?
Do you see any technical issues here?
IMHO, if it is possible, it makes sense: it would load the node info
into RAM for the most used data, so we would have the node info
preloaded and a fast "ls" right after mount.
Do you have any updates to CS? Is the latest version of CS the one at
www.inf.u-szeged.hu/jffs2/mount.php?
Thanks,
Alexey
* Re: Centralized summary as a way to speed up "ls" time
2007-04-10 16:53 Centralized summary as a way to speed up "ls" time akorolev
@ 2007-04-10 17:24 ` Artem Bityutskiy
2007-04-12 12:16 ` Ferenc Havasi
1 sibling, 0 replies; 3+ messages in thread
From: Artem Bityutskiy @ 2007-04-10 17:24 UTC (permalink / raw)
To: akorolev; +Cc: linux-mtd
On Tue, 2007-04-10 at 20:53 +0400, akorolev wrote:
> Hello Ferenc,
>
> While using JFFS2 on large NAND devices with many files, I ran into
> very slow "ls" performance.
> On a folder containing ~40 files (~80MB total), "ls" could take ~25 seconds!
One way could be to introduce an aggregated dentry node which would
store several direntries (say, 32) together, instead of having a
per-direntry node.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: Centralized summary as a way to speed up "ls" time
2007-04-10 16:53 Centralized summary as a way to speed up "ls" time akorolev
2007-04-10 17:24 ` Artem Bityutskiy
@ 2007-04-12 12:16 ` Ferenc Havasi
1 sibling, 0 replies; 3+ messages in thread
From: Ferenc Havasi @ 2007-04-12 12:16 UTC (permalink / raw)
To: akorolev; +Cc: linux-mtd
Hi Alexey,
akorolev wrote:
> While using JFFS2 on large NAND devices with many files, I ran into
> very slow "ls" performance.
> On a folder containing ~40 files (~80MB total), "ls" could take ~25 seconds!
>
> My question is: would it be possible to extend the Centralized Summary
> functionality a little to store these nodes as well?
> Do you see any technical issues here?
We have now thought it over, and there is no big technical problem in
extending Centralized Summary to store these (frag tree) nodes as well.
We see two technical issues:
- it is not certain that after an "ls" these trees stay in memory
forever, so saving only the existing trees may not always bring a
speedup
- it can increase the summary information by (in the worst case) about 100%
> Do you have any updates to CS? Is the latest version of CS the one at
> www.inf.u-szeged.hu/jffs2/mount.php?
Yes, that is the latest version. Updating it for the latest MTD version
could take considerable effort - many things have changed.
Ferenc