public inbox for linux-mtd@lists.infradead.org
From: "Øyvind Harboe" <oyvind.harboe@zylin.com>
To: David Woodhouse <dwmw2@infradead.org>
Cc: linux-mtd@lists.infradead.org
Subject: Re: Prune obsolete raw_node_ref's from RAM
Date: Wed, 14 Jul 2004 09:27:40 +0200	[thread overview]
Message-ID: <1089790059.7607.18.camel@famine> (raw)
In-Reply-To: <1089788931.8822.54.camel@imladris.demon.co.uk>

On Wed, 2004-07-14 at 09:08, David Woodhouse wrote:
> On Wed, 2004-07-14 at 08:51 +0200, Øyvind Harboe wrote:
> > On Wed, 2004-07-14 at 08:41, David Woodhouse wrote:
> > > On Wed, 2004-07-14 at 08:26 +0200, Øyvind Harboe wrote:
> > > > Would all the performance problems go away if the lists were doubly
> > > > linked?
> > > 
> > > Well yes, but since the object of the exercise was to save memory,
> > > doubling the size of the objects in question doesn't really strike me as
> > > being the right way to approach the problem :)
> > 
> > The memory problem I ran into wasn't the size of the un-obsolete nodes,
> > but that they grew with the number of obsolete nodes.
> > 
> > I hate #if's in code as much as the next guy, but JFFS2 spans deeply
> > embedded systems to full-fledged PCs and it is only to be expected that
> > the different profiles have different needs.
> 
> Well, I'm more than happy to do 
> 	#define jffs2_prune_ref_lists(c, ref) do { } while (0)
> in the Linux code and let the implementation we're playing with live in
> eCos code alone. It would be nicer if it were generic though -- that's
> why I'm thinking about making it happen in a periodic pass, rather than
> doing it all on _every_ obsoletion. By doing it every single time, we
> maximise the amount of list-walking required. 
> 
> We could walk the next_in_ino list when we remove a given inode from the
> inode cache, dropping all obsolete nodes from it then in a single pass. 
> 
> And we could walk each eraseblock's list at some other time...

Hmmm... how about a configurable threshold (i.e. a maximum number of
obsolete nodes) that triggers a "purge" at the end of
jffs2_mark_node_obsolete()?

A "purge" would loop through all the physical nodes and merge those
that can be merged.
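To make the idea concrete, here is a minimal user-space sketch of such a purge. The struct and field names below (node_ref, next_phys, totlen, obsolete) are simplified stand-ins for illustration, not the real JFFS2 structures: in JFFS2 the per-eraseblock list of struct jffs2_raw_node_ref is singly linked and obsolescence is a flag bit, so merging a run of adjacent obsolete refs into one ref covering their combined range is what frees memory.

```c
/* Hedged sketch of a threshold-triggered purge of obsolete node refs.
 * All names here are hypothetical stand-ins for the real JFFS2 types. */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct node_ref {                 /* simplified raw_node_ref */
    struct node_ref *next_phys;   /* next node in this eraseblock */
    unsigned int offset;          /* flash offset within the block */
    unsigned int totlen;          /* total length of the node on flash */
    int obsolete;                 /* a flag bit in the real structure */
};

/* Walk the eraseblock's list once and merge each run of adjacent
 * obsolete refs into a single ref spanning their combined range,
 * freeing the absorbed refs. Returns the number of refs freed. */
static int purge_obsolete(struct node_ref *list)
{
    int freed = 0;
    struct node_ref *ref;

    for (ref = list; ref; ref = ref->next_phys) {
        if (!ref->obsolete)
            continue;
        /* absorb any immediately following obsolete refs */
        while (ref->next_phys && ref->next_phys->obsolete) {
            struct node_ref *victim = ref->next_phys;

            ref->totlen += victim->totlen;
            ref->next_phys = victim->next_phys;
            free(victim);
            freed++;
        }
    }
    return freed;
}
```

The threshold part would then just be a per-filesystem counter bumped in jffs2_mark_node_obsolete(); once it exceeds the configured maximum, call the purge and reset the counter, keeping the list walk amortised rather than per-obsoletion.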

For my purposes, I need to make sure that peak RAM usage is
predictable. Precisely what that peak is matters somewhat less.

-- 
Øyvind Harboe
http://www.zylin.com


Thread overview: 7+ messages
2004-07-13 14:18 Prune obsolete raw_node_ref's from RAM Øyvind Harboe
2004-07-13 23:14 ` David Woodhouse
2004-07-14  6:26   ` Øyvind Harboe
2004-07-14  6:41     ` David Woodhouse
2004-07-14  6:51       ` Øyvind Harboe
2004-07-14  7:08         ` David Woodhouse
2004-07-14  7:27           ` Øyvind Harboe [this message]
