* Q: Filesystem choice..
@ 2004-01-25 21:53 Eric W. Biederman
2004-01-25 22:49 ` Jörn Engel
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: Eric W. Biederman @ 2004-01-25 21:53 UTC (permalink / raw)
To: linux-mtd
Currently I am examining the possibility of using a filesystem with
LinuxBIOS so that I may store parameters and kernels in the flash in a
more flexible manner.
The current flash chips I am working with are NOR flash from 512KiB to
4MiB. And they generally have a 64KiB erase size.
I have two flash blocks that are reserved for XIP code (the hw
initialization firmware) and the rest can be used for the filesystem.
So in the worst case I have 6 flash blocks to play with.
The old papers on jffs2 would make it unacceptable as it reserves
5 erase blocks. And I don't know if yaffs or yaffs2 is any better.
In addition boot time is important so it would be ideal if I did not
need to read every byte of the ROM chip to initialize the filesystem.
Is there a filesystem that only reserves one erase block?
Does it look like I need to write my own solution?
Eric
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: Q: Filesystem choice..
2004-01-25 21:53 Q: Filesystem choice Eric W. Biederman
@ 2004-01-25 22:49 ` Jörn Engel
2004-01-26 6:42 ` David Woodhouse
2004-01-27 4:30 ` Charles Manning
2 siblings, 0 replies; 14+ messages in thread
From: Jörn Engel @ 2004-01-25 22:49 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: linux-mtd
On Sun, 25 January 2004 14:53:55 -0700, Eric W. Biederman wrote:
>
> Currently I am examining the possibility of using a filesystem with
> LinuxBIOS so that I may store parameters and kernels in the flash in a
> more flexible manner.
>
> The current flash chips I am working with are NOR flash from 512KiB to
> 4MiB. And they generally have a 64KiB erase size.
>
> I have two flash blocks that are reserved for XIP code (the hw
> initialization firmware) and the rest can be used for the filesystem.
> So in the worst case I have 6 flash blocks to play with.
>
> The old papers on jffs2 would make it unacceptable as it reserves
> 5 erase blocks. And I don't know if yaffs or yaffs2 is any better.
>
> In addition boot time is important so it would be ideal if I did not
> need to read every byte of the ROM chip to initialize the filesystem.
>
> Is there a filesystem that only reserves one erase block?
>
> Does it look like I need to write my own solution?
Not necessarily. Disable compression for jffs2 and you should be able
to get away with two reserved blocks, or even 80KiB or so. But that
requires changes to current code and lots of testing. Compression
makes things more complicated, basically impossible to calculate, so
you have to reserve a little more and hope for the best. Five blocks
are *very* conservative for a filesystem of six blocks total, though.
The idea is pretty old, it's just that no one cared enough to do all
the work.
Jörn
--
Fancy algorithms are buggier than simple ones, and they're much harder
to implement. Use simple algorithms as well as simple data structures.
-- Rob Pike
* Re: Q: Filesystem choice..
2004-01-25 21:53 Q: Filesystem choice Eric W. Biederman
2004-01-25 22:49 ` Jörn Engel
@ 2004-01-26 6:42 ` David Woodhouse
2004-01-26 7:09 ` Eric W. Biederman
2004-01-27 4:30 ` Charles Manning
2 siblings, 1 reply; 14+ messages in thread
From: David Woodhouse @ 2004-01-26 6:42 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: linux-mtd
On Sun, 2004-01-25 at 14:53 -0700, Eric W. Biederman wrote:
> The old papers on jffs2 would make it unacceptable as it reserves
> 5 erase blocks.
It's got slightly different heuristics now -- a proportion of total
size, plus a proportion of total _blocks_. That was done primarily to
deal with NAND flash, where we need _more_ blocks reserved, but it
should also have helped with small NOR flashes.
You blatantly don't _need_ to reserve five erase blocks to let you
rewrite the contents of the remaining, erm, one erase block full of
data. You can tune this; it's not a mount option but it's relatively
simple to change in the code.
> And I don't know if yaffs or yaffs2 is any better.
They're for NAND, not NOR flash.
> In addition boot time is important so it would be ideal if I did not
> need to read every byte of the ROM chip to initialize the filesystem.
There have been efforts to improve JFFS2 performance in this respect. It
still reads the _header_ from each node of the file system, but doesn't
actually checksum every node any more.
--
dwmw2
* Re: Q: Filesystem choice..
2004-01-26 6:42 ` David Woodhouse
@ 2004-01-26 7:09 ` Eric W. Biederman
2004-01-26 7:40 ` David Woodhouse
0 siblings, 1 reply; 14+ messages in thread
From: Eric W. Biederman @ 2004-01-26 7:09 UTC (permalink / raw)
To: David Woodhouse; +Cc: linux-mtd
David Woodhouse <dwmw2@infradead.org> writes:
> On Sun, 2004-01-25 at 14:53 -0700, Eric W. Biederman wrote:
> > The old papers on jffs2 would make it unacceptable as it reserves
> > 5 erase blocks.
>
> It's got slightly different heuristics now -- a proportion of total
> size, plus a proportion of total _blocks_. That was done primarily to
> deal with NAND flash, where we need _more_ blocks reserved, but it
> should also have helped with small NOR flashes.
>
> You blatantly don't _need_ to reserve five erase blocks to let you
> rewrite the contents of the remaining, erm, one erase block full of
> data. You can tune this; it's not a mount option but it's relatively
> simple to change in the code.
Has anyone gotten as far as a proof? Or are there some informal
things that almost make up a proof, so I could get a feel? Reserving
more than a single erase block is going to be hard to swallow for such
a small filesystem.
> > And I don't know if yaffs or yaffs2 is any better.
>
> They're for NAND, not NOR flash.
I think I have heard about a port to NOR flash, but since they are
tuned for NAND flash I would be really surprised if they were any different.
> > In addition boot time is important so it would be ideal if I did not
> > need to read every byte of the ROM chip to initialize the filesystem.
>
> There have been efforts to improve JFFS2 performance in this respect. It
> still reads the _header_ from each node of the file system, but doesn't
> actually checksum every node any more.
That should help. It bears trying to see how fast things are.
Eric
* Re: Q: Filesystem choice..
2004-01-26 7:09 ` Eric W. Biederman
@ 2004-01-26 7:40 ` David Woodhouse
2004-01-26 8:34 ` Joakim Tjernlund
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: David Woodhouse @ 2004-01-26 7:40 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: linux-mtd
On Mon, 2004-01-26 at 00:09 -0700, Eric W. Biederman wrote:
> Has anyone gotten as far as a proof? Or are there some informal
> things that almost make up a proof, so I could get a feel? Reserving
> more than a single erase block is going to be hard to swallow for such
> a small filesystem.
You need to have enough space to let garbage collection make progress.
Which means it has to be able to GC a whole erase block into space
elsewhere, then erase it. That's basically one block you require.
Except you have to account for write errors or power cycles during a GC
write, wasting some of your free space. You have to account for the
possibility that what started off as a single 4KiB node in the original
block now hits the end of the new erase block and is split between that
and the start of another, so effectively it grew because it has an extra
node header now. And of course when you do that you get worse
compression ratios too, since 2KiB blocks compress less effectively than
4KiB blocks do.
When you get down to the kind of sizes you're talking about, I suspect
we need to be thinking in bytes rather than blocks -- because there
isn't just one threshold; there's many, of which three are particularly
relevant:
/* Deletion should almost _always_ be allowed. We're fairly
buggered once we stop allowing people to delete stuff
because there's not enough free space... */
c->resv_blocks_deletion = 2;
/* Be conservative about how much space we need before we allow writes.
On top of that which is required for deletia, require an extra 2%
of the medium to be available, for overhead caused by nodes being
split across blocks, etc. */
size = c->flash_size / 50; /* 2% of flash size */
size += c->nr_blocks * 100; /* And 100 bytes per eraseblock */
size += c->sector_size - 1; /* ... and round up */
c->resv_blocks_write = c->resv_blocks_deletion + (size / c->sector_size);
/* When do we allow garbage collection to merge nodes to make
long-term progress at the expense of short-term space exhaustion? */
c->resv_blocks_gcmerge = c->resv_blocks_deletion + 1;
You want resv_blocks_write to be larger than resv_blocks_deletion, and I
suspect you could get away with values of 2 and 1.5 respectively, if we
were counting bytes rather than whole eraseblocks.
Then resv_blocks_gcmerge wants to be probably about the same as
resv_blocks_deletion, to make sure we get as much benefit from GC as
possible.
> > > And I don't know if yaffs or yaffs2 is any better.
> >
> > They're for NAND, not NOR flash.
>
> I think I have heard about a port to NOR flash, but since they are
> tuned for NAND flash I would be really surprised if they were any different.
>
> > > In addition boot time is important so it would be ideal if I did not
> > > need to read every byte of the ROM chip to initialize the filesystem.
> >
> > There have been efforts to improve JFFS2 performance in this respect. It
> > still reads the _header_ from each node of the file system, but doesn't
> > actually checksum every node any more.
>
> That should help. It bears trying to see how fast things are.
>
> Eric
--
dwmw2
* RE: Q: Filesystem choice..
2004-01-26 7:40 ` David Woodhouse
@ 2004-01-26 8:34 ` Joakim Tjernlund
2004-01-26 8:38 ` David Woodhouse
2004-01-26 9:23 ` Eric W. Biederman
2004-01-26 15:32 ` Jörn Engel
2 siblings, 1 reply; 14+ messages in thread
From: Joakim Tjernlund @ 2004-01-26 8:34 UTC (permalink / raw)
To: 'David Woodhouse', 'Eric W. Biederman'; +Cc: linux-mtd
> When you get down to the kind of sizes you're talking about, I suspect
> we need to be thinking in bytes rather than blocks -- because there
> isn't just one threshold; there's many, of which three are particularly
> relevant:
>
> /* Deletion should almost _always_ be allowed. We're fairly
>    buggered once we stop allowing people to delete stuff
>    because there's not enough free space... */
> c->resv_blocks_deletion = 2;
>
> /* Be conservative about how much space we need before we allow writes.
>    On top of that which is required for deletia, require an extra 2%
>    of the medium to be available, for overhead caused by nodes being
>    split across blocks, etc. */
>
> size = c->flash_size / 50; /* 2% of flash size */
> size += c->nr_blocks * 100; /* And 100 bytes per eraseblock */
> size += c->sector_size - 1; /* ... and round up */
>
> c->resv_blocks_write = c->resv_blocks_deletion + (size / c->sector_size);
>
> /* When do we allow garbage collection to merge nodes to make
>    long-term progress at the expense of short-term space exhaustion? */
> c->resv_blocks_gcmerge = c->resv_blocks_deletion + 1;
>
> You want resv_blocks_write to be larger than resv_blocks_deletion, and I
> suspect you could get away with values of 2 and 1.5 respectively, if we
> were counting bytes rather than whole eraseblocks.
>
> Then resv_blocks_gcmerge wants to be probably about the same as
> resv_blocks_deletion, to make sure we get as much benefit from GC as
> possible.
Hmm, I have a ~63MB JFFS2 NOR (EB=256KiB) FS. If I understand the
calculations above correctly, I get:
size = 63*1024*1024/50 = 1321205
size += 252*100 = 1346405
size += 256*1024 - 1 = 1608548
resv_blocks_write = 2 + 1608548/(256*1024) = 2 + 6 = 8
resv_blocks_gcmerge = 2 + 1 = 3
So now I need 8(2 MB) reserved blocks instead of 5?
I never had any trouble with 5 reserved blocks, maybe one could do a
resv_blocks_write = MIN(5, resv_blocks_write) iff NOR flash?
Jocke
* RE: Q: Filesystem choice..
2004-01-26 8:34 ` Joakim Tjernlund
@ 2004-01-26 8:38 ` David Woodhouse
2004-01-26 9:28 ` Joakim Tjernlund
0 siblings, 1 reply; 14+ messages in thread
From: David Woodhouse @ 2004-01-26 8:38 UTC (permalink / raw)
To: joakim.tjernlund; +Cc: linux-mtd, 'Eric W. Biederman'
On Mon, 2004-01-26 at 09:34 +0100, Joakim Tjernlund wrote:
> size = 63*1024*1024/50 = 1321205
> size += 252*100 = 1346405
> size += 256*1024 - 1 = 1608548
>
> resv_blocks_write = 2 + 1608548/(256*1024) = 2 + 6 = 8
> resv_blocks_gcmerge = 2 + 1 = 3
>
> So now I need 8(2 MB) reserved blocks instead of 5?
> I never had any trouble with 5 reserved blocks, maybe one could do a
> resv_blocks_write = MIN(5, resv_blocks_write) iff NOR flash?
I'd be happier about reducing the 2% figure to 1% or 1.5%, maybe.
--
dwmw2
* Re: Q: Filesystem choice..
2004-01-26 7:40 ` David Woodhouse
2004-01-26 8:34 ` Joakim Tjernlund
@ 2004-01-26 9:23 ` Eric W. Biederman
2004-01-26 9:31 ` David Woodhouse
2004-01-26 15:32 ` Jörn Engel
2 siblings, 1 reply; 14+ messages in thread
From: Eric W. Biederman @ 2004-01-26 9:23 UTC (permalink / raw)
To: David Woodhouse; +Cc: linux-mtd
David Woodhouse <dwmw2@infradead.org> writes:
> On Mon, 2004-01-26 at 00:09 -0700, Eric W. Biederman wrote:
> > Has anyone gotten as far as a proof? Or are there some informal
> > things that almost make up a proof, so I could get a feel? Reserving
> > more than a single erase block is going to be hard to swallow for such
> > a small filesystem.
>
> You need to have enough space to let garbage collection make progress.
> Which means it has to be able to GC a whole erase block into space
> elsewhere, then erase it. That's basically one block you require.
>
> Except you have to account for write errors or power cycles during a GC
> write, wasting some of your free space. You have to account for the
> possibility that what started off as a single 4KiB node in the original
> block now hits the end of the new erase block and is split between that
> and the start of another, so effectively it grew because it has an extra
> node header now. And of course when you do that you get worse
> compression ratios too, since 2KiB blocks compress less effectively than
> 4KiB blocks do.
Compression is an interesting question. Do you encode the uncompressed
size of a block in bytes? If so I don't think it would be too difficult
to get your uncompressed block size > page size. With the page cache
there is no real reason a block size must be <= page size. You just need
what amounts to scatter/gather support.
My real question here is how difficult is it to disable compression?
Or can compression be deliberately disabled on a per file basis?
Of the two primary files I am thinking of, neither would need
compression. A file of my BIOS settings would be dense
and quite small (128 bytes on a big day). A kernel is already
compressed and carries its own decompressor, and whole-file compression
is more effective than compressing small blocks.
> When you get down to the kind of sizes you're talking about, I suspect
> we need to be thinking in bytes rather than blocks -- because there
> isn't just one threshold; there's many, of which three are particularly
> relevant:
That makes sense. This at least looks like a viable alternative for
the 1MB case.
[snip actual formulas]
> You want resv_blocks_write to be larger than resv_blocks_deletion, and I
> suspect you could get away with values of 2 and 1.5 respectively, if we
> were counting bytes rather than whole eraseblocks.
I have a truly perverse case I would like to ask your opinion about.
A filesystem composed of 2 8K erase blocks? That is one of the
weird special cases that flash chips often support. I could
only store my parameter file in there but it would be interesting.
If I counted bytes very carefully and never got above 0.5 of
a block full, I suspect it would work, and be useful. I'd just
have to make certain the degenerate case matched the original jffs.
And a last question: jffs2 rounds all erase blocks up to a common size,
doesn't it?
Eric
* RE: Q: Filesystem choice..
2004-01-26 8:38 ` David Woodhouse
@ 2004-01-26 9:28 ` Joakim Tjernlund
0 siblings, 0 replies; 14+ messages in thread
From: Joakim Tjernlund @ 2004-01-26 9:28 UTC (permalink / raw)
To: 'David Woodhouse'; +Cc: linux-mtd, 'Eric W. Biederman'
> On Mon, 2004-01-26 at 09:34 +0100, Joakim Tjernlund wrote:
> > size = 63*1024*1024/50 = 1321205
> > size += 252*100 = 1346405
> > size += 256*1024 - 1 = 1608548
> >
> > resv_blocks_write = 2 + 1608548/(256*1024) = 2 + 6 = 8
> > resv_blocks_gcmerge = 2 + 1 = 3
> >
> > So now I need 8(2 MB) reserved blocks instead of 5?
> > I never had any trouble with 5 reserved blocks, maybe one could do a
> > resv_blocks_write = MIN(5, resv_blocks_write) iff NOR flash?
>
> I'd be happier about reducing the 2% figure to 1% or 1.5%, maybe.
1% results in: resv_blocks_write = 2 + 3 = 5.
Much better.
Jocke
* Re: Q: Filesystem choice..
2004-01-26 9:23 ` Eric W. Biederman
@ 2004-01-26 9:31 ` David Woodhouse
2004-01-26 16:20 ` Eric W. Biederman
0 siblings, 1 reply; 14+ messages in thread
From: David Woodhouse @ 2004-01-26 9:31 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: linux-mtd
On Mon, 2004-01-26 at 02:23 -0700, Eric W. Biederman wrote:
> Compression is an interesting question. Do you encode the uncompressed
> size of a block in bytes?
Yes.
> If so I don't think it would be too difficult to get your uncompressed
> block size > page size.
It wouldn't be difficult -- but that's not really at all relevant to the
above question. By 'uncompressed block size' here I'm assuming you're
talking of the amount of data payload we attach to any given node (log
entry). The rule is currently that it mustn't cross a page boundary --
and hence, by inference, obviously can't exceed a page in size.
That assumption did allow a little simplification of bits of the code,
but actually it turned out to be less useful than I originally thought,
so it might be worth ditching in order to let us get better compression
by compressing larger chunks at a time.
> With the page cache there is no real reason a block size must be <= page size.
> You just need what amounts to scatter/gather support.
Yes. It's been done for zisofs -- if we have to decompress, for example,
16KiB to satisfy a single 4KiB readpage, we can prefetch the other data
which we had to decompress anyway.
Fix up some other assumptions about the first byte in any given page
also being the first byte in a node, and fix up the garbage-collection
which will need to have enough workspace to decompress and recompress
the largest block it may encounter, and it should work.
> My real question here is how difficult is it to disable compression?
> Or can compression be deliberately disabled on a per file basis?
It's not too hard. To disable it completely you just need to change a
few #defines in os-linux.h. The support for disabling it on a per-file
basis isn't complete, but there are flags allocated in the inode
structure to keep track of it.
> I have a truly perverse case I would like to ask your opinion about.
> A filesystem composed of 2 8K erase blocks? That is one of the
> weird special cases that flash chips often support. I could
> only store my parameter file in there but it would be interesting.
To be honest, at that size I'd just do it directly via /dev/mtd0. Put
the file directly on the flash with a checksum. Alternate between the
eraseblocks each time it changes, then erase the old copy.
> And a last question. jffs2 rounds all erase blocks up to a common size
> doesn't it?
Yes.
--
dwmw2
* Re: Q: Filesystem choice..
2004-01-26 7:40 ` David Woodhouse
2004-01-26 8:34 ` Joakim Tjernlund
2004-01-26 9:23 ` Eric W. Biederman
@ 2004-01-26 15:32 ` Jörn Engel
2 siblings, 0 replies; 14+ messages in thread
From: Jörn Engel @ 2004-01-26 15:32 UTC (permalink / raw)
To: David Woodhouse; +Cc: linux-mtd, Eric W. Biederman
On Mon, 26 January 2004 07:40:00 +0000, David Woodhouse wrote:
>
> You want resv_blocks_write to be larger than resv_blocks_deletion, and I
> suspect you could get away with values of 2 and 1.5 respectively, if we
> were counting bytes rather than whole eraseblocks.
Hmm. Any special reason why you don't always count in bytes? That
would even remove code, such as this line:
> size += c->sector_size - 1; /* ... and round up */
Jörn
--
Write programs that do one thing and do it well. Write programs to work
together. Write programs to handle text streams, because that is a
universal interface.
-- Doug McIlroy
* Re: Q: Filesystem choice..
2004-01-26 9:31 ` David Woodhouse
@ 2004-01-26 16:20 ` Eric W. Biederman
0 siblings, 0 replies; 14+ messages in thread
From: Eric W. Biederman @ 2004-01-26 16:20 UTC (permalink / raw)
To: David Woodhouse; +Cc: linux-mtd
David Woodhouse <dwmw2@infradead.org> writes:
> On Mon, 2004-01-26 at 02:23 -0700, Eric W. Biederman wrote:
>
> Fix up some other assumptions about the first byte in any given page
> also being the first byte in a node, and fix up the garbage-collection
> which will need to have enough workspace to decompress and recompress
> the largest block it may encounter, and it should work.
Cool. Then if it comes up I will look.
> > My real question here is how difficult is it to disable compression?
> > Or can compression be deliberately disabled on a per file basis?
>
> It's not too hard. To disable it completely you just need to change a
> few #defines in os-linux.h. The support for disabling it on a per-file
> basis isn't complete, but there are flags allocated in the inode
> structure to keep track of it.
Nice.
> > I have a truly perverse case I would like to ask your opinion about.
> > A filesystem composed of 2 8K erase blocks? That is one of the
> > weird special cases that flash chips often support. I could
> > only store my parameter file in there but it would be interesting.
>
> To be honest, at that size I'd just do it directly via /dev/mtd0. Put
> the file directly on the flash with a checksum. Alternate between the
> eraseblocks each time it changes, then erase the old copy.
Right, that would work, and I might do that. If I could put a fs in
there I'd get some extensibility benefits, as well as being able to
write a couple of copies of my small file before I switch to the next
erase block. The extensibility is that other pieces of firmware
could have their own files of settings, decoupling things a little bit.
If I could get the degenerate case to work without needing gross hacks,
jffs2 would have scaled down to a useful level.
Primarily I am interested in not reinventing if I can, and it looks
like that may be a possibility.
> > And a last question. jffs2 rounds all erase blocks up to a common size
> > doesn't it?
>
> Yes.
Thanks for the information.
I'm not quite certain where I will go with this, but it has made the
trade-offs pretty clear.
Eric
* Re: Q: Filesystem choice..
2004-01-25 21:53 Q: Filesystem choice Eric W. Biederman
2004-01-25 22:49 ` Jörn Engel
2004-01-26 6:42 ` David Woodhouse
@ 2004-01-27 4:30 ` Charles Manning
2004-01-27 7:13 ` Eric W. Biederman
2 siblings, 1 reply; 14+ messages in thread
From: Charles Manning @ 2004-01-27 4:30 UTC (permalink / raw)
To: Eric W. Biederman, linux-mtd
On Monday 26 January 2004 10:53, Eric W. Biederman wrote:
> Currently I am examining the possibility of using a filesystem with
> LinuxBIOS so that I may store parameters and kernels in the flash in a
> more flexible manner.
>
> The current flash chips I am working with are NOR flash from 512KiB to
> 4MiB. And they generally have a 64KiB erase size.
>
> I have two flash blocks that are reserved for XIP code (the hw
> initialization firmware) and the rest can be used for the filesystem.
> So in the worst case I have 6 flash blocks to play with.
>
> The old papers on jffs2 would make it unacceptable as it reserves
> 5 erase blocks. And I don't know if yaffs or yaffs2 is any better.
>
> In addition boot time is important so it would be ideal if I did not
> need to read every byte of the ROM chip to initialize the filesystem.
>
> Is there a filesystem that only reserves one erase block?
>
> Does it look like I need to write my own solution?
First up: Do you really need a full-blown file system? Maybe something more
along the lines of a linear file store would be more suitable. Or maybe just
some basic storage in binary partitions.
YAFFS is not really designed for NOR, though it has been used for NOR. For
the sizes you're talking about YAFFS would not really be a good choice
because the file headers use one "chunk" per file. This eases garbage
collection, but swallows flash.
-- Charles
* Re: Q: Filesystem choice..
2004-01-27 4:30 ` Charles Manning
@ 2004-01-27 7:13 ` Eric W. Biederman
0 siblings, 0 replies; 14+ messages in thread
From: Eric W. Biederman @ 2004-01-27 7:13 UTC (permalink / raw)
To: manningc2; +Cc: linux-mtd
Charles Manning <manningc2@actrix.gen.nz> writes:
>
> First up: Do you really need a full-blown file system? Maybe something more
> along the lines of a linear file store would be more suitable. Or maybe just
> some basic storage in binary partitions.
As I don't have one at the moment, strictly speaking I don't. At the moment
I am considering my options. I am looking at having several different pieces
of firmware by different authors, so a filesystem would be useful.
> YAFFS is not really designed for NOR, though it has been used for NOR. For
> the sizes you're talking about YAFFS would not really be a good choice
> because the file headers use one "chunk" per file. This eases garbage
> collection, but swallows flash.
Thanks, that was my impression but having it confirmed is appreciated.
Eric