* Lost space on JFFS2 partition
@ 2003-08-28 9:50 John Hall
2003-08-28 10:23 ` David Woodhouse
0 siblings, 1 reply; 8+ messages in thread
From: John Hall @ 2003-08-28 9:50 UTC (permalink / raw)
To: linux-mtd
Hi,
I have a 7MB NAND flash partition, on which I'm running JFFS2. This
partition contained about 3.5MB of files, yet df reported that the partition
was 96% full. I tried sending SIGHUP to the gc thread and unmounting and
remounting the device, but it had no effect. I then moved the files off the
partition and then copied them back, and the usage went down to just 31%,
which with compression is what one would expect.
I've got two ideas about what happened:
1. The files in question are log files and so there are lots of small writes
happening. How does JFFS2 compress files? Is it on a block basis or per
write? If it is the latter then I could imagine that compression is actually
having an adverse effect when a file is created from a large number of small
writes.
2. A bug in JFFS2 was causing some unused space not to be garbage collected.
The version of JFFS2 being used is 9 months old, so perhaps I should merge a
later version in anyway.
My knowledge of how JFFS2 works internally is limited, so I would be
grateful for any advice.
Regards,
John Hall
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Lost space on JFFS2 partition
2003-08-28 9:50 Lost space on JFFS2 partition John Hall
@ 2003-08-28 10:23 ` David Woodhouse
2003-08-28 10:28 ` John Hall
0 siblings, 1 reply; 8+ messages in thread
From: David Woodhouse @ 2003-08-28 10:23 UTC (permalink / raw)
To: John Hall; +Cc: linux-mtd
On Thu, 2003-08-28 at 10:50 +0100, John Hall wrote:
> Hi,
>
> I have a 7MB NAND flash partition, on which I'm running JFFS2. This
> partition contained about 3.5MB of files, yet df reported that the partition
> was 96% full. I tried sending SIGHUP to the gc thread and unmounting and
> remounting the device, but it had no effect. I then moved the files off the
> partition and then copied them back, and the usage went down to just 31%,
> which with compression is what one would expect.
>
> I've got two ideas about what happened:
>
> 1. The files in question are log files and so there are lots of small writes
> happening. How does JFFS2 compress files? Is it on a block basis or per
> write? If it is the latter then I could imagine that compression is actually
> having an adverse effect when a file is created from a large number of small
> writes.
What do you mean by 'on a block basis'? JFFS2 does compression within
each log entry, which in the case of small writes is basically
per-write. It doesn't hurt though -- if the node payload would grow on
compression, we write it out uncompressed.
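The rule described above can be sketched in a few lines. This is an illustrative user-space sketch, not the real jffs2 function signatures: each node's payload is compressed independently (so each small write is its own unit), and the compressed form is kept only if it is strictly smaller than the raw data.

```c
#include <stddef.h>

/* Hypothetical sketch of the per-node decision: given the raw payload
 * length and the length the compressor produced, keep the compressed
 * form only if it actually shrank. Otherwise the node is written raw,
 * so compressing many tiny writes never costs space, it just may not
 * save any. Names here are illustrative, not the jffs2 source. */
enum node_compr { COMPR_NONE, COMPR_ZLIB };

static enum node_compr choose_compr(size_t raw_len, size_t compr_len)
{
    /* Strictly smaller, or the compression attempt is discarded. */
    return (compr_len < raw_len) ? COMPR_ZLIB : COMPR_NONE;
}
```

This is why a log file built from many small appends can still end up larger than expected: each tiny node carries its own header and compresses poorly on its own, but the compression step itself never inflates the stored payload.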
> 2. A bug in JFFS2 was causing some unused space not to be garbage collected.
> The version of JFFS2 being used is 9 months old, so perhaps I should merge a
> later version in anyway.
Sort of. I think it's related to a NAND-specific bug that I fixed last
week, where we'd consistently waste space under that usage pattern, and
although it's reclaimable, we wouldn't account it as free in
statfs().
We still don't account it as free -- but we don't waste space nearly as
often as we used to either; we trigger garbage-collection to fill our
buffer rather than just padding it.
--
dwmw2
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Lost space on JFFS2 partition
2003-08-28 10:23 ` David Woodhouse
@ 2003-08-28 10:28 ` John Hall
2003-08-28 11:27 ` David Woodhouse
0 siblings, 1 reply; 8+ messages in thread
From: John Hall @ 2003-08-28 10:28 UTC (permalink / raw)
To: linux-mtd
"David Woodhouse" <dwmw2@infradead.org> wrote in message
news:1062066193.8465.1571.camel@hades.cambridge.redhat.com...
> > 1. The files in question are log files and so there are lots of
> > small writes happening. How does JFFS2 compress files? Is it on a
> > block basis or per write? If it is the latter then I could imagine
> > that compression is actually having an adverse effect when a file is
> > created from a large number of small writes.
> What do you mean by 'on a block basis'? JFFS2 does compression within
> each log entry, which in the case of small writes is basically
> per-write. It doesn't hurt though -- if the node payload would grow on
> compression, we write it out uncompressed.
I wasn't sure how JFFS2 does its writes, i.e. whether it did each write
immediately to the flash, or whether it would build a page or block's
worth before writing to the flash. Now I see that each write is done
immediately.
> > 2. A bug in JFFS2 was causing some unused space not to be garbage
> > collected. The version of JFFS2 being used is 9 months old, so
> > perhaps I should merge a later version in anyway.
> Sort of. I think it's related to a NAND-specific bug that I fixed last
> week, where we'd consistently waste space under that usage pattern, and
> although it's reclaimable, we wouldn't account it as free in
> statfs().
>
> We still don't account it as free -- but we don't waste space nearly as
> often as we used to either; we trigger garbage-collection to fill our
> buffer rather than just padding it.
Thanks for your explanation. I guess that I need to look at upgrading
JFFS2 (or more likely MTD).
Cheers,
John
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Lost space on JFFS2 partition
2003-08-28 10:28 ` John Hall
@ 2003-08-28 11:27 ` David Woodhouse
2003-08-28 13:58 ` John Hall
0 siblings, 1 reply; 8+ messages in thread
From: David Woodhouse @ 2003-08-28 11:27 UTC (permalink / raw)
To: John Hall; +Cc: linux-mtd
On Thu, 2003-08-28 at 11:28 +0100, John Hall wrote:
> I wasn't sure how JFFS2 does its writes, i.e. whether it did each write
> immediately to the flash, or whether it would build a page or block's
> worth before writing to the flash. Now I see that each write is done
> immediately.
Not on NAND. We _can't_ just write out every tiny node as it happens,
since we could violate the writes-per-page limit for NAND. We batch
writes with a write-behind buffer, flushing it if it has held dirty
data for a certain amount of time. It was that flushing which was causing
the problem. Now we trigger garbage-collection to fill it, instead of
just padding and wasting the space.
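The batching behaviour described above can be sketched as follows. This is a hypothetical illustration (the names are not the real jffs2 structures): small writes accumulate in a page-sized buffer, and the NAND page is programmed once per fill rather than once per tiny node, respecting the writes-per-page limit.

```c
#include <string.h>
#include <stddef.h>

/* Illustrative write-behind buffer for NAND, where each page may only
 * be programmed a limited number of times. Small log-entry writes are
 * accumulated and the page is written out once when full, instead of
 * issuing one tiny flash write per node. */
#define PAGE_SIZE 512

struct wbuf {
    unsigned char buf[PAGE_SIZE];
    size_t used;     /* bytes accumulated in the current page */
    size_t flushes;  /* how many full pages were programmed */
};

/* Append data, programming (flushing) the page each time it fills. */
static void wbuf_write(struct wbuf *w, const unsigned char *data, size_t len)
{
    while (len) {
        size_t n = PAGE_SIZE - w->used;
        if (n > len)
            n = len;
        memcpy(w->buf + w->used, data, n);
        w->used += n;
        data += n;
        len -= n;
        if (w->used == PAGE_SIZE) {  /* page full: program it once */
            w->flushes++;
            w->used = 0;
        }
    }
}
```

The problem discussed in the thread was the timed flush of a partially full buffer: the old code padded the rest of the page with wasted space, whereas the fixed code lets garbage collection supply real data to fill it.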
--
dwmw2
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Lost space on JFFS2 partition
2003-08-28 11:27 ` David Woodhouse
@ 2003-08-28 13:58 ` John Hall
2003-08-28 14:15 ` David Woodhouse
0 siblings, 1 reply; 8+ messages in thread
From: John Hall @ 2003-08-28 13:58 UTC (permalink / raw)
To: linux-mtd
On 28 August 2003 12:27, David Woodhouse <dwmw2@infradead.org> wrote:
> > I wasn't sure how JFFS2 does its writes, i.e. whether it did each
> > write immediately to the flash, or whether it would build a page or
> > block's worth before writing to the flash. Now I see that each write
> > is done immediately.
> Not on NAND. We _can't_ just write out every tiny node as it happens,
> since we could violate the writes-per-page limit for NAND. We batch
> writes with a write-behind buffer, flushing it if it has held dirty
> data for a certain amount of time. It was that flushing which was
> causing the problem. Now we trigger garbage-collection to fill it,
> instead of just padding and wasting the space.
OK, that makes sense.
Is the current CVS head considered stable, and will it work correctly
under 2.4 (in fact armlinux 2.4.18-rmk7)?
On a completely different note, is it possible to disable compression in
JFFS2 without hacking the sourcecode?
Cheers,
John
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Lost space on JFFS2 partition
2003-08-28 13:58 ` John Hall
@ 2003-08-28 14:15 ` David Woodhouse
2003-08-28 14:25 ` John Hall
0 siblings, 1 reply; 8+ messages in thread
From: David Woodhouse @ 2003-08-28 14:15 UTC (permalink / raw)
To: John Hall; +Cc: linux-mtd
On Thu, 2003-08-28 at 14:58 +0100, John Hall wrote:
> Is the current CVS head considered stable,
Yes.
> and will it work correctly under 2.4
Yes....
> (in fact armlinux 2.4.18-rmk7)?
... but maybe not for ancient 2.4 like that. Won't be far off though --
you may just have to reinstate some backward-compat stuff in
include/linux/mtd/compatmac.h which got removed recently.
> On a completely different note, is it possible to disable compression in
> JFFS2 without hacking the sourcecode?
No. At least not if you count just reading the archives for the last
week or two and applying the patch therein as 'hacking' ;)
--
dwmw2
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Lost space on JFFS2 partition
2003-08-28 14:15 ` David Woodhouse
@ 2003-08-28 14:25 ` John Hall
2003-08-28 14:46 ` David Woodhouse
0 siblings, 1 reply; 8+ messages in thread
From: John Hall @ 2003-08-28 14:25 UTC (permalink / raw)
To: linux-mtd
"David Woodhouse" <dwmw2@infradead.org> wrote in message
news:1062080106.12122.1.camel@hades.cambridge.redhat.com...
> > and will it work correctly under 2.4
>
> Yes....
>
> > (in fact armlinux 2.4.18-rmk7)?
>
> ... but maybe not for ancient 2.4 like that. Won't be far off though
> -- you may just have to reinstate some backward-compat stuff in
> include/linux/mtd/compatmac.h which got removed recently.
2.4.18 is considered ancient? :(
I've discovered the backward-compat stuff - seems to mainly be 'BUG_ON'
and 'likely'. I did get some compiler warnings about %z in printf
strings - is this just gcc 2.95.3 not being aware of them?
> > On a completely different note, is it possible to disable
> > compression in JFFS2 without hacking the sourcecode?
>
> No. At least not if you count just reading the archives for the last
> week or two and applying the patch therein as 'hacking' ;)
I found this in the archives:
"Just replace jffs2_{de,}compress() with NOPs. Return zero from
jffs2_compress()", which seems to be more of a hack...
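For reference, the quoted hack amounts to something like the stub below. This is only a sketch of the convention the archive message describes; the real jffs2_compress() in the source tree takes different parameters, so treat the signature as hypothetical.

```c
#include <stddef.h>

/* Hypothetical stub of the hack quoted above: a jffs2_compress()
 * replacement that always reports "no compression achieved", so every
 * node is written raw. Returning zero is the "store uncompressed"
 * convention the archive message refers to. */
static unsigned char jffs2_compress_stub(const unsigned char *data_in,
                                         unsigned char *cdata_out,
                                         size_t datalen, size_t *cdatalen)
{
    (void)data_in;
    (void)cdata_out;
    (void)datalen;
    *cdatalen = 0;  /* no compressed output produced */
    return 0;       /* compression type 0: none */
}
```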
Regards,
John
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Lost space on JFFS2 partition
2003-08-28 14:25 ` John Hall
@ 2003-08-28 14:46 ` David Woodhouse
0 siblings, 0 replies; 8+ messages in thread
From: David Woodhouse @ 2003-08-28 14:46 UTC (permalink / raw)
To: John Hall; +Cc: linux-mtd
On Thu, 2003-08-28 at 15:25 +0100, John Hall wrote:
> 2.4.18 is considered ancient? :(
March 2002? Yes, that's ancient.
> I've discovered the backward-compat stuff - seems to mainly be 'BUG_ON'
> and 'likely'. I did get some compiler warnings about %z in printf
> strings - is this just gcc 2.95.3 not being aware of them?
Get lib/vsprintf.c from a current kernel; that kernel doesn't grok %z
either. Then the warning from gcc is a harmless false positive.
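The %z modifier in question is the C99 way to print a size_t, as in this small illustrative helper; a C99-aware compiler and libc handle it fine, while gcc 2.95's format checking predates it and warns.

```c
#include <stdio.h>

/* %zu is the C99 length modifier for size_t. Older compilers such as
 * gcc 2.95 warn that the format is unknown, but once the formatting
 * code (the kernel's lib/vsprintf.c, or any C99 libc) understands %z,
 * the warning is a harmless false positive. */
static int format_size(char *out, size_t cap, size_t n)
{
    return snprintf(out, cap, "%zu bytes", n);
}
```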
> I found this in the archives:
> "Just replace jffs2_{de,}compress() with NOPs. Return zero from
> jffs2_compress()", which seems to be more of a hack...
Yeah -- I want to make it a per-file and/or per-directory ioctl to
disable compression, but haven't got round to it yet.
--
dwmw2
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2003-08-28 14:46 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-08-28 9:50 Lost space on JFFS2 partition John Hall
2003-08-28 10:23 ` David Woodhouse
2003-08-28 10:28 ` John Hall
2003-08-28 11:27 ` David Woodhouse
2003-08-28 13:58 ` John Hall
2003-08-28 14:15 ` David Woodhouse
2003-08-28 14:25 ` John Hall
2003-08-28 14:46 ` David Woodhouse