* ubifs, ubiblk(formatted with vfat) and yaffs2 test.
@ 2008-05-30 6:01 KeunO Park
2008-05-30 6:33 ` Artem Bityutskiy
2008-10-24 11:41 ` Artem Bityutskiy
0 siblings, 2 replies; 19+ messages in thread
From: KeunO Park @ 2008-05-30 6:01 UTC (permalink / raw)
To: linux-mtd
Hello, I am a newbie here.
I am working on embedded devices, and I have handled several kinds of flash
file systems, such as yaffs2, jffs2, and rfs on OneNAND/NAND, and tffs on M-DOC.
As you know, in the mobile-phone territory, the mass storage class (MSC)
function has become a basic feature.
But the well-known Linux flash file systems do not support this function,
except ubiblk.
(OK, rfs supports MSC, but it is not open-sourced.)
I need both ubifs (for the system partition) and ubiblk (for the user
partition), so I tested ubifs and ubiblk.
Before you look at these results, remember that this is just a report of my test.
Some days ago, I tested the performance of ubifs, ubiblk, and yaffs2.
Here are the results.
[test board]
cpu:s3c2448 400MHz sdram:64MB nand:128MB
The file being copied is 10MB in size and was created with /dev/urandom,
so I think there may be some disadvantage for ubifs when using a compressor.
[write test]
yaffs2
write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
load avg right after copy&sync: 0.03 -> 0.11
ubifs (LZO)
write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
load avg right after copy&sync: 0.03 -> 0.53
ubifs (ZLIB)
write: 27.17s, 27.18s, 27.21s avg:27.18s (367KB/s)
load avg right after copy&sync: 0.03 -> 0.80
ubifs (No Compression)
write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
load avg right after copy&sync: 0.03 -> 0.43
ubiblk(vfat mount)
read: 0.46s, 0.47s, 0.46s avg: 0.463s (21.5MB/s)
write: 12.13s, 14.95s, 12.61s avg:13.23s (755KB/s)
load avg right after copy&sync: 0.02 -> 0.31
From the above results, it seems that there is some overhead in UBI.
PS: I am not good at English, and English is not my native language,
so please excuse my awkward sentences. :-)
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 6:01 ubifs, ubiblk(formatted with vfat) and yaffs2 test KeunO Park
@ 2008-05-30 6:33 ` Artem Bityutskiy
2008-05-30 7:15 ` KeunO Park
2008-10-24 11:41 ` Artem Bityutskiy
1 sibling, 1 reply; 19+ messages in thread
From: Artem Bityutskiy @ 2008-05-30 6:33 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
On Fri, 2008-05-30 at 15:01 +0900, KeunO Park wrote:
> I am working in embedded device. and I handled some kind of flash
> filesystem like yaffs2, jffs2, rfs on ONENAND/NAND and tffs on MDOC.
> you know, in territory of mobile phone, mass storage class func
> becomes basic function.
Yes, yaffs and jffs2 are a "special" class of file-systems, and they were not
designed to be what you call "mass storage class func". They should
rather be used as the root file system on "internal" flash, which is smaller
than the "mass memory", and where you store your core libraries, etc.
> yaffs2
> write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
> load avg right after copy&sync: 0.03 -> 0.11
>
> ubifs (LZO)
> write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
> load avg right after copy&sync: 0.03 -> 0.53
>
> ubifs (ZLIB)
> write : 27.17s, 27.18s, 27.21s avg:27.18 (367KB/s)
> load avg right after copy&sync: 0.03 -> 0.80
>
> ubifs (No Compression)
> write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
> load avg right after copy&sync: 0.03 -> 0.43
We beat yaffs2? Sounds nice :-)
> ubiblk(vfat mount)
> read: 0.46s, 0.47s, 0.46s avg: 0.463s (21.5MB/s)
> write: 12.13s, 14.95s, 12.61s avg:13.23s (755KB/s)
> load avg right after copy&sync: 0.02 -> 0.31
>
> With above result, it seems that there is some overload in ubi.
I do not really see what is the question you want to ask.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 6:33 ` Artem Bityutskiy
@ 2008-05-30 7:15 ` KeunO Park
2008-05-30 11:34 ` Josh Boyer
2008-05-30 12:02 ` Artem Bityutskiy
0 siblings, 2 replies; 19+ messages in thread
From: KeunO Park @ 2008-05-30 7:15 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
> Yes, yaffs, jffs2 are "special" class of file-systems and they were not
> designed to be what you call "mass storage class func". They should
> rather be used as root file system on "internal" flash, which is smaller
> than "mass memory", where you store your core libraries, etc.
>
>> yaffs2
>> write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
>> load avg right after copy&sync: 0.03 -> 0.11
>>
>> ubifs (LZO)
>> write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
>> load avg right after copy&sync: 0.03 -> 0.53
>>
>> ubifs (ZLIB)
>> write : 27.17s, 27.18s, 27.21s avg:27.18 (367KB/s)
>> load avg right after copy&sync: 0.03 -> 0.80
>>
>> ubifs (No Compression)
>> write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
>> load avg right after copy&sync: 0.03 -> 0.43
> We beat yaffs2? Sounds nice :-)
According to the above results (and only with the no-compression option :-), yes.
But I think the load average result is much higher than yaffs2's.
This may lead to critical situations, especially on a mobile phone.
For example, camera or camcorder applications in mobile devices need to write
the encoded display data steadily to NAND, so because of the high load average,
we may get a stuttering display.
(I experienced this before with the combination of a 416MHz pxa27x and
M-DOC tffs.)
So I would like a more lightweight UBI.
regards,
KeunO Park
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 7:15 ` KeunO Park
@ 2008-05-30 11:34 ` Josh Boyer
2008-05-30 12:51 ` KeunO Park
2008-05-30 12:02 ` Artem Bityutskiy
1 sibling, 1 reply; 19+ messages in thread
From: Josh Boyer @ 2008-05-30 11:34 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
On Fri, 30 May 2008 16:15:30 +0900
"KeunO Park" <lastnite@gmail.com> wrote:
> so I wanna more light ubi.
Send patches?
josh
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 7:15 ` KeunO Park
2008-05-30 11:34 ` Josh Boyer
@ 2008-05-30 12:02 ` Artem Bityutskiy
2008-05-30 12:05 ` Artem Bityutskiy
1 sibling, 1 reply; 19+ messages in thread
From: Artem Bityutskiy @ 2008-05-30 12:02 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
On Fri, 2008-05-30 at 16:15 +0900, KeunO Park wrote:
> > Yes, yaffs, jffs2 are "special" class of file-systems and they were not
> > designed to be what you call "mass storage class func". They should
> > rather be used as root file system on "internal" flash, which is smaller
> > than "mass memory", where you store your core libraries, etc.
> >
> >> yaffs2
> >> write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
> >> load avg right after copy&sync: 0.03 -> 0.11
> >>
> >> ubifs (LZO)
> >> write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
> >> load avg right after copy&sync: 0.03 -> 0.53
> >>
> >> ubifs (ZLIB)
> >> write : 27.17s, 27.18s, 27.21s avg:27.18 (367KB/s)
> >> load avg right after copy&sync: 0.03 -> 0.80
> >>
> >> ubifs (No Compression)
> >> write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
> >> load avg right after copy&sync: 0.03 -> 0.43
> > We beat yaffs2? Sounds nice :-)
>
> according to the above result(and only with no compressor option :-), yes.
> but, I think that load avg result is too much higher than yaffs2's.
So what you do is: you write a large file, which does not go to the flash
but instead sits in the kernel buffers (the page cache); then you call
fsync(), which causes _massive_ page-cache write-back (flushing) and
consumes a lot of CPU.
Yaffs2 is synchronous, so you cannot compare it with UBIFS like this.
Make your test file synchronous, then compare. With a synchronous file you
will end up with a much slower write speed, but sync will not cause sudden
CPU usage peaks; everything will be smoother.
I think opening the file with O_SYNC should do the job. Or 'chattr +S'
should do it. Or mount UBIFS with the '-o sync' flag to make everything
synchronous, but then everything will become slow as well.
> For example, camera or camcorder application in mobile devices need to write
> the encoded display data steadily to NAND. so because of the high load avg,
The application should be clever enough to be aware of file buffering
and call fsync() after writing some amount of data. Or make the file
synchronous. Or assign a lower priority to the camera task.
> so I wanna more light ubi.
It's not about UBI; it is about UBIFS, I think, namely about the
buffering. You should be aware of this and be careful with it.
UBIFS's fast write speed is not magic. It comes from caching data and
postponing writes, which is a well-known and nice optimization. In some
situations this may be undesirable, though. So UBIFS is more complex than
yaffs and jffs2 in this respect. Applications which work with UBIFS
should be more clever, and programmers should be aware of things like
this:
$ man 2 write:
...
"A successful return from write() does not make any guarantee that data
has been committed to disk. In fact, on some buggy implementations, it
does not even guarantee that space has successfully been reserved for
the data. The only way to be sure is to call fsync(2) after you are
done writing all your data."
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 12:02 ` Artem Bityutskiy
@ 2008-05-30 12:05 ` Artem Bityutskiy
2008-05-30 13:00 ` KeunO Park
0 siblings, 1 reply; 19+ messages in thread
From: Artem Bityutskiy @ 2008-05-30 12:05 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
On Fri, 2008-05-30 at 15:02 +0300, Artem Bityutskiy wrote:
> On Fri, 2008-05-30 at 16:15 +0900, KeunO Park wrote:
> > > Yes, yaffs, jffs2 are "special" class of file-systems and they were not
> > > designed to be what you call "mass storage class func". They should
> > > rather be used as root file system on "internal" flash, which is smaller
> > > than "mass memory", where you store your core libraries, etc.
> > >
> > >> yaffs2
> > >> write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
> > >> load avg right after copy&sync: 0.03 -> 0.11
> > >>
> > >> ubifs (LZO)
> > >> write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
> > >> load avg right after copy&sync: 0.03 -> 0.53
> > >>
> > >> ubifs (ZLIB)
> > >> write : 27.17s, 27.18s, 27.21s avg:27.18 (367KB/s)
> > >> load avg right after copy&sync: 0.03 -> 0.80
> > >>
> > >> ubifs (No Compression)
> > >> write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
> > >> load avg right after copy&sync: 0.03 -> 0.43
> > > We beat yaffs2? Sounds nice :-)
> >
> > according to the above result(and only with no compressor option :-), yes.
> > but, I think that load avg result is too much higher than yaffs2's.
>
> So what you do is you write a large file, this does not go to the flash
> but instead sits in the kernel buffers, in the page cache, then you call
> fsync() which causes _massive_ page-cache write-back (flushing) and
> consume a lot of CPU.
But I have to add that, of course, YAFFS/JFFS2 are more light-weight
file-systems, because they do not maintain the FS index on the flash
media. UBIFS does, and this costs extra CPU cycles and extra I/O.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 11:34 ` Josh Boyer
@ 2008-05-30 12:51 ` KeunO Park
0 siblings, 0 replies; 19+ messages in thread
From: KeunO Park @ 2008-05-30 12:51 UTC (permalink / raw)
To: Josh Boyer; +Cc: linux-mtd
2008/5/30, Josh Boyer <jwboyer@linux.vnet.ibm.com>:
> On Fri, 30 May 2008 16:15:30 +0900
> "KeunO Park" <lastnite@gmail.com> wrote:
>
> > so I wanna more light ubi.
>
> Send patches?
>
Oh, I'm sorry. I meant that I'll wait for the next release in mainline.
That's why I'm a newbie here. :-)
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 12:05 ` Artem Bityutskiy
@ 2008-05-30 13:00 ` KeunO Park
2008-05-30 13:49 ` Artem Bityutskiy
0 siblings, 1 reply; 19+ messages in thread
From: KeunO Park @ 2008-05-30 13:00 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
2008/5/30, Artem Bityutskiy <dedekind@infradead.org>:
> On Fri, 2008-05-30 at 15:02 +0300, Artem Bityutskiy wrote:
> > On Fri, 2008-05-30 at 16:15 +0900, KeunO Park wrote:
> > > > Yes, yaffs, jffs2 are "special" class of file-systems and they were not
> > > > designed to be what you call "mass storage class func". They should
> > > > rather be used as root file system on "internal" flash, which is smaller
> > > > than "mass memory", where you store your core libraries, etc.
> > > >
> > > >> yaffs2
> > > >> write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
> > > >> load avg right after copy&sync: 0.03 -> 0.11
> > > >>
> > > >> ubifs (LZO)
> > > >> write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
> > > >> load avg right after copy&sync: 0.03 -> 0.53
> > > >>
> > > >> ubifs (ZLIB)
> > > >> write : 27.17s, 27.18s, 27.21s avg:27.18 (367KB/s)
> > > >> load avg right after copy&sync: 0.03 -> 0.80
> > > >>
> > > >> ubifs (No Compression)
> > > >> write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
> > > >> load avg right after copy&sync: 0.03 -> 0.43
> > > > We beat yaffs2? Sounds nice :-)
> > >
> > > according to the above result(and only with no compressor option :-), yes.
> > > but, I think that load avg result is too much higher than yaffs2's.
> >
> > So what you do is you write a large file, this does not go to the flash
> > but instead sits in the kernel buffers, in the page cache, then you call
> > fsync() which causes _massive_ page-cache write-back (flushing) and
> > consume a lot of CPU.
>
> But I have to add that of course, YAFFS/JFFS2 are more light-weight
> file-system, because they do not maintain the FS index on the flash
> media. UBIFS does and this costs extra CPU cycles and extra I/O.
>
Actually, I did write & fsync the file during the test. :-)
Anyway, thank you for your comments.
regards,
KeunO Park.
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 13:00 ` KeunO Park
@ 2008-05-30 13:49 ` Artem Bityutskiy
2008-05-30 15:08 ` KeunO Park
0 siblings, 1 reply; 19+ messages in thread
From: Artem Bityutskiy @ 2008-05-30 13:49 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
On Fri, 2008-05-30 at 22:00 +0900, KeunO Park wrote:
> > But I have to add that of course, YAFFS/JFFS2 are more light-weight
> > file-system, because they do not maintain the FS index on the flash
> > media. UBIFS does and this costs extra CPU cycles and extra I/O.
> >
>
> actually I did write & fsync the file during test. :-)
> anyway thank you for your comment.
Could you please send your test? And how do you measure the load average?
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 13:49 ` Artem Bityutskiy
@ 2008-05-30 15:08 ` KeunO Park
2008-06-02 6:10 ` Artem Bityutskiy
0 siblings, 1 reply; 19+ messages in thread
From: KeunO Park @ 2008-05-30 15:08 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
2008/5/30, Artem Bityutskiy <dedekind@infradead.org>:
> On Fri, 2008-05-30 at 22:00 +0900, KeunO Park wrote:
> > > But I have to add that of course, YAFFS/JFFS2 are more light-weight
> > > file-system, because they do not maintain the FS index on the flash
> > > media. UBIFS does and this costs extra CPU cycles and extra I/O.
> > >
> >
> > actually I did write & fsync the file during test. :-)
> > anyway thank you for your comment.
>
> Could you please send your test? And how you measure the load average?
>
OK, here is my simple test.
1. Make a randomly-dumped file in the SDRAM area.
# cd /dev/shm/tmp
# dd if=/dev/urandom of=test.out bs=1M count=10
Because ubifs uses a compressor, I made a random data file for the test.
2. Make another shell script.
# cat test_write.sh
#!/bin/sh
/bin/cp /dev/shm/tmp/test.out /nand_partition
/bin/sync
3. Check that no other application is running, and write down the load
average using 'top'.
4. Use the 'time' utility to find out how long it takes.
# time ./test_write.sh
5. Write down the load average again using 'top'.
Using 'top' may be an inadequate choice, but I think it is
more or less helpful.
And I have some other results for you. :-)
They may be interesting.
1. ubifs (LZO) mount stability test
1.1 While ubifs was writing (copying) something to NAND, I cut off the power,
then checked that ubifs was all right.
I tested this 100 times; there was no problem.
1.2 While ubifs was mounting, I cut off the power,
then checked that ubifs was all right.
I tested this 100 times; no problem. :-)
2. mount speed test
2.1.1 ubiblk partition (90MiB size): right after formatting vfat
real 0m 0.07s
user 0m 0.00s
sys 0m 0.02s
2.1.2 ubiblk partition (90MiB size): after copying 6800 files (almost 80MiB),
check the mount speed.
real 0m 0.07s
user 0m 0.00s
sys 0m 0.01s
2.2.1 ubifs (LZO, 90MiB size): right after formatting ubifs
real 0m 0.13s
user 0m 0.00s
sys 0m 0.13s
2.2.2 ubifs (LZO, 90MiB size): after copying 6800 files (almost 80MiB),
check the mount speed.
real 0m 0.16s
user 0m 0.00s
sys 0m 0.16s
3.1 While I was running 'cp test.out /nand_partition; sync' on a ubifs partition
with 6800 files, I suddenly cut off the power, then checked the speed at the
next mount.
real 0m 3.64s
user 0m 0.00s
sys 0m 1.32s
3.2 While I was running 'cp test.out /nand_partition; sync' on a clean ubifs
partition with a few files, I suddenly cut off the power, then checked
the speed at the next mount.
real 0m 1.62s
user 0m 0.00s
sys 0m 0.45s
Are these results reasonable?
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
[not found] <5ed5c4730805291914h187e0b0et2de31d595b52f125@mail.gmail.com>
@ 2008-05-30 15:17 ` KeunO Park
0 siblings, 0 replies; 19+ messages in thread
From: KeunO Park @ 2008-05-30 15:17 UTC (permalink / raw)
To: linux-mtd
>
> [test board]
> cpu:s3c2448 400MHz sdram:64MB nand:128MB
>
> the copying file is 10MB size and created with /dev/urandom.
> so I think that there may be some disadvantage to ubifs using compressor.
>
> [write test]
>
> yaffs2
> write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
> load avg right after copy&sync: 0.03 -> 0.11
>
> ubifs (LZO)
> write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
> load avg right after copy&sync: 0.03 -> 0.53
>
> ubifs (ZLIB)
> write : 27.17s, 27.18s, 27.21s avg:27.18 (367KB/s)
> load avg right after copy&sync: 0.03 -> 0.80
>
> ubifs (No Compression)
> write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
> load avg right after copy&sync: 0.03 -> 0.43
>
> ubiblk(vfat mount)
> read: 0.46s, 0.47s, 0.46s avg: 0.463s (21.5MB/s)
> write: 12.13s, 14.95s, 12.61s avg:13.23s (755KB/s)
> load avg right after copy&sync: 0.02 -> 0.31
>
> With above result, it seems that there is some overload in ubi.
>
With the same device, I did the same test on a micro SD card.
write: 4.98s, 5.43s, 6.34s avg:5.58s (1792KB/s)
load avg right after copy&sync: 0.03 -> 0.33
Comparing ubifs with microSD may not be very meaningful, but I think it
will be helpful to someone on this mailing list.
regards,
KeunO Park.
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test
[not found] <bae050c10806012138p6167f30eu9450563efa5429ab@mail.gmail.com>
@ 2008-06-02 5:55 ` KeunO Park
2008-06-02 6:06 ` Nancy
0 siblings, 1 reply; 19+ messages in thread
From: KeunO Park @ 2008-06-02 5:55 UTC (permalink / raw)
To: Nancy; +Cc: linux-mtd
2008/6/2 Nancy <nancydreaming@gmail.com>:
> Hi,
> Thank you for sharing your test report!
> For ubiblk, 'cp; sync' is not enough, because ubiblk holds back an LEB
> in RAM until either another logical block number wants to use the
> write cache (in ubiblk), or a block-device-layer "release" call
> drives the LEB in the write cache to be written to NAND flash.
> For FAT, when it writes some files it usually modifies the FAT table
> first, then the file contents, and finally it still needs to change
> something in the FAT table, or whatever else belongs to the
> file-system metadata. That means the last LEB held back in RAM
> contains the most important data (the file-system metadata). If an
> unclean reboot occurs, a lot of data may be lost, even though UBI
> tolerates unclean reboots. In this case, you should run "dosfsck
> -a /dev/ubiblockN" before you mount ubiblock again, and see what you
> have lost!
> To be safe, you should do: cp ...; sync; flushcache /dev/ubiblockN
> I'm not sure the block-device layer's ioctl BLKFLSBUF is meant to be
> used this way. Is there any command like sync which syncs not just the
> buffer cache but also the buffer in the driver layer (i.e. calls that
> ioctl)? :-)
>
> /* flushcache.c */
> #include <sys/ioctl.h>
> #include <linux/fs.h>
> #include <fcntl.h>
> #include <stdio.h>
> #include <unistd.h>
>
> int main(int argc, char **argv)
> {
>         int fd;
>
>         if (argc != 2) {
>                 printf("Usage: %s <device path>\n", argv[0]);
>                 return -1;
>         }
>
>         if ((fd = open(argv[1], O_RDONLY)) == -1) {
>                 printf("Open %s failed\n", argv[1]);
>                 return -1;
>         }
>
>         if (ioctl(fd, BLKFLSBUF) == -1)
>                 printf("flush cache failed\n");
>
>         close(fd);
>         return 0;
> }
>
> --
> Best wishes,
> Nancy
>
Hi. I've tested the speed & load average again (I used the flushcache
program that you attached).
Here is the new result.
The major difference in this result is the load average:
it is even higher than ubifs's with the ZLIB compressor.
[write test]
>yaffs2
>write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
>load avg right after copy&sync: 0.03 -> 0.11
>ubifs (LZO)
>write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
>load avg right after copy&sync: 0.03 -> 0.53
>ubifs (ZLIB)
>write : 27.17s, 27.18s, 27.21s avg:27.18 (367KB/s)
>load avg right after copy&sync: 0.03 -> 0.80
>ubifs (No Compression)
>write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
>load avg right after copy&sync: 0.03 -> 0.43
ubiblk(vfat mount)
write: 15.39s, 15.82s, 16.03s avg:15.74s (635KB/s)
load avg right after copy&sync: 0.03 -> 0.85
thank you.
regards,
KeunO Park.
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test
2008-06-02 5:55 ` KeunO Park
@ 2008-06-02 6:06 ` Nancy
0 siblings, 0 replies; 19+ messages in thread
From: Nancy @ 2008-06-02 6:06 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
> ubiblk(vfat mount)
> write: 15.39s, 15.82s, 16.03s avg:15.74s (635KB/s)
> load avg right after copy&sync: 0.03 -> 0.85
That is a big difference! I hope someone fixes it, or that the new FAL comes quickly!
--
Best wishes,
Nancy
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 15:08 ` KeunO Park
@ 2008-06-02 6:10 ` Artem Bityutskiy
2008-06-04 4:06 ` KeunO Park
0 siblings, 1 reply; 19+ messages in thread
From: Artem Bityutskiy @ 2008-06-02 6:10 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
Hi,
On Sat, 2008-05-31 at 00:08 +0900, KeunO Park wrote:
> 2008/5/30, Artem Bityutskiy <dedekind@infradead.org>:
> > On Fri, 2008-05-30 at 22:00 +0900, KeunO Park wrote:
> > > > But I have to add that of course, YAFFS/JFFS2 are more light-weight
> > > > file-system, because they do not maintain the FS index on the flash
> > > > media. UBIFS does and this costs extra CPU cycles and extra I/O.
> > > >
> > >
> > > actually I did write & fsync the file during test. :-)
> > > anyway thank you for your comment.
> >
> > Could you please send your test? And how you measure the load average?
> >
>
> ok. here is my simple test.
>
> 1. make a random dumped file in sdram area.
> # cd /dev/shm/tmp
> # dd if=/dev/urandom of=test.out bs=1M count=10
> because ubifs uses compressor, I made a random data file for test.
>
> 2. make another shell script.
> # cat test_write.sh
> #!/bin/sh
> /bin/cp /dev/shm/tmp/test.out /nand_partition
> /bin/sync
In my previous mail, I described why the load jumps up when you do it
like this; please refer to that mail again.
I suggest you mount the file-system with the '-o sync' option and measure
the load average.
> 3. check that there is no other application running.
> write down the load avg using 'top'
>
> 4. just use 'time' utility to know how long it takes.
> # time ./test_write.sh
>
> 5. write down the load avg again using 'top'.
>
> using 'top' may be inadaquate choice. but, I think that
> this would be helpful more or less.
Yes, using 'top' is not nice. But if the file is large enough, you
watch 'top' for, say, several minutes, and then calculate the average
accurately, this should be good enough.
> 3.1 while I run 'cp test.out /nand_partition;sync' in ubifs partition
> with 6800 files, suddenly I cut off the power. then check the speed in
> next mount time.
> real 0m 3.64s
> user 0m 0.00s
> sys 0m 1.32s
> 3.2 while I run 'cp test.out /nand_partition;sync' in clean ubifs
> partition with a few files, suddenly I cut off the power. then check
> the speed in next mount time.
> real 0m 1.62s
> user 0m 0.00s
> sys 0m 0.45s
Mount time depends on how full your journal is, so it will be slightly
different each time you reboot uncleanly.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test
2008-06-02 5:29 References:ubifs, " Nancy
@ 2008-06-02 6:18 ` Artem Bityutskiy
2008-06-02 6:47 ` Nancy
0 siblings, 1 reply; 19+ messages in thread
From: Artem Bityutskiy @ 2008-06-02 6:18 UTC (permalink / raw)
To: Nancy; +Cc: linux-mtd
[Fixed abused mail subject]
On Mon, 2008-06-02 at 13:29 +0800, Nancy wrote:
> Thank you for sharing your test report!
> For ubiblk, cp; sync; is not enough, cause ubiblk hold back a LEB
> in ram until another logical block number LEB wants to use the
> writecache(in ubiblk) or an block device layer "release" call will
> drive the LEB in writecache to be written on Nand Flash.
> For FAT, when it write some files, usually, it modify FAT table
> first, then goes the file contends, finally still need to change
> something in FAT table or whatever it is which belongs to the
> Filesystem meta data. That means, the last LEB hold back in ram
> contains the most important data (filesystem meta data). If there
> comes unclean reboot. That may lost lots of data. Though UBI tolerant
> unclean reboot. In this case, you should use tool "dosfsck
> /dev/ubiblockN -a" before you mount ubiblock again. And see what you
> have lost!
> To be safe, you should do : cp...; sync; flushcache /dev/ubiblockN
> I'm not sure block device layer's Ioctl "BLKFLSBUF" use in this
> way. Is there any command like sync, not just sync the buffer cache,
> but also the buffer in dirver layer( call that ioctl :-)
Sorry, but these things are absolutely unacceptable and mean your
implementation is broken.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test
2008-06-02 6:18 ` ubifs, " Artem Bityutskiy
@ 2008-06-02 6:47 ` Nancy
0 siblings, 0 replies; 19+ messages in thread
From: Nancy @ 2008-06-02 6:47 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
On 6/2/08, Artem Bityutskiy <dedekind@infradead.org> wrote:
> [Fixed abused mail subject]
I wish I knew how to fix it. Is there any mail client that supplies
References: or In-Reply-To: headers when you write a new e-mail? Or is
there some special operation to do this?
> On Mon, 2008-06-02 at 13:29 +0800, Nancy wrote:
> > Thank you for sharing your test report!
> > For ubiblk, cp; sync; is not enough, cause ubiblk hold back a LEB
> > in ram until another logical block number LEB wants to use the
> > writecache(in ubiblk) or an block device layer "release" call will
> > drive the LEB in writecache to be written on Nand Flash.
> > For FAT, when it write some files, usually, it modify FAT table
> > first, then goes the file contends, finally still need to change
> > something in FAT table or whatever it is which belongs to the
> > Filesystem meta data. That means, the last LEB hold back in ram
> > contains the most important data (filesystem meta data). If there
> > comes unclean reboot. That may lost lots of data. Though UBI tolerant
> > unclean reboot. In this case, you should use tool "dosfsck
> > /dev/ubiblockN -a" before you mount ubiblock again. And see what you
> > have lost!
> > To be safe, you should do : cp...; sync; flushcache /dev/ubiblockN
> > I'm not sure block device layer's Ioctl "BLKFLSBUF" use in this
> > way. Is there any command like sync, not just sync the buffer cache,
> > but also the buffer in dirver layer( call that ioctl :-)
>
> Sorry, but these things are absolutely unacceptable and mean your
> implementation is broken.
That's OK. For me, it is better to have a faulty one than nothing at
all. I hope someone will fix it or create a new one instead!
Because this is mainly for FAT, losing some user data is not that
unacceptable, since most of the user data has a backup on the PC. Just make
sure that when you download some files from the PC through USB, you unload
the USB device properly; the script will close the ubiblock device. That is safe.
When you want to write something to the ubiblock device on the target
board, you should be aware of the flushcache issue. Unclean reboots
happen rarely.
At least this flaw is accepted by my customers :-)
--
Best wishes,
Nancy
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-06-02 6:10 ` Artem Bityutskiy
@ 2008-06-04 4:06 ` KeunO Park
2008-06-04 8:05 ` Artem Bityutskiy
0 siblings, 1 reply; 19+ messages in thread
From: KeunO Park @ 2008-06-04 4:06 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
>> ok. here is my simple test.
>>
>> 1. make a random dumped file in sdram area.
>> # cd /dev/shm/tmp
>> # dd if=/dev/urandom of=test.out bs=1M count=10
>> because ubifs uses compressor, I made a random data file for test.
>>
>> 2. make another shell script.
>> # cat test_write.sh
>> #!/bin/sh
>> /bin/cp /dev/shm/tmp/test.out /nand_partition
>> /bin/sync
>
> In my previous mail, I described why the load will jump up when you do
> like this, please refer to that mail again.
>
> I suggest you to mount the file-system with '-o sync' option and measure
> load average.
>
Hi, I tested again with '-o sync'. :-)
UBIFS LZO (mounted with '-o sync,noatime' option)
16.93sec 590KB/s
load avg : 0.00 -> 0.36
UBIFS No Compressor (mounted with '-o sync,noatime' option)
13.38sec 747KB/s
load avg : 0.00 -> 0.29
YAFFS2 (mounted with '-o sync,noatime' option)
12.25sec 816KB/s
load avg : 0.00 -> 0.15
regards,
KeunO Park.
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-06-04 4:06 ` KeunO Park
@ 2008-06-04 8:05 ` Artem Bityutskiy
0 siblings, 0 replies; 19+ messages in thread
From: Artem Bityutskiy @ 2008-06-04 8:05 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
On Wed, 2008-06-04 at 13:06 +0900, KeunO Park wrote:
> Hi, I tested again with '-o sync'. :-)
Note, these load average numbers do not mean much, and you cannot draw
conclusions about whether the FS will prevent your camera application from
being executed.
>
> UBIFS LZO (mounted with '-o sync,noatime' option)
> 16.93sec 590KB/s
> load avg : 0.00 -> 0.36
>
> UBIFS No Compressor (mounted with '-o sync,noatime' option)
> 13.38sec 747KB/s
> load avg : 0.00 -> 0.29
>
> YAFFS2 (mounted with '-o sync,noatime' option)
> 12.25sec 816KB/s
> load avg : 0.00 -> 0.15
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: ubifs, ubiblk(formatted with vfat) and yaffs2 test.
2008-05-30 6:01 ubifs, ubiblk(formatted with vfat) and yaffs2 test KeunO Park
2008-05-30 6:33 ` Artem Bityutskiy
@ 2008-10-24 11:41 ` Artem Bityutskiy
1 sibling, 0 replies; 19+ messages in thread
From: Artem Bityutskiy @ 2008-10-24 11:41 UTC (permalink / raw)
To: KeunO Park; +Cc: linux-mtd
On Fri, 2008-05-30 at 15:01 +0900, KeunO Park wrote:
> [write test]
>
> yaffs2
> write: 10.20s, 12.09s, 12.24s avg:11.51s (868KB/s)
> load avg right after copy&sync: 0.03 -> 0.11
>
> ubifs (LZO)
> write: 14.45s, 14.40s, 14.45s avg:14.43s (693KB/s)
> load avg right after copy&sync: 0.03 -> 0.53
>
> ubifs (ZLIB)
> write : 27.17s, 27.18s, 27.21s avg:27.18 (367KB/s)
> load avg right after copy&sync: 0.03 -> 0.80
>
> ubifs (No Compression)
> write: 6.69s, 10.90s, 10.98s avg:9.52s (1050KB/s)
> load avg right after copy&sync: 0.03 -> 0.43
>
> ubiblk(vfat mount)
> read: 0.46s, 0.47s, 0.46s avg: 0.463s (21.5MB/s)
> write: 12.13s, 14.95s, 12.61s avg:13.23s (755KB/s)
> load avg right after copy&sync: 0.02 -> 0.31
>
> With above result, it seems that there is some overload in ubi.
Hi,
you complained about the UBIFS load average. We have added a mount
option which disables data CRC checking. You may try it to improve
the situation. I have just added this piece of documentation:
http://www.linux-mtd.infradead.org/doc/ubifs.html#L_checksumming
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
end of thread, other threads:[~2008-10-24 11:43 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2008-05-30 6:01 ubifs, ubiblk(formatted with vfat) and yaffs2 test KeunO Park
2008-05-30 6:33 ` Artem Bityutskiy
2008-05-30 7:15 ` KeunO Park
2008-05-30 11:34 ` Josh Boyer
2008-05-30 12:51 ` KeunO Park
2008-05-30 12:02 ` Artem Bityutskiy
2008-05-30 12:05 ` Artem Bityutskiy
2008-05-30 13:00 ` KeunO Park
2008-05-30 13:49 ` Artem Bityutskiy
2008-05-30 15:08 ` KeunO Park
2008-06-02 6:10 ` Artem Bityutskiy
2008-06-04 4:06 ` KeunO Park
2008-06-04 8:05 ` Artem Bityutskiy
2008-10-24 11:41 ` Artem Bityutskiy
[not found] <5ed5c4730805291914h187e0b0et2de31d595b52f125@mail.gmail.com>
2008-05-30 15:17 ` KeunO Park
-- strict thread matches above, loose matches on Subject: below --
2008-06-02 5:29 References:ubifs, " Nancy
2008-06-02 6:18 ` ubifs, " Artem Bityutskiy
2008-06-02 6:47 ` Nancy
[not found] <bae050c10806012138p6167f30eu9450563efa5429ab@mail.gmail.com>
2008-06-02 5:55 ` KeunO Park
2008-06-02 6:06 ` Nancy