* UBIFS Question
@ 2009-07-10 18:43 Laurent .
2009-07-10 20:01 ` Corentin Chary
` (2 more replies)
0 siblings, 3 replies; 17+ messages in thread
From: Laurent . @ 2009-07-10 18:43 UTC (permalink / raw)
To: dedekind, artem.bityutski; +Cc: linux-mtd
Hi Artem,
I am currently working on an embedded Linux project which has an ARM9 chip (NXP’s 180 MHz LPC3131) with a NAND flash controller that supports 5-bit or 8-bit HW ECC.
I have a 256 MB SLC NAND and run Linux 2.6.28.2.
It is important to know how my product is used: the user will always power it off brutally. It’s not as if I can shut it down nicely, do a sync, etc.
I would have a few questions, hopefully you can answer them.
1) The LPC3131 supports a USB DFU mode which basically allows an end-user to upload a binary file into the chip's internal RAM and execute it.
I have coded a small USB application that I upload via DFU which allows me to completely control the NAND flash from the PC.
My goal was to create an application used in the factory that programs a blank NAND Flash.
It works:
With respect to bad blocks, I burn the LPC313x boot block, my own small uImage bootloader, the NAND BBT located at the end of the flash, the Linux uImage, and a UBI image that I generated doing
mkfs.ubifs -x none -r lolo -m 2048 -e 129024 -c 2047 -o ubifs.img
ubinize.cfg is:
[ubifs]
mode=ubi
image=ubifs.img
vol_id=0
vol_size=200MiB
vol_type=dynamic
vol_name=rootfs
vol_alignment=1
vol_flags=autoresize
ubinize -o ubi.img -m 2048 -p 128KiB -s 512 ubinize.cfg
I am not using LZO compression because I have seen that mount time is faster without it, and since my file system contents are already compressed files, this does not buy me much room.
But maybe I am missing something and LZO compression is really advised?
2) I boot my embedded system ( I use OpenWrt )
root@OpenWrt:/# ubiattach /dev/ubi_ctrl -m 5
UBI: attaching mtd5 to ubi0
UBI: physical eraseblock size: 131072 bytes (128 KiB)
UBI: logical eraseblock size: 129024 bytes
UBI: smallest flash I/O unit: 2048
UBI: sub-page size: 512
UBI: VID header offset: 512 (aligned 512)
UBI: data offset: 2048
UBI: volume 0 ("rootfs") re-sized from 1626 to 1854 LEBs
UBI: attached mtd5 to ubi0
UBI: MTD device name: "lpc313x-rootfs"
UBI: MTD device size: 235 MiB
UBI: number of good PEBs: 1876
UBI: number of bad PEBs: 6
UBI: max. allowed volumes: 128
UBI: wear-leveling threshold: 4096
UBI: number of internal volumes: 1
UBI: number of user volumes: 1
UBI: available PEBs: 0
UBI: total number of reserved PEBs: 1876
UBI: number of PEBs reserved for bad PEB handling: 18
UBI: max/mean erase counter: 1/0
UBI: background thread "ubi_bgt0d" started, PID 2059
UBI device number 0, total 1876 LEBs (242049024 bytes, 230.8 MiB), available 0 LEBs (0 bytes), LEB size 129024 bytes (126.0 KiB)
root@OpenWrt:/# mount -t ubifs ubi0 /mnt
UBIFS: mounted UBI device 0, volume 0, name "rootfs"
UBIFS: file system size: 237791232 bytes (232218 KiB, 226 MiB, 1843 LEBs)
UBIFS: journal size: 9033728 bytes (8822 KiB, 8 MiB, 71 LEBs)
UBIFS: media format: 4 (latest is 4)
UBIFS: default compressor: no compression
UBIFS: reserved for root: 0 bytes (0 KiB)
root@OpenWrt:/# find /mnt
/mnt
/mnt/img
/mnt/img/00783_hongkongbynight_1920x1200.jpg
/mnt/img/00678_vancouverdusk_1920x1200.jpg
/mnt/img/00698_snowmountains_1920x1200.jpg
/mnt/music
/mnt/music/10 F Cuitad & Y Hayat - The Force.mp3
/mnt/music/07 Sakamoto - Forbidden Colours.mp3
/mnt/alphaflight
/mnt/alphaflight/indexOnline.html
/mnt/alphaflight/Lover_Charlie.zsid
/mnt/alphaflight/sid.java
/mnt/alphaflight/retroguard.jar
/mnt/alphaflight/retroguard.log
/mnt/alphaflight/obfuscAFL.bat
/mnt/alphaflight/aflClean.jar
/mnt/alphaflight/index.html
/mnt/alphaflight/sid.class
/mnt/alphaflight/afl.gif
/mnt/alphaflight/afl.jar
/mnt/alphaflight/afl.java
/mnt/alphaflight/afl.class
/mnt/alphaflight/AFL-06.prg
/mnt/alphaflight/indexObuscLocal.html
/mnt/alphaflight/sid_applet.java
/mnt/alphaflight/sid$sidVoice.class
/mnt/alphaflight/aflold.java
In /mnt/alphaflight I generate a small new.txt file with vi, which contains a few lines of characters.
After saving, I immediately power-off brutally.
I power-on again, mount, and the new.txt file is not there...
I recreate the file, I wait 20 seconds or so and power-off brutally.
I power-on again, mount and now I have:
-rw-r--r-- 1 1000 1000 397 Dec 31 2002 indexOnline.html
-rw-r--r-- 1 root root 0 Jan 1 00:00 new.txt
-rw-r--r-- 1 1000 1000 104 Dec 31 2002 obfuscAFL.bat
Size 0 ... I can understand that, since I did not have time to sync.
It’s funny though that the file is present in the directory but the contents are not there?
If I do this programmatically, I presume I can force a sync after I close the file?
Is it safe to do so? Would you know the C API to use to do a sync programmatically?
I generate a new2.txt, wait one minute... reboot; the file new2.txt is still 0 bytes.
I generate a new3.txt, wait two minutes... reboot; now the file new3.txt has the correct size!
So I am wondering: is this background thread "ubi_bgt0d" responsible for automatically syncing periodically? Is this interval adjustable? Do you recommend adjusting it? Like every 10 seconds, isn’t that too much?
Thank you,
Laurent
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: UBIFS Question
2009-07-10 18:43 UBIFS Question Laurent .
@ 2009-07-10 20:01 ` Corentin Chary
2009-07-11 14:55 ` Artem Bityutskiy
2009-07-11 15:54 ` Vitaly Wool
2 siblings, 0 replies; 17+ messages in thread
From: Corentin Chary @ 2009-07-10 20:01 UTC (permalink / raw)
To: Laurent .; +Cc: linux-mtd, artem.bityutski
On Fri, Jul 10, 2009 at 8:43 PM, Laurent .<sid6582@msn.com> wrote:
>
> In /mnt/alphaflight I generate a small new.txt file with vi, which contains a few lines of characters.
> After saving, I immediately power-off brutally.
> I power-on again, mount, and the new.txt file is not there...
>
> I recreate the file, I wait 20 seconds or so and power-off brutally.
>
> I power-on again, mount and now I have:
>
> -rw-r--r-- 1 1000 1000 397 Dec 31 2002 indexOnline.html
> -rw-r--r-- 1 root root 0 Jan 1 00:00 new.txt
> -rw-r--r-- 1 1000 1000 104 Dec 31 2002 obfuscAFL.bat
>
> Size 0 ... I can understand that since I did not have time to sync.
>
> It’s funny though that the file is present in the directory but the contents are not there ?
>
> If I do this programmatically, I presume I can force a sync after I close the file ?
> Is it safe to do so ? Would you know the C API to use to do a sync programmatically ?
>
Hi,
I think you should read
http://www.linux-mtd.infradead.org/faq/ubifs.html#L_empty_file
And use fsync() on your file.
Quoting the close(2) manual:
A successful close does not guarantee that the data has been successfully saved to disk, as the kernel defers writes. It is not common for a file system to flush the buffers when the stream is closed. If you need to be sure that the data is physically stored, use fsync(2). (It will depend on the disk hardware at this point.)
--
Corentin Chary
http://xf.iksaif.net - http://uffs.org
* Re: UBIFS Question
2009-07-10 18:43 UBIFS Question Laurent .
2009-07-10 20:01 ` Corentin Chary
@ 2009-07-11 14:55 ` Artem Bityutskiy
2009-07-14 6:11 ` Laurent .
2009-07-11 15:54 ` Vitaly Wool
2 siblings, 1 reply; 17+ messages in thread
From: Artem Bityutskiy @ 2009-07-11 14:55 UTC (permalink / raw)
To: Laurent .; +Cc: linux-mtd
[Fixing loooooooong lines. Normal mailing list etiquette recommends
people to make sure their e-mails have 78 characters line wrapping.
And indeed, it is rather difficult to deal with non-wrapped e-mails]
Hi Laurent,
On Fri, 2009-07-10 at 11:43 -0700, Laurent . wrote:
> I am currently working on an embedded Linux project which has an ARM9
> chip (NXP’s 180MHz LPC3131) with a NAND Flash controller which supports
> 5bits or 8bits HW ECC.
> I have a 256MB SLC NAND and run Linux 2.6.28.2
Sounds good; UBIFS should be appropriate for this, I think. However, please make sure you have fetched the latest UBI/UBIFS patches from the corresponding UBIFS back-port tree. There were many important bug fixes.
See here:
http://www.linux-mtd.infradead.org/doc/ubifs.html#L_source
> It is important to know the way my product is used, the user will
> always brutally power it off. It’s not like I can nicely do this,
> do a sync etc etc…
OK, should be fine for UBIFS. At least we tested this extensively on
NAND: http://www.linux-mtd.infradead.org/faq/ubifs.html#L_powercut
> I would have a few questions, hopefully you can answer them.
Will try.
> 1) The LPC3131 supports a USB DFU mode which basically allows an
> end-user to upload a binary file into the chip's internal RAM and
> execute it.
Sounds like a very useful feature! :-)
> I have coded a small USB application that I upload via DFU which
> allows me to completely control the NAND flash from the PC.
Nice.
> My goal was to create an application used in the factory that programs
> a blank NAND Flash.
OK. BTW, I've written a small article which should help people
understand how UBI flashers should work, roughly:
http://www.linux-mtd.infradead.org/doc/ubi.html#L_format
> It works :
> with respect of bad blocks, I burn the LPC313x boot-block, my own small
> uimage bootloader, the nand-bbt located at the end of the flash, the
> linux uimage, and a ubi image that I generated doing
>
> mkfs.ubifs -x none -r lolo -m 2048 -e 129024 -c 2047 -o ubifs.img
>
> ubinize.cfg is:
>
> [ubifs]
> mode=ubi
> image=ubifs.img
> vol_id=0
> vol_size=200MiB
> vol_type=dynamic
> vol_name=rootfs
> vol_alignment=1
> vol_flags=autoresize
>
> ubinize -o ubi.img -m 2048 -p 128KiB -s 512 ubinize.cfg
Ok.
> I am not using LZO compression because I have seen that mount time is
> faster without,
Take into account that if you have a power cut, then the next mount will be slower than normal, because UBIFS will have to replay the journal and do recovery. So if mount time is critical, you may want to test it in power-cut conditions.
As for compression: you could try to test this. The pages (4 KiB pieces of files) which do not compress well will be left uncompressed anyway.
> and since my file system contents are already compressed files, this
> does not buy me much room.
> But maybe I am missing something and LZO compression is really advised ?
No, if you mostly store MP3s, then no. The only thing is that you may optimize UBIFS. ATM reading works like this: we allocate a buffer, we read into it, then we uncompress data from this buffer into the final buffer. If compression is not used, we just copy the data. You could get rid of this unneeded copying.
> 2) I boot my embedded system ( I use OpenWrt )
To be frank I'm not aware of OpenWrt, but anyway :-)
> root@OpenWrt:/# ubiattach /dev/ubi_ctrl -m 5
>
> UBI: attaching mtd5 to ubi0
> UBI: physical eraseblock size: 131072 bytes (128 KiB)
> UBI: logical eraseblock size: 129024 bytes
> UBI: smallest flash I/O unit: 2048
> UBI: sub-page size: 512
> UBI: VID header offset: 512 (aligned 512)
> UBI: data offset: 2048
> UBI: volume 0 ("rootfs") re-sized from 1626 to 1854 LEBs
> UBI: attached mtd5 to ubi0
> UBI: MTD device name: "lpc313x-rootfs"
> UBI: MTD device size: 235 MiB
> UBI: number of good PEBs: 1876
> UBI: number of bad PEBs: 6
> UBI: max. allowed volumes: 128
> UBI: wear-leveling threshold: 4096
> UBI: number of internal volumes: 1
> UBI: number of user volumes: 1
> UBI: available PEBs: 0
> UBI: total number of reserved PEBs: 1876
> UBI: number of PEBs reserved for bad PEB handling: 18
> UBI: max/mean erase counter: 1/0
> UBI: background thread "ubi_bgt0d" started, PID 2059
> UBI device number 0, total 1876 LEBs (242049024 bytes, 230.8 MiB), available 0 LEBs (0 bytes), LEB size 129024 bytes (126.0 KiB)
>
> root@OpenWrt:/# mount -t ubifs ubi0 /mnt
>
> UBIFS: mounted UBI device 0, volume 0, name "rootfs"
> UBIFS: file system size: 237791232 bytes (232218 KiB, 226 MiB, 1843 LEBs)
> UBIFS: journal size: 9033728 bytes (8822 KiB, 8 MiB, 71 LEBs)
> UBIFS: media format: 4 (latest is 4)
> UBIFS: default compressor: no compression
> UBIFS: reserved for root: 0 bytes (0 KiB)
Hmm, which version of mkfs.ubifs do you use? I thought the latest version would create an image with a reserve for the root. Try "mkfs.ubifs --version", please.
> root@OpenWrt:/# find /mnt
>
> /mnt
> /mnt/img
> /mnt/img/00783_hongkongbynight_1920x1200.jpg
> /mnt/img/00678_vancouverdusk_1920x1200.jpg
... snip ...
> In /mnt/alphaflight I generate a small new.txt file with vi, which
> contains a few lines of characters.
> After saving, I immediately power-off brutally.
> I power-on again, mount, and the new.txt file is not there...
Right, because when you create the file, UBIFS writes to its write-buffer. The write-buffer is synchronized by a timer after 5 sec. So when the power is cut, you lose the file. But UBIFS guarantees that the other data will not be corrupted, and that you will be able to mount and use the FS normally.
I think your vi calls 'sync()' when you exit? If not, you may amend it. In this case, you should not lose anything.
> I recreate the file, I wait 20 seconds or so and power-off brutally.
OK. Below you'll see many URLs. I'd recommend reading them if you want to understand write-back behavior.
> I power-on again, mount and now I have:
>
> -rw-r--r-- 1 1000 1000 397 Dec 31 2002 indexOnline.html
> -rw-r--r-- 1 root root 0 Jan 1 00:00 new.txt
> -rw-r--r-- 1 1000 1000 104 Dec 31 2002 obfuscAFL.bat
>
> Size 0 ... I can understand that since I did not have time to sync.
>
> It’s funny though that the file is present in the directory but the
> contents are not there ?
Yup. As Corentin said, this effect is explained here:
http://www.linux-mtd.infradead.org/faq/ubifs.html#L_empty_file
And I've just updated this FAQ section.
> If I do this programmatically, I presume I can force a sync
> after I close the file ?
> Is it safe to do so ? Would you know the C API to use to
> do a sync programmatically ?
This all is described here:
http://www.linux-mtd.infradead.org/doc/ubifs.html#L_writeback
> I generate a new2.txt wait one minute... Reboot, the file
> new2.txt is still 0
This means the write-buffer reached the media, but the data
did not. You may read here about what UBIFS write-buffer is:
http://www.linux-mtd.infradead.org/doc/ubifs.html#L_writebuffer
(I've just created that documentation section for you)
But one minute seems to be a lot. The default write-back interval is usually 30 sec. Embedded systems tend to make it smaller. Glance at /proc/sys/vm/dirty_expire_centisecs
See this section for some more information about this knob:
http://www.linux-mtd.infradead.org/doc/ubifs.html#L_wb_knobs
(I've also just created that documentation section for you)
> I generate a new3.txt wait two minutes... Reboot, now the
> file new3.txt has correct size !
Right. The dirty data were written back.
> So, I am wondering, is this background thread "ubi_bgt0d"
> responsible for automatically synching periodically ?
No. First of all, "ubi_bgt0d" is a UBI thread, not a UBIFS one. The UBIFS thread is "ubifs_bgt0_0" in your case. Probably you do not fully realize that UBI and UBIFS are two different beasts. You may want to read the introduction sections for both:
http://www.linux-mtd.infradead.org/doc/ubi.html#L_overview
http://www.linux-mtd.infradead.org/doc/ubifs.html#L_overview
Anyway, I've also just created these 2 FAQ entries for you. They
shortly explain what UBI/UBIFS background threads do:
http://www.linux-mtd.infradead.org/faq/ubi.html#L_bgt_thread
http://www.linux-mtd.infradead.org/faq/ubifs.html#L_bgt_thread
> Is this interval adjustable ?
The dirty timeout is adjustable via the
/proc/sys/vm/dirty_expire_centisecs file.
> Do you recommend it ? Like every 10 seconds, isn’t it too much ?
Depends on the system. I'd recommend something like 5 seconds for embedded systems.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* Re: UBIFS Question
2009-07-10 18:43 UBIFS Question Laurent .
2009-07-10 20:01 ` Corentin Chary
2009-07-11 14:55 ` Artem Bityutskiy
@ 2009-07-11 15:54 ` Vitaly Wool
2 siblings, 0 replies; 17+ messages in thread
From: Vitaly Wool @ 2009-07-11 15:54 UTC (permalink / raw)
To: Laurent .; +Cc: linux-mtd, artem.bityutski
Hello Laurent,
On Fri, Jul 10, 2009 at 10:43 PM, Laurent .<sid6582@msn.com> wrote:
>
> Hi Artem,
>
> I am currently working on an embedded Linux project which has an ARM9 chip (NXP’s 180MHz LPC3131) with a NAND Flash controller which supports 5bits or 8bits HW ECC.
> I have a 256MB SLC NAND and run Linux 2.6.28.2
FWIW, why don't you come up with patches to the mainline which add support for this chip? It would be easier to help you if you did.
~Vitaly
* RE: UBIFS Question
2009-07-11 14:55 ` Artem Bityutskiy
@ 2009-07-14 6:11 ` Laurent .
2009-07-14 7:22 ` Artem Bityutskiy
0 siblings, 1 reply; 17+ messages in thread
From: Laurent . @ 2009-07-14 6:11 UTC (permalink / raw)
To: dedekind; +Cc: linux-mtd
Hi Artem,
> OK. BTW, I've written a small article which should help people
> understand how UBI flashers should work, roughly:
> http://www.linux-mtd.infradead.org/doc/ubi.html#L_format
Thanks.
For now, I am only going to use my flasher in the factory, when the flash is blank, so I don't have to deal with erase counters.
However, your article will be useful when I have to code a firmware upgrader :-)
Related to the BBT and MTD in general, do I have to reserve a few blocks outside the partitions, in case one of the mirrored BBTs goes bad?
What does MTD do if it sees it could not update one of the two blocks?
Does it grab a fresh block outside the partitions, or can it pick a block from anywhere (since that block won't be marked as 'good' anymore anyway, it should not mess up the partition)?
>
> Take into account that if you have a power cut, then next mount will be
> slower than normal, because UBIFS will have to replay the journal and
> do recovery. So if mount time is critical, you may want to test it in
> power-cut conditions.
>
That's an important point for my application; I need fast mount times (who doesn't, anyway? ;-) )
However, I can assume that when the file system is being written to, it is very unlikely that the user will power off the unit right away.
Then, I presume that I should be fine, since I am going to use a rather short (5 sec) dirty timeout?
From what you wrote about the sync, I should be able to write my code so that I greatly minimize the risk of a power-off while the file system is still "dirty" and there is still stuff to commit.
So, just to be sure: if the file system is dirty, and the timeout occurs and all data is committed, it is safe to power off the unit and the next mount will be fast, correct?
> Hmm, which version of mkfs.ubifs do you use? I thought the latest version
> would create an image with a reserve for the root. Try
> "mkfs.ubifs --version", please.
Version 1.3
> Depends on the system. I'd recommend something like 5 seconds
> for embedded systems.
OK. I presume that if there is nothing new "to commit", there is only a very small overhead introduced every 5 seconds, correct?
Well, thanks a lot for your detailed answers about that "ghost" file phenomenon.
It all makes sense to me.
Regards,
Laurent
* RE: UBIFS Question
2009-07-14 6:11 ` Laurent .
@ 2009-07-14 7:22 ` Artem Bityutskiy
0 siblings, 0 replies; 17+ messages in thread
From: Artem Bityutskiy @ 2009-07-14 7:22 UTC (permalink / raw)
To: Laurent .; +Cc: linux-mtd
On Mon, 2009-07-13 at 23:11 -0700, Laurent . wrote:
> For now, I am only going to use my flasher in factory, when the flash is blank,
> so I don't have to deal with erase counters.
> However, your article will be useful when I will have to code
> a firmware upgrader :-)
Ok.
> Related to the BBT and MTD in general, do I have to reserve a few blocks
> outside the partitions, in case of one of the mirrored bbt goes bad ?
> What does MTD do if it sees it could not update one of the two blocks ?
> Does it grab a fresh block outside the partitions ? or can it pick a
> block from anywhere (since that block won't be mark as 'good' anymore
> anyways, it should not mess up with the partition) ?
I'm not sure. You would need to dig into the MTD code for this. I personally never used a BBT. My guess is that if the BBT is corrupted, MTD would fall back to the scanning scheme. But this is just a guess.
So if you explored the sources and then provided us with a good FAQ entry, that would be nice.
> However, I can assume that when the file system is going to be written to,
> it is very unlikely that the user will power off the unit right away.
> Then, I presume that I should be fine since I am going to use a
> rather short (5 sec) dirty timeout ?
It does not really matter. UBIFS has the journal. The data is written to the journal first, and when the journal is full, it is committed. It is also committed when you unmount cleanly.
When mounting, UBIFS has to replay the journal, which means it has to scan it, and maybe do some recovery. That takes time. A committed journal does not need to be scanned, because it is empty.
The dirty data in the memory caches does not matter. When you have a power cut you just lose it forever. It does not matter whether you cut power just after writing or not.
Let me put things another way. Consider what happens to the data when you write:
1. You write to file foo with 'write()'.
2. The data is put into the page cache and 'write()' returns. If you cut power at this point, you simply lose all your changes.
3. Timeout. The dirty data from the page cache is written to UBIFS. UBIFS puts the data into the journal. Some of it, though, still sits in the write-buffer. If you cut power now, you lose the write-buffer contents.
4. Timeout. Data from the write-buffer is flushed to the journal.
5. And so on; the journal becomes full, the commit operation commits it, and the journal becomes empty.
6. And so on.
If you cut power at steps 1-4, then the next time you mount, UBIFS replays the journal, which makes mount time longer. If you cut power after it is committed (step 5), then when you mount next time, UBIFS does not have much to scan, because the journal is empty.
IOW, the mount time depends on how full the journal is. And BTW, you may make the journal smaller to lessen the maximum mount time. There are mkfs.ubifs options for this.
> From what you wrote about the sync, I should be able to write my code so
> that I will greatly minimize the risk of power-off while the file system
> is still "dirty" and there is still stuff to commit.
> So, just to be sure, if the file system is dirty, and the timeout occurs
> and all data is committed, it is safe to power off the unit and
> next mount will be fast, correct ?
The data is written back by timeout. It is written to the journal. But
it does not mean the journal is committed. If you want to make sure
the journal is committed, you may call 'sync()'. It writes back
everything plus commits the journal.
> > Hmm, which version of mkfs.ubifs do you use? I thought the latest version
> > would create an image with a reserve for the root. Try
> > "mkfs.ubifs --version", please.
>
> Version 1.3
Hmm, I'll look at this.
> > Depends on the system. I'd recommend something like 5 seconds
> > for embedded systems.
>
> OK. I presume that if there is nothing new "to commit",
> there is only very small overhead introduced every 5 seconds, correct ?
s/commit/write-back/. These are different things. Write-back is about writing dirty data from RAM to storage. It is a generic Linux thing. It works the same way for ext3 + disks and UBIFS + flash.
The commit and the journal are UBIFS implementation details. In UBIFS everything goes to the journal first, then a mechanism called commit empties the journal. The journal has to be scanned; the committed stuff does not.
So, if you have no dirty data, then every 5 sec. the background thread will wake up, find there is nothing to do, and go back to sleep. The only side effect of this is that it'll eat your battery power :-)
But you could also teach the UBIFS background thread to start a commit in the background by timeout, if nothing has happened for some time. Might be a good idea. It is about adding a timer.
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)
* UBIFS question
@ 2016-03-16 9:54 Martin Townsend
2016-03-16 23:12 ` Richard Weinberger
0 siblings, 1 reply; 17+ messages in thread
From: Martin Townsend @ 2016-03-16 9:54 UTC (permalink / raw)
To: linux-mtd
Hi,
I have a board with a 512 MB raw NAND flash memory device and a 4 GB managed eMMC flash memory device. I would like to take advantage of this and maintain a root filesystem on each, keeping them in sync so upgrades will upgrade each filesystem. I can then use one of them as a redundant image that can be used to correct the primary one or to fail over on device failure. I have UBIFS on the raw NAND flash and ext3 on the eMMC flash. My first thought was: can I create a mirror using RAID? But after researching, it looks like Linux SW RAID, although very flexible, only supports block devices, which rules out UBIFS. I could see some mails about a block device layer for UBI, but I seem to remember that this was read-only or had certain restrictions.
Next I thought about maybe using Docker so I could switch between the two filesystems, but I saw a mail saying that UBIFS doesn't support the WHITEOUT feature that Docker would require from the underlying filesystem, as it uses union/overlay filesystems.
I just wanted to check that these assumptions are correct and I haven't missed something, and also, if anyone knows of another method, I would be very interested in hearing it.
Thanks in Advance,
Martin.
* Re: UBIFS question
2016-03-16 9:54 UBIFS question Martin Townsend
@ 2016-03-16 23:12 ` Richard Weinberger
2016-03-17 8:33 ` Martin Townsend
0 siblings, 1 reply; 17+ messages in thread
From: Richard Weinberger @ 2016-03-16 23:12 UTC (permalink / raw)
To: Martin Townsend; +Cc: linux-mtd@lists.infradead.org
On Wed, Mar 16, 2016 at 10:54 AM, Martin Townsend
<mtownsend1973@gmail.com> wrote:
> I have an board with a 512MB Raw NAND flash memory device and 4GB
> Managed eMMC flash memory device. I would like to take advantage of
> this and maintain a root filesystem on each and keep them in sync so
> upgrades will upgrade each filesystem. I can then use one of them as
> a redundant image that can be used to correct the primary one or fail
> over on device failure. I have UBIFS on the raw NAND flash and ext3
> on the eMMC flash. My first thought was can I create a mirror using
> raid but after researching it looks like the linux SW raid, although
> very flexible, only supports block devices which rules out UBIFS. I
> could see some mails about a block device layer for UBI but I seem to
> remember that this was read only or had certain restrictions.
>
> Next I thought about maybe using Docker so I could switch between the
> 2 filesystems but saw a mail that UBIFS doesn't support WHITEOUT
> feature that docker would require from the underlying filesystem as it
> uses union/overlay filesystems.
>
> Just wanted to check that these assumptions are correct and I haven't
> missed something and also if anyone knows of another method I would be
> very interested in hearing it.
Yes, you cannot do RAID1 between MTD and block devices.
And yes, overlayfs does not fully work on UBIFS these days, but I'm working on it; at least it is on my TODO list and I have some patches on my disk.
Why can't you just rsync between the filesystems?
--
Thanks,
//richard
* Re: UBIFS question
2016-03-16 23:12 ` Richard Weinberger
@ 2016-03-17 8:33 ` Martin Townsend
2016-03-17 8:56 ` Richard Weinberger
0 siblings, 1 reply; 17+ messages in thread
From: Martin Townsend @ 2016-03-17 8:33 UTC (permalink / raw)
To: Richard Weinberger; +Cc: linux-mtd@lists.infradead.org
Hi Richard,
Thanks for the reply. rsync is the backup plan; I just wanted to rule other options out first. The flash devices are going to be subjected to a fairly harsh environment, and the idea of being able to fail over to a backup Docker container was appealing.
Which leads me to a couple of questions:
1) I need to simulate flash devices returning corrupted pages/blocks/LEBs. Is there currently a way of doing this? If not, would it be possible to write something, say a kernel module, to sit above the NAND driver to do this? I just want to see what effect corruption has on a live system and how these errors manifest themselves.
2) One thing I'm going to have to do is write a background thread to monitor the status of the filesystem and try to detect corruption before the system becomes unstable. Is there any way to find out the validity of the LEBs, i.e. checking their checksums?
Many Thanks,
Martin.
On Wed, Mar 16, 2016 at 11:12 PM, Richard Weinberger
<richard.weinberger@gmail.com> wrote:
> On Wed, Mar 16, 2016 at 10:54 AM, Martin Townsend
> <mtownsend1973@gmail.com> wrote:
>> I have an board with a 512MB Raw NAND flash memory device and 4GB
>> Managed eMMC flash memory device. I would like to take advantage of
>> this and maintain a root filesystem on each and keep them in sync so
>> upgrades will upgrade each filesystem. I can then use one of them as
>> a redundant image that can be used to correct the primary one or fail
>> over on device failure. I have UBIFS on the raw NAND flash and ext3
>> on the eMMC flash. My first thought was can I create a mirror using
>> raid but after researching it looks like the linux SW raid, although
>> very flexible, only supports block devices which rules out UBIFS. I
>> could see some mails about a block device layer for UBI but I seem to
>> remember that this was read only or had certain restrictions.
>>
>> Next I thought about maybe using Docker so I could switch between the
>> 2 filesystems but saw a mail that UBIFS doesn't support WHITEOUT
>> feature that docker would require from the underlying filesystem as it
>> uses union/overlay filesystems.
>>
>> Just wanted to check that these assumptions are correct and I haven't
>> missed something and also if anyone knows of another method I would be
>> very interested in hearing it.
>
> Yes, you cannot do RAID1 between MTD and block devices.
> And yes, overlayfs does not fully work these day on UBIFS but I'm working on it,
> at least it is on my TODO list and I have some patches on my disk..
>
> Why can't you just rsync between the filesystems?
>
> --
> Thanks,
> //richard
* Re: UBIFS question
2016-03-17 8:33 ` Martin Townsend
@ 2016-03-17 8:56 ` Richard Weinberger
2016-03-17 11:16 ` Martin Townsend
0 siblings, 1 reply; 17+ messages in thread
From: Richard Weinberger @ 2016-03-17 8:56 UTC (permalink / raw)
To: Martin Townsend, Richard Weinberger; +Cc: linux-mtd@lists.infradead.org
Martin,
Am 17.03.2016 um 09:33 schrieb Martin Townsend:
> Hi Richard,
>
> Thanks for the reply. rsync is the backup plan, I just wanted to rule
> other options out first. The flash devices are going to subjected to a
> fairly harsh environment and the idea of being able to fail over to a
> backup docker container was appealing.
>
> Which leads me to a couple of questions:
> 1) I need to simulate flash devices reading corrupted
> pages/blocks/LEBs. Is there currently a way of doing this? if not
> would it be possible to write something, say a kernel module to sit
> above the NAND driver to do this. I just want to see what effect
> corruption has on a live system and how these errors manifest
> themselves.
check out nandsim and UBI's debugfs. We have a lot of knobs to
simulate such stuff.
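For instance, something along these lines (an untested sketch; the function names are mine, the nandsim ID bytes model the commonly used 256 MiB, 2 KiB-page example chip, and the knob file names assume a kernel with UBI debugging/debugfs enabled):

```shell
# Sketch: set up a simulated NAND with nandsim, attach it to UBI, then
# flip UBI's fault-injection knobs in debugfs.
setup_sim() {
    # Classic nandsim example geometry: 256 MiB, 2048-byte pages.
    modprobe nandsim first_id_byte=0x20 second_id_byte=0xaa \
             third_id_byte=0x00 fourth_id_byte=0x15
    ubiattach /dev/ubi_ctrl -m 0      # mtd number from /proc/mtd
}

# Writing 1 into these debugfs files makes UBI inject errors at random.
# The directory argument exists so this can be exercised off-target.
enable_fault_injection() {
    knobs="${1:-/sys/kernel/debug/ubi/ubi0}"
    for k in tst_emulate_bitflips tst_emulate_io_failures tst_emulate_power_cut; do
        [ -w "$knobs/$k" ] && echo 1 > "$knobs/$k"
    done
}
```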
> 2) One thing I'm going to have to do is write a background thread to
> monitor the status of the filesystem and try and detect corruption
> before the system becomes unstable, is there any way to find out the
> validity of the LEBs, ie checking their checksums.
So, what exactly is the error scenario you have in mind?
If the SLC NAND behaves correctly UBIFS can deal with all kinds
of errors.
Of course UBI (and UBIFS) is not a magic bullet, if a NAND block
turns bad all of a sudden there is nothing it can do for you.
But this NAND would also not be within the spec...
It is not clear to me what this background thread should achieve.
Thanks,
//richard
* Re: UBIFS question
2016-03-17 8:56 ` Richard Weinberger
@ 2016-03-17 11:16 ` Martin Townsend
2016-03-17 11:25 ` Richard Weinberger
0 siblings, 1 reply; 17+ messages in thread
From: Martin Townsend @ 2016-03-17 11:16 UTC (permalink / raw)
To: Richard Weinberger; +Cc: Richard Weinberger, linux-mtd@lists.infradead.org
Hi Richard,
On Thu, Mar 17, 2016 at 8:56 AM, Richard Weinberger <richard@nod.at> wrote:
> Martin,
>
> Am 17.03.2016 um 09:33 schrieb Martin Townsend:
>> Hi Richard,
>>
>> Thanks for the reply. rsync is the backup plan, I just wanted to rule
>> other options out first. The flash devices are going to be subjected to a
>> fairly harsh environment and the idea of being able to fail over to a
>> backup docker container was appealing.
>>
>> Which leads me to a couple of questions:
>> 1) I need to simulate flash devices reading corrupted
>> pages/blocks/LEBs. Is there currently a way of doing this? if not
>> would it be possible to write something, say a kernel module to sit
>> above the NAND driver to do this. I just want to see what effect
>> corruption has on a live system and how these errors manifest
>> themselves.
>
> check out nandsim and UBI's debugfs. We have a lot of knobs to
> simulate such stuff.
>
I will take a look at nandsim and UBI debugfs.
>> 2) One thing I'm going to have to do is write a background thread to
>> monitor the status of the filesystem and try and detect corruption
>> before the system becomes unstable, is there any way to find out the
>> validity of the LEBs, ie checking their checksums.
>
> So, what exactly is the error scenario you have in mind?
> If the SLC NAND behaves correctly UBIFS can deal with all kinds
> of errors.
> Of course UBI (and UBIFS) is not a magic bullet, if a NAND block
> turns bad all of a sudden there is nothing it can do for you.
> But this NAND would also not be within the spec...
>
> It is not clear to me what this background thread should achieve.
We expect the flash devices to start failing quicker than normally
expected due to the environment in which they will be operating, so
NAND blocks suddenly turning bad will eventually happen, and what we
would like to do is capture this as soon as possible.
The boards are not accessible as they will be located in very remote
locations so detecting these failures before the system locks up would
be an advantage so we can report home with the information and fail
over to the other filesystem (providing that hasn't also been
corrupted).
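For what it's worth, the kind of health probe I have in mind is only a sketch like this (the counter names assume the per-mtd ECC statistics newer kernels export in sysfs; the function name and paths are examples):

```shell
# Sketch: read the per-device ECC/bad-block counters from sysfs so a
# monitor can report home when they start growing. The directory
# argument defaults to mtd0 and exists so this can be tested off-target.
report_mtd_health() {
    dir="${1:-/sys/class/mtd/mtd0}"
    for f in corrected_bits ecc_failures bad_blocks; do
        # Print one "name=value" line per counter that is present.
        [ -r "$dir/$f" ] && printf '%s=%s\n' "$f" "$(cat "$dir/$f")"
    done
}

# Example (path is illustrative):
# report_mtd_health /sys/class/mtd/mtd5
```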
Many Thanks,
Martin.
>
> Thanks,
> //richard
* Re: UBIFS question
2016-03-17 11:16 ` Martin Townsend
@ 2016-03-17 11:25 ` Richard Weinberger
2016-03-17 11:43 ` Ricard Wanderlof
0 siblings, 1 reply; 17+ messages in thread
From: Richard Weinberger @ 2016-03-17 11:25 UTC (permalink / raw)
To: Martin Townsend; +Cc: linux-mtd@lists.infradead.org
Martin,
Am 17.03.2016 um 12:16 schrieb Martin Townsend:
>>> 2) One thing I'm going to have to do is write a background thread to
>>> monitor the status of the filesystem and try and detect corruption
>>> before the system becomes unstable, is there any way to find out the
>>> validity of the LEBs, ie checking their checksums.
>>
>> So, what exactly is the error scenario you have in mind?
>> If the SLC NAND behaves correctly UBIFS can deal with all kinds
>> of errors.
>> Of course UBI (and UBIFS) is not a magic bullet, if a NAND block
>> turns bad all of a sudden there is nothing it can do for you.
>> But this NAND would also not be within the spec...
>>
>> It is not clear to me what this background thread should achieve.
>
> We expect the flash devices to start failing quicker than normally
> > expected due to the environment in which they will be operating, so
> sudden NAND blocks turning bad will eventually happen and what we
> would like to do is try and capture this as soon as possible.
> The boards are not accessible as they will be located in very remote
> locations so detecting these failures before the system locks up would
> be an advantage so we can report home with the information and fail
> over to the other filesystem (providing that hasn't also been
> corrupted).
Dealing with sudden bad NAND blocks is almost impossible.
Unless you have a copy of each block.
NAND is not expected to gain bad blocks without an indication like
correctable bitflips.
Thanks,
//richard
* Re: UBIFS question
2016-03-17 11:25 ` Richard Weinberger
@ 2016-03-17 11:43 ` Ricard Wanderlof
2016-03-17 12:54 ` Martin Townsend
0 siblings, 1 reply; 17+ messages in thread
From: Ricard Wanderlof @ 2016-03-17 11:43 UTC (permalink / raw)
To: Martin Townsend; +Cc: Richard Weinberger, linux-mtd@lists.infradead.org
> > We expect the flash devices to start failing quicker than normally
> > expected due to the environment in which they will be operating, so
> > sudden NAND blocks turning bad will eventually happen and what we
> > would like to do is try and capture this as soon as possible.
> > The boards are not accessible as they will be located in very remote
> > locations so detecting these failures before the system locks up would
> > be an advantage so we can report home with the information and fail
> > over to the other filesystem (providing that hasn't also been
> > corrupted).
>
> Dealing with sudden bad NAND blocks is almost impossible.
> Unless you have a copy of each block.
> NAND is not expected to gain bad blocks without an indication like
> correctable bitflips.
Yes, although the NAND flash documentation sometimes reads like blocks can
suddenly 'go bad' for no special reason, in practice it is due to
excessive erase/write cycles, i.e. it's a wear problem.
However, if you are operating the flash in an environment
where there is, for instance, cosmic radiation that can actually damage
the chip, then of course any part of the chip could fail randomly with a
fairly high probability. But NAND bad block management is not designed to
take care of that case, which is why bad block detection is only done
during block erasure (i.e. when a block fails to erase).
/Ricard
--
Ricard Wolf Wanderlöf ricardw(at)axis.com
Axis Communications AB, Lund, Sweden www.axis.com
Phone +46 46 272 2016 Fax +46 46 13 61 30
* Re: UBIFS question
2016-03-17 11:43 ` Ricard Wanderlof
@ 2016-03-17 12:54 ` Martin Townsend
2016-03-17 14:55 ` Boris Brezillon
0 siblings, 1 reply; 17+ messages in thread
From: Martin Townsend @ 2016-03-17 12:54 UTC (permalink / raw)
To: Ricard Wanderlof; +Cc: Richard Weinberger, linux-mtd@lists.infradead.org
Hi Ricard, Richard
On Thu, Mar 17, 2016 at 11:43 AM, Ricard Wanderlof
<ricard.wanderlof@axis.com> wrote:
>
>> > We expect the flash devices to start failing quicker than normally
>> > expected due to the environment in which they will be operating, so
>> > sudden NAND blocks turning bad will eventually happen and what we
>> > would like to do is try and capture this as soon as possible.
>> > The boards are not accessible as they will be located in very remote
>> > locations so detecting these failures before the system locks up would
>> > be an advantage so we can report home with the information and fail
>> > over to the other filesystem (providing that hasn't also been
>> > corrupted).
>>
>> Dealing with sudden bad NAND blocks is almost impossible.
>> Unless you have a copy of each block.
>> NAND is not expected to gain bad blocks without an indication like
>> correctable bitflips.
I'm not interested in dealing with sudden bad NAND blocks; I accept
this will more than likely happen at some point, but what I am
interested in is early detection. Once the system has booted, most
files will be cached in memory, and the product the flash devices
are in is designed to run for many months without being power cycled,
so what I'm looking to do is monitor the health of the flash devices.
Ideally I would like to know the FEC counts, but I doubt I will get this
information :) But checking LEBs, pages, etc. for bad checksums would be
great.
>
> Yes, although the NAND flash documentation sometimes reads like blocks can
> suddenly 'go bad' for no special reason, in practice it is due to
> excessive erase/write cycles, i.e. it's a wear problem.
>
> However, I don't know, if you are operating the flash in an environment
> where there is cosmic radiation that can actually damage the chip for
> instance, then of course any part of the chip could fail randomly with a
> fairly high probability. But NAND bad block management is not designed to
> take care of that case, which is why bad block detection is only done
> during block erasure (i.e. when a block fails to erase).
>
I'm not sure how much I can say, I'm afraid, as I'm under NDA, but assume
that it is going to be operating in an environment where it's
receiving more cosmic radiation than expected. So I could look at the
bad block detection code to get some ideas? I don't necessarily want to
mark blocks as bad; I just want to detect them so I have an idea that
the flash is failing.
Many Thanks,
Martin.
> /Ricard
> --
> Ricard Wolf Wanderlöf ricardw(at)axis.com
> Axis Communications AB, Lund, Sweden www.axis.com
> Phone +46 46 272 2016 Fax +46 46 13 61 30
* Re: UBIFS question
2016-03-17 12:54 ` Martin Townsend
@ 2016-03-17 14:55 ` Boris Brezillon
2016-03-17 15:39 ` Martin Townsend
0 siblings, 1 reply; 17+ messages in thread
From: Boris Brezillon @ 2016-03-17 14:55 UTC (permalink / raw)
To: Martin Townsend
Cc: Ricard Wanderlof, Richard Weinberger,
linux-mtd@lists.infradead.org
Hi Martin,
On Thu, 17 Mar 2016 12:54:43 +0000
Martin Townsend <mtownsend1973@gmail.com> wrote:
> Hi Ricard, Richard
>
> On Thu, Mar 17, 2016 at 11:43 AM, Ricard Wanderlof
> <ricard.wanderlof@axis.com> wrote:
> >
> >> > We expect the flash devices to start failing quicker than normally
> >> > expected due to the environment in which they will be operating, so
> >> > sudden NAND blocks turning bad will eventually happen and what we
> >> > would like to do is try and capture this as soon as possible.
> >> > The boards are not accessible as they will be located in very remote
> >> > locations so detecting these failures before the system locks up would
> >> > be an advantage so we can report home with the information and fail
> >> > over to the other filesystem (providing that hasn't also been
> >> > corrupted).
> >>
> >> Dealing with sudden bad NAND blocks is almost impossible.
> >> Unless you have a copy of each block.
> >> NAND is not expected to gain bad blocks without an indication like
> >> correctable bitflips.
>
> I'm not interested in dealing with sudden bad NAND blocks, I accept
> this will more than likely happen at some point but what I am
> interested in is early detection. Once the system has booted most
> files will be cached to memory and the product that the flash devices
> are in is designed to run for many months without being power cycled
> so what I'm looking to do is monitor the health of the flash devices.
> Ideally I would like to know FEC counts but I doubt I will get this
> information :) But checking LEBs, pages etc for bad checksums would be
> great.
>
> >
> > Yes, although the NAND flash documentation sometimes reads like blocks can
> > suddenly 'go bad' for no special reason, in practice it is due to
> > excessive erase/write cycles, i.e. it's a wear problem.
> >
> > However, I don't know, if you are operating the flash in an environment
> > where there is cosmic radiation that can actually damage the chip for
> > instance, then of course any part of the chip could fail randomly with a
> > fairly high probability. But NAND bad block management is not designed to
> > take care of that case, which is why bad block detection is only done
> > during block erasure (i.e. when a block fails to erase).
> >
> I'm not sure how much I can say I'm afraid as I'm under NDA but assume
> that it is going to be operating in an environment where it's
> receiving more cosmic radiation than expected. So I could look at the
> bad block detection code to get some ideas? I don't necessarily want to
> mark blocks as bad I just want to detect them so I have an idea that
> the flash is failing.
I guess you're more worried about bitflips than blocks becoming bad
(which, AFAIK, can only happen when writing or erasing a block, not
when reading it).
If bitflip detection/prevention is what you're looking for, I guess
ubihealthd (developed by Richard) could help.
[1]https://lwn.net/Articles/663751/
[2]https://lkml.org/lkml/2015/3/29/31
--
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com
* Re: UBIFS question
2016-03-17 14:55 ` Boris Brezillon
@ 2016-03-17 15:39 ` Martin Townsend
2016-03-17 15:59 ` Richard Weinberger
0 siblings, 1 reply; 17+ messages in thread
From: Martin Townsend @ 2016-03-17 15:39 UTC (permalink / raw)
To: Boris Brezillon
Cc: Ricard Wanderlof, Richard Weinberger,
linux-mtd@lists.infradead.org
Hi Boris,
On Thu, Mar 17, 2016 at 2:55 PM, Boris Brezillon
<boris.brezillon@free-electrons.com> wrote:
> Hi Martin,
>
> On Thu, 17 Mar 2016 12:54:43 +0000
> Martin Townsend <mtownsend1973@gmail.com> wrote:
>
>> Hi Ricard, Richard
>>
>> On Thu, Mar 17, 2016 at 11:43 AM, Ricard Wanderlof
>> <ricard.wanderlof@axis.com> wrote:
>> >
>> >> > We expect the flash devices to start failing quicker than normally
>> >> > expected due to the environment in which they will be operating, so
>> >> > sudden NAND blocks turning bad will eventually happen and what we
>> >> > would like to do is try and capture this as soon as possible.
>> >> > The boards are not accessible as they will be located in very remote
>> >> > locations so detecting these failures before the system locks up would
>> >> > be an advantage so we can report home with the information and fail
>> >> > over to the other filesystem (providing that hasn't also been
>> >> > corrupted).
>> >>
>> >> Dealing with sudden bad NAND blocks is almost impossible.
>> >> Unless you have a copy of each block.
>> >> NAND is not expected to gain bad blocks without an indication like
>> >> correctable bitflips.
>>
>> I'm not interested in dealing with sudden bad NAND blocks, I accept
>> this will more than likely happen at some point but what I am
>> interested in is early detection. Once the system has booted most
>> files will be cached to memory and the product that the flash devices
>> are in is designed to run for many months without being power cycled
>> so what I'm looking to do is monitor the health of the flash devices.
>> Ideally I would like to know FEC counts but I doubt I will get this
>> information :) But checking LEBs, pages etc for bad checksums would be
>> great.
>>
>> >
>> > Yes, although the NAND flash documentation sometimes reads like blocks can
>> > suddenly 'go bad' for no special reason, in practice it is due to
>> > excessive erase/write cycles, i.e. it's a wear problem.
>> >
>> > However, I don't know, if you are operating the flash in an environment
>> > where there is cosmic radiation that can actually damage the chip for
>> > instance, then of course any part of the chip could fail randomly with a
>> > fairly high probability. But NAND bad block management is not designed to
>> > take care of that case, which is why bad block detection is only done
>> > during block erasure (i.e. when a block fails to erase).
>> >
>> I'm not sure how much I can say I'm afraid as I'm under NDA but assume
>> that it is going to be operating in an environment where it's
>> receiving more cosmic radiation than expected. So I could look at the
>> bad block detection code to get some ideas? I don't necessarily want to
>> mark blocks as bad I just want to detect them so I have an idea that
>> the flash is failing.
>
> I guess you're more worried about bitflips than blocks becoming bad
> (which, AFAIK, can only happen when writing or erasing a block, not
> when reading it).
> If bitflip detection/prevention is what you're looking for, I guess
> ubihealthd (developed by Richard) could help.
>
> [1]https://lwn.net/Articles/663751/
> [2]https://lkml.org/lkml/2015/3/29/31
>
>
Looks very promising, thank you for the links. Bitflip detection is
definitely something I am looking for. If I could get some metrics on
bitflips detected even better :) I will take a closer look.
Many Thanks,
Martin.
> --
> Boris Brezillon, Free Electrons
> Embedded Linux and Kernel engineering
> http://free-electrons.com
* Re: UBIFS question
2016-03-17 15:39 ` Martin Townsend
@ 2016-03-17 15:59 ` Richard Weinberger
0 siblings, 0 replies; 17+ messages in thread
From: Richard Weinberger @ 2016-03-17 15:59 UTC (permalink / raw)
To: Martin Townsend, Boris Brezillon
Cc: Ricard Wanderlof, linux-mtd@lists.infradead.org
Am 17.03.2016 um 16:39 schrieb Martin Townsend:
>> I guess you're more worried about bitflips than blocks becoming bad
>> (which, AFAIK, can only happen when writing or erasing a block, not
>> when reading it).
>> If bitflip detection/prevention is what you're looking for, I guess
>> ubihealthd (developed by Richard) could help.
>>
>> [1]https://lwn.net/Articles/663751/
>> [2]https://lkml.org/lkml/2015/3/29/31
>>
>>
>
> Looks very promising, thank you for the links. Bitflip detection is
> definitely something I am looking for. If I could get some metrics on
> bitflips detected even better :) I will take a closer look.
To clarify, UBI already does bitflip detection and then
"tortures" a block to figure out whether it is good or not,
but only upon read. So, if you very seldom read from a
page, read disturb may hit you.
This is why I wrote ubihealthd: it allows you to trigger
a read of all data (UBI metadata and payload) to detect read disturb
soonish.
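If you cannot run ubihealthd, even a crude periodic read of the whole volume gets you part of the way (a sketch only; the function name and device path are examples, not anything shipped with mtd-utils):

```shell
# Sketch: read every byte of a UBI volume so UBI sees (and can scrub)
# accumulating bitflips instead of letting them sit on rarely-read pages.
scrub_read() {
    dev="${1:-/dev/ubi0_0}"          # UBI volume character device (example path)
    dd if="$dev" of=/dev/null bs=1M 2>/dev/null
}

# e.g. install this as a small script and run it weekly from cron.
```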
Thanks,
//richard
end of thread, other threads:[~2016-03-17 15:59 UTC | newest]
Thread overview: 17+ messages
-- links below jump to the message on this page --
2009-07-10 18:43 UBIFS Question Laurent .
2009-07-10 20:01 ` Corentin Chary
2009-07-11 14:55 ` Artem Bityutskiy
2009-07-14 6:11 ` Laurent .
2009-07-14 7:22 ` Artem Bityutskiy
2009-07-11 15:54 ` Vitaly Wool
-- strict thread matches above, loose matches on Subject: below --
2016-03-16 9:54 UBIFS question Martin Townsend
2016-03-16 23:12 ` Richard Weinberger
2016-03-17 8:33 ` Martin Townsend
2016-03-17 8:56 ` Richard Weinberger
2016-03-17 11:16 ` Martin Townsend
2016-03-17 11:25 ` Richard Weinberger
2016-03-17 11:43 ` Ricard Wanderlof
2016-03-17 12:54 ` Martin Townsend
2016-03-17 14:55 ` Boris Brezillon
2016-03-17 15:39 ` Martin Townsend
2016-03-17 15:59 ` Richard Weinberger
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox