Linux NILFS development
* Re: Does nilfs2 do any in-place writes?
@ 2014-01-16 17:48 Mark Trumpold
  2014-01-16 18:41 ` Clemens Eisserer
  2014-01-17  6:31 ` Vyacheslav Dubeyko
  0 siblings, 2 replies; 24+ messages in thread
From: Mark Trumpold @ 2014-01-16 17:48 UTC (permalink / raw)
  To: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hello All,

I am wondering what impact the in-place writes of the
superblock have on SSDs in terms of wear.

I've been stress-testing our system, which uses NILFS, and
recently I had an SSD fail with the classic messages indicating
low-level media problems -- and also implicating NILFS as trying
to locate a superblock (I think).

A partial dmesg listing follows:

[    7.630382] Sense Key : Medium Error [current] [descriptor]
[    7.630385] Descriptor sense data with sense descriptors (in hex):
[    7.630386]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
[    7.630394]         05 ff 0e 58 
[    7.630397] sd 0:0:0:0: [sda]  
[    7.630399] Add. Sense: Unrecovered read error - auto reallocate failed
[    7.630401] sd 0:0:0:0: [sda] CDB: 
[    7.630402] Read(10): 28 00 05 ff 0e 54 00 00 08 00
[    7.630409] end_request: I/O error, dev sda, sector 100601432
[    7.635326] NILFS warning: I/O error on loading last segment
[    7.635329] NILFS: error searching super root.
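To see where the failed sector falls on the device, the LBA from the
dmesg output can be converted to a byte offset. A quick sketch (assuming
512-byte logical sectors, a common default; the actual size can be
checked with `blockdev --getss`):

```python
# Convert the failing LBA from the dmesg output to a byte offset
# (assuming 512-byte logical sectors, as reported by most SATA SSDs).
SECTOR_SIZE = 512
failed_sector = 100601432  # from "end_request: I/O error ... sector 100601432"

byte_offset = failed_sector * SECTOR_SIZE
print(f"failed sector {failed_sector} = byte offset {byte_offset} "
      f"(~{byte_offset / 2**30:.1f} GiB into the device)")
```

That puts the error roughly 48 GiB into the device, nowhere near the
fixed superblock locations at the start and end of the partition.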


Best regards,
Mark T.



--
To unsubscribe from this list: send the line "unsubscribe linux-nilfs" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

* Re:  Does nilfs2 do any in-place writes?
@ 2014-01-17 19:19 Mark Trumpold
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Trumpold @ 2014-01-17 19:19 UTC (permalink / raw)
  To: slava-yeENwD64cLxBDgjK7y7TUQ, Mark Trumpold
  Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA


>-----Original Message-----
>From: Vyacheslav Dubeyko [mailto:slava-yeENwD64cLxBDgjK7y7TUQ@public.gmane.org]
>Sent: Thursday, January 16, 2014 10:31 PM
>To: 'Mark Trumpold'
>Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>Subject: Re: Does nilfs2 do any in-place writes?
>
>On Thu, 2014-01-16 at 17:48 +0000, Mark Trumpold wrote:
>> Hello All,
>> 
>> I am wondering what impact the in-place writes of the
>> superblock have on SSDs in terms of wear.
>> 
>> I've been stress-testing our system, which uses NILFS, and
>> recently I had an SSD fail with the classic messages indicating
>> low-level media problems -- and also implicating NILFS as trying
>> to locate a superblock (I think).
>> 
>> A partial dmesg listing follows:
>> 
>> [    7.630382] Sense Key : Medium Error [current] [descriptor]
>> [    7.630385] Descriptor sense data with sense descriptors (in hex):
>> [    7.630386]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
>> [    7.630394]         05 ff 0e 58 
>> [    7.630397] sd 0:0:0:0: [sda]  
>> [    7.630399] Add. Sense: Unrecovered read error - auto reallocate failed
>> [    7.630401] sd 0:0:0:0: [sda] CDB: 
>> [    7.630402] Read(10): 28 00 05 ff 0e 54 00 00 08 00
>> [    7.630409] end_request: I/O error, dev sda, sector 100601432
>> [    7.635326] NILFS warning: I/O error on loading last segment
>> [    7.635329] NILFS: error searching super root.
>> 
>> 
>
>I don't think this issue is related to the superblocks, because I
>can't see the NILFS2 magic signature in your output. For example, the
>first 16 bytes of my superblock look like this:
>
>00000400  02 00 00 00 00 00 34 34  18 01 00 00 52 85 db 71  |......44....R..q|
>
>Of course, I don't know your partition table details, but I doubt that
>sector 100601432 is a superblock sector. Moreover, your error messages
>report trouble loading the last segment during the super root search.
>
>NILFS2 has only two blocks that are updated in place, and their update
>frequency is not very high, so I suppose any FTL can easily provide
>good wear leveling for the superblocks. But, of course, in-place
>updates are not a good policy for flash-based devices anyway.
>
>Maybe I misunderstand something in your output, but during stress
>testing you can encounter an I/O error in any part of the volume,
>because it is really hard to predict when you will exhaust the spare
>pool of erase blocks.
>
>With the best regards,
>Vyacheslav Dubeyko.
>
>
>

Hi Vyacheslav,

Thank you for taking a look at this.

Your assessment makes good sense, and I am relieved we have
a plausible explanation.
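For anyone who wants to verify whether a given block really is a NILFS2
superblock, the magic signature Vyacheslav quotes can be checked
directly. A minimal sketch (the offsets -- primary superblock at byte
1024, little-endian magic 0x3434 at byte 6 of the structure -- follow my
reading of the nilfs2 on-disk layout and the hexdump above, so treat
them as an assumption):

```python
import struct

NILFS_SB_OFFSET = 1024      # primary superblock lives at byte offset 1024
NILFS_SUPER_MAGIC = 0x3434  # the "34 34" visible in the hexdump above

def is_nilfs_superblock(block: bytes) -> bool:
    """Check the NILFS2 magic: a little-endian u16 at byte offset 6 of the
    superblock (after the 32-bit s_rev_level and 16-bit s_minor_rev_level)."""
    (magic,) = struct.unpack_from("<H", block, 6)
    return magic == NILFS_SUPER_MAGIC

# The 16 bytes from Vyacheslav's hexdump at offset 0x400:
sample = bytes.fromhex("02 00 00 00 00 00 34 34 18 01 00 00 52 85 db 71")
print(is_nilfs_superblock(sample))  # -> True
```

Reading 16 bytes at offset `NILFS_SB_OFFSET` from the block device and
feeding them to this check would show whether the failed region carried
a superblock.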

BTW: I upgraded to the 3.11.6 Linux kernel (per your and Ryusuke's
suggestions) to pick up the most recent NILFS development code, and am
finding things to be very stable.

Best regards,
Mark T.



* Re:  Does nilfs2 do any in-place writes?
@ 2014-01-16 19:40 Mark Trumpold
  0 siblings, 0 replies; 24+ messages in thread
From: Mark Trumpold @ 2014-01-16 19:40 UTC (permalink / raw)
  To: Clemens Eisserer; +Cc: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

> -----Original Message-----
> From: linux-nilfs-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org [mailto:linux-nilfs-
> owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Clemens Eisserer
> Sent: Thursday, January 16, 2014 10:42 AM
> To: linux-nilfs-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Subject: [PossibleSpam] Re: Does nilfs2 do any in-place writes?
>
> Hi Mark,
>
> > I am wondering what the impact of in-place writes of the
> > superblock has on SSDs in terms of wear?
>
> Typically SSDs have far more advanced static wear-leveling algorithms,
> which track the erase count for each erase block and maintain a
> sophisticated mapping table. Otherwise journaling file systems, for
> example, would probably kill the drive quickly.
>
> Regards, Clemens
> --
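The static wear-leveling scheme Clemens describes can be pictured with a
toy model (purely illustrative, not real controller firmware): the FTL
keeps an erase count per physical block plus a logical-to-physical
mapping table, and each rewrite of a logical block lands on the
least-worn physical block:

```python
# Toy static wear-leveling model (illustrative only, not a real FTL):
# a logical->physical mapping table plus per-block erase counts.
NUM_PHYS_BLOCKS = 8

erase_count = [0] * NUM_PHYS_BLOCKS
mapping = {}  # logical block -> physical block

def write_logical(lblock: int) -> int:
    """Rewrite a logical block: pick the least-worn physical block,
    point the mapping at it, and bump its erase count."""
    target = min(range(NUM_PHYS_BLOCKS), key=lambda p: erase_count[p])
    erase_count[target] += 1
    mapping[lblock] = target
    return target

# Hammer a single "hot" logical block (like a journal or superblock):
for _ in range(80):
    write_logical(0)

print(erase_count)  # wear is spread evenly: [10, 10, 10, 10, 10, 10, 10, 10]
```

Even though the host rewrote one logical address 80 times, every
physical block was erased the same number of times -- which is why a
fixed-location superblock is far less dangerous on an SSD than on a raw
SD card with a weak controller.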

Hi Clemens,

Thank you for the info.  That was my prior understanding; however,
I thought it curious that the SSD failure report cited NILFS trying to
access the superblock, which had failed at the media level.

It was a fairly high-end SSD with TRIM, etc. (Corsair Force 240GB).
Working with the vendor to analyze further.

Thanks again,
Mark T. 




* Does nilfs2 do any in-place writes?
@ 2014-01-15 10:44 Clemens Eisserer
       [not found] ` <CAFvQSYSzpX_WpUi9KpGj0pZvzhw2mfzzOqcgdj9ripXAjipmtw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 24+ messages in thread
From: Clemens Eisserer @ 2014-01-15 10:44 UTC (permalink / raw)
  To: linux-nilfs-u79uwXL29TY76Z2rM5mHXA

Hi,

Recently my Raspberry Pi destroyed a 32GB SD card after only 4 days,
because that cheap SD card seemed to have issues with wear leveling.
The areas where the ext4 journal was stored were no longer readable or
writable.

I wonder which write-access patterns nilfs2 exhibits. Are there any
frequent in-place updates to statically positioned data structures
(superblock, translation tables, ...) or is the data mostly written
sequentially?
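The two access patterns in question can be contrasted with a toy model
(illustrative only; the block and write counts are arbitrary): a journal
that rewrites a fixed region concentrates wear on a few blocks, while a
sequentially written log spreads the same number of writes over the
whole device:

```python
# Toy contrast of the two write patterns (illustrative only):
# a journal rewriting a fixed region vs. a log appending sequentially.
DEVICE_BLOCKS = 100
WRITES = 1000

journal_hits = [0] * DEVICE_BLOCKS
log_hits = [0] * DEVICE_BLOCKS

for i in range(WRITES):
    journal_hits[i % 10] += 1         # journal: same 10 blocks, in place
    log_hits[i % DEVICE_BLOCKS] += 1  # log: wraps sequentially over all blocks

print("journal hottest block:", max(journal_hits))  # 100 writes on one block
print("log hottest block:", max(log_hits))          # 10 writes on one block
```

With no wear leveling underneath, the fixed-region pattern wears its
hottest block ten times faster here -- the failure mode the SD card
above apparently hit.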

Thank you in advance, Clemens


end of thread, other threads:[~2014-01-19 14:11 UTC | newest]

Thread overview: 24+ messages
2014-01-16 17:48 Does nilfs2 do any in-place writes? Mark Trumpold
2014-01-16 18:41 ` Clemens Eisserer
2014-01-17  6:31 ` Vyacheslav Dubeyko
2014-01-18  1:47   ` Ryusuke Konishi
     [not found]     ` <20140118.104703.356941870.konishi.ryusuke-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2014-01-18  9:44       ` Clemens Eisserer
     [not found]         ` <CAFvQSYQZtf0fsfX_7zNHdw4hVo9VHggN9F0TYEi1Fwo2ZvS4Ng-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-01-18 16:25           ` Mark Trumpold
     [not found]             ` <CEFFE8EC.9A4A%markt-qk0wvQ0ghJwAvxtiuMwx3w@public.gmane.org>
2014-01-18 18:11               ` Vyacheslav Dubeyko
2014-01-18 11:45       ` Andreas Rohner
     [not found]         ` <52DA696D.6010206-hi6Y0CQ0nG0@public.gmane.org>
2014-01-18 23:08           ` Vyacheslav Dubeyko
     [not found]             ` <04877EE1-F5BF-41CE-AC92-CD9C3ED0B8A4-yeENwD64cLxBDgjK7y7TUQ@public.gmane.org>
2014-01-18 23:08               ` Andreas Rohner
     [not found]                 ` <52DB098A.4010300-hi6Y0CQ0nG0@public.gmane.org>
2014-01-19  5:43                   ` Ryusuke Konishi
     [not found]                     ` <20140119.144345.373615211.konishi.ryusuke-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2014-01-19 14:11                       ` Andreas Rohner
  -- strict thread matches above, loose matches on Subject: below --
2014-01-17 19:19 Mark Trumpold
2014-01-16 19:40 Mark Trumpold
2014-01-15 10:44 Clemens Eisserer
     [not found] ` <CAFvQSYSzpX_WpUi9KpGj0pZvzhw2mfzzOqcgdj9ripXAjipmtw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-01-15 10:52   ` Vyacheslav Dubeyko
2014-01-15 11:44     ` Clemens Eisserer
     [not found]       ` <CAFvQSYTG6HBVc9iodYyvCejwf889jiwOPsVb1Hi8cDrR9pOGeg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-01-15 12:01         ` Vyacheslav Dubeyko
2014-01-15 15:23           ` Ryusuke Konishi
     [not found]             ` <20140116.002353.94325733.konishi.ryusuke-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>
2014-01-16 10:08               ` Vyacheslav Dubeyko
2014-01-17 22:55                 ` Ryusuke Konishi
2014-01-18  0:00                 ` Ryusuke Konishi
2014-01-16 10:03           ` Clemens Eisserer
     [not found]             ` <CAFvQSYSC7+dd93pRH-uok9N+A_s=1VKrfGEppu3qRTg3q=CuXQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-01-16 10:10               ` Vyacheslav Dubeyko
