linux-ext4.vger.kernel.org archive mirror
* Does ext4 perform online update of the bad blocks inode?
From: Francesco Pretto @ 2009-09-18 20:26 UTC
  To: linux-ext4

Does ext4 update the bad blocks inode when a read/write failure is detected in a
mounted filesystem? If not, is a daily/weekly "fsck.ext4 -n -c device" safe on a
mounted filesystem? Would it be safe even on a RAID volume?

Unfortunately, there is still no Bad Block Relocation (BBR) target for lvm2, so
I'm interested in similar filesystem features.

Thanks,
Francesco



* Re: Does ext4 perform online update of the bad blocks inode?
From: Andreas Dilger @ 2009-09-18 21:11 UTC
  To: Francesco Pretto; +Cc: linux-ext4

On Sep 18, 2009  20:26 +0000, Francesco Pretto wrote:
> Does ext4 update the bad blocks inode when a read/write failure is
> detected in a mounted filesystem? If not, is a daily/weekly
> "fsck.ext4 -n -c device" safe on a mounted filesystem? Would it be
> safe even on a RAID volume?

This isn't even safe on an UNMOUNTED filesystem, since "badblocks"
by default does destructive testing of the block device.  Since most
disks will internally relocate bad blocks on writes, it is very
unlikely that "badblocks" will ever find a problem on a new disk.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.



* Re: Does ext4 perform online update of the bad blocks inode?
From: Francesco Pretto @ 2009-09-19 11:31 UTC
  To: Andreas Dilger; +Cc: linux-ext4


2009/9/18 Andreas Dilger <adilger@sun.com>:
>
> This isn't even safe on an UNMOUNTED filesystem, since "badblocks"
> by default does destructive testing of the block device.

By destructive testing, I think you mean a read/write test, since a
read-only test isn't supposed to be destructive (usually). According
to badblocks(8), option -n: "By default only a non-destructive
read-only test is done". Moreover, according to fsck.ext4(8), option
-c: "This option causes e2fsck to use badblocks(8) program to do a
read-only scan of the device in order to find any bad blocks", and
later: "If this option is specified twice, then the bad block scan
will be done using a non-destructive read-write test". So I think the
*potentially* unsafe command you meant was "fsck.ext4 -n -c -c device".
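To keep the invocations straight, here is a dry-run sketch of the commands
under discussion ("/dev/sdXN" is a placeholder; the commands are echoed, not
executed, so nothing touches a real disk):

```shell
DEV=/dev/sdXN   # placeholder device -- substitute the real one
echo "e2fsck -n -c $DEV"      # read-only badblocks scan; updates the bad blocks inode
echo "e2fsck -n -c -c $DEV"   # -c twice: non-destructive read-write test
echo "badblocks -w $DEV"      # destructive write test -- erases everything on $DEV
```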

Assuming that the manual is correct, and "fsck.ext4 -n -c device"
really does perform a read-only test, opening the fs just to update
the bad blocks inode, my question still stands: is it safe to run it
weekly on a mounted filesystem? The wording of the manual seems to say
"yes, it's supposed to be safe, but don't do it because of
<unexplained reason>" :-)

> Since most
> disks will internally relocate bad blocks on writes, it is very
> unlikely that "badblocks" will ever find a problem on a new disk.
>

I'd like to believe you but please read the "smartctl --all" output
(attached) for a Toshiba 120GB notebook drive I recently replaced, or
just observe this excerpt:

  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       2
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       2
....
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       00%      6366         57398211
# 2  Extended offline    Completed: read failure       00%      6350         57398211

So, just 2 sectors reallocated, yet there are still read failures
visible at the Linux block device layer. I can guarantee this: I
extensively repeated read tests on the disk, and there was no way I
could force the drive to relocate more failing sectors using its own
SMART mechanism. My point is that hardware bad-block relocation may
not work as expected even on modern drives. Because of a buggy
implementation? I don't know.

You didn't answer my main question: does ext4 do anything when a
read/write failure is detected at the block device layer? Other
filesystems like NTFS (when running under Windows, of course) seem to
update their bad blocks list online, so it doesn't seem a bad thing
for notebook/desktop users.

The same problem remains open for DM users: since evms is deprecated,
there's no longer a BBR target. So, for example, suppose your buggy
hard drive doesn't intercept its first and only failing sector: the
error reaches the block device layer and the failing drive is
deactivated/removed from the RAID volume. It's not good for me to
throw away a disk for just one failing sector. But this is a matter
for another mailing list, so please ignore.

Regards,
Francesco

[-- Attachment #2: smartctl-all --]
[-- Type: application/octet-stream, Size: 11056 bytes --]

smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Model Family:     Toshiba 2.5" HDD series (80 GB and above)
Device Model:     TOSHIBA MK1234GSX
Serial Number:    967X0076T
Firmware Version: AH001A
User Capacity:    120,034,123,776 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   7
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Sat Sep 19 12:33:40 2009 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82)	Offline data collection activity
					was completed without error.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      ( 112)	The previous self-test completed having
					the read element of the test failed.
Total time to complete Offline 
data collection: 		 ( 435) seconds.
Offline data collection
capabilities: 			 (0x5b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 (  86) minutes.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   050    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       1722
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       3215
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       2
  7 Seek_Error_Rate         0x000b   100   100   050    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   050    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   084   084   000    Old_age   Always       -       6694
 10 Spin_Retry_Count        0x0033   164   100   030    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       3198
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       195
193 Load_Cycle_Count        0x0032   071   071   000    Old_age   Always       -       295410
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       26 (Lifetime Min/Max 10/60)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       2
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       1
199 UDMA_CRC_Error_Count    0x0032   200   253   000    Old_age   Always       -       3
220 Disk_Shift              0x0002   100   100   000    Old_age   Always       -       69
222 Loaded_Hours            0x0032   087   087   000    Old_age   Always       -       5349
223 Load_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
224 Load_Friction           0x0022   100   100   000    Old_age   Always       -       0
226 Load-in_Time            0x0026   100   100   000    Old_age   Always       -       632
240 Head_Flying_Hours       0x0001   100   100   001    Pre-fail  Offline      -       0

SMART Error Log Version: 1
ATA Error Count: 25 (device log contains only the most recent five errors)
	CR = Command Register [HEX]
	FR = Features Register [HEX]
	SC = Sector Count Register [HEX]
	SN = Sector Number Register [HEX]
	CL = Cylinder Low Register [HEX]
	CH = Cylinder High Register [HEX]
	DH = Device/Head Register [HEX]
	DC = Device Command Register [HEX]
	ER = Error register [HEX]
	ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 25 occurred at disk power-on lifetime: 6693 hours (278 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 02 c3 d3 6b 43

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 58 00 6d d3 6b 40 00      01:38:36.750  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      01:38:36.750  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      01:38:36.749  IDENTIFY DEVICE
  ef 03 45 00 00 00 a0 00      01:38:36.749  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      01:38:36.749  READ NATIVE MAX ADDRESS EXT

Error 24 occurred at disk power-on lifetime: 6693 hours (278 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 0a c3 d3 6b 43

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 58 08 6d d3 6b 40 00      01:38:29.995  READ FPDMA QUEUED
  60 66 00 5f d4 6b 40 00      01:38:29.994  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      01:38:29.994  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      01:38:29.993  IDENTIFY DEVICE
  ef 03 45 00 00 00 a0 00      01:38:29.993  SET FEATURES [Set transfer mode]

Error 23 occurred at disk power-on lifetime: 6693 hours (278 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 02 c3 d3 6b 43

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 66 08 5f d4 6b 40 00      01:38:23.251  READ FPDMA QUEUED
  60 58 00 6d d3 6b 40 00      01:38:23.250  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      01:38:23.250  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      01:38:23.249  IDENTIFY DEVICE
  ef 03 45 00 00 00 a0 00      01:38:23.249  SET FEATURES [Set transfer mode]

Error 22 occurred at disk power-on lifetime: 6693 hours (278 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 12 c3 d3 6b 43

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 58 10 6d d3 6b 40 00      01:38:09.971  READ FPDMA QUEUED
  60 66 08 5f d4 6b 40 00      01:38:09.971  READ FPDMA QUEUED
  60 9a 00 c5 d3 6b 40 00      01:38:09.971  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      01:38:09.970  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      01:38:09.969  IDENTIFY DEVICE

Error 21 occurred at disk power-on lifetime: 6693 hours (278 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 02 c3 d3 6b 43

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 9a 10 c5 d3 6b 40 00      01:38:03.226  READ FPDMA QUEUED
  60 66 08 5f d4 6b 40 00      01:38:03.226  READ FPDMA QUEUED
  60 58 00 6d d3 6b 40 00      01:38:03.226  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      01:38:03.226  READ NATIVE MAX ADDRESS EXT
  ec 00 00 00 00 00 a0 00      01:38:03.225  IDENTIFY DEVICE

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       00%      6366         57398211
# 2  Extended offline    Completed: read failure       00%      6350         57398211
# 3  Extended offline    Completed without error       00%      4532         -
# 4  Extended offline    Aborted by host               90%      4531         -
# 5  Extended offline    Interrupted (host reset)      70%      4530         -
# 6  Extended offline    Completed without error       00%      3655         -
# 7  Extended offline    Aborted by host               90%      3654         -
# 8  Extended offline    Completed without error       00%      3567         -
# 9  Short offline       Completed without error       00%      3565         -
#10  Short offline       Completed without error       00%      1837         -
#11  Short offline       Completed without error       00%      1785         -
#12  Short offline       Completed without error       00%      1780         -
#13  Short offline       Completed without error       00%      1681         -
#14  Short offline       Completed without error       00%      1663         -
#15  Short offline       Completed without error       00%      1661         -
#16  Short offline       Completed without error       00%      1646         -
#17  Short offline       Completed without error       00%      1608         -
#18  Short offline       Completed without error       00%      1601         -
#19  Short offline       Completed without error       00%      1584         -
#20  Short offline       Completed without error       00%      1564         -
#21  Short offline       Interrupted (host reset)      40%      1421         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.



* Re: Does ext4 perform online update of the bad blocks inode?
From: Eric Sandeen @ 2009-09-19 13:59 UTC
  To: Francesco Pretto; +Cc: Andreas Dilger, linux-ext4

Francesco Pretto wrote:
> 2009/9/18 Andreas Dilger <adilger@sun.com>:

...

>> Since most
>> disks will internally relocate bad blocks on writes, it is very
>> unlikely that "badblocks" will ever find a problem on a new disk.
>>
> 
> I'd like to believe you but please read the "smartctl --all" output
> (attached) for a Toshiba 120GB notebook drive I recently replaced, or
> just observe this excerpt:
> 
>   5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       2
> 196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       2
> ....
> Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
> # 1  Extended offline    Completed: read failure       00%      6366         57398211
> # 2  Extended offline    Completed: read failure       00%      6350         57398211
> 
> So, just 2 sectors reallocated, yet there are still read failures
> visible at the Linux block device layer.

The disk won't reallocate on a read, only on a write.  So this is quite 
possible.

> I can guarantee this: I extensively repeated read tests on the disk,
> and there was no way I could force the drive to relocate more failing
> sectors using its own SMART mechanism. My point is that hardware
> bad-block relocation may not work as expected even on modern drives.
> Because of a buggy implementation? I don't know.

No, it's expected.  Blocks can only be reallocated on a write (on a 
read, if it fails, what do you put into the new block?  You don't know 
what was there before, so you have no idea what goes in the new block).

If the unreadable block is not in use on the filesystem, it's ok, because 
eventually, when the fs writes to it, the drive should reallocate.

If the unreadable block -is- in use, you're a little stuck; hopefully 
the fs gives you enough info about which block it is, and you could do a 
judicious "dd" of /dev/zero into it to force a reallocation, followed by 
an fsck, I guess.
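A sketch of that "judicious dd", assuming the failing sector is the LBA
57398211 reported in the SMART self-test log above and "/dev/sdX" stands in
for the affected disk (the echo keeps this a dry run, since the real command
destroys the sector's contents):

```shell
DEV=/dev/sdX    # placeholder for the failing disk
LBA=57398211    # LBA_of_first_error from the SMART self-test log
# SMART reports LBAs in 512-byte sectors; oflag=direct bypasses the page
# cache so the drive actually sees the write and can remap the sector.
CMD="dd if=/dev/zero of=$DEV bs=512 seek=$LBA count=1 oflag=direct"
echo "$CMD"     # inspect first; run it for real only once sure of the device
```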

> You didn't answer my main question: does ext4 do anything when a
> read/write failure is detected at the block device layer? Other
> filesystems like NTFS (when running under Windows, of course) seem to
> update their bad blocks list online, so it doesn't seem a bad thing
> for notebook/desktop users.

Certainly not in kernelspace (well, it will return an EIO error to you, 
and possibly abort the filesystem, but that's all).

I've always felt like the badblocks list is a decades-old relic of the 
floppy days, to be honest.

If you have a sector you can't write to, you're done - the drive would 
have reallocated if it could, so get what you can off the drive and 
recycle it.

If you have a sector you can't read, it should be reallocated on the 
next write.
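One way to check whether that remap happened, again with "/dev/sdX" as a
placeholder (the attribute names are the ones visible in the smartctl output
attached earlier; echoed here as a dry run):

```shell
DEV=/dev/sdX   # placeholder for the disk under test
# After the rewrite, Current_Pending_Sector should drop back to 0 and
# Reallocated_Sector_Ct may increase by one.
CHECK="smartctl -A $DEV"
echo "$CHECK"  # then look at Reallocated_Sector_Ct / Current_Pending_Sector
```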

In neither case is a bad blocks list useful, IMHO.

-Eric

> The same problem remains open for DM users: since evms is deprecated,
> there's no longer a BBR target. So, for example, suppose your buggy
> hard drive doesn't intercept its first and only failing sector: the
> error reaches the block device layer and the failing drive is
> deactivated/removed from the RAID volume. It's not good for me to
> throw away a disk for just one failing sector. But this is a matter
> for another mailing list, so please ignore.
> 
> Regards,
> Francesco


