* [ISSUE] uncorrectable errors on Raid1
From: randomtechguy @ 2017-01-15 20:28 UTC
To: linux-btrfs
Hello all,
I have some concerns about BTRFS raid1. I have encountered 114 uncorrectable errors in the directory hosting my 'seafile-data'. Seafile is software I use to back up my data. My two hard drives seem to be fine: the smartctl reports do not show any bad sectors (Reallocated_Event_Count and Current_Pending_Sector are both zero).
How can I have uncorrectable errors when BTRFS is supposed to ensure data integrity? How did my data get corrupted? What can I do to make sure it does not happen again?
Sincerely,
You can find below all the useful information I can think of. If you need more, let me know.
sudo btrfs scrub status /mnt
scrub status for 89f6f57e-90d9-46ac-1132-144e6ac150e4
scrub started at Sat Jan 14 17:09:36 2017 and finished after 2207 seconds
total bytes scrubbed: 598.03GiB with 114 errors
error details: csum=114
corrected errors: 0, uncorrectable errors: 114, unverified errors: 0
If I look at the dmesg log, I can see that both copies of the same logical block seem to be corrupted, one on each device:
[ 1047.312852] BTRFS: bdev /dev/sde1 errs: wr 0, rd 0, flush 0, corrupt 49, gen 0
[ 1047.352631] BTRFS: unable to fixup (regular) error at logical 429848649728 on dev /dev/sde1
[ 1062.667080] BTRFS: checksum error at logical 441348554752 on dev /dev/sdd1, sector 195114560, root 5, inode 964364, offset 819200, length 4096, links 1 (path: seafile-data/storage/blocks/bd71e3e1-95bd-40fc-b6db-55c4ea9467c1/30/bfa04bb182ff8050fe4a0f357da7df335e7511)
[ 1062.667092] BTRFS: bdev /dev/sdd1 errs: wr 0, rd 0, flush 0, corrupt 18, gen 0
[ 1062.710999] BTRFS: unable to fixup (regular) error at logical 441348554752 on dev /dev/sdd1
[ 1074.536137] BTRFS: checksum error at logical 441348554752 on dev /dev/sde1, sector 195075648, root 5, inode 964364, offset 819200, length 4096, links 1 (path: seafile-data/storage/blocks/bd71e3e1-95bd-40fc-b6db-55c4ea9467c1/30/bfa04bb182ff8050fe4a0f357da7df335e7511)
sudo btrfs inspect-internal logical-resolve 441348554752 -v /mnt
ioctl ret=0, total_size=4096, bytes_left=4056, bytes_missing=0, cnt=3, missed=0
ioctl ret=0, bytes_left=3965, bytes_missing=0, cnt=1, missed=0
/vault/seafile-data/storage/blocks/bd71e3e1-95bd-40fc-b6db-55c4ea9467c1/30/bfa04bb182ff8050fe4a0f357da7df335e7511
If I attempt to read the corresponding file, I get an "Input/output error".
Here is my Raid1 configuration:
sudo btrfs fi show /mnt
Label: none uuid: 91f6f57e-23d7-46ac-8056-144e6ac150e4
Total devices 2 FS bytes used 299.02GiB
devid 1 size 2.73TiB used 301.03GiB path /dev/sdd1
devid 2 size 2.73TiB used 301.01GiB path /dev/sde1
btrfs-progs v3.19.1
sudo btrfs fi df /mnt
Data, RAID1: total=299.00GiB, used=298.15GiB
Data, single: total=8.00MiB, used=0.00B
System, RAID1: total=8.00MiB, used=64.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, RAID1: total=2.00GiB, used=887.55MiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=304.00MiB, used=0.00B
sudo btrfs fi us /mnt
Overall:
Device size: 5.46TiB
Device allocated: 602.04GiB
Device unallocated: 4.87TiB
Device missing: 0.00B
Used: 598.04GiB
Free (estimated): 2.44TiB (min: 2.44TiB)
Data ratio: 2.00
Metadata ratio: 2.00
Global reserve: 304.00MiB (used: 0.00B)
Data,single: Size:8.00MiB, Used:0.00B
/dev/sdd1 8.00MiB
Data,RAID1: Size:299.00GiB, Used:298.15GiB
/dev/sdd1 299.00GiB
/dev/sde1 299.00GiB
Metadata,single: Size:8.00MiB, Used:0.00B
/dev/sdd1 8.00MiB
Metadata,RAID1: Size:2.00GiB, Used:887.55MiB
/dev/sdd1 2.00GiB
/dev/sde1 2.00GiB
System,single: Size:4.00MiB, Used:0.00B
/dev/sdd1 4.00MiB
System,RAID1: Size:8.00MiB, Used:64.00KiB
/dev/sdd1 8.00MiB
/dev/sde1 8.00MiB
Unallocated:
/dev/sdd1 2.43TiB
/dev/sde1 2.43TiB
btrfs --version
btrfs-progs v3.19.1
sudo smartctl -a /dev/sde
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.28.3.el7.x86_64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red (AF)
Device Model: WDC WD30EFRX-68EUZN0
Serial Number: WD-WCC4N1003742
LU WWN Device Id: 5 0014ee 25f64a417
Firmware Version: 80.00A80
User Capacity: 3 000 592 982 016 bytes [3,00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Sun Jan 15 16:46:37 2017 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (40080) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 402) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x703d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 198 176 021 Pre-fail Always - 5100
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 134
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 100 253 000 Old_age Always - 0
9 Power_On_Hours 0x0032 085 085 000 Old_age Always - 11308
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 134
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 126
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 432
194 Temperature_Celsius 0x0022 122 106 000 Old_age Always - 28
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Conveyance offline Completed without error 00% 9489 -
# 2 Short offline Completed without error 00% 9479 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
* Re: [ISSUE] uncorrectable errors on Raid1
From: Duncan @ 2017-01-16 8:28 UTC
To: linux-btrfs
randomtechguy@laposte.net posted on Sun, 15 Jan 2017 21:28:01 +0100 as
excerpted:
> Hello all,
>
> I have some concerns about BTRFS raid1. I have encountered 114
> uncorrectable errors in the directory hosting my 'seafile-data'.
> Seafile is software I use to back up my data. My two hard drives seem
> to be fine: the smartctl reports do not show any bad sectors
> (Reallocated_Event_Count and Current_Pending_Sector are both zero).
> How can I have uncorrectable errors when BTRFS is supposed to ensure
> data integrity? How did my data get corrupted? What can I do to make
> sure it does not happen again?
It's worth noting that btrfs data integrity is based on checksums over
data and metadata blocks, and that btrfs raid1 keeps exactly two copies
of each chunk, and thus of each block within a chunk. If one copy of a
block fails checksum verification, btrfs (in normal operation or via
scrub) falls back to the other copy, but if that copy is bad as well,
there is nothing left to repair from and the error is uncorrectable.
Unfortunately, that seems to have happened to you. How, I can't say.
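One thing you can do is check the accumulated per-device error counters
with btrfs device stats. It won't tell you which blocks are affected
(the counters are cumulative), but it will confirm, as your dmesg output
already suggests, that both devices have been racking up corruption
errors rather than just one:

sudo btrfs device stats /mnt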
And here comes the disclaimer. I'm a normal if somewhat advanced btrfs
user (I run gentoo and routinely build from sources, applying patches as
necessary, doing git bisects, etc.) and list regular, not a dev. So
there's a limit to what I can cover, but I can address some of the easier
stuff where it has been covered on the list previously, and by doing so,
free the more advanced list regulars and devs for the more complex
answers and for further development, etc.
There's a roadmapped proposal to offer N-way-mirroring instead of the
two-way currently available, which would of course allow N fallbacks in
case the first two copies are bad, and I've been intensely interested in
3-way for my own use. However, that feature has been scheduled for
attention right after parity-raid (raid5/6 mode) since at least kernel
3.6, when I first started seriously looking into btrfs and when raid56
was supposed to land within a (minor) kernel cycle or two. In the event,
it was 3.19 before the raid56 code was nominally complete, and only in
the last couple of kernel cycles (4.8/4.9) did it become clear that the
existing raid56 code is still seriously flawed, to the point that a full
or near-full rewrite may be necessary. So we're now likely looking at
another year or two for raid56 to stabilize properly, which could easily
mean 5.x (assuming 4.19 is the last 4.x, as 3.19 was for 3.x). After
that, it could easily be 5.10 before N-way-mirroring is first available
and 6.x before it stabilizes, even if it avoids the long development
time and long-term stability problems that hit raid56 mode. So very
possibly five years out... and in kernel terms five years is a very long
time, the practical horizon for any sort of prediction at all. Who
knows, really, but what we do know is that it's unlikely in anything
like the near future.
> You can find below all the useful information I can think of. If you
> need more, let me know.
> If I attempt to read the corresponding file, I get an "Input/output
> error".
That's normal when both mirrors fail checksum verification. You can try
btrfs restore on the unmounted filesystem, telling it (using the regex
option) to restore only that file or files, and where to put them, but
the results may well be corrupt even if you can retrieve them that way.
With more work, you could take advantage of btrfs' copy-on-write nature
and the fact that previous root generations are likely still available:
find older roots and feed them to btrfs restore, in the hope that it can
recover an uncorrupted copy that way. But honestly, you'd better have
some pretty advanced technical chops to even think about that -- I'm not
sure I could do it here, tho I'd certainly try if I didn't have a backup
to resort to.
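For reference, the general shape of the restore invocation is something
like the following -- untested here, the target directory is just a
placeholder on some other filesystem, and the --path-regex has to match
every directory component of the file's path as btrfs sees it (your
dmesg line gives that path relative to the subvolume root), so check the
btrfs-restore manpage for your progs version and tighten the deepest
component to the specific block file before leaning on it:

sudo umount /mnt
sudo btrfs restore -v --path-regex \
    '^/(|seafile-data(|/storage(|/blocks(|/.*))))$' \
    /dev/sdd1 /path/to/recovery/dir

For the older-roots variant, btrfs-find-root /dev/sdd1 lists candidate
tree roots, and btrfs restore -t <bytenr> ... starts restore from one of
those instead of the current root.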
Which of course brings up backups. As any sysadmin worth the name will
tell you, what you /really/ think about the value of your data is defined
by the number of backups you have of it, and the faithfulness with which
you update those backups. No backups means you value the time and
resources saved by /not/ doing those backups more than the data that you
are risking losing as a result of not having those backups. A single
backup of course means much lower risk, but the same thing applies there
-- only one backup means you value the time and resources saved by not
making a second more than the risk of actually needing that second backup
because both the working copy and the first backup failed at the same
time, for whatever reason.
Of course that's for normal, fully stable, hardware and filesystems.
Btrfs, while no longer (since 3.12, IIRC) labeled "eat your data"
experimental, remains under heavy development and not yet entirely stable
and mature. As such, the additional risk to any data stored on the not
yet fully stable and mature btrfs must be taken into account when
assigning relative value to backups or the lack thereof, tilting the
balance toward more backups than one would consider worth the trouble on
a more stable and mature filesystem.
Thus it can be safely and confidently stated that you either have
backups or, by your (in)action in not having them, you placed a lower
value on the data than on the time and resources that would have gone
into backing it up. So a problem with btrfs that results in the loss of
some or all of the data on it isn't a big problem: you can either resort
to your backups if you have to, or the data was of only trivial value as
defined by the lack of backups, and even if you lose it, you saved what
you considered more valuable, the time and trouble that would otherwise
have gone into making those backups.
Since it's just a few files here, not the entire filesystem, it's even
less of a problem. Just restore from backups if you have them. Or try
btrfs restore on them; if the results are corrupted or it doesn't work
at all, it's no big deal, since the files were obviously not worth a lot
anyway as they weren't backed up. You can simply delete the problem
files and move on, or, if worst comes to worst, blow away the filesystem
with a fresh mkfs and move on.
Meanwhile...
> Here is my Raid1 configuration: [snippage]
> sudo btrfs fi df /mnt
> Data, RAID1: total=299.00GiB, used=298.15GiB
> Data, single: total=8.00MiB, used=0.00B
> System, RAID1: total=8.00MiB, used=64.00KiB
> System, single: total=4.00MiB, used=0.00B
> Metadata, RAID1: total=2.00GiB, used=887.55MiB
> Metadata, single: total=8.00MiB, used=0.00B
> GlobalReserve, single: total=304.00MiB, used=0.00B
It's worth noting that those data, system and metadata single-mode
chunks are an artifact of an older mkfs.btrfs, and can be eliminated
with a filtered balance, for example btrfs balance start with usage=0 or
profiles=single filters, applied per chunk type via the -d/-m/-s
options, as sketched below. (GlobalReserve is different and always
single, despite actually coming from metadata; that's the reason
metadata as reported never gets entirely full.)
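Concretely, with the filesystem mounted, something along these lines
should do it -- the usage=0 filter only touches completely empty chunks,
which all of your single chunks are, so it's quick and harmless; balance
will likely refuse the -s filter unless you also pass -f, and progs as
old as 3.19 may handle the filters slightly differently, so check
btrfs-balance(8) for your version:

sudo btrfs balance start -dusage=0 -musage=0 /mnt
sudo btrfs balance start -susage=0 -f /mnt

Afterward btrfs fi df /mnt should show only the RAID1 lines (plus
GlobalReserve).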
> btrfs --version btrfs-progs v3.19.1
>
>
> sudo smartctl -a /dev/sde
> smartctl 6.2 2013-07-26 r3841
> [x86_64-linux-3.10.0-327.28.3.el7.x86_64] (local build)
If that's your current kernel, an upgrade to at minimum the 4.1 LTS
kernel series is *VERY* strongly recommended. As mentioned above, btrfs
prior to kernel 3.12 still carried the "experimental" label, and a 3.10
kernel is well out of the practical support range for this list. All
sorts of btrfs bugs have been found and fixed since 3.10, and your
uncorrectable errors could in fact have been triggered by one of them,
likely in combination with something else such as a system crash or a
hard shutdown without proper unmounting, or simply by some strange
corner case that wasn't handled properly back then.
Here on this list, which does focus on the mainstream kernel and forward
development, the recommended kernels are the last two kernels in one of
two tracks, current stable release or the mainstream LTS kernel series.
For current stable, 4.10 is in development so 4.9 and 4.8 are supported,
tho 4.8 is now EOL on kernel.org, so people should be moving to 4.9 by
now unless they have specific reason not to. For LTS, the 4.4 series is
the latest, with 4.1 previous to that, so 4.4 and 4.1 are supported.
Before 4.1, while we do try, enough development has happened since then
that practical support quality is likely to be much reduced, and one of
the first suggestions will be to upgrade to something newer.
Of course we realize that various distros have chosen to support btrfs on
their older kernels, backporting various patches as they consider
appropriate. However, we don't track what they've backported and what
they haven't, and thus aren't in a particularly good position to offer
support. The distros choosing to do that backporting and support should
be far better resources in that case, as they actually know what they've
backported and what they haven't.
Tho if I'm not mistaken and based on what I've read, RHEL offered
experimental support for btrfs back then, but never full support, and
even that experimental support is long ended, so you may be on your own
there in any case. But that's second hand. Maybe they still offer it?
But if you're choosing to run that old a long-term-supported enterprise
distro and kernel, presumably you value stability very highly, which is
at root incompatible with btrfs' status as still in development and
stabilizing, not yet fully stable and mature. So a reevaluation is
likely in order: depending on your needs and priorities, either your use
of RHEL 7 and its old kernel for stability reasons, or your use of the
still heavily developed and not yet fully stable btrfs, would appear to
be inappropriate for your circumstances and may well need to change.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman