public inbox for linux-btrfs@vger.kernel.org
* backpointer mismatch
@ 2014-01-10  3:59 Peter van Hoof
  2014-01-10 14:26 ` Duncan
  0 siblings, 1 reply; 4+ messages in thread
From: Peter van Hoof @ 2014-01-10  3:59 UTC (permalink / raw)
  To: linux-btrfs

Hi,

I am using btrfs for my backup RAID. This had been running well for 
about a year. Recently I decided to upgrade the backup server to 
openSUSE 13.1. I checked all filesystems before the upgrade and 
everything was clean. I had several attempts at upgrading the system, 
but all failed (the installation of some rpm would hang indefinitely). 
So I aborted the installation and reverted the system back to openSUSE 
12.3 (with a custom-installed 3.9.7 kernel). Unfortunately, after this 
the backup RAID reported lots of errors.

When I run btrfsck on the filesystem, I get around 1.3M of these messages:

Extent back ref already exists for 1116254208 parent 11145490432 root 0

and around 1.2M of these:

ref mismatch on [90670907392 4096] extent item 11, found 12
Incorrect global backref count on 90670907392 found 11 wanted 12
backpointer mismatch on [90670907392 4096]

Filtering these out, this is the remaining output:

checking extents
Errors found in extent allocation tree or chunk allocation
checking free space cache
checking fs roots
checking csums
checking root refs
Checking filesystem on /dev/md2
UUID: 0b6a9d0d-e501-4a23-9d09-259b1f5b5652
found 2213988384746 bytes used err is 0
total csum bytes: 3185850148
total tree bytes: 42770862080
total fs tree bytes: 36787625984
total extent tree bytes: 1643925504
btree space waste bytes: 12475940633
file data blocks allocated: 5269432860672
  referenced 5254870626304
Btrfs v3.12+20131125

(This version of btrfsck comes from openSUSE Factory.)
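For reference, this is roughly how I filtered the repeated messages out (a sketch; the file name btrfsck.log is just an example, assuming the check output was first captured with something like "btrfsck /dev/md2 > btrfsck.log 2>&1"):

```shell
# btrfsck.log is a hypothetical name for the saved check output.
# Drop the ~2.5M repeated backref messages, keeping the summary lines.
grep -vE 'Extent back ref already exists|ref mismatch on \[|Incorrect global backref count|backpointer mismatch on \[' btrfsck.log
```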

I also ran btrfs scrub on the file system. This uncovered 4 checksum 
errors which I could repair manually. I do not know if that is related 
to the problem above. At least it didn't solve it...

The btrfs file system is installed on top of an mdadm RAID5.

How worried should I be about the reported errors? What confuses me is 
that in the end btrfsck reports an error count of 0.

Should I try to repair this? I have had bad experiences in the past with 
"btrfsck --repair", but that was with a much older version...

I can of course recreate the backups, but this would take a long time 
and I would lose my entire snapshot history, which I would rather avoid...


Cheers,

Peter.

-- 
Peter van Hoof
Royal Observatory of Belgium
Ringlaan 3
1180 Brussel
Belgium
http://homepage.oma.be/pvh



Thread overview: 4+ messages
2014-01-10  3:59 backpointer mismatch Peter van Hoof
2014-01-10 14:26 ` Duncan
2014-01-10 15:16   ` Roman Mamedov
2014-01-10 15:53     ` Duncan
