From: Arne Jansen <sensille@gmx.net>
To: Maxim Mikheev <mikhmv@gmail.com>
Cc: Liu Bo <liubo2009@cn.fujitsu.com>,
	linux-btrfs@vger.kernel.org,
	Jan Schmidt <list.btrfs@jan-o-sch.net>
Subject: Re: Help with data recovering
Date: Mon, 04 Jun 2012 13:32:37 +0200	[thread overview]
Message-ID: <4FCC9CD5.2050309@gmx.net> (raw)
In-Reply-To: <4FCC9C63.7000605@gmail.com>

On 04.06.2012 13:30, Maxim Mikheev wrote:
> How can I mount it in the first place?

Let me state it differently: If you can't mount it, you can't scrub it.
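
For reference, a minimal sketch of what that would look like once the
filesystem is mounted at /tank (purely hypothetical for now, since it
doesn't mount):

  sudo btrfs scrub start /tank     # scrub the whole mounted filesystem
  sudo btrfs scrub status /tank    # check progress and error counters

Scrub can then repair damaged blocks from the redundant RAID1 copies where
a good copy exists.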

> 
> On 06/04/2012 04:18 AM, Arne Jansen wrote:
>> On 04.06.2012 04:59, Liu Bo wrote:
>>> On 06/04/2012 10:18 AM, Maxim Mikheev wrote:
>>>
>>>> Hi Liu,
>>>>
>>>> 1) all of them not working (see dmesg at the end)
>>>> 2)
>>>> max@s0:~$ sudo btrfs scrub start /dev/sdb
>>>> ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
>>>> max@s0:~$ sudo btrfs scrub start /dev/sda
>>>> ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
>>>> max@s0:~$ sudo btrfs scrub start /dev/sdd
>>>> ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
>>>> max@s0:~$ sudo btrfs scrub start /dev/sde
>>>> ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
>>>> max@s0:~$ sudo btrfs scrub start /dev/sdf
>>>> ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
>>>>
>> Even to scrub a single device, the filesystem has to be mounted.
>>
>>> (add Jan and Arne to cc, they are authors of scrub)
>>>
>>> I'm not an expert on scrub, and I'm not sure how to scrub a device directly :(
>>>
>>> btw, have you tried restore (for attempting to recover data from an
>>> unmountable filesystem):
>>>
>>> https://btrfs.wiki.kernel.org/index.php/Restore
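
Restore is indeed worth a try here, since it works on the unmounted devices.
Depending on the btrfs-progs version this is either the 'btrfs restore'
subcommand or the standalone 'restore' tool described on that wiki page;
a rough sketch (the target directory is just a placeholder, any location
with enough free space will do):

  mkdir -p /mnt/restore_target
  # point it at any member device; -v prints each file as it is restored
  sudo btrfs restore -v /dev/sdb /mnt/restore_target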
>>>
>>> thanks,
>>> liubo
>>>
>>>
>>>> dmesg after all operations:
>>>> [ 2183.864056] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 2
>>>> transid 9096 /dev/sdb
>>>> [ 2183.916128] btrfs: disk space caching is enabled
>>>> [ 2191.863409] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2191.872937] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2191.873666] btrfs read error corrected: ino 1 off 5468060241920 (dev
>>>> /dev/sdb sector 2143292648)
>>>> [ 2191.873678] Failed to read block groups: -5
>>>> [ 2191.884636] btrfs: open_ctree failed
>>>> [ 2222.910225] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 3
>>>> transid 9095 /dev/sdd
>>>> [ 2222.959128] btrfs: disk space caching is enabled
>>>> [ 2231.264285] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2231.274306] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2231.275194] btrfs read error corrected: ino 1 off 5468060241920 (dev
>>>> /dev/sdd sector 2143292648)
>>>> [ 2231.275207] Failed to read block groups: -5
>>>> [ 2231.288795] btrfs: open_ctree failed
>>>> [ 2240.624691] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 4
>>>> transid 9096 /dev/sde
>>>> [ 2240.671344] btrfs: disk space caching is enabled
>>>> [ 2248.916772] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2248.928106] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2248.929091] btrfs read error corrected: ino 1 off 5468060241920 (dev
>>>> /dev/sdd sector 2143292648)
>>>> [ 2248.929105] Failed to read block groups: -5
>>>> [ 2248.939081] btrfs: open_ctree failed
>>>> [ 2253.829071] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5
>>>> transid 9096 /dev/sdf
>>>> [ 2253.879940] btrfs: disk space caching is enabled
>>>> [ 2261.754357] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2261.767118] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2261.767929] btrfs read error corrected: ino 1 off 5468060241920 (dev
>>>> /dev/sdb sector 2143292648)
>>>> [ 2261.767942] Failed to read block groups: -5
>>>> [ 2261.778219] btrfs: open_ctree failed
>>>> [ 2309.831415] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 1
>>>> transid 9096 /dev/sda
>>>> [ 2309.904520] btrfs: disk space caching is enabled
>>>> [ 2318.286463] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2318.302991] parent transid verify failed on 5468060241920 wanted 9096
>>>> found 7621
>>>> [ 2318.304000] btrfs read error corrected: ino 1 off 5468060241920 (dev
>>>> /dev/sdd sector 2143292648)
>>>> [ 2318.304013] Failed to read block groups: -5
>>>> [ 2318.314587] btrfs: open_ctree failed
>>>>
>>>> On 06/03/2012 10:16 PM, Liu Bo wrote:
>>>>> On 06/04/2012 09:43 AM, Maxim Mikheev wrote:
>>>>>
>>>>>> Hi Liu,
>>>>>>
>>>>>> thanks for the advice. I tried it before btrfsck. Results are below:
>>>>>> max@s0:~$ sudo mount /tank -o recovery
>>>>>> [sudo] password for max:
>>>>>> mount: wrong fs type, bad option, bad superblock on /dev/sdf,
>>>>>>          missing codepage or helper program, or other error
>>>>>>          In some cases useful info is found in syslog - try
>>>>>>          dmesg | tail  or so
>>>>>>
>>>>>> max@s0:~$ sudo mount -o recovery /tank
>>>>>> mount: wrong fs type, bad option, bad superblock on /dev/sdf,
>>>>>>          missing codepage or helper program, or other error
>>>>>>          In some cases useful info is found in syslog - try
>>>>>>          dmesg | tail  or so
>>>>>>
>>>>> Two possible ways:
>>>>>
>>>>> 1)
>>>>> I noticed that your btrfs had 5 partitions in all:
>>>>> mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
>>>>>
>>>>> Can you try to mount it by hand via the other member devices instead, like:
>>>>> mount /dev/sdb /tank
>>>>> mount /dev/sdc /tank
>>>>> mount /dev/sdd /tank
>>>>> mount /dev/sde /tank
>>>>> mount /dev/sdf /tank
>>>>>
>>>>> 2)
>>>>> use btrfs scrub, which can repair from the redundant metadata copies created by RAID1.
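
A sketch combining both suggestions (device names are the ones from the
mkfs line quoted above, /tank is the mount point from the fstab entry):

  # make sure the kernel has scanned all member devices
  sudo btrfs device scan

  # then try each member in turn with the recovery option
  for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
      sudo mount -o recovery "$dev" /tank && break
  done

  # only once a mount succeeds can scrub repair from the RAID1 metadata copies
  sudo btrfs scrub start /tank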
>>>>>
>>>>> thanks,
>>>>> liubo
>>>>>
>>>>>> dmesg after boot before mount -o recovery:
>>>>>> [   51.829352] parent transid verify failed on 5468060241920 wanted 9096
>>>>>> found 7621
>>>>>> [   51.841153] parent transid verify failed on 5468060241920 wanted 9096
>>>>>> found 7621
>>>>>> [   51.841603] btrfs read error corrected: ino 1 off 5468060241920 (dev
>>>>>> /dev/sdb sector 2143292648)
>>>>>> [   51.841610] Failed to read block groups: -5
>>>>>> [   51.848057] btrfs: open_ctree failed
>>>>>> ..............................
>>>>>> dmesg after both mounts:
>>>>>>
>>>>>> [  123.687773] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5
>>>>>> transid 9096 /dev/sdf
>>>>>> [  123.733678] btrfs: use lzo compression
>>>>>> [  123.733683] btrfs: enabling auto recovery
>>>>>> [  123.733686] btrfs: disk space caching is enabled
>>>>>> [  131.699910] parent transid verify failed on 5468060241920 wanted 9096
>>>>>> found 7621
>>>>>> [  131.714018] parent transid verify failed on 5468060241920 wanted 9096
>>>>>> found 7621
>>>>>> [  131.715059] btrfs read error corrected: ino 1 off 5468060241920 (dev
>>>>>> /dev/sdb sector 2143292648)
>>>>>> [  131.715072] Failed to read block groups: -5
>>>>>> [  131.727176] btrfs: open_ctree failed
>>>>>> [  161.697873] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5
>>>>>> transid 9096 /dev/sdf
>>>>>> [  161.746345] btrfs: use lzo compression
>>>>>> [  161.746354] btrfs: enabling auto recovery
>>>>>> [  161.746358] btrfs: disk space caching is enabled
>>>>>> [  169.720823] parent transid verify failed on 5468060241920 wanted 9096
>>>>>> found 7621
>>>>>> [  169.732048] parent transid verify failed on 5468060241920 wanted 9096
>>>>>> found 7621
>>>>>> [  169.732611] btrfs read error corrected: ino 1 off 5468060241920 (dev
>>>>>> /dev/sdb sector 2143292648)
>>>>>> [  169.732623] Failed to read block groups: -5
>>>>>> [  169.743437] btrfs: open_ctree failed
>>>>>>
>>>>>> So it does not work. I have seen in some posts the command:
>>>>>>
>>>>>> sudo mount -s 2 -o recovery /tank
>>>>>> Should I try it?
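
A side note on that -s switch: for mount(8) it means "sloppy" (tolerate
unknown options); it does not select an alternate superblock, which is
presumably what those posts meant. To merely inspect a superblock first,
something like this should work with a reasonably current btrfs-progs:

  sudo btrfs-show-super /dev/sdf   # dump the primary superblock of one member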
>>>>>>
>>>>>> Please help me, I need to get this data ASAP.
>>>>>>
>>>>>> Regards,
>>>>>>      Max
>>>>>>
>>>>>> On 06/03/2012 09:22 PM, Liu Bo wrote:
>>>>>>> On 06/02/2012 09:43 PM, Maxim Mikheev wrote:
>>>>>>>
>>>>>>>> Repair was not helpful.
>>>>>>>> Is any other ways to get access to data?
>>>>>>>>
>>>>>>>> Please help....
>>>>>>>>
>>>>>>> Hi Maxim,
>>>>>>>
>>>>>>> Besides btrfsck --repair, we also have a recovery mount option to deal
>>>>>>> with your situation; maybe you can try 'mount xxx -o recovery' and see
>>>>>>> if it helps?
>>>>>>>
>>>>>>>
>>>>>>> thanks,
>>>>>>> liubo
>>>>>>>
>>>>>>>> On 05/30/2012 11:15 PM, Michael K wrote:
>>>>>>>>> Let it run to completion. There is little you can do other than hope
>>>>>>>>> and wait.
>>>>>>>>>
>>>>>>>>> On May 30, 2012 9:02 PM, "Maxim Mikheev" <mikhmv@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>        btrfsck --repair has already been running for 26 hours.
>>>>>>>>>
>>>>>>>>>        Does it make sense to wait longer?
>>>>>>>>>
>>>>>>>>>        Thanks
>>>>>>>>>
>>>>>>>>>        On 05/29/2012 07:36 PM, cwillu wrote:
>>>>>>>>>
>>>>>>>>>            On Tue, May 29, 2012 at 5:24 PM, Maxim Mikheev
>>>>>>>>>            <mikhmv@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>                Thank you for your answer.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>                The system kernel was, and still is:
>>>>>>>>>
>>>>>>>>>                Linux s0 3.4.0-030400-generic #201205210521 SMP
>>>>>>>>>                Mon May 21 09:22:02 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>>>
>>>>>>>>>                the raid was created by:
>>>>>>>>>                mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
>>>>>>>>>
>>>>>>>>>                The disks are connected through a RocketRaid 2670.
>>>>>>>>>
>>>>>>>>>                for mounting I used this line in fstab:
>>>>>>>>>                UUID=c9776e19-37eb-4f9c-bd6b-04e8dde97682  /tank  btrfs  defaults,compress=lzo  0  1
>>>>>>>>>
>>>>>>>>>                The machine was running several virtual machines.
>>>>>>>>>                Only one was actively using the disks.
>>>>>>>>>
>>>>>>>>>                The VM had several active threads:
>>>>>>>>>                1. 2 threads reading big files (50GB each)
>>>>>>>>>                2. reading from 50 files and writing one big file
>>>>>>>>>                3. The kernel panic happened when I ran another program
>>>>>>>>>                   with 30 threads reading/writing small files.
>>>>>>>>>
>>>>>>>>>                The virtual machine accessed the underlying btrfs through
>>>>>>>>>                the 9p filesystem, which actively used xattrs.
>>>>>>>>>
>>>>>>>>>                After the reboot the system was in this state.
>>>>>>>>>
>>>>>>>>>                I hope that btrfsck --repair will not make it worse;
>>>>>>>>>                it is running now.
>>>>>>>>>
>>>>>>>>>            **twitch**
>>>>>>>>>
>>>>>>>>>            Well, I also hope it won't make it worse. Do not cancel it
>>>>>>>>>            now, let it finish (aborting it will make things worse), but
>>>>>>>>>            I suggest waiting until a few more people have weighed in
>>>>>>>>>            before attempting anything beyond that.
>>>>>>>>>

