From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail-ob0-f174.google.com ([209.85.214.174]:54095 "EHLO mail-ob0-f174.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1754633Ab2EaCCT (ORCPT ); Wed, 30 May 2012 22:02:19 -0400
Received: by obbtb18 with SMTP id tb18so627015obb.19 for ; Wed, 30 May 2012 19:02:18 -0700 (PDT)
Message-ID: <4FC6D128.60308@gmail.com>
Date: Wed, 30 May 2012 22:02:16 -0400
From: Maxim Mikheev 
MIME-Version: 1.0
To: cwillu 
CC: linux-btrfs@vger.kernel.org
Subject: Re: Help with data recovering
References: <4FC54A5D.8000600@gmail.com> <4FC55043.4050004@gmail.com> <4FC55AB7.3000903@gmail.com>
In-Reply-To: 
Content-Type: text/plain; charset=UTF-8; format=flowed
Sender: linux-btrfs-owner@vger.kernel.org
List-ID: 

btrfsck --repair has already been running for 26 hours. Does it make sense to keep waiting?

Thanks

On 05/29/2012 07:36 PM, cwillu wrote:
> On Tue, May 29, 2012 at 5:24 PM, Maxim Mikheev wrote:
>> Thank you for your answer.
>>
>> The system kernel was and still is:
>>
>> Linux s0 3.4.0-030400-generic #201205210521 SMP Mon May 21 09:22:02 UTC 2012
>> x86_64 x86_64 x86_64 GNU/Linux
>>
>> The RAID was created with:
>> mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
>>
>> The disks are connected through a RocketRaid 2670.
>>
>> For mounting I used this line in fstab:
>> UUID=c9776e19-37eb-4f9c-bd6b-04e8dde97682 /tank btrfs
>> defaults,compress=lzo 0 1
>>
>> Several virtual machines were running on the machine. Only one was
>> actively using the disks.
>>
>> That VM had several active threads:
>> 1. 2 threads reading big files (50 GB each)
>> 2. reading from 50 files and writing one big file
>> 3. The kernel panic happened when I ran another program with 30 threads
>> doing reads and writes of small files.
>>
>> The virtual machine accessed the underlying btrfs through the 9p file
>> system, which made heavy use of xattrs.
>>
>> After the reboot the system was in this state.
>>
>> I hope that btrfsck --repair will not make it worse; it is now running.
> **twitch**
>
> Well, I also hope it won't make it worse. Do not cancel it now; let
> it finish (aborting it will make things worse), but I suggest waiting
> until a few more people have weighed in before attempting anything
> beyond that.
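[Editor's note, not from the thread: the advice above is to let the repair finish. One rough way to judge whether a long-running btrfsck is still doing work, rather than hung, is to watch whether the process keeps accumulating CPU time. The sketch below is an assumption about a typical Linux environment, not something the participants ran; the pgrep lookup and the 2-second window are illustrative, and it falls back to the current shell's PID so the example is runnable even when no btrfsck is active.]

```shell
# Find the oldest btrfsck process; fall back to this shell's own PID
# purely so the sketch runs on a machine with no repair in progress.
pid=$(pgrep -o btrfsck || echo $$)

# Fields 14 and 15 of /proc/PID/stat are utime and stime (clock ticks).
t1=$(awk '{print $14 + $15}' "/proc/$pid/stat")
sleep 2
t2=$(awk '{print $14 + $15}' "/proc/$pid/stat")

echo "CPU ticks used over 2s by PID $pid: $((t2 - t1))"
```

A steadily growing tick count across repeated samples suggests the repair is still making progress; watching per-disk activity on the member devices (e.g. with iostat) gives a similar signal without touching the filesystem itself.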