public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* XFS_REPAIR on LVM partition
@ 2013-12-15 22:47 Rafael Weingartner
  2013-12-16  0:01 ` Dave Chinner
  0 siblings, 1 reply; 8+ messages in thread
From: Rafael Weingartner @ 2013-12-15 22:47 UTC (permalink / raw)
  To: xfs



Hi folks,
I am having some trouble with an XFS filesystem on an LVM volume. After an
unexpected reboot, I get the following message when I try to mount it:
*mount: Structure needs cleaning*


I tried "sudo xfs_check /dev/mapper/volume". Sadly, I got the message:
xfs_check: cannot init perag data (117)
*ERROR: The filesystem has valuable metadata changes in a log which needs
to be replayed.  Mount the filesystem to replay the log, and unmount it
before re-running xfs_check.  If you are unable to mount the filesystem,
then use the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a
mount of the filesystem before doing this.*

So, I tried:
xfs_repair -L

The command has been running for over 3 hours, and I still see just dots on
my screen; I have no idea what is happening. Any ideas on how I can get it
working again? Or at least a workaround that would let me extract the data
it contains.

The server runs Ubuntu Server 12.04.
The xfsprogs version is 3.1.7 (reported by xfs_info).
If you need I can provide you with more info.

Thanks,

-- 
Rafael Weingärtner


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: XFS_REPAIR on LVM partition
  2013-12-15 22:47 XFS_REPAIR on LVM partition Rafael Weingartner
@ 2013-12-16  0:01 ` Dave Chinner
  2013-12-16  0:34   ` Rafael Weingartner
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2013-12-16  0:01 UTC (permalink / raw)
  To: Rafael Weingartner; +Cc: xfs

On Sun, Dec 15, 2013 at 08:47:30PM -0200, Rafael Weingartner wrote:
> Hi folks,
> I am having some troubles with a XFS over one LVM partition. After an
> unexpected reboot, I am getting the following message when I try to mount
> it:
> *mount: Structure needs cleaning*

And the error in dmesg is?

> I tried "sudo xfs_check /dev/mapper/volume". Sadly, I got the message:
> xfs_check: cannot init perag data (117)

xfs_check is deprecated, please use "xfs_repair -n" instead.
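A sketch of the safe order of operations being recommended here, with
/dev/mapper/volume standing in as a placeholder for the affected device:

```shell
# 1. Read-only check: reports damage without modifying the filesystem.
xfs_repair -n /dev/mapper/volume

# 2. If the log needs replaying, try a mount/unmount cycle first so the
#    log is replayed rather than destroyed.
mount /dev/mapper/volume /mnt && umount /mnt

# 3. Only if the mount itself fails, zero the log and repair -- this can
#    discard committed metadata changes, so it is the last resort.
# xfs_repair -L /dev/mapper/volume
```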

> *ERROR: The filesystem has valuable metadata changes in a log which needs
> to*
> *be replayed.  Mount the filesystem to replay the log, and unmount it
> before*
> *re-running xfs_check.  If you are unable to mount the filesystem, then use*
> *the xfs_repair -L option to destroy the log and attempt a repair.*
> *Note that destroying the log may cause corruption -- please attempt a
> mount*
> *of the filesystem before doing this*
> 
> So, I tried:
> xfs_repair -L

Ok, so you went immediately for the big hammer. There's the
possibility that we might not be able to recover your filesystem from
whatever went wrong, now that the log has been zeroed.

> The command is running for over 3 hours and still just dots on my screen, I
> have no idea of what is happening. Any ideas how I can get it to work
> again? Or at least some work around that would enable me to extract the
> data that it contains.

I'm guessing it can't find or validate the primary superblock, so
it's looking for a secondary superblock. Please post the output of
the running repair so we can see exactly what it is doing.

Also, we need more information about your problem: why did the
machine reboot? What's your storage configuration? Your hardware,
etc.

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

> The server is a Ubuntu server 12.04.
> The XFS version is: xfs_info version 3.1.7
> If you need I can provide you with more info.

That's an old version of xfsprogs - you might want to start by
upgrading it to 3.11...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: XFS_REPAIR on LVM partition
  2013-12-16  0:01 ` Dave Chinner
@ 2013-12-16  0:34   ` Rafael Weingartner
  2013-12-16  3:05     ` Dave Chinner
  0 siblings, 1 reply; 8+ messages in thread
From: Rafael Weingartner @ 2013-12-16  0:34 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs



So, sadly, I went for the big hammer option; I thought there were no
other options ;).

I'm guessing it can't find or validate the primary superblock, so
> it's looking for a secondary superblock. Please post the output of
> the running repair so we can see exactly what it is doing.


That seems to be exactly what is happening.

*dmesg errors:*

> [   81.927888] Pid: 878, comm: mount Not tainted 3.5.0-44-generic
> #67~precise1-Ubuntu
> [   81.927891] Call Trace:
> [   81.927941]  [<ffffffffa01d460f>] xfs_error_report+0x3f/0x50 [xfs]
> [   81.927972]  [<ffffffffa01ecd66>] ? xfs_free_extent+0xe6/0x130 [xfs]
> [   81.927990]  [<ffffffffa01ea318>] xfs_free_ag_extent+0x528/0x730 [xfs]
> [   81.928007]  [<ffffffffa01e8e07>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
> [   81.928033]  [<ffffffffa01ecd66>] xfs_free_extent+0xe6/0x130 [xfs]
> [   81.928055]  [<ffffffffa021bb10>] xlog_recover_process_efi+0x170/0x1b0
> [xfs]
> [   81.928075]  [<ffffffffa021cd56>]
> xlog_recover_process_efis.isra.8+0x76/0xd0 [xfs]
> [   81.928097]  [<ffffffffa0220a17>] xlog_recover_finish+0x27/0xd0 [xfs]
> [   81.928119]  [<ffffffffa022812c>] xfs_log_mount_finish+0x2c/0x30 [xfs]
> [   81.928140]  [<ffffffffa0223620>] xfs_mountfs+0x420/0x6b0 [xfs]
> [   81.928156]  [<ffffffffa01e2ffd>] xfs_fs_fill_super+0x21d/0x2b0 [xfs]
> [   81.928163]  [<ffffffff8118b716>] mount_bdev+0x1c6/0x210
> [   81.928179]  [<ffffffffa01e2de0>] ? xfs_parseargs+0xb80/0xb80 [xfs]
> [   81.928194]  [<ffffffffa01e10a5>] xfs_fs_mount+0x15/0x20 [xfs]
> [   81.928198]  [<ffffffff8118c563>] mount_fs+0x43/0x1b0
> [   81.928202]  [<ffffffff811a5ee3>] ? find_filesystem+0x63/0x80
> [   81.928206]  [<ffffffff811a7246>] vfs_kern_mount+0x76/0x120
> [   81.928209]  [<ffffffff811a7c34>] do_kern_mount+0x54/0x110
> [   81.928212]  [<ffffffff811a9934>] do_mount+0x1a4/0x260
> [   81.928215]  [<ffffffff811a9e10>] sys_mount+0x90/0xe0
> [   81.928220]  [<ffffffff816a7729>] system_call_fastpath+0x16/0x1b
> [   81.928229] XFS (dm-0): Failed to recover EFIs
> [   81.928232] XFS (dm-0): log mount finish failed
> [   81.972741] XFS (dm-1): Mounting Filesystem
> [   82.195661] XFS (dm-1): Ending clean mount
> [   82.203627] XFS (dm-2): Mounting Filesystem
> [   82.479044] XFS (dm-2): Ending clean mount



Actually, the problem was a little more complicated. This LVM2 volume was
using a physical volume (PV) exported by a RAID NAS controller. The volume
exported by the controller was created as RAID 5; there was a hardware
failure in one of the HDs in the array, and the volume was unavailable
until we replaced the bad drive with a new one and the array rebuild
finished.

That's an old version of xfsprogs - you might want to start by
> upgrading it to 3.11...
>
So, a good next step would be to upgrade to xfsprogs 3.11 and run
xfs_repair again.


2013/12/15 Dave Chinner <david@fromorbit.com>

> On Sun, Dec 15, 2013 at 08:47:30PM -0200, Rafael Weingartner wrote:
> > Hi folks,
> > I am having some troubles with a XFS over one LVM partition. After an
> > unexpected reboot, I am getting the following message when I try to mount
> > it:
> > *mount: Structure needs cleaning*
>
> And the error in dmesg is?
>
> > I tried "sudo xfs_check /dev/mapper/volume". Sadly, I got the message:
> > xfs_check: cannot init perag data (117)
>
> xfs_check is deprecated, please use "xfs_repair -n" instead.
>
> > *ERROR: The filesystem has valuable metadata changes in a log which needs
> > to*
> > *be replayed.  Mount the filesystem to replay the log, and unmount it
> > before*
> > *re-running xfs_check.  If you are unable to mount the filesystem, then
> use*
> > *the xfs_repair -L option to destroy the log and attempt a repair.*
> > *Note that destroying the log may cause corruption -- please attempt a
> > mount*
> > *of the filesystem before doing this*
> >
> > So, I tried:
> > xfs_repair -L
>
> Ok, so you went immediately for the big hammer. There's the
> possibility that might not be able to recover your filesystem from
> whatever went wrong now that the log has been zeroed.
>
> > The command is running for over 3 hours and still just dots on my
> screen, I
> > have no idea of what is happening. Any ideas how I can get it to work
> > again? Or at least some work around that would enable me to extract the
> > data that it contains.
>
> I'm guessing it can't find or validate the primary superblock, so
> it's looking for a secondary superblock. Please post the output of
> the running repair so we can see exactly what it is doing.
>
> Also we need more information about your problem - why did the
> machine reboot? what's your storage configuration? You hardware,
> etc.
>
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>
> > The server is a Ubuntu server 12.04.
> > The XFS version is: xfs_info version 3.1.7
> > If you need I can provide you with more info.
>
> That's an old version of xfsprogs - you might want to start by
> upgrading it to 3.11...
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>



-- 
Rafael Weingärtner



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: XFS_REPAIR on LVM partition
  2013-12-16  0:34   ` Rafael Weingartner
@ 2013-12-16  3:05     ` Dave Chinner
  2013-12-16  8:52       ` Rafael Weingartner
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2013-12-16  3:05 UTC (permalink / raw)
  To: Rafael Weingartner; +Cc: xfs

On Sun, Dec 15, 2013 at 10:34:43PM -0200, Rafael Weingartner wrote:
> So, sadly I went for the big hammer option, I thought that there were no
> other options ;).
> 
> I'm guessing it can't find or validate the primary superblock, so
> > it's looking for a secondary superblock. Please post the output of
> > the running repair so we can see exactly what it is doing.
> 
> That is exactly what it seems that it is happening.
> 
> *dmesg erros:*
> 
> >  81.927888] Pid: 878, comm: mount Not tainted 3.5.0-44-generic
> > #67~precise1-Ubuntu
> > [   81.927891] Call Trace:
> > [   81.927941]  [<ffffffffa01d460f>] xfs_error_report+0x3f/0x50 [xfs]
> > [   81.927972]  [<ffffffffa01ecd66>] ? xfs_free_extent+0xe6/0x130 [xfs]
> > [   81.927990]  [<ffffffffa01ea318>] xfs_free_ag_extent+0x528/0x730 [xfs]
> > [   81.928007]  [<ffffffffa01e8e07>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
> > [   81.928033]  [<ffffffffa01ecd66>] xfs_free_extent+0xe6/0x130 [xfs]
> > [   81.928055]  [<ffffffffa021bb10>] xlog_recover_process_efi+0x170/0x1b0
> > [xfs]
> > [   81.928075]  [<ffffffffa021cd56>]
> > xlog_recover_process_efis.isra.8+0x76/0xd0 [xfs]
> > [   81.928097]  [<ffffffffa0220a17>] xlog_recover_finish+0x27/0xd0 [xfs]
> > [   81.928119]  [<ffffffffa022812c>] xfs_log_mount_finish+0x2c/0x30 [xfs]
> > [   81.928140]  [<ffffffffa0223620>] xfs_mountfs+0x420/0x6b0 [xfs]
> > [   81.928156]  [<ffffffffa01e2ffd>] xfs_fs_fill_super+0x21d/0x2b0 [xfs]
> > [   81.928163]  [<ffffffff8118b716>] mount_bdev+0x1c6/0x210
> > [   81.928179]  [<ffffffffa01e2de0>] ? xfs_parseargs+0xb80/0xb80 [xfs]
> > [   81.928194]  [<ffffffffa01e10a5>] xfs_fs_mount+0x15/0x20 [xfs]
> > [   81.928198]  [<ffffffff8118c563>] mount_fs+0x43/0x1b0
> > [   81.928202]  [<ffffffff811a5ee3>] ? find_filesystem+0x63/0x80
> > [   81.928206]  [<ffffffff811a7246>] vfs_kern_mount+0x76/0x120
> > [   81.928209]  [<ffffffff811a7c34>] do_kern_mount+0x54/0x110
> > [   81.928212]  [<ffffffff811a9934>] do_mount+0x1a4/0x260
> > [   81.928215]  [<ffffffff811a9e10>] sys_mount+0x90/0xe0
> > [   81.928220]  [<ffffffff816a7729>] system_call_fastpath+0x16/0x1b
> > [   81.928229] XFS (dm-0): Failed to recover EFIs
> > [   81.928232] XFS (dm-0): log mount finish failed
> > [   81.972741] XFS (dm-1): Mounting Filesystem
> > [   82.195661] XFS (dm-1): Ending clean mount
> > [   82.203627] XFS (dm-2): Mounting Filesystem
> > [   82.479044] XFS (dm-2): Ending clean mount
> 
> Actually, the problem was a little bit more complicated. This LVM2
> partition, was using a physical device (PV) that is exported by a RAID NAS
> controller.

What's a "RAID NAS controller"? Details, please, or we can't help
you.

> This volume exported by the controller was created using a RAID
> 5, there was a hardware failure in one of the HDs of the array and the
> volume got unavailable, till we replaced the bad driver with a new one and
> the array rebuild finished.

So, hardware RAID5, lost a drive, rebuild on replace, filesystem in
a bad way after rebuild?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: XFS_REPAIR on LVM partition
  2013-12-16  3:05     ` Dave Chinner
@ 2013-12-16  8:52       ` Rafael Weingartner
  2013-12-16 12:54         ` Dave Chinner
  0 siblings, 1 reply; 8+ messages in thread
From: Rafael Weingartner @ 2013-12-16  8:52 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs



What's a "RAID NAS controller"? Details, please, or we can't help
> you.


Maybe I am not expressing myself clearly. This is what I meant:
http://www.starline.de/produkte/raid-systeme/infortrend-raid-systeme/eonstor/es-a08u-g2421/

It is a piece of hardware that we use to apply RAID (normally 1 or 5) over
physical disks, instead of plugging them into the storage server and
applying RAID in software. It exports the volumes over a SCSI channel.
The devices appear on the server as normal sd* devices, as if they were
ordinary physical disks.

So, hardware RAID5, lost a drive, rebuild on replace, filesystem in
> a bad way after rebuild?


That is exactly what happened: the RAID 5 array lost a drive, and after we
replaced it and the rebuild finished, the filesystem would not mount
anymore. Theoretically, this should not have affected the filesystem,
since RAID 5 should have recovered any lost information.



2013/12/16 Dave Chinner <david@fromorbit.com>

> On Sun, Dec 15, 2013 at 10:34:43PM -0200, Rafael Weingartner wrote:
> > So, sadly I went for the big hammer option, I thought that there were no
> > other options ;).
> >
> > I'm guessing it can't find or validate the primary superblock, so
> > > it's looking for a secondary superblock. Please post the output of
> > > the running repair so we can see exactly what it is doing.
> >
> > That is exactly what it seems that it is happening.
> >
> > *dmesg erros:*
> >
> > >  81.927888] Pid: 878, comm: mount Not tainted 3.5.0-44-generic
> > > #67~precise1-Ubuntu
> > > [   81.927891] Call Trace:
> > > [   81.927941]  [<ffffffffa01d460f>] xfs_error_report+0x3f/0x50 [xfs]
> > > [   81.927972]  [<ffffffffa01ecd66>] ? xfs_free_extent+0xe6/0x130 [xfs]
> > > [   81.927990]  [<ffffffffa01ea318>] xfs_free_ag_extent+0x528/0x730
> [xfs]
> > > [   81.928007]  [<ffffffffa01e8e07>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
> > > [   81.928033]  [<ffffffffa01ecd66>] xfs_free_extent+0xe6/0x130 [xfs]
> > > [   81.928055]  [<ffffffffa021bb10>]
> xlog_recover_process_efi+0x170/0x1b0
> > > [xfs]
> > > [   81.928075]  [<ffffffffa021cd56>]
> > > xlog_recover_process_efis.isra.8+0x76/0xd0 [xfs]
> > > [   81.928097]  [<ffffffffa0220a17>] xlog_recover_finish+0x27/0xd0
> [xfs]
> > > [   81.928119]  [<ffffffffa022812c>] xfs_log_mount_finish+0x2c/0x30
> [xfs]
> > > [   81.928140]  [<ffffffffa0223620>] xfs_mountfs+0x420/0x6b0 [xfs]
> > > [   81.928156]  [<ffffffffa01e2ffd>] xfs_fs_fill_super+0x21d/0x2b0
> [xfs]
> > > [   81.928163]  [<ffffffff8118b716>] mount_bdev+0x1c6/0x210
> > > [   81.928179]  [<ffffffffa01e2de0>] ? xfs_parseargs+0xb80/0xb80 [xfs]
> > > [   81.928194]  [<ffffffffa01e10a5>] xfs_fs_mount+0x15/0x20 [xfs]
> > > [   81.928198]  [<ffffffff8118c563>] mount_fs+0x43/0x1b0
> > > [   81.928202]  [<ffffffff811a5ee3>] ? find_filesystem+0x63/0x80
> > > [   81.928206]  [<ffffffff811a7246>] vfs_kern_mount+0x76/0x120
> > > [   81.928209]  [<ffffffff811a7c34>] do_kern_mount+0x54/0x110
> > > [   81.928212]  [<ffffffff811a9934>] do_mount+0x1a4/0x260
> > > [   81.928215]  [<ffffffff811a9e10>] sys_mount+0x90/0xe0
> > > [   81.928220]  [<ffffffff816a7729>] system_call_fastpath+0x16/0x1b
> > > [   81.928229] XFS (dm-0): Failed to recover EFIs
> > > [   81.928232] XFS (dm-0): log mount finish failed
> > > [   81.972741] XFS (dm-1): Mounting Filesystem
> > > [   82.195661] XFS (dm-1): Ending clean mount
> > > [   82.203627] XFS (dm-2): Mounting Filesystem
> > > [   82.479044] XFS (dm-2): Ending clean mount
> >
> > Actually, the problem was a little bit more complicated. This LVM2
> > partition, was using a physical device (PV) that is exported by a RAID
> NAS
> > controller.
>
> What's a "RAID NAS controller"? Details, please, or we can't help
> you.
>
> > This volume exported by the controller was created using a RAID
> > 5, there was a hardware failure in one of the HDs of the array and the
> > volume got unavailable, till we replaced the bad driver with a new one
> and
> > the array rebuild finished.
>
> So, hardware RAID5, lost a drive, rebuild on replace, filesystem in
> a bad way after rebuild?
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>



-- 
Rafael Weingärtner



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: XFS_REPAIR on LVM partition
  2013-12-16  8:52       ` Rafael Weingartner
@ 2013-12-16 12:54         ` Dave Chinner
       [not found]           ` <CAG97raeP-0QEhhYjDX_DDxzS3TN_brRSU6G+j-+V3KEuJ7Ym6Q@mail.gmail.com>
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2013-12-16 12:54 UTC (permalink / raw)
  To: Rafael Weingartner; +Cc: xfs

On Mon, Dec 16, 2013 at 06:52:23AM -0200, Rafael Weingartner wrote:
> What's a "RAID NAS controller"? Details, please, or we can't help
> > you.
> 
> 
> Maybe I am not expressing my self clearly. That is what I meant:
> http://www.starline.de/produkte/raid-systeme/infortrend-raid-systeme/eonstor/es-a08u-g2421/
> 
> It is a piece of hardware that we use to apply RAIDx (normally 1 or 5) over
> physical disks instead of plugging them on the storage server and applying
> RAID via software or something else. It exports the volumes using an SCSI
> channel.  The devices are seen on the server as normal sd*, as if they were
> normal physical devices.
> 
> So, hardware RAID5, lost a drive, rebuild on replace, filesystem in
> > a bad way after rebuild?
> 
> 
> That is exactly what happened, the RAID5 array lost a drive, and after we
> replaced and rebuild it, the filesystem was not mounting anymore.
> Teorically, this should not affect the filesystem since the RAID5 would
> have recovered any lost information.

In theory, yes. But we hear about "successful" hardware RAID
rebuilds from situations like yours that result in corrupt
filesystems, i.e. the hardware RAID rebuild didn't actually recover
properly...

So, what are the contents of the primary XFS superblock on that
device? What does this give:

# dd if=<dev> bs=512 count=1 | hexdump -C
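For reference, on a healthy filesystem the first four bytes of that sector
are the superblock magic "XFSB" (0x58465342). A minimal sketch against a
synthetic sector (a zero-padded stand-in, not a real superblock) shows the
shape of the output without touching a device:

```shell
# Build a fake 512-byte "first sector" whose only valid content is the
# XFS magic (octal escapes \130\106\123\102 spell "XFSB").
printf '\130\106\123\102' > /tmp/fake_sb.bin
dd if=/dev/zero bs=1 count=508 >> /tmp/fake_sb.bin 2>/dev/null

# Inspect it the same way as the real command; the first line should
# start with bytes 58 46 53 42.
dd if=/tmp/fake_sb.bin bs=512 count=1 2>/dev/null | od -A x -t x1 | head -n 1
```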

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: XFS_REPAIR on LVM partition
       [not found]             ` <CAG97raf7Na5UwzREJ_C9nYJ64r7PPwkhp_qcPiGVnKqu+ujAgw@mail.gmail.com>
@ 2013-12-18 21:34               ` Dave Chinner
  2013-12-18 23:29                 ` Rafael Weingartner
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2013-12-18 21:34 UTC (permalink / raw)
  To: Rafael Weingartner; +Cc: xfs

[ cc'd the XFS list again - please keep problem triage on the public
lists so that more than one person can help you. ]

On Mon, Dec 16, 2013 at 02:53:03PM -0200, Rafael Weingartner wrote:
> Today, I let the command xfs_repair /dev/... ran till it finished, I got
> the following messages:
> 
> Phase 1 - Find and verify superblock ....
> > Could not verify primary superblock - not enough secondary superblocks
> > with matching geometry!!

The primary superblock dump looked valid, but it couldn't find
matching secondary superblocks from the contents of the primary that
it found.

> > attempting to find a secondary superblock.....
> > ...
> > ..
> > ...
> > ...
> > found candidate secondary superblock....
> > unable to verify superblock, continuing....

And it found blocks with the correct superblock magic numbers, but
they don't match the primary superblock that was found.

> > found candidate secondary superblock....
> > unable to verify superblock, continuing...
> > ...
> > ...
> > ..............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Sorry,
> > could not find valid secondary superblock
> > Exiting now.
> 
> 
> Should I upgrade the xfsprogs and try to run the xfs_repair again?
> Or does that message mean that there is no way of recovering the filesystem?

It's still possible to recover, but more info is needed first.
Can you get xfs_db to dump the primary and a couple of secondary
superblocks?

# xfs_db -c "sb 0" -c p -c "sb 2" -c p -c "sb 7" -c p <dev>

And post the output?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: XFS_REPAIR on LVM partition
  2013-12-18 21:34               ` Dave Chinner
@ 2013-12-18 23:29                 ` Rafael Weingartner
  0 siblings, 0 replies; 8+ messages in thread
From: Rafael Weingartner @ 2013-12-18 23:29 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs



>
> [ cc'd the XFS list again - please keep problem triage on the public
> lists so that more than one person can help you. ]


My bad, I am sorry; sometimes I forget to press the reply-to-all button.

It's still possible to recover, but more info is needed first.
> Can you get xfs_db to dump the primary and a couple of secondary
> superblocks?


I am glad to hear that. I have actually already extracted the files I
needed using UFS Explorer, but I would still like to restore the
filesystem if possible.

Output of "xfs_db -c "sb 0" -c p -c "sb 2" -c p -c "sb 7" -c p <dev>"

xfs_db: cannot init perag data (117)
> magicnum = 0x58465342
> blocksize = 4096
> dblocks = 131072000
> rblocks = 0
> rextents = 0
> uuid = 430d1253-0d52-401b-b6bd-42e23bb56bc3
> logstart = 67108868
> rootino = 128
> rbmino = 129
> rsumino = 130
> rextsize = 1
> agblocks = 32768000
> agcount = 4
> rbmblocks = 0
> logblocks = 64000
> versionnum = 0xb4a4
> sectsize = 512
> inodesize = 256
> inopblock = 16
> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
> blocklog = 12
> sectlog = 9
> inodelog = 8
> inopblog = 4
> agblklog = 25
> rextslog = 0
> inprogress = 0
> imax_pct = 25
> icount = 256
> ifree = 200
> fdblocks = 127617831
> frextents = 0
> uquotino = 0
> gquotino = 0
> qflags = 0
> flags = 0
> shared_vn = 0
> inoalignmt = 2
> unit = 0
> width = 0
> dirblklog = 0
> logsectlog = 0
> logsectsize = 0
> logsunit = 1
> features2 = 0xa
> bad_features2 = 0xa
> magicnum = 0x58465350
> blocksize = 4096
> dblocks = 7566328849834176030
> rblocks = 70481084416
> rextents = 5638878729879945216
> uuid = 5de3e560-0f52-401b-854a-acfc3bb56bfb
> logstart = 7566047375040578052
> rootino = 18374686479671613439
> rbmino = 18446744071377518559
> rsumino = null
> rextsize = 1
> agblocks = 32768000
> agcount = 4
> rbmblocks = 1073782799
> logblocks = 64000
> versionnum = 0xa4a4
> sectsize = 512
> inodesize = 267
> inopblock = 16
> fname = "3\367\356\036\000\000\000`i\000\000\000"
> blocklog = 66
> sectlog = 64
> inodelog = 190
> inopblog = 133
> agblklog = 27
> rextslog = 3
> inprogress = 1
> imax_pct = 25
> icount = 0
> ifree = 72057594037927936
> fdblocks = 131054064
> frextents = 5891388165771235349
> uquotino = 13746228866238942976
> gquotino = 13746228866238942976
> qflags = 0xd0de
> flags = 0x17
> shared_vn = 97
> inoalignmt = 33554434
> unit = 418391552
> width = 0
> dirblklog = 0
> logsectlog = 0
> logsectsize = 0
> logsunit = 506003457
> features2 = 0x8
> bad_features2 = 0xa
> bad allocation group number 7
> magicnum = 0x58465350
> blocksize = 4096
> dblocks = 7566328849834176030
> rblocks = 70481084416
> rextents = 5638878729879945216
> uuid = 5de3e560-0f52-401b-854a-acfc3bb56bfb
> logstart = 7566047375040578052
> rootino = 18374686479671613439
> rbmino = 18446744071377518559
> rsumino = null
> rextsize = 1
> agblocks = 32768000
> agcount = 4
> rbmblocks = 1073782799
> logblocks = 64000
> versionnum = 0xa4a4
> sectsize = 512
> inodesize = 267
> inopblock = 16
> fname = "3\367\356\036\000\000\000`i\000\000\000"
> blocklog = 66
> sectlog = 64
> inodelog = 190
> inopblog = 133
> agblklog = 27
> rextslog = 3
> inprogress = 1
> imax_pct = 25
> icount = 0
> ifree = 72057594037927936
> fdblocks = 131054064
> frextents = 5891388165771235349
> uquotino = 13746228866238942976
> gquotino = 13746228866238942976
> qflags = 0xd0de
> flags = 0x17
> shared_vn = 97
> inoalignmt = 33554434
> unit = 418391552
> width = 0
> dirblklog = 0
> logsectlog = 0
> logsectsize = 0
> logsunit = 506003457
> features2 = 0x8
> bad_features2 = 0xa
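One detail stands out in that dump: the primary superblock reports
magicnum 0x58465342 (ASCII "XFSB", the valid value), while the candidates
at sb 2 and sb 7 report 0x58465350 ("XFSP"), a different UUID, and
impossible geometry (blocklog = 66, inodesize = 267), which is consistent
with repair's "unable to verify superblock" messages. A quick sanity check
of the two magic values (nothing XFS-specific here; octal escapes keep the
printf portable):

```shell
# Decode both 32-bit magic numbers from the xfs_db dump to ASCII.
printf '\130\106\123\102\n'   # 0x58465342 -> "XFSB", the valid magic
printf '\130\106\123\120\n'   # 0x58465350 -> "XFSP", not a valid magic
```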




On Wed, Dec 18, 2013 at 7:34 PM, Dave Chinner <david@fromorbit.com> wrote:

> [ cc'd the XFS list again - please keep problem triage on the public
> lists so that more than one person can help you. ]
>
> On Mon, Dec 16, 2013 at 02:53:03PM -0200, Rafael Weingartner wrote:
> > Today, I let the command xfs_repair /dev/... ran till it finished, I got
> > the following messages:
> >
> > Phase 1 - Find and verify superblock ....
> > > Could not verify primary superblock - not enough secondary superblocks
> > > with matching geometry!!
>
> The primary superblock dump looked valid, but it couldn't find
> matching secondary superblocks from the contents of the primary that
> it found.
>
> > > attempting to find a secondary superblock.....
> > > ...
> > > ..
> > > ...
> > > ...
> > > found candidate secondary superblock....
> > > unable to verify superblock, continuing....
>
> And it found blocks with the correct superblock magic numbers, but
> they don't match the primary superblock that was found.
>
> > > found candidate secondary superblock....
> > > unable to verify superblock, continuing...
> > > ...
> > > ...
> > >
> ..............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Sorry,
> > > could not find valid secondary superblock
> > > Exiting now.
> >
> >
> > Should I upgrade the xfsprogs and try to run the xfs_repair again?
> > Or does that message mean that there is no way of recovering the
> filesystem?
>
> It's still possible to recover, but more info is needed first.
> Can you get xfs_db to dump the primary and a couple of secondary
> superblocks?
>
> # xfs_db -c "sb 0" -c p -c "sb 2" -c p -c "sb 7" -c p <dev>
>
> And post the output?
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>



-- 
Rafael Weingärtner



^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2013-12-18 23:29 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-12-15 22:47 XFS_REPAIR on LVM partition Rafael Weingartner
2013-12-16  0:01 ` Dave Chinner
2013-12-16  0:34   ` Rafael Weingartner
2013-12-16  3:05     ` Dave Chinner
2013-12-16  8:52       ` Rafael Weingartner
2013-12-16 12:54         ` Dave Chinner
     [not found]           ` <CAG97raeP-0QEhhYjDX_DDxzS3TN_brRSU6G+j-+V3KEuJ7Ym6Q@mail.gmail.com>
     [not found]             ` <CAG97raf7Na5UwzREJ_C9nYJ64r7PPwkhp_qcPiGVnKqu+ujAgw@mail.gmail.com>
2013-12-18 21:34               ` Dave Chinner
2013-12-18 23:29                 ` Rafael Weingartner

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox