public inbox for linux-xfs@vger.kernel.org
From: "David Bernick" <dbernick@gmail.com>
To: Eric Sandeen <sandeen@sandeen.net>
Cc: xfs@oss.sgi.com
Subject: Re: xfs_repair problem.
Date: Sun, 21 Dec 2008 17:08:58 -0500	[thread overview]
Message-ID: <7bcfcfff0812211408u6e08bf81r1c19ab5ba938b0e2@mail.gmail.com> (raw)
In-Reply-To: <494E766B.5080102@sandeen.net>

So I ran xfs_repair -v on my filesystem. The FS was originally 12T and 95%
full, and xfs_repair threw many "out of space" errors while it was running.
That makes some sense.

I expanded it onto a new device with xfs_growfs. It seems to have worked,
because the disk is now bigger when I mount it.

When I ran xfs_repair on that (latest version), it reverted back to the
original size. Is there anything I need to do to make the xfs_growfs
permanent?

If I can make the XFS partition bigger, I can likely make this work, because
then it won't run out of space! But the partition, despite appearing
"bigger" here, doesn't seem to "take" after the xfs_repair. Any ideas?
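(For anyone reading along: one way to check whether a grow actually reached the on-disk superblock is to read it with xfs_db and do the size arithmetic yourself. This is a sketch, not a definitive procedure; the device path is the LV from the lvdisplay output below, and the numbers are taken from the sb 0 dump in this message.)

```shell
# Sketch: did xfs_growfs persist to disk?
# Read the primary superblock without mounting (uncomment to run against
# a real device; /dev/docs/v5Docs is the LV path from lvdisplay below):
#   xfs_db -r -c 'sb 0' -c 'print dblocks blocksize' /dev/docs/v5Docs
#
# The filesystem size is dblocks * blocksize. Plugging in the values
# from the sb 0 dump in this message:
dblocks=6444900352
blocksize=4096
bytes=$(( dblocks * blocksize ))
# 1 TiB = 1099511627776 bytes
echo "superblock says: $(( bytes / 1099511627776 )) TiB"
# If this still matched the pre-grow 12T, the grow never hit disk.
```

(Note that by this arithmetic the sb 0 dump here already reports the post-grow size.)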

Below is the xfs_db output and some other useful information. First, the
partition sizes from /proc/partitions (major, minor, size in 1 KiB blocks):
   8    17 12695312483 sdb1
   8    33 8803844062 sdc1
   8    34 4280452000 sdc2

 --- Logical volume ---
  LV Name                /dev/docs/v5Docs
  VG Name                docs
  LV UUID                G85Zi9-s63C-yWrU-yyf0-STP6-YOhJ-6Ne3pS
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                24.01 TB
  Current LE             6293848
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
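(Assuming the three-line listing above is /proc/partitions output with sizes in 1 KiB blocks, the devices do add up to the 24.01 TB the LV reports — a quick sanity check:)

```shell
# Sum the /proc/partitions block counts (1 KiB units) for the three
# devices above and convert to TiB; this should match the LV size.
sdb1=12695312483
sdc1=8803844062
sdc2=4280452000
total_kib=$(( sdb1 + sdc1 + sdc2 ))
# 1 TiB = 1073741824 KiB
echo "total: $(( total_kib / 1073741824 )) TiB"
```

(The exact figure is ~24.01 TiB, matching the "LV Size 24.01 TB" line; lvdisplay's "TB" is binary here.)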
xfs_db> sb 0
xfs_db> print
magicnum = 0x58465342
blocksize = 4096
dblocks = 6444900352
rblocks = 0
rextents = 0
uuid = f086bb71-d67b-4cc1-b622-1f10349e6a49
logstart = 1073741828
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 67108864
agcount = 97
rbmblocks = 0
logblocks = 32768
versionnum = 0x3084
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 26
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 149545792
ifree = 274
fdblocks = 4275133930
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0
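(A cross-check on the dump above: the AG geometry is internally consistent. The first agcount-1 allocation groups are full at agblocks each, and the last AG holds the remainder. Using only fields printed above:)

```shell
# Cross-check the sb 0 fields: agcount-1 full AGs plus a partial last AG
# must account for all of dblocks.
agblocks=67108864      # 2^26 blocks per AG
agcount=97
blocksize=4096
dblocks=6444900352

# 1 GiB = 1073741824 bytes
echo "AG size: $(( agblocks * blocksize / 1073741824 )) GiB"
last_ag=$(( dblocks - (agcount - 1) * agblocks ))
echo "blocks in last AG: $last_ag"
# sanity: the last AG must be non-empty and no larger than a full AG
[ "$last_ag" -gt 0 ] && [ "$last_ag" -le "$agblocks" ] && echo "geometry OK"
```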


On Sun, Dec 21, 2008 at 12:01 PM, Eric Sandeen <sandeen@sandeen.net> wrote:

> David Bernick wrote:
> > Thanks for the help so far:
> >
> > My output was from "sb 0". Thanks for reminding me to be explicit.
> >
> > The system is a 64-bit system with 32 GB of RAM. It's going through the
> > FS right now with xfs_repair.
> > The xfs_repair output says "agno = 3", and about 81.6% of RAM is used by
> > the process. Think 32 G will be enough to handle this task?
> > I actually don't KNOW the original error from the grow, unfortunately; I
> > came into this late.
> >
> > We're using repair 2.9.4. Worth getting a more recent version?
>
> 2.9.8 had some memory usage improvements (reductions) for repair IIRC
>
> > Kernel is - 2.6.18-92.1.1.el5
>
> heh; RHEL5 does not support xfs ;)
>
> You probably hit:
>
> TAKE 959978 - growing an XFS filesystem by more than 2TB is broken
> http://oss.sgi.com/archives/xfs/2007-01/msg00053.html
>
> I'd see if you can get centos to backport that fix (I assume you're
> using centos or at least their kernel module; if not you can backport it
> yourself...)
>
> > I "backed off" by vgsplit-ing the new physical device from the original
> > vgroup, so I was left with my original partition. I am hoping to mount
> > the original device since the "expanded" fs didn't work. I am hoping
> > xfs_repair helps that.
>
> well, you don't want to take out part of the device if the fs thinks it
> owns it now, but from the db output I think you still have the smaller
> size.
>
> I'd read through:
>
> http://oss.sgi.com/archives/xfs/2008-01/msg00085.html
>
> and see if it helps you recover.
>
> -Eric
>



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 12+ messages
2008-12-21 15:03 xfs_repair problem David Bernick
2008-12-21 15:55 ` Eric Sandeen
2008-12-21 16:52   ` David Bernick
2008-12-21 17:01     ` Eric Sandeen
2008-12-21 17:07       ` David Bernick
2008-12-21 22:08       ` David Bernick [this message]
2008-12-21 22:14         ` David Bernick
2008-12-21 22:20         ` Eric Sandeen
2008-12-21 17:12     ` Eric Sandeen
