public inbox for linux-xfs@vger.kernel.org
From: pg_xf2@xf2.for.sabi.co.UK (Peter Grandi)
To: Linux XFS <xfs@oss.sgi.com>
Subject: Re: xfs_growfs failure....
Date: Wed, 24 Feb 2010 17:10:25 +0000	[thread overview]
Message-ID: <19333.23937.956036.716@tree.ty.sabi.co.uk> (raw)
In-Reply-To: <E51793F9F4FAD54A8C1774933D8E500006B64A2687@sbapexch05>

> I am in some difficulty here over a 100TB filesystem

Shrewd idea! Because 'fsck' takes no time and memory, so the
bigger the filesystem the better! ;-).

> that is now unusable after a xfs_growfs command. [ ... ]

Wondering how long it took to backup 100TB; but of course doing
a 'grow' is guaranteed to be error free, so there :-).

> attempt to access beyond end of device dm-61: rw=0,
> want=238995038208, limit=215943192576

It looks like the underlying DM logical volume is smaller than
the new size of the filesystem, which is strange as 'xfs_growfs'
is supposed to fetch the size of the underlying block device if
none is specified explicitly on the command line. The difference
is about 10%, or roughly 10TiB, so it is far from trivial.
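As an illustration (not part of the original message), the two
counts in the kernel error are 512-byte sectors, and the gap can
be checked with plain shell arithmetic:

```shell
# Sector counts from the "attempt to access beyond end of device" line.
want=238995038208     # sector the read tried to reach
limit=215943192576    # sectors the dm-61 device actually provides
diff=$(( want - limit ))
tib=$(( 1024 ** 4 ))
echo "$diff sectors past the end"                # prints 23051845632
echo "$(( diff * 512 / tib )) TiB (truncated)"   # prints 10
echo "$(( diff * 100 / limit ))% of the device"  # prints 10
```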

Looking at the superblock dumps there are some pretty huge
discrepancies:

  -bash-3.1# xfs_db -r -c 'sb 0' -c p /dev/logfs-sessions/sessions
  magicnum = 0x58465342
  blocksize = 4096
  dblocks = 29874379776
  rblocks = 0
  rextents = 0
  uuid = fc8bdf76-d962-43c1-ae60-b85f378978a6
  logstart = 0
  rootino = 2048
  rbmino = 2049
  rsumino = 2050
  rextsize = 384
  agblocks = 268435328
  agcount = 112
  [ ... ]
  -bash-3.1# xfs_db -r -c 'sb 2' -c p /dev/logfs-sessions/sessions
  magicnum = 0x58465342
  blocksize = 4096
  dblocks = 24111418368
  rblocks = 0
  rextents = 0
  uuid = fc8bdf76-d962-43c1-ae60-b85f378978a6
  logstart = 0
  rootino = 2048
  rbmino = 2049
  rsumino = 2050
  rextsize = 384
  agblocks = 268435328
  agcount = 90
  [ ... ]

The 'dblocks' field is rather different, even though the 'uuid'
and 'agblocks' are the same, and 'agcount' is also rather
different. In SB 0, 'dblocks' 29874379776 (in 4KiB blocks) means
a size of 238995038208 512-byte sectors, which is exactly the
value of 'want' above. The products of 'agcount' and 'agblocks'
fit with the sizes.
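To see this, one can redo the products in the shell (all values
copied from the dumps above; nothing here is new data):

```shell
# Values from the two superblock dumps; filesystem blocks are 4096
# bytes, the kernel counts in 512-byte sectors.
blocksize=4096
agblocks=268435328
sb0_dblocks=29874379776   # sb 0, after the grow
sb2_dblocks=24111418368   # sb 2, before the grow

# dblocks converted to sectors reproduces the kernel's 'want' value.
echo $(( sb0_dblocks * blocksize / 512 ))   # prints 238995038208

# agcount * agblocks must just cover dblocks (the last AG may be short).
echo $(( 112 * agblocks ))   # prints 30064756736 (>= sb 0 dblocks)
echo $((  90 * agblocks ))   # prints 24159179520 (>= sb 2 dblocks)
```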

It looks like the filesystem was "grown" from ~90TiB to ~111TiB
on a storage device that is reported as ~100TiB long. Again,
very strange.
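For reference, those three sizes recomputed in whole TiB
(integer shell division truncates the fraction):

```shell
tib=$(( 1024 ** 4 ))
echo $(( 24111418368 * 4096 / tib ))   # prints 89  -- before the grow (sb 2)
echo $(( 29874379776 * 4096 / tib ))   # prints 111 -- after the grow (sb 0)
echo $(( 215943192576 * 512 / tib ))   # prints 100 -- what the device holds
```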

My impression is that not enough history/context has been
provided to enable a good guess at what has happened and how to
undo the consequent damage.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 9+ messages
2010-02-24 10:44 xfs_growfs failure Joe Allen
2010-02-24 11:54 ` Dave Chinner
     [not found]   ` <5A88945F-EAA8-4678-8ADA-7700E3FF607B@citrixonline.com>
2010-02-24 17:56     ` Jason Vagalatos
     [not found]     ` <84A4B16519BD4C43ABD91AFB3CB84B6F097ED42EE9@sbapexch05>
2010-02-24 18:08       ` Jason Vagalatos
2010-02-24 17:10 ` Peter Grandi [this message]
2010-02-24 18:37   ` Joe Allen
2010-02-24 21:34     ` Peter Grandi
2010-02-24 23:02     ` Dave Chinner
2010-02-25  8:27       ` Joe Allen
