public inbox for linux-xfs@vger.kernel.org
From: Michael Maier <m1278468@allmail.net>
To: Dave Chinner <david@fromorbit.com>
Cc: Eric Sandeen <sandeen@sandeen.net>, xfs@oss.sgi.com
Subject: Re: Failure growing xfs with linux 3.10.5
Date: Tue, 13 Aug 2013 16:55:00 +0200	[thread overview]
Message-ID: <520A48C4.6060801@allmail.net> (raw)
In-Reply-To: <20130813005414.GT12779@dastard>

Dave Chinner wrote:
> On Mon, Aug 12, 2013 at 06:50:55PM +0200, Michael Maier wrote:
>> Meanwhile, I faced another problem on another xfs file system with linux
>> 3.10.5 which I had never seen before. While writing a few bytes to disk,
>> I got "disk full" and the write failed.
>>
>> At the same time, df reported 69G of free space! I ran xfs_repair -n and
>> got:
>>
>>
>> xfs_repair -n /dev/mapper/raid0-daten2
>> Phase 1 - find and verify superblock...
>> Phase 2 - using internal log
>>         - scan filesystem freespace and inode maps...
>> sb_ifree 591, counted 492
>> ^^^^^^^^^^^^^^^^^^^^^^^^^
>> What does this mean? How can I get rid of it without losing data? This
>> file system was created a few days ago and has never been resized.
> 
> Superblock inode counting is lazy - it can get out of sync after
> an unclean shutdown, but generally mounting a dirty filesystem will
> result in it being recalculated rather than trusted to be correct.
> So there's nothing to worry about here.

When will it heal itself? I can still see it today, after 4 remounts!
This is strange, and I can't use the free space, which I need. How can it
be forced to repair itself without losing data?
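[Editorial note, not part of the original mail: the usual way to force the
lazy superblock counters to be recalculated is to run xfs_repair *without*
-n against the unmounted device. A dry-run sketch, reusing the device path
from the xfs_repair output quoted above — the script only echoes the
commands, so nothing is modified:]

```shell
#!/bin/sh
# Dry-run sketch: print the command sequence that would rebuild the lazy
# superblock counters (sb_ifree etc.). Remove the echoes to run it for
# real, as root, and only after taking a backup.
DEV=/dev/mapper/raid0-daten2    # device path from the output quoted above

echo umount "$DEV"              # xfs_repair refuses to run on a mounted fs
echo xfs_repair "$DEV"          # no -n: actually writes the corrections
echo mount "$DEV"               # remount (assumes an /etc/fstab entry)
```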


Thanks,
kind regards,
Michael

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 26+ messages
2013-08-11  7:11 Failure growing xfs with linux 3.10.5 Michael Maier
2013-08-11 18:36 ` Eric Sandeen
2013-08-12 16:50   ` Michael Maier
2013-08-13  0:54     ` Dave Chinner
2013-08-13 14:55       ` Michael Maier [this message]
2013-08-14  5:43         ` Dave Chinner
2013-08-14 15:16           ` Michael Maier
2013-08-15  0:58             ` Dave Chinner
2013-08-15 18:14               ` Michael Maier
     [not found]   ` <52090C6C.6060604@allmail.net>
2013-08-13  0:04     ` Dave Chinner
2013-08-13 15:30       ` Michael Maier
2013-08-14  5:53         ` Stan Hoeppner
2013-08-14 15:05           ` Michael Maier
2013-08-14 17:31             ` Stan Hoeppner
2013-08-14 18:13               ` Michael Maier
2013-08-14 22:20                 ` Stan Hoeppner
2013-08-15 17:05                   ` Michael Maier
2013-08-14  6:20         ` Dave Chinner
2013-08-14 16:20           ` Michael Maier
2013-08-14 16:37             ` Eric Sandeen
2013-08-15 17:18             ` Eric Sandeen
2013-08-15 17:55               ` Michael Maier
2013-08-15 18:14                 ` Eric Sandeen
2013-08-15 18:35                   ` Michael Maier
2013-08-15 18:42                     ` Eric Sandeen
2013-08-14 16:51           ` Eric Sandeen
