From: Eric Sandeen <sandeen@sandeen.net>
To: Anand Tiwari <tiwarikanand@gmail.com>
Cc: xfs@oss.sgi.com
Subject: Re: xfs_repair deleting realtime files.
Date: Fri, 21 Sep 2012 11:07:02 -0500 [thread overview]
Message-ID: <505C90A6.90103@sandeen.net> (raw)
In-Reply-To: <CAHt31__x6VbmO-8GJvPesKEXXmRGEfYt-TeRk-9GFJt1HjPPvA@mail.gmail.com>
On 9/21/12 10:51 AM, Anand Tiwari wrote:
>
>
> On Thu, Sep 20, 2012 at 11:00 PM, Eric Sandeen <sandeen@sandeen.net> wrote:
>
> On 9/20/12 7:40 PM, Anand Tiwari wrote:
>> Hi All,
>>
>> I have been looking into an issue with xfs_repair with realtime sub
>> volume. some times while running xfs_repair I see following errors
>>
>> ----------------------------
>> data fork in rt inode 134 claims used rt block 19607
>> bad data fork in inode 134
>> would have cleared inode 134
>> data fork in rt inode 135 claims used rt block 29607
>> bad data fork in inode 135
>> would have cleared inode 135
>>         - agno = 1
>>         - agno = 2
>>         - agno = 3
>>         - process newly discovered inodes...
>> Phase 4 - check for duplicate blocks...
>>         - setting up duplicate extent list...
>>         - check for inodes claiming duplicate blocks...
>>         - agno = 0
>>         - agno = 1
>>         - agno = 2
>>         - agno = 3
>> entry "test-011" in shortform directory 128 references free inode 134
>> would have junked entry "test-011" in directory inode 128
>> entry "test-0" in shortform directory 128 references free inode 135
>> would have junked entry "test-0" in directory inode 128
>> data fork in rt ino 134 claims dup rt extent,off - 0, start - 7942144, count 2097000
>> bad data fork in inode 134
>> would have cleared inode 134
>> data fork in rt ino 135 claims dup rt extent,off - 0, start - 13062144, count 2097000
>> bad data fork in inode 135
>> would have cleared inode 135
>> No modify flag set, skipping phase 5
>> ------------------------
>>
>> Here is the bmap for both inodes.
>>
>> xfs_db> inode 135
>> xfs_db> bmap
>> data offset 0 startblock 13062144 (12/479232) count 2097000 flag 0
>> data offset 2097000 startblock 15159144 (14/479080) count 2097000 flag 0
>> data offset 4194000 startblock 17256144 (16/478928) count 2097000 flag 0
>> data offset 6291000 startblock 19353144 (18/478776) count 2097000 flag 0
>> data offset 8388000 startblock 21450144 (20/478624) count 2097000 flag 0
>> data offset 10485000 startblock 23547144 (22/478472) count 2097000 flag 0
>> data offset 12582000 startblock 25644144 (24/478320) count 2097000 flag 0
>> data offset 14679000 startblock 27741144 (26/478168) count 2097000 flag 0
>> data offset 16776000 startblock 29838144 (28/478016) count 2097000 flag 0
>> data offset 18873000 startblock 31935144 (30/477864) count 1607000 flag 0
>> xfs_db> inode 134
>> xfs_db> bmap
>> data offset 0 startblock 7942144 (7/602112) count 2097000 flag 0
>> data offset 2097000 startblock 10039144 (9/601960) count 2097000 flag 0
>> data offset 4194000 startblock 12136144 (11/601808) count 926000 flag 0
>
> It's been a while since I thought about realtime, but -
>
> That all seems fine; I don't see anything overlapping there. They
> are all perfectly adjacent, though of interesting size.
>
>>
>> By looking into the xfs_repair code, it looks like repair does not
>> handle the case where more than one bmap extent maps into a single
>> real-time extent. The following is code from repair/dinode.c: process_rt_rec
>
> "more than one extent in a real-time extent?" I'm not sure what that
> means.
>
> Every extent above is length 2097000 blocks, and they are adjacent.
> But you say your realtime extent size is 512 blocks ... which doesn't
> go into 2097000 evenly. So that's odd, at least.
>
>
> Well, let's look at the first two extents:
>> data offset 0 startblock 13062144 (12/479232) count 2097000 flag 0
>> data offset 2097000 startblock 15159144 (14/479080) count 2097000 flag 0
> The startblock is aligned, so the first extent begins at realtime
> extent 25512. Since the blockcount (2097000) is not a multiple of 512,
> the last realtime extent it touches (25512 + 4095 = 29607) is only
> partially used: 360 blocks. The second extent then starts partway into
> realtime extent 29607. So, yes, the extents do not overlap, but
> realtime extent 29607 is shared by two bmap extents. Once xfs_repair
> detects this case in phase 2, it bails out and clears that inode. I
> think the search for duplicate extents is done in phase 4, but by then
> the inode is already marked.
... ok I realize I was misunderstanding some things about the realtime
volume. (It's been a very long time since I thought about it). Still,
I'd like to look at the metadump image if possible.
Thanks,
-Eric
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 16+ messages
2012-09-21 0:40 xfs_repair deleting realtime files Anand Tiwari
2012-09-21 5:00 ` Eric Sandeen
2012-09-21 15:51 ` Anand Tiwari
2012-09-21 16:07 ` Eric Sandeen [this message]
2012-09-21 16:40 ` Anand Tiwari
2012-09-24 7:55 ` Dave Chinner
2012-09-24 12:51 ` Anand Tiwari
2012-09-26 1:26 ` Anand Tiwari
2012-09-26 2:44 ` Dave Chinner
2012-09-26 3:45 ` Anand Tiwari
2012-09-26 6:17 ` Dave Chinner
2012-09-28 1:27 ` Anand Tiwari
2012-09-28 6:47 ` Dave Chinner
2012-09-29 22:49 ` Anand Tiwari
2012-10-02 0:59 ` Dave Chinner
2012-10-02 16:47 ` Anand Tiwari