* Unable to mount and repair filesystems
@ 2015-01-29 17:36 Gerard Beekmans
2015-01-29 20:18 ` Eric Sandeen
0 siblings, 1 reply; 9+ messages in thread
From: Gerard Beekmans @ 2015-01-29 17:36 UTC (permalink / raw)
To: xfs@oss.sgi.com
Hi,
One of our VMware VMs crashed, which has resulted in a few XFS filesystems that cannot be mounted or repaired.
Some VM details:
- Distribution is CentOS 7
- Partitions reside inside LVM
- Tried CentOS provided xfsprogs-3.2.0-alpha2 as well as manually compiling 3.2.2
When attempting to mount:
[71895.922382] XFS (dm-9): Mounting Filesystem
[71895.994614] XFS (dm-9): Starting recovery (logdev: internal)
[71896.000910] XFS (dm-9): Metadata corruption detected at xfs_agf_read_verify+0x70/0x120 [xfs], block 0x753001
[71896.002304] XFS (dm-9): Unmount and run xfs_repair
[71896.003649] XFS (dm-9): First 64 bytes of corrupted metadata buffer:
[71896.005049] ffff8800b1200c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[71896.006468] ffff8800b1200c10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[71896.007799] ffff8800b1200c20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[71896.009116] ffff8800b1200c30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[71896.010482] XFS (dm-9): metadata I/O error: block 0x753001 ("xfs_trans_read_buf_map") error 117 numblks 1
mount: mount /dev/mapper/data-srv on /srv failed: Structure needs cleaning
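For reference, the "error 117" in the kernel log and the mount error string are the same thing: errno 117 is EUCLEAN on x86 Linux, the code XFS uses to report detected metadata corruption. A quick Python check (assumes a Linux errno table):

```python
import errno
import os

# errno 117 on x86 Linux is EUCLEAN; glibc renders it as the exact
# string the mount command printed above.
assert errno.EUCLEAN == 117
print(os.strerror(errno.EUCLEAN))  # -> Structure needs cleaning
```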
xfs_repair:
xfs_repair /dev/mapper/data-srv
Phase 1 - find and verify superblock...
couldn't verify primary superblock - bad magic number !!!
attempting to find secondary superblock...
...
Found candidate secondary superblock...
Unable to verify superblock, continuing...
And on it goes until it eventually gives up.
There are two XFS partitions having these issues. The other eight filesystems on this VM repaired and mounted properly.
If there is anything I can do to fix these issues, that would be appreciated. I have a bit of time before this VM needs to be up and running again (which will involve formatting the filesystems and restoring from a backup). I'd love to help out the XFS project by providing any debug information you might find useful.
Regards,
Gerard Beekmans
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: Unable to mount and repair filesystems
2015-01-29 17:36 Unable to mount and repair filesystems Gerard Beekmans
@ 2015-01-29 20:18 ` Eric Sandeen
2015-01-29 21:27 ` Gerard Beekmans
0 siblings, 1 reply; 9+ messages in thread
From: Eric Sandeen @ 2015-01-29 20:18 UTC (permalink / raw)
To: Gerard Beekmans, xfs@oss.sgi.com
On 1/29/15 11:36 AM, Gerard Beekmans wrote:
> Hi,
>
> One of our VMware VMs crashed, which has resulted in a few XFS filesystems that cannot be mounted or repaired.
>
> Some VM details:
> - Distribution is CentOS 7
> - Partitions reside inside LVM
> - Tried CentOS provided xfsprogs-3.2.0-alpha2 as well as manually compiling 3.2.2
>
> When attempting to mount:
>
> [71895.922382] XFS (dm-9): Mounting Filesystem
> [71895.994614] XFS (dm-9): Starting recovery (logdev: internal)
> [71896.000910] XFS (dm-9): Metadata corruption detected at xfs_agf_read_verify+0x70/0x120 [xfs], block 0x753001
> [71896.002304] XFS (dm-9): Unmount and run xfs_repair
> [71896.003649] XFS (dm-9): First 64 bytes of corrupted metadata buffer:
> [71896.005049] ffff8800b1200c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> [71896.006468] ffff8800b1200c10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> [71896.007799] ffff8800b1200c20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> [71896.009116] ffff8800b1200c30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
all zeros...
> [71896.010482] XFS (dm-9): metadata I/O error: block 0x753001 ("xfs_trans_read_buf_map") error 117 numblks 1
> mount: mount /dev/mapper/data-srv on /srv failed: Structure needs cleaning
>
>
> xfs_repair:
>
> xfs_repair /dev/mapper/data-srv
> Phase 1 - find and verify superblock...
> couldn't verify primary superblock - bad magic number !!!
bad first block, too.
Are you certain that the volume / storage behind dm-9 is in decent shape? (i.e. is it really even an xfs filesystem?)
A VM crashing definitely should not result in a badly corrupt/unmountable filesystem.
Is there any other interesting part of the story? :)
-Eric
* RE: Unable to mount and repair filesystems
2015-01-29 20:18 ` Eric Sandeen
@ 2015-01-29 21:27 ` Gerard Beekmans
2015-01-29 21:49 ` Eric Sandeen
2015-01-29 22:57 ` Dave Chinner
0 siblings, 2 replies; 9+ messages in thread
From: Gerard Beekmans @ 2015-01-29 21:27 UTC (permalink / raw)
To: Eric Sandeen, xfs@oss.sgi.com
> -----Original Message-----
> Are you certain that the volume / storage behind dm-9 is in decent shape?
> (i.e. is it really even an xfs filesystem?)
The question "is it in decent shape" is probably the million dollar question.
What I do know is this:
* It's all LVM based
* The first problem partition is /dev/data/srv which in turn is a symlink to /dev/dm-9
* The second problem partition is /dev/os/opt which in turn is a symlink to /dev/dm-7
Both were originally formatted as XFS and /etc/fstab lists them as such. Now I can't be sure if the symlinks were always dm-7 and dm-9.
Comparing the block device major & minor numbers that "lvdisplay" reports against the dm-* symlinks, they all match up. So by all accounts it ought to be correct.
Running xfs_db on those two partitions shows what I understand to be the "right stuff" aside from an error when it first runs:
# xfs_db /dev/os/opt
Metadata corruption detected at block 0x4e2001/0x200
xfs_db: cannot init perag data (117). Continuing anyway.
xfs_db> sb 0
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 3133440
rblocks = 0
rextents = 0
uuid = b4ab7d1d-d383-4c49-af2c-be120ff967a7
logstart = 262148
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 128000
agcount = 25
rbmblocks = 0
logblocks = 2560
versionnum = 0xb4b4
sectsize = 512
inodesize = 256
inopblock = 16
fname = "opt\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 17
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 576
ifree = 135
fdblocks = 3079156
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 1
features2 = 0x8a
bad_features2 = 0x8a
features_compat = 0
features_ro_compat = 0
features_incompat = 0
features_log_incompat = 0
crc = 0 (correct)
pquotino = 0
lsn = 0
> A VM crashing definitely should not result in a badly corrupt/unmountable
> filesystem.
>
> Is there any other interesting part of the story? :)
The full setup is as follows:
The VM in question is a VMware guest running on a VMware cluster. The actual files that make up the VM are stored on a SAN that VMware accesses via NFS.
The outage occurred at the SAN level, making the NFS storage unavailable, which in turn turned off all the VMs running on it (turned off in the virtual sense).
~50 VMs were then brought back online and none had any serious issues. Most needed some form of fsck to bring things back to consistency. This is the only VM that suffered the way it did. The other VMs are a mix of Linux, BSD, OpenSolaris and Windows with all their varieties of filesystems (ext3, ext4, xfs, ntfs and so on).
It is possible that the VMware VMDK file belonging to this VM is the issue, but it does not appear corrupt from a VMDK standpoint; just the data inside of it.
Gerard
* Re: Unable to mount and repair filesystems
2015-01-29 21:27 ` Gerard Beekmans
@ 2015-01-29 21:49 ` Eric Sandeen
2015-01-29 21:59 ` Gerard Beekmans
2015-01-29 22:57 ` Dave Chinner
1 sibling, 1 reply; 9+ messages in thread
From: Eric Sandeen @ 2015-01-29 21:49 UTC (permalink / raw)
To: Gerard Beekmans, xfs@oss.sgi.com
On 1/29/15 3:27 PM, Gerard Beekmans wrote:
>> -----Original Message-----
>> Are you certain that the volume / storage behind dm-9 is in decent shape?
>> (i.e. is it really even an xfs filesystem?)
>
> The question "is it in decent shape" is probably the million dollar question.
Right, sorry, I just meant: does this seem like an xfs problem or a storage problem
at first glance.
> What I do know is this:
>
> * It's all LVM based
> * The first problem partition is /dev/data/srv which in turn is a symlink to /dev/dm-9
> * The second problem partition is /dev/os/opt which in turn is a symlink to /dev/dm-7
>
> Both were originally formatted as XFS and /etc/fstab lists them as such. Now I
> can't be sure if the symlinks were always dm-7 and dm-9.
>
> Comparing the block device major & minor numbers that "lvdisplay" reports
> against the dm-* symlinks, they all match up. So by all accounts it ought
> to be correct.
>
> Running xfs_db on those two partitions shows what I understand to be
> the "right stuff" aside from an error when it first runs:
ok, that's a good datapoint, so it's not woefully scrambled.
> # xfs_db /dev/os/opt
> Metadata corruption detected at block 0x4e2001/0x200
so at sector 0x4e2001, length 0x200.
xfs_db> agf 5
xfs_db> daddr
current daddr is 5120001
so it's the 5th AGF which is corrupt.
you could try:
xfs_db> agf 5
xfs_db> print
to see how it looks.
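For reference, the daddr Eric gets back can be reproduced from the geometry in the sb dump earlier in the thread (agblocks = 128000, blocksize = 4096, sectsize = 512); a quick sketch of the arithmetic:

```python
# daddr is measured in 512-byte sectors; each AG spans agblocks
# filesystem blocks. Values are from the sb dump earlier in the thread.
blocksize = 4096
sectsize = 512
agblocks = 128000

sectors_per_ag = agblocks * (blocksize // sectsize)  # 1024000 sectors
# The AGF sits one sector into its AG, hence the trailing "+ 1".
agf_daddr = 5 * sectors_per_ag + 1
assert agf_daddr == 5120001
assert hex(agf_daddr) == "0x4e2001"  # matches the corruption report
```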
> xfs_db: cannot init perag data (117). Continuing anyway.
> xfs_db> sb 0
> xfs_db> p
> magicnum = 0x58465342
this must not be the one that repair failed on like:
> couldn't verify primary superblock - bad magic number !!!
because that magicnum is valid. Did this one also fail to
repair?
> blocksize = 4096
> dblocks = 3133440
> rblocks = 0
> rextents = 0
> uuid = b4ab7d1d-d383-4c49-af2c-be120ff967a7
> logstart = 262148
> rootino = 128
> rbmino = 129
> rsumino = 130
> rextsize = 1
> agblocks = 128000
> agcount = 25
25 ags, presumably the fs was grown in the past, but ok...
...
>> A VM crashing definitely should not result in a badly corrupt/unmountable
>> filesystem.
>>
>> Is there any other interesting part of the story? :)
>
> The full setup is as follows:
>
> The VM in question is a VMware guest running on a VMware cluster. The
> actual files that make up the VM is stored on a SAN that VMware
> accesses via NFS.
>
> The outage occurred at the SAN level making the NFS storage
> unavailable which in turn turned off all the VMs running on it
> (turned off in the virtual sense).
>
> ~50 VMs then were brought online and none had any serious issues.
> Most needed a form of fsck to bring things back to consistency. This
> is the only VM that suffered the way it did. Other VMs are a mix of
> Linux, BSD, OpenSolaris and Windows with all their varieties of
> filesystems (ext3, ext4, xfs, ntfs and so on).
>
> It is possible that it is the vmware VMDK file that belongs to this
> VM that is the issue but it does not appear to be corrupt from a vmdk
> standpoint. Just the data inside of it.
The only thing I can say is that xfs is going to depend on the storage
telling the truth about completed IOs... If the storage told XFS an IO
was persistent, but it wasn't, and the storage went poof, bad things
can happen. I don't know the details of your setup, or TBH much
about vmware over nfs ... you weren't mounted with -o nobarrier
were you?
-Eric
>
> Gerard
>
* RE: Unable to mount and repair filesystems
2015-01-29 21:49 ` Eric Sandeen
@ 2015-01-29 21:59 ` Gerard Beekmans
2015-01-29 22:15 ` Eric Sandeen
0 siblings, 1 reply; 9+ messages in thread
From: Gerard Beekmans @ 2015-01-29 21:59 UTC (permalink / raw)
To: Eric Sandeen, xfs@oss.sgi.com
> -----Original Message-----
> > # xfs_db /dev/os/opt
> > Metadata corruption detected at block 0x4e2001/0x200
>
> so at sector 0x4e2001, length 0x200.
>
> xfs_db> agf 5
> xfs_db> daddr
> current daddr is 5120001
>
> so it's the 5th AGF which is corrupt.
>
> you could try:
>
> xfs_db> agf 5
> xfs_db> print
>
> to see how it looks.
That gives me this:
xfs_db> agf 5
xfs_db> daddr
current daddr is 5120001
xfs_db> print
magicnum = 0
versionnum = 0
seqno = 0
length = 0
bnoroot = 0
cntroot = 0
bnolevel = 0
cntlevel = 0
flfirst = 0
fllast = 0
flcount = 0
freeblks = 0
longest = 0
btreeblks = 0
uuid = 00000000-0000-0000-0000-000000000000
lsn = 0
crc = 0 (correct)
> > xfs_db: cannot init perag data (117). Continuing anyway.
> > xfs_db> sb 0
> > xfs_db> p
> > magicnum = 0x58465342
>
> this must not be the one that repair failed on like:
>
> > couldn't verify primary superblock - bad magic number !!!
>
> because that magicnum is valid. Did this one also fail to repair?
How do I know/check/test whether "this one" failed to repair? I'm not sure what you're referring to (or what to do with it).
> > agcount = 25
>
> 25 ags, presumably the fs was grown in the past, but ok...
Yes, it was. We ran out of space, so I increased the size of the logical volume and then used xfs_growfs to grow the filesystem itself. That was the whole reason for using LVM: so growth can be done on a live system without requiring repartitioning and such.
I did read today that growing an XFS filesystem is not necessarily something we should be doing? Some posts even suggest that LVM and XFS shouldn't be mixed together. I'm not sure how to separate truth from fiction.
> The only thing I can say is that xfs is going to depend on the storage telling
> the truth about completed IOs... If the storage told XFS an IO was persistent,
> but it wasn't, and the storage went poof, bad things can happen. I don't
> know the details of your setup, or TBH much about vmware over nfs ... you
> weren't mounted with -o nobarrier were you?
No, I wasn't mounted with nobarrier, unless it is done by default. I never specified the option on the command line or in /etc/fstab, for what that is worth.
Gerard
* Re: Unable to mount and repair filesystems
2015-01-29 21:59 ` Gerard Beekmans
@ 2015-01-29 22:15 ` Eric Sandeen
2015-01-29 23:12 ` Dave Chinner
0 siblings, 1 reply; 9+ messages in thread
From: Eric Sandeen @ 2015-01-29 22:15 UTC (permalink / raw)
To: Gerard Beekmans, xfs@oss.sgi.com
On 1/29/15 3:59 PM, Gerard Beekmans wrote:
...
> That gives me this:
>
> xfs_db> agf 5
> xfs_db> daddr
> current daddr is 5120001
> xfs_db> print
> magicnum = 0
> versionnum = 0
...
so it is completely zeroed out.
>
>>> xfs_db: cannot init perag data (117). Continuing anyway.
>>> xfs_db> sb 0
>>> xfs_db> p
>>> magicnum = 0x58465342
>>
>> this must not be the one that repair failed on like:
>>
>>> couldn't verify primary superblock - bad magic number !!!
>>
>> because that magicnum is valid. Did this one also fail to repair?
>
> How do I know/check/test whether "this one" failed to repair? I'm not sure what you're referring to (or what to do with it).
I'm sorry, I meant did this filesystem fail to repair based on a bad
primary superblock?
>>> agcount = 25
>>
>> 25 ags, presumably the fs was grown in the past, but ok...
>
> Yes, it was. Ran out of space so I increased the size of the logical
> volume then used xfs_growfs to increase the filesystem itself. That was
> the whole reason behind using LVM so this growth can be done on a
> live system without requiring repartitioning and such.
> I did read today that growing an XFS is not necessarily something we
> should be doing? Some posts even suggest that LVM and XFS shouldn't
> be mixed together. Not sure how to separate truth from fiction.
It's fine; the downside is for people who think they can start with 1G
and grow to 10T; that's pretty suboptimal. XFS over LVM is fine.
I'm sure it's not related to this issue (unless it was very recently grown?
Was it grown shortly before the failures?)
Hm, it would have started at 4 AGs by default, and it's the 5th one that
looks bad; maybe that's a clue. Are agf 6, 7, 8 etc also full of 0s?
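A direct way to answer that is to read each candidate AGF sector and test it for zeroes; a sketch, with the geometry taken from the sb dump earlier in the thread, and a sparse scratch file standing in for the real device (reading /dev/os/opt itself would need root):

```python
import tempfile

SECTSIZE = 512
SECTORS_PER_AG = 128000 * (4096 // 512)  # agblocks * sectors per block

def agf_is_all_zero(path, agno):
    """Read the 512-byte AGF sector of AG `agno` and test for zeroes."""
    with open(path, "rb") as f:
        f.seek((agno * SECTORS_PER_AG + 1) * SECTSIZE)
        return f.read(SECTSIZE) == b"\x00" * SECTSIZE

# Demo on a sparse scratch file standing in for the device; a sparse
# file reads back as zeroes, mimicking the wiped AGs seen here.
scratch = tempfile.NamedTemporaryFile(delete=False)
scratch.truncate(9 * SECTORS_PER_AG * SECTSIZE)
scratch.close()
zeroed = [agno for agno in range(4, 9) if agf_is_all_zero(scratch.name, agno)]
print(zeroed)  # -> [4, 5, 6, 7, 8]
```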
>> The only thing I can say is that xfs is going to depend on the storage telling
>> the truth about completed IOs... If the storage told XFS an IO was persistent,
>> but it wasn't, and the storage went poof, bad things can happen. I don't
>> know the details of your setup, or TBH much about vmware over nfs ... you
>> weren't mounted with -o nobarrier were you?
>
> No I wasn't mounted with nobarrier unless it is done by default. I
> never specified the option on command line or in /etc/fstab at any
> rate for what that is worth.
ok. I'm not sure what to tell you at this point; you have at least one
swath of your storage which looks totally zeroed out. That's not a
failure mode we usually see, and makes me think it's more storage related,
although the "how long ago did you grow this fs?" question might be
related, because the first visible corruption is in the first "new"
post-growth AG...
-Eric
* Re: Unable to mount and repair filesystems
2015-01-29 21:27 ` Gerard Beekmans
2015-01-29 21:49 ` Eric Sandeen
@ 2015-01-29 22:57 ` Dave Chinner
1 sibling, 0 replies; 9+ messages in thread
From: Dave Chinner @ 2015-01-29 22:57 UTC (permalink / raw)
To: Gerard Beekmans; +Cc: Eric Sandeen, xfs@oss.sgi.com
On Thu, Jan 29, 2015 at 09:27:32PM +0000, Gerard Beekmans wrote:
> > -----Original Message-----
> > Are you certain that the volume /
> > storage behind dm-9 is in decent shape? (i.e. is it really even
> > an xfs filesystem?)
.....
> The outage occurred at the SAN level making the NFS storage
> unavailable which in turn turned off all the VMs running on it
> (turned off in the virtual sense).
Define "SAN" outage. All this tells me is that the backing store
went bad in some way and needed recovery, not what the actual
problem in the SAN was. If it was a potential data loss event, then
that's the prime candidate for the storage returning zeros where
there should be data.
The second candidate is the NFS server. What was the NFS server?
Did the NFS server get rebooted? Did the NFS clients (i.e. the
physical machines running the hypervisor, not the guests) get
rebooted too? If you reboot the server, the NFS clients are
supposed to retransmit any unstable data they have to the server. If
the clients are rebooted or the NFS mount forcible unmounted while
the server is down, then that unstable data is lost forever.
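The unstable-write/commit semantics Dave describes can be illustrated with a toy model (purely illustrative, not real NFS protocol code):

```python
class ToyNfsServer:
    """Toy model: UNSTABLE writes sit in server memory until COMMIT."""
    def __init__(self):
        self.memory = {}   # unstable data, lost if the server crashes
        self.disk = {}     # stable storage, survives a crash

    def write_unstable(self, offset, data):
        self.memory[offset] = data

    def commit(self):
        # COMMIT moves unstable data to stable storage.
        self.disk.update(self.memory)
        self.memory.clear()

    def crash(self):
        self.memory.clear()  # power loss: only stable data survives

srv = ToyNfsServer()
srv.write_unstable(0, b"AG header")
srv.crash()                  # crash before COMMIT: the write is gone,
assert 0 not in srv.disk     # unless the client retransmits it

srv.write_unstable(0, b"AG header")
srv.commit()
srv.crash()                  # crash after COMMIT: data survives
assert srv.disk[0] == b"AG header"
```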
Really, fully zeroed blocks in critical XFS metadata blocks is
almost always an indication of data loss somewhere in the lower
layers of the storage stack. As a precaution, though, if one vmdk
is bad, I'd consider all the others as suspect, even if the
filesystem checkers haven't thrown errors. Random block data loss
can really only be reliably recovered from backups, as user data is
notoriously difficult to validate as correct.
.....
> It is possible that it is the vmware VMDK file that belongs to
> this VM that is the issue but it does not appear to be corrupt
> from a vmdk standpoint. Just the data inside of it.
Also, you are using VMDK image files, that implies you are running
ESX as your hypervisor, yes? If so, that limits our ability to help
you track the source of the corruption...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: Unable to mount and repair filesystems
2015-01-29 22:15 ` Eric Sandeen
@ 2015-01-29 23:12 ` Dave Chinner
2015-01-30 0:04 ` Gerard Beekmans
0 siblings, 1 reply; 9+ messages in thread
From: Dave Chinner @ 2015-01-29 23:12 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs@oss.sgi.com, Gerard Beekmans
On Thu, Jan 29, 2015 at 04:15:54PM -0600, Eric Sandeen wrote:
> On 1/29/15 3:59 PM, Gerard Beekmans wrote:
> I'm sure it's not related to this issue (unless it was very recently grown?
> Was it grown shortly before the failures?)
>
> Hm, it would have started at 4 AGs by default, and it's the 5th one that
> looks bad; maybe that's a clue. Are agf 6, 7, 8 etc also full of 0s?
Gerard is using the default mount options, so XFS is issuing cache
flushes and FUA with log writes. Hence if the new AG headers are
zero yet the superblock says they are valid, then that's a storage
bug.
In more detail: we force the new AGs to be written to disk
synchronously during the growfs operation before we commit the
transaction. The superblock with the larger AG count can only get on
disk after the transaction has been written to the log. Log writes
trigger a storage device cache flush, which results in the IO
ordering of:
new AG header IO
IO complete
transaction commit
....
Device cache flush
(new AG headers guaranteed to be on disk)
journal write (FUA)
(journal write guaranteed to be on disk)
.....
superblock write IO.
Hence if the superblock is showing 25 AGs and the new ags from 4-25
are not found on disk then either:
a) if the grow was very recent the storage is not obeying
cache flushes and hence breaking fundamental IO ordering
behaviour; or,
b) if the growfs happened long ago, the storage has lost the
data that was written to stable media...
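The ordering requirement above can be expressed as a simple invariant over the event sequence (a toy check for illustration, not XFS code):

```python
# The sequence Dave lists: AG header IO completes, then a cache flush
# and the FUA journal write make the transaction durable, and only
# then may the superblock with the larger agcount reach disk.
events = [
    "ag_header_write_complete",
    "transaction_commit",
    "device_cache_flush",     # new AG headers guaranteed on disk
    "journal_write_fua",      # journal write guaranteed on disk
    "superblock_write",
]

def ordering_ok(seq):
    """Superblock write must follow the flush and the journal FUA write."""
    return (seq.index("device_cache_flush")
            < seq.index("journal_write_fua")
            < seq.index("superblock_write"))

assert ordering_ok(events)
# A storage stack that lets the superblock land ahead of the flush
# violates the invariant -- the failure mode in case (a):
assert not ordering_ok(["superblock_write", "device_cache_flush",
                        "journal_write_fua"])
```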
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* RE: Unable to mount and repair filesystems
2015-01-29 23:12 ` Dave Chinner
@ 2015-01-30 0:04 ` Gerard Beekmans
0 siblings, 0 replies; 9+ messages in thread
From: Gerard Beekmans @ 2015-01-30 0:04 UTC (permalink / raw)
To: Dave Chinner, Eric Sandeen; +Cc: xfs@oss.sgi.com
<snip>
> Hence if the superblock is showing 25 AGs and the new ags from 4-25 are not
> found on disk then either:
>
> a) if the grow was very recent the storage is not obeying
> cache flushes and hence breaking fundamental IO ordering
> behaviour; or,
>
> b) if the growfs happened long ago, the storage has lost the
> data that was written to stable media...
To answer both Dave and Eric in a single email:
The growfs happened about a month ago. The storage crash occurred yesterday, so enough time should have passed for that data to be committed to disk, which would make it option b) above.
I think it's academic at this point but to answer the SAN questions that came up:
The NFS server is the SAN itself. The SAN software is Nexenta, which provides various methods of accessing its data (NFS, SMB and iSCSI being the primary ones). When the SAN crashed there was no warning, so the hypervisors had their shared NFS storage disconnected and, ultimately, after a timeout the VMs that were running from those NFS shares were shut off.
Eric: agf 5, 6, 7 and beyond are also full of 0s. 1-4 don't appear to be.
Gerard