public inbox for linux-xfs@vger.kernel.org
* Re: XFS-filesystem corrupted by defragmentation
@ 2010-04-13 16:08 Sebastian Brings
  2010-04-13 17:41 ` Bernhard Gschaider
  0 siblings, 1 reply; 6+ messages in thread
From: Sebastian Brings @ 2010-04-13 16:08 UTC (permalink / raw)
  To: Bernhard Gschaider, xfs

> Hi!
> 
> [...]
> 
> and now comes the bummer: I wanted to check the fragmentation of the
> whole filesystem (just to check):
> 
> > xfs_db -r /dev/mapper/VolGroup00-LogVol04
> xfs_db: unexpected XFS SB magic number 0x00000000
> xfs_db: read failed: Invalid argument
> xfs_db: data size check failed
> cache_node_purge: refcount was 1, not zero (node=0x2a25c20)
> xfs_db: cannot read root inode (22)
> 
> [...]
> 
> % xfs_info /raid
> meta-data=/dev/VolGroup00/LogVol05 isize=256    agcount=32, agsize=29434880 blks
> [...]
Hi,

Could it be that you specified the wrong device for xfs_db? xfs_info reports /dev/VolGroup00/LogVol05 as the metadata device, but for xfs_db you used /dev/mapper/VolGroup00-LogVol04 (LogVol04, not LogVol05)...
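A quick way to double-check which device actually backs the mount (assuming the filesystem is mounted on /raid and you are using LVM2) would be:

grep raid /proc/mounts
lvs VolGroup00

and then to point xfs_db -r at the device reported there, i.e. /dev/VolGroup00/LogVol05 rather than LogVol04.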

Sebastian

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* XFS-filesystem corrupted by defragmentation
@ 2010-04-13 12:10 Bernhard Gschaider
  2010-04-13 14:58 ` Robert Brockway
  2010-04-13 16:36 ` Eric Sandeen
  0 siblings, 2 replies; 6+ messages in thread
From: Bernhard Gschaider @ 2010-04-13 12:10 UTC (permalink / raw)
  To: xfs


Hi!

I'm asking here because I've been referred here from the CentOS
mailing list (for the full story see
http://www.pubbs.net/201004/centos/17112-centos-performance-problems-with-xfs-on-centos-54.html
and
http://www.pubbs.net/201004/centos/24542-centos-xfs-filesystem-corrupted-by-defragmentation-was-performance-problems-with-xfs-on-centos-54.html
-- the following is a summary of this)

It was suggested to me that the source of my performance problems might
be the fragmentation of the XFS filesystem. I tested for fragmentation
and got:

xfs_db> frag
actual 6349355, ideal 4865683, fragmentation factor 23.37%
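
(If I read these numbers correctly, the factor comes from
(actual - ideal) / actual = (6349355 - 4865683) / 6349355 ~= 0.2337,
i.e. about 23% of the extents are surplus compared to the ideal
layout.)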

Before trying to defragment the whole filesystem I figured "Let's try
it on a single file first".

So I did:

> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 101 extents and 1 hole]

Then I defragmented the file:
> xfs_fsr /raid/Temp/someDiskimage.iso
extents before:101 after:3 DONE

> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 3 extents and 1 hole]

And now comes the bummer: I wanted to check the fragmentation of the
whole filesystem (just to check):

> xfs_db -r /dev/mapper/VolGroup00-LogVol04
xfs_db: unexpected XFS SB magic number 0x00000000
xfs_db: read failed: Invalid argument
xfs_db: data size check failed
cache_node_purge: refcount was 1, not zero (node=0x2a25c20)
xfs_db: cannot read root inode (22)

THAT output was definitely not there when I did this the last time, and
therefore the new fragmentation numbers do not make me happy either:

xfs_db> frag
actual 0, ideal 0, fragmentation factor 0.00%

The filesystem is still mounted and working, and I don't dare do
anything to it (I am in a mild state of panic) because I think it
might not come back if I do.

Any suggestions are most welcome (I am googling myself before I do
anything about it).

I swear to god: I did not do anything with the xfs_* commands other
than the steps mentioned above.

As far as I understood from other places, the first thing to do is to
"try to get the in-core copy of the XFS superblock flushed out" before
proceeding (I still have to find out how to do that). How would you
suggest proceeding from there? If defragmenting one file messes things
up this badly, how safe is defragmentation in general?
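
(From what I have gathered so far, and this is only my guess, flushing
might be something like the following, assuming /raid is the mount
point and xfs_freeze is available in my xfsprogs:

sync
xfs_freeze -f /raid    # flush everything out and freeze the filesystem
xfs_freeze -u /raid    # thaw it again

Please correct me if that is the wrong tool.)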

Thanks for your time
Bernhard

Info about my system. Tell me if you need more info:

My system is CentOS 5.4 (which is equivalent to RHEL 5.4), which means
kernel 2.6.18 (64-bit, unmodified Xen kernel). xfs_db -V reports
"xfs_db version 2.9.4".

The system has 4 GB of memory (2 dual-core Xeons). The filesystem is
3.5 TB, of which 740 GB are used; that is the maximum amount used
during the one year the filesystem has been in use (which is why the
high fragmentation amazes me). The filesystem is on an LVM volume
which sits on a hardware RAID 5 array.

% xfs_info  /raid
meta-data=/dev/VolGroup00/LogVol05 isize=256    agcount=32, agsize=29434880 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=941916160, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096  
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

