public inbox for linux-xfs@vger.kernel.org
From: Bernhard Schrader <bernhard.schrader@innogames.de>
To: xfs@oss.sgi.com
Subject: Problems with filesizes on different Kernels
Date: Fri, 17 Feb 2012 12:51:54 +0100
Message-ID: <4F3E3F5A.9000202@innogames.de>

Hi all,

we just discovered a problem that I think is related to XFS. I will
try to explain.

The environment I am working with consists of around 300 Postgres
databases in separate VMs. All run on XFS; the only differences are
the kernel versions:
- 2.6.18
- 2.6.39
- 3.1.4

Some days ago I discovered that the filenodes of my PostgreSQL tables
have strange sizes. They are located in
/var/lib/postgresql/9.0/main/base/[databaseid]/.
If I execute the following commands I get results like this:

Command: du -sh | tr "\n" " "; du --apparent-size -h
Result: 6.6G	. 5.7G	.

As you can see, something is wrong: the files consume more disk space
than their apparent size. This happens only on the 2.6.39 and 3.1.4
servers; the old 2.6.18 machines behave normally and both commands
report the same sizes.
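As an aside for anyone reproducing this: the distinction `du` is drawing can be seen with a plain sparse file, which shows the mirror image of our problem (apparent size larger than the allocated blocks). This is just a sketch on a throwaway /tmp path, nothing XFS-specific:

```shell
# A sparse file has a large apparent size but few allocated blocks --
# the opposite mismatch to the one described above.
# /tmp/sparse_demo is a throwaway path, not one of my files.
truncate -s 10M /tmp/sparse_demo
du -h /tmp/sparse_demo                    # allocated blocks
du --apparent-size -h /tmp/sparse_demo    # st_size, as ls reports it
stat -c 'size=%s bytes, blocks=%b' /tmp/sparse_demo
rm /tmp/sparse_demo
```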

The following was done on a 3.1.4 kernel. To get some more information
I played around a little with the XFS tools.

First I chose one file to examine:
##########
/var/lib/postgresql/9.0/main/base/43169# ls -lh 64121
-rw------- 1 postgres postgres 58M 2012-02-16 17:03 64121

/var/lib/postgresql/9.0/main/base/43169# du -sh 64121
89M    64121
##########

So this file "64121" shows a difference of 31 MB.
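For exact numbers without du's human-readable rounding, the same comparison can be read straight from stat. A sketch on a throwaway file; on a real system the path would of course be the filenode above:

```shell
# Overhead in bytes = allocated space minus apparent size.
# Demonstrated on a throwaway file; substitute the filenode path
# (e.g. .../base/43169/64121) on an affected system.
f=/tmp/overhead_demo
dd if=/dev/zero of="$f" bs=4096 count=16 conv=fsync 2>/dev/null
apparent=$(stat -c %s "$f")
allocated=$(( $(stat -c %b "$f") * $(stat -c %B "$f") ))
echo "apparent=$apparent allocated=$allocated overhead=$(( allocated - apparent ))"
rm "$f"
```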

##########
/var/lib/postgresql/9.0/main/base/43169# xfs_bmap  64121
64121:
     0: [0..116991]: 17328672..17445663

/var/lib/postgresql/9.0/main/base/43169# xfs_fsr -v 64121
64121
64121 already fully defragmented.

/var/lib/postgresql/9.0/main/base/43169# xfs_info /dev/xvda1
meta-data=/dev/root              isize=256    agcount=4, agsize=959932 blks
          =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=3839727, imaxpct=25
          =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=2560, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

/var/lib/postgresql/9.0/main/base/43169# cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / xfs rw,noatime,nodiratime,attr2,delaylog,nobarrier,noquota 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
#########

I also sent the following to the Postgres mailing list, but I think it
is useful here too.

Strange, isn't it? According to this information the file is contiguous
on disk and of course has no fragmentation, so why does it show so much
disk usage?

The relation this filenode belongs to is an index, and from my last
overview it seems that roughly 95% of the affected files are
indexes/primary keys.
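A sweep like the following can build such an overview (a sketch: %S is GNU find's "sparseness" field, blocksize*st_blocks divided by st_size, so values well above 1 mean over-allocation; the path and the 1.1 threshold are from my setup, adjust as needed):

```shell
# Flag files whose allocated size clearly exceeds their apparent size.
# %S is GNU find's sparseness ratio: (blocksize * st_blocks) / st_size.
# The path matches my servers; the 1.1 cutoff is an arbitrary threshold.
find /var/lib/postgresql/9.0/main/base -type f -size +1M \
    -printf '%S\t%s\t%p\n' 2>/dev/null | awk -F'\t' '$1 > 1.1'
```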

You might suspect some strange config settings, but we distribute the
configuration via Puppet, and the servers on old hardware get the same
config, so settings like fillfactor cannot explain this.

We also thought some file handles might still be open, so we decided
to reboot. At first we thought we had it: the free disk space slowly
increased for a while. But after recovering 1-2 GB it went back to
normal and the filenodes grew again, so that doesn't explain it
either. :/

One more thing: an xfs_fsr /dev/xvda1 also recovers some disk space,
but with the same effect as a reboot.


The differences on 2.6.18 are the mount options and the lazy-count
setting:
###########
xfs_info /dev/xvda1
meta-data=/dev/root              isize=256    agcount=4, agsize=959996 blks
          =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=3839983, imaxpct=25
          =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=2560, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / xfs rw,noatime,nodiratime 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid 0 0
proc /proc proc rw,nosuid,nodev,noexec 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec 0 0
#############

I don't know what causes this problem, or why we seem to be the only
ones who have discovered it. I am not sure it is really 100% related
to XFS, but for now I have no other ideas. If you need any more
information, I will provide it.

Thanks in advance
Bernhard

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
