* Correct usage of inode64/running out of inodes
@ 2009-06-29 12:59 Adam Donald
2009-06-29 19:39 ` Eric Sandeen
0 siblings, 1 reply; 5+ messages in thread
From: Adam Donald @ 2009-06-29 12:59 UTC (permalink / raw)
To: xfs
Hello
In short, I believe that I have used the inode64 option correctly in
mounting my XFS device on my CentOS 5.2 system; however, I seem to have
only 59 free inodes available despite 7.5TB of free space. I would
appreciate any insight as to the best approach to fix this situation.
In case it is helpful, I have included output from various commands/files
below; the XFS device in question is /dev/mapper/VolGroup01-DATA01. Thank
you in advance for your assistance!
uname:
Linux NAS01 2.6.18-92.1.6.el5 #1 SMP Wed Jun 25 13:45:47 EDT 2008 x86_64
x86_64 x86_64 GNU/Linux
df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 71G 12G 55G 18% /
/dev/sda1 99M 25M 70M 26% /boot
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/mapper/VolGroup01-DATA01 18T 9.9T 7.5T 57% /DATA01
df -ih:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/VolGroup00-LogVol00 19M 123K 19M 1% /
/dev/sda1 26K 44 26K 1% /boot
tmpfs 999K 1 999K 1% /dev/shm
/dev/mapper/VolGroup01-DATA01 18G 297K 18G 1% /DATA01
mount:
...
/dev/mapper/VolGroup01-DATA01 on /DATA01 type xfs (rw,inode64)
...
fstab:
...
/dev/VolGroup01/DATA01 /DATA01 xfs
rw,suid,dev,exec,auto,nouser,async,inode64 1 0
...
/usr/sbin/xfs_db -r -c sb -c p /dev/mapper/VolGroup01-DATA01:
magicnum = 0x58465342
blocksize = 4096
dblocks = 4638613504
rblocks = 0
rextents = 0
uuid = 2dc65efd-97e9-429f-a9bd-530bd86eac23
logstart = 4294967300
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 144956672
agcount = 32
rbmblocks = 0
logblocks = 32768
versionnum = 0x3094
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 28
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 303360
ifree = 59
fdblocks = 2001250945
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0
______________________________________________________________________
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email
______________________________________________________________________
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: Correct usage of inode64/running out of inodes
2009-06-29 12:59 Correct usage of inode64/running out of inodes Adam Donald
@ 2009-06-29 19:39 ` Eric Sandeen
2009-06-29 20:35 ` Adam Donald
0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2009-06-29 19:39 UTC (permalink / raw)
To: Adam Donald; +Cc: xfs
Adam Donald wrote:
>
> Hello
>
> In short, I believe that I have used the inode64 option correctly in
> mounting my XFS device on my CentOS 5.2 system; however, I seem to have
> only 59 free inodes available despite 7.5TB of free space. I would
> appreciate any insight as to the best approach to fix this situation.
> In case it is helpful, I have included output from various
> commands/files below; the XFS device in question is
> /dev/mapper/VolGroup01-DATA01. Thank you in advance for your assistance!
It all looks sane to me; what are the actual symptoms of the problem?
You can create 59 files and then get -ENOSPC? Any kernel messages?
Maybe this is a bug in the old xfs code in the CentOS module... though I
don't remember such a bug right now.
-Eric
> uname:
> Linux NAS01 2.6.18-92.1.6.el5 #1 SMP Wed Jun 25 13:45:47 EDT 2008 x86_64
> x86_64 x86_64 GNU/Linux
>
> df -h:
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00 71G 12G 55G 18% /
> /dev/sda1 99M 25M 70M 26% /boot
> tmpfs 3.9G 0 3.9G 0% /dev/shm
> /dev/mapper/VolGroup01-DATA01 18T 9.9T 7.5T 57% /DATA01
>
> df -ih:
> Filesystem Inodes IUsed IFree IUse% Mounted on
> /dev/mapper/VolGroup00-LogVol00 19M 123K 19M 1% /
> /dev/sda1 26K 44 26K 1% /boot
> tmpfs 999K 1 999K 1% /dev/shm
> /dev/mapper/VolGroup01-DATA01 18G 297K 18G 1% /DATA01
>
> mount:
> ...
> /dev/mapper/VolGroup01-DATA01 on /DATA01 type xfs (rw,inode64)
> ...
>
> fstab:
> ...
> /dev/VolGroup01/DATA01 /DATA01 xfs
> rw,suid,dev,exec,auto,nouser,async,inode64 1 0
> ...
* Re: Correct usage of inode64/running out of inodes
2009-06-29 19:39 ` Eric Sandeen
@ 2009-06-29 20:35 ` Adam Donald
2009-06-29 20:44 ` Eric Sandeen
0 siblings, 1 reply; 5+ messages in thread
From: Adam Donald @ 2009-06-29 20:35 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs
From: Eric Sandeen <sandeen@sandeen.net>
To: Adam Donald <Adam.Donald@gencopharma.com>
Cc: xfs@oss.sgi.com
Date: 06/29/2009 02:39 PM
Subject: Re: Correct usage of inode64/running out of inodes
Adam Donald wrote:
>
> Hello
>
> In short, I believe that I have used the inode64 option correctly in
> mounting my XFS device on my CentOS 5.2 system; however, I seem to have
> only 59 free inodes available despite 7.5TB of free space. I would
> appreciate any insight as to the best approach to fix this situation.
> In case it is helpful, I have included output from various
> commands/files below; the XFS device in question is
> /dev/mapper/VolGroup01-DATA01. Thank you in advance for your assistance!
It all looks sane to me; what are the actual symptoms of the problem?
You can create 59 files and then get -ENOSPC? Any kernel messages?
Maybe this is a bug in the old xfs code in the CentOS module... though I
don't remember such a bug right now.
-Eric
> uname:
> Linux NAS01 2.6.18-92.1.6.el5 #1 SMP Wed Jun 25 13:45:47 EDT 2008 x86_64
> x86_64 x86_64 GNU/Linux
>
> df -h:
> Filesystem Size Used Avail Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00 71G 12G 55G 18% /
> /dev/sda1 99M 25M 70M 26% /boot
> tmpfs 3.9G 0 3.9G 0% /dev/shm
> /dev/mapper/VolGroup01-DATA01 18T 9.9T 7.5T 57% /DATA01
>
> df -ih:
> Filesystem Inodes IUsed IFree IUse% Mounted on
> /dev/mapper/VolGroup00-LogVol00 19M 123K 19M 1% /
> /dev/sda1 26K 44 26K 1% /boot
> tmpfs 999K 1 999K 1% /dev/shm
> /dev/mapper/VolGroup01-DATA01 18G 297K 18G 1% /DATA01
>
> mount:
> ...
> /dev/mapper/VolGroup01-DATA01 on /DATA01 type xfs (rw,inode64)
> ...
>
> fstab:
> ...
> /dev/VolGroup01/DATA01 /DATA01 xfs
> rw,suid,dev,exec,auto,nouser,async,inode64 1 0
> ...
Thank you for your response. To be honest, I ran out of "space"
(inodes) only once on this volume, a month or so ago, and I recall
receiving an ENOSPC-type error at that time. When I received the
out-of-space errors I found the xfs_db command and have since been
monitoring the ifree value, deleting files whenever I felt that ifree was
dipping too low, as I was unable to apply the inode64 option without
first taking down various production systems. When the time came this
past weekend to apply the inode64 option, I expected the ifree value to
shoot up dramatically (to several hundred, perhaps); instead, the ifree
value remained unchanged, the same as when mounting the volume without
the inode64 option.
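[Editor's note: a small illustrative filter, not from the original thread, for the kind of ifree monitoring described above. The sample input is copied from the xfs_db dump in the first message; on the live system you would pipe the real `xfs_db -r -c sb -c p /dev/mapper/VolGroup01-DATA01` output through the same awk filter.]

```shell
# Sample lines copied verbatim from the xfs_db superblock dump above;
# a real run would replace the printf with:
#   xfs_db -r -c sb -c p /dev/mapper/VolGroup01-DATA01
printf 'icount = 303360\nifree = 59\n' | awk '$1 == "ifree" { print $3 }'
```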
Given that I have this volume mounted with the inode64 option, have
roughly 7.5TB free, and show ifree with a double-digit number (currently
30 on our system), is there an inconsistency between the total amount of
free space available and the number of free inodes available?
Thanks again for the input, I appreciate it!
AD
* Re: Correct usage of inode64/running out of inodes
2009-06-29 20:35 ` Adam Donald
@ 2009-06-29 20:44 ` Eric Sandeen
2009-06-30 20:08 ` Adam Donald
0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2009-06-29 20:44 UTC (permalink / raw)
To: Adam Donald; +Cc: xfs
Adam Donald wrote:
> Thank you for your response. To be honest, I ran out of "space"
> (inodes) only once on this volume, a month or so ago, and I recall
> receiving an ENOSPC-type error at that time. When I received the
> out-of-space errors I found the xfs_db command and have since been
> monitoring the ifree value, deleting files whenever I felt that ifree
> was dipping too low, as I was unable to apply the inode64 option
> without first taking down various production systems. When the time
> came this past weekend to apply the inode64 option, I expected the
> ifree value to shoot up dramatically (to several hundred, perhaps);
> instead, the ifree value remained unchanged, the same as when mounting
> the volume without the inode64 option.
I don't -think- that the inode64 option affects the value reported via
statfs (though maybe it should; for dynamically allocated inodes it's
all make-believe anyway).
> Given that I have this volume mounted with the inode64 option, have
> roughly 7.5TB free, and show ifree with a double-digit number
> (currently 30 on our system), is there an inconsistency between the
> total amount of free space available and the number of free inodes
> available?
Hand-wavily, no, it seems fine... the way xfs reports free inodes (or
available inodes) is to look at how many blocks are free, and then how
many inodes -could- be created in that number of blocks, which is why
the numbers are often absurdly high.
inode32 behavior, fragmented free space, or lack of stripe-aligned space
(I think...) can sometimes cause spurious ENOSPC when looking for a new
inode...
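[Editor's note: the estimate Eric describes can be sanity-checked against the superblock dump in the first message. The arithmetic below is a rough sketch of that statfs-style free-inode calculation as I read it, not code from XFS itself; it reproduces the "18G" Inodes/IFree figures in the df -ih output above.]

```shell
# Back-of-the-envelope free-inode estimate, using the superblock values
# from the xfs_db dump earlier in this thread.
dblocks=4638613504     # total filesystem blocks
fdblocks=2001250945    # free data blocks
inopblock=16           # inodes per block (4096 / 256)
imax_pct=25            # max % of space that may hold inodes
icount=303360          # inodes allocated so far
ifree=59               # of those, currently free

fake_inos=$((fdblocks * inopblock))                   # inodes that *could* fit in free blocks
max_icount=$((dblocks * imax_pct / 100 * inopblock))  # imax_pct cap
f_files=$((icount + fake_inos))
[ "$f_files" -gt "$max_icount" ] && f_files=$max_icount
f_ffree=$((f_files - (icount - ifree)))
echo "$f_files $f_ffree"   # both ~18.5 billion: the "18G" df -ih reports
```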
-Eric
> Thanks again for the input, I appreciate it!
>
>
> AD
* Re: Correct usage of inode64/running out of inodes
2009-06-29 20:44 ` Eric Sandeen
@ 2009-06-30 20:08 ` Adam Donald
0 siblings, 0 replies; 5+ messages in thread
From: Adam Donald @ 2009-06-30 20:08 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs
From: Eric Sandeen <sandeen@sandeen.net>
To: Adam Donald <Adam.Donald@gencopharma.com>
Cc: xfs@oss.sgi.com
Date: 06/29/2009 03:44 PM
Subject: Re: Correct usage of inode64/running out of inodes
Adam Donald wrote:
> Thank you for your response. To be honest, I ran out of "space"
> (inodes) only once on this volume, a month or so ago, and I recall
> receiving an ENOSPC-type error at that time. When I received the
> out-of-space errors I found the xfs_db command and have since been
> monitoring the ifree value, deleting files whenever I felt that ifree
> was dipping too low, as I was unable to apply the inode64 option
> without first taking down various production systems. When the time
> came this past weekend to apply the inode64 option, I expected the
> ifree value to shoot up dramatically (to several hundred, perhaps);
> instead, the ifree value remained unchanged, the same as when mounting
> the volume without the inode64 option.
I don't -think- that the inode64 option affects the value reported via
statfs (though maybe it should; for dynamically allocated inodes it's
all make-believe anyway).
> Given that I have this volume mounted with the inode64 option, have
> roughly 7.5TB free, and show ifree with a double-digit number
> (currently 30 on our system), is there an inconsistency between the
> total amount of free space available and the number of free inodes
> available?
Hand-wavily, no, it seems fine... the way xfs reports free inodes (or
available inodes) is to look at how many blocks are free, and then how
many inodes -could- be created in that number of blocks, which is why
the numbers are often absurdly high.
inode32 behavior, fragmented free space, or lack of stripe-aligned space
(I think...) can sometimes cause spurious ENOSPC when looking for a new
inode...
-Eric
> Thanks again for the input, I appreciate it!
>
>
> AD
Again, I appreciate your input, and I tend to agree with you that our
setup is now fine after adding inode64. I noticed that I had an ifree
value of 9 this morning, and I then used dd to create several large files.
During the dd process the ifree value jumped to 63 without any space
error - it appears that the inode64 setting is indeed working as intended;
I just had incorrect expectations as to how this option would affect the
displayed ifree value. Thank you for helping me get this situation
straightened out.
AD
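[Editor's note: for anyone wanting to repeat the dd check described above without touching a production volume, here is a harmless variant against a scratch file. The /tmp path is an illustrative assumption; on the system in this thread you would watch ifree with xfs_db against /DATA01 instead of using df.]

```shell
# Create a modest scratch file with dd (the same kind of test described
# above), look at the inode statistics for that filesystem, then clean up.
dd if=/dev/zero of=/tmp/ifree-check.bin bs=1M count=8 2>/dev/null
df -i /tmp
rm -f /tmp/ifree-check.bin
```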