public inbox for linux-xfs@vger.kernel.org
* Incorrect Free Space / xfs_growfs on RAID5 Volume ?
@ 2009-05-26  9:51 Svavar Örn Eysteinsson
       [not found] ` <4A1BE48F.9020107@dermichi.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Svavar Örn Eysteinsson @ 2009-05-26  9:51 UTC (permalink / raw)
  To: xfs

Hi all.

I'm hoping that someone can help me out here with growing an XFS filesystem
on an Adaptec RAID controller with a RAID5 setup.

First of all, this is my setup:

Fedora Core 6
Kernel: 2.6.27
(stock Fedora Core 6 xfsprogs)
Adaptec SATA RAID controller 2820SA (8 ports)


For two years now I have had a RAID5 setup running with 4x 500GB SATA disks
as one logical drive; that drive showed up as /dev/sdb.
I made an XFS filesystem on it, which gave me about 1.8 TB of
usable data space.

No problems so far, nothing.

I recently added a 5th disk to the RAID5 setup and reconfigured it
(Online Expansion) with the Adaptec Storage Manager.
The Adaptec SM now tells me that I have 2.274 TB of space.
Parity space is 465.626 GB and the stripe size is 256K.

Well, the RAID controller reconfiguration is done, so I moved on to fdisk.

I deleted the /dev/sdb1 partition and created a new one right away
(see below).
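(From memory, the fdisk session went roughly like this; the excerpt I saved
starts at the partition-number prompt:)

# fdisk /dev/sdb
Command (m for help): d        <- delete the old /dev/sdb1
Command (m for help): n        <- create a new primary partition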

Partition number (1-4): 1
First cylinder (1-303916, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-267349, default 267349):  
267349

Disk /dev/sdb: 2499.7 GB, 2499794698240 bytes
255 heads, 63 sectors/track, 303916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267349  2147480811   83  Linux


I issued the "w" command to write the partition table and exited fdisk.

Mounted the /dev/sdb1 partition on /raid-data.


Then I grew the XFS filesystem with xfs_growfs /raid-data.
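(Roughly, from memory, the full sequence was the following; xfs_info is only
there so I could double-check the geometry afterwards:)

# mount /dev/sdb1 /raid-data
# xfs_growfs /raid-data        <- grow the data section to fill the partition
# xfs_info /raid-data          <- show agcount / block counts after the grow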




Now the strange part: when I run "df -h", it shows far less added disk
space than it should (see below).

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             269G  181G   75G  71% /
/dev/sda1             244M   20M  211M   9% /boot
tmpfs                1013M     0 1013M   0% /dev/shm
/dev/sdb1             2.0T  1.9T  191G  91% /raid-data

Running df without the -h flag shows me:


Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3            281636376 189165420  77933740  71% /
/dev/sda1               248895     20331    215714   9% /boot
tmpfs                  1036736         0   1036736   0% /dev/shm
/dev/sdb1            2147349736 1948095012 199254724  91% /raid-data



Any ideas? This is not acceptable, since I added 500GB of additional space
to the RAID5 group and don't see all of it. :(

So I checked the fragmentation with xfs_db, and it told me
that the volume had 36.7% fragmentation.
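(I no longer have the exact output, but the check was run read-only against
the block device, something like this, with <actual>/<ideal> standing in for
the extent counts it reported:)

# xfs_db -r -c frag /dev/sdb1
actual <actual>, ideal <ideal>, fragmentation factor 36.70%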

I was going to run xfs_fsr, but the system couldn't find that command.
Great.

I removed the stock xfsprogs RPM, downloaded the newest xfsprogs source
(xfsprogs-3.0.1), built it, installed it, and ran xfs_fsr on the volume.
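(From memory, the defragmentation run itself was simply something like this;
-v just makes it report each file as it reorganises it:)

# xfs_fsr -v /raid-data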

Nothing changed with respect to the free space; it's exactly the same.


Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             269G  181G   75G  71% /
/dev/sda1             244M   20M  211M   9% /boot
tmpfs                1013M     0 1013M   0% /dev/shm
/dev/sdb1             2.0T  1.9T  191G  91% /raid-data





I then decided to run xfs_growfs again with the newest version (the one I
downloaded, configured and installed) on the volume.

Nothing happens; xfs_growfs says:

# xfs_growfs -d /raid-data
meta-data=/dev/sdb1              isize=256    agcount=36, agsize=15257482 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=536870202, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=65536  blocks=0, rtextents=0
data size unchanged, skipping
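(Doing the arithmetic on the output above, the filesystem already seems to
span the whole partition, so there is nothing for xfs_growfs to grab:

partition (fdisk):       2147480811 1K-blocks * 1024 = 2199020350464 bytes (~2.0 TiB)
filesystem (xfs_growfs):  536870202 blocks * 4096    = 2199020347392 bytes (~2.0 TiB)

blockdev --getsize64 /dev/sdb1 should print more or less the first figure.)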



So I really need some advice or help with this situation.
Did I do anything wrong?

Is the metadata and/or log data on the XFS volume taking up all the
remaining space, so that I only get about 191GB of free space for data
after adding the 500GB disk?


Thanks all.

Best regards,

Svavar - Reykjavik / Iceland



* Re: Incorrect Free Space / xfs_growfs on RAID5 Volume ?
@ 2009-05-26 16:58 Richard Ems
  0 siblings, 0 replies; 5+ messages in thread
From: Richard Ems @ 2009-05-26 16:58 UTC (permalink / raw)
  To: xfs, svavar

Hi!

You have hit the 2 TB partition limit.
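An MSDOS (MBR) partition table stores the partition size as a 32-bit sector
count, so a single partition tops out at

2^32 sectors * 512 bytes/sector = 2199023255552 bytes ~= 2.0 TiB

which is exactly where your /dev/sdb1 (2147480811 1K-blocks, just under
2 TiB) got capped, no matter how large the array behind it is.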

You will need to change your partitioning to GPT; see
http://en.wikipedia.org/wiki/GUID_Partition_Table or search for "2TB
partition limit" on Google.

Backup your data first!
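Something along these lines should do it with parted (untested from here, so
adapt the device name and syntax to your parted version; note that mklabel
throws away the existing partition table):

# parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) quit

Then recreate the XFS filesystem on the new partition and restore from your
backup. (In theory the old filesystem survives if the new partition starts
at exactly the same sector as the old one, but don't rely on that.)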

Regards, Richard


-- 
Richard Ems       mail: Richard.Ems@Cape-Horn-Eng.com

Cape Horn Engineering S.L.
C/ Dr. J.J. Dómine 1, 5º piso
46011 Valencia
Tel : +34 96 3242923 / Fax 924
http://www.cape-horn-eng.com



Thread overview: 5+ messages
2009-05-26  9:51 Incorrect Free Space / xfs_growfs on RAID5 Volume ? Svavar Örn Eysteinsson
     [not found] ` <4A1BE48F.9020107@dermichi.com>
2009-05-27  9:26   ` Svavar Örn Eysteinsson
2009-05-27 16:29     ` Eric Sandeen
2009-05-28  9:10     ` Michael Monnerie
2009-05-26 16:58 Richard Ems
