public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* Incorrect Free Space / xfs_growfs on RAID5 Volume ?
@ 2009-05-26  9:51 Svavar Örn Eysteinsson
       [not found] ` <4A1BE48F.9020107@dermichi.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Svavar Örn Eysteinsson @ 2009-05-26  9:51 UTC (permalink / raw)
  To: xfs

Hi all.

I’m hoping that someone can help me out here, regarding growing an XFS
filesystem on an Adaptec RAID controller with a RAID5 setup.

First of all, this is my setup :

Fedora Core 6.
Kernel : 2.6.27
(stock Fedora Core 6 xfsprogs)
Adaptec SATA Raid Controller 2820SA ( 8 port )


For two years now I have been running a RAID5 setup with 4x 500GB SATA disks
as one logical drive, which appeared as /dev/sdb.
I made an XFS filesystem on it, which gave me about 1.8 TB of
usable data space.

No problems so far, nothing.

I recently added a 5th disk to the RAID5 setup and reconfigured it
(Online Expansion) with the Adaptec Storage Manager.
The Adaptec SM now tells me that I have 2.274 TB of space.
Parity space is 465.626 GB and the stripe size is 256K.

Well, the RAID controller setup is done. So I moved on to fdisk.

I deleted the /dev/sdb1 partition and immediately created a new one.

(see below)

Partition number (1-4): 1
First cylinder (1-303916, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-267349, default 267349):  
267349

Disk /dev/sdb: 2499.7 GB, 2499794698240 bytes
255 heads, 63 sectors/track, 303916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267349  2147480811   83  Linux


I issued a “w” command to write the partition table and exited fdisk.

I mounted the /dev/sdb1 partition at /raid-data.


Then I grew the XFS filesystem with xfs_growfs /raid-data.

Now the strange part. When I issue the “df -h” command, it shows much
less added disk space than it should:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             269G  181G   75G  71% /
/dev/sda1             244M   20M  211M   9% /boot
tmpfs                1013M     0 1013M   0% /dev/shm
/dev/sdb1             2.0T  1.9T  191G  91% /raid-data

Running df without “-h” shows me:


Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3            281636376 189165420  77933740  71% /
/dev/sda1               248895     20331    215714   9% /boot
tmpfs                  1036736         0   1036736   0% /dev/shm
/dev/sdb1            2147349736 1948095012 199254724  91% /raid-data



Any ideas? This is not acceptable, as I added 500GB of additional space
to the RAID5 group and don’t have all of it. :(

So I checked the fragmentation with xfs_db, and it reported
36.7% fragmentation on the volume.

I was going to run xfs_fsr, but the system couldn’t find that command.
Great.

I removed the stock xfsprogs rpm, downloaded the newest
xfsprogs source (xfsprogs-3.0.1), built it, installed it, and ran
xfs_fsr on the volume.

Nothing changed regarding free space; it’s exactly the same.


Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             269G  181G   75G  71% /
/dev/sda1             244M   20M  211M   9% /boot
tmpfs                1013M     0 1013M   0% /dev/shm
/dev/sdb1             2.0T  1.9T  191G  91% /raid-data





I then decided to run xfs_growfs from the newest version (the one I
downloaded, configured and installed) on the volume.

Nothing happens; xfs_growfs says:

# xfs_growfs -d /raid-data
meta-data=/dev/sdb1              isize=256    agcount=36, agsize=15257482 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=536870202, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=65536  blocks=0, rtextents=0
data size unchanged, skipping
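[Editorial aside, not part of the original mail: the xfs_growfs output above already explains the "skipping". The filesystem, at 536870202 4KB blocks, fills the partition almost exactly, so there is nothing to grow into:]

```shell
echo $(( 536870202 * 4096 ))   # filesystem size: 2199020347392 bytes
echo $(( 2147480811 * 1024 ))  # sdb1 size from the fdisk output: 2199020350464 bytes
# Only 3072 bytes apart: the filesystem already spans the whole partition,
# and the "missing" ~300 GB lies beyond the partition's end.
```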



So I really need some advice or help with this situation.
Did I do anything wrong?

Are the metadata and/or log on the XFS volume taking up all the
remaining space, so that I only get about 191GB free for data after
adding the 500GB disk?


Thanks all.

Best regards,

Svavar - Reykjavik / Iceland


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Incorrect Free Space / xfs_growfs on RAID5 Volume ?
@ 2009-05-26 16:58 Richard Ems
  0 siblings, 0 replies; 5+ messages in thread
From: Richard Ems @ 2009-05-26 16:58 UTC (permalink / raw)
  To: xfs, svavar

Hi!

You have hit the 2 TB partition limit.

You will need to change your partitioning to GPT; see
http://en.wikipedia.org/wiki/GUID_Partition_Table or search for "2TB
partition limit" on Google.

Backup your data first!

Regards, Richard


-- 
Richard Ems       mail: Richard.Ems@Cape-Horn-Eng.com

Cape Horn Engineering S.L.
C/ Dr. J.J. Dómine 1, 5º piso
46011 Valencia
Tel : +34 96 3242923 / Fax 924
http://www.cape-horn-eng.com


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Incorrect Free Space / xfs_growfs on RAID5 Volume ?
       [not found] ` <4A1BE48F.9020107@dermichi.com>
@ 2009-05-27  9:26   ` Svavar Örn Eysteinsson
  2009-05-27 16:29     ` Eric Sandeen
  2009-05-28  9:10     ` Michael Monnerie
  0 siblings, 2 replies; 5+ messages in thread
From: Svavar Örn Eysteinsson @ 2009-05-27  9:26 UTC (permalink / raw)
  To: xfs

Hi.

I read on http://www.carltonbale.com/2007/05/how-to-break-the-2tb-2-terabyte-file-system-limit/
that if your kernel is compiled with CONFIG_LBD, you can break the 2TB
limit. Is that true?

****
Breaking 2TB Option 2 - Use Linux with CONFIG_LBD enabled. Most Linux  
file systems are capable of partitions larger than 2 TB, as long as  
the Linux kernel itself is. (See this comparison of Linux file  
systems.) Most Linux distributions now have kernels compiled with  
CONFIG_LBD enabled (Ubuntu 6.10 does, for example.) As long as the  
kernel is configured/compiled properly, it is straight-forward to  
create a single 4TB EXT3 (or similar) partition.

     * To summarize: 1 RAID array of five 1TB Drives -> 1 RAID level 5  
Volume Set that is 4TB -> 1 EXT3 (or similar) Linux partition that is  
4TB.
****

.... Is this maybe outside the scope of my setup?


Is there a simple way for me to check whether my kernel has this option
compiled in?
I'm running Fedora Core 6 with 2.6.27.7 #1 SMP Tue Nov 25 11:50:10
GMT 2008 i686 i686 i386 GNU/Linux.


And the FINAL question.... Is there any way for me to convert the raid
volume's partitions to GPT, or reformat /dev/sdb, without losing
any data?
Maybe it's just not possible without backing up the data and restoring it?


Thanks a lot, guys.


Best regards,

Svavar - Reykjavik - Iceland



On 26.5.2009, at 12:46, Michael Weissenbacher wrote:

> Hi Svavar!
>> Now the strange part. When I issue the “df -h” command, it shows
>> much less added disk space than it should.
>
> You have run into the 2TB limit for a DOS Partition Table. You must  
> use GPT (GUID Partition Table) to overcome the limit. You can't use  
> fdisk for that since it has no GPT support. An alternative would be  
> parted [1]. I'm not sure how this can be done without data loss. Another  
> option would be to not use partitions at all and create the XFS  
> directly on /dev/sdb.
> This is not really an XFS issue but a partitioning issue.
>
> [1] http://www.gnu.org/software/parted/index.shtml
>
> hth,
> Michael



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Incorrect Free Space / xfs_growfs on RAID5 Volume ?
  2009-05-27  9:26   ` Svavar Örn Eysteinsson
@ 2009-05-27 16:29     ` Eric Sandeen
  2009-05-28  9:10     ` Michael Monnerie
  1 sibling, 0 replies; 5+ messages in thread
From: Eric Sandeen @ 2009-05-27 16:29 UTC (permalink / raw)
  To: Svavar Örn Eysteinsson; +Cc: xfs

Svavar Örn Eysteinsson wrote:
> Hi.
> 
> I read on http://www.carltonbale.com/2007/05/how-to-break-the-2tb-2-terabyte-file-system-limit/
> that if your kernel is compiled with CONFIG_LBD, you can break the 2TB
> limit. Is that true?

This lets the kernel track block devices which are more than 2^32
sectors large, but it does not change the fact that an msdos partition
table cannot have a partition this large.
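[Editorial sketch, not in the thread: the limit Eric describes works out as follows, using the device size from the fdisk output earlier in the thread.]

```shell
# An MBR/DOS partition entry stores start and length as 32-bit sector
# counts; with 512-byte sectors that caps any one partition at 2 TiB.
MBR_MAX=$(( 4294967296 * 512 ))   # 2^32 sectors * 512 bytes
echo "$MBR_MAX"                   # 2199023255552 bytes = 2 TiB

SDB_BYTES=2499794698240           # whole /dev/sdb, per fdisk
echo $(( SDB_BYTES > MBR_MAX ))   # 1: the grown device no longer fits in one MBR partition
```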

> ****
> Breaking 2TB Option 2 - Use Linux with CONFIG_LBD enabled. Most Linux  
> file systems are capable of partitions larger than 2 TB, as long as  
> the Linux kernel itself is. (See this comparison of Linux file  
> systems.) Most Linux distributions now have kernels compiled with  
> CONFIG_LBD enabled (Ubuntu 6.10 does, for example.) As long as the  
> kernel is configured/compiled properly, it is straight-forward to  
> create a single 4TB EXT3 (or similar) partition.
> 
>      * To summarize: 1 RAID array of five 1TB Drives -> 1 RAID level 5  
> Volume Set that is 4TB -> 1 EXT3 (or similar) Linux partition that is  
> 4TB.
> ****
> 
> .... Is this maybe outside the scope of my setup?
> 
> 
> Is there a simple way for me to check whether my kernel has this option
> compiled in?
> I'm running Fedora Core 6 with 2.6.27.7 #1 SMP Tue Nov 25 11:50:10
> GMT 2008 i686 i686 i386 GNU/Linux.

That's pretty old.... but it probably has it enabled.  I don't remember
whether FC6 shipped config-* files in /boot; if so, you could just check
there.  Otherwise grab the src.rpm and work from there ...

> 
> And the FINAL question.... Is there any way for me to convert the raid
> volume's partitions to GPT, or reformat /dev/sdb, without losing
> any data?
> Maybe it's just not possible without backing up the data and restoring it?

It is probably possible to put a new GPT table in place of the DOS
table, but you have to be careful.  The idea is that you need a GPT
table with a partition starting at exactly the same place, and with the
end in the correct (larger) place ... and the GPT table must all fit
before the first sector of the first partition.  With care, this usually
works.
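[Editorial sketch of the operation Eric outlines, using sgdisk from the newer gdisk package; sgdisk was not available when this thread was written, the start sector 63 is taken from the fdisk geometry above, and the destructive steps are gated behind APPLY=1 so the script is a dry run by default. Back up first.]

```shell
#!/bin/sh
# Dry run by default; set APPLY=1 to actually rewrite the label.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# GPT's primary header and partition array occupy LBA 0-33 (34 sectors);
# the old DOS partition starts at sector 63, so the GPT metadata fits
# in front of it -- exactly the condition Eric describes.
echo $(( 34 <= 63 ))                 # 1: enough room before the first partition

run sgdisk --mbrtogpt /dev/sdb       # convert the MBR label to GPT in place
run sgdisk --delete=1 /dev/sdb       # drop the 2 TiB-capped entry
run sgdisk --new=1:63:0 /dev/sdb     # recreate at the SAME start; end=0 means last usable sector
```

After that, xfs_growfs should find room to grow into; verify the new table with `sgdisk --print /dev/sdb` before mounting.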

-Eric

> 
> Thanks a lot, guys.
> 
> 
> Best regards,
> 
> Svavar - Reykjavik - Iceland
> 
> 
> 
> On 26.5.2009, at 12:46, Michael Weissenbacher wrote:
> 
>> Hi Svavar!
>>> Now the strange part. When I issue the “df -h” command, it shows
>>> much less added disk space than it should.
>> You have run into the 2TB limit for a DOS Partition Table. You must  
>> use GPT (GUID Partition Table) to overcome the limit. You can't use  
>> fdisk for that since it has no GPT support. An alternative would be  
>> parted [1]. I'm not sure how this can be done without data loss. Another  
>> option would be to not use partitions at all and create the XFS  
>> directly on /dev/sdb.
>> This is not really an XFS issue but a partitioning issue.
>>
>> [1] http://www.gnu.org/software/parted/index.shtml
>>
>> hth,
>> Michael
> 
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
> 


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: Incorrect Free Space / xfs_growfs on RAID5 Volume ?
  2009-05-27  9:26   ` Svavar Örn Eysteinsson
  2009-05-27 16:29     ` Eric Sandeen
@ 2009-05-28  9:10     ` Michael Monnerie
  1 sibling, 0 replies; 5+ messages in thread
From: Michael Monnerie @ 2009-05-28  9:10 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 947 bytes --]

On Mittwoch 27 Mai 2009 Svavar Örn Eysteinsson wrote:
> Is there a simple way for me to check whether my kernel has this option
> compiled in?

gzip -cd /proc/config.gz | grep CONFIG_LBD
(if your kernel provides /proc/config.gz)
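[Editorial addition: if /proc/config.gz is absent (Fedora kernels usually do not enable IKCONFIG), the distro's config file in /boot is the alternative Eric mentioned. Demonstrated against a sample file so the snippet is self-contained; the real path varies by kernel.]

```shell
# On the real system the check would be:
#     grep CONFIG_LBD "/boot/config-$(uname -r)"
# Demonstration with a sample config file:
printf 'CONFIG_LBD=y\n' > /tmp/config-sample
grep CONFIG_LBD /tmp/config-sample   # prints "CONFIG_LBD=y" when the option is built in
```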

> And the FINAL question.... Is there any way for me to convert the raid
> volume's partitions to GPT, or reformat /dev/sdb, without losing
> any data?
> Maybe it's just not possible without backing up the data and restoring it?

I googled this once and found the answer "no". If you find a solution,
please post it; I'd be very interested in it.

mfg zmi
-- 
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4


[-- Attachment #1.2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 197 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]


^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2009-05-28  9:10 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-05-26  9:51 Incorrect Free Space / xfs_growfs on RAID5 Volume ? Svavar Örn Eysteinsson
     [not found] ` <4A1BE48F.9020107@dermichi.com>
2009-05-27  9:26   ` Svavar Örn Eysteinsson
2009-05-27 16:29     ` Eric Sandeen
2009-05-28  9:10     ` Michael Monnerie
  -- strict thread matches above, loose matches on Subject: below --
2009-05-26 16:58 Richard Ems

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox