linux-lvm.redhat.com archive mirror
* [linux-lvm] Find Correct LE Size of ext4 Partition (16TB)
@ 2013-10-08  9:06 Sebastian Walter
  2013-10-08 20:11 ` Gabriel
  0 siblings, 1 reply; 6+ messages in thread
From: Sebastian Walter @ 2013-10-08  9:06 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 3756 bytes --]

Dear List,

For extending an ext4 partition by some additional TBs, I extended the
underlying logical volume (LV) to fill the entire physical volume of the
disk. This was about 19.4 TB. While trying to extend the ext4 partition
to fill the whole LV, I realized that the ext4 size limit is still at
16TB. So I set the partition to 16TB. Now I'm wasting space on the LV,
which I would like to reduce again to the minimum size of the
partition (16TB).

Does anyone have an idea how to calculate the exact number of logical
extents (LE) needed for holding the 16TB partition? I'm sure some
bytes are needed for overhead, journaling, etc. Maybe we can read it
from one of these outputs:

This is the partition:
tune2fs -l /dev/storage/storage
tune2fs 1.42.8 (20-Jun-2013)
Filesystem volume name:   storage
Last mounted on:          /mnt/storage/mnt/storage
Filesystem UUID:          9c05fe79-6a1b-484e-980d-788a6cf5b99c
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index
filetype needs_recovery extent sparse_super large_file uninit_bg
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              2147483648
Block count:              4294967295
Reserved block count:     42909770
Free blocks:              229845414
Free inodes:              1929998857
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
Filesystem created:       Tue Sep 18 14:50:21 2007
Last mount time:          Tue Oct  8 04:00:16 2013
Last write time:          Tue Oct  8 04:00:16 2013
Mount count:              24
Maximum mount count:      36
Last checked:             Sat Oct  6 14:50:58 2012
Check interval:           15552000 (6 months)
Next check after:         Thu Apr  4 14:50:58 2013
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      675a96e5-faf3-4576-af10-713007ac8387
Journal backup:           inode blocks

This is the Logical Volume (vgdisplay -v):
--- Logical volume ---
  LV Name                /dev/storage/storage
  VG Name                storage
  LV UUID                KQtBJN-RXzo-UUhT-MeiR-Ack8-QB2x-g9aPOT
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                19.18 TB
  Current LE             5027838
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:24

And this is the Volume Group (vgdisplay):
  --- Volume group ---
  VG Name               storage
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  174
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               19.77 TB
  PE Size               4.00 MB
  Total PE              5181438
  Alloc PE / Size       5181438 / 19.77 TB
  Free  PE / Size       0 / 0
  VG UUID               ZalMV7-fZqp-mBA3-aYzi-iADF-VDWV-YIifQr

What I would like to know is the correct *extents* parameter for
lvreduce. Any help is greatly appreciated!

Sebastian




^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] Find Correct LE Size of ext4 Partition (16TB)
  2013-10-08  9:06 [linux-lvm] Find Correct LE Size of ext4 Partition (16TB) Sebastian Walter
@ 2013-10-08 20:11 ` Gabriel
  2013-10-08 22:58   ` matthew patton
  0 siblings, 1 reply; 6+ messages in thread
From: Gabriel @ 2013-10-08 20:11 UTC (permalink / raw)
  To: LVM general discussion and development

Hi,

By multiplying the block count by the block size you get the filesystem
size in bytes. Converting into TiB yields a value just shy of 16 TiB (as
expected). More precisely, 15.99999999627470970153.

If the PE size is 4 MiB and my calculations are correct, you would
need 16 TiB * 1024 * 1024 / 4 = 4194304 LEs.
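The arithmetic above can be reproduced with plain shell arithmetic (a sketch; the constants are copied from the tune2fs and vgdisplay outputs quoted in this thread):

```shell
# Recompute the figures from the tune2fs and vgdisplay outputs above.
BLOCK_COUNT=4294967295          # tune2fs "Block count"
BLOCK_SIZE=4096                 # tune2fs "Block size"
PE_BYTES=$((4 * 1024 * 1024))   # vgdisplay "PE Size" of 4.00 MiB, in bytes

FS_BYTES=$((BLOCK_COUNT * BLOCK_SIZE))
echo "filesystem size in bytes: $FS_BYTES"

# Round up to whole physical extents to get the minimum LE count.
MIN_LE=$(( (FS_BYTES + PE_BYTES - 1) / PE_BYTES ))
echo "minimum extents needed: $MIN_LE"
```

Rounding up to a whole extent gives the same 4194304 figure, since the filesystem is only one block short of a full 16 TiB.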

On Tue, Oct 8, 2013 at 12:06 PM, Sebastian Walter
<sebastian.walter@fu-berlin.de> wrote:
> Dear List,
>
> For extending an ext4 partition by some additional TBs, I extended the
> underlying logical volume (LV) to fill the entire physical volume of the
> disk. This was about 19.4 TB. While trying to extend the ext4 partition
> to fill the whole LV, I realized that the ext4 size limit is still at
> 16TB. So I set the partition to 16TB. Now I'm wasting space on the LV,
> which I would like to reduce again to the minimum size of the
> partition (16TB).
>
> Does anyone have an idea how to calculate the exact number of logical
> extents (LE) needed for holding the 16TB partition? I'm sure some
> bytes are needed for overhead, journaling, etc. Maybe we can read it
> from one of these outputs:
>
> This is the partition:
> tune2fs -l /dev/storage/storage
> tune2fs 1.42.8 (20-Jun-2013)
> Filesystem volume name:   storage
> Last mounted on:          /mnt/storage/mnt/storage
> Filesystem UUID:          9c05fe79-6a1b-484e-980d-788a6cf5b99c
> Filesystem magic number:  0xEF53
> Filesystem revision #:    1 (dynamic)
> Filesystem features:      has_journal ext_attr resize_inode dir_index
> filetype needs_recovery extent sparse_super large_file uninit_bg
> Filesystem flags:         signed_directory_hash
> Default mount options:    (none)
> Filesystem state:         clean
> Errors behavior:          Continue
> Filesystem OS type:       Linux
> Inode count:              2147483648
> Block count:              4294967295
> Reserved block count:     42909770
> Free blocks:              229845414
> Free inodes:              1929998857
> First block:              0
> Block size:               4096
> Fragment size:            4096
> Blocks per group:         32768
> Fragments per group:      32768
> Inodes per group:         16384
> Inode blocks per group:   512
> Filesystem created:       Tue Sep 18 14:50:21 2007
> Last mount time:          Tue Oct  8 04:00:16 2013
> Last write time:          Tue Oct  8 04:00:16 2013
> Mount count:              24
> Maximum mount count:      36
> Last checked:             Sat Oct  6 14:50:58 2012
> Check interval:           15552000 (6 months)
> Next check after:         Thu Apr  4 14:50:58 2013
> Reserved blocks uid:      0 (user root)
> Reserved blocks gid:      0 (group root)
> First inode:              11
> Inode size:               128
> Journal inode:            8
> Default directory hash:   tea
> Directory Hash Seed:      675a96e5-faf3-4576-af10-713007ac8387
> Journal backup:           inode blocks
>
> This is the Logical Volume (vgdisplay -v):
> --- Logical volume ---
>   LV Name                /dev/storage/storage
>   VG Name                storage
>   LV UUID                KQtBJN-RXzo-UUhT-MeiR-Ack8-QB2x-g9aPOT
>   LV Write Access        read/write
>   LV Status              available
>   # open                 1
>   LV Size                19.18 TB
>   Current LE             5027838
>   Segments               4
>   Allocation             inherit
>   Read ahead sectors     auto
>   - currently set to     256
>   Block device           253:24
>
> And this is the Volume Group (vgdisplay):
>   --- Volume group ---
>   VG Name               storage
>   System ID
>   Format                lvm2
>   Metadata Areas        2
>   Metadata Sequence No  174
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                3
>   Open LV               3
>   Max PV                0
>   Cur PV                2
>   Act PV                2
>   VG Size               19.77 TB
>   PE Size               4.00 MB
>   Total PE              5181438
>   Alloc PE / Size       5181438 / 19.77 TB
>   Free  PE / Size       0 / 0
>   VG UUID               ZalMV7-fZqp-mBA3-aYzi-iADF-VDWV-YIifQr
>
> What I would like to know is the correct *extents* parameter for
> lvreduce. Any help is greatly appreciated!
>
> Sebastian
>
>
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] Find Correct LE Size of ext4 Partition (16TB)
  2013-10-08 20:11 ` Gabriel
@ 2013-10-08 22:58   ` matthew patton
  2013-10-09  7:57     ` [linux-lvm] Find Correct LE Size of ext4 Filesystem (16TB) Sebastian Walter
  0 siblings, 1 reply; 6+ messages in thread
From: matthew patton @ 2013-10-08 22:58 UTC (permalink / raw)
  To: LVM general discussion and development


>> So I set the partition to 16TB. Now I'm wasting space on the LV
>> which I would like to again be reduced to the minimum size of the
>> partition (16TB).
>>
>> Has anyone an idea to exactly calculate the amount of logical extends
>> (LE) needed for holding the 16TB partition?


The whole point of using LVM is to dispense with the malarkey of hard disk partitions, so I don't know why you're resizing them. If I read your email correctly, you're going about this dangerously.

1) resize2fs down to say 15.8TB.
2) lvresize to 16TB
3) resize2fs (no args) <LV device>

Step 3 will resize the FS as close as the LV size allows, and won't let you write past the LV's endpoint, which your method makes all too easy to screw up when doing the extent calculation by hand. Maybe your maths are always perfect, but I wouldn't risk my data for a measly few bytes. Do it the fool-proof way.
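As a sketch of those three steps, here is a dry run only: every command is echoed rather than executed, the LV path is the one from this thread, and the 15.8TB shrink target is Matthew's example figure. Replace them before using this for real, and only with current backups.

```shell
# Dry run of the three-step resize described above; nothing destructive
# runs because each command is only printed.
LV=/dev/storage/storage
echo resize2fs "$LV" 15800G   # 1) shrink the FS comfortably below 16 TiB
echo lvresize -L 16T "$LV"    # 2) set the LV to exactly 16 TiB
echo resize2fs "$LV"          # 3) grow the FS to fill the LV exactly
```

Dropping the `echo` prefixes turns the plan into the real operation, in exactly the order Matthew gives: shrink filesystem first, then LV, then grow back.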

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] Find Correct LE Size of ext4 Filesystem (16TB)
  2013-10-08 22:58   ` matthew patton
@ 2013-10-09  7:57     ` Sebastian Walter
  2013-10-09  9:42       ` Gabriel
  0 siblings, 1 reply; 6+ messages in thread
From: Sebastian Walter @ 2013-10-09  7:57 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 1713 bytes --]

Hi Matthew,

On 10/09/2013 12:58 AM, matthew patton wrote:

> 
> The whole point of using LVM is to dispense with the malarkey of hard disk partitions, so I don't know why you're resizing them. If I read your email correctly, you're going about this dangerously.

Thanks for pointing this out. Now I realize that I erroneously used the
term "partition" instead of "filesystem" (subject changed accordingly).

What I initially wanted to do was extend the filesystem to the whole
size of the underlying physical disks. So first I extended the LV to
fill the disks (19TB). As the second step, I wanted to resize the ext4
filesystem to the LV size of 19TB. This last step failed with an error
message saying that tune4fs can't allocate more than 16TB. Setting the
filesystem's size to 16TB worked.

Now I have a 16TB filesystem on a 19.18TB logical volume. The additional
3TB are unused on the LV, so I want to reduce the LV to the same size as
the filesystem. This way I could use the 3TB for other LVs.

> 3) resize2fs (no args) <LV device>
> 
> Step 3 will resize the FS to as close as the LV size allows and won't let you write past the LV's endpoint like your method makes so easy to screw up when doing the extent calculation. Maybe your maths are always perfect but I wouldn't risk my data for a measly few bytes. Do it the fool-proof way.

I agree with you that resizing the filesystem to the theoretical values
of the LV geometry is much too risky.

What would be the fool-proof way? Creating a new filesystem on a 16TB LV
and copying the data from the backup?

Or is it really safe to reduce the LV by some amount - let's say 2.5TB -
and sacrifice 500GB for safety?
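As a rough sanity check of that idea, here is a sketch in shell arithmetic; the extent figures come from the vgdisplay output quoted earlier, and TB is treated as TiB throughout:

```shell
# Check that cutting 2.5 TiB from the LV still leaves room for the
# 16 TiB filesystem, working entirely in 4 MiB extents.
CURRENT_LE=5027838                            # vgdisplay "Current LE"
PE_MIB=4                                      # vgdisplay "PE Size"
REDUCE_LE=$((5 * 1024 * 1024 / PE_MIB / 2))   # 2.5 TiB in extents
FS_LE=$((16 * 1024 * 1024 / PE_MIB))          # 16 TiB filesystem in extents

REMAINING_LE=$((CURRENT_LE - REDUCE_LE))
echo "remaining LE: $REMAINING_LE (filesystem needs $FS_LE)"
if [ "$REMAINING_LE" -ge "$FS_LE" ]; then
    echo "safety margin: $((REMAINING_LE - FS_LE)) extents"
fi
```

The margin works out to 178174 extents, roughly 0.68 TiB of slack, in the same ballpark as the 500GB mentioned above. The arithmetic only shows the LV would stay larger than the filesystem; it says nothing about which extents lvreduce would remove, which is why the shrink-filesystem-first ordering still matters.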

Thanks! Sebastian



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] Find Correct LE Size of ext4 Filesystem (16TB)
  2013-10-09  7:57     ` [linux-lvm] Find Correct LE Size of ext4 Filesystem (16TB) Sebastian Walter
@ 2013-10-09  9:42       ` Gabriel
  2013-11-07 10:02         ` Sebastian Walter
  0 siblings, 1 reply; 6+ messages in thread
From: Gabriel @ 2013-10-09  9:42 UTC (permalink / raw)
  To: LVM general discussion and development

I think what Matthew is saying is that you should first shrink your
filesystem to a smaller size (15.8 TiB), then adjust the LV size to 16
TiB (this is safe because the filesystem above is smaller) and finally
resize the filesystem to the LV size (resize2fs /your/filesystem).
This is a much safer approach and leaves you with no unused space.

On Wed, Oct 9, 2013 at 10:57 AM, Sebastian Walter
<sebastian.walter@fu-berlin.de> wrote:
> Hi Matthew,
>
> On 10/09/2013 12:58 AM, matthew patton wrote:
>
>>
>> The whole point of using LVM is to dispense with the malarkey of hard disk partitions, so I don't know why you're resizing them. If I read your email correctly, you're going about this dangerously.
>
> Thanks for pointing this out. Now I realize that I used the term
> "partition" instead of "filesystem" erroneously (changed subject).
>
> What I initially wanted to do is to extend the filesystem to the whole
> size of the underlying physical disks. So first I extended the LV to
> fill the disks (19TB). As the second step I wanted to resize the ext4
> filesystem to the LV size of 19TB. This last step failed with an error
> message saying that tune4fs can't allocate more than 16TB. Setting the
> filesystem's size to 16T worked.
>
> Now I have a 16TB filesystem on a 19.18TB logical volume. The additional
> 3TB are unused on the LV so I want to reduce the LV to the same size as
> the filesystem. This way I could use the 3TB for other LVs.
>
>> 3) resize2fs (no args) <LV device>
>>
>> Step 3 will resize the FS to as close as the LV size allows and won't let you write past the LV's endpoint like your method makes so easy to screw up when doing the extent calculation. Maybe your maths are always perfect but I wouldn't risk my data for a measly few bytes. Do it the fool-proof way.
>
> I agree with you that resizing the filesystem to the theoretical values
> of the LV geometry is much too risky.
>
> What would be the fool-proof way? Creating a new filesystem on a 16TB LV
> and copying the data from the backup?
>
> Or is it really safe to reduce the LV by some amount - let's say 2.5TB -
> and sacrifice 500GB for safety?
>
> Thanks! Sebastian
>
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] Find Correct LE Size of ext4 Filesystem (16TB)
  2013-10-09  9:42       ` Gabriel
@ 2013-11-07 10:02         ` Sebastian Walter
  0 siblings, 0 replies; 6+ messages in thread
From: Sebastian Walter @ 2013-11-07 10:02 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 3233 bytes --]

Hi list,

It took me a while to take the server out of production, but Matthew's
method finally worked. I shrank the filesystem to 15TB, then set the LV
to exactly 16TB, and finally expanded the filesystem to the size of
the LV.

Thanks! Many regards, Sebastian

On 10/09/2013 11:42 AM, Gabriel wrote:
> I think what Matthew is saying is that you should first shrink your
> filesystem to a smaller size (15.8 TiB), then adjust the LV size to 16
> TiB (this is safe because the filesystem above is smaller) and finally
> resize the filesystem to the LV size (resize2fs /your/filesystem).
> This is a much safer approach and leaves you with no unused space.
> 
> On Wed, Oct 9, 2013 at 10:57 AM, Sebastian Walter
> <sebastian.walter@fu-berlin.de> wrote:
>> Hi Matthew,
>>
>> On 10/09/2013 12:58 AM, matthew patton wrote:
>>
>>>
>>> The whole point of using LVM is to dispense with the malarkey of hard disk partitions, so I don't know why you're resizing them. If I read your email correctly, you're going about this dangerously.
>>
>> Thanks for pointing this out. Now I realize that I used the term
>> "partition" instead of "filesystem" erroneously (changed subject).
>>
>> What I initially wanted to do is to extend the filesystem to the whole
>> size of the underlying physical disks. So first I extended the LV to
>> fill the disks (19TB). As the second step I wanted to resize the ext4
>> filesystem to the LV size of 19TB. This last step failed with an error
>> message saying that tune4fs can't allocate more than 16TB. Setting the
>> filesystem's size to 16T worked.
>>
>> Now I have a 16TB filesystem on a 19.18TB logical volume. The additional
>> 3TB are unused on the LV so I want to reduce the LV to the same size as
>> the filesystem. This way I could use the 3TB for other LVs.
>>
>>> 3) resize2fs (no args) <LV device>
>>>
>>> Step 3 will resize the FS to as close as the LV size allows and won't let you write past the LV's endpoint like your method makes so easy to screw up when doing the extent calculation. Maybe your maths are always perfect but I wouldn't risk my data for a measly few bytes. Do it the fool-proof way.
>>
>> I agree with you that resizing the filesystem to the theoretical values
>> of the LV geometry is much too risky.
>>
>> What would be the fool-proof way? Creating a new filesystem on a 16TB LV
>> and copying the data from the backup?
>>
>> Or is it really safe to reduce the LV by some amount - let's say 2.5TB -
>> and sacrifice 500GB for safety?
>>
>> Thanks! Sebastian
>>
>>
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm@redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 


-- 
Dipl.-Geol. Sebastian Walter
Freie Universitaet Berlin - Institute of Geological Sciences
Planetary Sciences and Remote Sensing
Malteserstrasse 74-100 - 12249 Berlin, Germany
Phone: +49 30 838 70541



^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2013-11-07 10:01 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-10-08  9:06 [linux-lvm] Find Correct LE Size of ext4 Partition (16TB) Sebastian Walter
2013-10-08 20:11 ` Gabriel
2013-10-08 22:58   ` matthew patton
2013-10-09  7:57     ` [linux-lvm] Find Correct LE Size of ext4 Filesystem (16TB) Sebastian Walter
2013-10-09  9:42       ` Gabriel
2013-11-07 10:02         ` Sebastian Walter
