* [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
@ 2010-08-11 2:25 Rahul Nabar
2010-08-11 6:10 ` Ray Morris
2010-08-11 8:42 ` Giorgio Bersano
0 siblings, 2 replies; 11+ messages in thread
From: Rahul Nabar @ 2010-08-11 2:25 UTC (permalink / raw)
To: linux-lvm
Some of the physical volumes show "Allocatable yes (but
full)" while others don't. How does one relate this to the actual
capacity? The reason I am confused is that 3 of my PVs show up as
full and each is 1.63 TB. Thus the total full space would come to 4.89
TB. But the sum of full space of all my LV's is only around 3 TB
(based on the output of df)
I've reproduced the outputs of pvdisplay, lvdisplay and df below.
I'm confused! Any pointers?
--
Rahul
[root@eustorage ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name euclid_highperf_storage
PV Size 1.63 TB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 428351
Free PE 0
Allocated PE 428351
PV UUID wDdbmP-2n5m-98HD-Ewqk-Q3y0-lnMf-rsaVXt
--- Physical volume ---
PV Name /dev/sdc
VG Name euclid_highperf_storage
PV Size 1.63 TB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 428351
Free PE 0
Allocated PE 428351
PV UUID 75i75q-2rec-2FMf-eyPa-W0nF-zFHH-PIAvvc
--- Physical volume ---
PV Name /dev/sdd
VG Name euclid_highperf_storage
PV Size 1.63 TB / not usable 4.00 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 428351
Free PE 0
Allocated PE 428351
PV UUID vo2Jh2-PfFC-eOj4-GYnP-Jx1I-Sisu-2nY4lC
--- Physical volume ---
PV Name /dev/sde
VG Name euclid_highperf_storage
PV Size 1.63 TB / not usable 4.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 428351
Free PE 38140
Allocated PE 390211
PV UUID EK7cvF-IZjf-PJVw-d2RR-lCdt-kOSD-iqFtOf
--- Physical volume ---
PV Name /dev/sdf
VG Name euclid_highperf_storage
PV Size 1.63 TB / not usable 4.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 428351
Free PE 140607
Allocated PE 287744
PV UUID fQXN8S-HhYu-weoq-kbuz-BrxZ-6WQk-6ydBDw
--- Physical volume ---
PV Name /dev/sdg
VG Name euclid_highperf_storage
PV Size 1.63 TB / not usable 4.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 428351
Free PE 140607
Allocated PE 287744
PV UUID i7GD1d-rbd2-efKd-uK3u-D3S2-BxJv-UkrNve
[root@eustorage ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 76G 8.6G 64G 12% /
/dev/sda6 19G 365M 17G 3% /var
/dev/sda5 15G 165M 14G 2% /tmp
/dev/sda1 487M 17M 445M 4% /boot
tmpfs 24G 0 24G 0% /dev/shm
/dev/mapper/euclid_highperf_storage-LV_home
6.0T 1.4T 4.4T 24% /home
/dev/mapper/euclid_highperf_storage-LV_export
591G 17G 550G 3% /opt
/dev/mapper/euclid_highperf_storage-LV_polhome
2.0T 1.5T 386G 80% /polhome
[root@eustorage ~]# lvdisplay
--- Logical volume ---
LV Name /dev/euclid_highperf_storage/LV_home
VG Name euclid_highperf_storage
LV UUID gu7yo1-TYYr-ucHG-QSDk-y8HD-ETrs-Z5kCk9
LV Write Access read/write
LV Status available
# open 1
LV Size 6.00 TB
Current LE 1572864
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1536
Block device 253:0
--- Logical volume ---
LV Name /dev/euclid_highperf_storage/LV_export
VG Name euclid_highperf_storage
LV UUID 1lktLy-Hgn3-qS1m-41VJ-5kNY-DMyb-1ri4Th
LV Write Access read/write
LV Status available
# open 1
LV Size 600.00 GB
Current LE 153600
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1536
Block device 253:1
--- Logical volume ---
LV Name /dev/euclid_highperf_storage/LV_polhome
VG Name euclid_highperf_storage
LV UUID xqpOX5-HFey-H0qi-NgjP-NVS7-FwDb-zbiK8m
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 TB
Current LE 524288
Segments 4
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-11 2:25 Rahul Nabar
@ 2010-08-11 6:10 ` Ray Morris
2010-08-11 17:03 ` Rahul Nabar
2010-08-11 8:42 ` Giorgio Bersano
1 sibling, 1 reply; 11+ messages in thread
From: Ray Morris @ 2010-08-11 6:10 UTC (permalink / raw)
To: LVM general discussion and development
> Thus the total full space would come to 4.89 TB. But the sum of full
> space of all my LV's is only around 3 TB (based on the output of df)
It's the same thing as making a new partition covering your whole drive,
then wondering why fdisk says you can't make another partition. Just
because you haven't stored files in that partition, it still takes
up the whole drive.
df shows that your LVs take up 8.6TB: 6TB + 600 GB + 2 TB.
Therefore, you are using 8.6TB of disk space for those LVs.
Some of the space WITHIN each LV might not be used for files,
but it has been dedicated to that LV.
df also shows that the filesystems on the LVs have free space for
more files. So you can put more files on those LVs, which is a
different thing than having space to make more LVs.
I'm not good at explaining things, so sometimes I try explaining three
different ways. I have six cereal boxes, each half empty. I put the
boxes in a bag. The bag is now full. The cereal boxes may not be full,
but they fill up the bag. The cereal boxes are your half empty LVs and
the bag is your drives.
Layers:
hard drive
partition (can be skipped)
physical volume
volume group
logical volume
file system
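The "boxes fill the bag" accounting can be checked with a couple of lines of shell arithmetic. A minimal sketch, using the LV sizes from the lvdisplay output in this thread (6 TB + 600 GB + 2 TB) and binary units (1 TB = 1024 GB):

```shell
# Total space dedicated to LVs, regardless of how full the
# filesystems inside them are.
lv_home_gb=$((6 * 1024))     # LV_home:    6.00 TB
lv_export_gb=600             # LV_export:  600.00 GB
lv_polhome_gb=$((2 * 1024))  # LV_polhome: 2.00 TB

total_gb=$((lv_home_gb + lv_export_gb + lv_polhome_gb))
echo "Total allocated to LVs: ${total_gb} GB"   # 8792 GB, i.e. ~8.6 TB
```

That ~8.6 TB is what the PVs have given away, even though df reports plenty of room inside each filesystem.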
--
Ray Morris
support@bettercgi.com
Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/
Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/
Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php
On 08/10/2010 09:25:15 PM, Rahul Nabar wrote:
> Some of the physical volumes show "Allocatable yes (but
> full)" while others don't. How does one relate this to the actual
> capacity? THe reason I am confused is that 3 of my PV's show up as
> full and each is 1.63 TB. Thus the total full space would come to 4.89
> TB. But the sum of full space of all my LV's is only around 3 TB
> (based on the output of df)
>
> I've reproduced the outputs of pvdisplay, lvdisplay and df below.
>
> I'm confused! Any pointers?
>
> --
> Rahul
> [pvdisplay, df and lvdisplay output quoted in full; snipped]
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-11 2:25 Rahul Nabar
2010-08-11 6:10 ` Ray Morris
@ 2010-08-11 8:42 ` Giorgio Bersano
1 sibling, 0 replies; 11+ messages in thread
From: Giorgio Bersano @ 2010-08-11 8:42 UTC (permalink / raw)
To: LVM general discussion and development
2010/8/11 Rahul Nabar <rpnabar@gmail.com>
> Some of the physical volumes show "Allocatable yes (but
> full)" while others don't. How does one relate this to the actual
> capacity? THe reason I am confused is that 3 of my PV's show up as
> full and each is 1.63 TB. Thus the total full space would come to 4.89
> TB. But the sum of full space of all my LV's is only around 3 TB
> (based on the output of df)
>
> I've reproduced the outputs of pvdisplay, lvdisplay and df below.
>
> I'm confused! Any pointers?
>
> --
> Rahul
>
>
Hi Rahul,
you really appear to have six PVs (not three) of 1.63 TB each, and all
of that storage space is assigned to the VG euclid_highperf_storage.
The VG has three LVs defined in it and indeed has free space; on these
LVs you created three filesystems, which have free space too, as Ray just
explained.
You have not shown the output of a vgs command, but if you issue it you
should see something like this:
#vgs
VG #PV #LV #SN Attr VSize VFree
euclid_highperf_storage 6 3 0 wz--n- 9.8T 1.22T
and so you still have 1.22 TB of free space to use
Some math...
Details from pvdisplay (PE Size 4096 KB = 4 MB):
  Total PE:     428351 * 6 = 2570106 PE * 4 MB = 10280424 MB = 9.8 TB
  Free PE:      38140 + 140607 + 140607 = 319354 PE * 4 MB = 1277416 MB = 1.22 TB
  Allocated PE: 428351 * 3 + 390211 + 287744 + 287744 = 2250752 PE * 4 MB = 9003008 MB = 8.58 TB
Details from lvdisplay:
  Current LE:   1572864 + 153600 + 524288 = 2250752
exactly the Allocated PE total from pvdisplay.
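The arithmetic above can be reproduced mechanically from the PE counts in the pvdisplay output. A small sketch using exactly the numbers posted in this thread:

```shell
pe_mb=4                                   # PE Size: 4096 KB = 4 MB

total_pe=$((428351 * 6))                  # six identical PVs
free_pe=$((38140 + 140607 + 140607))      # Free PE on sde, sdf, sdg
alloc_pe=$((total_pe - free_pe))

echo "total: $((total_pe * pe_mb)) MB"    # 10280424 MB ~ 9.8 TB
echo "free:  $((free_pe * pe_mb)) MB"     # 1277416 MB ~ 1.22 TB
echo "alloc: $((alloc_pe * pe_mb)) MB"    # 9003008 MB ~ 8.58 TB

# Cross-check against lvdisplay's Current LE values:
le_sum=$((1572864 + 153600 + 524288))
echo "LE sum: ${le_sum}"                  # 2250752, matches Allocated PE
```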
Best regards,
Giorgio.
> [pvdisplay, df and lvdisplay output quoted in full; snipped]
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-11 6:10 ` Ray Morris
@ 2010-08-11 17:03 ` Rahul Nabar
2010-08-11 17:19 ` Ray Morris
0 siblings, 1 reply; 11+ messages in thread
From: Rahul Nabar @ 2010-08-11 17:03 UTC (permalink / raw)
To: LVM general discussion and development
On Wed, Aug 11, 2010 at 1:10 AM, Ray Morris <support@bettercgi.com> wrote:
Thanks Giorgio and Ray! That helps!
>
> df shows that your LVs take up 8.6TB: 6TB + 600 GB + 2 TB.
> Therefore, you are using 8.6TB of disk space for those LVs.
> Some of the space WITHIN each LV might not be used for files,
> but it has been dedicated to that LV.
Makes sense! The only thing that confused me was why 3 of my PVs
say "yes (but full)" while the other three don't. How does one explain
that?
1.63 x 3 = 4.89, still less than 8.6.
Has the VG spanned the first 3 PVs fully and then used the
remaining 3 partially?
> I'm not good at explaining things, so sometimes I try explaining three
> different ways. I have six cereal boxes, each half empty. I put the
> boxes in a bag. The bag is now full. The cereal boxes may not be full,
> but they fill up the bag. The cereal boxes are your half empty LVs and
> the bag is your drives.
Food based analogies are always good! :)
Giorgio:
The vgs output is exactly as you say:
[root@eustorage ~]# vgs
VG #PV #LV #SN Attr VSize VFree
euclid_highperf_storage 6 3 0 wz--n- 9.80T 1.22T
--
Rahul
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-11 17:03 ` Rahul Nabar
@ 2010-08-11 17:19 ` Ray Morris
0 siblings, 0 replies; 11+ messages in thread
From: Ray Morris @ 2010-08-11 17:19 UTC (permalink / raw)
To: LVM general discussion and development
> Makes sense! The only reason that I was confused was why 3 of my PV's
> say "yes (but full)" and the other three not. How does one explain
> that?
>
> 1.63x3=4.89 still less than 8.6.
>
> Has the VG spanned across the first 3 PVs fully and then utilized the
> remaining 3 partially?
Pretty much. See lvm(8), the --alloc option.
--
Ray Morris
support@bettercgi.com
Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/
Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/
Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php
On 08/11/2010 12:03:39 PM, Rahul Nabar wrote:
> [quoted message snipped]
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
@ 2010-08-11 22:26 Daksh Chauhan
2010-08-11 22:40 ` Stuart D. Gathman
0 siblings, 1 reply; 11+ messages in thread
From: Daksh Chauhan @ 2010-08-11 22:26 UTC (permalink / raw)
To: linux-lvm
This is interesting, and I understand your explanation, Ray... But
how can I figure out how much data is on each PV?
I have a similar setup, and I have scripts (and cacti) to see
disk I/O for each PV, but I would really like to see how much data is
on each PV...
Thank you,
> [Ray Morris's reply of Wed, 11 Aug 2010, quoted in full; snipped]
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-11 22:26 [linux-lvm] calculating free capacity from pvdisplay and lvdisplay Daksh Chauhan
@ 2010-08-11 22:40 ` Stuart D. Gathman
2010-08-11 23:01 ` Rahul Nabar
0 siblings, 1 reply; 11+ messages in thread
From: Stuart D. Gathman @ 2010-08-11 22:40 UTC (permalink / raw)
To: daksh, LVM general discussion and development
On Wed, 11 Aug 2010, Daksh Chauhan wrote:
> This is interesting, and I understand your explaination Ray... But,
> how can I figure out how much data is on each PVs??
If by "data" you mean "allocated to LVs", then the Free PE count for each PV
tells you the answer.
But perhaps by "data" you mean "allocated to LVs *and* allocated to files
in the (arbitrary) filesystems on those LVs".
This would be difficult (and probably pointless):
1) Create a map for each LV of the data blocks in use. Obtaining this map is
filesystem dependent, and very different for each filesystem type.
2) Use the LE->PE mapping for each LV to map in-use filesystem blocks to PEs.
3) Count those PEs for each PV, and subtract from the total for each PV to get
the total in use by files in each respective filesystem.
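As a toy illustration of steps 2 and 3 only, with entirely made-up numbers (step 1, the filesystem walk, is the hard part and is faked here): suppose an LV has 8 logical extents, LEs 0-3 backed by PV1 and LEs 4-7 by PV2, and the filesystem reports that LEs 1, 2 and 6 hold file data:

```shell
# LE -> PV mapping for one LV (in real life this would come from the
# LV's segment layout, e.g. `lvs -o +seg_pe_ranges`); fabricated here.
le_to_pv=(PV1 PV1 PV1 PV1 PV2 PV2 PV2 PV2)
in_use="1 2 6"        # LEs holding file data (step 1, filesystem-specific)

pv1=0; pv2=0
for le in $in_use; do # steps 2-3: map each in-use LE to its PV and count
  case ${le_to_pv[$le]} in
    PV1) pv1=$((pv1 + 1)) ;;
    PV2) pv2=$((pv2 + 1)) ;;
  esac
done
echo "PV1: $pv1 in-use extents, PV2: $pv2 in-use extents"
# prints: PV1: 2 in-use extents, PV2: 1 in-use extents
```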
The whole point of LVM is that LVM knows *nothing* about the filesystems
that you put on the LVs. You would not be asking the question unless:
a) You are coming from Sun ZFS, where the "LVM" *does* know about the
filesystems. (The drawback is that you can only ever use the ZFS
filesystem.)
b) You are unclear on the concept. Perhaps the explanation of how
it *could* be accomplished will illuminate. Note especially the point
in step 1 - "the map is filesystem dependent".
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-11 22:40 ` Stuart D. Gathman
@ 2010-08-11 23:01 ` Rahul Nabar
2010-08-11 23:56 ` Malahal Naineni
0 siblings, 1 reply; 11+ messages in thread
From: Rahul Nabar @ 2010-08-11 23:01 UTC (permalink / raw)
To: LVM general discussion and development; +Cc: daksh
On Wed, Aug 11, 2010 at 5:40 PM, Stuart D. Gathman <stuart@bmsi.com> wrote:
> b) You are unclear on the concept. Perhaps the explanation of how
> it *could* be accomplished will illuminate. Note especially the point
> in step 1 - "the map is filesystem dependent".
What's still unclear to me is why LVM decided to use the first three
PVs fully and then only partially use the last three. It could instead
have continued to use PV4 fully as well, then used PV5 partially and
left PV6 totally empty. (My total LV size is a little greater than 4
PVs but less than 5 PVs.)
I assume the answer to this mystery lies in the "Allocation Policy"
[contiguous, cling, normal, anywhere or inherit]. Is there a way on a
running LVM to query what its allocation policy is? Alternatively, if
the person creating a VG omits an explicit specification, what
allocation policy does LVM use by default?
--
Rahul
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-11 23:01 ` Rahul Nabar
@ 2010-08-11 23:56 ` Malahal Naineni
2010-08-12 0:22 ` Rahul Nabar
0 siblings, 1 reply; 11+ messages in thread
From: Malahal Naineni @ 2010-08-11 23:56 UTC (permalink / raw)
To: linux-lvm
Rahul Nabar [rpnabar@gmail.com] wrote:
> On Wed, Aug 11, 2010 at 5:40 PM, Stuart D. Gathman <stuart@bmsi.com> wrote:
> > b) You are unclear on the concept. Perhaps the explanation of how
> > it *could* be accomplished will illuminate. Note especially the point
> > in step 1 - "the map is filesystem dependent".
>
> What's still unclear to me is understanding why LVM decided to use
> fully the first three PV's and then after that only partially the last
> three PV's. Although it could have continued to use PV4 fully as well
> and then partially PV5 and keep PV6 totally empty. (My total LV size
> is a little greater than 4PVs but less than 5PVs)
>
> I assume the answer to this mystery lies in the "Allocation Policy".
> [contiguous, cling, normal, anywhere or inherit ] Is there a way on a
> running LVM to query what its allocation policy is? Alternately if the
> person creating an LVM omits an explicit specification then what
> allocation policy does LVM use by default?
It is 'normal' for a VG and 'inherit' for an LV. See 'man lvm' for
details and 'man vgs' for how to query it: the allocation policy is part
of the VG 'Attr' field, so look at the Attr value when you run 'vgs -o+attr'.
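For example, in the Attr string from the vgs output in this thread (wz--n-), the fifth character encodes the allocation policy (c=contiguous, l=cling, n=normal, a=anywhere, per vgs(8)). A small sketch of decoding it:

```shell
attr="wz--n-"          # Attr column from `vgs -o+attr` (this thread's VG)
case "${attr:4:1}" in  # 5th character = allocation policy
  c) echo "contiguous" ;;
  l) echo "cling" ;;
  n) echo "normal" ;;
  a) echo "anywhere" ;;
  *) echo "unknown" ;;
esac                   # prints "normal" for wz--n-
```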
Thanks, Malahal.
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-11 23:56 ` Malahal Naineni
@ 2010-08-12 0:22 ` Rahul Nabar
2010-08-12 0:52 ` Malahal Naineni
0 siblings, 1 reply; 11+ messages in thread
From: Rahul Nabar @ 2010-08-12 0:22 UTC (permalink / raw)
To: LVM general discussion and development
On Wed, Aug 11, 2010 at 6:56 PM, Malahal Naineni <malahal@us.ibm.com> wrote:
>
> It is 'normal' for VG and 'inherit' for an LV. See 'man lvm' command for
> details. See 'man vgs' on how to get it. Allocation policy is part of vg
> 'Attr' field. Look for Attr value when you run 'vgs -o+attr'
Thanks! I do have the "normal" allocation policy on the VG and all the
LVs inherit it.
But that just deepens the mystery: why are my first 3 PVs showing
full and the others not?
[root@eustorage ~]# vgs
VG #PV #LV #SN Attr VSize VFree
euclid_highperf_storage 6 3 0 wz--n- 9.80T 1.22T
[root@eustorage ~]# lvs
LV         VG                      Attr   LSize   Origin Snap% Move Log Copy% Convert
LV_export  euclid_highperf_storage -wi-ao 600.00G
LV_home    euclid_highperf_storage -wi-ao   6.00T
LV_polhome euclid_highperf_storage -wi-ao   2.00T
--
Rahul
* Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
2010-08-12 0:22 ` Rahul Nabar
@ 2010-08-12 0:52 ` Malahal Naineni
0 siblings, 0 replies; 11+ messages in thread
From: Malahal Naineni @ 2010-08-12 0:52 UTC (permalink / raw)
To: linux-lvm
Rahul Nabar [rpnabar@gmail.com] wrote:
> On Wed, Aug 11, 2010 at 6:56 PM, Malahal Naineni <malahal@us.ibm.com> wrote:
> >
> > It is 'normal' for VG and 'inherit' for an LV. See 'man lvm' command for
> > details. See 'man vgs' on how to get it. Allocation policy is part of vg
> > 'Attr' field. Look for Attr value when you run 'vgs -o+attr'
>
> THanks! I do have "normal" allocation policy on the VG and all the
> LV's inherit this.
> But that just deepens the mystery. Why are my first 3 PV's showing
> full and the others not?
I don't know your particular situation, but if you create an LV, it will
normally allocate from the first PV. Until the first PV is full, it won't
allocate from the next PV. This is only true for linear LVs; if you use
stripes, the allocation is completely different.
I don't think LVM remembers the last-used PV so as to do any kind of
round-robin allocation across different lvcreates. Hope that
answers your mystery.
Thanks, Malahal.