From: Daksh Chauhan <um.daksh@gmail.com>
To: linux-lvm@redhat.com
Subject: Re: [linux-lvm] calculating free capacity from pvdisplay and lvdisplay
Date: Wed, 11 Aug 2010 17:26:18 -0500
Message-ID: <AANLkTinnEDV-Vsx3LW9XirpDsN7mPyvphxeLsqYsSp0F@mail.gmail.com>
This is interesting, and I understand your explanation, Ray... But
how can I figure out how much data is on each PV?

I have a similar setup, and I have scripts (and cacti) to graph
disk IO for each PV, but I would really like to see how much data
is on each PV...
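
The closest I have come up with is summing allocated extents per PV
from pvdisplay (a rough awk sketch; note this only shows space
allocated to LVs, not how much file data actually sits on each disk):

  pvdisplay | awk '
      /PV Name/      { pv = $3 }       # current PV device
      /PE Size/      { pe_kb = $4 }    # extent size in KB
      /Allocated PE/ { printf "%s: %.2f GB allocated\n", pv, $3 * pe_kb / 1024 / 1024 }'
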
Thank you,
> Date: Wed, 11 Aug 2010 01:10:31 -0500
> From: Ray Morris <support@bettercgi.com>
> To: LVM general discussion and development <linux-lvm@redhat.com>
> Subject: Re: [linux-lvm] calculating free capacity from pvdisplay and
>        lvdisplay
> Message-ID: <1281507031.21952.1@raydesk1.bettercgi.com>
> Content-Type: text/plain; charset=us-ascii; DelSp=Yes; Format=Flowed
>
>> Thus the total full space would come to 4.89 TB. But the sum of
>> full space of all my LVs is only around 3 TB (based on the output
>> of df)
>
> It's the same thing as making a new partition covering your whole drive,
> then wondering why fdisk says you can't make another partition.  Just
> because you haven't stored files in that partition, it still takes
> up the whole drive.
>
> df shows that your LVs take up 8.6 TB: 6 TB + 600 GB + 2 TB.
> Therefore, you are using 8.6 TB of disk space for those LVs.
> Some of the space WITHIN each LV might not be used for files,
> but it has been dedicated to that LV.
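>
> A quick way to see the two numbers side by side (roughly; lvs and
> df are both stock commands):
>
>   lvs euclid_highperf_storage    # LSize column = space dedicated to each LV
>   df -h /home /opt /polhome      # Used column  = space actually in files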
>
> df also shows that the filesystems on the LVs have free space for
> more files.  So you can put more files on those LVs, which is a
> different thing than having space to make more LVs.
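>
> To check how much room is left for new LVs (as opposed to new files),
> ask the volume group itself, e.g.:
>
>   vgdisplay euclid_highperf_storage | grep -E 'Alloc|Free'
>
> The "Free  PE / Size" line is the space you have left to grow or
> create LVs; df never shows that number.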
>
> I'm not good at explaining things, so sometimes I try explaining three
> different ways.  I have six cereal boxes, each half empty.  I put the
> boxes in a bag.  The bag is now full.  The cereal boxes may not be full,
> but they fill up the bag.  The cereal boxes are your half-empty LVs and
> the bag is your drives.
>
> Layers:
>
> hard drive
> partition (can be skipped)
> physical volume
> volume group
> logical volume
> file system
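>
> And one command to peek at each layer, roughly (substitute your own
> device and mount names):
>
>   fdisk -l /dev/sdb      # drive / partition
>   pvdisplay /dev/sdb     # physical volume
>   vgdisplay              # volume group
>   lvdisplay              # logical volume
>   df -h /home            # file system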
> --
> Ray Morris
> support@bettercgi.com
>
> Strongbox - The next generation in site security:
> http://www.bettercgi.com/strongbox/
>
> Throttlebox - Intelligent Bandwidth Control
> http://www.bettercgi.com/throttlebox/
>
> Strongbox / Throttlebox affiliate program:
> http://www.bettercgi.com/affiliates/user/register.php
>
>
> On 08/10/2010 09:25:15 PM, Rahul Nabar wrote:
>> Some of the physical volumes show "Allocatable           yes (but
>> full)" while others don't. How does one relate this to the actual
>> capacity? The reason I am confused is that 3 of my PVs show up as
>> full and each is 1.63 TB. Thus the total full space would come to
>> 4.89 TB. But the sum of full space of all my LVs is only around
>> 3 TB (based on the output of df).
>>
>> I've reproduced the outputs of pvdisplay, lvdisplay and df below.
>>
>> I'm confused! Any pointers?
>>
>> --
>> Rahul
>>
>>
>> [root@eustorage ~]# pvdisplay
>>   --- Physical volume ---
>>   PV Name               /dev/sdb
>>   VG Name               euclid_highperf_storage
>>   PV Size               1.63 TB / not usable 4.00 MB
>>   Allocatable           yes (but full)
>>   PE Size (KByte)       4096
>>   Total PE              428351
>>   Free PE               0
>>   Allocated PE          428351
>>   PV UUID               wDdbmP-2n5m-98HD-Ewqk-Q3y0-lnMf-rsaVXt
>>
>>   --- Physical volume ---
>>   PV Name               /dev/sdc
>>   VG Name               euclid_highperf_storage
>>   PV Size               1.63 TB / not usable 4.00 MB
>>   Allocatable           yes (but full)
>>   PE Size (KByte)       4096
>>   Total PE              428351
>>   Free PE               0
>>   Allocated PE          428351
>>   PV UUID               75i75q-2rec-2FMf-eyPa-W0nF-zFHH-PIAvvc
>>
>>   --- Physical volume ---
>>   PV Name               /dev/sdd
>>   VG Name               euclid_highperf_storage
>>   PV Size               1.63 TB / not usable 4.00 MB
>>   Allocatable           yes (but full)
>>   PE Size (KByte)       4096
>>   Total PE              428351
>>   Free PE               0
>>   Allocated PE          428351
>>   PV UUID               vo2Jh2-PfFC-eOj4-GYnP-Jx1I-Sisu-2nY4lC
>>
>>   --- Physical volume ---
>>   PV Name               /dev/sde
>>   VG Name               euclid_highperf_storage
>>   PV Size               1.63 TB / not usable 4.00 MB
>>   Allocatable           yes
>>   PE Size (KByte)       4096
>>   Total PE              428351
>>   Free PE               38140
>>   Allocated PE          390211
>>   PV UUID               EK7cvF-IZjf-PJVw-d2RR-lCdt-kOSD-iqFtOf
>>
>>   --- Physical volume ---
>>   PV Name               /dev/sdf
>>   VG Name               euclid_highperf_storage
>>   PV Size               1.63 TB / not usable 4.00 MB
>>   Allocatable           yes
>>   PE Size (KByte)       4096
>>   Total PE              428351
>>   Free PE               140607
>>   Allocated PE          287744
>>   PV UUID               fQXN8S-HhYu-weoq-kbuz-BrxZ-6WQk-6ydBDw
>>
>>   --- Physical volume ---
>>   PV Name               /dev/sdg
>>   VG Name               euclid_highperf_storage
>>   PV Size               1.63 TB / not usable 4.00 MB
>>   Allocatable           yes
>>   PE Size (KByte)       4096
>>   Total PE              428351
>>   Free PE               140607
>>   Allocated PE          287744
>>   PV UUID               i7GD1d-rbd2-efKd-uK3u-D3S2-BxJv-UkrNve
>>
>> [root@eustorage ~]# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/sda2              76G  8.6G   64G  12% /
>> /dev/sda6              19G  365M   17G   3% /var
>> /dev/sda5              15G  165M   14G   2% /tmp
>> /dev/sda1             487M   17M  445M   4% /boot
>> tmpfs                  24G     0   24G   0% /dev/shm
>> /dev/mapper/euclid_highperf_storage-LV_home
>>                       6.0T  1.4T  4.4T  24% /home
>> /dev/mapper/euclid_highperf_storage-LV_export
>>                       591G   17G  550G   3% /opt
>> /dev/mapper/euclid_highperf_storage-LV_polhome
>>                       2.0T  1.5T  386G  80% /polhome
>> [root@eustorage ~]# lvdisplay
>>   --- Logical volume ---
>>   LV Name                /dev/euclid_highperf_storage/LV_home
>>   VG Name                euclid_highperf_storage
>>   LV UUID                gu7yo1-TYYr-ucHG-QSDk-y8HD-ETrs-Z5kCk9
>>   LV Write Access        read/write
>>   LV Status              available
>>   # open                 1
>>   LV Size                6.00 TB
>>   Current LE             1572864
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     1536
>>   Block device           253:0
>>
>>   --- Logical volume ---
>>   LV Name                /dev/euclid_highperf_storage/LV_export
>>   VG Name                euclid_highperf_storage
>>   LV UUID                1lktLy-Hgn3-qS1m-41VJ-5kNY-DMyb-1ri4Th
>>   LV Write Access        read/write
>>   LV Status              available
>>   # open                 1
>>   LV Size                600.00 GB
>>   Current LE             153600
>>   Segments               1
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     1536
>>   Block device           253:1
>>
>>   --- Logical volume ---
>>   LV Name                /dev/euclid_highperf_storage/LV_polhome
>>   VG Name                euclid_highperf_storage
>>   LV UUID                xqpOX5-HFey-H0qi-NgjP-NVS7-FwDb-zbiK8m
>>   LV Write Access        read/write
>>   LV Status              available
>>   # open                 1
>>   LV Size                2.00 TB
>>   Current LE             524288
>>   Segments               4
>>   Allocation             inherit
>>   Read ahead sectors     auto
>>   - currently set to     256
>>   Block device           253:2
>>
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm@redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>>
>>
>