linux-lvm.redhat.com archive mirror
* [linux-lvm] lvm space giveth and space taketh away: missing space?
@ 2010-09-02  1:50 Linda A. Walsh
  2010-09-02  3:44 ` Stuart D. Gathman
  2010-09-02 10:01 ` Bryn M. Reeves
  0 siblings, 2 replies; 5+ messages in thread
From: Linda A. Walsh @ 2010-09-02  1:50 UTC (permalink / raw)
  To: linux-lvm

I'm running low on space in my /backups partition.  I looked at the
partitions and volumes to see what might be done (besides deleting old
backups), and noticed:

pvs:
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sdb1  Backups    lvm2 a-   10.91T  3.15G

---
So I thought 'cool', I didn't make it the full size, and
I have some left...ok...(I didn't remember what I'd done, it's
been a while).

Run lvresize:
lvresize /dev/Backups/Backups -L +3.15G
  Rounding up size to full physical extent 3.15 GB
  Extending logical volume Backups to 10.91 TB
  Logical volume Backups successfully resized

Um...HELLO?  Extending to 10.91?  But it was at 10.91!
pvs:
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sdb1  Backups    lvm2 a-   10.91T     0

Well that was unimpressive.

parted /dev/sdb
p(rint)
Disk /dev/sdb: 12.0TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name    Flags
 1      17.4kB  12.0TB  12.0TB               backup  lvm

Ok, now I'm really confused.  10.91T?  14.06T?  (Obviously
this was optimistic reporting if the partition is only 12TB!)

So why does parted show a 12TB disk while lvm shows only a 10.91T
disk and why did lvm show 3.15G free when it wasn't really there?

How do I get my 1.09T back from lvm?  That seems awfully high
for an overhead number for lvm.  I'd expect more like "0.09T".


Ideas?
Linda

* Re: [linux-lvm] lvm space giveth and space taketh away: missing space?
  2010-09-02  1:50 [linux-lvm] lvm space giveth and space taketh away: missing space? Linda A. Walsh
@ 2010-09-02  3:44 ` Stuart D. Gathman
  2010-09-02  3:52   ` Stuart D. Gathman
  2010-09-02 10:01 ` Bryn M. Reeves
  1 sibling, 1 reply; 5+ messages in thread
From: Stuart D. Gathman @ 2010-09-02  3:44 UTC (permalink / raw)
  To: LVM general discussion and development

On Wed, 1 Sep 2010, Linda A. Walsh wrote:

> lvresize /dev/Backups/Backups -L +3.15G
>  Rounding up size to full physical extent 3.15 GB
>  Extending logical volume Backups to 10.91 TB
>  Logical volume Backups successfully resized
> 
> Um...HELLO?  Extending to 10.91?  But it was at 10.91!

3.15G is not significant compared to 10.91TB.  You know the saying, 
"A billion here and a billion there, and pretty soon you're talking
about real storage."

> So why does parted show a 12TB disk while lvm shows only a 10.91T
> disk and why did lvm show 3.15G free when it wasn't really there?

12.0 TeraBytes (12*10^12) ~= 10.91 TebiBytes (10.91*2^40) to 4 significant
figures.  The 3.15G was there, you added it to your LV (but you still need
to resize the filesystem).  10.91*2^40 + 3.15*2^30 ~= 10.91*2^40 to 4
significant figures.
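
If you want to see the rounding for yourself, a quick check with bc (or any
calculator) will do it; the decimals below are approximate:

  $ echo 'scale=4; 12*10^12 / 2^40' | bc
  10.9139
  $ echo 'scale=4; (12*10^12 + 3.15*2^30) / 2^40' | bc
  10.9170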

> How do I get my 1.09T back from lvm?  That seems awfully high
> for an overhead number for lvm.  I'd expect more like "0.09T".

http://www.innumeracy.com/

-- 
	      Stuart D. Gathman <stuart@bmsi.com>
    Business Management Systems Inc.  Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.

* Re: [linux-lvm] lvm space giveth and space taketh away: missing space?
  2010-09-02  3:44 ` Stuart D. Gathman
@ 2010-09-02  3:52   ` Stuart D. Gathman
  0 siblings, 0 replies; 5+ messages in thread
From: Stuart D. Gathman @ 2010-09-02  3:52 UTC (permalink / raw)
  To: LVM general discussion and development

On Wed, 1 Sep 2010, Stuart D. Gathman wrote:

> 12.0 TeraBytes (12*10^12) ~= 10.91 TebiBytes (10.91*2^40) to 4 significant
> figures.  The 3.15G was there, you added it to your LV (but you still need
> to resize the filesystem).  10.91*2^40 + 3.15*2^30 ~= 10.91*2^40 to 4
> significant figures.
> 
> http://www.innumeracy.com/

I didn't mean to be quite so snarky.  It is a real problem that the storage
units (binary vs. decimal) are not consistently labeled in the various
utilities.  Blame it on disk manufacturers who started using decimal units
to inflate the apparent size of their drives in the eyes of naive buyers.
(But you should have had some sense of how insignificant a Giga anything is
compared to a Tera anything, whether binary or decimal.)

-- 
	      Stuart D. Gathman <stuart@bmsi.com>
    Business Management Systems Inc.  Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.

* Re: [linux-lvm] lvm space giveth and space taketh away: missing space?
  2010-09-02  1:50 [linux-lvm] lvm space giveth and space taketh away: missing space? Linda A. Walsh
  2010-09-02  3:44 ` Stuart D. Gathman
@ 2010-09-02 10:01 ` Bryn M. Reeves
  2010-09-02 17:32   ` Linda A. Walsh
  1 sibling, 1 reply; 5+ messages in thread
From: Bryn M. Reeves @ 2010-09-02 10:01 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Linda A. Walsh

On 09/02/2010 02:50 AM, Linda A. Walsh wrote:
> I'm running low on space in my /backups partition.  I looked at the
> partitions and volumes to see what might be done (besides deleting old
> backups), and noticed:
> 
> pvs:
>   PV         VG         Fmt  Attr PSize  PFree
>   /dev/sdb1  Backups    lvm2 a-   10.91T  3.15G

You're running "pvs" which means you are looking at physical volumes.
The "lvs" command would probably have been more useful.

> ---
> So I thought 'cool', I didn't make it the full size, and
> I have some left...ok...(I didn't remember what I'd done, its
> been a while).
> 
> Run lvresize:
> lvresize /dev/Backups/Backups -L +3.15G
>   Rounding up size to full physical extent 3.15 GB
>   Extending logical volume Backups to 10.91 TB
>   Logical volume Backups successfully resized

You added 3.15G to your *logical* volume. This made it the same size
(within rounding errors and ignoring the metadata, alignment & label
overheads) as the physical volume you were looking at above.
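
(To compare the two directly, you can ask both reports for the same units,
e.g.:

# lvs --units g -o lv_name,lv_size Backups
# pvs --units g -o pv_name,pv_size,pv_free /dev/sdb1

- assuming your LVM2 build supports these report fields and the --units
option.)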

> Um...HELLO?  Extending to 10.91?  But it was at 10.91!

You're mixing up your logical volumes (usable block devices allocated
from an LVM2 volume group) with your physical volumes (underlying disks
that provide the usable storage extents for the volume group).

http://www.errorists.org/stuff/lvm/lvm-concepts.png

The logical volume was 3.15G smaller before this operation - you can
check this if you're using the default archiving settings in
/etc/lvm/lvm.conf by looking for the "Backups" VG's archived metadata in
/etc/lvm/archive. Look for the highest numbered version (this will be a
backup of the current state _after_ the lvextend above) and then go back
and look at the previous version.
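
(Assuming the default archive_dir, the archives live in /etc/lvm/archive
and are named after the VG, so something like:

# ls -rt /etc/lvm/archive/Backups_*

should list them oldest first; the exact file names will vary.)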

You can also review what operations have been done to the VG in the
retained archives by running:

# grep '^description' /etc/lvm/archive/<VG name>*

E.g.:

http://pastebin.com/M918gGGU

> pvs:
>   PV         VG         Fmt  Attr PSize  PFree
>   /dev/sdb1  Backups    lvm2 a-   10.91T     0
> 
> Well that was unimpressive.

Only because you are still looking at _physical_ volumes. You might be
more impressed if you ran the lvs command (or lvdisplay, which has a
multi-line record style of output by default) before and after.

When you're just manipulating LVs, the PFree field is the only part of the
pvs output that will change. If you changed the disk size and used pvresize,
or ran vgextend to add a new disk, you would see other changes here; but
since you're only allocating more storage to the LVs in the volume group,
all that moves is the amount of free space on the PV.

# lvdisplay bmr_vg0/root
  --- Logical volume ---
  LV Name                /dev/bmr_vg0/root
  VG Name                bmr_vg0
  LV UUID                bn0t3S-GHAq-b3vK-bvUQ-gYey-acwt-efyd5Z
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                17.81 GB
  Current LE             4560
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

# lvs
  LV   VG      Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  data bmr_vg0 -wi-ao 28.00G
  home bmr_vg0 -wi-ao 84.41G
  root bmr_vg0 -wi-ao 17.81G
  var  bmr_vg0 -wi-ao  3.91G
  swap bmr_vg1 -wi-ao  3.91G

> Number  Start   End     Size    File system  Name    Flags
>  1      17.4kB  12.0TB  12.0TB               backup  lvm

As Stuart pointed out, this is just binary prefixes vs. SI notation:

http://physics.nist.gov/cuu/Units/binary.html
http://en.wikipedia.org/wiki/Binary_Prefix
http://en.wikipedia.org/wiki/International_System_of_Units

Your space hasn't gone anywhere :)

> How do I get my 1.09T back from lvm?  That seems like awfully
> high for an overhead number for lvm.  I'd expect more like "0.09T".

There's very little overhead to lvm in terms of space. Read through the
metadata files in your archive directory and you'll see how the data is
laid out. A few sectors are taken up by the LVM2 physical volume label, a
few more (configurable) are occupied by the metadata buffer, and on recent
versions there may be some padding to provide optimal data alignment, but
the rest (from pe_start in the metadata) is all available for data
allocation.
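
(If you're curious about the exact data offset, recent pvs can report it
directly, e.g.:

# pvs -o +pe_start /dev/sdb1

- assuming your version of LVM2 supports the pe_start report field.)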

Don't forget to resize the file system:

# fsadm resize /dev/Backups/<LV Name>
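
(If the filesystem happens to be XFS, growing it with xfs_growfs on the
mount point works too, e.g.:

# xfs_growfs /backups

- assuming the LV is mounted at /backups.)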

Regards,
Bryn.

* Re: [linux-lvm] lvm space giveth and space taketh away: missing space?
  2010-09-02 10:01 ` Bryn M. Reeves
@ 2010-09-02 17:32   ` Linda A. Walsh
  0 siblings, 0 replies; 5+ messages in thread
From: Linda A. Walsh @ 2010-09-02 17:32 UTC (permalink / raw)
  To: LVM general discussion and development

Bryn M. Reeves wrote:
> On 09/02/2010 02:50 AM, Linda A. Walsh wrote:
>> I'm running low on space in my /backups partition.  I looked at the
>> partitions and volumes to see what might be done (besides deleting old
>> backups), and noticed:
>>
>> pvs:
>>   PV         VG         Fmt  Attr PSize  PFree
>>   /dev/sdb1  Backups    lvm2 a-   10.91T  3.15G
>
> You're running "pvs" which means you are looking at physical volumes.
> The "lvs" command would probably have been more useful.
----
    That's what threw me more than the G/T units (I knew about that, and
thought I'd tried a conversion, but only used 10^9 instead of 10^12 as the
conversion factor).  I'm not used to parted's use of 'T': I had used fdisk
before, which only went up to 'M' in display units (no 'G' or 'T'), AND
used the OS-friendly 1024 instead of 1000 as the multiplier when a
single-letter prefix (K, M, G) was used with the incremental size instead
of the full SI unit (KB/MB/GB).


First time I've worked with 'parted' and first time I've dealt with file
systems in the multiple-TB range, so I didn't apply the ~5% error needed
(vs. the ~2% error for 1 prefix difference) and the figures didn't match.
For some reason I expected to see the missing 3.15G show up in the VG
before the LV, but I should have run vgs and I'd probably have seen it
there.


pvs:
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sdb1  Backups    lvm2 a-   10.91T     0
vgs:
  VG         #PV #LV #SN Attr   VSize  VFree
  Backups      1   1   0 wz--n- 10.91T     0
lvs:
  LV                  VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  Backups             Backups    -wi-ao 10.91T

----
Since I'd seen the 3.15G go away from the pv, I expected to see it pop up
under the VG as an extra 3.15G of space that I'd then allocate to the lvs,
and then extend to the file system with xfs_growfs.

But I had a brain disconnect in using lvresize then, instead of vgresize.
Chances are my VG also had the 3.15G free, and by using the lvresize,
I circumvented that step.  I have to remember that the pvs command shows
unallocated space of the VG, not the PV, since the PV isn't subdividable.
Hmmm....not exactly the most intuitive display...since I keep equating PVs
with PDs, which they're not.  I just usually create them that way.

> Only because you are still looking at _physical_ volumes. You might be
> more impressed if you ran the lvs command (or lvdisplay, which has a
> multi-line record style of output by default) before and after.
>
> When you're just manipulating LVs, the PFree field is the only part of the
> pvs output that will change. If you changed the disk size and used pvresize,
> or ran vgextend to add a new disk, you would see other changes here; but
> since you're only allocating more storage to the LVs in the volume group,
> all that moves is the amount of free space on the PV.
Ok, I thought I assigned space from disk as PVs (thus marking the space as
available for the volume manager).  Then I allocated from there into VGs or
LVs.  In my case, I was aiming for 1 VG in this PV, and 1 LV in the VG.

    What I thought I was seeing was some unallocated space on the
PV that wasn't allocated to the VG yet.  A trivial amount
compared to the whole, but I hadn't gotten that far when the 3.15G
number disappeared out of the totals.  Using 'display' instead of 
's'(ummary):

pv(display) Backups:
 --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               Backups
  PV Size               10.91 TB / not usable 3.47 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              2860799
  Free PE               0
  Allocated PE          2860799
  PV UUID               4c2f35-d439-4f47-6220-1007-0306-062860

So now in vg(display) Backups:
  --- Volume group ---
  VG Name               Backups
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.91 TB
  PE Size               4.00 MB
  Total PE              2860799
  Alloc PE / Size       2860799 / 10.91 TB
  Free  PE / Size       0 / 0 

--- I don't see anything that looks like free space there.
and under lv(display) Backups/Backups:

LV Name                /dev/Backups/Backups
  VG Name                Backups
  LV UUID                npJSrk-ECi5-S6xh-pjpZ-fYoa-gSyx-jPTkBt
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                10.91 TB
  Current LE             2860799
  Segments               1
  Allocation             inherit
  Read ahead sectors     32768
  Block device           252:1
----
  
    And...aww  wouldn't really notice it here anyway.... :-/.

    That's my problem...it disappeared into the crack between the LV size
and the file-system size, and I didn't try xfs_growfs because I looked for
the space to appear in the wrong place ...

*doh!*....

and yup: xfs_growfs:
...
data blocks changed from 2928631808 to 2929458176
(1)> 2929458176-2928631808
   = 826368  (0x000c9c00) 
(2)> 826368*4*1024
   = 3384803328  (0xc9c00000)
(3)> 826368*4/1024/1024
    = 3.15234375  (0x3) 
---
There's the 3.15G.
*sigh*

I'll probably have some similar mixup when I move to my first
disks measured in 'PB' as well... (I seem to remember having a
brief confusion on the first transition from MB->GB too, sigh,
but it wasn't so well announced-- :-)).

>
> As Stuart pointed out ...
(not too helpfully, as it didn't answer my question and contributed
zilch to understanding what happened to the 3.15G)

> Your space hasn't gone anywhere :)
---
    As I found out after xfs_growing it, as noted above.  It came
out to exactly the 3.15G I was missing.

> Don't forget to resize the file system:
> # fsadm resize /dev/Backups/<LV Name>
---
    That's the step I should have done for completeness, and it would have
answered my own question, but 'fsadm'?  ext3?
Hmmmm ...it's part of the lvm suite!  Didn't know that.
Would it have worked with my fs?  The manpage makes it look like it's
hardcoded to only use 'ext[X]' file systems.  Does it read the fs
type and call the appropriate resize command for the listed file system?
I know 'parted' at least 'knows' about 'xfs', so I would guess that
it "could" be as smart as parted, fsck, mount, etc...

    Does it have the same smarts as those other disk and file system
commands?


    Thanks for the response....it helped me work through
'my issues'.... (sigh) 

    (Now I have to deal with the *real* problem, instead of my
accounting problem:  'Backups' *did* rise to an even 11T (was 10.9T) under
Linux w/933G avail, though interestingly, Windows still thinks it's
10.9T (w/932G avail), but I still need to trim by ~25-35%.)  Speed really
seems to degrade in the last part of the disk -- maybe the last part of
the disk has a slower transfer speed than I think it does (besides
the slowdown as the fs allocator, possibly, has more work to do).
