linux-lvm.redhat.com archive mirror
From: Adam Carheden <carheden@cira.colostate.edu>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] LVM Confusion
Date: Mon, 30 Jan 2006 14:34:39 -0700	[thread overview]
Message-ID: <43DE866F.7060800@cira.colostate.edu> (raw)
In-Reply-To: <6a83e6620601301326l4c4db027oa748dcff72e368c6@mail.gmail.com>

Forgive me if you said you did this somewhere in the post, but did you 
use xfs_growfs to expand the size of the filesystem after you expanded 
the size of the logical volume?
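
If not, a minimal sketch of what that step looks like, using the mount
point from your df output below (xfs_growfs grows a mounted XFS
filesystem up to the size of its underlying device, so run it after the
LV has been extended):

  # the LV already shows 325GB, so only the filesystem needs growing
  sudo xfs_growfs /home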

-- 
Adam Carheden
Linux Systems Administrator
970-491-8956 (o)
970-556-2914 (c)



tufkal wrote:
> This is kind of a long story but I will try to keep it simple.  I have a 
> linux file/web/mythtv/blah server that I run at my business.  I recently 
> added a 120GB and an 80GB drive to the machine.  Here was the setup prior 
> to that.
> 
> 120GB Seagate
> -9GB / ext3
> -1GB Swap
> -110GB lvmhome xfs
> 
> 120GB Seagate
> -120GB lvmhome xfs
> 
> 
> The 2 LVM partitions were used together as a ~220GB /home partition, and 
> everything worked well.  Then I decided to add the 2 new drives.  The 
> geometry now looks like this:
> 
> 120GB Seagate
> -9GB / ext3
> -1GB Swap
> -110GB lvmhome xfs
> 
> 120GB Seagate
> -120GB lvmhome xfs
> 
> 120GB Seagate
> -80GB lvmhome xfs
> -40GB /home/me linuxraid
> 
> 80GB Seagate
> -40GB lvmhome xfs
> -40GB /home/me linuxraid
> 
> I set up 40GB from each new drive in a RAID1 mirror using mdadm.  Then I 
> tried to add the 120GB of new LVM space to my LVM home, but I am unable 
> to get the space.  Here are the outputs of a few commands that may help.
> 
> tufkal@tux:/home$ sudo lvm lvscan
> Password:
>   ACTIVE            '/dev/lvmhome/home' [325.82 GB] inherit
> tufkal@tux:/home$ sudo lvm pvscan
>   PV /dev/hdb5   VG lvmhome   lvm2 [107.39 GB / 0    free]
>   PV /dev/hdc5   VG lvmhome   lvm2 [ 110.23 GB / 0    free]
>   PV /dev/sdb5   VG lvmhome   lvm2 [35.46 GB / 0    free]
>   PV /dev/sda5   VG lvmhome   lvm2 [72.73 GB / 0    free]
>   Total: 4 [325.82 GB] / in use: 4 [325.82 GB] / in no VG: 0 [0   ]
> tufkal@tux :/home$ sudo lvm vgscan
>   Reading all physical volumes.  This may take a while...
>   Found volume group "lvmhome" using metadata type lvm2
> tufkal@tux:/home$
> 
> According to those, everything looks great!  I should have 325GB of space!
> 
> tufkal@tux:/home$ cat /etc/fstab | grep lvm
> /dev/mapper/lvmhome-home    /home    xfs    defaults    0    2
> 
> And my /home is mounted on my lvm.....
> 
> tufkal@tux:~$ df -h
> Filesystem                              Size  Used Avail Use% Mounted on
> /dev/hdb1                                4.4G  3.0G  1.2G  72% /
> tmpfs                                     253M     0  253M   0% /dev/shm
> tmpfs                                     253M   13M  240M   5% /lib/modules/2.6.12-10-386/volatile
> /dev/mapper/lvmhome-home     218G  197G   22G  91% /home
> /dev/md0                                 39G   29G  7.7G  80% /home/tufkal
> 
> BWAH?  Why do I only have 220GB of space?  I run out of space almost 
> nightly when MythTV starts transcoding the day's recordings.  I gave up 
> trying to figure out what I did wrong, so I'm asking you.
> 
> More random command line pastes in case they help.
> 
> 
> tufkal@tux:~$ sudo lvm vgs
> Password:
>   VG      #PV #LV #SN Attr  VSize   VFree
>   lvmhome   4   1   0 wz--n 325.82G    0
> tufkal@tux:~$ sudo lvm pvs
>   PV         VG      Fmt  Attr PSize   PFree
>   /dev/hdb5  lvmhome lvm2 a-   107.39G    0
>   /dev/hdc5  lvmhome lvm2 a-   110.23G    0
>   /dev/sda5  lvmhome lvm2 a-    72.73G    0
>   /dev/sdb5  lvmhome lvm2 a-    35.46G    0
> tufkal@tux:~$ sudo lvm lvs
>   LV   VG      Attr   LSize   Origin Snap%  Move Copy%
>   home lvmhome -wi-ao 325.82G
> 
> 
> tufkal@tux:~$ sudo lvm vgdisplay
>   --- Volume group ---
>   VG Name               lvmhome
>   System ID
>   Format                lvm2
>   Metadata Areas        4
>   Metadata Sequence No  96
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                1
>   Open LV               1
>   Max PV                0
>   Cur PV                4
>   Act PV                4
>   VG Size               325.82 GB
>   PE Size               4.00 MB
>   Total PE              83409
>   Alloc PE / Size       83409 / 325.82 GB
>   Free  PE / Size       0 / 0
>   VG UUID               8BCR28-5oEl-heGV-97YY-k6Dy-gzaO-QdZcML
> 
> tufkal@tux:~$ sudo lvm pvdisplay
>   --- Physical volume ---
>   PV Name               /dev/hdb5
>   VG Name               lvmhome
>   PV Size               107.39 GB / not usable 0
>   Allocatable           yes (but full)
>   PE Size (KByte)       4096
>   Total PE              27493
>   Free PE               0
>   Allocated PE          27493
>   PV UUID               DIuF4I-JfQH-w1nB-dJlM-0zef-R9An-MWxkLe
> 
>   --- Physical volume ---
>   PV Name               /dev/hdc5
>   VG Name               lvmhome
>   PV Size               110.23 GB / not usable 0
>   Allocatable           yes (but full)
>   PE Size (KByte)       4096
>   Total PE              28219
>   Free PE               0
>   Allocated PE          28219
>   PV UUID               A3uL70-LwK6-61Vx-DoGj-8C7v-Ml7E-9JnWtM
> 
>   --- Physical volume ---
>   PV Name               /dev/sdb5
>   VG Name               lvmhome
>   PV Size               35.46 GB / not usable 0
>   Allocatable           yes (but full)
>   PE Size (KByte)       4096
>   Total PE              9079
>   Free PE               0
>   Allocated PE          9079
>   PV UUID               yo14mv-oyRd-Up5z-DBj2-lEPO-jACe-su53bV
> 
>   --- Physical volume ---
>   PV Name               /dev/sda5
>   VG Name               lvmhome
>   PV Size               72.73 GB / not usable 0
>   Allocatable           yes (but full)
>   PE Size (KByte)       4096
>   Total PE              18618
>   Free PE               0
>   Allocated PE          18618
>   PV UUID               sT4CZO-gIZu-c05A-rrQf-uNw9-zC3B-p5h2vV
> 
> tufkal@tux:~$ sudo lvm lvdisplay
>   --- Logical volume ---
>   LV Name                /dev/lvmhome/home
>   VG Name                lvmhome
>   LV UUID                WrEbxl-7pck-lwbJ-w3V7-BNiZ-ttge-QvIL9r
>   LV Write Access        read/write
>   LV Status              available
>   # open                 1
>   LV Size                325.82 GB
>   Current LE             83409
>   Segments               4
>   Allocation             inherit
>   Read ahead sectors     0
>   Block device           253:0
> 
> tufkal@tux:~$
> 
> AND WHILE I'M AT IT, FOR FUTURE REFERENCE:
> 
> What is the correct way to add a partition to an LVM volume group (like 
> if I slap another drive in there)?
> 
> Thanks in advance.
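
Re the "future reference" question above: the usual sequence for adding
a new disk or partition to an existing volume group is roughly the
following.  This is only a sketch; /dev/sdX1 is a placeholder for the
new partition, and the VG/LV names are the ones from the outputs above.

  # label the new partition as an LVM physical volume
  sudo pvcreate /dev/sdX1
  # add it to the existing volume group
  sudo vgextend lvmhome /dev/sdX1
  # grow the logical volume into the new free extents
  sudo lvextend -l +100%FREE /dev/lvmhome/home
  # grow the mounted XFS filesystem to match the new LV size
  sudo xfs_growfs /home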
