From: Michael Ole Olsen <gnu@gmx.net>
To: linux-raid@vger.kernel.org
Subject: Re: how to resize volume group lvm2?
Date: Fri, 12 Jun 2009 21:14:21 +0200
Message-ID: <20090612191421.GB14231@rlogin.dk>
In-Reply-To: <20090612185841.GA14231@rlogin.dk>

Guess I was too quick with that one...

pvresize /dev/md0 was all that was needed, no need for a restart :-)
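
For the archives, the full sequence I'd expect after growing an md array under LVM looks like this (a sketch only: the LV name "data" is hypothetical and ext3 is assumed, so adjust both to your setup):

  # 1. Let the physical volume pick up the space mdadm added:
  pvresize /dev/md0

  # 2. Confirm the volume group grew (Free PE / Size should go up):
  vgdisplay st1500

  # 3. Grow a logical volume into the new free extents
  #    (LV name "data" is hypothetical):
  lvextend -l +100%FREE /dev/st1500/data

  # 4. Grow the filesystem to match (resize2fs for ext3; use the
  #    tool for your filesystem):
  resize2fs /dev/st1500/data

vgextend isn't needed here at all: md0 is already the only PV in the VG, and pvresize is the right tool when an existing PV gets bigger.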


On Fri, 12 Jun 2009, Michael Ole Olsen wrote:

> How do I resize my volume group with lvm2?
> 
> vgextend needs new devices, but I only have one device, md0, for all my
> physical volumes.
> 
> I know how to resize my logical volumes under the volume group, but somehow
> the volume group st1500 did not grow after the raid5 reshape with mdadm.
> 
> Is my only option to back up 8TB of data and create a new volume group?
> 
> I don't want to add sd[ai] here with vgextend, as lvm2 is on top of
> mdadm, so I guess I should have only one device for my VG?
> 
> /Michael Ole Olsen
> 
> mfs:~# fdisk -l /dev/md0
> 
> Disk /dev/md0: 12002.4 GB, 12002414559232 bytes
> 2 heads, 4 sectors/track, -1364690304 cylinders
> Units = cylinders of 8 * 512 = 4096 bytes
> Disk identifier: 0x00000000
> 
> Disk /dev/md0 doesn't contain a valid partition table
> 
> mfs:~# vgdisplay
>   --- Volume group ---
>   VG Name               st1500
>   System ID             
>   Format                lvm2
>   Metadata Areas        1
>   Metadata Sequence No  4
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                2
>   Open LV               1
>   Max PV                0
>   Cur PV                1
>   Act PV                1
>   VG Size               8,19 TB
>   PE Size               4,00 MB
>   Total PE              2146198
>   Alloc PE / Size       2104832 / 8,03 TB
>   Free  PE / Size       41366 / 161,59 GB
>   VG UUID               1Rl7ly-OguV-fEbS-TU8F-7tdM-9YH3-wKc9F6
> 
> lvm> help vgextend
>   vgextend: Add physical volumes to a volume group
> 
> vgextend
>         [-A|--autobackup y|n]
>         [-d|--debug]
>         [-h|--help]
>         [-t|--test]
>         [-v|--verbose]
>         [--version]
>         VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]
> 
>   Command failed with status code 0.
> lvm> vgextend st1500 /dev/md0
>   Physical volume '/dev/md0' is already in volume group 'st1500'
>   Unable to add physical volume '/dev/md0' to volume group 'st1500'.
> 


Thread overview: 7 messages
2009-06-10 21:58 Strange RAID behaviour when faced with user error Jody McIntyre
2009-06-10 22:19 ` John Robinson
2009-06-10 22:23   ` John Robinson
2009-06-12 18:33   ` Jody McIntyre
2009-06-12 18:58     ` how to resize volume group lvm2? Michael Ole Olsen
2009-06-12 19:14       ` Michael Ole Olsen [this message]
2009-06-12 19:15       ` Tapani Tarvainen
