From: Randy Perkins <randyperkins@randyperkins.com>
To: linux-lvm@redhat.com
Subject: [linux-lvm] what is the proper procedure to remove a disk from a LV using lvm1?
Date: Sun, 02 Oct 2005 09:45:26 -0500
Message-ID: <1128264326.13721.39.camel@localhost.localdomain>

Hello,
I wish to remove a disk from my logical volume.  I have already moved data
off the LV so that there is enough free space to remove a PV.  I have
checked around and think I know the tools I need to use, but I am not sure
of the order.

I have read this:
http://www.tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
but it does not cover reducing the LV.
My LV is formatted as ext3, so I'm pretty sure I need to use e2fsadm
to reduce it.
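As far as I can tell from the man page, e2fsadm just wraps the filesystem
resize and the lvreduce into a single step, so I am assuming the manual
equivalent would be roughly the following (untested on my part, and the
exact resize2fs size syntax on this old e2fsprogs may differ; 200G is only
an example target):

  e2fsck -f /dev/my_vg/my_lv            # check the filesystem first
  resize2fs /dev/my_vg/my_lv 200G       # shrink the ext3 filesystem
  lvreduce -L 200G /dev/my_vg/my_lv     # then shrink the LV underneath it

Please correct me if that is not what e2fsadm actually does.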

So my questions are:

1. Do I need to use e2fsadm to resize the LV if I am removing a PV?
2. If yes, which do I do first, e2fsadm or pvmove/vgreduce?
3. Should I umount the LV before using e2fsadm?

Thanks in advance; if you need more information, let me know.

Randy Perkins


Info about my config:
OS: Fedora Core 1, kernel 2.4.22-1.2199.5.legacy.nptl
LVM: lvm-1.0.3-13.1.legacy
PV: 4 disks, each PV fills the entire disk (see pvdisplay below)
VG: 1 VG, /dev/my_vg, using all 4 PVs (see vgdisplay below)
LV: 1 LV, /dev/my_vg/my_lv, filling the entire VG (see lvdisplay below)
430G of 587G is available (see df -h below)
The LV is formatted as ext3 (see mount below)
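By my arithmetic (taking the numbers from the displays below, so please
correct me if I am misreading them): each PV is 9538 PEs of 16 MB, i.e.
about 149 GB, so dropping /dev/hde should take the VG from 38152 PEs
(596 GB) down to 28614 PEs (roughly 447 GB).  Since only 158G is actually
in use, shrinking the LV to 200G first should leave plenty of headroom.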


What I think I need to do is:
e2fsadm -L 200G /dev/my_vg/my_lv   (to reduce the filesystem and the LV)
pvmove /dev/hde                    (to reallocate all PEs on this disk to the other PVs)
vgreduce my_vg /dev/hde            (to remove the PV from the VG)
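
Spelled out end to end, the sequence I am imagining (this is only my guess
at the right order, and I am assuming the filesystem has to be offline for
the shrink but not for the pvmove) is:

  umount /pub
  e2fsadm -L 200G /dev/my_vg/my_lv   # shrink ext3 and the LV together
  mount /pub
  pvmove /dev/hde                    # migrate hde's extents to the other PVs
  vgreduce my_vg /dev/hde            # drop the now-empty PV from the VG

If pvmove also needs the LV unmounted or deactivated, please say so.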


------------output of pvdisplay ---------------------------

[root@fileserver root]# pvdisplay /dev/hd[e,g,i,k]
--- Physical volume ---
PV Name               /dev/hde
VG Name               my_vg
PV Size               149.05 GB [312581808 secs] / NOT usable 16.19 MB
[LVM: 165 KB]
PV#                   1
PV Status             NOT available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       16384
Total PE              9538
Free PE               0
Allocated PE          9538
PV UUID               KeWFeh-VH6C-psRt-FLi9-xoEe-PVrG-bT86Ow

--- Physical volume ---
PV Name               /dev/hdg
VG Name               my_vg
PV Size               149.05 GB [312581808 secs] / NOT usable 16.19 MB
[LVM: 165 KB]
PV#                   2
PV Status             NOT available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       16384
Total PE              9538
Free PE               0
Allocated PE          9538
PV UUID               NrI9L9-bPLr-hJhs-e54y-SWsd-aYgR-EHQgDe

--- Physical volume ---
PV Name               /dev/hdi
VG Name               my_vg
PV Size               149.05 GB [312581808 secs] / NOT usable 16.19 MB
[LVM: 165 KB]
PV#                   3
PV Status             NOT available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       16384
Total PE              9538
Free PE               0
Allocated PE          9538
PV UUID               tf70Hd-13P3-1i1b-GSwd-JEs2-E8Bm-N9Dhwk

--- Physical volume ---
PV Name               /dev/hdk
VG Name               my_vg
PV Size               149.05 GB [312581808 secs] / NOT usable 16.19 MB
[LVM: 165 KB]
PV#                   4
PV Status             NOT available
Allocatable           yes (but full)
Cur LV                1
PE Size (KByte)       16384
Total PE              9538
Free PE               0
Allocated PE          9538
PV UUID               UaPjnP-NIM7-TMzA-46Yf-Onyh-wKju-I2Ad7H



--------------output of vgdisplay -------------------------------

[root@fileserver root]# vgdisplay /dev/my_vg
--- Volume group ---
VG Name               my_vg
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                1
Open LV               1
MAX LV Size           1023.97 GB
Max PV                256
Cur PV                4
Act PV                4
VG Size               596.12 GB
PE Size               16 MB
Total PE              38152
Alloc PE / Size       38152 / 596.12 GB
Free  PE / Size       0 / 0
VG UUID               2IBFij-7ujy-3d2V-wDD3-nWE9-EhKk-8P0Zkm


----------- output of lvdisplay -------------------

[root@fileserver root]# lvdisplay /dev/my_vg/my_lv
--- Logical volume ---
LV Name                /dev/my_vg/my_lv
VG Name                my_vg
LV Write Access        read/write
LV Status              available
LV #                   1
# open                 1
LV Size                596.12 GB
Current LE             38152
Allocated LE           38152
Allocation             next free
Read ahead sectors     1024
Block device           58:0

----------- output of df -h -----------------------

[root@fileserver root]# df -h /dev/my_vg/my_lv
Filesystem            Size  Used Avail Use% Mounted on
/dev/my_vg/my_lv      587G  158G  430G  27% /pub


------------ output of mount command ----------------
[root@fileserver root]# mount
... snip ...
/dev/my_vg/my_lv on /pub type ext3 (rw)
