From: Delian Krustev <linux-lvm@krustev.net>
To: linux-lvm@redhat.com
Subject: [linux-lvm] Unable to remove a pvmove LV
Date: Wed, 2 Nov 2005 22:28:20 +0200
Message-ID: <200511012135.50701.krustev@krustev.net>
The problem appeared after I ran into trouble with this controller:
0000:00:0e.0 RAID bus controller: Silicon Image, Inc. (formerly CMD Technology Inc) PCI0680 Ultra ATA-133 Host Controller (rev 02)
I had added two more disks on this controller to the volume group,
and messages similar to these:
Oct 26 20:28:21 serv0 kernel: hda: dma_intr: status=0x7f { DriveReady DeviceFault SeekComplete DataRequest CorrectedError Index Error }
Oct 26 20:28:21 serv0 kernel: hda: dma_intr: error=0x00 { }
Oct 26 20:28:21 serv0 kernel: hda: DMA disabled
Oct 26 20:28:21 serv0 kernel: hdb: DMA disabled
Oct 26 20:28:21 serv0 kernel: ide0: reset: success
began to appear in the logs. I tried removing the new disks from the volume
group, but the resulting pvmove LV became invincible. The machine is running
Debian sarge. Details follow:
serv0:/# dpkg -s lvm2 |grep Version
Version: 2.01.04-5
serv0:/# uname -a
Linux serv0 2.6.8-2-k7 #1 Thu May 19 18:03:29 JST 2005 i686 GNU/Linux
serv0:/# lvdisplay
--- Logical volume ---
LV Name /dev/s0_data/data
VG Name s0_data
LV UUID Wzs9Lu-5IUj-DK88-UVnZ-QJpj-gHuD-3a01jq
LV Write Access read/write
LV Status available
# open 1
LV Size 1.02 TB
Current LE 4164
Segments 23
Allocation inherit
Read ahead sectors 0
Block device 254:0
--- Logical volume ---
LV Name /dev/s0_data/pvmove0
VG Name s0_data
LV UUID H8As4Y-m61V-krSa-lGTd-pNRh-Ef1e-UStXSC
LV Write Access read/write
LV Status NOT available
LV Size 46.50 GB
Current LE 186
Segments 1
Allocation contiguous
Read ahead sectors 0
serv0:/# lvremove -v -f -d /dev/s0_data/pvmove0
Using logical volume(s) on command line
Can't remove locked LV pvmove0
serv0:/# pvmove --abort -v
Finding all volume groups
Finding volume group "s0_data"
serv0:/# lvdisplay
--- Logical volume ---
LV Name /dev/s0_data/data
VG Name s0_data
LV UUID Wzs9Lu-5IUj-DK88-UVnZ-QJpj-gHuD-3a01jq
LV Write Access read/write
LV Status available
# open 1
LV Size 1.02 TB
Current LE 4164
Segments 23
Allocation inherit
Read ahead sectors 0
Block device 254:0
--- Logical volume ---
LV Name /dev/s0_data/pvmove0
VG Name s0_data
LV UUID H8As4Y-m61V-krSa-lGTd-pNRh-Ef1e-UStXSC
LV Write Access read/write
LV Status NOT available
LV Size 46.50 GB
Current LE 186
Segments 1
Allocation contiguous
Read ahead sectors 0
serv0:/#
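In case it helps anyone suggest something better: the only workaround I can
think of trying next is to roll the VG metadata back to the archive LVM wrote
before the pvmove started. Roughly (untested on this version, and the archive
file name below is only a placeholder):

# save the current (broken) metadata first
vgcfgbackup -f /root/s0_data.broken s0_data
# unmount /dev/s0_data/data, then deactivate the VG so nothing
# holds the pvmove0 mapping any more
vgchange -an s0_data
# restore the metadata archived before the pvmove started
# (placeholder name -- the real file lives under /etc/lvm/archive/)
vgcfgrestore -f /etc/lvm/archive/s0_data_00000.vg s0_data
# reactivate and check that pvmove0 is gone
vgchange -ay s0_data
lvdisplay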
Additionally, the pvmove process was killed several times by the OOM killer;
in my opinion it was leaking memory. The PVs are 50 GB each, and at one point,
with the move more than 90 % complete, pvmove was eating all of the machine's
memory (512 MB). Surprisingly, after resuming it a few minutes later I found
the move had completed successfully.
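To confirm whether it really does leak, next time I'll watch the process with
something simple like this (just a sketch):

# print pvmove's PID, RSS and VSZ (in KB) once a minute while it runs
while pidof pvmove >/dev/null; do
    ps -o pid=,rss=,vsz= -C pvmove
    sleep 60
done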
Cheers,
Delian