* [linux-lvm] Unable to remove a pvmove LV
From: Delian Krustev @ 2005-11-02 20:28 UTC
  To: linux-lvm


The problem appeared after I ran into trouble with this controller:

0000:00:0e.0 RAID bus controller: Silicon Image, Inc. (formerly CMD Technology Inc) PCI0680 Ultra ATA-133 Host Controller (rev 02)

I added two more disks to this controller and to the volume group,
and messages like these:

Oct 26 20:28:21 serv0 kernel: hda: dma_intr: status=0x7f { DriveReady DeviceFault SeekComplete DataRequest CorrectedError Index Error }
Oct 26 20:28:21 serv0 kernel: hda: dma_intr: error=0x00 { }
Oct 26 20:28:21 serv0 kernel: hda: DMA disabled
Oct 26 20:28:21 serv0 kernel: hdb: DMA disabled
Oct 26 20:28:21 serv0 kernel: ide0: reset: success

began to appear in the logs. I tried removing the new disks, but the
leftover pvmove LV proved impossible to remove. The machine is running
Debian sarge. Details follow:

serv0:/# dpkg -s lvm2 |grep Version
Version: 2.01.04-5
serv0:/# uname -a
Linux serv0 2.6.8-2-k7 #1 Thu May 19 18:03:29 JST 2005 i686 GNU/Linux
serv0:/# lvdisplay
  --- Logical volume ---
  LV Name                /dev/s0_data/data
  VG Name                s0_data
  LV UUID                Wzs9Lu-5IUj-DK88-UVnZ-QJpj-gHuD-3a01jq
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.02 TB
  Current LE             4164
  Segments               23
  Allocation             inherit
  Read ahead sectors     0
  Block device           254:0

  --- Logical volume ---
  LV Name                /dev/s0_data/pvmove0
  VG Name                s0_data
  LV UUID                H8As4Y-m61V-krSa-lGTd-pNRh-Ef1e-UStXSC
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                46.50 GB
  Current LE             186
  Segments               1
  Allocation             contiguous
  Read ahead sectors     0

serv0:/# lvremove -v -f -d /dev/s0_data/pvmove0
    Using logical volume(s) on command line
  Can't remove locked LV pvmove0
serv0:/# pvmove --abort -v
    Finding all volume groups
    Finding volume group "s0_data"
serv0:/# lvdisplay
  --- Logical volume ---
  LV Name                /dev/s0_data/data
  VG Name                s0_data
  LV UUID                Wzs9Lu-5IUj-DK88-UVnZ-QJpj-gHuD-3a01jq
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.02 TB
  Current LE             4164
  Segments               23
  Allocation             inherit
  Read ahead sectors     0
  Block device           254:0

  --- Logical volume ---
  LV Name                /dev/s0_data/pvmove0
  VG Name                s0_data
  LV UUID                H8As4Y-m61V-krSa-lGTd-pNRh-Ef1e-UStXSC
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                46.50 GB
  Current LE             186
  Segments               1
  Allocation             contiguous
  Read ahead sectors     0

serv0:/#
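
Unless there is a better way, what I'm considering next is to edit a
metadata backup by hand and restore it. This is only a sketch, assuming
the data LV itself is intact and nothing else in the metadata still
references pvmove0; the file names are just examples:

  # Dump the current VG metadata to an editable text file, keep a copy
  vgcfgbackup -f /tmp/s0_data.vg s0_data
  cp /tmp/s0_data.vg /tmp/s0_data.vg.orig
  # Hand-edit: delete the whole pvmove0 stanza, plus any "PVMOVE" or
  # "LOCKED" status flags it left behind
  # (check first that no other LV's segments still reference pvmove0)
  vi /tmp/s0_data.vg
  # Write the edited metadata back and reactivate the VG
  vgcfgrestore -f /tmp/s0_data.vg s0_data
  vgchange -ay s0_data

Since vgcfgrestore rewrites the on-disk metadata, I'd only try this
with the LVs unmounted and after double-checking the edit.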

Additionally, the pvmove process was killed several times by the OOM
killer; in my opinion it was leaking memory. The PVs are 50 GB each,
and at some point past 90% the pvmove process was consuming all of the
machine's memory (512 MB). Surprisingly, when I tried to resume a few
minutes later, I found the move had completed successfully.
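
To check whether it really is a leak, I've been thinking of logging
pvmove's memory use over time with something like this (assumes the
procps ps and pgrep):

  # Print pvmove's RSS/VSZ once a minute while it is running
  while pgrep -x pvmove >/dev/null; do
      ps -o pid,rss,vsz,pcpu,cmd -C pvmove
      sleep 60
  done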

Cheers,
Delian
