From: "Zachary Hamm" <zhamm@nc.rr.com>
To: linux-lvm@redhat.com
Subject: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
Date: Thu, 22 Jul 2004 09:42:58 -0400
Message-ID: <001701c46ff1$cdcecc60$6b02a8c0@ZACK2>


Hello, I'm running Fedora Core 2 on a Dell PowerEdge 1750 with three 36 GB
SCSI drives, set up as two volumes, /boot and /, using software RAID 1 with
an online spare.  This was configured at install time, which reported no
errors.  I've done a yum update as well.

The problem is that only one of the two volumes is recognized and
mirrored (/boot); apparently vgscan does not like the large drives
(which aren't that large...).  Any help is appreciated.
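
In case it's relevant: from the errors below it looks as though LVM may be
scanning the raw SCSI disks (/dev/sdb, /dev/sdc) directly instead of only the
assembled md arrays, and tripping over whatever metadata is on the members.
I wondered whether a device filter in /etc/lvm/lvm.conf would work around
that — just a sketch on my part, I haven't confirmed this is the right fix:

```
# /etc/lvm/lvm.conf (sketch, untested): scan only the md arrays so LVM
# ignores any stale metadata on the raw RAID member disks.
devices {
    # Patterns are tried in order; the first match wins.
    filter = [ "a|^/dev/md|", "r|.*|" ]

    # Should already be the default, but make sure md component
    # detection is enabled so members of an array are skipped.
    md_component_detection = 1
}
```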

Zack




fstab:  (the rootvg is supposed to be /dev/md1)
-----------------
/dev/rootvg/LogVol00    /                       ext3    defaults        1 1
/dev/md0                /boot                   ext3    defaults        1 2
----------------

/var/log/lvm2.log:
-----------------------
commands/toolcontext.c:139   Logging initialised at Wed Jul 21 14:49:57 2004

commands/toolcontext.c:158   Set umask to 0077
lvmdiskscan.c:67 lvmdiskscan  /dev/sda  [       33.92 GB]
lvmdiskscan.c:67 lvmdiskscan  /dev/md0  [      101.75 MB]
lvmdiskscan.c:67 lvmdiskscan  /dev/md1  [       32.81 GB] LVM physical volume
lvmdiskscan.c:67 lvmdiskscan  /dev/sda2 [        1.00 GB]
lvmdiskscan.c:67 lvmdiskscan  /dev/sdb  [       33.92 GB] LVM physical volume
lvmdiskscan.c:67 lvmdiskscan  /dev/sdb2 [        1.00 GB]
lvmdiskscan.c:67 lvmdiskscan  /dev/sdc  [       33.92 GB] LVM physical volume
lvmdiskscan.c:67 lvmdiskscan  /dev/sdc2 [        1.00 GB]
lvmdiskscan.c:137 lvmdiskscan  1 disk
lvmdiskscan.c:139 lvmdiskscan  4 partitions
lvmdiskscan.c:142 lvmdiskscan  2 LVM physical volume whole disks
lvmdiskscan.c:144 lvmdiskscan  1 LVM physical volume
commands/toolcontext.c:139   Logging initialised at Wed Jul 21 14:50:24 2004

commands/toolcontext.c:158   Set umask to 0077
commands/toolcontext.c:139   Logging initialised at Wed Jul 21 14:50:36 2004

commands/toolcontext.c:158   Set umask to 0077
vgscan.c:51 vgscan  Wiping cache of LVM-capable devices
vgscan.c:54 vgscan  Wiping internal cache
vgscan.c:57 vgscan  Reading all physical volumes.  This may take a while...
toollib.c:414 vgscan  Finding all volume groups
toollib.c:330 vgscan  Finding volume group "vg00"
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdb
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdb
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdc
vgscan.c:22 vgscan  Volume group "vg00" not found
toollib.c:330 vgscan  Finding volume group "rootvg"
vgscan.c:37 vgscan  Found volume group "rootvg" using metadata type lvm2

Output of lvscan and pvscan:
-----------------------
# lvscan
    Logging initialised at Thu Jul 22 09:40:45 2004

    Set umask to 0077
lvscan    Finding all logical volumes
lvscan  Read size too large: 3191341056
lvscan  Failed to read extents from /dev/sdb
lvscan  Read size too large: 3191341056
lvscan  Failed to read extents from /dev/sdb
lvscan  Read size too large: 3191341056
lvscan  Failed to read extents from /dev/sdc
lvscan  Volume group "vg00" not found
lvscan  ACTIVE            '/dev/rootvg/LogVol00' [32.80 GB] next free (default)

# pvscan
    Logging initialised at Thu Jul 22 09:40:49 2004

    Set umask to 0077
pvscan    Wiping cache of LVM-capable devices
pvscan    Wiping internal cache
pvscan    Walking through all physical volumes
pvscan  Read size too large: 3191341056
pvscan  Failed to read extents from /dev/sdb
pvscan  Read size too large: 3191341056
pvscan  Failed to read extents from /dev/sdb
pvscan  Read size too large: 3191341056
pvscan  Failed to read extents from /dev/sdc
pvscan  PV /dev/md1   VG rootvg   lvm2 [32.81 GB / 8.00 MB free]
pvscan  Total: 1 [32.81 GB] / in use: 1 [32.81 GB] / in no VG: 0 [0   ]
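
For reference, here is what I plan to check next on the box itself — just
diagnostic commands, nothing destructive (output omitted since it will
obviously differ per machine):

```shell
# Confirm both arrays (md0, md1) are assembled and which disks they use
cat /proc/mdstat

# Show LVM's view of the one PV that *is* being found
pvdisplay /dev/md1

# Re-run the scan with extra verbosity to see which devices get probed
vgscan -vv
```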

Thread overview: 10+ messages
2004-07-22 13:42 Zachary Hamm [this message]
2004-07-22 15:34 ` [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2 Patrick Caulfield
2004-07-22 19:25   ` Zachary Hamm
2004-07-23  6:56     ` Patrick Caulfield
2004-07-26 22:56       ` Zachary Hamm
2004-07-27  7:18         ` Patrick Caulfield
2004-07-27 23:33           ` Zachary Hamm
2004-07-28  5:39             ` Luca Berra
2004-07-30  3:04               ` Zachary Hamm
  -- strict thread matches above, loose matches on Subject: below --
2004-07-22 14:22 Rupert Hair
