* [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
@ 2004-07-22 13:42 Zachary Hamm
  2004-07-22 15:34 ` Patrick Caulfield
  0 siblings, 1 reply; 10+ messages in thread
From: Zachary Hamm @ 2004-07-22 13:42 UTC (permalink / raw)
  To: linux-lvm


Hello, I'm running Fedora Core 2 on a Dell PowerEdge 1750 with three 36 GB
SCSI drives set up with two volumes, /boot and /, using software RAID 1
with an online spare.  This was set up at install, which reported no errors.
I've done a yum update as well.

The problem is that only one of the two volume groups is recognized and
mirrored (/boot), as apparently vgscan does not like the large drives
(which aren't that large...).   Any help is appreciated.

Zack




fstab:  (the rootvg is supposed to be /dev/md1)
-----------------
/dev/rootvg/LogVol00    /                       ext3    defaults        1 1
/dev/md0                /boot                   ext3    defaults        1 2
----------------

/var/log/lvm2.log:
-----------------------
commands/toolcontext.c:139   Logging initialised at Wed Jul 21 14:49:57 2004

commands/toolcontext.c:158   Set umask to 0077
lvmdiskscan.c:67 lvmdiskscan  /dev/sda  [       33.92 GB]
lvmdiskscan.c:67 lvmdiskscan  /dev/md0  [      101.75 MB]
lvmdiskscan.c:67 lvmdiskscan  /dev/md1  [       32.81 GB] LVM physical volume
lvmdiskscan.c:67 lvmdiskscan  /dev/sda2 [        1.00 GB]
lvmdiskscan.c:67 lvmdiskscan  /dev/sdb  [       33.92 GB] LVM physical volume
lvmdiskscan.c:67 lvmdiskscan  /dev/sdb2 [        1.00 GB]
lvmdiskscan.c:67 lvmdiskscan  /dev/sdc  [       33.92 GB] LVM physical volume
lvmdiskscan.c:67 lvmdiskscan  /dev/sdc2 [        1.00 GB]
lvmdiskscan.c:137 lvmdiskscan  1 disk
lvmdiskscan.c:139 lvmdiskscan  4 partitions
lvmdiskscan.c:142 lvmdiskscan  2 LVM physical volume whole disks
lvmdiskscan.c:144 lvmdiskscan  1 LVM physical volume
commands/toolcontext.c:139   Logging initialised at Wed Jul 21 14:50:24 2004

commands/toolcontext.c:158   Set umask to 0077
commands/toolcontext.c:139   Logging initialised at Wed Jul 21 14:50:36 2004

commands/toolcontext.c:158   Set umask to 0077
vgscan.c:51 vgscan  Wiping cache of LVM-capable devices
vgscan.c:54 vgscan  Wiping internal cache
vgscan.c:57 vgscan  Reading all physical volumes.  This may take a while...
toollib.c:414 vgscan  Finding all volume groups
toollib.c:330 vgscan  Finding volume group "vg00"
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdb
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdb
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdc
vgscan.c:22 vgscan  Volume group "vg00" not found
toollib.c:330 vgscan  Finding volume group "rootvg"
vgscan.c:37 vgscan  Found volume group "rootvg" using metadata type lvm2

Output of lvscan and pvscan:
-----------------------
# lvscan
    Logging initialised at Thu Jul 22 09:40:45 2004

    Set umask to 0077
lvscan    Finding all logical volumes
lvscan  Read size too large: 3191341056
lvscan  Failed to read extents from /dev/sdb
lvscan  Read size too large: 3191341056
lvscan  Failed to read extents from /dev/sdb
lvscan  Read size too large: 3191341056
lvscan  Failed to read extents from /dev/sdc
lvscan  Volume group "vg00" not found
lvscan  ACTIVE            '/dev/rootvg/LogVol00' [32.80 GB] next free (default)

# pvscan
    Logging initialised at Thu Jul 22 09:40:49 2004

    Set umask to 0077
pvscan    Wiping cache of LVM-capable devices
pvscan    Wiping internal cache
pvscan    Walking through all physical volumes
pvscan  Read size too large: 3191341056
pvscan  Failed to read extents from /dev/sdb
pvscan  Read size too large: 3191341056
pvscan  Failed to read extents from /dev/sdb
pvscan  Read size too large: 3191341056
pvscan  Failed to read extents from /dev/sdc
pvscan  PV /dev/md1   VG rootvg   lvm2 [32.81 GB / 8.00 MB free]
pvscan  Total: 1 [32.81 GB] / in use: 1 [32.81 GB] / in no VG: 0 [0   ]


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
@ 2004-07-22 14:22 Rupert Hair
  0 siblings, 0 replies; 10+ messages in thread
From: Rupert Hair @ 2004-07-22 14:22 UTC (permalink / raw)
  To: LVM general discussion and development

Quoting Zachary Hamm <zhamm@nc.rr.com>:

> Hello, I'm running Fedora Core 2 on a Dell PowerEdge 1750 with three 36 GB
> SCSI drives set up with two volumes, /boot and /, using software RAID 1
> with an online spare.  This was set up at install, which reported no errors.
> I've done a yum update as well.
I would recommend that you run LVM on top of the RAID, i.e. all of the disks
are set up for RAID (with partition type fd, "Linux raid auto") and then you
make the resultant /dev/md0 a physical volume in LVM.  You can then create
two logical volumes for your boot and root partitions.
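In /etc/raidtab terms, such a single-array layout might look like the sketch
below.  The device names here are assumed for illustration; /dev/md0 then
becomes the only LVM physical volume (created with pvcreate /dev/md0):

----------------------------------
raiddev             /dev/md0
raid-level                  1
nr-raid-disks               2
nr-spare-disks              1
persistent-superblock       1
chunk-size                  256
    device          /dev/sda1
    raid-disk       0
    device          /dev/sdb1
    raid-disk       1
    device          /dev/sdc1
    spare-disk      0
----------------------------------

A vgcreate on /dev/md0 followed by two lvcreate calls would then provide the
boot and root volumes.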

If the system was recently installed, then a re-install is probably the
quickest way to go.

> The problem is that only one of the two volume groups is recognised and
> mirrored (/boot), as apparently vgscan does not like the large drives
> (which aren't that large...).   Any help is appreciated.
I'm not sure what this could be, but I don't think it's due to the disks
actually being too big.

Hope this is of some help,

Rupert


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
  2004-07-22 13:42 [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2 Zachary Hamm
@ 2004-07-22 15:34 ` Patrick Caulfield
  2004-07-22 19:25   ` Zachary Hamm
  0 siblings, 1 reply; 10+ messages in thread
From: Patrick Caulfield @ 2004-07-22 15:34 UTC (permalink / raw)
  To: LVM general discussion and development

On Thu, Jul 22, 2004 at 09:42:58AM -0400, Zachary Hamm wrote:
> 
> Hello, I'm running Fedora Core 2 on a Dell PowerEdge 1750 with three 36 GB
> SCSI drives set up with two volumes, /boot and /, using software RAID 1
> with an online spare.  This was set up at install, which reported no errors.
> I've done a yum update as well.
> 
> The problem is that only one of the two volume groups is recognized and
> mirrored (/boot), as apparently vgscan does not like the large drives
> (which aren't that large...).   Any help is appreciated.
> 

It looks to me like you need to stop LVM looking at the "real" disks, and make
it only look at the MD devices.

Either add a filter like "a/md*/" or add "md_component_detection = 1" to the
devices section of the lvm.conf file - if you're running a recent lvm2
userspace.
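For example, the devices section might end up looking like this (the exact
regex is illustrative; adjust it to your device naming):

-----------------------
# /etc/lvm/lvm.conf
devices {
    # Scan only the MD devices; reject everything else (first match wins)
    filter = [ "a/md.*/", "r/.*/" ]
    md_component_detection = 1
}
-----------------------

Note that if devices/write_cache_state is enabled, a stale /etc/lvm/.cache
may also need to be removed (or regenerated with vgscan) before a filter
change takes effect.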

-- 

patrick


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
  2004-07-22 15:34 ` Patrick Caulfield
@ 2004-07-22 19:25   ` Zachary Hamm
  2004-07-23  6:56     ` Patrick Caulfield
  0 siblings, 1 reply; 10+ messages in thread
From: Zachary Hamm @ 2004-07-22 19:25 UTC (permalink / raw)
  To: LVM general discussion and development

Thanks for the tip, but "md_component_detection = 1" is already set in my
lvm.conf file.   Any other tips?  Could it also be linked to my sometimes
getting an "out of memory" message when running the user tools?

-----------------------------------
commands/toolcontext.c:139   Logging initialised at Wed Jul 21 13:48:44 2004

commands/toolcontext.c:158   Set umask to 0077
vgscan.c:51 vgscan  Wiping cache of LVM-capable devices
vgscan.c:54 vgscan  Wiping internal cache
vgscan.c:57 vgscan  Reading all physical volumes.  This may take a while...
toollib.c:414 vgscan  Finding all volume groups
toollib.c:330 vgscan  Finding volume group "vg00"
mm/pool-fast.c:224 vgscan  Out of memory.  Requested 3191341076 bytes.
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdb
mm/pool-fast.c:224 vgscan  Out of memory.  Requested 3191341076 bytes.
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdb
mm/pool-fast.c:224 vgscan  Out of memory.  Requested 3191341076 bytes.
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdc
vgscan.c:22 vgscan  Volume group "vg00" not found
toollib.c:330 vgscan  Finding volume group "rootvg"
vgscan.c:37 vgscan  Found volume group "rootvg" using metadata type lvm2
-------------------------------

The partition scheme on all three drives is the same:
--------------------------------------------------
# fdisk -l /dev/sda
Disk /dev/sda: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14         144     1052257+  82  Linux swap
/dev/sda3             145        4427    34403197+  fd  Linux raid autodetect
--------------------------------------------------------------




----- Original Message ----- 
From: "Patrick Caulfield" <pcaulfie@redhat.com>
To: "LVM general discussion and development" <linux-lvm@redhat.com>
Sent: Thursday, July 22, 2004 11:34 AM
Subject: Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2


> On Thu, Jul 22, 2004 at 09:42:58AM -0400, Zachary Hamm wrote:
> >
> > Hello, I'm running Fedora Core 2 on a Dell PowerEdge 1750 with three 36 GB
> > SCSI drives set up with two volumes, /boot and /, using software RAID 1
> > with an online spare.  This was set up at install, which reported no errors.
> > I've done a yum update as well.
> >
> > The problem is that only one of the two volume groups is recognized and
> > mirrored (/boot), as apparently vgscan does not like the large drives
> > (which aren't that large...).   Any help is appreciated.
> >
>
> It looks to me like you need to stop LVM looking at the "real" disks, and
> make it only look at the MD devices.
>
> Either add a filter like "a/md*/" or add "md_component_detection = 1" to
> the devices section of the lvm.conf file - if you're running a recent lvm2
> userspace.
>
> -- 
>
> patrick
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
  2004-07-22 19:25   ` Zachary Hamm
@ 2004-07-23  6:56     ` Patrick Caulfield
  2004-07-26 22:56       ` Zachary Hamm
  0 siblings, 1 reply; 10+ messages in thread
From: Patrick Caulfield @ 2004-07-23  6:56 UTC (permalink / raw)
  To: LVM general discussion and development

On Thu, Jul 22, 2004 at 03:25:09PM -0400, Zachary Hamm wrote:
> Thanks for the tip.  But "md_component_detection = 1" is already set in my
> lvm.conf file.   Any other tips?  Perhaps also linked to sometimes getting
> an "out of memory" message when running the user tools?
> 

Did you try the filter? The OOM error is almost certainly caused by the tools
reading an incorrect or corrupt header from the disk, and if your PVs are all MD
devices then there's no reason for LVM to scan the underlying disks.
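The kind of sanity check involved can be sketched in miniature: a bounds
check on a size field computed from untrusted on-disk data.  The cap below
is hypothetical and purely illustrative of why the tools refuse the ~3 GB
request derived from the stale header — it is not LVM2's actual code:

```python
MAX_METADATA_READ = 4 * 1024 * 1024     # hypothetical cap on one metadata read

def checked_read_size(requested, cap=MAX_METADATA_READ):
    """Refuse read sizes computed from untrusted on-disk fields."""
    if requested > cap:
        raise ValueError("Read size too large: %d" % requested)
    return requested

checked_read_size(65536)                # a sane metadata read: fine
try:
    checked_read_size(3191341056)       # the bogus value from the logs above
except ValueError as err:
    print(err)                          # -> Read size too large: 3191341056
```

A filter that stops LVM scanning the raw disks avoids ever reading the
corrupt field in the first place.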

patrick


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
  2004-07-23  6:56     ` Patrick Caulfield
@ 2004-07-26 22:56       ` Zachary Hamm
  2004-07-27  7:18         ` Patrick Caulfield
  0 siblings, 1 reply; 10+ messages in thread
From: Zachary Hamm @ 2004-07-26 22:56 UTC (permalink / raw)
  To: LVM general discussion and development


Changed the filter as suggested; no joy.  I still get the same
errors.  Any other suggestions, or should I post more config information?
There has to be something I'm missing.

Zack

----- Original Message ----- 
From: "Patrick Caulfield" <pcaulfie@redhat.com>
To: "LVM general discussion and development" <linux-lvm@redhat.com>
Sent: Friday, July 23, 2004 2:56 AM
Subject: Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2


> On Thu, Jul 22, 2004 at 03:25:09PM -0400, Zachary Hamm wrote:
> > Thanks for the tip.  But "md_component_detection = 1" is already set in
> > my lvm.conf file.   Any other tips?  Perhaps also linked to sometimes
> > getting an "out of memory" message when running the user tools?
> >
>
> Did you try the filter? The OOM error is almost certainly caused by the
> tools reading an incorrect or corrupt header from the disk, and if your
> PVs are all MD devices then there's no reason for LVM to scan the
> underlying disks.
>
> patrick


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
  2004-07-26 22:56       ` Zachary Hamm
@ 2004-07-27  7:18         ` Patrick Caulfield
  2004-07-27 23:33           ` Zachary Hamm
  0 siblings, 1 reply; 10+ messages in thread
From: Patrick Caulfield @ 2004-07-27  7:18 UTC (permalink / raw)
  To: LVM general discussion and development

On Mon, Jul 26, 2004 at 06:56:04PM -0400, Zachary Hamm wrote:
> 
> Changed the filter directed as suggested, no joy.  I still get the same
> errors.  Any other suggestions, or should I post more config information?
> There has to be something I'm missing.
> 

Can you tell me which drives are making up the MD devices and which (if any)
are to be used by LVM directly?

It's possible that it's just the filter that needs more work.  Maybe
something like:

  filter = [ "a/md.*/", "r/.*/" ]
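As an aside, the first-match semantics of these filter lists can be modelled
in a few lines of Python — a toy illustration only, not LVM's actual parser
(real lvm.conf filters also allow delimiters other than "/"):

```python
import re

def accept_device(path, patterns):
    """Toy model of lvm.conf device filtering: scan the filter list in
    order; the first pattern whose regex matches decides ('a' = accept,
    'r' = reject).  A device matching no pattern is accepted by default."""
    for pat in patterns:
        action, regex = pat[0], pat[2:-1]      # "a/md.*/" -> ("a", "md.*")
        if re.search(regex, path):
            return action == "a"
    return True

filters = ["a/md.*/", "r/.*/"]
for dev in ["/dev/md0", "/dev/md1", "/dev/sdb", "/dev/sdc2"]:
    print(dev, "accepted" if accept_device(dev, filters) else "rejected")
```

With such a filter in place, only the /dev/md* devices would be scanned; the
raw SCSI disks and their partitions would be skipped.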
-- 

patrick


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
  2004-07-27  7:18         ` Patrick Caulfield
@ 2004-07-27 23:33           ` Zachary Hamm
  2004-07-28  5:39             ` Luca Berra
  0 siblings, 1 reply; 10+ messages in thread
From: Zachary Hamm @ 2004-07-27 23:33 UTC (permalink / raw)
  To: LVM general discussion and development

Ahh.  Okay, here are my config files:

/etc/raidtab:
----------------------------------
raiddev             /dev/md1
raid-level                  1
nr-raid-disks               2
chunk-size                  256
persistent-superblock       1
nr-spare-disks              1
    device          /dev/sda3
    raid-disk     0
    device          /dev/sdb3
    raid-disk     1
    device          /dev/sdc3
    spare-disk     0
raiddev             /dev/md0
raid-level                  1
nr-raid-disks               2
chunk-size                  256
persistent-superblock       1
nr-spare-disks              1
    device          /dev/sda1
    raid-disk     0
    device          /dev/sdb1
    raid-disk     1
    device          /dev/sdc1
    spare-disk     0
----------------------------------

/etc/fstab:
------------------------------------
/dev/rootvg/LogVol00    /                       ext3    defaults        1 1
/dev/md0                /boot                   ext3    defaults        1 2
none                    /dev/pts                devpts  gid=5,mode=620  0 0
none                    /dev/shm                tmpfs   defaults        0 0
none                    /proc                   proc    defaults        0 0
none                    /sys                    sysfs   defaults        0 0
/dev/sdc2               swap                    swap    defaults        0 0
/dev/sdb2               swap                    swap    defaults        0 0
/dev/sda2               swap                    swap    defaults        0 0
/dev/cdrom              /mnt/cdrom              udf,iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0                /mnt/floppy             auto    noauto,owner,kudzu 0 0
-----------------------------------------------------------------------------

vgscan says:
--------------------------------------------------
Logging initialised at Tue Jul 27 19:32:00 2004

Set umask to 0077
vgscan    Wiping cache of LVM-capable devices
vgscan    Wiping internal cache
vgscan  Reading all physical volumes.  This may take a while...
vgscan    Finding all volume groups
vgscan    Finding volume group "vg00"
vgscan  Read size too large: 3191341056
vgscan  Failed to read extents from /dev/sdb
vgscan  Read size too large: 3191341056
vgscan  Failed to read extents from /dev/sdb
vgscan  Read size too large: 3191341056
vgscan  Failed to read extents from /dev/sdc
vgscan  Volume group "vg00" not found
vgscan    Finding volume group "rootvg"
vgscan  Found volume group "rootvg" using metadata type lvm2
----------------------------------------------------------------




Anything else I should be posting?  BTW, are you here in Raleigh?

Thanks!

Zack






----- Original Message ----- 
From: "Patrick Caulfield" <pcaulfie@redhat.com>
To: "LVM general discussion and development" <linux-lvm@redhat.com>
Sent: Tuesday, July 27, 2004 3:18 AM
Subject: Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2


> On Mon, Jul 26, 2004 at 06:56:04PM -0400, Zachary Hamm wrote:
> >
> > Changed the filter as suggested; no joy.  I still get the same
> > errors.  Any other suggestions, or should I post more config
> > information?  There has to be something I'm missing.
> >
>
> Can you tell me which drives are making up the MD devices and which (if
> any) are to be used by LVM directly?
>
> It's possible that it's just the filter that needs more work.  Maybe
> something like:
>
>   filter = [ "a/md.*/", "r/.*/" ]
> -- 
>
> patrick


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
  2004-07-27 23:33           ` Zachary Hamm
@ 2004-07-28  5:39             ` Luca Berra
  2004-07-30  3:04               ` Zachary Hamm
  0 siblings, 1 reply; 10+ messages in thread
From: Luca Berra @ 2004-07-28  5:39 UTC (permalink / raw)
  To: LVM general discussion and development

On Tue, Jul 27, 2004 at 07:33:00PM -0400, Zachary Hamm wrote:
>Anything else I should be posting?  BTW, are you here in Raleigh?
Please edit lvm.conf so that it produces a full debug log and post that.
Besides, how big are your drives?

L.

-- 
Luca Berra -- bluca@comedia.it
        Communication Media & Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN
  X        AGAINST HTML MAIL
 / \


* Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2
  2004-07-28  5:39             ` Luca Berra
@ 2004-07-30  3:04               ` Zachary Hamm
  0 siblings, 0 replies; 10+ messages in thread
From: Zachary Hamm @ 2004-07-30  3:04 UTC (permalink / raw)
  To: LVM general discussion and development

[-- Attachment #1: Type: text/plain, Size: 1777 bytes --]

Attached is the lvm2.log file with debug at setting 6 (setting it to level 7
created a 7 MB file!).
The drives are three 36 GB SCSI drives, set up as a RAID 1 mirror with a hot
spare.

/etc/raidtab:
----------------------------------------------------
raiddev             /dev/md1
raid-level                  1
nr-raid-disks               2
chunk-size                  256
persistent-superblock       1
nr-spare-disks              1
    device          /dev/sda3
    raid-disk     0
    device          /dev/sdb3
    raid-disk     1
    device          /dev/sdc3
    spare-disk     0
raiddev             /dev/md0
raid-level                  1
nr-raid-disks               2
chunk-size                  256
persistent-superblock       1
nr-spare-disks              1
    device          /dev/sda1
    raid-disk     0
    device          /dev/sdb1
    raid-disk     1
    device          /dev/sdc1
    spare-disk     0



----- Original Message ----- 
From: "Luca Berra" <bluca@comedia.it>
To: "LVM general discussion and development" <linux-lvm@redhat.com>
Sent: Wednesday, July 28, 2004 1:39 AM
Subject: Re: [linux-lvm] LVM2 and RAID 1 not working on Fedora Core 2


> On Tue, Jul 27, 2004 at 07:33:00PM -0400, Zachary Hamm wrote:
> >Anything else I should be posting?  BTW, are you here in Raleigh?
> please edit lvm.conf so that it produces a full debug and post that.
> besides how big are your drives?
>
> L.
>
> -- 
> Luca Berra -- bluca@comedia.it
>         Communication Media & Services S.r.l.
>  /"\
>  \ /     ASCII RIBBON CAMPAIGN
>   X        AGAINST HTML MAIL
>  / \

[-- Attachment #2: lvm2.log --]
[-- Type: application/octet-stream, Size: 12466 bytes --]

config/config.c:800   log/activation not found in config: defaulting to 0
commands/toolcontext.c:139   Logging initialised at Thu Jul 29 22:53:59 2004

config/config.c:795   Setting global/umask to 63
commands/toolcontext.c:158   Set umask to 0077
config/config.c:780   Setting devices/dir to /dev
config/config.c:780   Setting global/proc to /proc
config/config.c:795   Setting global/activation to 1
config/config.c:800   global/suffix not found in config: defaulting to 1
config/config.c:786   global/units not found in config: defaulting to h
config/config.c:780   Setting devices/cache to /etc/lvm/.cache
config/config.c:795   Setting devices/write_cache_state to 1
config/config.c:795   Setting activation/reserved_stack to 256
config/config.c:795   Setting activation/reserved_memory to 8192
config/config.c:795   Setting activation/process_priority to -18
config/config.c:786   global/format not found in config: defaulting to lvm2
commands/toolcontext.c:537   No tags defined in config file
config/config.c:795   Setting backup/retain_days to 30
config/config.c:795   Setting backup/retain_min to 10
config/config.c:780   Setting backup/archive_dir to /etc/lvm/archive
config/config.c:780   Setting backup/backup_dir to /etc/lvm/backup
config/config.c:800   global/fallback_to_lvm1 not found in config: defaulting to 1
config/config.c:795 pvscan  Setting global/locking_type to 1
config/config.c:780 pvscan  Setting global/locking_dir to /var/lock/lvm
locking/locking.c:137 pvscan  File-based locking enabled.
pvscan.c:127 pvscan  Wiping cache of LVM-capable devices
pvscan.c:130 pvscan  Wiping internal cache
pvscan.c:133 pvscan  Walking through all physical volumes
device/dev-io.c:220 pvscan  Getting size of /dev/sda
label/label.c:183 pvscan  /dev/sda: No label detected
device/dev-io.c:220 pvscan  Getting size of /dev/md0
label/label.c:183 pvscan  /dev/md0: No label detected
device/dev-io.c:220 pvscan  Getting size of /dev/sda1
device/dev-io.c:220 pvscan  Getting size of /dev/md1
label/label.c:165 pvscan  /dev/md1: lvm2 label detected
device/dev-io.c:220 pvscan  Getting size of /dev/sda2
label/label.c:183 pvscan  /dev/sda2: No label detected
device/dev-io.c:220 pvscan  Getting size of /dev/sda3
device/dev-io.c:220 pvscan  Getting size of /dev/sdb
label/label.c:165 pvscan  /dev/sdb: lvm1 label detected
device/dev-io.c:220 pvscan  Getting size of /dev/sdb1
device/dev-io.c:220 pvscan  Getting size of /dev/sdb2
label/label.c:183 pvscan  /dev/sdb2: No label detected
device/dev-io.c:220 pvscan  Getting size of /dev/sdb3
device/dev-io.c:220 pvscan  Getting size of /dev/sdc
label/label.c:165 pvscan  /dev/sdc: lvm1 label detected
device/dev-io.c:220 pvscan  Getting size of /dev/sdc1
device/dev-io.c:220 pvscan  Getting size of /dev/sdc2
label/label.c:183 pvscan  /dev/sdc2: No label detected
device/dev-io.c:220 pvscan  Getting size of /dev/sdc3
device/dev-io.c:79 pvscan  Read size too large: 3191341056
format1/disk-rep.c:364 pvscan  Failed to read extents from /dev/sdb
format1/disk-rep.c:152 pvscan  /dev/sda does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 pvscan  /dev/md0 does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 pvscan  /dev/md1 does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 pvscan  /dev/sda2 does not have a valid LVM1 PV identifier
device/dev-io.c:79 pvscan  Read size too large: 3191341056
format1/disk-rep.c:364 pvscan  Failed to read extents from /dev/sdb
format1/disk-rep.c:152 pvscan  /dev/sdb2 does not have a valid LVM1 PV identifier
device/dev-io.c:79 pvscan  Read size too large: 3191341056
format1/disk-rep.c:364 pvscan  Failed to read extents from /dev/sdc
format1/disk-rep.c:152 pvscan  /dev/sdc2 does not have a valid LVM1 PV identifier
label/label.c:165 pvscan  /dev/md1: lvm2 label detected
pvscan.c:97 pvscan  PV /dev/md1   VG rootvg   lvm2 [32.81 GB / 8.00 MB free]
pvscan.c:195 pvscan  Total: 1 [32.81 GB] / in use: 1 [32.81 GB] / in no VG: 0 [0   ]
filters/filter-persistent.c:179 pvscan  Dumping persistent device cache to /etc/lvm/.cache
config/config.c:800   log/activation not found in config: defaulting to 0
commands/toolcontext.c:139   Logging initialised at Thu Jul 29 22:54:02 2004

config/config.c:795   Setting global/umask to 63
commands/toolcontext.c:158   Set umask to 0077
config/config.c:780   Setting devices/dir to /dev
config/config.c:780   Setting global/proc to /proc
config/config.c:795   Setting global/activation to 1
config/config.c:800   global/suffix not found in config: defaulting to 1
config/config.c:786   global/units not found in config: defaulting to h
config/config.c:780   Setting devices/cache to /etc/lvm/.cache
config/config.c:795   Setting devices/write_cache_state to 1
filters/filter-persistent.c:125   Loaded persistent filter cache from /etc/lvm/.cache
config/config.c:795   Setting activation/reserved_stack to 256
config/config.c:795   Setting activation/reserved_memory to 8192
config/config.c:795   Setting activation/process_priority to -18
config/config.c:786   global/format not found in config: defaulting to lvm2
commands/toolcontext.c:537   No tags defined in config file
config/config.c:795   Setting backup/retain_days to 30
config/config.c:795   Setting backup/retain_min to 10
config/config.c:780   Setting backup/archive_dir to /etc/lvm/archive
config/config.c:780   Setting backup/backup_dir to /etc/lvm/backup
config/config.c:800   global/fallback_to_lvm1 not found in config: defaulting to 1
config/config.c:795 vgscan  Setting global/locking_type to 1
config/config.c:780 vgscan  Setting global/locking_dir to /var/lock/lvm
locking/locking.c:137 vgscan  File-based locking enabled.
vgscan.c:51 vgscan  Wiping cache of LVM-capable devices
vgscan.c:54 vgscan  Wiping internal cache
vgscan.c:57 vgscan  Reading all physical volumes.  This may take a while...
toollib.c:414 vgscan  Finding all volume groups
device/dev-io.c:220 vgscan  Getting size of /dev/sda
label/label.c:183 vgscan  /dev/sda: No label detected
device/dev-io.c:220 vgscan  Getting size of /dev/md0
label/label.c:183 vgscan  /dev/md0: No label detected
device/dev-io.c:220 vgscan  Getting size of /dev/sda1
device/dev-io.c:220 vgscan  Getting size of /dev/md1
label/label.c:165 vgscan  /dev/md1: lvm2 label detected
device/dev-io.c:220 vgscan  Getting size of /dev/sda2
label/label.c:183 vgscan  /dev/sda2: No label detected
device/dev-io.c:220 vgscan  Getting size of /dev/sda3
device/dev-io.c:220 vgscan  Getting size of /dev/sdb
label/label.c:165 vgscan  /dev/sdb: lvm1 label detected
device/dev-io.c:220 vgscan  Getting size of /dev/sdb1
device/dev-io.c:220 vgscan  Getting size of /dev/sdb2
label/label.c:183 vgscan  /dev/sdb2: No label detected
device/dev-io.c:220 vgscan  Getting size of /dev/sdb3
device/dev-io.c:220 vgscan  Getting size of /dev/sdc
label/label.c:165 vgscan  /dev/sdc: lvm1 label detected
device/dev-io.c:220 vgscan  Getting size of /dev/sdc1
device/dev-io.c:220 vgscan  Getting size of /dev/sdc2
label/label.c:183 vgscan  /dev/sdc2: No label detected
device/dev-io.c:220 vgscan  Getting size of /dev/sdc3
locking/file_locking.c:162 vgscan  Locking /var/lock/lvm/V_vg00 RB
toollib.c:330 vgscan  Finding volume group "vg00"
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdb
format1/disk-rep.c:152 vgscan  /dev/sda does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 vgscan  /dev/md0 does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 vgscan  /dev/md1 does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 vgscan  /dev/sda2 does not have a valid LVM1 PV identifier
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdb
format1/disk-rep.c:152 vgscan  /dev/sdb2 does not have a valid LVM1 PV identifier
device/dev-io.c:79 vgscan  Read size too large: 3191341056
format1/disk-rep.c:364 vgscan  Failed to read extents from /dev/sdc
format1/disk-rep.c:152 vgscan  /dev/sdc2 does not have a valid LVM1 PV identifier
vgscan.c:22 vgscan  Volume group "vg00" not found
locking/file_locking.c:59 vgscan  Unlocking /var/lock/lvm/V_vg00
locking/file_locking.c:162 vgscan  Locking /var/lock/lvm/V_rootvg RB
toollib.c:330 vgscan  Finding volume group "rootvg"
label/label.c:165 vgscan  /dev/md1: lvm2 label detected
vgscan.c:37 vgscan  Found volume group "rootvg" using metadata type lvm2
locking/file_locking.c:59 vgscan  Unlocking /var/lock/lvm/V_rootvg
filters/filter-persistent.c:179 vgscan  Dumping persistent device cache to /etc/lvm/.cache
config/config.c:800   log/activation not found in config: defaulting to 0
commands/toolcontext.c:139   Logging initialised at Thu Jul 29 22:54:10 2004

config/config.c:795   Setting global/umask to 63
commands/toolcontext.c:158   Set umask to 0077
config/config.c:780   Setting devices/dir to /dev
config/config.c:780   Setting global/proc to /proc
config/config.c:795   Setting global/activation to 1
config/config.c:800   global/suffix not found in config: defaulting to 1
config/config.c:786   global/units not found in config: defaulting to h
config/config.c:780   Setting devices/cache to /etc/lvm/.cache
config/config.c:795   Setting devices/write_cache_state to 1
filters/filter-persistent.c:125   Loaded persistent filter cache from /etc/lvm/.cache
config/config.c:795   Setting activation/reserved_stack to 256
config/config.c:795   Setting activation/reserved_memory to 8192
config/config.c:795   Setting activation/process_priority to -18
config/config.c:786   global/format not found in config: defaulting to lvm2
commands/toolcontext.c:537   No tags defined in config file
config/config.c:795   Setting backup/retain_days to 30
config/config.c:795   Setting backup/retain_min to 10
config/config.c:780   Setting backup/archive_dir to /etc/lvm/archive
config/config.c:780   Setting backup/backup_dir to /etc/lvm/backup
config/config.c:800   global/fallback_to_lvm1 not found in config: defaulting to 1
config/config.c:795 lvscan  Setting global/locking_type to 1
config/config.c:780 lvscan  Setting global/locking_dir to /var/lock/lvm
locking/locking.c:137 lvscan  File-based locking enabled.
toollib.c:223 lvscan  Finding all logical volumes
label/label.c:183 lvscan  /dev/sda: No label detected
label/label.c:183 lvscan  /dev/md0: No label detected
label/label.c:165 lvscan  /dev/md1: lvm2 label detected
label/label.c:183 lvscan  /dev/sda2: No label detected
label/label.c:165 lvscan  /dev/sdb: lvm1 label detected
label/label.c:183 lvscan  /dev/sdb2: No label detected
label/label.c:165 lvscan  /dev/sdc: lvm1 label detected
label/label.c:183 lvscan  /dev/sdc2: No label detected
locking/file_locking.c:162 lvscan  Locking /var/lock/lvm/V_vg00 RB
device/dev-io.c:79 lvscan  Read size too large: 3191341056
format1/disk-rep.c:364 lvscan  Failed to read extents from /dev/sdb
format1/disk-rep.c:152 lvscan  /dev/sda does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 lvscan  /dev/md0 does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 lvscan  /dev/md1 does not have a valid LVM1 PV identifier
format1/disk-rep.c:152 lvscan  /dev/sda2 does not have a valid LVM1 PV identifier
device/dev-io.c:79 lvscan  Read size too large: 3191341056
format1/disk-rep.c:364 lvscan  Failed to read extents from /dev/sdb
format1/disk-rep.c:152 lvscan  /dev/sdb2 does not have a valid LVM1 PV identifier
device/dev-io.c:79 lvscan  Read size too large: 3191341056
format1/disk-rep.c:364 lvscan  Failed to read extents from /dev/sdc
format1/disk-rep.c:152 lvscan  /dev/sdc2 does not have a valid LVM1 PV identifier
locking/file_locking.c:59 lvscan  Unlocking /var/lock/lvm/V_vg00
toollib.c:247 lvscan  Volume group "vg00" not found
locking/file_locking.c:162 lvscan  Locking /var/lock/lvm/V_rootvg RB
label/label.c:165 lvscan  /dev/md1: lvm2 label detected
config/config.c:780 lvscan  Setting activation/missing_stripe_filler to /dev/ioerror
config/config.c:795 lvscan  Setting activation/mirror_region_size to 512
lvscan.c:43 lvscan  ACTIVE            '/dev/rootvg/LogVol00' [32.80 GB] next free (default)
locking/file_locking.c:59 lvscan  Unlocking /var/lock/lvm/V_rootvg
filters/filter-persistent.c:179 lvscan  Dumping persistent device cache to /etc/lvm/.cache

 

