From: Julie Ashworth <julie.ashworth@berkeley.edu>
To: linux-lvm@redhat.com
Subject: [linux-lvm] Re: missing physical volumes after upgrade to rhel 5.4
Date: Thu, 24 Sep 2009 12:38:04 -0700
Message-ID: <20090924193804.GC5230@venus.gateway.2wire.net>
In-Reply-To: <20090924100622.GA3517@venus.neuro.bekeley.edu>

Some additional information:

----------------------
# lvm dumpconfig
  devices {
    dir="/dev"
    scan="/dev"
    preferred_names=[]
    filter="a/.*/"
    cache_dir="/etc/lvm/cache"
    cache_file_prefix=""
    write_cache_state=1
    sysfs_scan=1
    md_component_detection=1
    ignore_suspended_devices=0
  }
  activation {
    missing_stripe_filler="/dev/ioerror"
    reserved_stack=256
    reserved_memory=8192
    process_priority=-18
    mirror_region_size=512
    readahead="auto"
    mirror_log_fault_policy="allocate"
    mirror_device_fault_policy="remove"
  }
  global {
    umask=63
    test=0
    units="h"
    activation=1
    proc="/proc"
    locking_type=1
    fallback_to_clustered_locking=1
    fallback_to_local_locking=1
    locking_dir="/var/lock/lvm"
  }
  shell {
    history_size=100
  }
  backup {
    backup=1
    backup_dir="/etc/lvm/backup"
    archive=1
    archive_dir="/etc/lvm/archive"
    retain_min=10
    retain_days=30
  }
  log {
    verbose=0
    syslog=1
    overwrite=0
    level=0
    indent=1
    command_names=0
    prefix="  "
  }
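
One detail in the config above that may matter: write_cache_state=1
together with cache_dir="/etc/lvm/cache" means device scan results are
persisted in /etc/lvm/cache/.cache, and a cache file left over from
before the upgrade could hide devices from the tools. I haven't tried
this yet, but moving the cache aside and rescanning might be worth a
shot (the .bak name is just an example):

# mv /etc/lvm/cache/.cache /etc/lvm/cache/.cache.bak
# pvscan
# vgscan --mknodes
# vgchange -ay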

----------------------
(I powered off two of the storage devices, so only
the one 12TB device remains accessible):

# lvmdiskscan
  /dev/ramdisk   [       16.00 MB] 
  /dev/md0       [      148.94 MB] 
  /dev/ram       [       16.00 MB] 
  /dev/md1       [        3.91 GB] 
  /dev/ram2      [       16.00 MB] 
  /dev/md2       [      105.46 GB] 
  /dev/dm-2      [       11.83 TB] 
  /dev/ram3      [       16.00 MB] 
  /dev/sda3      [        3.91 GB] 
  /dev/md3       [        3.91 GB] 
  /dev/ram4      [       16.00 MB] 
  /dev/md4       [      800.46 GB] 
  /dev/ram5      [       16.00 MB] 
  /dev/ram6      [       16.00 MB] 
  /dev/ram7      [       16.00 MB] 
  /dev/ram8      [       16.00 MB] 
  /dev/ram9      [       16.00 MB] 
  /dev/ram10     [       16.00 MB] 
  /dev/ram11     [       16.00 MB] 
  /dev/ram12     [       16.00 MB] 
  /dev/ram13     [       16.00 MB] 
  /dev/ram14     [       16.00 MB] 
  /dev/ram15     [       16.00 MB] 
  /dev/sdb6      [        3.91 GB] 
  /dev/sdb8      [        6.05 GB] 
  3 disks
  23 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume
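
If it helps narrow things down, the one PV that lvmdiskscan still sees
can be identified with pvs (I'd guess it's /dev/dm-2, but I haven't
confirmed that):

# pvs -o pv_name,vg_name,pv_size,pv_uuid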



-- 
Julie Ashworth <julie.ashworth@berkeley.edu>
Computational Infrastructure for Research Labs, UC Berkeley 
http://cirl.berkeley.edu/
PGP Key ID: 0x17F013D2

Thread overview: 4+ messages
2009-09-24 10:06 [linux-lvm] missing physical volumes after upgrade to rhel 5.4 Julie Ashworth
2009-09-24 19:38 ` Julie Ashworth [this message]
2009-09-25  9:52 ` Mark Round
2009-09-26  1:45   ` Julie Ashworth
