linux-lvm.redhat.com archive mirror
From: jason@monsterjam.org
To: "Matthew B. Brookover" <mbrookov@mines.edu>
Cc: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] lvm upgrade problems.
Date: Wed, 23 May 2007 19:45:26 -0400	[thread overview]
Message-ID: <20070523234525.GC21463@monsterjam.org> (raw)
In-Reply-To: <1179929417.5276.10.camel@merlin.Mines.EDU>

OK, everything is fine now. I upgraded the other server to the same versions as this one, and
everyone is happy now. Thanks for the help.

Jason


On Wed, May 23, 2007 at 08:10:16AM -0600, Matthew B. Brookover wrote:
>    You probably need to start up the cluster infrastructure: cman, ccsd,
>    fenced, and clvmd.
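>    On a Red Hat Cluster Suite 4 node that is usually done through the init
>    scripts, in roughly this order (a sketch only -- exact service names and
>    ordering can differ between releases):
> 
> service ccsd start
> service cman start
> service fenced start
> service clvmd start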
>    This is probably not a good idea, but you can also turn off LVM2
>    locking with:
> lvmconf --disable-cluster
> 
>    you can turn LVM2 locking back on with:
> lvmconf --enable-cluster --lockinglibdir /usr/lib --lockinglib liblvm2clusterlock.so
> 
>    The lvmconf command edits /etc/lvm/lvm.conf.
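>    Roughly speaking, it flips the locking settings in the global section.
>    On a clustered node you would expect to see something like this (typical
>    values, not copied from your machine):
> 
> # /etc/lvm/lvm.conf, global section
> locking_type = 3                  # 3 = clustered locking via clvmd, 1 = local file-based
> library_dir = "/usr/lib"
> locking_library = "liblvm2clusterlock.so"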
>    The errors 'connect() failed on local socket: Connection refused'
>    appear from LVM2 commands when clvmd is not running and lvm.conf is
>    configured for a cluster.
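>    A quick way to see which situation you are in (just a suggested check):
> 
> ps -C clvmd
> service clvmd status
> grep locking_type /etc/lvm/lvm.conf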
>    I am not sure why the device name does not show up.  After turning off
>    locking, you could try to do a 'vgchange -ay' and hopefully it will
>    appear.
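>    For example (the VG and LV names below assume your existing diskarray/lv1):
> 
> vgchange -ay diskarray
> ls -l /dev/diskarray/lv1 /dev/mapper/diskarray-lv1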
>    Matt
>    On Tue, 2007-05-22 at 20:51 -0400, jason@monsterjam.org wrote:
> 
> hey list. I'm running
> [root@tf2 ~]# uname -a
> Linux tf2.localdomain 2.6.9-55.ELsmp #1 SMP Fri Apr 20 17:03:35 EDT 2007 i686 i686 i386 GNU/Linux
> [root@tf2 ~]# cat /etc/redhat-release
> Red Hat Enterprise Linux AS release 4 (Nahant Update 5)
> [root@tf2 ~]#
> 
> and I have an LVM volume created on a GFS-formatted drive that I can't see anymore.
> 
> [root@tf2 ~]# vgscan
>   connect() failed on local socket: Connection refused
>   connect() failed on local socket: Connection refused
>   WARNING: Falling back to local file-based locking.
>   Volume Groups with the clustered attribute will be inaccessible.
>   Reading all physical volumes.  This may take a while...
>   Skipping clustered volume group diskarray
> [root@tf2 ~]# pvscan
>   connect() failed on local socket: Connection refused
>   connect() failed on local socket: Connection refused
>   WARNING: Falling back to local file-based locking.
>   Volume Groups with the clustered attribute will be inaccessible.
>   PV /dev/sdb1   VG diskarray   lvm2 [136.48 GB / 6.48 GB free]
>   Total: 1 [136.48 GB] / in use: 1 [136.48 GB] / in no VG: 0 [0   ]
> [root@tf2 ~]#
> 
> What's more, the device name used to be
> /dev/diskarray/lv1
> but now all I see is
> 
> [root@tf2 ~]# ls -al /dev/disk/by-path/*
> lrwxrwxrwx  1 root root  9 May 22 15:34 /dev/disk/by-path/pci-0000:00:1f.1-ide-0:0 -> ../../hda
> lrwxrwxrwx  1 root root  9 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0 -> ../../sda
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part1 -> ../../sda1
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part2 -> ../../sda2
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part3 -> ../../sda3
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part4 -> ../../sda4
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part5 -> ../../sda5
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part6 -> ../../sda6
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part7 -> ../../sda7
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part8 -> ../../sda8
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part9 -> ../../sda9
> lrwxrwxrwx  1 root root  9 May 22 15:34 /dev/disk/by-path/pci-0000:03:0b.0-scsi-0:2:0:0 -> ../../sdb
> lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:03:0b.0-scsi-0:2:0:0-part1 -> ../../sdb1
> [root@tf2 ~]#
> 
> 
> /dev/sdb1 is my disk array.
> 
> any ideas?
> 
> Jason
> 
> 

> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Thread overview: 3+ messages
2007-05-23  0:51 [linux-lvm] lvm upgrade problems jason
2007-05-23 14:10 ` Matthew B. Brookover
2007-05-23 23:45   ` jason [this message]
