linux-lvm.redhat.com archive mirror
From: brem belguebli <brem.belguebli@gmail.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Lvm hangs on San fail
Date: Thu, 15 Apr 2010 01:02:09 +0200	[thread overview]
Message-ID: <1271286129.2462.0.camel@localhost> (raw)
In-Reply-To: <a6e6303b5e6ea7de1c0bf2618b82bd80.squirrel@fela.liber4e.com>

Post your multipath.conf file; you may be queuing forever?
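The usual culprit is dm-multipath queuing I/O indefinitely once every path is gone (the queue_if_no_path feature, controlled by no_path_retry), which makes LVM commands block instead of getting an I/O error back. A minimal sketch of the relevant multipath.conf settings, with illustrative values rather than anything taken from your setup:

    defaults {
        # path checker interval, in seconds
        polling_interval 5
        # "queue" (queue_if_no_path) queues I/O forever, "fail" errors out
        # immediately, and a number N queues for roughly N polling intervals
        # before failing I/O up to LVM
        no_path_retry 5
    }

You can see what the running maps are actually doing with multipath -ll (look for features='1 queue_if_no_path' on each map) or dmsetup table <mapname>.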



On Wed, 2010-04-14 at 15:03 +0000, jose nuno neto wrote:
> Hi2all
> 
> I'm on RHEL 5.4 with
> lvm2-2.02.46-8.el5_4.1
> 2.6.18-164.2.1.el5
> 
> I have a multipathed SAN connection on which I'm building LVs.
> It's a cluster system, and I want the LVs to switch nodes on failure.
> 
> If I simulate a failure through the OS via /sys/bus/scsi/devices/$DEVICE/delete,
> the LV fails and the service switches to the other node.
> 
> But if I do it for "real" with a port-down on the SAN switch, multipath reports
> the paths down, but LVM commands hang forever and nothing gets switched.
> 
> From the logs I see multipath failing paths, and LVM reporting "Failed to
> remove faulty devices".
> 
> Any ideas how I should "fix" it?
> 
> Apr 14 16:02:45 dc1-x6250-a lvm[15622]: Log device, 253:53, has failed.
> Apr 14 16:02:45 dc1-x6250-a lvm[15622]: Device failure in vg_ora_scapa-lv_ora_scapa_redo
> Apr 14 16:02:45 dc1-x6250-a lvm[15622]: Another thread is handling an event.  Waiting...
> 
> Apr 14 16:02:52 dc1-x6250-a multipathd: mpath-dc1-a: remaining active paths: 0
> Apr 14 16:02:52 dc1-x6250-a multipathd: mpath-dc1-a: remaining active paths: 0
> Apr 14 16:02:52 dc1-x6250-a multipathd: mpath-dc1-b: remaining active paths: 0
> Apr 14 16:02:52 dc1-x6250-a multipathd: mpath-dc1-b: remaining active paths: 0
> 
> Apr 14 16:03:05 dc1-x6250-a lvm[15622]: Device failure in vg_syb_roger-lv_syb_roger_admin
> Apr 14 16:03:14 dc1-x6250-a lvm[15622]: Failed to remove faulty devices in vg_syb_roger-lv_syb_roger_admin
> 
> Much Thanks
> Jose
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
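For reference, the per-device removal Jose describes for simulating a failure from the OS side is normally done by writing to the sysfs delete attribute, e.g. (device name purely illustrative):

    # remove one SCSI device from the midlayer; multipathd then fails that path
    echo 1 > /sys/bus/scsi/devices/$DEVICE/delete    # e.g. DEVICE=0:0:1:3

The different behaviour on a real switch port-down usually comes back to how long multipath keeps queuing I/O for the lost paths (the no_path_retry / queue_if_no_path settings sketched above).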


Thread overview: 16+ messages
2010-02-24 16:14 [linux-lvm] Mirror fail/recover test jose nuno neto
2010-02-24 18:55 ` malahal
2010-02-25 10:36   ` jose nuno neto
2010-02-25 16:11     ` malahal
2010-03-02 10:31       ` [linux-lvm] Mirror fail/recover test SOLVED jose nuno neto
2010-04-14 15:03         ` [linux-lvm] Lvm hangs on San fail jose nuno neto
2010-04-14 17:38           ` Eugene Vilensky
2010-04-14 23:02           ` brem belguebli [this message]
2010-04-15  8:29             ` jose nuno neto
2010-04-15  9:32               ` Bryan Whitehead
2010-04-15 11:59               ` jose nuno neto
2010-04-15 12:41                 ` Eugene Vilensky
2010-04-16  8:55                   ` jose nuno neto
2010-04-16 20:15                     ` Bryan Whitehead
2010-04-17  9:00                     ` brem belguebli
2010-04-19  9:21                       ` jose nuno neto
