linux-lvm.redhat.com archive mirror
* [linux-lvm] piranha on Fedora core 6, piranha-gui fails, pulse running_real servers not switching
@ 2007-04-07 17:02 Anuj Singh
  2007-04-07 22:21 ` [linux-lvm] " Anuj Singh
  0 siblings, 1 reply; 2+ messages in thread
From: Anuj Singh @ 2007-04-07 17:02 UTC (permalink / raw)
  To: linux-lvm

Hi,
I am configuring an LVS NAT router with piranha-0.7.12-1 on Fedora Core 6 machines.
When I turn off the service on one of my real servers, traffic is not switching
to the second real server. I can access the service and see connection logs in
ipvsadm -L.
I edited the sample.cf file and copied it as lvs.cf, since my piranha-gui is not
running due to some missing Apache modules.

Director Master lvs machine
eth0=10.1.1.1
eth1=192.168.10.42

Routing table before starting the pulse service:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1

Routing table after starting the pulse service:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 eth0

ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.22:ftp wlc
TCP  10.1.1.21:http rr persistent 60
 -> node31.prolog.com:http Masq    2      0          0


Slave (backup) LVS machine
eth0=10.1.12
eth1=192.168.10.55

Test client machine IP = 10.1.1.11
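One general prerequisite worth double-checking for an LVS-NAT director like this one: IP forwarding must be enabled on the director, or masqueraded traffic between the virtual side (eth0) and the real-server side (eth1) will not flow. A minimal sketch of that check (assuming the standard Linux procfs path; this helper is mine, not part of piranha):

```python
def nat_forwarding_enabled(path="/proc/sys/net/ipv4/ip_forward"):
    """LVS-NAT requires the director to forward packets between the
    virtual-server side and the real-server side; the kernel exposes
    the toggle in procfs. Returns True when forwarding is on."""
    with open(path) as f:
        return f.read().strip() == "1"

# On the director, nat_forwarding_enabled() should return True once
# `sysctl -w net.ipv4.ip_forward=1` (or the equivalent in
# /etc/sysctl.conf) has been applied.
```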

my lvs.cf

service = lvs
primary = 192.168.10.42
backup = 192.168.10.55
backup_active = 1
heartbeat = 1
heartbeat_port = 1050
keepalive = 6
deadtime = 18
network = nat
nat_router = 192.168.10.111 eth1:1
virtual server1 {
   address = 10.1.1.21 eth0:1
   active = 1
   load_monitor = uptime
   timeout = 5
   reentry = 10
   port = http
       send = "GET / HTTP/1.0\r\n\r\n"
       expect = "HTTP"
   scheduler = rr
   persistent = 60
   pmask = 255.255.255.255
       protocol = tcp
   server Real1 {
       address = 192.168.10.31
       active = 1
       weight = 2
   }
   server Real2 {
       address = 192.168.10.83
       active = 1
       weight = 1
   }
}
virtual server2 {
   address = 10.1.1.22 eth0:2
   active = 1
   load_monitor = uptime
   timeout = 5
   reentry = 10
   port = 21
       send = "\n"
   server Real1 {
       address = 192.168.10.83
       active = 1
   }
   server Real2 {
       address = 192.168.10.3
       active = 1
   }
}
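As a sanity check of the health-check side of this config, the send/expect probe configured for server1 can be reproduced by hand: connect to the real server, write the send string, and look for the expect string in the reply. A rough Python sketch of that probe (a hypothetical helper, not nanny's actual code; nanny's real logic may differ in detail):

```python
import socket

def send_expect_check(host, port, send, expect, timeout=5.0):
    """Rough sketch of a piranha-style send/expect probe: connect to
    the real server, write the `send` string, and report whether the
    `expect` string appears in the reply."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(send.encode())
            reply = s.recv(4096).decode(errors="replace")
    except OSError:
        return False
    return expect in reply

# The check configured for virtual server1 above would be roughly:
# send_expect_check("192.168.10.31", 80, "GET / HTTP/1.0\r\n\r\n", "HTTP")
```

If this returns False for a real server that nanny reports as available (or vice versa), the discrepancy points at the probe configuration rather than the routing.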


Logs:
Apr  7 21:50:45 pr0032 pulse[7695]: STARTING PULSE AS MASTER
Apr  7 21:51:03 pr0032 pulse[7695]: partner dead: activating lvs
Apr  7 21:51:03 pr0032 lvs[7697]: starting virtual service server1 active: 80
Apr  7 21:51:03 pr0032 nanny[7703]: starting LVS client monitor for 10.1.1.21:80
Apr  7 21:51:03 pr0032 lvs[7697]: create_monitor for server1/Real1
running as pid 7703
Apr  7 21:51:03 pr0032 nanny[7704]: starting LVS client monitor for 10.1.1.21:80
Apr  7 21:51:03 pr0032 lvs[7697]: create_monitor for server1/Real2
running as pid 7704
Apr  7 21:51:03 pr0032 lvs[7697]: starting virtual service server2 active: 21
Apr  7 21:51:03 pr0032 nanny[7706]: starting LVS client monitor for 10.1.1.22:21
Apr  7 21:51:03 pr0032 lvs[7697]: create_monitor for server2/Real1
running as pid 7706
Apr  7 21:51:03 pr0032 nanny[7707]: starting LVS client monitor for 10.1.1.22:21
Apr  7 21:51:03 pr0032 lvs[7697]: create_monitor for server2/Real2
running as pid 7707
Apr  7 21:51:03 pr0032 avahi-daemon[4317]: Registering new address
record for 10.1.1.21 on eth0.
Apr  7 21:51:03 pr0032 avahi-daemon[4317]: Registering new address
record for 10.1.1.22 on eth0.
Apr  7 21:51:03 pr0032 nanny[7703]: making 192.168.10.31:80 available
Apr  7 21:51:03 pr0032 avahi-daemon[4317]: Registering new address
record for 192.168.10.111 on eth1.
Apr  7 21:51:08 pr0032 pulse[7700]: gratuitous lvs arps finished
Apr  7 21:51:28 pr0032 nanny[7703]: The following exited abnormally:
Apr  7 21:51:28 pr0032 nanny[7703]: failed to read remote load
Apr  7 21:51:48 pr0032 nanny[7703]: The following exited abnormally:
Apr  7 21:51:48 pr0032 nanny[7703]: failed to read remote load
Apr  7 21:52:08 pr0032 nanny[7703]: The following exited abnormally:
Apr  7 21:52:08 pr0032 nanny[7703]: failed to read remote load
Apr  7 21:52:28 pr0032 nanny[7703]: The following exited abnormally:

What am I missing here? Is it necessary to use piranha-gui to
configure pulse? I am using Fedora Core 6.
When I switch off one of my real servers, traffic is not switching to real
server 2. My logs show nanny errors about reading the remote load. Do I need
to make some changes on my real servers?

Thanks and regards
Anuj Singh

^ permalink raw reply	[flat|nested] 2+ messages in thread

* [linux-lvm] Re: piranha on Fedora core 6, piranha-gui fails, pulse running_real servers not switching
  2007-04-07 17:02 [linux-lvm] piranha on Fedora core 6, piranha-gui fails, pulse running_real servers not switching Anuj Singh
@ 2007-04-07 22:21 ` Anuj Singh
  0 siblings, 0 replies; 2+ messages in thread
From: Anuj Singh @ 2007-04-07 22:21 UTC (permalink / raw)
  To: linux-lvm

Hello,
It is working now. One of my real servers had a routing problem, which I
fixed. The only remaining problem is reading the load, as nanny still
shows the same error (see logs below).

Thanks and regards
anugunj "anuj"


Apr  7 21:51:48 pr0032 nanny[7703]: failed to read remote load
Apr  7 21:52:08 pr0032 nanny[7703]: The following exited abnormally:
Apr  7 21:52:08 pr0032 nanny[7703]: failed to read remote load
Apr  7 21:52:28 pr0032 nanny[7703]: The following exited abnormally:
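Regarding the remaining error: with load_monitor = uptime, nanny has to fetch uptime output from each real server remotely (an rsh-style read, if I remember the piranha docs right), so each real server must allow that remote command from the director. A hypothetical sketch of the fetch-and-parse step — the rsh trust assumption is mine, not confirmed from this thread:

```python
import re
import subprocess

LOAD_RE = re.compile(r"load averages?:\s*([\d.]+)")

def parse_load(uptime_output):
    """Pull the 1-minute load average out of `uptime` output."""
    m = LOAD_RE.search(uptime_output)
    if m is None:
        raise ValueError("failed to read remote load")
    return float(m.group(1))

def read_remote_load(host, user="root", timeout=10):
    """Hypothetical equivalent of nanny's load read: run `uptime` on
    the real server via rsh and parse the result. Assumes passwordless
    rsh trust (rsh-server plus .rhosts) on the real server; without it
    the read fails, much like the nanny errors above."""
    out = subprocess.run(["rsh", "-l", user, host, "uptime"],
                         capture_output=True, text=True,
                         timeout=timeout).stdout
    return parse_load(out)
```

If `rsh <real-server> uptime` run by hand from the director prompts for a password or hangs, nanny's load read will presumably keep failing the same way.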

What am i missing here? Is it necessary to use piranha-gui to

