* [Cluster-devel] [PATCH] gfs2: add native setup to man page
From: David Teigland @ 2013-05-13 16:09 UTC
To: cluster-devel.redhat.com
List the simplest sequence of steps to manually
set up and run gfs2/dlm.
Signed-off-by: David Teigland <teigland@redhat.com>
---
gfs2/man/gfs2.5 | 188 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 188 insertions(+)
diff --git a/gfs2/man/gfs2.5 b/gfs2/man/gfs2.5
index 25effdd..eb12934 100644
--- a/gfs2/man/gfs2.5
+++ b/gfs2/man/gfs2.5
@@ -196,3 +196,191 @@ The GFS2 documentation has been split into a number of sections:
\fBgfs2_tool\fP(8) Tool to manipulate a GFS2 file system (obsolete)
\fBtunegfs2\fP(8) Tool to manipulate GFS2 superblocks
+.SH SETUP
+
+GFS2 clustering is driven by the dlm, which depends on dlm_controld to
+provide clustering from userspace. dlm_controld clustering is built on
+corosync cluster/group membership and messaging.
+
+Follow these steps to manually configure and run gfs2/dlm/corosync.
+
+.B 1. create /etc/corosync/corosync.conf and copy to all nodes
+
+In this sample, replace cluster_name and IP addresses, and add nodes as
+needed. If using only two nodes, uncomment the two_node line.
+See corosync.conf(5) for more information.
+
+.nf
+totem {
+ version: 2
+ secauth: off
+ cluster_name: abc
+}
+
+nodelist {
+ node {
+ ring0_addr: 10.10.10.1
+ nodeid: 1
+ }
+ node {
+ ring0_addr: 10.10.10.2
+ nodeid: 2
+ }
+ node {
+ ring0_addr: 10.10.10.3
+ nodeid: 3
+ }
+}
+
+quorum {
+ provider: corosync_votequorum
+# two_node: 1
+}
+
+logging {
+ to_syslog: yes
+}
+.fi
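With only two nodes, the quorum section of the sample above would be uncommented to read:

```
quorum {
	provider: corosync_votequorum
	two_node: 1
}
```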
+
+.PP
+
+.B 2. start corosync on all nodes
+
+.nf
+systemctl start corosync
+.fi
+
+Run corosync-quorumtool to verify that all nodes are listed.
+
+.PP
+
+.B 3. create /etc/dlm/dlm.conf and copy to all nodes
+
+.B *
+To use no fencing, use this line:
+
+.nf
+enable_fencing=0
+.fi
+
+.B *
+To use no fencing, but exercise fencing functions, use this line:
+
+.nf
+fence_all /bin/true
+.fi
+
+The "true" binary will be executed for all nodes and will succeed (exit 0)
+immediately.
+
+.B *
+To use manual fencing, use this line:
+
+.nf
+fence_all /bin/false
+.fi
+
+The "false" binary will be executed for all nodes and will fail (exit 1)
+immediately.
+
+When a node fails, manually run: dlm_tool fence_ack <nodeid>
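As a sketch (the nodeid is hypothetical), the acknowledgement for a failed node 2 could be composed and reviewed before running it on a surviving node:

```shell
# Manual fence acknowledgement for nodeid 2 (hypothetical).
# Only run the real command after verifying the failed node is
# actually down; acknowledging a live node risks fs corruption.
NODEID=2
CMD="dlm_tool fence_ack ${NODEID}"
echo "$CMD"
```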
+
+.B *
+To use stonith/pacemaker for fencing, use this line:
+
+.nf
+fence_all /usr/sbin/dlm_stonith
+.fi
+
+The "dlm_stonith" binary will be executed for all nodes. If
+stonith/pacemaker systems are not available, dlm_stonith will fail and
+this config becomes the equivalent of the previous /bin/false config.
+
+.B *
+To use an APC power switch, use these lines:
+
+.nf
+device apc /usr/sbin/fence_apc ipaddr=1.1.1.1 login=admin password=pw
+connect apc node=1 port=1
+connect apc node=2 port=2
+connect apc node=3 port=3
+.fi
+
+Other network switch-based agents are configured similarly.
+
+.B *
+To use sanlock/watchdog fencing, use these lines:
+
+.nf
+device wd /usr/sbin/fence_sanlock path=/dev/fence/leases
+connect wd node=1 host_id=1
+connect wd node=2 host_id=2
+unfence wd
+.fi
+
+See fence_sanlock(8) for more information.
+
+.B *
+For other fencing configurations, see the dlm.conf(5) man page.
+
+.PP
+
+.B 4. start dlm_controld on all nodes
+
+.nf
+systemctl start dlm
+.fi
+
+Run "dlm_tool status" to verify that all nodes are listed.
+
+.PP
+
+.B 5. if using clvm, start clvmd on all nodes
+
+systemctl start clvmd
+
+.PP
+
+.B 6. make new gfs2 file systems
+
+mkfs.gfs -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage
+
+The cluster_name must match the name used in step 1 above.
+The fs_name must be a unique name in the cluster.
+The -j option is the number of journals to create; there must
+be one for each node that will mount the fs.
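As a concrete sketch, the command for the three-node cluster named "abc" from step 1 might be assembled like this; the file system name and device path are placeholders:

```shell
# Build the mkfs.gfs2 invocation for the example cluster.
# CLUSTER must match cluster_name in corosync.conf (step 1);
# FSNAME and DEV are hypothetical values for illustration.
CLUSTER=abc
FSNAME=test_fs
JOURNALS=3                    # one journal per node that will mount the fs
DEV=/dev/vg_cluster/lv_gfs2   # hypothetical shared block device

CMD="mkfs.gfs2 -p lock_dlm -t ${CLUSTER}:${FSNAME} -j ${JOURNALS} ${DEV}"
echo "$CMD"   # echoed for review; run it only against the shared storage
```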
+
+.PP
+
+.B 7. mount gfs2 file systems
+
+mount /path/to/storage /mountpoint
+
+Run "dlm_tool ls" to verify the nodes that have each fs mounted.
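If the file system should also be mounted at boot, after the dlm service is up, a matching /etc/fstab entry might look like this (device and mountpoint are placeholders):

```
# hypothetical device and mountpoint; one line per gfs2 file system
/dev/vg_cluster/lv_gfs2  /mnt/test_fs  gfs2  noatime  0 0
```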
+
+.PP
+
+.B 8. shut down
+
+.nf
+umount -a gfs2
+systemctl stop clvmd
+systemctl stop dlm
+systemctl stop corosync
+.fi
+
+.PP
+
+.B More setup information:
+.br
+.BR dlm_controld (8),
+.br
+.BR dlm_tool (8),
+.br
+.BR dlm.conf (5),
+.br
+.BR corosync (8),
+.br
+.BR corosync.conf (5)
+.br
+
--
1.8.1.rc1.5.g7e0651a
* [Cluster-devel] [PATCH] gfs2: add native setup to man page
From: Andrew Price @ 2013-05-14 9:04 UTC
To: cluster-devel.redhat.com
Hi,
Looks good to me. Just a couple of tweaks:
On 13/05/13 17:09, David Teigland wrote:
> List the simplest sequence of steps to manually
> set up and run gfs2/dlm.
> [...]
> +.B 6. make new gfs2 file systems
> +
> +mkfs.gfs -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage
Missing a 2 here: "mkfs.gfs" should be "mkfs.gfs2".
> [...]
> +.B 8. shut down
> +
> +.nf
> +umount -a gfs2
This needs a -t
Andy
* [Cluster-devel] [PATCH] gfs2: add native setup to man page
From: David Teigland @ 2013-05-14 17:50 UTC
To: cluster-devel.redhat.com
List the simplest sequence of steps to manually
set up and run gfs2/dlm.
Signed-off-by: David Teigland <teigland@redhat.com>
---
gfs2/man/gfs2.5 | 188 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 188 insertions(+)
diff --git a/gfs2/man/gfs2.5 b/gfs2/man/gfs2.5
index 25effdd..220a10d 100644
--- a/gfs2/man/gfs2.5
+++ b/gfs2/man/gfs2.5
@@ -196,3 +196,191 @@ The GFS2 documentation has been split into a number of sections:
\fBgfs2_tool\fP(8) Tool to manipulate a GFS2 file system (obsolete)
\fBtunegfs2\fP(8) Tool to manipulate GFS2 superblocks
+.SH SETUP
+
+GFS2 clustering is driven by the dlm, which depends on dlm_controld to
+provide clustering from userspace. dlm_controld clustering is built on
+corosync cluster/group membership and messaging.
+
+Follow these steps to manually configure and run gfs2/dlm/corosync.
+
+.B 1. create /etc/corosync/corosync.conf and copy to all nodes
+
+In this sample, replace cluster_name and IP addresses, and add nodes as
+needed. If using only two nodes, uncomment the two_node line.
+See corosync.conf(5) for more information.
+
+.nf
+totem {
+ version: 2
+ secauth: off
+ cluster_name: abc
+}
+
+nodelist {
+ node {
+ ring0_addr: 10.10.10.1
+ nodeid: 1
+ }
+ node {
+ ring0_addr: 10.10.10.2
+ nodeid: 2
+ }
+ node {
+ ring0_addr: 10.10.10.3
+ nodeid: 3
+ }
+}
+
+quorum {
+ provider: corosync_votequorum
+# two_node: 1
+}
+
+logging {
+ to_syslog: yes
+}
+.fi
+
+.PP
+
+.B 2. start corosync on all nodes
+
+.nf
+systemctl start corosync
+.fi
+
+Run corosync-quorumtool to verify that all nodes are listed.
+
+.PP
+
+.B 3. create /etc/dlm/dlm.conf and copy to all nodes
+
+.B *
+To use no fencing, use this line:
+
+.nf
+enable_fencing=0
+.fi
+
+.B *
+To use no fencing, but exercise fencing functions, use this line:
+
+.nf
+fence_all /bin/true
+.fi
+
+The "true" binary will be executed for all nodes and will succeed (exit 0)
+immediately.
+
+.B *
+To use manual fencing, use this line:
+
+.nf
+fence_all /bin/false
+.fi
+
+The "false" binary will be executed for all nodes and will fail (exit 1)
+immediately.
+
+When a node fails, manually run: dlm_tool fence_ack <nodeid>
+
+.B *
+To use stonith/pacemaker for fencing, use this line:
+
+.nf
+fence_all /usr/sbin/dlm_stonith
+.fi
+
+The "dlm_stonith" binary will be executed for all nodes. If
+stonith/pacemaker systems are not available, dlm_stonith will fail and
+this config becomes the equivalent of the previous /bin/false config.
+
+.B *
+To use an APC power switch, use these lines:
+
+.nf
+device apc /usr/sbin/fence_apc ipaddr=1.1.1.1 login=admin password=pw
+connect apc node=1 port=1
+connect apc node=2 port=2
+connect apc node=3 port=3
+.fi
+
+Other network switch-based agents are configured similarly.
+
+.B *
+To use sanlock/watchdog fencing, use these lines:
+
+.nf
+device wd /usr/sbin/fence_sanlock path=/dev/fence/leases
+connect wd node=1 host_id=1
+connect wd node=2 host_id=2
+unfence wd
+.fi
+
+See fence_sanlock(8) for more information.
+
+.B *
+For other fencing configurations, see the dlm.conf(5) man page.
+
+.PP
+
+.B 4. start dlm_controld on all nodes
+
+.nf
+systemctl start dlm
+.fi
+
+Run "dlm_tool status" to verify that all nodes are listed.
+
+.PP
+
+.B 5. if using clvm, start clvmd on all nodes
+
+systemctl start clvmd
+
+.PP
+
+.B 6. make new gfs2 file systems
+
+mkfs.gfs2 -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage
+
+The cluster_name must match the name used in step 1 above.
+The fs_name must be a unique name in the cluster.
+The -j option is the number of journals to create; there must
+be one for each node that will mount the fs.
+
+.PP
+
+.B 7. mount gfs2 file systems
+
+mount /path/to/storage /mountpoint
+
+Run "dlm_tool ls" to verify the nodes that have each fs mounted.
+
+.PP
+
+.B 8. shut down
+
+.nf
+umount -a -t gfs2
+systemctl stop clvmd
+systemctl stop dlm
+systemctl stop corosync
+.fi
+
+.PP
+
+.B More setup information:
+.br
+.BR dlm_controld (8),
+.br
+.BR dlm_tool (8),
+.br
+.BR dlm.conf (5),
+.br
+.BR corosync (8),
+.br
+.BR corosync.conf (5)
+.br
+
--
1.8.1.rc1.5.g7e0651a
* [Cluster-devel] [PATCH] gfs2: add native setup to man page
From: Andrew Price @ 2013-05-15 9:51 UTC
To: cluster-devel.redhat.com
On 14/05/13 18:50, David Teigland wrote:
> List the simplest sequence of steps to manually
> set up and run gfs2/dlm.
Pushed to master with some "trailing whitespace error" fixes, thanks.
Andy