From: Alexander Aring <aahringo@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] [PATCHv2 dlm-tool 4/4] dlm_controld: add support for waitplock_recovery switch
Date: Fri, 26 Jun 2020 12:44:46 -0400 [thread overview]
Message-ID: <20200626164446.114220-5-aahringo@redhat.com> (raw)
In-Reply-To: <20200626164446.114220-1-aahringo@redhat.com>
This patch adds support for setting the cluster attribute waitplock_recovery
via the enable_waitplock_recovery command-line option or config file attribute.
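
For illustration, the switch could then be set either way (a sketch; the
exact key=value and option syntax follows dlm.conf(5) and dlm_controld(8),
and the value shown is assumed):

    # in dlm.conf
    enable_waitplock_recovery=1

or on the daemon command line:

    dlm_controld --enable_waitplock_recovery 1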
---
dlm_controld/action.c | 5 +++++
dlm_controld/dlm.conf.5 | 2 ++
dlm_controld/dlm_daemon.h | 1 +
dlm_controld/main.c | 5 +++++
4 files changed, 13 insertions(+)
diff --git a/dlm_controld/action.c b/dlm_controld/action.c
index 126e3b62..63040227 100644
--- a/dlm_controld/action.c
+++ b/dlm_controld/action.c
@@ -876,6 +876,11 @@ int setup_configfs_options(void)
dlm_options[timewarn_ind].file_set)
set_configfs_cluster("timewarn_cs", NULL, opt(timewarn_ind));
+ if (dlm_options[enable_waitplock_recovery_ind].cli_set ||
+ dlm_options[enable_waitplock_recovery_ind].file_set)
+ set_configfs_cluster("waitplock_recovery", NULL,
+ opt(enable_waitplock_recovery_ind));
+
set_configfs_cluster("mark", NULL, optu(mark_ind));
proto_name = opts(protocol_ind);
diff --git a/dlm_controld/dlm.conf.5 b/dlm_controld/dlm.conf.5
index 1ce0c644..e92dfc8e 100644
--- a/dlm_controld/dlm.conf.5
+++ b/dlm_controld/dlm.conf.5
@@ -46,6 +46,8 @@ debug_logfile
.br
enable_plock
.br
+enable_waitplock_recovery
+.br
plock_debug
.br
plock_rate_limit
diff --git a/dlm_controld/dlm_daemon.h b/dlm_controld/dlm_daemon.h
index 9e7a5fbf..979aab7a 100644
--- a/dlm_controld/dlm_daemon.h
+++ b/dlm_controld/dlm_daemon.h
@@ -102,6 +102,7 @@ enum {
mark_ind,
enable_fscontrol_ind,
enable_plock_ind,
+ enable_waitplock_recovery_ind,
plock_debug_ind,
plock_rate_limit_ind,
plock_ownership_ind,
diff --git a/dlm_controld/main.c b/dlm_controld/main.c
index b330f88d..3ec318c2 100644
--- a/dlm_controld/main.c
+++ b/dlm_controld/main.c
@@ -1752,6 +1752,11 @@ static void set_opt_defaults(void)
1, NULL,
"enable/disable posix lock support for cluster fs");
+ set_opt_default(enable_waitplock_recovery_ind,
+ "enable_waitplock_recovery", '\0', req_arg_bool,
+ 1, NULL,
+ "enable/disable posix lock to wait for dlm recovery after lock acquire");
+
set_opt_default(plock_debug_ind,
"plock_debug", 'P', no_arg,
0, NULL,
--
2.26.2
Thread overview: 6+ messages
2020-06-26 16:44 [Cluster-devel] [PATCHv2 dlm-tool 0/4] dlm_controld: support for mark and waitplock_recovery Alexander Aring
2020-06-26 16:44 ` [Cluster-devel] [PATCHv2 dlm-tool 1/4] dlm_controld: add support for unsigned int values Alexander Aring
2020-06-26 16:44 ` [Cluster-devel] [PATCHv2 dlm-tool 2/4] dlm_controld: set listen skb mark setting Alexander Aring
2020-06-26 16:44 ` [Cluster-devel] [PATCHv2 dlm-tool 3/4] dlm_controld: add support for per nodeid configuration Alexander Aring
2020-06-26 16:44 ` Alexander Aring [this message]
2020-07-08 21:20 ` [Cluster-devel] [PATCHv2 dlm-tool 4/4] dlm_controld: add support for waitplock_recovery switch Alexander Ahring Oder Aring