From: Simon Horman <horms@verge.net.au>
To: lvs-devel@vger.kernel.org, netdev@vger.kernel.org,
netfilter-devel@vger.kernel.org
Cc: Stephen Hemminger <shemminger@vyatta.com>,
Wensong Zhang <wensong@linux-vs.org>,
Julian Anastasov <ja@ssi.bg>
Subject: [rfc] IPVS: convert scheduler management to RCU
Date: Fri, 20 Aug 2010 22:33:21 +0900
Message-ID: <20100820133320.GA29311@verge.net.au>
Signed-off-by: Simon Horman <horms@verge.net.au>
---
I'm still getting my head around RCU, so review would be greatly appreciated.
It also occurs to me that this code is not performance critical, so
perhaps simply replacing the rwlock with a spinlock, and skipping the
RCU conversion entirely, would be the better option?
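For reviewers less familiar with the idiom: the patch below moves the
scheduler list to the usual RCU-protected list pattern, where readers
traverse under rcu_read_lock_bh() and writers serialize only against
each other with a spinlock. A minimal sketch of that pattern (the names
here are illustrative, not taken from the patch):

```
	/* Read side: no writer exclusion needed; traversal is safe
	 * against concurrent list_add_rcu()/list_del_rcu().  Any
	 * reference that must outlive the read section (e.g. a module
	 * reference) has to be taken before unlocking. */
	rcu_read_lock_bh();
	list_for_each_entry_rcu(pos, &head, n_list) {
		/* ... inspect pos ... */
	}
	rcu_read_unlock_bh();

	/* Write side: the lock serializes writers against each other,
	 * not against readers. */
	spin_lock_bh(&lock);
	list_add_rcu(&new->n_list, &head);
	spin_unlock_bh(&lock);
```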
Index: nf-next-2.6/net/netfilter/ipvs/ip_vs_sched.c
===================================================================
--- nf-next-2.6.orig/net/netfilter/ipvs/ip_vs_sched.c 2010-08-20 22:21:01.000000000 +0900
+++ nf-next-2.6/net/netfilter/ipvs/ip_vs_sched.c 2010-08-20 22:21:51.000000000 +0900
@@ -35,7 +35,7 @@
static LIST_HEAD(ip_vs_schedulers);
/* lock for service table */
-static DEFINE_RWLOCK(__ip_vs_sched_lock);
+static DEFINE_SPINLOCK(ip_vs_sched_mutex);
/*
@@ -91,9 +91,9 @@ static struct ip_vs_scheduler *ip_vs_sch
IP_VS_DBG(2, "%s(): sched_name \"%s\"\n", __func__, sched_name);
- read_lock_bh(&__ip_vs_sched_lock);
+ rcu_read_lock_bh();
- list_for_each_entry(sched, &ip_vs_schedulers, n_list) {
+ list_for_each_entry_rcu(sched, &ip_vs_schedulers, n_list) {
/*
* Test and get the modules atomically
*/
@@ -105,14 +105,14 @@ static struct ip_vs_scheduler *ip_vs_sch
}
if (strcmp(sched_name, sched->name)==0) {
/* HIT */
- read_unlock_bh(&__ip_vs_sched_lock);
+ rcu_read_unlock_bh();
return sched;
}
if (sched->module)
module_put(sched->module);
}
- read_unlock_bh(&__ip_vs_sched_lock);
+ rcu_read_unlock_bh();
return NULL;
}
@@ -167,10 +167,10 @@ int register_ip_vs_scheduler(struct ip_v
/* increase the module use count */
ip_vs_use_count_inc();
- write_lock_bh(&__ip_vs_sched_lock);
+ spin_lock_bh(&ip_vs_sched_mutex);
if (!list_empty(&scheduler->n_list)) {
- write_unlock_bh(&__ip_vs_sched_lock);
+ spin_unlock_bh(&ip_vs_sched_mutex);
ip_vs_use_count_dec();
pr_err("%s(): [%s] scheduler already linked\n",
__func__, scheduler->name);
@@ -181,9 +181,9 @@ int register_ip_vs_scheduler(struct ip_v
* Make sure that the scheduler with this name doesn't exist
* in the scheduler list.
*/
- list_for_each_entry(sched, &ip_vs_schedulers, n_list) {
+ list_for_each_entry_rcu(sched, &ip_vs_schedulers, n_list) {
if (strcmp(scheduler->name, sched->name) == 0) {
- write_unlock_bh(&__ip_vs_sched_lock);
+ spin_unlock_bh(&ip_vs_sched_mutex);
ip_vs_use_count_dec();
pr_err("%s(): [%s] scheduler already existed "
"in the system\n", __func__, scheduler->name);
@@ -193,8 +193,8 @@ int register_ip_vs_scheduler(struct ip_v
/*
* Add it into the d-linked scheduler list
*/
- list_add(&scheduler->n_list, &ip_vs_schedulers);
- write_unlock_bh(&__ip_vs_sched_lock);
+ list_add_rcu(&scheduler->n_list, &ip_vs_schedulers);
+ spin_unlock_bh(&ip_vs_sched_mutex);
pr_info("[%s] scheduler registered.\n", scheduler->name);
@@ -212,9 +212,9 @@ int unregister_ip_vs_scheduler(struct ip
return -EINVAL;
}
- write_lock_bh(&__ip_vs_sched_lock);
+ spin_lock_bh(&ip_vs_sched_mutex);
if (list_empty(&scheduler->n_list)) {
- write_unlock_bh(&__ip_vs_sched_lock);
+ spin_unlock_bh(&ip_vs_sched_mutex);
pr_err("%s(): [%s] scheduler is not in the list. failed\n",
__func__, scheduler->name);
return -EINVAL;
@@ -223,8 +223,8 @@ int unregister_ip_vs_scheduler(struct ip
/*
* Remove it from the d-linked scheduler list
*/
- list_del(&scheduler->n_list);
- write_unlock_bh(&__ip_vs_sched_lock);
+ list_del_rcu(&scheduler->n_list);
+ spin_unlock_bh(&ip_vs_sched_mutex);
/* decrease the module use count */
ip_vs_use_count_dec();
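Two RCU details I am unsure about and would like reviewers to check
(the snippet below is an illustrative sketch, not part of the patch).
First, after list_del_rcu() a reader may still be traversing the entry,
so unregister_ip_vs_scheduler() presumably needs to wait out a grace
period before the caller can free the scheduler or unload its module;
since the read side uses rcu_read_lock_bh(), the matching primitive
would be synchronize_rcu_bh() rather than synchronize_rcu(). Something
like:

```
	list_del_rcu(&scheduler->n_list);
	spin_unlock_bh(&ip_vs_sched_mutex);

	/* Readers that found the entry before list_del_rcu() may still
	 * be dereferencing it; wait for them before the scheduler can
	 * go away. */
	synchronize_rcu_bh();
```

Second, the duplicate-name lookup in register_ip_vs_scheduler() runs
entirely under the writer spinlock, so the plain list_for_each_entry()
should suffice there; the _rcu variant is harmless but unnecessary.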