* [PATCH net-next 00/10] clean up bearer and node layer
@ 2014-03-27 4:54 Ying Xue
2014-03-27 4:54 ` [PATCH net-next 01/10] tipc: remove unnecessary checking for node object Ying Xue
` (10 more replies)
0 siblings, 11 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
This is another commit series aimed at facilitating future changes
to the locking policy around nodes, links and bearers.
Currently, the tipc routing hierarchy comprises the structures 'node',
'link' and 'bearer'. The whole hierarchy is protected by a big
read/write lock (tipc_net_lock), to ensure that nothing is added or
removed while any of these structures is being accessed. This locking
policy binds the node, link and bearer components tightly together,
making their relationships extremely complex. In the worst case it
not only hurts performance, but may also occasionally lead to
deadlock.
To decouple the complex relationships among bearer, node and link,
the locking policy is adjusted as follows:
- Bearer level
The RTNL lock is used on the update side, and RCU on the read
side. All bearer instances, including the broadcast bearer, are
saved in the bearer_list array.
- Node and link level
All node instances are saved in two lists: tipc_node_list and
node_htable. Both lists are protected by node_list_lock on the
write side and by RCU on the read side. All members of the node
structure, including the link instances, are protected by the
node's spin lock.
- The relationship between bearer and node
When a link needs to access a bearer, it looks the bearer up in
the bearer_list array by bearer identity. When a bearer needs to
access a node, it looks the node up in the node_htable hash list
by node address.
Under the new locking policy every component has its own locking
solution, and the relationship between bearer and node becomes very
simple: each can find the other by node address or bearer identity,
through the node_htable hash list or the bearer_list array respectively.
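As a rough illustration of the target pattern (a sketch only; bearer_id
and addr stand in for values taken from the link or the received
message, and the RCU protection itself arrives with the later series):

    /* link -> bearer: index bearer_list[] by bearer identity */
    struct tipc_bearer *b_ptr = bearer_list[bearer_id];

    /* bearer -> node: look the node up by its network address */
    struct tipc_node *n_ptr = tipc_node_find(addr);

    if (n_ptr) {
        tipc_node_lock(n_ptr);
        /* node members, including links[], are safe to use here */
        tipc_node_unlock(n_ptr);
    }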
Prior to these changes, however, we need to do some necessary cleanup
and code consolidation, which is what this series does. In a later
series we will replace tipc_net_lock with RTNL plus RCU to deploy the
new locking policy.
Ying Xue (10):
tipc: remove unnecessary checking for node object
tipc: obsolete the remote management feature
tipc: acquire necessary locks in named_cluster_distribute routine
tipc: convert tipc_bearers array to pointer list
tipc: remove active flag from tipc_bearer structure
tipc: make broadcast bearer store in bearer_list array
tipc: rename node create lock to protect node list and hlist
tipc: convert node list and node hlist to RCU lists
tipc: use node_list_lock to protect tipc_num_nodes variable
tipc: use node list lock to protect tipc_num_links variable
net/tipc/bcast.c | 13 +++---
net/tipc/bearer.c | 49 ++++++++++++++++------
net/tipc/bearer.h | 4 +-
net/tipc/config.c | 107 ++-----------------------------------------------
net/tipc/config.h | 5 ---
net/tipc/core.c | 9 -----
net/tipc/core.h | 1 -
net/tipc/link.c | 20 ++++-----
net/tipc/name_distr.c | 16 ++++++--
net/tipc/net.c | 9 ++---
net/tipc/node.c | 107 ++++++++++++++++++++++++-------------------------
net/tipc/node.h | 6 ++-
12 files changed, 129 insertions(+), 217 deletions(-)
--
1.7.9.5
* [PATCH net-next 01/10] tipc: remove unnecessary checking for node object
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 02/10] tipc: obsolete the remote management feature Ying Xue
` (9 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
The tipc_node_create() routine doesn't need to check whether a node
object with the given node address already exists, because its
caller (i.e., the tipc_disc_recv_msg() routine) has already checked
this before calling it.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/node.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/net/tipc/node.c b/net/tipc/node.c
index 0b0f6c7..7c9b667 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -95,12 +95,6 @@ struct tipc_node *tipc_node_create(u32 addr)
spin_lock_bh(&node_create_lock);
- n_ptr = tipc_node_find(addr);
- if (n_ptr) {
- spin_unlock_bh(&node_create_lock);
- return n_ptr;
- }
-
n_ptr = kzalloc(sizeof(*n_ptr), GFP_ATOMIC);
if (!n_ptr) {
spin_unlock_bh(&node_create_lock);
--
1.7.9.5
* [PATCH net-next 02/10] tipc: obsolete the remote management feature
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
2014-03-27 4:54 ` [PATCH net-next 01/10] tipc: remove unnecessary checking for node object Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 03/10] tipc: acquire necessary locks in named_cluster_distribute routine Ying Xue
` (8 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
Since remote management requests carry no credentials at all,
commands from remote nodes querying the local node's status are
accepted unconditionally, which poses a potential security risk.
Logging in to a remote node with ssh instead is not only safer than
the remote management feature, but also grants broader permissions,
such as changing the remote node's configuration. So it's
reasonable for us to obsolete the remote management feature now.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
---
net/tipc/config.c | 107 ++---------------------------------------------------
net/tipc/config.h | 5 ---
net/tipc/core.c | 9 -----
net/tipc/core.h | 1 -
net/tipc/net.c | 2 -
5 files changed, 3 insertions(+), 121 deletions(-)
diff --git a/net/tipc/config.c b/net/tipc/config.c
index e6d7216..4b981c0 100644
--- a/net/tipc/config.c
+++ b/net/tipc/config.c
@@ -43,13 +43,11 @@
#define REPLY_TRUNCATED "<truncated>\n"
static DEFINE_MUTEX(config_mutex);
-static struct tipc_server cfgsrv;
static const void *req_tlv_area; /* request message TLV area */
static int req_tlv_space; /* request message TLV area size */
static int rep_headroom; /* reply message headroom to use */
-
struct sk_buff *tipc_cfg_reply_alloc(int payload_size)
{
struct sk_buff *buf;
@@ -185,18 +183,6 @@ static struct sk_buff *cfg_set_own_addr(void)
return tipc_cfg_reply_none();
}
-static struct sk_buff *cfg_set_remote_mng(void)
-{
- u32 value;
-
- if (!TLV_CHECK(req_tlv_area, req_tlv_space, TIPC_TLV_UNSIGNED))
- return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR);
-
- value = ntohl(*(__be32 *)TLV_DATA(req_tlv_area));
- tipc_remote_management = (value != 0);
- return tipc_cfg_reply_none();
-}
-
static struct sk_buff *cfg_set_max_ports(void)
{
u32 value;
@@ -247,21 +233,10 @@ struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area
/* Check command authorization */
if (likely(in_own_node(orig_node))) {
/* command is permitted */
- } else if (cmd >= 0x8000) {
+ } else {
rep_tlv_buf = tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (cannot be done remotely)");
goto exit;
- } else if (!tipc_remote_management) {
- rep_tlv_buf = tipc_cfg_reply_error_string(TIPC_CFG_NO_REMOTE);
- goto exit;
- } else if (cmd >= 0x4000) {
- u32 domain = 0;
-
- if ((tipc_nametbl_translate(TIPC_ZM_SRV, 0, &domain) == 0) ||
- (domain != orig_node)) {
- rep_tlv_buf = tipc_cfg_reply_error_string(TIPC_CFG_NOT_ZONE_MSTR);
- goto exit;
- }
}
/* Call appropriate processing routine */
@@ -310,18 +285,12 @@ struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area
case TIPC_CMD_SET_NODE_ADDR:
rep_tlv_buf = cfg_set_own_addr();
break;
- case TIPC_CMD_SET_REMOTE_MNG:
- rep_tlv_buf = cfg_set_remote_mng();
- break;
case TIPC_CMD_SET_MAX_PORTS:
rep_tlv_buf = cfg_set_max_ports();
break;
case TIPC_CMD_SET_NETID:
rep_tlv_buf = cfg_set_netid();
break;
- case TIPC_CMD_GET_REMOTE_MNG:
- rep_tlv_buf = tipc_cfg_reply_unsigned(tipc_remote_management);
- break;
case TIPC_CMD_GET_MAX_PORTS:
rep_tlv_buf = tipc_cfg_reply_unsigned(tipc_max_ports);
break;
@@ -345,6 +314,8 @@ struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area
case TIPC_CMD_SET_MAX_PUBL:
case TIPC_CMD_GET_MAX_PUBL:
case TIPC_CMD_SET_LOG_SIZE:
+ case TIPC_CMD_SET_REMOTE_MNG:
+ case TIPC_CMD_GET_REMOTE_MNG:
case TIPC_CMD_DUMP_LOG:
rep_tlv_buf = tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (obsolete command)");
@@ -369,75 +340,3 @@ exit:
mutex_unlock(&config_mutex);
return rep_tlv_buf;
}
-
-static void cfg_conn_msg_event(int conid, struct sockaddr_tipc *addr,
- void *usr_data, void *buf, size_t len)
-{
- struct tipc_cfg_msg_hdr *req_hdr;
- struct tipc_cfg_msg_hdr *rep_hdr;
- struct sk_buff *rep_buf;
-
- /* Validate configuration message header (ignore invalid message) */
- req_hdr = (struct tipc_cfg_msg_hdr *)buf;
- if ((len < sizeof(*req_hdr)) ||
- (len != TCM_ALIGN(ntohl(req_hdr->tcm_len))) ||
- (ntohs(req_hdr->tcm_flags) != TCM_F_REQUEST)) {
- pr_warn("Invalid configuration message discarded\n");
- return;
- }
-
- /* Generate reply for request (if can't, return request) */
- rep_buf = tipc_cfg_do_cmd(addr->addr.id.node, ntohs(req_hdr->tcm_type),
- buf + sizeof(*req_hdr),
- len - sizeof(*req_hdr),
- BUF_HEADROOM + MAX_H_SIZE + sizeof(*rep_hdr));
- if (rep_buf) {
- skb_push(rep_buf, sizeof(*rep_hdr));
- rep_hdr = (struct tipc_cfg_msg_hdr *)rep_buf->data;
- memcpy(rep_hdr, req_hdr, sizeof(*rep_hdr));
- rep_hdr->tcm_len = htonl(rep_buf->len);
- rep_hdr->tcm_flags &= htons(~TCM_F_REQUEST);
- tipc_conn_sendmsg(&cfgsrv, conid, addr, rep_buf->data,
- rep_buf->len);
- kfree_skb(rep_buf);
- }
-}
-
-static struct sockaddr_tipc cfgsrv_addr __read_mostly = {
- .family = AF_TIPC,
- .addrtype = TIPC_ADDR_NAMESEQ,
- .addr.nameseq.type = TIPC_CFG_SRV,
- .addr.nameseq.lower = 0,
- .addr.nameseq.upper = 0,
- .scope = TIPC_ZONE_SCOPE
-};
-
-static struct tipc_server cfgsrv __read_mostly = {
- .saddr = &cfgsrv_addr,
- .imp = TIPC_CRITICAL_IMPORTANCE,
- .type = SOCK_RDM,
- .max_rcvbuf_size = 64 * 1024,
- .name = "cfg_server",
- .tipc_conn_recvmsg = cfg_conn_msg_event,
- .tipc_conn_new = NULL,
- .tipc_conn_shutdown = NULL
-};
-
-int tipc_cfg_init(void)
-{
- return tipc_server_start(&cfgsrv);
-}
-
-void tipc_cfg_reinit(void)
-{
- tipc_server_stop(&cfgsrv);
-
- cfgsrv_addr.addr.nameseq.lower = tipc_own_addr;
- cfgsrv_addr.addr.nameseq.upper = tipc_own_addr;
- tipc_server_start(&cfgsrv);
-}
-
-void tipc_cfg_stop(void)
-{
- tipc_server_stop(&cfgsrv);
-}
diff --git a/net/tipc/config.h b/net/tipc/config.h
index 1f252f3..47b1bf1 100644
--- a/net/tipc/config.h
+++ b/net/tipc/config.h
@@ -64,9 +64,4 @@ static inline struct sk_buff *tipc_cfg_reply_ultra_string(char *string)
struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd,
const void *req_tlv_area, int req_tlv_space,
int headroom);
-
-int tipc_cfg_init(void);
-void tipc_cfg_reinit(void);
-void tipc_cfg_stop(void);
-
#endif
diff --git a/net/tipc/core.c b/net/tipc/core.c
index e2491b3..50d5742 100644
--- a/net/tipc/core.c
+++ b/net/tipc/core.c
@@ -50,7 +50,6 @@ int tipc_random __read_mostly;
u32 tipc_own_addr __read_mostly;
int tipc_max_ports __read_mostly;
int tipc_net_id __read_mostly;
-int tipc_remote_management __read_mostly;
int sysctl_tipc_rmem[3] __read_mostly; /* min/default/max */
/**
@@ -85,7 +84,6 @@ static void tipc_core_stop(void)
tipc_net_stop();
tipc_bearer_cleanup();
tipc_netlink_stop();
- tipc_cfg_stop();
tipc_subscr_stop();
tipc_nametbl_stop();
tipc_ref_table_stop();
@@ -130,18 +128,12 @@ static int tipc_core_start(void)
if (err)
goto out_subscr;
- err = tipc_cfg_init();
- if (err)
- goto out_cfg;
-
err = tipc_bearer_setup();
if (err)
goto out_bearer;
return 0;
out_bearer:
- tipc_cfg_stop();
-out_cfg:
tipc_subscr_stop();
out_subscr:
tipc_unregister_sysctl();
@@ -166,7 +158,6 @@ static int __init tipc_init(void)
pr_info("Activated (version " TIPC_MOD_VER ")\n");
tipc_own_addr = 0;
- tipc_remote_management = 1;
tipc_max_ports = CONFIG_TIPC_PORTS;
tipc_net_id = 4711;
diff --git a/net/tipc/core.h b/net/tipc/core.h
index 4dfe137..8985bbc 100644
--- a/net/tipc/core.h
+++ b/net/tipc/core.h
@@ -79,7 +79,6 @@ int tipc_snprintf(char *buf, int len, const char *fmt, ...);
extern u32 tipc_own_addr __read_mostly;
extern int tipc_max_ports __read_mostly;
extern int tipc_net_id __read_mostly;
-extern int tipc_remote_management __read_mostly;
extern int sysctl_tipc_rmem[3] __read_mostly;
/*
diff --git a/net/tipc/net.c b/net/tipc/net.c
index 31b606e..bb171c3 100644
--- a/net/tipc/net.c
+++ b/net/tipc/net.c
@@ -182,8 +182,6 @@ void tipc_net_start(u32 addr)
tipc_bclink_init();
write_unlock_bh(&tipc_net_lock);
- tipc_cfg_reinit();
-
pr_info("Started in network mode\n");
pr_info("Own node address %s, network identity %u\n",
tipc_addr_string_fill(addr_string, tipc_own_addr), tipc_net_id);
--
1.7.9.5
* [PATCH net-next 03/10] tipc: acquire necessary locks in named_cluster_distribute routine
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
2014-03-27 4:54 ` [PATCH net-next 01/10] tipc: remove unnecessary checking for node object Ying Xue
2014-03-27 4:54 ` [PATCH net-next 02/10] tipc: obsolete the remote management feature Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 04/10] tipc: convert tipc_bearers array to pointer list Ying Xue
` (7 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
The tipc_node_list is guarded by tipc_net_lock, and the 'links'
array defined in the 'tipc_node' structure is protected by the node
lock. If named_cluster_distribute() acquires neither of them, a
destroyed link instance may be fetched and then accessed, causing a
fatal oops. Therefore, both locks must be held in
named_cluster_distribute() to prevent the issue from happening.
As the 'links' array in the node structure must be protected by the
node lock, we have to move the selection of an active link from
tipc_link_xmit() to named_cluster_distribute(), and then call
__tipc_link_xmit() with the selected link to deliver the name
messages.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/name_distr.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
index 893c49a..c5904d1 100644
--- a/net/tipc/name_distr.c
+++ b/net/tipc/name_distr.c
@@ -131,16 +131,24 @@ static void named_cluster_distribute(struct sk_buff *buf)
{
struct sk_buff *buf_copy;
struct tipc_node *n_ptr;
+ struct tipc_link *l_ptr;
+ read_lock_bh(&tipc_net_lock);
list_for_each_entry(n_ptr, &tipc_node_list, list) {
- if (tipc_node_active_links(n_ptr)) {
+ spin_lock_bh(&n_ptr->lock);
+ l_ptr = n_ptr->active_links[n_ptr->addr & 1];
+ if (l_ptr) {
buf_copy = skb_copy(buf, GFP_ATOMIC);
- if (!buf_copy)
+ if (!buf_copy) {
+ spin_unlock_bh(&n_ptr->lock);
break;
+ }
msg_set_destnode(buf_msg(buf_copy), n_ptr->addr);
- tipc_link_xmit(buf_copy, n_ptr->addr, n_ptr->addr);
+ __tipc_link_xmit(l_ptr, buf_copy);
}
+ spin_unlock_bh(&n_ptr->lock);
}
+ read_unlock_bh(&tipc_net_lock);
kfree_skb(buf);
}
--
1.7.9.5
* [PATCH net-next 04/10] tipc: convert tipc_bearers array to pointer list
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
` (2 preceding siblings ...)
2014-03-27 4:54 ` [PATCH net-next 03/10] tipc: acquire necessary locks in named_cluster_distribute routine Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 05/10] tipc: remove active flag from tipc_bearer structure Ying Xue
` (6 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
As part of the effort to introduce RCU protection for the bearer
list, we first need to change it from an array of structures to an
array of pointers, so that each bearer instance can be allocated and
freed individually, which is a prerequisite for deferring frees
under RCU.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/bcast.c | 5 ++---
net/tipc/bearer.c | 46 +++++++++++++++++++++++++++++++++++-----------
net/tipc/bearer.h | 2 +-
3 files changed, 38 insertions(+), 15 deletions(-)
diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
index e0feb7e..b4f8c62 100644
--- a/net/tipc/bcast.c
+++ b/net/tipc/bcast.c
@@ -668,9 +668,8 @@ void tipc_bcbearer_sort(void)
memset(bp_temp, 0, sizeof(bcbearer->bpairs_temp));
for (b_index = 0; b_index < MAX_BEARERS; b_index++) {
- struct tipc_bearer *b = &tipc_bearers[b_index];
-
- if (!b->active || !b->nodes.count)
+ struct tipc_bearer *b = bearer_list[b_index];
+ if (!b || !b->active || !b->nodes.count)
continue;
if (!bp_temp[b->priority].primary)
diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
index 7f1f95c..7ff98ef 100644
--- a/net/tipc/bearer.c
+++ b/net/tipc/bearer.c
@@ -49,7 +49,7 @@ static struct tipc_media * const media_info_array[] = {
NULL
};
-struct tipc_bearer tipc_bearers[MAX_BEARERS];
+struct tipc_bearer *bearer_list[MAX_BEARERS];
static void bearer_disable(struct tipc_bearer *b_ptr, bool shutting_down);
@@ -177,8 +177,9 @@ struct tipc_bearer *tipc_bearer_find(const char *name)
struct tipc_bearer *b_ptr;
u32 i;
- for (i = 0, b_ptr = tipc_bearers; i < MAX_BEARERS; i++, b_ptr++) {
- if (b_ptr->active && (!strcmp(b_ptr->name, name)))
+ for (i = 0; i < MAX_BEARERS; i++) {
+ b_ptr = bearer_list[i];
+ if (b_ptr && b_ptr->active && (!strcmp(b_ptr->name, name)))
return b_ptr;
}
return NULL;
@@ -200,7 +201,9 @@ struct sk_buff *tipc_bearer_get_names(void)
read_lock_bh(&tipc_net_lock);
for (i = 0; media_info_array[i] != NULL; i++) {
for (j = 0; j < MAX_BEARERS; j++) {
- b = &tipc_bearers[j];
+ b = bearer_list[j];
+ if (!b)
+ continue;
if (b->active && (b->media == media_info_array[i])) {
tipc_cfg_append_tlv(buf, TIPC_TLV_BEARER_NAME,
b->name,
@@ -284,16 +287,17 @@ restart:
bearer_id = MAX_BEARERS;
with_this_prio = 1;
for (i = MAX_BEARERS; i-- != 0; ) {
- if (!tipc_bearers[i].active) {
+ b_ptr = bearer_list[i];
+ if (!b_ptr || !b_ptr->active) {
bearer_id = i;
continue;
}
- if (!strcmp(name, tipc_bearers[i].name)) {
+ if (!strcmp(name, b_ptr->name)) {
pr_warn("Bearer <%s> rejected, already enabled\n",
name);
goto exit;
}
- if ((tipc_bearers[i].priority == priority) &&
+ if ((b_ptr->priority == priority) &&
(++with_this_prio > 2)) {
if (priority-- == 0) {
pr_warn("Bearer <%s> rejected, duplicate priority\n",
@@ -311,7 +315,11 @@ restart:
goto exit;
}
- b_ptr = &tipc_bearers[bearer_id];
+ b_ptr = kzalloc(sizeof(*b_ptr), GFP_ATOMIC);
+ if (!b_ptr) {
+ res = -ENOMEM;
+ goto exit;
+ }
strcpy(b_ptr->name, name);
b_ptr->media = m_ptr;
res = m_ptr->enable_media(b_ptr);
@@ -335,6 +343,9 @@ restart:
name);
goto exit;
}
+
+ bearer_list[bearer_id] = b_ptr;
+
pr_info("Enabled bearer <%s>, discovery domain %s, priority %u\n",
name,
tipc_addr_string_fill(addr_string, disc_domain), priority);
@@ -362,13 +373,22 @@ static int tipc_reset_bearer(struct tipc_bearer *b_ptr)
*/
static void bearer_disable(struct tipc_bearer *b_ptr, bool shutting_down)
{
+ u32 i;
+
pr_info("Disabling bearer <%s>\n", b_ptr->name);
b_ptr->media->disable_media(b_ptr);
tipc_link_delete_list(b_ptr->identity, shutting_down);
if (b_ptr->link_req)
tipc_disc_delete(b_ptr->link_req);
- memset(b_ptr, 0, sizeof(struct tipc_bearer));
+
+ for (i = 0; i < MAX_BEARERS; i++) {
+ if (b_ptr == bearer_list[i]) {
+ bearer_list[i] = NULL;
+ break;
+ }
+ }
+ kfree(b_ptr);
}
int tipc_disable_bearer(const char *name)
@@ -603,10 +623,14 @@ void tipc_bearer_cleanup(void)
void tipc_bearer_stop(void)
{
+ struct tipc_bearer *b_ptr;
u32 i;
for (i = 0; i < MAX_BEARERS; i++) {
- if (tipc_bearers[i].active)
- bearer_disable(&tipc_bearers[i], true);
+ b_ptr = bearer_list[i];
+ if (b_ptr && b_ptr->active) {
+ bearer_disable(b_ptr, true);
+ bearer_list[i] = NULL;
+ }
}
}
diff --git a/net/tipc/bearer.h b/net/tipc/bearer.h
index 425dd81..f4e72ca 100644
--- a/net/tipc/bearer.h
+++ b/net/tipc/bearer.h
@@ -150,7 +150,7 @@ struct tipc_bearer_names {
struct tipc_link;
-extern struct tipc_bearer tipc_bearers[];
+extern struct tipc_bearer *bearer_list[];
/*
* TIPC routines available to supported media types
--
1.7.9.5
* [PATCH net-next 05/10] tipc: remove active flag from tipc_bearer structure
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
` (3 preceding siblings ...)
2014-03-27 4:54 ` [PATCH net-next 04/10] tipc: convert tipc_bearers array to pointer list Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 06/10] tipc: make broadcast bearer store in bearer_list array Ying Xue
` (5 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
Now that tipc_bearer structure instances are allocated dynamically
rather than statically, we can tell whether a tipc_bearer pointer is
valid simply by checking whether it is NULL. The active flag in the
tipc_bearer structure has therefore become redundant.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/bcast.c | 2 +-
net/tipc/bearer.c | 9 ++++-----
net/tipc/bearer.h | 2 --
net/tipc/link.c | 4 ----
4 files changed, 5 insertions(+), 12 deletions(-)
diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
index b4f8c62..47bb07a 100644
--- a/net/tipc/bcast.c
+++ b/net/tipc/bcast.c
@@ -669,7 +669,7 @@ void tipc_bcbearer_sort(void)
for (b_index = 0; b_index < MAX_BEARERS; b_index++) {
struct tipc_bearer *b = bearer_list[b_index];
- if (!b || !b->active || !b->nodes.count)
+ if (!b || !b->nodes.count)
continue;
if (!bp_temp[b->priority].primary)
diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
index 7ff98ef..826b701 100644
--- a/net/tipc/bearer.c
+++ b/net/tipc/bearer.c
@@ -179,7 +179,7 @@ struct tipc_bearer *tipc_bearer_find(const char *name)
for (i = 0; i < MAX_BEARERS; i++) {
b_ptr = bearer_list[i];
- if (b_ptr && b_ptr->active && (!strcmp(b_ptr->name, name)))
+ if (b_ptr && (!strcmp(b_ptr->name, name)))
return b_ptr;
}
return NULL;
@@ -204,7 +204,7 @@ struct sk_buff *tipc_bearer_get_names(void)
b = bearer_list[j];
if (!b)
continue;
- if (b->active && (b->media == media_info_array[i])) {
+ if (b->media == media_info_array[i]) {
tipc_cfg_append_tlv(buf, TIPC_TLV_BEARER_NAME,
b->name,
strlen(b->name) + 1);
@@ -288,7 +288,7 @@ restart:
with_this_prio = 1;
for (i = MAX_BEARERS; i-- != 0; ) {
b_ptr = bearer_list[i];
- if (!b_ptr || !b_ptr->active) {
+ if (!b_ptr) {
bearer_id = i;
continue;
}
@@ -333,7 +333,6 @@ restart:
b_ptr->tolerance = m_ptr->tolerance;
b_ptr->window = m_ptr->window;
b_ptr->net_plane = bearer_id + 'A';
- b_ptr->active = 1;
b_ptr->priority = priority;
res = tipc_disc_create(b_ptr, &b_ptr->bcast_addr, disc_domain);
@@ -628,7 +627,7 @@ void tipc_bearer_stop(void)
for (i = 0; i < MAX_BEARERS; i++) {
b_ptr = bearer_list[i];
- if (b_ptr && b_ptr->active) {
+ if (b_ptr) {
bearer_disable(b_ptr, true);
bearer_list[i] = NULL;
}
diff --git a/net/tipc/bearer.h b/net/tipc/bearer.h
index f4e72ca..3f6d7d0 100644
--- a/net/tipc/bearer.h
+++ b/net/tipc/bearer.h
@@ -118,7 +118,6 @@ struct tipc_media {
* @tolerance: default link tolerance for bearer
* @identity: array index of this bearer within TIPC bearer array
* @link_req: ptr to (optional) structure making periodic link setup requests
- * @active: non-zero if bearer structure is represents a bearer
* @net_plane: network plane ('A' through 'H') currently associated with bearer
* @nodes: indicates which nodes in cluster can be reached through bearer
*
@@ -138,7 +137,6 @@ struct tipc_bearer {
u32 tolerance;
u32 identity;
struct tipc_link_req *link_req;
- int active;
char net_plane;
struct tipc_node_map nodes;
};
diff --git a/net/tipc/link.c b/net/tipc/link.c
index a42f4a1..882c5c9 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -1458,10 +1458,6 @@ void tipc_rcv(struct sk_buff *head, struct tipc_bearer *b_ptr)
head = head->next;
buf->next = NULL;
- /* Ensure bearer is still enabled */
- if (unlikely(!b_ptr->active))
- goto discard;
-
/* Ensure message is well-formed */
if (unlikely(!link_recv_buf_validate(buf)))
goto discard;
--
1.7.9.5
* [PATCH net-next 06/10] tipc: make broadcast bearer store in bearer_list array
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
` (4 preceding siblings ...)
2014-03-27 4:54 ` [PATCH net-next 05/10] tipc: remove active flag from tipc_bearer structure Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 07/10] tipc: rename node create lock to protect node list and hlist Ying Xue
` (4 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
Unicast bearers are now dynamically allocated and placed into the
bearer_list array slot given by their bearer identity, so searching
bearer_list with a bearer identity finds the corresponding bearer
instance. The broadcast bearer, however, is still statically
allocated and is not present in the bearer_list array. We therefore
enlarge the bearer_list array to MAX_BEARERS + 1 slots and store the
broadcast bearer in the last slot, so that it too can be found in
bearer_list, using MAX_BEARERS as the index. This change will help
us reduce the complex relationship between bearer and link in the
future.
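Expressed as a tiny sketch (illustration only; BCBEARER is the define
added by this patch):

    #define BCBEARER MAX_BEARERS    /* index of the extra, last slot */

    struct tipc_bearer *b = bearer_list[id];         /* unicast, id < MAX_BEARERS */
    struct tipc_bearer *bcb = bearer_list[BCBEARER]; /* broadcast bearer */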
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/bcast.c | 8 +++++---
net/tipc/bearer.c | 2 +-
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
index 47bb07a..26c2c1e 100644
--- a/net/tipc/bcast.c
+++ b/net/tipc/bcast.c
@@ -41,9 +41,9 @@
#include "bcast.h"
#include "name_distr.h"
-#define MAX_PKT_DEFAULT_MCAST 1500 /* bcast link max packet size (fixed) */
-
-#define BCLINK_WIN_DEFAULT 20 /* bcast link window size (default) */
+#define MAX_PKT_DEFAULT_MCAST 1500 /* bcast link max packet size (fixed) */
+#define BCLINK_WIN_DEFAULT 20 /* bcast link window size (default) */
+#define BCBEARER MAX_BEARERS
/**
* struct tipc_bcbearer_pair - a pair of bearers used by broadcast link
@@ -784,6 +784,7 @@ void tipc_bclink_init(void)
bcl->max_pkt = MAX_PKT_DEFAULT_MCAST;
tipc_link_set_queue_limits(bcl, BCLINK_WIN_DEFAULT);
bcl->b_ptr = &bcbearer->bearer;
+ bearer_list[BCBEARER] = &bcbearer->bearer;
bcl->state = WORKING_WORKING;
strlcpy(bcl->name, tipc_bclink_name, TIPC_MAX_LINK_NAME);
}
@@ -794,6 +795,7 @@ void tipc_bclink_stop(void)
tipc_link_purge_queues(bcl);
spin_unlock_bh(&bc_lock);
+ bearer_list[BCBEARER] = NULL;
memset(bclink, 0, sizeof(*bclink));
memset(bcbearer, 0, sizeof(*bcbearer));
}
diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
index 826b701..ed45f97 100644
--- a/net/tipc/bearer.c
+++ b/net/tipc/bearer.c
@@ -49,7 +49,7 @@ static struct tipc_media * const media_info_array[] = {
NULL
};
-struct tipc_bearer *bearer_list[MAX_BEARERS];
+struct tipc_bearer *bearer_list[MAX_BEARERS + 1];
static void bearer_disable(struct tipc_bearer *b_ptr, bool shutting_down);
--
1.7.9.5
* [PATCH net-next 07/10] tipc: rename node create lock to protect node list and hlist
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
` (5 preceding siblings ...)
2014-03-27 4:54 ` [PATCH net-next 06/10] tipc: make broadcast bearer store in bearer_list array Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 08/10] tipc: convert node list and node hlist to RCU lists Ying Xue
` (3 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
When a node is created, the tipc_net_lock read lock is held first
and node_create_lock is then grabbed, to prevent the same node from
being created twice and inserted into both the node list and hlist.
But when we query a node from the two node lists, we hold only the
tipc_net_lock read lock, without grabbing node_create_lock.
Obviously this locking policy cannot guarantee that the two node
lists stay synchronized, especially when they are changed and
accessed from different contexts, as happens currently.
Therefore, rename node_create_lock to node_list_lock and use it to
protect the two node lists: whenever a node is inserted into them or
queried from them, node_list_lock must be held. As a result, the
tipc_net_lock read lock becomes redundant in the node query
functions and can be removed from them.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/net.c | 7 +++----
net/tipc/node.c | 59 ++++++++++++++++++++++++++++---------------------------
net/tipc/node.h | 4 ++--
3 files changed, 35 insertions(+), 35 deletions(-)
diff --git a/net/tipc/net.c b/net/tipc/net.c
index bb171c3..0374a81 100644
--- a/net/tipc/net.c
+++ b/net/tipc/net.c
@@ -189,15 +189,14 @@ void tipc_net_start(u32 addr)
void tipc_net_stop(void)
{
- struct tipc_node *node, *t_node;
-
if (!tipc_own_addr)
return;
+
write_lock_bh(&tipc_net_lock);
tipc_bearer_stop();
tipc_bclink_stop();
- list_for_each_entry_safe(node, t_node, &tipc_node_list, list)
- tipc_node_delete(node);
+ tipc_node_stop();
write_unlock_bh(&tipc_net_lock);
+
pr_info("Left network mode\n");
}
diff --git a/net/tipc/node.c b/net/tipc/node.c
index 7c9b667..ec83607 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -2,7 +2,7 @@
* net/tipc/node.c: TIPC node management routines
*
* Copyright (c) 2000-2006, 2012 Ericsson AB
- * Copyright (c) 2005-2006, 2010-2011, Wind River Systems
+ * Copyright (c) 2005-2006, 2010-2014, Wind River Systems
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -44,11 +44,10 @@
static void node_lost_contact(struct tipc_node *n_ptr);
static void node_established_contact(struct tipc_node *n_ptr);
-static DEFINE_SPINLOCK(node_create_lock);
-
static struct hlist_head node_htable[NODE_HTABLE_SIZE];
LIST_HEAD(tipc_node_list);
static u32 tipc_num_nodes;
+static DEFINE_SPINLOCK(node_list_lock);
static atomic_t tipc_num_links = ATOMIC_INIT(0);
@@ -73,31 +72,26 @@ struct tipc_node *tipc_node_find(u32 addr)
if (unlikely(!in_own_cluster_exact(addr)))
return NULL;
+ spin_lock_bh(&node_list_lock);
hlist_for_each_entry(node, &node_htable[tipc_hashfn(addr)], hash) {
- if (node->addr == addr)
+ if (node->addr == addr) {
+ spin_unlock_bh(&node_list_lock);
return node;
+ }
}
+ spin_unlock_bh(&node_list_lock);
return NULL;
}
-/**
- * tipc_node_create - create neighboring node
- *
- * Currently, this routine is called by neighbor discovery code, which holds
- * net_lock for reading only. We must take node_create_lock to ensure a node
- * isn't created twice if two different bearers discover the node at the same
- * time. (It would be preferable to switch to holding net_lock in write mode,
- * but this is a non-trivial change.)
- */
struct tipc_node *tipc_node_create(u32 addr)
{
struct tipc_node *n_ptr, *temp_node;
- spin_lock_bh(&node_create_lock);
+ spin_lock_bh(&node_list_lock);
n_ptr = kzalloc(sizeof(*n_ptr), GFP_ATOMIC);
if (!n_ptr) {
- spin_unlock_bh(&node_create_lock);
+ spin_unlock_bh(&node_list_lock);
pr_warn("Node creation failed, no memory\n");
return NULL;
}
@@ -120,11 +114,11 @@ struct tipc_node *tipc_node_create(u32 addr)
tipc_num_nodes++;
- spin_unlock_bh(&node_create_lock);
+ spin_unlock_bh(&node_list_lock);
return n_ptr;
}
-void tipc_node_delete(struct tipc_node *n_ptr)
+static void tipc_node_delete(struct tipc_node *n_ptr)
{
list_del(&n_ptr->list);
hlist_del(&n_ptr->hash);
@@ -133,6 +127,16 @@ void tipc_node_delete(struct tipc_node *n_ptr)
tipc_num_nodes--;
}
+void tipc_node_stop(void)
+{
+ struct tipc_node *node, *t_node;
+
+ spin_lock_bh(&node_list_lock);
+ list_for_each_entry_safe(node, t_node, &tipc_node_list, list)
+ tipc_node_delete(node);
+ spin_unlock_bh(&node_list_lock);
+}
+
/**
* tipc_node_link_up - handle addition of link
*
@@ -335,22 +339,22 @@ struct sk_buff *tipc_node_get_nodes(const void *req_tlv_area, int req_tlv_space)
return tipc_cfg_reply_error_string(TIPC_CFG_INVALID_VALUE
" (network address)");
- read_lock_bh(&tipc_net_lock);
+ spin_lock_bh(&node_list_lock);
if (!tipc_num_nodes) {
- read_unlock_bh(&tipc_net_lock);
+ spin_unlock_bh(&node_list_lock);
return tipc_cfg_reply_none();
}
/* For now, get space for all other nodes */
payload_size = TLV_SPACE(sizeof(node_info)) * tipc_num_nodes;
if (payload_size > 32768u) {
- read_unlock_bh(&tipc_net_lock);
+ spin_unlock_bh(&node_list_lock);
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (too many nodes)");
}
buf = tipc_cfg_reply_alloc(payload_size);
if (!buf) {
- read_unlock_bh(&tipc_net_lock);
+ spin_unlock_bh(&node_list_lock);
return NULL;
}
@@ -363,8 +367,7 @@ struct sk_buff *tipc_node_get_nodes(const void *req_tlv_area, int req_tlv_space)
tipc_cfg_append_tlv(buf, TIPC_TLV_NODE_INFO,
&node_info, sizeof(node_info));
}
-
- read_unlock_bh(&tipc_net_lock);
+ spin_unlock_bh(&node_list_lock);
return buf;
}
@@ -387,19 +390,18 @@ struct sk_buff *tipc_node_get_links(const void *req_tlv_area, int req_tlv_space)
if (!tipc_own_addr)
return tipc_cfg_reply_none();
- read_lock_bh(&tipc_net_lock);
-
+ spin_lock_bh(&node_list_lock);
/* Get space for all unicast links + broadcast link */
payload_size = TLV_SPACE(sizeof(link_info)) *
(atomic_read(&tipc_num_links) + 1);
if (payload_size > 32768u) {
- read_unlock_bh(&tipc_net_lock);
+ spin_unlock_bh(&node_list_lock);
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (too many links)");
}
buf = tipc_cfg_reply_alloc(payload_size);
if (!buf) {
- read_unlock_bh(&tipc_net_lock);
+ spin_unlock_bh(&node_list_lock);
return NULL;
}
@@ -427,7 +429,6 @@ struct sk_buff *tipc_node_get_links(const void *req_tlv_area, int req_tlv_space)
}
tipc_node_unlock(n_ptr);
}
-
- read_unlock_bh(&tipc_net_lock);
+ spin_unlock_bh(&node_list_lock);
return buf;
}
diff --git a/net/tipc/node.h b/net/tipc/node.h
index 63e2e8e..4203869 100644
--- a/net/tipc/node.h
+++ b/net/tipc/node.h
@@ -2,7 +2,7 @@
* net/tipc/node.h: Include file for TIPC node management routines
*
* Copyright (c) 2000-2006, Ericsson AB
- * Copyright (c) 2005, 2010-2011, Wind River Systems
+ * Copyright (c) 2005, 2010-2014, Wind River Systems
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -107,7 +107,7 @@ extern struct list_head tipc_node_list;
struct tipc_node *tipc_node_find(u32 addr);
struct tipc_node *tipc_node_create(u32 addr);
-void tipc_node_delete(struct tipc_node *n_ptr);
+void tipc_node_stop(void);
void tipc_node_attach_link(struct tipc_node *n_ptr, struct tipc_link *l_ptr);
void tipc_node_detach_link(struct tipc_node *n_ptr, struct tipc_link *l_ptr);
void tipc_node_link_down(struct tipc_node *n_ptr, struct tipc_link *l_ptr);
--
1.7.9.5
* [PATCH net-next 08/10] tipc: convert node list and node hlist to RCU lists
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
` (6 preceding siblings ...)
2014-03-27 4:54 ` [PATCH net-next 07/10] tipc: rename node create lock to protect node list and hlist Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 09/10] tipc: use node_list_lock to protect tipc_num_nodes variable Ying Xue
` (2 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
Convert the tipc_node_list list and the node_htable hash list to RCU
lists. On the read side the two lists are protected by the RCU read
lock; on the update side they are protected by node_list_lock.
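For reference, the conversion follows the standard kernel RCU list
recipe; a generic sketch (not TIPC-specific code) of the two sides:

    /* Read side: lockless traversal under the RCU read lock */
    rcu_read_lock();
    list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
        /* n_ptr cannot be freed while inside this section */
    }
    rcu_read_unlock();

    /* Update side: writers serialize on node_list_lock and the
     * actual free is deferred until all current readers are done
     */
    spin_lock_bh(&node_list_lock);
    list_del_rcu(&n_ptr->list);
    hlist_del_rcu(&n_ptr->hash);
    spin_unlock_bh(&node_list_lock);
    kfree_rcu(n_ptr, rcu);  /* 'rcu' is the rcu_head added to tipc_node */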
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/link.c | 16 ++++++++++------
net/tipc/name_distr.c | 6 +++---
net/tipc/node.c | 28 ++++++++++++++++------------
net/tipc/node.h | 2 ++
4 files changed, 31 insertions(+), 21 deletions(-)
diff --git a/net/tipc/link.c b/net/tipc/link.c
index 882c5c9..c5190ab 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -280,13 +280,13 @@ struct tipc_link *tipc_link_create(struct tipc_node *n_ptr,
return l_ptr;
}
-
void tipc_link_delete_list(unsigned int bearer_id, bool shutting_down)
{
struct tipc_link *l_ptr;
struct tipc_node *n_ptr;
- list_for_each_entry(n_ptr, &tipc_node_list, list) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
spin_lock_bh(&n_ptr->lock);
l_ptr = n_ptr->links[bearer_id];
if (l_ptr) {
@@ -309,6 +309,7 @@ void tipc_link_delete_list(unsigned int bearer_id, bool shutting_down)
}
spin_unlock_bh(&n_ptr->lock);
}
+ rcu_read_unlock();
}
/**
@@ -461,13 +462,15 @@ void tipc_link_reset_list(unsigned int bearer_id)
struct tipc_link *l_ptr;
struct tipc_node *n_ptr;
- list_for_each_entry(n_ptr, &tipc_node_list, list) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
spin_lock_bh(&n_ptr->lock);
l_ptr = n_ptr->links[bearer_id];
if (l_ptr)
tipc_link_reset(l_ptr);
spin_unlock_bh(&n_ptr->lock);
}
+ rcu_read_unlock();
}
static void link_activate(struct tipc_link *l_ptr)
@@ -2404,13 +2407,12 @@ static struct tipc_node *tipc_link_find_owner(const char *link_name,
{
struct tipc_link *l_ptr;
struct tipc_node *n_ptr;
- struct tipc_node *tmp_n_ptr;
struct tipc_node *found_node = 0;
-
int i;
*bearer_id = 0;
- list_for_each_entry_safe(n_ptr, tmp_n_ptr, &tipc_node_list, list) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
tipc_node_lock(n_ptr);
for (i = 0; i < MAX_BEARERS; i++) {
l_ptr = n_ptr->links[i];
@@ -2424,6 +2426,8 @@ static struct tipc_node *tipc_link_find_owner(const char *link_name,
if (found_node)
break;
}
+ rcu_read_unlock();
+
return found_node;
}
diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
index c5904d1..aff8041 100644
--- a/net/tipc/name_distr.c
+++ b/net/tipc/name_distr.c
@@ -133,8 +133,8 @@ static void named_cluster_distribute(struct sk_buff *buf)
struct tipc_node *n_ptr;
struct tipc_link *l_ptr;
- read_lock_bh(&tipc_net_lock);
- list_for_each_entry(n_ptr, &tipc_node_list, list) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
spin_lock_bh(&n_ptr->lock);
l_ptr = n_ptr->active_links[n_ptr->addr & 1];
if (l_ptr) {
@@ -148,7 +148,7 @@ static void named_cluster_distribute(struct sk_buff *buf)
}
spin_unlock_bh(&n_ptr->lock);
}
- read_unlock_bh(&tipc_net_lock);
+ rcu_read_unlock();
kfree_skb(buf);
}
diff --git a/net/tipc/node.c b/net/tipc/node.c
index ec83607..4f517ff 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -72,14 +72,14 @@ struct tipc_node *tipc_node_find(u32 addr)
if (unlikely(!in_own_cluster_exact(addr)))
return NULL;
- spin_lock_bh(&node_list_lock);
- hlist_for_each_entry(node, &node_htable[tipc_hashfn(addr)], hash) {
+ rcu_read_lock();
+ hlist_for_each_entry_rcu(node, &node_htable[tipc_hashfn(addr)], hash) {
if (node->addr == addr) {
- spin_unlock_bh(&node_list_lock);
+ rcu_read_unlock();
return node;
}
}
- spin_unlock_bh(&node_list_lock);
+ rcu_read_unlock();
return NULL;
}
@@ -102,13 +102,13 @@ struct tipc_node *tipc_node_create(u32 addr)
INIT_LIST_HEAD(&n_ptr->list);
INIT_LIST_HEAD(&n_ptr->nsub);
- hlist_add_head(&n_ptr->hash, &node_htable[tipc_hashfn(addr)]);
+ hlist_add_head_rcu(&n_ptr->hash, &node_htable[tipc_hashfn(addr)]);
- list_for_each_entry(temp_node, &tipc_node_list, list) {
+ list_for_each_entry_rcu(temp_node, &tipc_node_list, list) {
if (n_ptr->addr < temp_node->addr)
break;
}
- list_add_tail(&n_ptr->list, &temp_node->list);
+ list_add_tail_rcu(&n_ptr->list, &temp_node->list);
n_ptr->block_setup = WAIT_PEER_DOWN;
n_ptr->signature = INVALID_NODE_SIG;
@@ -120,9 +120,9 @@ struct tipc_node *tipc_node_create(u32 addr)
static void tipc_node_delete(struct tipc_node *n_ptr)
{
- list_del(&n_ptr->list);
- hlist_del(&n_ptr->hash);
- kfree(n_ptr);
+ list_del_rcu(&n_ptr->list);
+ hlist_del_rcu(&n_ptr->hash);
+ kfree_rcu(n_ptr, rcu);
tipc_num_nodes--;
}
@@ -359,7 +359,8 @@ struct sk_buff *tipc_node_get_nodes(const void *req_tlv_area, int req_tlv_space)
}
/* Add TLVs for all nodes in scope */
- list_for_each_entry(n_ptr, &tipc_node_list, list) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
if (!tipc_in_scope(domain, n_ptr->addr))
continue;
node_info.addr = htonl(n_ptr->addr);
@@ -367,6 +368,7 @@ struct sk_buff *tipc_node_get_nodes(const void *req_tlv_area, int req_tlv_space)
tipc_cfg_append_tlv(buf, TIPC_TLV_NODE_INFO,
&node_info, sizeof(node_info));
}
+ rcu_read_unlock();
spin_unlock_bh(&node_list_lock);
return buf;
}
@@ -412,7 +414,8 @@ struct sk_buff *tipc_node_get_links(const void *req_tlv_area, int req_tlv_space)
tipc_cfg_append_tlv(buf, TIPC_TLV_LINK_INFO, &link_info, sizeof(link_info));
/* Add TLVs for any other links in scope */
- list_for_each_entry(n_ptr, &tipc_node_list, list) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
u32 i;
if (!tipc_in_scope(domain, n_ptr->addr))
@@ -429,6 +432,7 @@ struct sk_buff *tipc_node_get_links(const void *req_tlv_area, int req_tlv_space)
}
tipc_node_unlock(n_ptr);
}
+ rcu_read_unlock();
spin_unlock_bh(&node_list_lock);
return buf;
}
diff --git a/net/tipc/node.h b/net/tipc/node.h
index 4203869..7cbb8ce 100644
--- a/net/tipc/node.h
+++ b/net/tipc/node.h
@@ -66,6 +66,7 @@
* @link_cnt: number of links to node
* @signature: node instance identifier
* @bclink: broadcast-related info
+ * @rcu: rcu struct for tipc_node
* @acked: sequence # of last outbound b'cast message acknowledged by node
* @last_in: sequence # of last in-sequence b'cast message received from node
* @last_sent: sequence # of last b'cast message sent by node
@@ -89,6 +90,7 @@ struct tipc_node {
int working_links;
int block_setup;
u32 signature;
+ struct rcu_head rcu;
struct {
u32 acked;
u32 last_in;
--
1.7.9.5
* [PATCH net-next 09/10] tipc: use node_list_lock to protect tipc_num_nodes variable
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
` (7 preceding siblings ...)
2014-03-27 4:54 ` [PATCH net-next 08/10] tipc: convert node list and node hlist to RCU lists Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 4:54 ` [PATCH net-next 10/10] tipc: use node list lock to protect tipc_num_links variable Ying Xue
2014-03-27 17:12 ` [PATCH net-next 00/10] clean up bearer and node layer David Miller
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
As tipc_node_list is protected by the RCU read lock on the read
side, it's unnecessary to hold node_list_lock to protect
tipc_node_list in tipc_node_get_nodes(). Instead, node_list_lock
should just protect tipc_num_nodes in that function.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/node.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/net/tipc/node.c b/net/tipc/node.c
index 4f517ff..85405a6 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -352,11 +352,11 @@ struct sk_buff *tipc_node_get_nodes(const void *req_tlv_area, int req_tlv_space)
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (too many nodes)");
}
+ spin_unlock_bh(&node_list_lock);
+
buf = tipc_cfg_reply_alloc(payload_size);
- if (!buf) {
- spin_unlock_bh(&node_list_lock);
+ if (!buf)
return NULL;
- }
/* Add TLVs for all nodes in scope */
rcu_read_lock();
@@ -369,7 +369,6 @@ struct sk_buff *tipc_node_get_nodes(const void *req_tlv_area, int req_tlv_space)
&node_info, sizeof(node_info));
}
rcu_read_unlock();
- spin_unlock_bh(&node_list_lock);
return buf;
}
--
1.7.9.5
* [PATCH net-next 10/10] tipc: use node list lock to protect tipc_num_links variable
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
` (8 preceding siblings ...)
2014-03-27 4:54 ` [PATCH net-next 09/10] tipc: use node_list_lock to protect tipc_num_nodes variable Ying Xue
@ 2014-03-27 4:54 ` Ying Xue
2014-03-27 17:12 ` [PATCH net-next 00/10] clean up bearer and node layer David Miller
10 siblings, 0 replies; 12+ messages in thread
From: Ying Xue @ 2014-03-27 4:54 UTC (permalink / raw)
To: davem; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
Without a proper implicit or explicit read memory barrier, it is
unsafe to read an atomic variable with atomic_read() from a thread
other than the one changing the variable with atomic_inc() or
atomic_dec(). As a consequence, the atomic_read() in
tipc_node_get_links() may return a stale tipc_num_links value. If
the tipc_num_links variable is converted from an atomic to a plain
unsigned integer and the node list lock is used to protect it, the
issue is avoided.
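Sketched generically (illustration only, not the patch code), the
before and after patterns are:

    /* before: atomic counter; a plain atomic_read() on another CPU
     * carries no ordering guarantee, so the value may be stale
     */
    static atomic_t tipc_num_links = ATOMIC_INIT(0);

    atomic_inc(&tipc_num_links);          /* writer */
    u32 n = atomic_read(&tipc_num_links); /* reader */

    /* after: plain counter; the lock's acquire/release ordering
     * guarantees every reader sees the latest value
     */
    static u32 tipc_num_links;

    spin_lock_bh(&node_list_lock);
    tipc_num_links++;   /* readers take the same lock to read it */
    spin_unlock_bh(&node_list_lock);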
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Erik Hugne <erik.hugne@ericsson.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
---
net/tipc/node.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/net/tipc/node.c b/net/tipc/node.c
index 85405a6..1d3a499 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -47,10 +47,9 @@ static void node_established_contact(struct tipc_node *n_ptr);
static struct hlist_head node_htable[NODE_HTABLE_SIZE];
LIST_HEAD(tipc_node_list);
static u32 tipc_num_nodes;
+static u32 tipc_num_links;
static DEFINE_SPINLOCK(node_list_lock);
-static atomic_t tipc_num_links = ATOMIC_INIT(0);
-
/*
* A trivial power-of-two bitmask technique is used for speed, since this
* operation is done for every incoming TIPC packet. The number of hash table
@@ -241,7 +240,9 @@ int tipc_node_is_up(struct tipc_node *n_ptr)
void tipc_node_attach_link(struct tipc_node *n_ptr, struct tipc_link *l_ptr)
{
n_ptr->links[l_ptr->b_ptr->identity] = l_ptr;
- atomic_inc(&tipc_num_links);
+ spin_lock_bh(&node_list_lock);
+ tipc_num_links++;
+ spin_unlock_bh(&node_list_lock);
n_ptr->link_cnt++;
}
@@ -253,7 +254,9 @@ void tipc_node_detach_link(struct tipc_node *n_ptr, struct tipc_link *l_ptr)
if (l_ptr != n_ptr->links[i])
continue;
n_ptr->links[i] = NULL;
- atomic_dec(&tipc_num_links);
+ spin_lock_bh(&node_list_lock);
+ tipc_num_links--;
+ spin_unlock_bh(&node_list_lock);
n_ptr->link_cnt--;
}
}
@@ -393,18 +396,17 @@ struct sk_buff *tipc_node_get_links(const void *req_tlv_area, int req_tlv_space)
spin_lock_bh(&node_list_lock);
/* Get space for all unicast links + broadcast link */
- payload_size = TLV_SPACE(sizeof(link_info)) *
- (atomic_read(&tipc_num_links) + 1);
+ payload_size = TLV_SPACE((sizeof(link_info)) * (tipc_num_links + 1));
if (payload_size > 32768u) {
spin_unlock_bh(&node_list_lock);
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (too many links)");
}
+ spin_unlock_bh(&node_list_lock);
+
buf = tipc_cfg_reply_alloc(payload_size);
- if (!buf) {
- spin_unlock_bh(&node_list_lock);
+ if (!buf)
return NULL;
- }
/* Add TLV for broadcast link */
link_info.dest = htonl(tipc_cluster_mask(tipc_own_addr));
@@ -432,6 +434,5 @@ struct sk_buff *tipc_node_get_links(const void *req_tlv_area, int req_tlv_space)
tipc_node_unlock(n_ptr);
}
rcu_read_unlock();
- spin_unlock_bh(&node_list_lock);
return buf;
}
--
1.7.9.5
* Re: [PATCH net-next 00/10] clean up bearer and node layer
2014-03-27 4:54 [PATCH net-next 00/10] clean up bearer and node layer Ying Xue
` (9 preceding siblings ...)
2014-03-27 4:54 ` [PATCH net-next 10/10] tipc: use node list lock to protect tipc_num_links variable Ying Xue
@ 2014-03-27 17:12 ` David Miller
10 siblings, 0 replies; 12+ messages in thread
From: David Miller @ 2014-03-27 17:12 UTC (permalink / raw)
To: ying.xue; +Cc: jon.maloy, Paul.Gortmaker, tipc-discussion, netdev
From: Ying Xue <ying.xue@windriver.com>
Date: Thu, 27 Mar 2014 12:54:29 +0800
> This is another commit series which aims at facilitating future
> changes to the locking policy around nodes, links and bearers.
Series applied, thanks.