* [PATCH net-next 0/6] net/smc: patches 2020-02-17
From: Ursula Braun @ 2020-02-17 15:24 UTC
To: davem; +Cc: netdev, linux-s390, heiko.carstens, raspl, kgraul, ubraun
Hi Dave,
here are patches for SMC that further improve the termination handling.
Thanks, Ursula
Karsten Graul (5):
net/smc: improve smc_lgr_cleanup()
net/smc: use termination worker under send_lock
net/smc: do not delete lgr from list twice
net/smc: remove unused parameter of smc_lgr_terminate()
net/smc: simplify normal link termination
Ursula Braun (1):
net/smc: reduce port_event scheduling
net/smc/smc_clc.c | 2 +-
net/smc/smc_core.c | 26 +++++++++++---------------
net/smc/smc_core.h | 8 +-------
net/smc/smc_ib.c | 44 +++++++++++++++++++++++++++++---------------
net/smc/smc_llc.c | 2 +-
net/smc/smc_tx.c | 2 +-
6 files changed, 44 insertions(+), 40 deletions(-)
--
2.17.1
* [PATCH net-next 1/6] net/smc: improve smc_lgr_cleanup()
From: Ursula Braun @ 2020-02-17 15:24 UTC
To: davem; +Cc: netdev, linux-s390, heiko.carstens, raspl, kgraul, ubraun
From: Karsten Graul <kgraul@linux.ibm.com>
smc_lgr_cleanup() is called during termination processing; there is no
need to send a DELETE_LINK at that time. If needed, a DELETE_LINK
should have been sent before the termination was initiated.
Also remove the extra call to wake_up(&lnk->wr_reg_wait), because
smc_llc_link_inactive() already calls the related helper
smc_wr_wakeup_reg_wait().
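For illustration, the SMC-R branch of smc_lgr_cleanup() then boils
down to the following (condensed from the diff below; surrounding
context omitted):

        struct smc_link *lnk = &lgr->lnk[SMC_SINGLE_LINK];

        /* no wake_up(&lnk->wr_reg_wait) and no smc_link_send_delete()
         * here; smc_llc_link_inactive() already wakes the wr_reg
         * waiters via smc_wr_wakeup_reg_wait()
         */
        if (lnk->state != SMC_LNK_INACTIVE)
                smc_llc_link_inactive(lnk);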
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
---
net/smc/smc_core.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 2249de5379ee..8f3c1fced334 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -576,11 +576,8 @@ static void smc_lgr_cleanup(struct smc_link_group *lgr)
} else {
struct smc_link *lnk = &lgr->lnk[SMC_SINGLE_LINK];
- wake_up(&lnk->wr_reg_wait);
- if (lnk->state != SMC_LNK_INACTIVE) {
- smc_link_send_delete(lnk, false);
+ if (lnk->state != SMC_LNK_INACTIVE)
smc_llc_link_inactive(lnk);
- }
}
}
--
2.17.1
* [PATCH net-next 2/6] net/smc: use termination worker under send_lock
From: Ursula Braun @ 2020-02-17 15:24 UTC
To: davem; +Cc: netdev, linux-s390, heiko.carstens, raspl, kgraul, ubraun
From: Karsten Graul <kgraul@linux.ibm.com>
smc_tx_rdma_write() is called under the send_lock and should not call
smc_lgr_terminate() directly. Call smc_lgr_terminate_sched() instead,
which defers the termination to a worker.
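For context, the error path in smc_tx_rdma_write() then looks like
this (condensed from the diff below):

        rc = ib_post_send(link->roce_qp, &rdma_wr->wr, NULL);
        /* smc_tx_rdma_write() holds the send_lock, so termination is
         * deferred to a worker instead of being done synchronously
         */
        if (rc)
                smc_lgr_terminate_sched(lgr);
        return rc;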
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
---
net/smc/smc_tx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
index 0d42e7716b91..9f1ade86d70e 100644
--- a/net/smc/smc_tx.c
+++ b/net/smc/smc_tx.c
@@ -284,7 +284,7 @@ static int smc_tx_rdma_write(struct smc_connection *conn, int peer_rmbe_offset,
rdma_wr->rkey = lgr->rtokens[conn->rtoken_idx][SMC_SINGLE_LINK].rkey;
rc = ib_post_send(link->roce_qp, &rdma_wr->wr, NULL);
if (rc)
- smc_lgr_terminate(lgr, true);
+ smc_lgr_terminate_sched(lgr);
return rc;
}
--
2.17.1
* [PATCH net-next 3/6] net/smc: do not delete lgr from list twice
From: Ursula Braun @ 2020-02-17 15:24 UTC
To: davem; +Cc: netdev, linux-s390, heiko.carstens, raspl, kgraul, ubraun
From: Karsten Graul <kgraul@linux.ibm.com>
When two callers invoke smc_lgr_terminate() at the same time for the
same lgr, one takes the lgr_lock, deletes the lgr from the list and
releases the lock. The second caller then takes the lock and tries to
delete the lgr again.
Add a check in smc_lgr_terminate() whether the link group has already
been deleted from the link group list, and do not try to delete it a
second time. Also check whether the lgr is marked as freeing, which
means that a termination is already pending.
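The entry check in smc_lgr_terminate() then becomes (condensed from
the diff below; the unlink itself is unchanged):

        spin_lock_bh(lgr_lock);
        /* a concurrent caller may already have unlinked the lgr, or a
         * termination may already be pending, or the lgr may be freeing
         */
        if (list_empty(&lgr->list) || lgr->terminating || lgr->freeing) {
                spin_unlock_bh(lgr_lock);
                return; /* lgr already terminating */
        }
        list_del_init(&lgr->list);
        spin_unlock_bh(lgr_lock);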
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
---
net/smc/smc_core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 8f3c1fced334..9b92b52952dd 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -629,7 +629,7 @@ void smc_lgr_terminate(struct smc_link_group *lgr, bool soft)
smc_lgr_list_head(lgr, &lgr_lock);
spin_lock_bh(lgr_lock);
- if (lgr->terminating) {
+ if (list_empty(&lgr->list) || lgr->terminating || lgr->freeing) {
spin_unlock_bh(lgr_lock);
return; /* lgr already terminating */
}
--
2.17.1
* [PATCH net-next 4/6] net/smc: remove unused parameter of smc_lgr_terminate()
From: Ursula Braun @ 2020-02-17 15:24 UTC
To: davem; +Cc: netdev, linux-s390, heiko.carstens, raspl, kgraul, ubraun
From: Karsten Graul <kgraul@linux.ibm.com>
The soft parameter of smc_lgr_terminate() is not used and obsolete.
Remove it.
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
---
net/smc/smc_clc.c | 2 +-
net/smc/smc_core.c | 18 ++++++++----------
net/smc/smc_core.h | 2 +-
net/smc/smc_llc.c | 2 +-
4 files changed, 11 insertions(+), 13 deletions(-)
diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
index 86cccc24e52e..aee9ccfa99c2 100644
--- a/net/smc/smc_clc.c
+++ b/net/smc/smc_clc.c
@@ -349,7 +349,7 @@ int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
smc->peer_diagnosis = ntohl(dclc->peer_diagnosis);
if (((struct smc_clc_msg_decline *)buf)->hdr.flag) {
smc->conn.lgr->sync_err = 1;
- smc_lgr_terminate(smc->conn.lgr, true);
+ smc_lgr_terminate(smc->conn.lgr);
}
}
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 9b92b52952dd..53b6afbb1d93 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -229,7 +229,7 @@ static void smc_lgr_terminate_work(struct work_struct *work)
struct smc_link_group *lgr = container_of(work, struct smc_link_group,
terminate_work);
- smc_lgr_terminate(lgr, true);
+ smc_lgr_terminate(lgr);
}
/* create a new SMC link group */
@@ -581,7 +581,10 @@ static void smc_lgr_cleanup(struct smc_link_group *lgr)
}
}
-/* terminate link group */
+/* terminate link group
+ * @soft: true if link group shutdown can take its time
+ * false if immediate link group shutdown is required
+ */
static void __smc_lgr_terminate(struct smc_link_group *lgr, bool soft)
{
struct smc_connection *conn;
@@ -619,11 +622,8 @@ static void __smc_lgr_terminate(struct smc_link_group *lgr, bool soft)
smc_lgr_free(lgr);
}
-/* unlink and terminate link group
- * @soft: true if link group shutdown can take its time
- * false if immediate link group shutdown is required
- */
-void smc_lgr_terminate(struct smc_link_group *lgr, bool soft)
+/* unlink and terminate link group */
+void smc_lgr_terminate(struct smc_link_group *lgr)
{
spinlock_t *lgr_lock;
@@ -633,11 +633,9 @@ void smc_lgr_terminate(struct smc_link_group *lgr, bool soft)
spin_unlock_bh(lgr_lock);
return; /* lgr already terminating */
}
- if (!soft)
- lgr->freeing = 1;
list_del_init(&lgr->list);
spin_unlock_bh(lgr_lock);
- __smc_lgr_terminate(lgr, soft);
+ __smc_lgr_terminate(lgr, true);
}
/* Called when IB port is terminated */
diff --git a/net/smc/smc_core.h b/net/smc/smc_core.h
index c472e12951d1..094d43c24345 100644
--- a/net/smc/smc_core.h
+++ b/net/smc/smc_core.h
@@ -296,7 +296,7 @@ struct smc_clc_msg_accept_confirm;
struct smc_clc_msg_local;
void smc_lgr_forget(struct smc_link_group *lgr);
-void smc_lgr_terminate(struct smc_link_group *lgr, bool soft);
+void smc_lgr_terminate(struct smc_link_group *lgr);
void smc_port_terminate(struct smc_ib_device *smcibdev, u8 ibport);
void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid,
unsigned short vlan);
diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
index a9f6431dd69a..b134a08c929e 100644
--- a/net/smc/smc_llc.c
+++ b/net/smc/smc_llc.c
@@ -614,7 +614,7 @@ static void smc_llc_testlink_work(struct work_struct *work)
rc = wait_for_completion_interruptible_timeout(&link->llc_testlink_resp,
SMC_LLC_WAIT_TIME);
if (rc <= 0) {
- smc_lgr_terminate(smc_get_lgr(link), true);
+ smc_lgr_terminate(smc_get_lgr(link));
return;
}
next_interval = link->llc_testlink_time;
--
2.17.1
* [PATCH net-next 5/6] net/smc: simplify normal link termination
From: Ursula Braun @ 2020-02-17 15:24 UTC
To: davem; +Cc: netdev, linux-s390, heiko.carstens, raspl, kgraul, ubraun
From: Karsten Graul <kgraul@linux.ibm.com>
smc_lgr_terminate() and smc_lgr_terminate_sched() both result in soft
link group termination; smc_lgr_terminate_sched() schedules a worker
for this task. Reduce complexity by always using the termination
worker and removing smc_lgr_terminate() completely.
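Condensed from the diff below, the resulting split is:
smc_lgr_terminate_sched() only unlinks the lgr and schedules the
worker, and the worker performs the soft termination:

        /* callers: unlink the lgr and schedule the worker */
        list_del_init(&lgr->list);
        spin_unlock_bh(lgr_lock);
        schedule_work(&lgr->terminate_work);

        /* worker: perform the soft termination */
        static void smc_lgr_terminate_work(struct work_struct *work)
        {
                struct smc_link_group *lgr = container_of(work, struct smc_link_group,
                                                          terminate_work);

                __smc_lgr_terminate(lgr, true);
        }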
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
---
net/smc/smc_clc.c | 2 +-
net/smc/smc_core.c | 9 +++++----
net/smc/smc_core.h | 8 +-------
net/smc/smc_llc.c | 2 +-
4 files changed, 8 insertions(+), 13 deletions(-)
diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
index aee9ccfa99c2..3e16b887cfcf 100644
--- a/net/smc/smc_clc.c
+++ b/net/smc/smc_clc.c
@@ -349,7 +349,7 @@ int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
smc->peer_diagnosis = ntohl(dclc->peer_diagnosis);
if (((struct smc_clc_msg_decline *)buf)->hdr.flag) {
smc->conn.lgr->sync_err = 1;
- smc_lgr_terminate(smc->conn.lgr);
+ smc_lgr_terminate_sched(smc->conn.lgr);
}
}
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 53b6afbb1d93..1bbce5531014 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -46,6 +46,7 @@ static DECLARE_WAIT_QUEUE_HEAD(lgrs_deleted);
static void smc_buf_free(struct smc_link_group *lgr, bool is_rmb,
struct smc_buf_desc *buf_desc);
+static void __smc_lgr_terminate(struct smc_link_group *lgr, bool soft);
/* return head of link group list and its lock for a given link group */
static inline struct list_head *smc_lgr_list_head(struct smc_link_group *lgr,
@@ -229,7 +230,7 @@ static void smc_lgr_terminate_work(struct work_struct *work)
struct smc_link_group *lgr = container_of(work, struct smc_link_group,
terminate_work);
- smc_lgr_terminate(lgr);
+ __smc_lgr_terminate(lgr, true);
}
/* create a new SMC link group */
@@ -622,8 +623,8 @@ static void __smc_lgr_terminate(struct smc_link_group *lgr, bool soft)
smc_lgr_free(lgr);
}
-/* unlink and terminate link group */
-void smc_lgr_terminate(struct smc_link_group *lgr)
+/* unlink link group and schedule termination */
+void smc_lgr_terminate_sched(struct smc_link_group *lgr)
{
spinlock_t *lgr_lock;
@@ -635,7 +636,7 @@ void smc_lgr_terminate(struct smc_link_group *lgr)
}
list_del_init(&lgr->list);
spin_unlock_bh(lgr_lock);
- __smc_lgr_terminate(lgr, true);
+ schedule_work(&lgr->terminate_work);
}
/* Called when IB port is terminated */
diff --git a/net/smc/smc_core.h b/net/smc/smc_core.h
index 094d43c24345..5695c7bc639e 100644
--- a/net/smc/smc_core.h
+++ b/net/smc/smc_core.h
@@ -285,18 +285,12 @@ static inline struct smc_connection *smc_lgr_find_conn(
return res;
}
-static inline void smc_lgr_terminate_sched(struct smc_link_group *lgr)
-{
- if (!lgr->terminating && !lgr->freeing)
- schedule_work(&lgr->terminate_work);
-}
-
struct smc_sock;
struct smc_clc_msg_accept_confirm;
struct smc_clc_msg_local;
void smc_lgr_forget(struct smc_link_group *lgr);
-void smc_lgr_terminate(struct smc_link_group *lgr);
+void smc_lgr_terminate_sched(struct smc_link_group *lgr);
void smc_port_terminate(struct smc_ib_device *smcibdev, u8 ibport);
void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid,
unsigned short vlan);
diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c
index b134a08c929e..0e52aab53d97 100644
--- a/net/smc/smc_llc.c
+++ b/net/smc/smc_llc.c
@@ -614,7 +614,7 @@ static void smc_llc_testlink_work(struct work_struct *work)
rc = wait_for_completion_interruptible_timeout(&link->llc_testlink_resp,
SMC_LLC_WAIT_TIME);
if (rc <= 0) {
- smc_lgr_terminate(smc_get_lgr(link));
+ smc_lgr_terminate_sched(smc_get_lgr(link));
return;
}
next_interval = link->llc_testlink_time;
--
2.17.1
* [PATCH net-next 6/6] net/smc: reduce port_event scheduling
From: Ursula Braun @ 2020-02-17 15:24 UTC
To: davem; +Cc: netdev, linux-s390, heiko.carstens, raspl, kgraul, ubraun
IB event handlers schedule the port event worker for further
processing of port state changes. This patch reduces the number of
schedules to avoid duplicate processing of the same port change.
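The pattern, condensed from the diff below: the going-away bit is
tested and set atomically, and the port event worker is only scheduled
when that bit actually changes, so repeated identical events do not
queue redundant work:

        case IB_EVENT_PORT_ERR:
                port_idx = ibevent->element.port_num - 1;
                if (port_idx >= SMC_MAX_PORTS)
                        break;
                set_bit(port_idx, &smcibdev->port_event_mask);
                /* schedule only on the 0 -> 1 transition of ports_going_away */
                if (!test_and_set_bit(port_idx, smcibdev->ports_going_away))
                        schedule_work(&smcibdev->port_event_work);
                break;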
Reviewed-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
---
net/smc/smc_ib.c | 44 +++++++++++++++++++++++++++++---------------
1 file changed, 29 insertions(+), 15 deletions(-)
diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c
index 548632621f4b..6756bd5a3fe4 100644
--- a/net/smc/smc_ib.c
+++ b/net/smc/smc_ib.c
@@ -257,6 +257,7 @@ static void smc_ib_global_event_handler(struct ib_event_handler *handler,
struct ib_event *ibevent)
{
struct smc_ib_device *smcibdev;
+ bool schedule = false;
u8 port_idx;
smcibdev = container_of(handler, struct smc_ib_device, event_handler);
@@ -266,22 +267,35 @@ static void smc_ib_global_event_handler(struct ib_event_handler *handler,
/* terminate all ports on device */
for (port_idx = 0; port_idx < SMC_MAX_PORTS; port_idx++) {
set_bit(port_idx, &smcibdev->port_event_mask);
- set_bit(port_idx, smcibdev->ports_going_away);
+ if (!test_and_set_bit(port_idx,
+ smcibdev->ports_going_away))
+ schedule = true;
}
- schedule_work(&smcibdev->port_event_work);
+ if (schedule)
+ schedule_work(&smcibdev->port_event_work);
break;
- case IB_EVENT_PORT_ERR:
case IB_EVENT_PORT_ACTIVE:
- case IB_EVENT_GID_CHANGE:
port_idx = ibevent->element.port_num - 1;
- if (port_idx < SMC_MAX_PORTS) {
- set_bit(port_idx, &smcibdev->port_event_mask);
- if (ibevent->event == IB_EVENT_PORT_ERR)
- set_bit(port_idx, smcibdev->ports_going_away);
- else if (ibevent->event == IB_EVENT_PORT_ACTIVE)
- clear_bit(port_idx, smcibdev->ports_going_away);
+ if (port_idx >= SMC_MAX_PORTS)
+ break;
+ set_bit(port_idx, &smcibdev->port_event_mask);
+ if (test_and_clear_bit(port_idx, smcibdev->ports_going_away))
+ schedule_work(&smcibdev->port_event_work);
+ break;
+ case IB_EVENT_PORT_ERR:
+ port_idx = ibevent->element.port_num - 1;
+ if (port_idx >= SMC_MAX_PORTS)
+ break;
+ set_bit(port_idx, &smcibdev->port_event_mask);
+ if (!test_and_set_bit(port_idx, smcibdev->ports_going_away))
schedule_work(&smcibdev->port_event_work);
- }
+ break;
+ case IB_EVENT_GID_CHANGE:
+ port_idx = ibevent->element.port_num - 1;
+ if (port_idx >= SMC_MAX_PORTS)
+ break;
+ set_bit(port_idx, &smcibdev->port_event_mask);
+ schedule_work(&smcibdev->port_event_work);
break;
default:
break;
@@ -316,11 +330,11 @@ static void smc_ib_qp_event_handler(struct ib_event *ibevent, void *priv)
case IB_EVENT_QP_FATAL:
case IB_EVENT_QP_ACCESS_ERR:
port_idx = ibevent->element.qp->port - 1;
- if (port_idx < SMC_MAX_PORTS) {
- set_bit(port_idx, &smcibdev->port_event_mask);
- set_bit(port_idx, smcibdev->ports_going_away);
+ if (port_idx >= SMC_MAX_PORTS)
+ break;
+ set_bit(port_idx, &smcibdev->port_event_mask);
+ if (!test_and_set_bit(port_idx, smcibdev->ports_going_away))
schedule_work(&smcibdev->port_event_work);
- }
break;
default:
break;
--
2.17.1
* Re: [PATCH net-next 0/6] net/smc: patches 2020-02-17
From: David Miller @ 2020-02-17 22:50 UTC
To: ubraun; +Cc: netdev, linux-s390, heiko.carstens, raspl, kgraul
From: Ursula Braun <ubraun@linux.ibm.com>
Date: Mon, 17 Feb 2020 16:24:49 +0100
> here are patches for SMC that further improve the termination handling.
Series applied, thanks.