* [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices
From: frank.blaschka @ 2013-04-02 10:56 UTC
To: davem; +Cc: netdev, linux-s390, Stefan Raspl
[-- Attachment #1: 602-qeth-thread-deadlock.diff --]
[-- Type: text/plain, Size: 3938 bytes --]
From: Stefan Raspl <raspl@linux.vnet.ibm.com>
Any recovery thread will deadlock when calling qeth_wait_for_threads(), most
notably when triggering a recovery on an OSN device: the function can end up
waiting for the recovery thread itself to finish, so the caller blocks forever.
This patch stores the recovery thread's task pointer when recovery is invoked
and checks it in qeth_wait_for_threads(), returning immediately when the
caller is the recovery thread.
Signed-off-by: Stefan Raspl <raspl@linux.vnet.ibm.com>
Signed-off-by: Frank Blaschka <blaschka@linux.vnet.ibm.com>
Reviewed-by: Ursula Braun <ursula.braun@de.ibm.com>
---
drivers/s390/net/qeth_core.h | 3 +++
drivers/s390/net/qeth_core_main.c | 19 +++++++++++++++++++
drivers/s390/net/qeth_l2_main.c | 2 ++
drivers/s390/net/qeth_l3_main.c | 2 ++
4 files changed, 26 insertions(+)
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -769,6 +769,7 @@ struct qeth_card {
unsigned long thread_start_mask;
unsigned long thread_allowed_mask;
unsigned long thread_running_mask;
+ struct task_struct *recovery_task;
spinlock_t ip_lock;
struct list_head ip_list;
struct list_head *ip_tbd_list;
@@ -862,6 +863,8 @@ extern struct qeth_card_list_struct qeth
extern struct kmem_cache *qeth_core_header_cache;
extern struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS];
+void qeth_set_recovery_task(struct qeth_card *);
+void qeth_clear_recovery_task(struct qeth_card *);
void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int);
int qeth_threads_running(struct qeth_card *, unsigned long);
int qeth_wait_for_threads(struct qeth_card *, unsigned long);
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -177,6 +177,23 @@ const char *qeth_get_cardname_short(stru
return "n/a";
}
+void qeth_set_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = current;
+}
+EXPORT_SYMBOL_GPL(qeth_set_recovery_task);
+
+void qeth_clear_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = NULL;
+}
+EXPORT_SYMBOL_GPL(qeth_clear_recovery_task);
+
+static bool qeth_is_recovery_task(struct qeth_card *card)
+{
+ return (card->recovery_task == current);
+}
+
void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads,
int clear_start_mask)
{
@@ -205,6 +222,8 @@ EXPORT_SYMBOL_GPL(qeth_threads_running);
int qeth_wait_for_threads(struct qeth_card *card, unsigned long threads)
{
+ if (qeth_is_recovery_task(card))
+ return 0;
return wait_event_interruptible(card->wait_q,
qeth_threads_running(card, threads) == 0);
}
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -1143,6 +1143,7 @@ static int qeth_l2_recover(void *ptr)
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l2_set_offline(card->gdev, 1);
rc = __qeth_l2_set_online(card->gdev, 1);
if (!rc)
@@ -1153,6 +1154,7 @@ static int qeth_l2_recover(void *ptr)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -3515,6 +3515,7 @@ static int qeth_l3_recover(void *ptr)
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l3_set_offline(card->gdev, 1);
rc = __qeth_l3_set_online(card->gdev, 1);
if (!rc)
@@ -3525,6 +3526,7 @@ static int qeth_l3_recover(void *ptr)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
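To see the pattern in isolation, here is a minimal userspace sketch of the
same idea, with pthreads standing in for struct task_struct and current. All
names in it (struct card, wait_for_threads(), ...) are hypothetical stand-ins
rather than the kernel interface, and since a pthread_t cannot portably be
compared against NULL, an explicit flag replaces the kernel's NULL task
pointer:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for struct qeth_card: only the fields the
 * sketch needs. */
struct card {
	pthread_t recovery_task;
	bool recovery_active;
};

static void set_recovery_task(struct card *card)
{
	card->recovery_task = pthread_self();
	card->recovery_active = true;
}

static void clear_recovery_task(struct card *card)
{
	card->recovery_active = false;
}

static bool is_recovery_task(const struct card *card)
{
	return card->recovery_active &&
	       pthread_equal(card->recovery_task, pthread_self());
}

/* Analogue of qeth_wait_for_threads(): the recovery thread must not
 * block waiting for the recovery thread to finish -- that is the
 * self-deadlock the patch breaks by returning early. */
static int wait_for_threads(struct card *card)
{
	if (is_recovery_task(card))
		return 0;
	/* ... otherwise block until the requested threads have stopped ... */
	return 0;
}

int main(void)
{
	struct card card = { .recovery_active = false };

	set_recovery_task(&card);	/* recovery begins on this thread */
	printf("wait from recovery thread -> %d (no deadlock)\n",
	       wait_for_threads(&card));
	clear_recovery_task(&card);
	return 0;
}

The essential move is the same as in the patch: a wait helper reachable from
the recovery path must recognize when the caller is the recovery thread and
return immediately instead of waiting for itself.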
* Re: [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices
From: Eric Dumazet @ 2013-04-02 16:13 UTC
To: frank.blaschka; +Cc: davem, netdev, linux-s390, Stefan Raspl
On Tue, 2013-04-02 at 12:56 +0200, frank.blaschka@de.ibm.com wrote:
> plain text document attachment (602-qeth-thread-deadlock.diff)
> From: Stefan Raspl <raspl@linux.vnet.ibm.com>
> +static bool qeth_is_recovery_task(struct qeth_card *card)
> +{
> + return (card->recovery_task == current);
> +}
> +
static bool qeth_is_recovery_task(const struct qeth_card *card)
{
return card->recovery_task == current;
}
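The difference from the patch version: the card argument becomes const, since
the helper only reads the field, and the redundant parentheses around the
comparison are dropped. The v2 posting below adopts both changes.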
* [patch 0/2] s390: network bug fixes for net [v2]
From: frank.blaschka @ 2013-04-08 8:19 UTC
To: davem; +Cc: netdev, linux-s390
Hi Dave,
here are the fixes for net again, including
feedback from Eric (Thx!)
shortlog:
Ursula Braun (1)
af_iucv: fix recvmsg by replacing skb_pull() function
Stefan Raspl (1)
qeth: fix qeth_wait_for_threads() deadlock for OSN devices
Thanks,
Frank
* [patch 1/2] af_iucv: fix recvmsg by replacing skb_pull() function
From: frank.blaschka @ 2013-04-08 8:19 UTC
To: davem; +Cc: netdev, linux-s390, Ursula Braun
[-- Attachment #1: 601-af-iucv-skb-pull.diff --]
[-- Type: text/plain, Size: 5822 bytes --]
From: Ursula Braun <ursula.braun@de.ibm.com>
When receiving data messages, the "BUG_ON(skb->len < skb->data_len)" in
the skb_pull() function triggers a kernel panic.
Replace the skb_pull() logic with a per-skb offset, as advised by
Eric Dumazet.
Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: Frank Blaschka <blaschka@linux.vnet.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
---
include/net/iucv/af_iucv.h | 8 ++++++++
net/iucv/af_iucv.c | 34 ++++++++++++++++------------------
2 files changed, 24 insertions(+), 18 deletions(-)
--- a/include/net/iucv/af_iucv.h
+++ b/include/net/iucv/af_iucv.h
@@ -130,6 +130,14 @@ struct iucv_sock {
enum iucv_tx_notify n);
};
+struct iucv_skb_cb {
+ u32 class; /* target class of message */
+ u32 tag; /* tag associated with message */
+ u32 offset; /* offset for skb receival */
+};
+
+#define IUCV_SKB_CB(__skb) ((struct iucv_skb_cb *)&((__skb)->cb[0]))
+
/* iucv socket options (SOL_IUCV) */
#define SO_IPRMDATA_MSG 0x0080 /* send/recv IPRM_DATA msgs */
#define SO_MSGLIMIT 0x1000 /* get/set IUCV MSGLIMIT */
--- a/net/iucv/af_iucv.c
+++ b/net/iucv/af_iucv.c
@@ -49,12 +49,6 @@ static const u8 iprm_shutdown[8] =
#define TRGCLS_SIZE (sizeof(((struct iucv_message *)0)->class))
-/* macros to set/get socket control buffer at correct offset */
-#define CB_TAG(skb) ((skb)->cb) /* iucv message tag */
-#define CB_TAG_LEN (sizeof(((struct iucv_message *) 0)->tag))
-#define CB_TRGCLS(skb) ((skb)->cb + CB_TAG_LEN) /* iucv msg target class */
-#define CB_TRGCLS_LEN (TRGCLS_SIZE)
-
#define __iucv_sock_wait(sk, condition, timeo, ret) \
do { \
DEFINE_WAIT(__wait); \
@@ -1141,7 +1135,7 @@ static int iucv_sock_sendmsg(struct kioc
/* increment and save iucv message tag for msg_completion cbk */
txmsg.tag = iucv->send_tag++;
- memcpy(CB_TAG(skb), &txmsg.tag, CB_TAG_LEN);
+ IUCV_SKB_CB(skb)->tag = txmsg.tag;
if (iucv->transport == AF_IUCV_TRANS_HIPER) {
atomic_inc(&iucv->msg_sent);
@@ -1224,7 +1218,7 @@ static int iucv_fragment_skb(struct sock
return -ENOMEM;
/* copy target class to control buffer of new skb */
- memcpy(CB_TRGCLS(nskb), CB_TRGCLS(skb), CB_TRGCLS_LEN);
+ IUCV_SKB_CB(nskb)->class = IUCV_SKB_CB(skb)->class;
/* copy data fragment */
memcpy(nskb->data, skb->data + copied, size);
@@ -1256,7 +1250,7 @@ static void iucv_process_message(struct
/* store msg target class in the second 4 bytes of skb ctrl buffer */
/* Note: the first 4 bytes are reserved for msg tag */
- memcpy(CB_TRGCLS(skb), &msg->class, CB_TRGCLS_LEN);
+ IUCV_SKB_CB(skb)->class = msg->class;
/* check for special IPRM messages (e.g. iucv_sock_shutdown) */
if ((msg->flags & IUCV_IPRMDATA) && len > 7) {
@@ -1292,6 +1286,7 @@ static void iucv_process_message(struct
}
}
+ IUCV_SKB_CB(skb)->offset = 0;
if (sock_queue_rcv_skb(sk, skb))
skb_queue_head(&iucv_sk(sk)->backlog_skb_q, skb);
}
@@ -1327,6 +1322,7 @@ static int iucv_sock_recvmsg(struct kioc
unsigned int copied, rlen;
struct sk_buff *skb, *rskb, *cskb;
int err = 0;
+ u32 offset;
msg->msg_namelen = 0;
@@ -1348,13 +1344,14 @@ static int iucv_sock_recvmsg(struct kioc
return err;
}
- rlen = skb->len; /* real length of skb */
+ offset = IUCV_SKB_CB(skb)->offset;
+ rlen = skb->len - offset; /* real length of skb */
copied = min_t(unsigned int, rlen, len);
if (!rlen)
sk->sk_shutdown = sk->sk_shutdown | RCV_SHUTDOWN;
cskb = skb;
- if (skb_copy_datagram_iovec(cskb, 0, msg->msg_iov, copied)) {
+ if (skb_copy_datagram_iovec(cskb, offset, msg->msg_iov, copied)) {
if (!(flags & MSG_PEEK))
skb_queue_head(&sk->sk_receive_queue, skb);
return -EFAULT;
@@ -1372,7 +1369,8 @@ static int iucv_sock_recvmsg(struct kioc
* get the trgcls from the control buffer of the skb due to
* fragmentation of original iucv message. */
err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS,
- CB_TRGCLS_LEN, CB_TRGCLS(skb));
+ sizeof(IUCV_SKB_CB(skb)->class),
+ (void *)&IUCV_SKB_CB(skb)->class);
if (err) {
if (!(flags & MSG_PEEK))
skb_queue_head(&sk->sk_receive_queue, skb);
@@ -1384,9 +1382,8 @@ static int iucv_sock_recvmsg(struct kioc
/* SOCK_STREAM: re-queue skb if it contains unreceived data */
if (sk->sk_type == SOCK_STREAM) {
- skb_pull(skb, copied);
- if (skb->len) {
- skb_queue_head(&sk->sk_receive_queue, skb);
+ if (copied < rlen) {
+ IUCV_SKB_CB(skb)->offset = offset + copied;
goto done;
}
}
@@ -1405,6 +1402,7 @@ static int iucv_sock_recvmsg(struct kioc
spin_lock_bh(&iucv->message_q.lock);
rskb = skb_dequeue(&iucv->backlog_skb_q);
while (rskb) {
+ IUCV_SKB_CB(rskb)->offset = 0;
if (sock_queue_rcv_skb(sk, rskb)) {
skb_queue_head(&iucv->backlog_skb_q,
rskb);
@@ -1832,7 +1830,7 @@ static void iucv_callback_txdone(struct
spin_lock_irqsave(&list->lock, flags);
while (list_skb != (struct sk_buff *)list) {
- if (!memcmp(&msg->tag, CB_TAG(list_skb), CB_TAG_LEN)) {
+ if (msg->tag == IUCV_SKB_CB(list_skb)->tag) {
this = list_skb;
break;
}
@@ -2093,6 +2091,7 @@ static int afiucv_hs_callback_rx(struct
skb_pull(skb, sizeof(struct af_iucv_trans_hdr));
skb_reset_transport_header(skb);
skb_reset_network_header(skb);
+ IUCV_SKB_CB(skb)->offset = 0;
spin_lock(&iucv->message_q.lock);
if (skb_queue_empty(&iucv->backlog_skb_q)) {
if (sock_queue_rcv_skb(sk, skb)) {
@@ -2197,8 +2196,7 @@ static int afiucv_hs_rcv(struct sk_buff
/* fall through and receive zero length data */
case 0:
/* plain data frame */
- memcpy(CB_TRGCLS(skb), &trans_hdr->iucv_hdr.class,
- CB_TRGCLS_LEN);
+ IUCV_SKB_CB(skb)->class = trans_hdr->iucv_hdr.class;
err = afiucv_hs_callback_rx(sk, skb);
break;
default:
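The heart of the fix is replacing the destructive skb_pull() with a stored
per-skb offset, so a partially consumed skb can stay queued with its data
untouched. A minimal userspace sketch of that technique follows; struct buf
and buf_recv() are hypothetical stand-ins, while the kernel keeps the offset
in the skb control buffer via IUCV_SKB_CB(skb)->offset:

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for an skb: the data stays in place, and a
 * per-buffer offset records how much the receiver has consumed. */
struct buf {
	const char *data;
	size_t len;
	size_t offset;	/* analogue of IUCV_SKB_CB(skb)->offset */
};

/* Copy up to 'want' bytes without modifying the buffer contents.
 * The caller keeps the buffer queued while offset < len, just as the
 * patch leaves a partially read skb on the receive queue. */
static size_t buf_recv(struct buf *b, char *out, size_t want)
{
	size_t rlen = b->len - b->offset;	/* remaining "real length" */
	size_t copied = want < rlen ? want : rlen;

	memcpy(out, b->data + b->offset, copied);
	b->offset += copied;
	return copied;
}

int main(void)
{
	struct buf b = { .data = "hello world", .len = 11, .offset = 0 };
	char out[16];

	while (b.offset < b.len) {
		size_t n = buf_recv(&b, out, 5);
		printf("read %zu byte(s): \"%.*s\", offset now %zu/%zu\n",
		       n, (int)n, out, b.offset, b.len);
	}
	return 0;
}

Because nothing is pulled off the front of the buffer, the sketch has no
equivalent of the BUG_ON(skb->len < skb->data_len) path that skb_pull() hit
in the reported panic.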
* [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices
From: frank.blaschka @ 2013-04-08 8:19 UTC
To: davem; +Cc: netdev, linux-s390, Stefan Raspl
[-- Attachment #1: 602-qeth-thread-deadlock.diff --]
[-- Type: text/plain, Size: 3942 bytes --]
From: Stefan Raspl <raspl@linux.vnet.ibm.com>
Any recovery thread will deadlock when calling qeth_wait_for_threads(), most
notably when triggering a recovery on an OSN device: the function can end up
waiting for the recovery thread itself to finish, so the caller blocks forever.
This patch stores the recovery thread's task pointer when recovery is invoked
and checks it in qeth_wait_for_threads(), returning immediately when the
caller is the recovery thread.
Signed-off-by: Stefan Raspl <raspl@linux.vnet.ibm.com>
Signed-off-by: Frank Blaschka <blaschka@linux.vnet.ibm.com>
Reviewed-by: Ursula Braun <ursula.braun@de.ibm.com>
---
drivers/s390/net/qeth_core.h | 3 +++
drivers/s390/net/qeth_core_main.c | 19 +++++++++++++++++++
drivers/s390/net/qeth_l2_main.c | 2 ++
drivers/s390/net/qeth_l3_main.c | 2 ++
4 files changed, 26 insertions(+)
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -769,6 +769,7 @@ struct qeth_card {
unsigned long thread_start_mask;
unsigned long thread_allowed_mask;
unsigned long thread_running_mask;
+ struct task_struct *recovery_task;
spinlock_t ip_lock;
struct list_head ip_list;
struct list_head *ip_tbd_list;
@@ -862,6 +863,8 @@ extern struct qeth_card_list_struct qeth
extern struct kmem_cache *qeth_core_header_cache;
extern struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS];
+void qeth_set_recovery_task(struct qeth_card *);
+void qeth_clear_recovery_task(struct qeth_card *);
void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int);
int qeth_threads_running(struct qeth_card *, unsigned long);
int qeth_wait_for_threads(struct qeth_card *, unsigned long);
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -177,6 +177,23 @@ const char *qeth_get_cardname_short(stru
return "n/a";
}
+void qeth_set_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = current;
+}
+EXPORT_SYMBOL_GPL(qeth_set_recovery_task);
+
+void qeth_clear_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = NULL;
+}
+EXPORT_SYMBOL_GPL(qeth_clear_recovery_task);
+
+static bool qeth_is_recovery_task(const struct qeth_card *card)
+{
+ return card->recovery_task == current;
+}
+
void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads,
int clear_start_mask)
{
@@ -205,6 +222,8 @@ EXPORT_SYMBOL_GPL(qeth_threads_running);
int qeth_wait_for_threads(struct qeth_card *card, unsigned long threads)
{
+ if (qeth_is_recovery_task(card))
+ return 0;
return wait_event_interruptible(card->wait_q,
qeth_threads_running(card, threads) == 0);
}
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -1143,6 +1143,7 @@ static int qeth_l2_recover(void *ptr)
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l2_set_offline(card->gdev, 1);
rc = __qeth_l2_set_online(card->gdev, 1);
if (!rc)
@@ -1153,6 +1154,7 @@ static int qeth_l2_recover(void *ptr)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -3515,6 +3515,7 @@ static int qeth_l3_recover(void *ptr)
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l3_set_offline(card->gdev, 1);
rc = __qeth_l3_set_online(card->gdev, 1);
if (!rc)
@@ -3525,6 +3526,7 @@ static int qeth_l3_recover(void *ptr)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
* Re: [patch 1/2] af_iucv: fix recvmsg by replacing skb_pull() function
From: Eric Dumazet @ 2013-04-08 20:16 UTC
To: frank.blaschka; +Cc: davem, netdev, linux-s390, Ursula Braun
On Mon, 2013-04-08 at 10:19 +0200, frank.blaschka@de.ibm.com wrote:
> plain text document attachment (601-af-iucv-skb-pull.diff)
> From: Ursula Braun <ursula.braun@de.ibm.com>
>
> When receiving data messages, the "BUG_ON(skb->len < skb->data_len)" in
> the skb_pull() function triggers a kernel panic.
>
> Replace the skb_pull logic by a per skb offset as advised by
> Eric Dumazet.
>
> Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com>
> Signed-off-by: Frank Blaschka <blaschka@linux.vnet.ibm.com>
> Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
> ---
This is indeed a nicer patch ;)
Acked-by: Eric Dumazet <edumazet@google.com>
* Re: [patch 0/2] s390: network bug fixes for net [v2]
From: David Miller @ 2013-04-08 21:17 UTC
To: frank.blaschka; +Cc: netdev, linux-s390
From: frank.blaschka@de.ibm.com
Date: Mon, 08 Apr 2013 10:19:25 +0200
> here are the fixes for net again, including
> feedback from Eric (Thx!)
Both applied, thanks.