* [patch 0/2] s390: network bug fixes for net
@ 2013-04-02 10:56 frank.blaschka
2013-04-02 10:56 ` [patch 1/2] af_iucv: recvmsg: use correct skb_pull() function frank.blaschka
2013-04-02 10:56 ` [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices frank.blaschka
0 siblings, 2 replies; 6+ messages in thread
From: frank.blaschka @ 2013-04-02 10:56 UTC (permalink / raw)
To: davem; +Cc: netdev, linux-s390
Hi Dave,
here are 2 bug fixes for net.
shortlog:
Hendrik Brueckner (1)
af_iucv: recvmsg: use correct skb_pull() function
Stefan Raspl (1)
qeth: fix qeth_wait_for_threads() deadlock for OSN devices
Thanks,
Frank
* [patch 1/2] af_iucv: recvmsg: use correct skb_pull() function
2013-04-02 10:56 [patch 0/2] s390: network bug fixes for net frank.blaschka
@ 2013-04-02 10:56 ` frank.blaschka
2013-04-02 15:37 ` Eric Dumazet
2013-04-02 10:56 ` [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices frank.blaschka
1 sibling, 1 reply; 6+ messages in thread
From: frank.blaschka @ 2013-04-02 10:56 UTC (permalink / raw)
To: davem; +Cc: netdev, linux-s390, Hendrik Brueckner
[-- Attachment #1: 601-af-iucv-skb-pull.diff --]
[-- Type: text/plain, Size: 991 bytes --]
From: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
When receiving data messages, the "BUG_ON(skb->len < skb->data_len)" in
the skb_pull() function triggers a kernel panic.
Check whether the skb uses paged data (is non-linear) and, if so, use the
pskb_pull() function; use skb_pull() only for linear skbs.
Reviewed-by: Ursula Braun <ursula.braun@de.ibm.com>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Frank Blaschka <blaschka@linux.vnet.ibm.com>
---
net/iucv/af_iucv.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/net/iucv/af_iucv.c
+++ b/net/iucv/af_iucv.c
@@ -1382,7 +1382,10 @@ static int iucv_sock_recvmsg(struct kioc
/* SOCK_STREAM: re-queue skb if it contains unreceived data */
if (sk->sk_type == SOCK_STREAM) {
- skb_pull(skb, copied);
+ if (skb_is_nonlinear(skb))
+ pskb_pull(skb, copied);
+ else
+ skb_pull(skb, copied);
if (skb->len) {
skb_queue_head(&sk->sk_receive_queue, skb);
goto done;
* [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices
2013-04-02 10:56 [patch 0/2] s390: network bug fixes for net frank.blaschka
2013-04-02 10:56 ` [patch 1/2] af_iucv: recvmsg: use correct skb_pull() function frank.blaschka
@ 2013-04-02 10:56 ` frank.blaschka
2013-04-02 16:13 ` Eric Dumazet
1 sibling, 1 reply; 6+ messages in thread
From: frank.blaschka @ 2013-04-02 10:56 UTC (permalink / raw)
To: davem; +Cc: netdev, linux-s390, Stefan Raspl
[-- Attachment #1: 602-qeth-thread-deadlock.diff --]
[-- Type: text/plain, Size: 3938 bytes --]
From: Stefan Raspl <raspl@linux.vnet.ibm.com>
A recovery thread deadlocks when it calls qeth_wait_for_threads(), most
notably when a recovery is triggered on an OSN device.
This patch stores the recovery thread's task pointer when recovery is
invoked and checks it in qeth_wait_for_threads(), so the recovery thread
does not wait for itself.
Signed-off-by: Stefan Raspl <raspl@linux.vnet.ibm.com>
Signed-off-by: Frank Blaschka <blaschka@linux.vnet.ibm.com>
Reviewed-by: Ursula Braun <ursula.braun@de.ibm.com>
---
drivers/s390/net/qeth_core.h | 3 +++
drivers/s390/net/qeth_core_main.c | 19 +++++++++++++++++++
drivers/s390/net/qeth_l2_main.c | 2 ++
drivers/s390/net/qeth_l3_main.c | 2 ++
4 files changed, 26 insertions(+)
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -769,6 +769,7 @@ struct qeth_card {
unsigned long thread_start_mask;
unsigned long thread_allowed_mask;
unsigned long thread_running_mask;
+ struct task_struct *recovery_task;
spinlock_t ip_lock;
struct list_head ip_list;
struct list_head *ip_tbd_list;
@@ -862,6 +863,8 @@ extern struct qeth_card_list_struct qeth
extern struct kmem_cache *qeth_core_header_cache;
extern struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS];
+void qeth_set_recovery_task(struct qeth_card *);
+void qeth_clear_recovery_task(struct qeth_card *);
void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int);
int qeth_threads_running(struct qeth_card *, unsigned long);
int qeth_wait_for_threads(struct qeth_card *, unsigned long);
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -177,6 +177,23 @@ const char *qeth_get_cardname_short(stru
return "n/a";
}
+void qeth_set_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = current;
+}
+EXPORT_SYMBOL_GPL(qeth_set_recovery_task);
+
+void qeth_clear_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = NULL;
+}
+EXPORT_SYMBOL_GPL(qeth_clear_recovery_task);
+
+static bool qeth_is_recovery_task(struct qeth_card *card)
+{
+ return (card->recovery_task == current);
+}
+
void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads,
int clear_start_mask)
{
@@ -205,6 +222,8 @@ EXPORT_SYMBOL_GPL(qeth_threads_running);
int qeth_wait_for_threads(struct qeth_card *card, unsigned long threads)
{
+ if (qeth_is_recovery_task(card))
+ return 0;
return wait_event_interruptible(card->wait_q,
qeth_threads_running(card, threads) == 0);
}
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -1143,6 +1143,7 @@ static int qeth_l2_recover(void *ptr)
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l2_set_offline(card->gdev, 1);
rc = __qeth_l2_set_online(card->gdev, 1);
if (!rc)
@@ -1153,6 +1154,7 @@ static int qeth_l2_recover(void *ptr)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -3515,6 +3515,7 @@ static int qeth_l3_recover(void *ptr)
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l3_set_offline(card->gdev, 1);
rc = __qeth_l3_set_online(card->gdev, 1);
if (!rc)
@@ -3525,6 +3526,7 @@ static int qeth_l3_recover(void *ptr)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
* Re: [patch 1/2] af_iucv: recvmsg: use correct skb_pull() function
2013-04-02 10:56 ` [patch 1/2] af_iucv: recvmsg: use correct skb_pull() function frank.blaschka
@ 2013-04-02 15:37 ` Eric Dumazet
0 siblings, 0 replies; 6+ messages in thread
From: Eric Dumazet @ 2013-04-02 15:37 UTC (permalink / raw)
To: frank.blaschka; +Cc: davem, netdev, linux-s390, Hendrik Brueckner
On Tue, 2013-04-02 at 12:56 +0200, frank.blaschka@de.ibm.com wrote:
> plain text document attachment (601-af-iucv-skb-pull.diff)
> From: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
>
> When receiving data messages, the "BUG_ON(skb->len < skb->data_len)" in
> the skb_pull() function triggers a kernel panic.
>
> Check whether the skb uses paged data (is non-linear) and, if so, use the
> pskb_pull() function; use skb_pull() only for linear skbs.
>
> Reviewed-by: Ursula Braun <ursula.braun@de.ibm.com>
> Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
> Signed-off-by: Frank Blaschka <blaschka@linux.vnet.ibm.com>
> ---
> net/iucv/af_iucv.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> --- a/net/iucv/af_iucv.c
> +++ b/net/iucv/af_iucv.c
> @@ -1382,7 +1382,10 @@ static int iucv_sock_recvmsg(struct kioc
>
> /* SOCK_STREAM: re-queue skb if it contains unreceived data */
> if (sk->sk_type == SOCK_STREAM) {
> - skb_pull(skb, copied);
> + if (skb_is_nonlinear(skb))
> + pskb_pull(skb, copied);
> + else
> + skb_pull(skb, copied);
> if (skb->len) {
> skb_queue_head(&sk->sk_receive_queue, skb);
> goto done;
That can't be right. You cannot ignore the pskb_pull() return value.
It's also not very efficient.
I would advise using a per-skb offset, to avoid skb_pull().
* Re: [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices
2013-04-02 10:56 ` [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices frank.blaschka
@ 2013-04-02 16:13 ` Eric Dumazet
0 siblings, 0 replies; 6+ messages in thread
From: Eric Dumazet @ 2013-04-02 16:13 UTC (permalink / raw)
To: frank.blaschka; +Cc: davem, netdev, linux-s390, Stefan Raspl
On Tue, 2013-04-02 at 12:56 +0200, frank.blaschka@de.ibm.com wrote:
> plain text document attachment (602-qeth-thread-deadlock.diff)
> From: Stefan Raspl <raspl@linux.vnet.ibm.com>
> +static bool qeth_is_recovery_task(struct qeth_card *card)
> +{
> + return (card->recovery_task == current);
> +}
> +
static bool qeth_is_recovery_task(const struct qeth_card *card)
{
return card->recovery_task == current;
}
* [patch 2/2] qeth: fix qeth_wait_for_threads() deadlock for OSN devices
2013-04-08 8:19 [patch 0/2] s390: network bug fixes for net [v2] frank.blaschka
@ 2013-04-08 8:19 ` frank.blaschka
0 siblings, 0 replies; 6+ messages in thread
From: frank.blaschka @ 2013-04-08 8:19 UTC (permalink / raw)
To: davem; +Cc: netdev, linux-s390, Stefan Raspl
[-- Attachment #1: 602-qeth-thread-deadlock.diff --]
[-- Type: text/plain, Size: 3942 bytes --]
From: Stefan Raspl <raspl@linux.vnet.ibm.com>
A recovery thread deadlocks when it calls qeth_wait_for_threads(), most
notably when a recovery is triggered on an OSN device.
This patch stores the recovery thread's task pointer when recovery is
invoked and checks it in qeth_wait_for_threads(), so the recovery thread
does not wait for itself.
Signed-off-by: Stefan Raspl <raspl@linux.vnet.ibm.com>
Signed-off-by: Frank Blaschka <blaschka@linux.vnet.ibm.com>
Reviewed-by: Ursula Braun <ursula.braun@de.ibm.com>
---
drivers/s390/net/qeth_core.h | 3 +++
drivers/s390/net/qeth_core_main.c | 19 +++++++++++++++++++
drivers/s390/net/qeth_l2_main.c | 2 ++
drivers/s390/net/qeth_l3_main.c | 2 ++
4 files changed, 26 insertions(+)
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -769,6 +769,7 @@ struct qeth_card {
unsigned long thread_start_mask;
unsigned long thread_allowed_mask;
unsigned long thread_running_mask;
+ struct task_struct *recovery_task;
spinlock_t ip_lock;
struct list_head ip_list;
struct list_head *ip_tbd_list;
@@ -862,6 +863,8 @@ extern struct qeth_card_list_struct qeth
extern struct kmem_cache *qeth_core_header_cache;
extern struct qeth_dbf_info qeth_dbf[QETH_DBF_INFOS];
+void qeth_set_recovery_task(struct qeth_card *);
+void qeth_clear_recovery_task(struct qeth_card *);
void qeth_set_allowed_threads(struct qeth_card *, unsigned long , int);
int qeth_threads_running(struct qeth_card *, unsigned long);
int qeth_wait_for_threads(struct qeth_card *, unsigned long);
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -177,6 +177,23 @@ const char *qeth_get_cardname_short(stru
return "n/a";
}
+void qeth_set_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = current;
+}
+EXPORT_SYMBOL_GPL(qeth_set_recovery_task);
+
+void qeth_clear_recovery_task(struct qeth_card *card)
+{
+ card->recovery_task = NULL;
+}
+EXPORT_SYMBOL_GPL(qeth_clear_recovery_task);
+
+static bool qeth_is_recovery_task(const struct qeth_card *card)
+{
+ return card->recovery_task == current;
+}
+
void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads,
int clear_start_mask)
{
@@ -205,6 +222,8 @@ EXPORT_SYMBOL_GPL(qeth_threads_running);
int qeth_wait_for_threads(struct qeth_card *card, unsigned long threads)
{
+ if (qeth_is_recovery_task(card))
+ return 0;
return wait_event_interruptible(card->wait_q,
qeth_threads_running(card, threads) == 0);
}
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -1143,6 +1143,7 @@ static int qeth_l2_recover(void *ptr)
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l2_set_offline(card->gdev, 1);
rc = __qeth_l2_set_online(card->gdev, 1);
if (!rc)
@@ -1153,6 +1154,7 @@ static int qeth_l2_recover(void *ptr)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -3515,6 +3515,7 @@ static int qeth_l3_recover(void *ptr)
QETH_CARD_TEXT(card, 2, "recover2");
dev_warn(&card->gdev->dev,
"A recovery process has been started for the device\n");
+ qeth_set_recovery_task(card);
__qeth_l3_set_offline(card->gdev, 1);
rc = __qeth_l3_set_online(card->gdev, 1);
if (!rc)
@@ -3525,6 +3526,7 @@ static int qeth_l3_recover(void *ptr)
dev_warn(&card->gdev->dev, "The qeth device driver "
"failed to recover an error on the device\n");
}
+ qeth_clear_recovery_task(card);
qeth_clear_thread_start_bit(card, QETH_RECOVER_THREAD);
qeth_clear_thread_running_bit(card, QETH_RECOVER_THREAD);
return 0;