public inbox for linux-kernel@vger.kernel.org
* [PATCH] ipmi: Remove smi_msg from waiting_rcv_msgs list before handle_one_recv_msg()
@ 2016-06-10  4:31 Junichi Nomura
  2016-06-10  4:36 ` Corey Minyard
  0 siblings, 1 reply; 2+ messages in thread
From: Junichi Nomura @ 2016-06-10  4:31 UTC (permalink / raw)
  To: minyard@acm.org, openipmi-developer@lists.sourceforge.net
  Cc: linux-kernel@vger.kernel.org, cminyard@mvista.com

Commit 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for
SMI interfaces") changed handle_new_recv_msgs() to call handle_one_recv_msg()
for an smi_msg while the smi_msg is still linked on the waiting_rcv_msgs
list. That could lead to the following list corruption problems:

1) A low-level function treats the smi_msg as not being on any list

  handle_one_recv_msg() could end up calling smi_send(), which
  assumes the msg is not on any list.

  For example, the following sequence could corrupt the list by
  doing list_add_tail() on an entry that is still linked to another list.

    handle_new_recv_msgs()
      msg = list_entry(waiting_rcv_msgs)
      handle_one_recv_msg(msg)
        handle_ipmb_get_msg_cmd(msg)
          smi_send(msg)
            spin_lock(xmit_msgs_lock)
            list_add_tail(msg)
            spin_unlock(xmit_msgs_lock)

2) A race between multiple handle_new_recv_msgs() instances

  handle_new_recv_msgs() releases waiting_rcv_msgs_lock before calling
  handle_one_recv_msg(), then retakes the lock and calls list_del() on
  the msg.

  If another caller enters handle_new_recv_msgs() during the window
  shown below, list_del() will be done twice on the same smi_msg.

  handle_new_recv_msgs()
    spin_lock(waiting_rcv_msgs_lock)
    msg = list_entry(waiting_rcv_msgs)
    spin_unlock(waiting_rcv_msgs_lock)
  | 
  | handle_one_recv_msg(msg)
  | 
    spin_lock(waiting_rcv_msgs_lock)
    list_del(msg)
    spin_unlock(waiting_rcv_msgs_lock)

Fixes: 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for SMI interfaces")
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>

diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index 94fb407..94e4a88 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -3820,6 +3820,7 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
 	while (!list_empty(&intf->waiting_rcv_msgs)) {
 		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
 				     struct ipmi_smi_msg, link);
+		list_del(&smi_msg->link);
 		if (!run_to_completion)
 			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
 					       flags);
@@ -3831,9 +3832,9 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
 			 * To preserve message order, quit if we
 			 * can't handle a message.
 			 */
+			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
 			break;
 		} else {
-			list_del(&smi_msg->link);
 			if (rv == 0)
 				/* Message handled */
 				ipmi_free_smi_msg(smi_msg);


* Re: [PATCH] ipmi: Remove smi_msg from waiting_rcv_msgs list before handle_one_recv_msg()
  2016-06-10  4:31 [PATCH] ipmi: Remove smi_msg from waiting_rcv_msgs list before handle_one_recv_msg() Junichi Nomura
@ 2016-06-10  4:36 ` Corey Minyard
  0 siblings, 0 replies; 2+ messages in thread
From: Corey Minyard @ 2016-06-10  4:36 UTC (permalink / raw)
  To: Junichi Nomura, openipmi-developer@lists.sourceforge.net
  Cc: linux-kernel@vger.kernel.org, cminyard@mvista.com

I actually just wrote this exact patch, moments ago.  But you deserve
the credit, so I'll use yours :).

-corey

On 06/09/2016 11:31 PM, Junichi Nomura wrote:
> Commit 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for
> SMI interfaces") changed handle_new_recv_msgs() to call handle_one_recv_msg()
> [...]

