From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751322AbcFJEhH (ORCPT );
	Fri, 10 Jun 2016 00:37:07 -0400
Received: from mail-oi0-f67.google.com ([209.85.218.67]:32775 "EHLO
	mail-oi0-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750756AbcFJEhF (ORCPT );
	Fri, 10 Jun 2016 00:37:05 -0400
Reply-To: minyard@acm.org
Subject: Re: [PATCH] ipmi: Remove smi_msg from waiting_rcv_msgs list before
 handle_one_recv_msg()
References: <7727de24-6867-1a37-3fed-1bdb1fd6084f@ce.jp.nec.com>
To: Junichi Nomura, "openipmi-developer@lists.sourceforge.net"
Cc: "linux-kernel@vger.kernel.org", "cminyard@mvista.com"
From: Corey Minyard
Message-ID: <575A43E7.2090701@acm.org>
Date: Thu, 9 Jun 2016 23:36:55 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
 Thunderbird/38.8.0
MIME-Version: 1.0
In-Reply-To: <7727de24-6867-1a37-3fed-1bdb1fd6084f@ce.jp.nec.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

I actually just wrote this exact patch, moments ago.  But you deserve
credit, so I'll use yours :).

-corey

On 06/09/2016 11:31 PM, Junichi Nomura wrote:
> Commit 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for
> SMI interfaces") changed handle_new_recv_msgs() to call
> handle_one_recv_msg() for an smi_msg while the smi_msg is still linked
> on the waiting_rcv_msgs list. That can lead to the following list
> corruption problems:
>
> 1) low-level function treats smi_msg as not connected to a list
>
>    handle_one_recv_msg() could end up calling smi_send(), which
>    assumes the msg is not on any list.
>
>    For example, the following sequence corrupts the list by
>    doing list_add_tail() for an entry still linked on another list.
>
>    handle_new_recv_msgs()
>      msg = list_entry(waiting_rcv_msgs)
>      handle_one_recv_msg(msg)
>        handle_ipmb_get_msg_cmd(msg)
>          smi_send(msg)
>            spin_lock(xmit_msgs_lock)
>            list_add_tail(msg)
>            spin_unlock(xmit_msgs_lock)
>
> 2) race between multiple handle_new_recv_msgs() instances
>
>    handle_new_recv_msgs() releases waiting_rcv_msgs_lock before calling
>    handle_one_recv_msg(), then retakes the lock and does list_del().
>
>    If another context enters handle_new_recv_msgs() during the window
>    shown below, list_del() will be done twice for the same smi_msg.
>
>    handle_new_recv_msgs()
>      spin_lock(waiting_rcv_msgs_lock)
>      msg = list_entry(waiting_rcv_msgs)
>      spin_unlock(waiting_rcv_msgs_lock)
>        |
>        | handle_one_recv_msg(msg)
>        |
>      spin_lock(waiting_rcv_msgs_lock)
>      list_del(msg)
>      spin_unlock(waiting_rcv_msgs_lock)
>
> Fixes: 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for SMI interfaces")
> Signed-off-by: Jun'ichi Nomura
>
> diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
> index 94fb407..94e4a88 100644
> --- a/drivers/char/ipmi/ipmi_msghandler.c
> +++ b/drivers/char/ipmi/ipmi_msghandler.c
> @@ -3820,6 +3820,7 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
>  	while (!list_empty(&intf->waiting_rcv_msgs)) {
>  		smi_msg = list_entry(intf->waiting_rcv_msgs.next,
>  				     struct ipmi_smi_msg, link);
> +		list_del(&smi_msg->link);
>  		if (!run_to_completion)
>  			spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
>  					       flags);
> @@ -3831,9 +3832,9 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
>  			 * To preserve message order, quit if we
>  			 * can't handle a message.
>  			 */
> +			list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
>  			break;
>  		} else {
> -			list_del(&smi_msg->link);
>  			if (rv == 0)
>  				/* Message handled */
>  				ipmi_free_smi_msg(smi_msg);
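
For anyone following along, the pattern the patch establishes can be sketched in userspace with a minimal doubly-linked list. This is an illustration only, not the kernel's list.h or the driver's code: drain() and handle_one() are hypothetical stand-ins for handle_new_recv_msgs() and handle_one_recv_msg(), and locking is omitted. The key invariant it demonstrates is the one the patch enforces: an entry is unlinked with list_del() before the handler runs, so any path that re-links it (smi_send() in the real driver) operates on a free entry, and a deferred message is simply put back at the head to preserve ordering.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal circular doubly-linked list, mimicking the kernel's style. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
	/* Insert n right after h (at the head of the list). */
	n->next = h->next; n->prev = h;
	h->next->prev = n; h->next = n;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	/* Insert n right before h (at the tail of the list). */
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->next->prev = n->prev;
	n->prev->next = n->next;
	n->next = n->prev = NULL;	/* poison: entry is now off-list */
}

static int list_empty(const struct list_head *h) { return h->next == h; }

struct smi_msg {
	struct list_head link;	/* must stay the first member for the cast below */
	int deferred;		/* hypothetical flag: handler cannot take it yet */
};

/* Hypothetical handler: returns -1 to mean "can't handle now, requeue". */
static int handle_one(struct smi_msg *m) { return m->deferred ? -1 : 0; }

/*
 * The fixed loop shape: unlink first, handle, and re-link at the head
 * only on deferral so message order is preserved.  Returns the number
 * of messages handled.
 */
static int drain(struct list_head *waiting)
{
	int handled = 0;

	while (!list_empty(waiting)) {
		/* link is the first member, so the cast is valid C. */
		struct smi_msg *m = (struct smi_msg *)waiting->next;

		list_del(&m->link);	/* this context now owns m exclusively */
		if (handle_one(m) < 0) {
			/* Couldn't handle it: put it back where it was. */
			list_add(&m->link, waiting);
			break;
		}
		handled++;
	}
	return handled;
}
```

With three queued messages where the second is deferred, drain() handles the first, requeues the second at the head, and stops, which is exactly the ordering guarantee the comment in the patch ("To preserve message order, quit if we can't handle a message") is about.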