From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 6 Nov 2023 10:11:26 +0530
From: Manivannan Sadhasivam
To: Qiang Yu
Cc: mani@kernel.org, quic_jhugo@quicinc.com, mhi@lists.linux.dev,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	quic_cang@quicinc.com, quic_mrana@quicinc.com, Bhaumik Bhatt
Subject: Re: [PATCH v2 1/2] bus: mhi: host: Add spinlock to protect WP access when queueing TREs
Message-ID: <20231106044126.GA2474@thinkpad>
References: <1694594861-12691-1-git-send-email-quic_qianyu@quicinc.com>
 <1694594861-12691-2-git-send-email-quic_qianyu@quicinc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1694594861-12691-2-git-send-email-quic_qianyu@quicinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 13, 2023 at 04:47:40PM +0800, Qiang Yu wrote:
> From: Bhaumik Bhatt
> 
> Protect WP accesses such that multiple threads queueing buffers for
> incoming data do not race and access the same WP twice. Ensure the read
> and write locks for the channel are not taken in succession by dropping
> the read lock in parse_xfer_event(), so that a callback given to the
> client can queue buffers and acquire the write lock in the process. Any
> queueing of buffers must be done without the channel read lock held, as
> holding it can result in nested locking and a soft lockup.
> 

This change is doing two things:

1. Unlocking around xfer_cb() to prevent a potential lockup
2. Protecting mhi_gen_tre() against concurrent access

So you should split this into two patches and also add a Fixes tag if
appropriate.
- Mani

> Signed-off-by: Bhaumik Bhatt
> Signed-off-by: Qiang Yu
> ---
>  drivers/bus/mhi/host/main.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> index dcf627b..13c4b89 100644
> --- a/drivers/bus/mhi/host/main.c
> +++ b/drivers/bus/mhi/host/main.c
> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>  			mhi_del_ring_element(mhi_cntrl, tre_ring);
>  			local_rp = tre_ring->rp;
> 
> +			read_unlock_bh(&mhi_chan->lock);
>  			/* notify client */
>  			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> 
> @@ -667,6 +668,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>  					kfree(buf_info->cb_buf);
>  				}
>  			}
> +			read_lock_bh(&mhi_chan->lock);
>  		}
>  		break;
>  	} /* CC_EOT */
> @@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>  	int eot, eob, chain, bei;
>  	int ret;
> 
> +	/* Protect accesses for reading and incrementing WP */
> +	write_lock_bh(&mhi_chan->lock);
> +
>  	buf_ring = &mhi_chan->buf_ring;
>  	tre_ring = &mhi_chan->tre_ring;
> 
> @@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
> 
>  	if (!info->pre_mapped) {
>  		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
> -		if (ret)
> +		if (ret) {
> +			write_unlock_bh(&mhi_chan->lock);
>  			return ret;
> +		}
>  	}
> 
>  	eob = !!(flags & MHI_EOB);
> @@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>  	mhi_add_ring_element(mhi_cntrl, tre_ring);
>  	mhi_add_ring_element(mhi_cntrl, buf_ring);
> 
> +	write_unlock_bh(&mhi_chan->lock);
> +
>  	return 0;
>  }
> 
> -- 
> 2.7.4
> 

-- 
மணிவண்ணன் சதாசிவம்