From: Stanley Chu <stanley.chu@mediatek.com>
To: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: linux-scsi@vger.kernel.org, wsd_upstream@mediatek.com,
kuohong.wang@mediatek.com, stable@vger.kernel.org,
linux-mediatek@lists.infradead.org, matthias.bgg@gmail.com,
peter.wang@mediatek.com
Subject: Re: [PATCH v2 1/1] scsi: Synchronize request queue PM status only on successful resume
Date: Thu, 3 Jan 2019 17:06:21 +0800 [thread overview]
Message-ID: <1546506381.20657.15.camel@mtkswgap22> (raw)
In-Reply-To: <20190103082333.GL2469@lahna.fi.intel.com>
Hi Mika,
On Thu, 2019-01-03 at 10:23 +0200, Mika Westerberg wrote:
> Hi,
>
> On Thu, Jan 03, 2019 at 02:53:15PM +0800, stanley.chu@mediatek.com wrote:
> > From: Stanley Chu <stanley.chu@mediatek.com>
> >
> > The commit 356fd2663cff ("scsi: Set request queue runtime PM status
> > back to active on resume") fixed up the inconsistent RPM status between
> > request queue and device. However, changing the request queue RPM
> > status shall be done only on a successful resume; otherwise the status
> > may still be inconsistent, as below:
> >
> > Request queue: RPM_ACTIVE
> > Device: RPM_SUSPENDED
> >
> > This ends up in a soft lockup because requests can be submitted to
> > underlying devices while those devices and their required resources
> > are not resumed.
>
> It would be good to add some example of the soft lockup you are seeing
> here.
Thanks for the reminder. I will add the example below to the commit
message in v3.
For example: after the inconsistent status above occurs, an I/O request
can be submitted to the UFS device driver while required resources (such
as clocks) are not yet resumed, which triggers the warning below with
the following call stack:
WARN_ON(hba->clk_gating.state != CLKS_ON);
ufshcd_queuecommand
scsi_dispatch_cmd
scsi_request_fn
__blk_run_queue
cfq_insert_request
__elv_add_request
blk_flush_plug_list
blk_finish_plug
jbd2_journal_commit_transaction
kjournald2
All subsequent I/O requests may then hang because there is no response
from the storage host or device, and a soft lockup occurs in the system.
In the end, the system may crash in many ways.
>
> > Fixes: 356fd2663cff ("scsi: Set request queue runtime PM status
> > back to active on resume")
>
> You don't need to wrap this.
OK! Will fix it.
>
> The change itself looks fine.
Thanks.
Stanley
Thread overview:
2019-01-03  6:53 [PATCH v2] scsi: Synchronize request queue PM status only on successful resume Stanley Chu
2019-01-03  6:53 ` [PATCH v2 0/1] " Stanley Chu
2019-01-03  6:53 ` [PATCH v2 1/1] " Stanley Chu
2019-01-03  8:23   ` Mika Westerberg
2019-01-03  9:06     ` Stanley Chu [this message]