From: Keith Busch <kbusch@kernel.org>
To: John Garry <john.garry@huawei.com>
Cc: axboe@fb.com, sagi@grimberg.me,
Alexey Dobriyan <adobriyan@gmail.com>,
linux-nvme@lists.infradead.org, hch@lst.de
Subject: Re: [PATCH] nvme-pci: slimmer CQ head update
Date: Wed, 6 May 2020 06:47:01 -0600 [thread overview]
Message-ID: <20200506124701.GA54933@C02WT3WMHTD6> (raw)
In-Reply-To: <defb25c5-5ae5-5ff9-66db-efb129bd7743@huawei.com>
On Wed, May 06, 2020 at 12:03:35PM +0100, John Garry wrote:
> On 29/02/2020 05:53, Keith Busch wrote:
> > On Fri, Feb 28, 2020 at 09:45:19PM +0300, Alexey Dobriyan wrote:
> > > static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
> > > {
> > > - if (nvmeq->cq_head == nvmeq->q_depth - 1) {
> > > + if (++nvmeq->cq_head == nvmeq->q_depth) {
>
> I figure nvmeq->cq_head may momentarily transition to equal
> nvmeq->q_depth before being reset to 0, which causes an out-of-bounds
> access here:
>
> static inline bool nvme_cqe_pending(struct nvme_queue *nvmeq)
> {
> return (le16_to_cpu(nvmeq->cqes[nvmeq->cq_head].status) & 1) ==
> nvmeq->cq_phase;
> }
Thanks for the notice; your analysis sounds correct to me.
Ideally we wouldn't let the irq check happen while the threaded
handler is running, but that is a bit risky to introduce at this
point. I'm okay with reverting to fix this issue.
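For illustration, here is a standalone sketch of the problem and one way to
avoid it (this is a hypothetical userspace model, not the actual kernel
patch; struct and function names are made up for the example). The racy
variant increments the shared cq_head field in place, so it briefly holds
q_depth, an out-of-range index for the CQE array; the safe variant computes
the new head in a local variable and publishes only an in-range value with
a single store:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct nvme_queue (hypothetical). */
struct nvme_queue_sketch {
	uint16_t cq_head;
	uint16_t q_depth;
	uint8_t  cq_phase;
};

/*
 * Racy version: after ++cq_head and before the reset to 0, cq_head
 * equals q_depth, so a concurrent reader indexing cqes[cq_head]
 * (as nvme_cqe_pending() does) reads one past the end of the array.
 */
static void update_cq_head_racy(struct nvme_queue_sketch *q)
{
	if (++q->cq_head == q->q_depth) {
		q->cq_head = 0;
		q->cq_phase ^= 1;
	}
}

/*
 * Safe version: the wrap test runs on a local temporary, and the
 * shared field is written exactly once with an in-range value.
 */
static void update_cq_head_safe(struct nvme_queue_sketch *q)
{
	uint16_t next = q->cq_head + 1;

	if (next == q->q_depth) {
		q->cq_head = 0;
		q->cq_phase ^= 1;
	} else {
		q->cq_head = next;
	}
}
```

With q_depth = 4 and cq_head = 3, the safe version wraps to 0 and flips
the phase without the shared field ever holding 4; a non-wrapping update
simply advances the head. (A real fix would also need the compiler and
memory-ordering guarantees the kernel gets from READ_ONCE/WRITE_ONCE,
which this sketch omits.)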
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 31+ messages
2020-02-28 18:45 [PATCH] nvme-pci: slimmer CQ head update Alexey Dobriyan
2020-02-29 5:53 ` Keith Busch
2020-05-06 11:03 ` John Garry
2020-05-06 12:47 ` Keith Busch [this message]
2020-05-06 13:24 ` Alexey Dobriyan
2020-05-06 13:44 ` John Garry
2020-05-06 14:01 ` Alexey Dobriyan
2020-05-06 14:35 ` Christoph Hellwig
2020-05-06 16:26 ` John Garry
2020-05-06 16:31 ` Will Deacon
2020-05-06 16:52 ` Robin Murphy
2020-05-06 17:02 ` John Garry
2020-05-07 8:18 ` John Garry
2020-05-07 11:04 ` Robin Murphy
2020-05-07 13:55 ` John Garry
2020-05-07 14:23 ` Keith Busch
2020-05-07 15:11 ` John Garry
2020-05-07 15:35 ` Keith Busch
2020-05-07 15:41 ` John Garry
2020-05-08 16:16 ` Keith Busch
2020-05-08 17:04 ` John Garry
2020-05-07 16:26 ` Robin Murphy
2020-05-07 17:35 ` Keith Busch
2020-05-07 17:44 ` Will Deacon
2020-05-07 18:06 ` Keith Busch
2020-05-08 11:40 ` Will Deacon
2020-05-08 14:07 ` Keith Busch
2020-05-08 15:34 ` Keith Busch
2020-05-06 14:44 ` Keith Busch
2020-05-07 15:58 ` Keith Busch
2020-05-07 20:07 ` [PATCH] nvme-pci: fix "slimmer CQ head update" Alexey Dobriyan