From: Jon Derrick <jonathan.derrick@intel.com>
To: Parav Pandit <Parav.pandit@avagotech.com>
Cc: linux-nvme@lists.infradead.org, willy@linux.intel.com,
axboe@kernel.dk, keith.busch@intel.com,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] NVMe: Fixed race between nvme_thread & probe path.
Date: Thu, 18 Jun 2015 09:59:32 -0600
Message-ID: <20150618155932.GA1276@localhost.localdomain>
In-Reply-To: <1434624230-25050-1-git-send-email-Parav.pandit@avagotech.com>
On Thu, Jun 18, 2015 at 04:13:50PM +0530, Parav Pandit wrote:
> The kernel thread nvme_thread and the driver probe path can execute
> in parallel on different CPUs. This leads to a race condition when
> the stores in nvme_alloc_queue() are executed out of order, so that
> nvme_thread can observe an incorrect view of the queue state.
> A memory barrier in nvme_alloc_queue() maintains the store order,
> and a data-dependency read barrier in the reader thread ensures
> that the CPU cache is synced.
>
> Signed-off-by: Parav Pandit <Parav.pandit@avagotech.com>
> ---
> drivers/block/nvme-core.c | 12 ++++++++++--
> 1 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
> index 5961ed7..90fb0ce 100644
> --- a/drivers/block/nvme-core.c
> +++ b/drivers/block/nvme-core.c
> @@ -1403,8 +1403,10 @@ static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
> nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];
> nvmeq->q_depth = depth;
> nvmeq->qid = qid;
> - dev->queue_count++;
> dev->queues[qid] = nvmeq;
> + /* update queues first before updating queue_count */
> + smp_wmb();
> + dev->queue_count++;
>
> return nvmeq;
>
This has already been applied upstream as an explicit mb().
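
For anyone following along, here is a minimal sketch of the publish
ordering that the barrier provides (simplified from the hunk above;
the mb() that was applied is a stronger full barrier than the
smp_wmb() proposed here):

	/*
	 * Writer (probe path): publish the queue pointer before
	 * advertising it via queue_count.  Without a barrier the
	 * CPU may reorder the two stores, letting a reader index
	 * a slot of dev->queues[] that still holds NULL even
	 * though i < dev->queue_count.
	 */
	dev->queues[qid] = nvmeq;	/* 1: store the pointer      */
	smp_wmb();			/* 2: order 1 before 3       */
	dev->queue_count++;		/* 3: make the slot reachable */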
> @@ -2073,7 +2075,13 @@ static int nvme_kthread(void *data)
> continue;
> }
> for (i = 0; i < dev->queue_count; i++) {
> - struct nvme_queue *nvmeq = dev->queues[i];
> + struct nvme_queue *nvmeq;
> +
> + /* make sure to read queue_count before
> + * traversing queues.
> + */
> + smp_read_barrier_depends();
> + nvmeq = dev->queues[i];
> if (!nvmeq)
> continue;
> spin_lock_irq(&nvmeq->q_lock);
I don't think this is necessary. If queue_count is incremented while
the kthread is in this loop, the new queue will simply be picked up
the next time the kthread runs.
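
To spell that reasoning out, a sketch of the reader side (the loop is
as in the existing code with the body elided; the comments are mine):

	for (i = 0; i < dev->queue_count; i++) {
		struct nvme_queue *nvmeq = dev->queues[i];

		/*
		 * A stale queue_count read is benign: at worst the
		 * loop bound misses a queue that was added
		 * concurrently.  The kthread re-reads queue_count
		 * on its next pass, so the new queue is polled then.
		 */
		if (!nvmeq)	/* slot not published yet */
			continue;
		spin_lock_irq(&nvmeq->q_lock);
		...
	}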
Thread overview: 4+ messages
2015-06-18 10:43 [PATCH] NVMe: Fixed race between nvme_thread & probe path Parav Pandit
2015-06-18 15:59 ` Jon Derrick [this message]
2015-06-18 17:48 ` Parav Pandit
2015-06-26 18:10 ` Jon Derrick