public inbox for linux-block@vger.kernel.org
From: Jens Axboe <axboe@fb.com>
To: "Javier González" <jg@lightnvm.io>
Cc: Ming Lei <ming.lei@redhat.com>, Christoph Hellwig <hch@lst.de>,
	Dan Williams <dan.j.williams@intel.com>,
	linux-block@vger.kernel.org,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: Large latency on blk_queue_enter
Date: Mon, 8 May 2017 08:23:36 -0600	[thread overview]
Message-ID: <661d4b67-cf0c-a703-331b-ce24d75e782d@fb.com> (raw)
In-Reply-To: <E169E488-53EC-468A-9221-DF11F0944298@lightnvm.io>

On 05/08/2017 08:20 AM, Javier González wrote:
>> On 8 May 2017, at 16.13, Jens Axboe <axboe@fb.com> wrote:
>>
>> On 05/08/2017 07:44 AM, Javier González wrote:
>>>> On 8 May 2017, at 14.27, Ming Lei <ming.lei@redhat.com> wrote:
>>>>
>>>> On Mon, May 08, 2017 at 01:54:58PM +0200, Javier González wrote:
>>>>> Hi,
>>>>>
>>>>> I'm seeing an unusual added latency (~20-30 ms) in blk_queue_enter
>>>>> when allocating a request directly from the NVMe driver through
>>>>> nvme_alloc_request. I could use some help confirming that this is a
>>>>> bug and not an expected side effect of something else.
>>>>>
>>>>> I can reproduce this latency consistently on LightNVM when mixing I/O
>>>>> from pblk and I/O sent through an ioctl using liblightnvm, but I don't
>>>>> see anything on the LightNVM side that could impact the request
>>>>> allocation.
>>>>>
>>>>> When I have a 100% read workload sent from pblk, the max latency is
>>>>> constant throughout several runs at ~80us (which is normal for the
>>>>> media we are using at bs=4k, qd=1). All pblk I/Os reach the
>>>>> nvme_nvm_submit_io function in lightnvm.c, which uses
>>>>> nvme_alloc_request. When we send a command from user space through
>>>>> an ioctl, the max latency goes up to ~20-30ms. This happens
>>>>> independently of the actual command (IN/OUT). I tracked the added
>>>>> latency down to the percpu_ref_tryget_live call in blk_queue_enter.
>>>>> It seems that the queue reference counter is not released as it
>>>>> should be through blk_queue_exit in blk_mq_alloc_request. For
>>>>> reference, all ioctl I/Os reach nvme_nvm_submit_user_cmd in
>>>>> lightnvm.c.
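>>>>>
>>>>> For reference, the enter path I am looking at is roughly this
>>>>> (paraphrased from the 4.11-era block/blk-core.c, so details may
>>>>> differ on other kernels):
>>>>>
>>>>>     int blk_queue_enter(struct request_queue *q, bool nowait)
>>>>>     {
>>>>>             while (true) {
>>>>>                     int ret;
>>>>>
>>>>>                     /* Fast path: succeeds while the ref is live. */
>>>>>                     if (percpu_ref_tryget_live(&q->q_usage_counter))
>>>>>                             return 0;
>>>>>
>>>>>                     if (nowait)
>>>>>                             return -EBUSY;
>>>>>
>>>>>                     /*
>>>>>                      * Slow path: taken while the queue is frozen or
>>>>>                      * being frozen. We sleep until mq_freeze_depth
>>>>>                      * drops back to zero, so anything that delays
>>>>>                      * the freeze (e.g. a holder of q_usage_counter
>>>>>                      * that is slow to call blk_queue_exit) shows up
>>>>>                      * as latency here.
>>>>>                      */
>>>>>                     ret = wait_event_interruptible(q->mq_freeze_wq,
>>>>>                                     !atomic_read(&q->mq_freeze_depth) ||
>>>>>                                     blk_queue_dying(q));
>>>>>                     if (blk_queue_dying(q))
>>>>>                             return -ENODEV;
>>>>>                     if (ret)
>>>>>                             return ret;
>>>>>             }
>>>>>     }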
>>>>>
>>>>> Do you have any idea why this might happen? I can dig more into it,
>>>>> but first I wanted to make sure that I am not missing any obvious
>>>>> assumption that would explain the reference counter being held for a
>>>>> longer time.
>>>>
>>>> You need to check whether .q_usage_counter is working in atomic
>>>> mode. This counter is initialized in atomic mode and finally
>>>> switches to percpu mode via percpu_ref_switch_to_percpu() in
>>>> blk_register_queue().
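>>>>
>>>> If you want to double-check, a debug hack like the following prints
>>>> the current mode. It peeks at the flag bits percpu-refcount keeps in
>>>> the low bits of percpu_count_ptr, which is an internal detail, so
>>>> treat it as a throwaway snippet rather than a stable API:
>>>>
>>>>     #include <linux/blkdev.h>
>>>>     #include <linux/percpu-refcount.h>
>>>>
>>>>     /* Debug only: report whether q_usage_counter is still in
>>>>      * atomic mode or has switched to percpu mode. */
>>>>     static void dump_q_usage_mode(struct request_queue *q)
>>>>     {
>>>>             unsigned long ptr = q->q_usage_counter.percpu_count_ptr;
>>>>
>>>>             pr_info("q_usage_counter: %s%s\n",
>>>>                     (ptr & __PERCPU_REF_ATOMIC) ? "atomic" : "percpu",
>>>>                     (ptr & __PERCPU_REF_DEAD) ? " (dead)" : "");
>>>>     }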
>>>
>>> Thanks for commenting Ming.
>>>
>>> The .q_usage_counter is not working in atomic mode. The queue is
>>> initialized normally through blk_register_queue() and the counter is
>>> switched to percpu mode, as you mentioned. As I understand it, this is
>>> how it should be, right?
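>>>
>>> For completeness, this is the switch I see happening (quoting roughly
>>> what blk_register_queue() does in block/blk-sysfs.c on this kernel,
>>> abbreviated):
>>>
>>>     if (!blk_queue_init_done(q)) {
>>>             queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q);
>>>             /* leave the initial atomic mode once the queue is live */
>>>             percpu_ref_switch_to_percpu(&q->q_usage_counter);
>>>             blk_queue_bypass_end(q);
>>>     }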
>>
>> That is how it should be, yes. You're not running with any heavy
>> debugging options, like lockdep or anything like that?
> 
> No lockdep, KASAN, kmemleak or any of the other usual suspects.
> 
> What's interesting is that it only happens when one of the I/Os comes
> from user space through the ioctl. If I have several pblk instances on
> the same device (which would end up allocating a new request in
> parallel, potentially on the same core), the latency spike does not
> trigger.
> 
> I also tried to bind the read thread and the liblightnvm thread issuing
> the ioctl to different cores, but it does not help...

How do I reproduce this? Off the top of my head, and looking at the code,
I have no idea what is going on here.

-- 
Jens Axboe


Thread overview: 22+ messages
2017-05-08 11:54 Large latency on blk_queue_enter Javier González
2017-05-08 12:27 ` Ming Lei
2017-05-08 13:44   ` Javier González
2017-05-08 14:13     ` Jens Axboe
2017-05-08 14:20       ` Javier González
2017-05-08 14:23         ` Jens Axboe [this message]
2017-05-08 14:46           ` Javier González
2017-05-08 14:52             ` Jens Axboe
2017-05-08 15:02               ` Javier González
2017-05-08 15:08                 ` Jens Axboe
2017-05-08 15:14                   ` Jens Axboe
2017-05-08 15:22                     ` Javier González
2017-05-08 15:25                       ` Jens Axboe
2017-05-08 15:38                         ` Javier González
2017-05-08 15:40                           ` Jens Axboe
2017-05-08 15:49                             ` Javier González
2017-05-08 16:06                               ` Jens Axboe
2017-05-08 16:39                                 ` Javier González
2017-05-09 10:34                                   ` Javier González
2017-05-09 10:58                                     ` Ming Lei
2017-05-09 11:21                                       ` Javier González
2017-05-09 14:21                                         ` Javier González
