From: Christian Borntraeger <borntraeger@de.ibm.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org,
Christoph Hellwig <hch@infradead.org>,
Stefan Haberland <sth@linux.vnet.ibm.com>,
Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH] blk-mq: only run mapped hw queues in blk_mq_run_hw_queues()
Date: Fri, 6 Apr 2018 17:11:53 +0200 [thread overview]
Message-ID: <03a62bea-ac13-8d39-2c7d-995547b32be7@de.ibm.com>
In-Reply-To: <20180406145822.GA12198@ming.t460p>
On 04/06/2018 04:58 PM, Ming Lei wrote:
> On Fri, Apr 06, 2018 at 04:26:49PM +0200, Christian Borntraeger wrote:
>>
>>
>> On 04/06/2018 03:41 PM, Ming Lei wrote:
>>> On Fri, Apr 06, 2018 at 12:19:19PM +0200, Christian Borntraeger wrote:
>>>>
>>>>
>>>> On 04/06/2018 11:23 AM, Ming Lei wrote:
>>>>> On Fri, Apr 06, 2018 at 10:51:28AM +0200, Christian Borntraeger wrote:
>>>>>>
>>>>>>
>>>>>> On 04/06/2018 10:41 AM, Ming Lei wrote:
>>>>>>> On Thu, Apr 05, 2018 at 07:39:56PM +0200, Christian Borntraeger wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 04/05/2018 06:11 PM, Ming Lei wrote:
>>>>>>>>>>
>>>>>>>>>> Could you please apply the following patch and provide the dmesg boot log?
>>>>>>>>>
>>>>>>>>> And please post out the 'lscpu' log together from the test machine too.
>>>>>>>>
>>>>>>>> attached.
>>>>>>>>
>>>>>>>> As I said before, this seems to go away with CONFIG_NR_CPUS=64 or smaller.
>>>>>>>> We have nr_cpu_ids == 282 here (max 141 CPUs on that z13 with SMT2), but only
>>>>>>>> 8 cores == 16 threads are online.
>>>>>>>
>>>>>>> OK, thanks!
>>>>>>>
>>>>>>> The weirdest part is that hctx->next_cpu is computed as 512 even though
>>>>>>> nr_cpu_ids is 282; hctx->next_cpu should point to one of the possible CPUs.
>>>>>>>
>>>>>>> Looks like it is an s390-specific issue, since I can set up a queue
>>>>>>> with the same mapping as yours:
>>>>>>>
>>>>>>> - nr_cpu_ids is 282
>>>>>>> - CPUs 0~15 are online
>>>>>>> - 64 null_blk queues
>>>>>>> - all hw queues are still run in the .complete handler
>>>>>>>
>>>>>>> But can't reproduce this issue at all.
>>>>>>>
>>>>>>> So please test the following patch, which may tell us why hctx->next_cpu
>>>>>>> is computed wrong:
>>>>>>
>>>>>> I see things like
>>>>>>
>>>>>> [ 8.196907] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196910] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196912] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196913] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196914] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196915] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196916] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196916] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196917] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>> [ 8.196918] wrong next_cpu 512, blk_mq_map_swqueue, first_and
>>>>>>
>>>>>> which is exactly what happens when the "find first bit in the AND of the two
>>>>>> masks" operation fails (i.e. returns the size of the bitmap).
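For reference, this is the failure mode meant above: cpumask_first_and() returns
the size of the underlying bitmap when the two masks share no bit, and with
CONFIG_CPUMASK_OFFSTACK=n that size is NR_CPUS rather than nr_cpu_ids, which
would explain seeing 512 while nr_cpu_ids is 282 (assuming NR_CPUS=512 in this
config). A minimal sketch of such a check, not the actual debug patch:

	#include <linux/blk-mq.h>
	#include <linux/cpumask.h>
	#include <linux/printk.h>

	/*
	 * Sketch only: cpumask_first_and() returns nr_cpumask_bits
	 * (== NR_CPUS with CONFIG_CPUMASK_OFFSTACK=n) when hctx->cpumask
	 * and cpu_online_mask have no CPU in common, so comparing against
	 * nr_cpu_ids catches the "wrong next_cpu 512" case reported above.
	 */
	static void check_first_and(struct blk_mq_hw_ctx *hctx)
	{
		unsigned int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);

		if (cpu >= nr_cpu_ids)
			pr_warn("wrong next_cpu %u, %s, first_and\n", cpu, __func__);
	}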
>>>>>
>>>>> Given that both 'cpu_online_mask' and 'hctx->cpumask' are shown as correct
>>>>> in your previous debug log, it means the following function returns a
>>>>> totally wrong result on s390:
>>>>>
>>>>> cpumask_first_and(hctx->cpumask, cpu_online_mask);
>>>>>
>>>>> The debugfs log shows that each hctx->cpumask includes one online
>>>>> CPU (0~15).
>>>>
>>>> Really? The last log (with the latest patch applied) shows a lot of contexts
>>>> that do not have any CPU in 0-15:
>>>>
>>>> e.g.
>>>> [ 4.049828] dump CPUs mapped to this hctx:
>>>> [ 4.049829] 18
>>>> [ 4.049829] 82
>>>> [ 4.049830] 146
>>>> [ 4.049830] 210
>>>> [ 4.049831] 274
>>>
>>> That won't be an issue, since no IO can be submitted from these offline
>>> CPUs, so these hctx shouldn't have been run at all.
>>>
>>> But hctx->next_cpu can be set to 512 for these inactive hctx in
>>> blk_mq_map_swqueue(), so please test the attached patch; if
>>> hctx->next_cpu is still set to 512, something is still wrong.
>>
>>
>> With this patch I no longer see the "run queue from wrong CPU x, hctx active" messages.
>> Your debug code still triggers, though.
>>
>> wrong next_cpu 512, blk_mq_hctx_next_cpu, first_and
>> wrong next_cpu 512, blk_mq_hctx_next_cpu, next_and
>>
>> If we removed the debug code, dmesg would be clean, it seems.
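For orientation, the first_and/next_and tags map to the two lookups in the
round-robin CPU selection. Roughly (a simplified sketch of the logic under
discussion, not the exact code with the debug patch applied):

	#include <linux/blk-mq.h>
	#include <linux/cpumask.h>

	static int next_cpu_sketch(struct blk_mq_hw_ctx *hctx)
	{
		int next_cpu = cpumask_next_and(hctx->next_cpu, hctx->cpumask,
						cpu_online_mask);

		/* "next_and": ran past the last online CPU in hctx->cpumask */
		if (next_cpu >= nr_cpu_ids)
			next_cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);

		/* "first_and": may still be >= nr_cpu_ids if no mapped CPU is online */
		return next_cpu;
	}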
>
> That is still a bit strange, since for any inactive hctx (without an online
> CPU mapped), blk_mq_run_hw_queue() will check blk_mq_hctx_has_pending()
I think it is reasonable to see this for next_and, since cpumask_next_and() will
return 512 once we have gone past the last mapped online CPU. In fact the code
falls back to first_and in that case for exactly this reason, no?
> first. And there shouldn't be any pending IO on any of the inactive hctx
> in your case, so it looks like blk_mq_hctx_next_cpu() shouldn't be called
> for an inactive hctx.
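To make the ordering concrete, the gate described above would look roughly like
this (a sketch of the internal blk-mq flow as described; blk_mq_hctx_has_pending()
and __blk_mq_delay_run_hw_queue() are internal helpers, so this is not verbatim,
callable code):

	/*
	 * Sketch: the pending check gates the CPU selection, so an idle or
	 * inactive hctx should never reach blk_mq_hctx_next_cpu() at all.
	 */
	static void run_hw_queue_sketch(struct blk_mq_hw_ctx *hctx)
	{
		if (!blk_mq_hctx_has_pending(hctx))
			return;	/* nothing queued: never reaches CPU selection */

		/* only a busy hctx gets here and consults blk_mq_hctx_next_cpu() */
		__blk_mq_delay_run_hw_queue(hctx, true, 0);
	}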
>
> I will prepare a patchset and post it out soon; hopefully it will cover all
> of these issues.
>
> Thanks,
> Ming
>