From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Date: Fri, 6 Apr 2018 17:23:10 +0800
From: Ming Lei 
To: Christian Borntraeger 
Cc: Jens Axboe , linux-block@vger.kernel.org,
	Christoph Hellwig , Stefan Haberland ,
	Christoph Hellwig 
Subject: Re: [PATCH] blk-mq: only run mapped hw queues in blk_mq_run_hw_queues()
Message-ID: <20180406092309.GB9605@ming.t460p>
References: <20180329114313.GC17537@ming.t460p>
 <20180330025340.GB12412@ming.t460p>
 <20180405160503.GA20818@ming.t460p>
 <20180405161142.GA20972@ming.t460p>
 <3a72f42f-db90-6092-5e1b-0579d2095daa@de.ibm.com>
 <20180406084106.GA8940@ming.t460p>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: 
List-ID: 

On Fri, Apr 06, 2018 at 10:51:28AM +0200, Christian Borntraeger wrote:
> 
> 
> On 04/06/2018 10:41 AM, Ming Lei wrote:
> > On Thu, Apr 05, 2018 at 07:39:56PM +0200, Christian Borntraeger wrote:
> >> 
> >> 
> >> On 04/05/2018 06:11 PM, Ming Lei wrote:
> >>>> 
> >>>> Could you please apply the following patch and provide the dmesg boot log?
> >>> 
> >>> And please post out the 'lscpu' log together from the test machine too.
> >> 
> >> attached.
> >> 
> >> As I said before, this seems to go away with CONFIG_NR_CPUS=64 or smaller.
> >> We have nr_cpu_ids of 282 here (max 141 CPUs on that z13 with SMT2) but only
> >> 8 cores == 16 threads.
> > 
> > OK, thanks!
> > 
> > The weirdest thing is that hctx->next_cpu is computed as 512 even though
> > nr_cpu_ids is 282, while hctx->next_cpu should have pointed to one of
> > the possible CPUs.
> > 
> > Looks like it is a s390-specific issue, since I can set up one queue
> > which has the same mapping as yours:
> > 
> > - nr_cpu_ids is 282
> > - CPUs 0~15 are online
> > - 64 queues on null_blk
> > - still run all hw queues in the .complete handler
> > 
> > But I can't reproduce this issue at all.
> > 
> > So please test the following patch, which may tell us why hctx->next_cpu
> > is computed wrong:
> 
> I see things like
> 
> [    8.196907] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196910] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196912] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196913] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196914] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196915] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196916] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196916] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196917] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> [    8.196918] wrong next_cpu 512, blk_mq_map_swqueue, first_and
> 
> which is exactly what happens if the find-and operation fails (returns
> the size of the bitmap).

Given that both 'cpu_online_mask' and 'hctx->cpumask' are shown as correct
in your previous debug log, it means the following function returns a
totally wrong result on s390:

	cpumask_first_and(hctx->cpumask, cpu_online_mask);

The debugfs log shows that each hctx->cpumask includes one online
CPU (0~15). So it doesn't look like an issue in the blk-mq core.

Thanks,
Ming