public inbox for linux-block@vger.kernel.org
From: John Garry <john.garry@huawei.com>
To: Ming Lei <tom.leiming@gmail.com>
Cc: Ming Lei <ming.lei@redhat.com>, Jens Axboe <axboe@kernel.dk>,
	linux-block <linux-block@vger.kernel.org>,
	Bart Van Assche <bvanassche@acm.org>,
	"Hannes Reinecke" <hare@suse.com>, Christoph Hellwig <hch@lst.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Keith Busch <keith.busch@intel.com>,
	"chenxiang (M)" <chenxiang66@hisilicon.com>
Subject: Re: [PATCH V5 0/6] blk-mq: improvement CPU hotplug
Date: Fri, 7 Feb 2020 10:56:44 +0000
Message-ID: <0ba80182-01f0-4118-a70c-9faba96d3a3d@huawei.com>
In-Reply-To: <CACVXFVOk3cnRqyngYjHPPtLM1Wn8p3=hP8C3tBns9nDQAnoCyQ@mail.gmail.com>

On 31/01/2020 10:58, Ming Lei wrote:
> On Fri, Jan 31, 2020 at 6:24 PM John Garry <john.garry@huawei.com> wrote:
>>>> [  141.976109] Call trace:
>>>> [  141.978550]  __switch_to+0xbc/0x218
>>>> [  141.982029]  blk_mq_run_work_fn+0x1c/0x28
>>>> [  141.986027]  process_one_work+0x1e0/0x358
>>>> [  141.990025]  worker_thread+0x40/0x488
>>>> [  141.993678]  kthread+0x118/0x120
>>>> [  141.996897]  ret_from_fork+0x10/0x18
>>> Hi John,
>>>
>>> Thanks for your test!
>>>
>> Hi Ming,
>>
>>> Could you test the following patchset and only the last one is changed?
>>>
>>> https://github.com/ming1/linux/commits/my_for_5.6_block
>> For SCSI testing, I will ask my colleague Xiang Chen to test when he
>> returns to work. So I did not see this issue for my SCSI testing for
>> your original v5, but I was only using 1x as opposed to maybe 20x SAS disks.
>>
>> BTW, did you test NVMe? For some reason I could not trigger a scenario
>> where we're draining the outstanding requests for a queue which is being
>> deactivated - I mean, the queues were always already quiesced.
> I run cpu hotplug test on both NVMe and SCSI in KVM, and fio just runs
> as expected.
> 
> NVMe is often 1:1 mapping, so it might be a bit difficult to trigger
> draining in-flight IOs.
> 

Hi Ming,

We got around to testing your my_for_5.6_block branch (Xiang Chen 
actually took the v5 series and applied only the following two patches on top:
block: deactivate hctx when running queue in wrong CPU core
Revert "block: deactivate hctx when all its CPUs are offline when run…)

and we get this:

] IRQ 598: no longer affine to CPU4
[ 1077.396063] CPU4: shutdown
[ 1077.398769] psci: CPU4 killed (polled 0 ms)
[ 1077.457777] CPU3: shutdown
[ 1077.460495] psci: CPU3 killed (polled 0 ms)
[ 1077.499650] CPU2: shutdown
[ 1077.502357] psci: CPU2 killed (polled 0 ms)
[ 1077.546976] CPU1: shutdown
[ 1077.549690] psci: CPU1 killed (polled 0 ms)
it's running b  0
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [38.9% done] 
[1201MB/0KB/0KB /s] [307K/0/0 iops] [eta 00m:22s]
it's running b  1
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [41.7% done] 
[625.2MB/0KB/0KB /s] [160K/0/0 iops] [eta 00m:21s]
it's running b  2
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [44.4% done] 
[637.1MB/0KB/0KB /s] [163K/0/0 iops] [eta 00m:20s]
it's running b  3
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [47.2% done] 
[648.6MB/0KB/0KB /s] [166K/0/0 iops] [eta 00m:19s]
it's running b  4
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [50.0% done] 
[672.8MB/0KB/0KB /s] [172K/0/0 iops] [eta 00m:18s]
it's running b  5
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [52.8% done] 
[680.2MB/0KB/0KB /s] [174K/0/0 iops] [eta 00m:17s]
it's running b  6
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [55.6% done] 
[674.7MB/0KB/0KB /s] [173K/0/0 iops] [eta 00m:16s]
it's running b  7
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [58.3% done] 
[666.2MB/0KB/0KB /s] [171K/0/0 iops] [eta 00m:15s]
it's running b  8
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [61.1% done] 
[668.7MB/0KB/0KB /s] [171K/0/0 iops] [eta 00m:14s]
it's running b  9
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [63.9% done] 
[657.9MB/0KB/0KB /s] [168K/0/0 iops] [eta 00m:13s]
it's running b  10
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [66.7% done] 
[659.6MB/0KB/0KB /s] [169K/0/0 iops] [eta 00m:12s]
it's running b  11
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [69.4% done] 
[662.8MB/0KB/0KB /s] [170K/0/0 iops] [eta 00m:11s]
it's running b  12
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [72.2% done] 
[669.8MB/0KB/0KB /s] [171K/0/0 iops] [eta 00m:10s]
it's running b  13
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [75.0% done] 
[673.2MB/0KB/0KB /s] [172K/0/0 iops] [eta 00m:09s]
it's running b  14
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [77.8% done] 
[650.5MB/0KB/0KB /s] [167K/0/0 iops] [eta 00m:08s]
it's running b  15
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [80.6% done] 
[658.9MB/0KB/0KB /s] [169K/0/0 iops] [eta 00m:07s]
it's running b  16
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [83.3% done] 
[670.3MB/0KB/0KB /s] [172K/0/0 iops] [eta 00m:06s]
it's running b  17
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [86.1% done] 
[663.7MB/0KB/0KB /s] [170K/0/0 iops] [eta 00m:05s]
it's running b  18
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [88.9% done] 
[657.9MB/0KB/0KB /s] [168K/0/0 iops] [eta 00m:04s]
it's running b  19
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [91.7% done] 
[650.9MB/0KB/0KB /s] [167K/0/0 iops] [eta 00m:03s]
it's running b  20
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [94.4% done] 
[646.1MB/0KB/0KB /s] [166K/0/0 iops] [eta 00m:02s]
it's running b  21
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [97.2% done] 
[658.4MB/0KB/0KB /s] [169K/0/0 iops] [eta 00m:01s]
it's running b  22
Jobs: 40 (f=40): [RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR] [100.0% 
done] [649.4MB/0KB/0KB /s] [166K/0/0 iops] [eta 00m:00s]
it's running b  23
Jobs: 1 (f=1): [______________________R_________________] [2.6% done] 
[402.5MB/0KB/0KB /s] [103K/0/0 iops] [eta 22m:44s]
it's running b  24
Jobs: 1 (f=1): [______________________R_________________] [2.7% done] 
[0KB/0KB/0KB /s] [0/0/0 iops] [eta 22m:43s]
it's running b  25
Jobs: 1 (f=1): [______________________R_________________] [2.8% done] 
[0KB/0KB/0KB /s] [0/0/0 iops] [eta 22m:42s]
it's running b  26
Jobs: 1 (f=1): [______________________R_________________] [2.9% done] 
[0KB/0KB/0KB /s] [0/0/0 iops] [eta 22m:41s]
it's running b  27
Jobs: 1 (f=1): [______________________R_________________] [2.9% done] 
[0KB/0KB/0KB /s] [0/0/0 iops] [eta 22m:40s]
[ 1105.419335] sas: Enter sas_scsi_recover_host busy: 1 failed: 1
[ 1105.425185] sas: trying to find task 0x00000000f1b865f3
[ 1105.430409] sas: sas_scsi_find_task: aborting task 0x00000000f1b865f3
not running b  28
#

Looks like the queues are not being properly drained, as we're getting a 
single IO timeout. I'll have a look when I get a chance.
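To illustrate what I suspect is going wrong, here's a toy model of the 
drain scheme from the series (plain Python, not kernel API; all names 
here are illustrative): once the last CPU mapped to an hctx goes 
offline, the hctx is marked inactive and new requests must be 
re-submitted elsewhere, but any in-flight request the drain misses has 
nothing left to complete it except the error-handler timeout, which 
matches the sas_scsi_recover_host trace above.

```python
# Toy model of drain-on-last-CPU-offline for a blk-mq hw queue (hctx).
# Hypothetical sketch only: class/method names do not exist in the kernel.

ACTIVE = "active"
INACTIVE = "inactive"

class Hctx:
    def __init__(self, cpus):
        self.cpus = set(cpus)      # CPUs mapped to this hw queue
        self.online = set(cpus)    # subset currently online
        self.state = ACTIVE
        self.in_flight = 0         # requests dispatched but not completed

    def queue_rq(self):
        if self.state == INACTIVE:
            # Like patch 4/6: a request hitting an inactive hctx is
            # re-submitted through a still-active hctx, not dispatched here.
            return "resubmit"
        self.in_flight += 1
        return "dispatched"

    def complete_rq(self):
        self.in_flight -= 1

    def cpu_offline(self, cpu):
        self.online.discard(cpu)
        if not self.online:
            # Last mapped CPU going away: stop new dispatch, then the
            # hotplug path must wait for in-flight requests to drain.
            self.state = INACTIVE
            return self.in_flight == 0   # True only if already drained
        return True

hctx = Hctx(cpus={1, 2})
assert hctx.queue_rq() == "dispatched"   # one request in flight
hctx.cpu_offline(1)                      # not the last CPU, nothing to do
drained = hctx.cpu_offline(2)            # last CPU: hctx goes inactive
assert drained is False                  # drain must block here, or the
                                         # leftover request can only time out
assert hctx.queue_rq() == "resubmit"     # late requests are redirected
hctx.complete_rq()                       # a completion is what unblocks drain
```

In this model, the single IO timeout corresponds to the drain returning 
before `in_flight` reaches zero.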

Cheers,
John


Thread overview: 26+ messages
2020-01-15 11:44 [PATCH V5 0/6] blk-mq: improvement CPU hotplug Ming Lei
2020-01-15 11:44 ` [PATCH 1/6] blk-mq: add new state of BLK_MQ_S_INACTIVE Ming Lei
2020-01-15 11:44 ` [PATCH 2/6] blk-mq: prepare for draining IO when hctx's all CPUs are offline Ming Lei
2020-01-15 11:44 ` [PATCH 3/6] blk-mq: stop to handle IO and drain IO before hctx becomes inactive Ming Lei
2020-01-15 11:44 ` [PATCH 4/6] blk-mq: re-submit IO in case that hctx is inactive Ming Lei
2020-01-15 11:44 ` [PATCH 5/6] blk-mq: handle requests dispatched from IO scheduler in case of inactive hctx Ming Lei
2020-01-15 11:44 ` [PATCH 6/6] block: deactivate hctx when all its CPUs are offline when running queue Ming Lei
2020-01-15 17:00 ` [PATCH V5 0/6] blk-mq: improvement CPU hotplug John Garry
2020-01-20 13:23   ` John Garry
2020-01-31 10:04     ` Ming Lei
2020-01-31 10:24       ` John Garry
2020-01-31 10:58         ` Ming Lei
2020-01-31 17:51           ` John Garry
2020-01-31 18:02             ` John Garry
2020-02-01  1:31               ` Ming Lei
2020-02-01 11:05                 ` Marc Zyngier
2020-02-01 11:31                   ` Thomas Gleixner
2020-02-03 10:30                     ` John Garry
2020-02-03 10:49                       ` John Garry
2020-02-03 10:59                         ` Ming Lei
2020-02-03 12:56                           ` John Garry
2020-02-03 15:43                             ` Marc Zyngier
2020-02-03 18:16                               ` John Garry
2020-02-05 14:08                                 ` John Garry
2020-02-05 14:23                                   ` Marc Zyngier
2020-02-07 10:56           ` John Garry [this message]
