From: "jianchao.wang" <jianchao.w.wang@oracle.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <keith.busch@intel.com>,
Sagi Grimberg <sagi@grimberg.me>,
Christoph Hellwig <hch@infradead.org>, Jens Axboe <axboe@fb.com>,
Stefan Haberland <sth@linux.vnet.ibm.com>,
linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
James Smart <james.smart@broadcom.com>,
linux-block@vger.kernel.org,
Christian Borntraeger <borntraeger@de.ibm.com>,
Thomas Gleixner <tglx@linutronix.de>,
Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH 2/2] blk-mq: simplify queue mapping & schedule with each possisble CPU
Date: Fri, 19 Jan 2018 11:05:35 +0800 [thread overview]
Message-ID: <53da00dc-3d46-dcdb-2be4-277f79a9888b@oracle.com> (raw)
In-Reply-To: <20180117095744.GF9487@ming.t460p>
Hi Ming,
Sorry for the delay in reporting this.
On 01/17/2018 05:57 PM, Ming Lei wrote:
> 2) hctx->next_cpu can become offline from online before __blk_mq_run_hw_queue
> is run, there isn't warning, but once the IO is submitted to hardware,
> after it is completed, how does the HBA/hw queue notify CPU since CPUs
> assigned to this hw queue(irq vector) are offline? blk-mq's timeout
> handler may cover that, but looks too tricky.
In theory, the irq affinity will be migrated to another online CPU; this is done by
fixup_irqs() in the context of stop_machine.
However, in my test, I found this log:
[ 267.161043] do_IRQ: 7.33 No irq handler for vector
Here 33 is the vector used by the nvme cq.
The irq seems to get lost, and sometimes an IO hang follows.
It does not happen every time; I suspect that is because nvme_process_cq in
nvme_queue_rq can reap the completion anyway.
I added a dump_stack() after the error log and got the following:
[ 267.161043] do_IRQ: 7.33 No irq handler for vector
[ 267.161045] CPU: 7 PID: 52 Comm: migration/7 Not tainted 4.15.0-rc7+ #27
[ 267.161045] Hardware name: LENOVO 10MLS0E339/3106, BIOS M1AKT22A 06/27/2017
[ 267.161046] Call Trace:
[ 267.161047] <IRQ>
[ 267.161052] dump_stack+0x7c/0xb5
[ 267.161054] do_IRQ+0xb9/0xf0
[ 267.161056] common_interrupt+0xa2/0xa2
[ 267.161057] </IRQ>
[ 267.161059] RIP: 0010:multi_cpu_stop+0xb0/0x120
[ 267.161060] RSP: 0018:ffffbb6c81af7e70 EFLAGS: 00000202 ORIG_RAX: ffffffffffffffde
[ 267.161061] RAX: 0000000000000001 RBX: 0000000000000004 RCX: 0000000000000000
[ 267.161062] RDX: 0000000000000006 RSI: ffffffff898c4591 RDI: 0000000000000202
[ 267.161063] RBP: ffffbb6c826e7c88 R08: ffff991abc1256bc R09: 0000000000000005
[ 267.161063] R10: ffffbb6c81af7db8 R11: ffffffff89c91d20 R12: 0000000000000001
[ 267.161064] R13: ffffbb6c826e7cac R14: 0000000000000003 R15: 0000000000000000
[ 267.161067] ? cpu_stop_queue_work+0x90/0x90
[ 267.161068] cpu_stopper_thread+0x83/0x100
[ 267.161070] smpboot_thread_fn+0x161/0x220
[ 267.161072] kthread+0xf5/0x130
[ 267.161073] ? sort_range+0x20/0x20
[ 267.161074] ? kthread_associate_blkcg+0xe0/0xe0
[ 267.161076] ret_from_fork+0x24/0x30
The irq was delivered just after interrupts were re-enabled in multi_cpu_stop():
0xffffffff8112d655 is in multi_cpu_stop (/home/will/u04/source_code/linux-block/kernel/stop_machine.c:223).
218 */
219 touch_nmi_watchdog();
220 }
221 } while (curstate != MULTI_STOP_EXIT);
222
223 local_irq_restore(flags);
224 return err;
225 }
Thanks
Jianchao
Thread overview: 23+ messages
2018-01-12 2:53 [PATCH 0/2] blk-mq: support physical CPU hotplug Ming Lei
2018-01-12 2:53 ` [PATCH 1/2] genirq/affinity: assign vectors to all possible CPUs Ming Lei
2018-01-12 19:35 ` Thomas Gleixner
2018-01-12 2:53 ` [PATCH 2/2] blk-mq: simplify queue mapping & schedule with each possisble CPU Ming Lei
2018-01-16 10:00 ` Stefan Haberland
2018-01-16 10:12 ` jianchao.wang
2018-01-16 12:10 ` Ming Lei
2018-01-16 14:31 ` jianchao.wang
2018-01-16 15:32 ` Ming Lei
2018-01-17 2:56 ` jianchao.wang
2018-01-17 3:52 ` Ming Lei
2018-01-17 5:24 ` jianchao.wang
2018-01-17 6:22 ` Ming Lei
2018-01-17 8:09 ` jianchao.wang
2018-01-17 9:57 ` Ming Lei
2018-01-17 10:07 ` Christian Borntraeger
2018-01-17 10:14 ` Christian Borntraeger
2018-01-17 10:17 ` Ming Lei
2018-01-19 3:05 ` jianchao.wang [this message]
2018-01-26 9:31 ` Ming Lei
2018-01-12 8:12 ` [PATCH 0/2] blk-mq: support physical CPU hotplug Christian Borntraeger
2018-01-12 10:47 ` Johannes Thumshirn
2018-01-12 18:02 ` Jens Axboe