From: Yi Zhang <yi.zhang@redhat.com>
To: linux-nvme@lists.infradead.org
Cc: sagi@grimberg.me
Subject: BUG: using __this_cpu_read() in preemptible code observed with blktests nvme-tcp on rt kernel
Date: Sun, 14 Mar 2021 01:32:28 -0500 (EST)
Message-ID: <2052426528.19238323.1615703548200.JavaMail.zimbra@redhat.com>
In-Reply-To: <642327227.19237955.1615702580608.JavaMail.zimbra@redhat.com>
Hi,
I reproduced this issue on the latest rt kernel; could you help check it? Thanks.
[ 76.812567] run blktests nvme/003 at 2021-03-14 07:29:39
[ 76.880530] loop: module loaded
[ 76.893827] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 76.895766] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[ 76.898923] nvmet: creating controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:f1d7a9f1-79ef-42a3-952b-59c7338f3b54.
[ 76.899061] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 127.0.0.1:4420
[ 76.899063] BUG: using __this_cpu_read() in preemptible [00000000] code: kworker/u64:25/231
[ 76.899065] caller is nvme_tcp_submit_async_event+0x128/0x170 [nvme_tcp]
[ 76.899069] CPU: 3 PID: 231 Comm: kworker/u64:25 Tainted: G S I 5.12.0-rc2-rt1 #4
[ 76.899071] Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS 2.10.0 11/12/2020
[ 76.899072] Workqueue: nvme-wq nvme_async_event_work [nvme_core]
[ 76.899081] Call Trace:
[ 76.899083] dump_stack+0x64/0x7c
[ 76.899086] check_preemption_disabled+0xb6/0xd0
[ 76.899090] nvme_tcp_submit_async_event+0x128/0x170 [nvme_tcp]
[ 76.899093] nvme_async_event_work+0x5d/0xc0 [nvme_core]
[ 76.899098] process_one_work+0x1c8/0x3f0
[ 76.899101] ? process_one_work+0x3f0/0x3f0
[ 76.899102] worker_thread+0x30/0x370
[ 76.899104] ? process_one_work+0x3f0/0x3f0
[ 76.899105] kthread+0x183/0x1a0
[ 76.899108] ? kthread_park+0x80/0x80
[ 76.899110] ret_from_fork+0x1f/0x30
[ 86.912113] nvme nvme0: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
Best Regards,
Yi Zhang
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 4+ messages
[not found] <642327227.19237955.1615702580608.JavaMail.zimbra@redhat.com>
2021-03-14 6:32 ` Yi Zhang [this message]
2021-03-15 17:29 ` BUG: using __this_cpu_read() in preemptible code observed with blktests nvme-tcp on rt kernel Sagi Grimberg
2021-03-15 18:04 ` Christoph Hellwig
2021-03-15 18:13 ` Sagi Grimberg