Date: Tue, 28 Jul 2020 23:12:27 +0530
From: Krishnamraju Eraparaju
To: Sagi Grimberg
Cc: linux-rdma@vger.kernel.org, bharat@chelsio.com, linux-nvme@lists.infradead.org
Subject: Re: Hang at NVME Host caused by Controller reset
Message-ID: <20200728174224.GA5497@chelsio.com>
References: <20200727181944.GA5484@chelsio.com> <9b8dae53-1fcc-3c03-5fcd-cfb55cd8cc80@grimberg.me> <20200728115904.GA5508@chelsio.com> <4d87ffbb-24a2-9342-4507-cabd9e3b76c2@grimberg.me>
In-Reply-To: <4d87ffbb-24a2-9342-4507-cabd9e3b76c2@grimberg.me>

Sagi,

Yes, Multipath is disabled.
This time, with the "nvme-fabrics: allow to queue requests for live queues" patch applied, I see a hang only at blk_queue_enter():

[Jul28 17:25] INFO: task nvme:21119 blocked for more than 122 seconds.
[  +0.000061]       Not tainted 5.8.0-rc7ekr+ #2
[  +0.000052] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0.000059] nvme            D14392 21119   2456 0x00004000
[  +0.000059] Call Trace:
[  +0.000110]  __schedule+0x32b/0x670
[  +0.000108]  schedule+0x45/0xb0
[  +0.000107]  blk_queue_enter+0x1e9/0x250
[  +0.000109]  ? wait_woken+0x70/0x70
[  +0.000110]  blk_mq_alloc_request+0x53/0xc0
[  +0.000111]  nvme_alloc_request+0x61/0x70 [nvme_core]
[  +0.000121]  nvme_submit_user_cmd+0x50/0x310 [nvme_core]
[  +0.000118]  nvme_user_cmd+0x12e/0x1c0 [nvme_core]
[  +0.000163]  ? _copy_to_user+0x22/0x30
[  +0.000113]  blkdev_ioctl+0x100/0x250
[  +0.000115]  block_ioctl+0x34/0x40
[  +0.000110]  ksys_ioctl+0x82/0xc0
[  +0.000109]  __x64_sys_ioctl+0x11/0x20
[  +0.000109]  do_syscall_64+0x3e/0x70
[  +0.000120]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0.000112] RIP: 0033:0x7fbe9cdbb67b
[  +0.000110] Code: Bad RIP value.
[  +0.000124] RSP: 002b:00007ffd61ff5778 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[  +0.000170] RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007fbe9cdbb67b
[  +0.000114] RDX: 00007ffd61ff5780 RSI: 00000000c0484e43 RDI: 0000000000000003
[  +0.000113] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
[  +0.000115] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffd61ff7219
[  +0.000123] R13: 0000000000000006 R14: 00007ffd61ff5e30 R15: 000055e09c1854a0
[  +0.000115] Kernel panic - not syncing: hung_task: blocked tasks

You could easily reproduce this by running the three loops below in parallel for about 10 minutes:

while [ 1 ]; do nvme write-zeroes /dev/nvme0n1 -s 1 -c 1; done
while [ 1 ]; do echo 1 > /sys/block/nvme0n1/device/reset_controller; done
while [ 1 ]; do ifconfig enp2s0f4 down; sleep 24; ifconfig enp2s0f4 up; sleep 28; done

Not sure whether using nvme write-zeroes this way is valid or not..
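For convenience, the three loops above can be wrapped into a single script that runs them in parallel and stops after a fixed duration. This is only a sketch of the repro from this report: /dev/nvme0n1 and enp2s0f4 are the device and interface names from my setup and would need to be adjusted, and it obviously requires root and a connected NVMe-oF target.

```shell
#!/bin/bash
# Sketch of the hang reproducer: parallel I/O, controller resets,
# and link flaps for ~10 minutes. Device/interface names are from
# the original report; change them for your setup.
DEV=/dev/nvme0n1
SYSDEV=/sys/block/nvme0n1/device
IFACE=enp2s0f4
DURATION=600   # seconds

timeout "$DURATION" bash -c \
    "while true; do nvme write-zeroes $DEV -s 1 -c 1; done" &
timeout "$DURATION" bash -c \
    "while true; do echo 1 > $SYSDEV/reset_controller; done" &
timeout "$DURATION" bash -c \
    "while true; do ifconfig $IFACE down; sleep 24; ifconfig $IFACE up; sleep 28; done" &

# Wait for all three loops to be killed by timeout, then check dmesg
# for hung-task warnings.
wait
dmesg | tail -n 50
```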
Thanks,
Krishna

On Tuesday, July 07/28/20, 2020 at 08:54:18 -0700, Sagi Grimberg wrote:
> 
> 
> On 7/28/20 4:59 AM, Krishnamraju Eraparaju wrote:
> > Sagi,
> > With the given patch, I am no more seeing the freeze_queue_wait hang
> > issue, but I am seeing another hang issue:
> 
> The trace suggest that you are not running with multipath right?
> 
> I think you need the patch:
> [PATCH] nvme-fabrics: allow to queue requests for live queues
> 
> You can find it in linux-nvme

_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme