Date: Thu, 7 Jul 2022 09:46:10 +0800
From: Ming Lei
To: Sagi Grimberg
Cc: Yi Zhang, "open list:NVM EXPRESS DRIVER", linux-block, Bart Van Assche
Subject: Re: [bug report] nvme/rdma: nvme connect failed after offline one cpu on host side
References: <2c42c70a-8eb4-a095-1d2b-139614ebd903@grimberg.me> <0a8099e6-6e28-da1f-7b4b-0ea04fa8f9d6@grimberg.me>
In-Reply-To: <0a8099e6-6e28-da1f-7b4b-0ea04fa8f9d6@grimberg.me>

On Wed, Jul 06, 2022 at
06:30:43PM +0300, Sagi Grimberg wrote:
>
> > > > update the subject to better describe the issue:
> > > >
> > > > So I tried this issue on one nvme/rdma environment, and it was also
> > > > reproducible, here are the steps:
> > > >
> > > > # echo 0 >/sys/devices/system/cpu/cpu0/online
> > > > # dmesg | tail -10
> > > > [ 781.577235] smpboot: CPU 0 is now offline
> > > > # nvme connect -t rdma -a 172.31.45.202 -s 4420 -n testnqn
> > > > Failed to write to /dev/nvme-fabrics: Invalid cross-device link
> > > > no controller found: failed to write to nvme-fabrics device
> > > >
> > > > # dmesg
> > > > [ 781.577235] smpboot: CPU 0 is now offline
> > > > [ 799.471627] nvme nvme0: creating 39 I/O queues.
> > > > [ 801.053782] nvme nvme0: mapped 39/0/0 default/read/poll queues.
> > > > [ 801.064149] nvme nvme0: Connect command failed, error wo/DNR bit: -16402
> > > > [ 801.073059] nvme nvme0: failed to connect queue: 1 ret=-18
> > >
> > > This is because of blk_mq_alloc_request_hctx() and was raised before.
> > >
> > > IIRC there was reluctance to make it allocate a request for an hctx even
> > > if its associated mapped cpu is offline.
> > >
> > > The latest attempt was from Ming:
> > > [PATCH V7 0/3] blk-mq: fix blk_mq_alloc_request_hctx
> > >
> > > Don't know where that went tho...
> >
> > The attempt relies on that the queue for connecting io queue uses
> > non-admined irq, unfortunately that can't be true for all drivers,
> > so that way can't go.
>
> The only consumer is nvme-fabrics, so others don't matter.
> Maybe we need a different interface that allows this relaxation.
>
> > So far, I'd suggest to fix nvme_*_connect_io_queues() to ignore failed
> > io queue, then the nvme host still can be setup with less io queues.
>
> What happens when the CPU comes back? Not sure we can simply ignore it.

Anyway, it is not a good choice to fail the whole controller if only one
queue can't be connected.
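The relaxed connect loop could look something like the following. This
is only a sketch: nvme_connect_io_queue() and the fields used here are
illustrative placeholders, not the actual fabrics code.

```c
/*
 * Sketch only: connect I/O queues but tolerate per-queue failures, so
 * one unconnectable queue (e.g. all of its mapped CPUs are offline)
 * does not take down the whole controller.
 */
static int nvme_connect_io_queues_relaxed(struct nvme_ctrl *ctrl)
{
	int i, ret, connected = 0;

	for (i = 1; i < ctrl->queue_count; i++) {
		ret = nvme_connect_io_queue(ctrl, i);	/* placeholder helper */
		if (ret) {
			/* leave this queue non-LIVE; no I/O is issued to it */
			dev_warn(ctrl->device,
				 "I/O queue %d not connected (%d), continuing\n",
				 i, ret);
			continue;
		}
		connected++;
	}

	/* fail only if no I/O queue at all could be connected */
	return connected ? 0 : -EIO;
}
```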
I meant the queue can be kept as non-LIVE, and it should work since no
I/O can be issued to this queue while it is non-LIVE.

Just wondering why we can't re-connect the io queue and set it LIVE
after any CPU in this hctx->cpumask becomes online? blk-mq could add
one pair of callbacks for drivers to handle this queue change.

thanks,
Ming
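P.S. The callback pair might look something like this. Purely
illustrative: no such hooks exist in struct blk_mq_ops today, and the
names are made up.

```c
/* Hypothetical additions to struct blk_mq_ops (not in mainline) */

/* called when the last CPU in hctx->cpumask goes offline;
 * a fabrics driver could mark the queue non-LIVE here
 */
void (*hctx_cpus_offline)(struct blk_mq_hw_ctx *hctx);

/* called when the first CPU in hctx->cpumask comes back online;
 * a fabrics driver could re-connect the queue and set it LIVE here
 */
void (*hctx_cpus_online)(struct blk_mq_hw_ctx *hctx);
```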