From: Ming Lei <ming.lei@redhat.com>
To: Sumit Saxena <sumit.saxena@broadcom.com>
Cc: John Garry <john.garry@huawei.com>, Qian Cai <cai@redhat.com>,
Kashyap Desai <kashyap.desai@broadcom.com>,
Jens Axboe <axboe@kernel.dk>,
"James E.J. Bottomley" <jejb@linux.ibm.com>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
don.brace@microsemi.com, Bart Van Assche <bvanassche@acm.org>,
dgilbert@interlog.com, paolo.valente@linaro.org,
Hannes Reinecke <hare@suse.de>, Christoph Hellwig <hch@lst.de>,
linux-block@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
Linux SCSI List <linux-scsi@vger.kernel.org>,
esc.storagedev@microsemi.com,
"PDL,MEGARAIDLINUX" <megaraidlinux.pdl@broadcom.com>,
chenxiang66@hisilicon.com, luojiaxing@huawei.com,
Hannes Reinecke <hare@suse.com>
Subject: Re: [PATCH v8 17/18] scsi: megaraid_sas: Added support for shared host tagset for cpuhotplug
Date: Wed, 11 Nov 2020 17:27:43 +0800 [thread overview]
Message-ID: <20201111092743.GC545929@T590> (raw)
In-Reply-To: <CAL2rwxpQt-w2Re8ttu0=6Yzb7ibX3_FB6j-kd_cbtrWxzc7chw@mail.gmail.com>
On Wed, Nov 11, 2020 at 12:57:59PM +0530, Sumit Saxena wrote:
> On Tue, Nov 10, 2020 at 11:12 PM John Garry <john.garry@huawei.com> wrote:
> >
> > On 09/11/2020 14:05, John Garry wrote:
> > > On 09/11/2020 13:39, Qian Cai wrote:
> > >>> I suppose I could try do this myself also, but an authentic version
> > >>> would be nicer.
> > >> The closest one I have here is:
> > >> https://cailca.coding.net/public/linux/mm/git/files/master/arm64.config
> > >>
> > >> but it only selects the Thunder X2 platform and needs to manually select
> > >> CONFIG_MEGARAID_SAS=m to start with, but none of arm64 systems here have
> > >> megaraid_sas.
> > >
> > > Thanks, I'm confident I can fix it up to get it going on my Huawei arm64
> > > D06CS.
> > >
> > > So that board has a megaraid sas card. In addition, it also has hisi_sas
> > > HW, which is another storage controller which we enabled this same
> > > feature which is causing the problem.
> > >
> > > I'll report back when I can.
> >
> > So I had to hack that arm64 config a bit to get it booting:
> > https://github.com/hisilicon/kernel-dev/commits/private-topic-sas-5.10-megaraid-hang
> >
> > Boot is ok on my board without the megaraid sas card, but includes
> > hisi_sas HW (which enables the equivalent option which is exposing the
> > problem).
> >
> > But the board with the megaraid sas boots very slowly, specifically
> > around the megaraid sas probe:
> >
> > : ttyS0 at MMIO 0x3f00002f8 (irq = 17, base_baud = 115200) is a 16550A
> > [ 50.023726][ T1] printk: console [ttyS0] enabled
> > [ 50.412597][ T1] megasas: 07.714.04.00-rc1
> > [ 50.436614][ T5] megaraid_sas 0000:08:00.0: FW now in Ready state
> > [ 50.450079][ T5] megaraid_sas 0000:08:00.0: 63 bit DMA mask and 63 bit consistent mask
> > [ 50.467811][ T5] megaraid_sas 0000:08:00.0: firmware supports msix : (128)
> > [ 50.845995][ T5] megaraid_sas 0000:08:00.0: requested/available msix 128/128
> > [ 50.861476][ T5] megaraid_sas 0000:08:00.0: current msix/online cpus : (128/128)
> > [ 50.877616][ T5] megaraid_sas 0000:08:00.0: RDPQ mode : (enabled)
> > [ 50.891018][ T5] megaraid_sas 0000:08:00.0: Current firmware supports maximum commands: 4077 LDIO threshold: 0
> > [ 51.262942][ T5] megaraid_sas 0000:08:00.0: Performance mode :Latency (latency index = 1)
> > [ 51.280749][ T5] megaraid_sas 0000:08:00.0: FW supports sync cache : Yes
> > [ 51.295451][ T5] megaraid_sas 0000:08:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
> > [ 51.387474][ T5] megaraid_sas 0000:08:00.0: FW provided supportMaxExtLDs: 1 max_lds: 64
> > [ 51.404931][ T5] megaraid_sas 0000:08:00.0: controller type : MR(2048MB)
> > [ 51.419616][ T5] megaraid_sas 0000:08:00.0: Online Controller Reset(OCR) : Enabled
> > [ 51.436132][ T5] megaraid_sas 0000:08:00.0: Secure JBOD support : Yes
> > [ 51.450265][ T5] megaraid_sas 0000:08:00.0: NVMe passthru support : Yes
> > [ 51.464757][ T5] megaraid_sas 0000:08:00.0: FW provided TM TaskAbort/Reset timeout : 6 secs/60 secs
> > [ 51.484379][ T5] megaraid_sas 0000:08:00.0: JBOD sequence map support : Yes
> > [ 51.499607][ T5] megaraid_sas 0000:08:00.0: PCI Lane Margining support : No
> > [ 51.547610][ T5] megaraid_sas 0000:08:00.0: NVME page size : (4096)
> > [ 51.608635][ T5] megaraid_sas 0000:08:00.0: megasas_enable_intr_fusion is called outbound_intr_mask:0x40000000
> > [ 51.630285][ T5] megaraid_sas 0000:08:00.0: INIT adapter done
> > [ 51.649854][ T5] megaraid_sas 0000:08:00.0: pci id : (0x1000)/(0x0016)/(0x19e5)/(0xd215)
> > [ 51.667873][ T5] megaraid_sas 0000:08:00.0: unevenspan support : no
> > [ 51.681646][ T5] megaraid_sas 0000:08:00.0: firmware crash dump : no
> > [ 51.695596][ T5] megaraid_sas 0000:08:00.0: JBOD sequence map : enabled
> > [ 51.711521][ T5] megaraid_sas 0000:08:00.0: Max firmware commands: 4076 shared with nr_hw_queues = 127
> > [ 51.733056][ T5] scsi host0: Avago SAS based MegaRAID driver
> > [ 65.304363][ T5] scsi 0:0:0:0: Direct-Access ATA SAMSUNG MZ7KH1T9 404Q PQ: 0 ANSI: 6
> > [ 65.392401][ T5] scsi 0:0:1:0: Direct-Access ATA SAMSUNG MZ7KH1T9 404Q PQ: 0 ANSI: 6
> > [ 79.508307][ T5] scsi 0:0:65:0: Enclosure HUAWEI Expander 12Gx16 131 PQ: 0 ANSI: 6
> > [ 183.965109][ C14] random: fast init done
> >
> > Notice the 14 and 104 second delays.
> >
> > But the board does boot fully to the console. I'll watch for further
> > issues, which you guys seem to hit after a while.
> >
> > Thanks,
> > John
> The "megaraid_sas" driver calls scsi_scan_host() to discover SCSI
> devices. In this failure case scsi_scan_host() takes a long time to
> complete, which delays system boot.
> With "host_tagset" enabled, scsi_scan_host() takes around 20 minutes.
> With "host_tagset" disabled, scsi_scan_host() takes up to 5-8 minutes.
>
> The scan time depends on the number of SCSI channels and the number of
> devices per channel exposed by the LLD. The megaraid_sas driver exposes
> 4 channels and 128 drives per channel.
>
> In the failure case with host_tagset enabled, each target scan takes
> about 2 seconds, which is why driver load only completes after ~20
> minutes. See below:
>
> [ 299.725271] kobject: 'target18:0:96': free name
> [ 301.681267] kobject: 'target18:0:97' (00000000987c7f11): kobject_cleanup, parent 0000000000000000
> [ 301.681269] kobject: 'target18:0:97' (00000000987c7f11): calling ktype release
> [ 301.681273] kobject: 'target18:0:97': free name
> [ 303.575268] kobject: 'target18:0:98' (00000000a8c34149): kobject_cleanup, parent 0000000000000000
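
A quick sanity check on those numbers, using the channel/target counts
quoted above (nothing below is measured by me, it just reruns the
arithmetic):

```python
# Back-of-the-envelope check of the reported scan time:
# 4 channels x 128 targets per channel x ~2 s per failed target scan.
channels = 4
targets_per_channel = 128
per_target_s = 2.0  # approximate per-target scan time from the trace above

total_s = channels * targets_per_channel * per_target_s
print(f"total scan time: {total_s:.0f} s (~{total_s / 60:.0f} min)")
# -> total scan time: 1024 s (~17 min)
```

which is in the same ballpark as the ~20 minutes reported.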
>
> In Qian's kernel .config, asynchronous SCSI scanning is disabled, so
> the scan runs synchronously in the failure case.
> Below is the stack trace from when scsi_scan_host() hangs:
>
> [<0>] __wait_rcu_gp+0x134/0x170
> [<0>] synchronize_rcu.part.80+0x53/0x60
> [<0>] blk_free_flush_queue+0x12/0x30
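
For background on why I suspect the lockdep key: the trace shows
blk_free_flush_queue() reaching synchronize_rcu(), i.e. every
flush-queue teardown waits for an RCU grace period, and with a hostwide
shared tagset there is one flush queue per hctx. A rough model (the
grace-period latency below is an assumption for illustration, not a
measurement):

```python
# Hypothetical model: one synchronize_rcu() wait per freed flush queue,
# one flush queue per hardware queue torn down during each target scan.
grace_period_s = 0.016  # assumed RCU grace-period latency (not measured)
nr_hw_queues = 127      # from the boot log: "nr_hw_queues = 127"

per_target_s = nr_hw_queues * grace_period_s
print(f"modelled per-target teardown: ~{per_target_s:.1f} s")
# -> modelled per-target teardown: ~2.0 s
```

With these assumed numbers the model lands near the ~2 seconds per
target observed above; that is only consistent, not proof, so the
change below tests the theory directly.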
Does the issue disappear if you apply the following change?
diff --git a/block/blk-flush.c b/block/blk-flush.c
index e32958f0b687..b1fe6176d77f 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -469,9 +469,6 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
 	INIT_LIST_HEAD(&fq->flush_queue[1]);
 	INIT_LIST_HEAD(&fq->flush_data_in_flight);
 
-	lockdep_register_key(&fq->key);
-	lockdep_set_class(&fq->mq_flush_lock, &fq->key);
-
 	return fq;
 
  fail_rq:
@@ -486,7 +483,6 @@ void blk_free_flush_queue(struct blk_flush_queue *fq)
 	if (!fq)
 		return;
 
-	lockdep_unregister_key(&fq->key);
 	kfree(fq->flush_rq);
 	kfree(fq);
 }
Thanks,
Ming