Date: Wed, 16 Aug 2023 14:44:39 +0000
From: "Dr. David Alan Gilbert"
To: Jens Axboe
Cc: Theodore Ts'o, hch@lst.de, adilger.kernel@dilger.ca, song@kernel.org,
	linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-ext4@vger.kernel.org
Subject: Re: 6.5.0rc5 fs hang - ext4? raid?
References: <20230815125146.GA1508930@mit.edu>
	<324fc71c-dead-4418-af81-6817e1f41c39@kernel.dk>
In-Reply-To: <324fc71c-dead-4418-af81-6817e1f41c39@kernel.dk>
User-Agent: Mutt/2.0.5 (2021-01-21)
X-Mailing-List: linux-raid@vger.kernel.org

* Jens Axboe (axboe@kernel.dk) wrote:
> On 8/15/23 7:31 PM, Dr. David Alan Gilbert wrote:
> > (Copying in Christoph and Jens)
> >
> > * Dr. David Alan Gilbert (dave@treblig.org) wrote:
> >> * Dr. David Alan Gilbert (dave@treblig.org) wrote:
> >>> * Theodore Ts'o (tytso@mit.edu) wrote:
> >>>> On Mon, Aug 14, 2023 at 09:02:53PM +0000, Dr. David Alan Gilbert wrote:
> >>>>> dg 29594 29592 0 18:40 pts/0 00:00:00 /usr/bin/ar --plugin /usr/libexec/gcc/x86_64-redhat-linux/13/liblto_plugin.so -csrDT src/intel/perf/libintel_perf.a src/intel/perf/libintel_perf.a.p/meson-generated_.._intel_perf_metrics.c.o src/intel/perf/libintel_perf.a.p/intel_perf.c.o src/intel/perf/libintel_perf.a.p/intel_perf_query.c.o src/intel/perf/libintel_perf.a.p/intel_perf_mdapi.c.o
> >>>>>
> >>>>> [root@dalek dg]# cat /proc/29594/stack
> >>>>> [<0>] md_super_wait+0xa2/0xe0
> >>>>> [<0>] md_bitmap_unplug+0xd2/0x120
> >>>>> [<0>] flush_bio_list+0xf3/0x100 [raid1]
> >>>>> [<0>] raid1_unplug+0x3b/0xb0 [raid1]
> >>>>> [<0>] __blk_flush_plug+0xd7/0x150
> >>>>> [<0>] blk_finish_plug+0x29/0x40
> >>>>> [<0>] ext4_do_writepages+0x401/0xc90
> >>>>> [<0>] ext4_writepages+0xad/0x180
> >>>>
> >>>> If you wait a few seconds and try grabbing cat /proc/29594/stack
> >>>> again, does the stack trace stay consistent as above?
> >>>
> >>> I'll get back to that and retry it.
> >>
> >> Yeh, the stack is consistent; this time around it's an 'ar' in a kernel
> >> build:
> >>
> >> [root@dalek dg]# cat /proc/17970/stack
> >> [<0>] md_super_wait+0xa2/0xe0
> >> [<0>] md_bitmap_unplug+0xad/0x120
> >> [<0>] flush_bio_list+0xf3/0x100 [raid1]
> >> [<0>] raid1_unplug+0x3b/0xb0 [raid1]
> >> [<0>] __blk_flush_plug+0xd7/0x150
> >> [<0>] blk_finish_plug+0x29/0x40
> >> [<0>] ext4_do_writepages+0x401/0xc90
> >> [<0>] ext4_writepages+0xad/0x180
> >> [<0>] do_writepages+0xd2/0x1e0
> >> [<0>] filemap_fdatawrite_wbc+0x63/0x90
> >> [<0>] __filemap_fdatawrite_range+0x5c/0x80
> >> [<0>] ext4_release_file+0x74/0xb0
> >> [<0>] __fput+0xf5/0x2a0
> >> [<0>] task_work_run+0x5d/0x90
> >> [<0>] exit_to_user_mode_prepare+0x1e6/0x1f0
> >> [<0>] syscall_exit_to_user_mode+0x1b/0x40
> >> [<0>] do_syscall_64+0x6c/0x90
> >> [<0>] entry_SYSCALL_64_after_hwframe+0x6e/0xd8
> >> [root@dalek dg]# cat /proc/17970/stack
> >> [<0>] md_super_wait+0xa2/0xe0
> >> [<0>] md_bitmap_unplug+0xad/0x120
> >> [<0>] flush_bio_list+0xf3/0x100 [raid1]
> >> [<0>] raid1_unplug+0x3b/0xb0 [raid1]
> >> [<0>] __blk_flush_plug+0xd7/0x150
> >> [<0>] blk_finish_plug+0x29/0x40
> >> [<0>] ext4_do_writepages+0x401/0xc90
> >> [<0>] ext4_writepages+0xad/0x180
> >> [<0>] do_writepages+0xd2/0x1e0
> >> [<0>] filemap_fdatawrite_wbc+0x63/0x90
> >> [<0>] __filemap_fdatawrite_range+0x5c/0x80
> >> [<0>] ext4_release_file+0x74/0xb0
> >> [<0>] __fput+0xf5/0x2a0
> >> [<0>] task_work_run+0x5d/0x90
> >> [<0>] exit_to_user_mode_prepare+0x1e6/0x1f0
> >> [<0>] syscall_exit_to_user_mode+0x1b/0x40
> >> [<0>] do_syscall_64+0x6c/0x90
> >> [<0>] entry_SYSCALL_64_after_hwframe+0x6e/0xd8
> >>
> >>>> Also, if you have iostat installed (usually part of the sysstat
> >>>> package), does "iostat 1" show any I/O activity on the md device?
> >>
> >> iostat is showing something odd, most devices are at 0,
> >> except for 3 of the dm's that are stuck at 100% utilisation with
> >> apparently nothing going on:
> >>
> >> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>            0.06    0.00    0.03   53.06    0.00   46.84
> >>
> >> Device  r/s  rkB/s  rrqm/s  %rrqm  r_await  rareq-sz  w/s  wkB/s  wrqm/s  %wrqm  w_await  wareq-sz  d/s  dkB/s  drqm/s  %drqm  d_await  dareq-sz  f/s  f_await  aqu-sz  %util
> >> ...
> >> dm-16   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
> >> dm-17   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
> >> dm-18   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00   0.00
> >> dm-19   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00   0.00
> >> dm-2    0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00   0.00
> >> dm-20   0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
> >> ....
> >>
> >> dm-20 is the /dev/mapper/main-more which is the RAID on which the
> >> fs runs, 16 and 17 are main-more_rmeta_0 and main-more_rimage_0
> >> so something screwy is going on there.
> >
> > I've just finished a bisect of this hang, and got to:
> >
> > 615939a2ae734e3e68c816d6749d1f5f79c62ab7 is the first bad commit
> > commit 615939a2ae734e3e68c816d6749d1f5f79c62ab7
> > Author: Christoph Hellwig
> > Date:   Fri May 19 06:40:48 2023 +0200
> >
> >     blk-mq: defer to the normal submission path for post-flush requests
> >
> >     Requests with the FUA bit on hardware without FUA support need a post
> >     flush before returning to the caller, but they can still be sent using
> >     the normal I/O path after initializing the flush-related fields and
> >     end I/O handler.
> >
> >     Signed-off-by: Christoph Hellwig
> >     Reviewed-by: Bart Van Assche
> >     Link: https://lore.kernel.org/r/20230519044050.107790-6-hch@lst.de
> >     Signed-off-by: Jens Axboe
>
> Can you try and pull in:
>
> https://git.kernel.dk/cgit/linux/commit/?h=block-6.5&id=5ff3213a5387e076af2b87f796f94b36965e8c3a
>
> and see if that helps?

Dave

> --
> Jens Axboe

-- 
 -----Open up your eyes, open up your mind, open up your code -------
/ Dr. David Alan Gilbert    |       Running GNU/Linux       | Happy  \ 
\        dave @ treblig.org |                               | In Hex /
 \ _________________________|_____ http://www.treblig.org   |_______/
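P.S. For anyone reproducing this: the stuck-task check Ted suggested above (sampling /proc/<pid>/stack twice and seeing whether it changes) is easy to script. A rough sketch follows; the helper names and the 5-second interval are illustrative, not from this thread, and reading /proc/<pid>/stack normally requires root.

```shell
#!/bin/sh
# Sketch: decide whether a task looks wedged in the kernel by sampling
# its stack twice and comparing.  Helper names and the 5s interval are
# illustrative, not from the thread above.

# stacks_match FILE1 FILE2 -> exit 0 if the two samples are identical
stacks_match() {
    cmp -s "$1" "$2"
}

# check_stuck PID -> report whether PID's kernel stack changed over 5s
check_stuck() {
    pid=$1
    a=$(mktemp) || return 2
    b=$(mktemp) || return 2
    cat "/proc/$pid/stack" > "$a" 2>/dev/null || { rm -f "$a" "$b"; return 2; }
    sleep 5
    cat "/proc/$pid/stack" > "$b" 2>/dev/null || { rm -f "$a" "$b"; return 2; }
    if stacks_match "$a" "$b"; then
        echo "pid $pid: kernel stack unchanged over 5s (looks stuck)"
    else
        echo "pid $pid: kernel stack changed (making progress)"
    fi
    rm -f "$a" "$b"
}
```

Run as root against the wedged process, e.g. `check_stuck 17970`, while the build is hung.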