From: Stefan Priebe
Date: Sun, 27 May 2012 20:30:40 +0200
Subject: Re: xfs: blocked task in xfs_buf_lock
To: xfs@oss.sgi.com
Cc: Christoph Hellwig, gregkh@linuxfoundation.org, stable@vger.kernel.org
Message-ID: <4FC272D0.4090101@profihost.ag>
In-Reply-To: <4FBE1823.2050303@profihost.ag>

Hi,

does nobody have an idea? Or a suggestion what to check?
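In the meantime I can try to gather more data the next time it happens. A minimal sketch of what I would run (the sysctl path and the SysRq trigger are the standard kernel interfaces for this; the 600 second value is only an example):

  # current hung-task timeout (120 seconds by default)
  cat /proc/sys/kernel/hung_task_timeout_secs

  # example only: raise the timeout so the watchdog fires less often
  echo 600 > /proc/sys/kernel/hung_task_timeout_secs

  # dump the stacks of all uninterruptible (D state) tasks to dmesg;
  # requires CONFIG_MAGIC_SYSRQ and sysrq enabled
  echo w > /proc/sysrq-trigger

A SysRq-w dump taken while the hang is in progress might show whether anything else is holding or waiting on the same AGF buffer that ceph-osd is blocked on below.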
On 24.05.2012 13:14, Stefan Priebe - Profihost AG wrote:
> Hi list,
>
> while testing a Ceph cluster with XFS as the underlying filesystem,
> I've seen XFS blocking tasks several times.
>
> Kernel: 3.0.30 plus a patch labeled "xfs: don't wait for all pending I/O
> in ->write_inode" you (Christoph) sent me some months ago.
>
> INFO: task ceph-osd:3065 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> ceph-osd      D ffff8803b0e61d88     0  3065      1 0x00000004
>  ffff88032f3ab7f8 0000000000000086 ffff8803bffdac08 ffff880300000000
>  ffff8803b0e61820 0000000000010800 ffff88032f3abfd8 ffff88032f3aa010
>  ffff88032f3abfd8 0000000000010800 ffffffff81a0b020 ffff8803b0e61820
> Call Trace:
>  [] schedule+0x3a/0x60
>  [] schedule_timeout+0x1fd/0x2e0
>  [] ? xfs_iext_bno_to_ext+0x84/0x160
>  [] ? down_trylock+0x31/0x50
>  [] ? xfs_iext_bno_to_ext+0x84/0x160
>  [] __down+0x69/0xb0
>  [] ? _xfs_buf_find+0xf6/0x280
>  [] down+0x3b/0x50
>  [] xfs_buf_lock+0x40/0xe0
>  [] _xfs_buf_find+0xf6/0x280
>  [] xfs_buf_get+0x59/0x190
>  [] xfs_buf_read+0x27/0x100
>  [] xfs_trans_read_buf+0x1e7/0x420
>  [] xfs_read_agf+0x61/0x1a0
>  [] xfs_alloc_read_agf+0x34/0xd0
>  [] xfs_alloc_fix_freelist+0x3f7/0x470
>  [] ? kmem_free+0x35/0x40
>  [] ? xfs_trans_free_item_desc+0x2e/0x30
>  [] ? xfs_trans_free_items+0x87/0xb0
>  [] ? xfs_perag_get+0x33/0xb0
>  [] ? xfs_free_extent+0x8f/0x120
>  [] xfs_free_extent+0xa0/0x120
>  [] ? kmem_zone_alloc+0x77/0xf0
>  [] xfs_bmap_finish+0x15d/0x1a0
>  [] xfs_itruncate_finish+0x15e/0x340
>  [] xfs_setattr+0x365/0x980
>  [] xfs_vn_setattr+0x16/0x20
>  [] notify_change+0x11d/0x300
>  [] do_truncate+0x5c/0x90
>  [] ? get_write_access+0x15/0x50
>  [] sys_truncate+0x127/0x130
>  [] system_call_fastpath+0x16/0x1b
> INFO: task flush-8:16:3089 blocked for more than 120 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> flush-8:16    D ffff8803af0d9d88     0  3089      2 0x00000000
>  ffff88032e835940 0000000000000046 0000000100000fe0 ffff880300000000
>  ffff8803af0d9820 0000000000010800 ffff88032e835fd8 ffff88032e834010
>  ffff88032e835fd8 0000000000010800 ffff8803b0f7e080 ffff8803af0d9820
> Call Trace:
>  [] ? __lock_page+0x70/0x70
>  [] schedule+0x3a/0x60
>  [] io_schedule+0x87/0xd0
>  [] sleep_on_page+0x9/0x10
>  [] __wait_on_bit_lock+0x52/0xb0
>  [] __lock_page+0x62/0x70
>  [] ? autoremove_wake_function+0x40/0x40
>  [] ? pagevec_lookup_tag+0x20/0x30
>  [] write_cache_pages+0x386/0x4d0
>  [] ? set_page_dirty+0x70/0x70
>  [] ? kmem_cache_free+0x1b/0xe0
>  [] generic_writepages+0x4c/0x70
>  [] xfs_vm_writepages+0x4f/0x60
>  [] do_writepages+0x1c/0x40
>  [] writeback_single_inode+0xf4/0x260
>  [] writeback_sb_inodes+0xe5/0x1b0
>  [] writeback_inodes_wb+0x98/0x160
>  [] wb_writeback+0x2f3/0x460
>  [] ? __schedule+0x3ae/0x850
>  [] ? lock_timer_base+0x37/0x70
>  [] wb_do_writeback+0x21f/0x270
>  [] bdi_writeback_thread+0x9a/0x230
>  [] ? wb_do_writeback+0x270/0x270
>  [] ? wb_do_writeback+0x270/0x270
>  [] kthread+0x96/0xa0
>  [] kernel_thread_helper+0x4/0x10
>  [] ? kthread_worker_fn+0x130/0x130
>  [] ? gs_change+0xb/0xb
>
> Thanks and greets,
> Stefan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs