Date: Wed, 7 Sep 2011 15:15:05 -0700
From: Simon Kirby
To: linux-kernel@vger.kernel.org, drbd-dev@lists.linbit.com, xfs@oss.sgi.com
Subject: [3.1-rc4] XFS+DRBD hangs
Message-ID: <20110907221505.GC21603@hostway.ca>

We will try again with most recent Linus HEAD, but with a build of Linus
e9208a4eec8acbde7ede6516c39dea05f3b700dc (3.1-rc4), we are seeing awesome
VM performance (low memory fragmentation, good caching), followed quickly
by a hard lockup. The APEI and pstore stuff just doesn't seem to like this
PowerEdge 2950 ("ERST: Failed to get Error Log Address Range."), so this
was captured over an old-fashioned serial cable:

[   93.004364] block drbd0: Resync done (total 52 sec; paused 0 sec; 3884 K/sec)
[   93.008004] block drbd0: updated UUIDs D1D6A96AAE5E394E:0000000000000000:264787A8207F2376:7E86E22A1F92B7FB
[   93.008004] block drbd0: conn( SyncTarget -> Connected ) disk( Inconsistent -> UpToDate )
[   93.030394] block drbd0: helper command: /sbin/drbdadm after-resync-target minor-0
[   93.044631] block drbd0: helper command: /sbin/drbdadm after-resync-target minor-0 exit code 0 (0x0)
[   93.102981] block drbd0: bitmap WRITE of 3382 pages took 10 jiffies
[   93.109462] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
[ 1114.600102] block drbd0: peer( Primary -> Secondary )
[ 1114.904031] block drbd0: role( Secondary -> Primary )
[ 1115.284503] XFS (drbd0): Mounting Filesystem
[ 1115.928766] XFS (drbd0): Ending clean mount
[ 1116.324036] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 1116.336050] NFSD: starting 90-second grace period
[ 1532.427149] Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 1
[ 1532.427149] Pid: 3590, comm: lmtp Not tainted 3.1.0-rc4-hw+ #28
[ 1532.427149] Call Trace:
[ 1532.427149]  [] panic+0xba/0x1f5
[ 1532.427149]  [] watchdog_overflow_callback+0xb1/0xc0
[ 1532.427149]  [] __perf_event_overflow+0xa2/0x1f0
[ 1532.427149]  [] ? perf_event_update_userpage+0x11/0xc0
[ 1532.427149]  [] perf_event_overflow+0x14/0x20
[ 1532.427149]  [] intel_pmu_handle_irq+0x21b/0x460
[ 1532.427149]  [] perf_event_nmi_handler+0x36/0xb0
[ 1532.427149]  [] notifier_call_chain+0x3f/0x80
[ 1532.427149]  [] atomic_notifier_call_chain+0x15/0x20
[ 1532.427149]  [] notify_die+0x2e/0x30
[ 1532.427149]  [] do_nmi+0xa2/0x250
[ 1532.427149]  [] nmi+0x1a/0x20
[ 1532.427149]  [] ? try_to_wake_up+0xc2/0x270
[ 1532.427149] <>  [] default_wake_function+0xd/0x10
[ 1532.427149]  [] autoremove_wake_function+0x11/0x40
[ 1532.427149]  [] __wake_up_common+0x5a/0x90
[ 1532.427149]  [] __wake_up+0x43/0x70
[ 1532.427149]  [] drbd_al_complete_io+0x99/0xd0 [drbd]
[ 1532.427149]  [] _req_may_be_done+0x325/0x560 [drbd]
[ 1532.427149]  [] ? kmem_cache_free+0xe1/0xf0
[ 1532.427149]  [] _req_may_be_done_not_susp+0x1d/0x30 [drbd]
[ 1532.427149]  [] __req_mod+0x4ab/0xee0 [drbd]
[ 1532.427149]  [] drbd_endio_pri+0xae/0x120 [drbd]
[ 1532.427149]  [] bio_endio+0x1c/0x40
[ 1532.427149]  [] req_bio_endio+0x83/0xb0
[ 1532.427149]  [] blk_update_request+0x100/0x470
[ 1532.427149]  [] blk_update_bidi_request+0x22/0x90
[ 1532.427149]  [] blk_end_bidi_request+0x2a/0x80
[ 1532.427149]  [] blk_end_request+0xb/0x10
[ 1532.427149]  [] scsi_io_completion+0x11d/0x620
[ 1532.427149]  [] scsi_finish_command+0xbf/0x120
[ 1532.427149]  [] scsi_softirq_done+0x13e/0x160
[ 1532.427149]  [] blk_done_softirq+0x6d/0x80
[ 1532.427149]  [] __do_softirq+0xd0/0x1c0
[ 1532.427149]  [] call_softirq+0x1c/0x30
[ 1532.427149]  [] do_softirq+0x55/0x90
[ 1532.427149]  [] irq_exit+0xad/0xe0
[ 1532.427149]  [] do_IRQ+0x64/0xe0
[ 1532.427149]  [] common_interrupt+0x6b/0x6b
[ 1532.427149]
[ 1545.171011] Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 0
[ 1545.171011] Pid: 20283, comm: xfsbufd/drbd0 Not tainted 3.1.0-rc4-hw+ #28
[ 1545.171011] Call Trace:
[ 1545.171011]  [] panic+0xba/0x1f5
[ 1545.171011]  [] watchdog_overflow_callback+0xb1/0xc0
[ 1545.171011]  [] __perf_event_overflow+0xa2/0x1f0
[ 1545.171011]  [] ? perf_event_update_userpage+0x11/0xc0
[ 1545.171011]  [] perf_event_overflow+0x14/0x20
[ 1545.171011]  [] intel_pmu_handle_irq+0x21b/0x460
[ 1545.171011]  [] perf_event_nmi_handler+0x36/0xb0
[ 1545.171011]  [] notifier_call_chain+0x3f/0x80
[ 1545.171011]  [] atomic_notifier_call_chain+0x15/0x20
[ 1545.171011]  [] notify_die+0x2e/0x30
[ 1545.171011]  [] do_nmi+0xa2/0x250
[ 1545.171011]  [] nmi+0x1a/0x20
[ 1545.171011]  [] ? _raw_spin_lock_irqsave+0x17/0x20
[ 1545.171011] <>  [] drbd_endio_pri+0x9c/0x120 [drbd]
[ 1545.171011]  [] ? put_device+0x12/0x20
[ 1545.171011]  [] bio_endio+0x1c/0x40
[ 1545.171011]  [] req_bio_endio+0x83/0xb0
[ 1545.171011]  [] blk_update_request+0x100/0x470
[ 1545.171011]  [] blk_update_bidi_request+0x22/0x90
[ 1545.171011]  [] blk_end_bidi_request+0x2a/0x80
[ 1545.171011]  [] blk_end_request+0xb/0x10
[ 1545.171011]  [] scsi_io_completion+0x11d/0x620
[ 1545.171011]  [] scsi_finish_command+0xbf/0x120
[ 1545.171011]  [] scsi_softirq_done+0x13e/0x160
[ 1545.171011]  [] blk_done_softirq+0x6d/0x80
[ 1545.171011]  [] __do_softirq+0xd0/0x1c0
[ 1545.171011]  [] call_softirq+0x1c/0x30
[ 1545.171011]  [] do_softirq+0x55/0x90
[ 1545.171011]  [] irq_exit+0xad/0xe0
[ 1545.171011]  [] smp_call_function_single_interrupt+0x2f/0x40
[ 1545.171011]  [] call_function_single_interrupt+0x6b/0x70
[ 1545.171011]  [] ? scsi_request_fn+0xec/0x510
[ 1545.171011]  [] __blk_run_queue+0x15/0x20
[ 1545.171011]  [] cfq_insert_request+0x49c/0x5c0
[ 1545.171011]  [] __elv_add_request+0x198/0x2a0
[ 1545.171011]  [] blk_flush_plug_list+0x1d0/0x210
[ 1545.171011]  [] schedule+0x62a/0xad0
[ 1545.171011]  [] ? kmem_cache_alloc+0xee/0x100
[ 1545.171011]  [] ? lc_get+0x49/0x250 [lru_cache]
[ 1545.171011]  [] drbd_al_begin_io+0x1c5/0x200 [drbd]
[ 1545.171011]  [] ? bio_alloc_bioset+0x54/0xe0
[ 1545.171011]  [] ? wake_up_bit+0x40/0x40
[ 1545.171011]  [] ? __bio_clone+0x21/0x60
[ 1545.171011]  [] drbd_make_request_common+0xd25/0x1650 [drbd]
[ 1545.171011]  [] ? elv_set_request+0x16/0x40
[ 1545.171011]  [] drbd_make_request+0x33a/0x1150 [drbd]
[ 1545.171011]  [] ? blk_recount_segments+0x25/0x40
[ 1545.171011]  [] ? bio_phys_segments+0x21/0x30
[ 1545.171011]  [] ? drive_stat_acct+0x11e/0x180
[ 1545.171011]  [] ? __make_request+0x1bd/0x330
[ 1545.171011]  [] generic_make_request+0x3f3/0x540
[ 1545.171011]  [] ? kobject_put+0x27/0x60
[ 1545.171011]  [] submit_bio+0x76/0xf0
[ 1545.171011]  [] _xfs_buf_ioapply+0xb7/0x1f0
[ 1545.171011]  [] ? xfs_bdstrat_cb+0x85/0xb0
[ 1545.171011]  [] xfs_buf_iorequest+0x71/0xc0
[ 1545.171011]  [] xfs_bdstrat_cb+0x85/0xb0
[ 1545.171011]  [] xfsbufd+0xf6/0x130
[ 1545.171011]  [] ? xfs_bdstrat_cb+0xb0/0xb0
[ 1545.171011]  [] kthread+0x96/0xb0
[ 1545.171011]  [] kernel_thread_helper+0x4/0x10
[ 1545.171011]  [] ? kthread_worker_fn+0x130/0x130
[ 1545.171011]  [] ? gs_change+0xb/0xb

We upgraded from 2.6.36, which seemed to have a page leak (file pages left
on the LRU) and so would eventually perform very poorly. 2.6.37 and 2.6.38
seemed to have a Unix socket issue that caused heartbeat to wedge.

Shall we enable lock debugging or something here?

More bits at http://0x.ca/sim/ref/3.1-rc4/.

Simon-
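For the next attempt, here is a sketch of the lock-debugging .config options
we could turn on before rebuilding. This is only a guess at what would be
useful for a hard lockup like this (option names as they exist around 3.1;
not a known-good recipe for this box):

```
# Lockdep: prove lock ordering, track held locks in stack dumps
CONFIG_PROVE_LOCKING=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_LOCK_STAT=y
# Keep the hard/soft lockup detector that caught the panics above
CONFIG_LOCKUP_DETECTOR=y
```

Note lockdep adds noticeable overhead, so the VM-performance comparison
against 2.6.36 wouldn't be meaningful with it enabled.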