Message-ID: <530F5045.1010604@oracle.com>
Date: Thu, 27 Feb 2014 09:48:37 -0500
From: Sasha Levin
To: Minchan Kim, ngupta@vflare.org
CC: LKML
Subject: zram: lockdep spew for zram->init_lock

Hi all,

I've stumbled on the following spew while fuzzing with trinity inside a KVM tools guest running latest -next.

It looks like a false positive (we only set size for uninitialized devices, so we can't deadlock on them being in-use), but I'd really like someone to confirm it before I write it down as such.

[ 2655.365684] =================================
[ 2655.368278] [ INFO: inconsistent lock state ]
[ 2655.370163] 3.14.0-rc4-next-20140226-sasha-00013-g082bdac-dirty #4 Tainted: G W
[ 2655.371972] ---------------------------------
[ 2655.371972] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-R} usage.
[ 2655.371972] kswapd30/5352 [HC0[0]:SC0[0]:HE1:SE1] takes:
[ 2655.371972]  (&zram->init_lock){+++++-}, at: [] zram_make_request+0x2a/0xc0
[ 2655.371972] {RECLAIM_FS-ON-W} state was registered at:
[ 2655.371972]   [] mark_held_locks+0x6c/0x90
[ 2655.371972]   [] lockdep_trace_alloc+0xfd/0x140
[ 2655.371972]   [] kmem_cache_alloc_trace+0x32/0x2e0
[ 2655.371972]   [] zram_meta_alloc+0x20/0x150
[ 2655.371972]   [] disksize_store+0x8e/0xf0
[ 2655.371972]   [] dev_attr_store+0x1b/0x20
[ 2655.371972]   [] sysfs_kf_write+0x4a/0x60
[ 2655.371972]   [] kernfs_fop_write+0x110/0x190
[ 2655.371972]   [] vfs_write+0xe3/0x1d0
[ 2655.371972]   [] SyS_write+0x5d/0xa0
[ 2655.371972]   [] tracesys+0xdd/0xe2
[ 2655.371972] irq event stamp: 10207
[ 2655.371972] hardirqs last enabled at (10207): [] throtl_update_dispatch_stats+0x15d/0x1a0
[ 2655.371972] hardirqs last disabled at (10206): [] throtl_update_dispatch_stats+0xa4/0x1a0
[ 2655.371972] softirqs last enabled at (10172): [] __do_softirq+0x447/0x4f0
[ 2655.371972] softirqs last disabled at (10165): [] irq_exit+0x83/0x160
[ 2655.371972]
[ 2655.371972] other info that might help us debug this:
[ 2655.371972]  Possible unsafe locking scenario:
[ 2655.371972]
[ 2655.371972]        CPU0
[ 2655.371972]        ----
[ 2655.371972]   lock(&zram->init_lock);
[ 2655.371972]
[ 2655.371972]   lock(&zram->init_lock);
[ 2655.371972]
[ 2655.371972]  *** DEADLOCK ***
[ 2655.371972]
[ 2655.371972] no locks held by kswapd30/5352.
[ 2655.371972]
[ 2655.371972] stack backtrace:
[ 2655.371972] CPU: 78 PID: 5352 Comm: kswapd30 Tainted: G W 3.14.0-rc4-next-20140226-sasha-00013-g082bdac-dirty #4
[ 2655.371972]  ffff880636f98cc0 ffff880636f8d3f8 ffffffff843882e5 0000000000000000
[ 2655.371972]  ffff880636f98000 ffff880636f8d458 ffffffff811a09f7 0000000000000000
[ 2655.371972]  0000000000000001 ffff880600000001 ffffffff876c2060 0000000000000009
[ 2655.371972] Call Trace:
[ 2655.371972]  [] dump_stack+0x52/0x7f
[ 2655.371972]  [] print_usage_bug+0x1a7/0x1e0
[ 2655.371972]  [] ? print_usage_bug+0x1e0/0x1e0
[ 2655.371972]  [] mark_lock_irq+0xd9/0x2a0
[ 2655.371972]  [] mark_lock+0x128/0x210
[ 2655.371972]  [] mark_irqflags+0x144/0x170
[ 2655.371972]  [] __lock_acquire+0x2de/0x5a0
[ 2655.371972]  [] lock_acquire+0x182/0x1d0
[ 2655.371972]  [] ? zram_make_request+0x2a/0xc0
[ 2655.371972]  [] down_read+0x47/0xa0
[ 2655.371972]  [] ? zram_make_request+0x2a/0xc0
[ 2655.371972]  [] ? preempt_count_add+0x96/0xc0
[ 2655.371972]  [] zram_make_request+0x2a/0xc0
[ 2655.371972]  [] generic_make_request+0xb6/0x110
[ 2655.371972]  [] submit_bio+0x148/0x170
[ 2655.371972]  [] ? test_set_page_writeback+0x24e/0x2a0
[ 2655.371972]  [] __swap_writepage+0x1fc/0x220
[ 2655.371972]  [] ? _raw_spin_unlock+0x30/0x50
[ 2655.371972]  [] ? page_swapcount+0x4e/0x60
[ 2655.371972]  [] swap_writepage+0x72/0x80
[ 2655.371972]  [] pageout+0x167/0x2e0
[ 2655.371972]  [] shrink_page_list+0x4f4/0x7c0
[ 2655.371972]  [] shrink_inactive_list+0x31c/0x570
[ 2655.371972]  [] ? shrink_active_list+0x30b/0x320
[ 2655.371972]  [] shrink_lruvec+0x124/0x300
[ 2655.371972]  [] ? sched_clock+0x1d/0x30
[ 2655.371972]  [] shrink_zone+0x8e/0x1d0
[ 2655.371972]  [] kswapd_shrink_zone+0xf1/0x1b0
[ 2655.371972]  [] balance_pgdat+0x363/0x540
[ 2655.371972]  [] ? finish_wait+0x70/0x90
[ 2655.371972]  [] kswapd+0x2eb/0x350
[ 2655.371972]  [] ? ftrace_raw_event_mm_vmscan_writepage+0x180/0x180
[ 2655.371972]  [] kthread+0x105/0x110
[ 2655.371972]  [] ? __lock_release+0x1e2/0x200
[ 2655.371972]  [] ? set_kthreadd_affinity+0x30/0x30
[ 2655.371972]  [] ret_from_fork+0x7c/0xb0
[ 2655.371972]  [] ? set_kthreadd_affinity+0x30/0x30

Thanks,
Sasha