Date: Thu, 28 Jan 2016 15:05:30 +0900
From: Sergey Senozhatsky
To: "byungchul.park"
Cc: "'Sergey Senozhatsky'", akpm@linux-foundation.org, mingo@kernel.org,
	linux-kernel@vger.kernel.org, akinobu.mita@gmail.com, jack@suse.cz,
	torvalds@linux-foundation.org, peter@hurleysoftware.com,
	sergey.senozhatsky@gmail.com
Subject: Re: [PATCH v4] lib/spinlock_debug.c: prevent a recursive cycle in the debug code
Message-ID: <20160128060530.GC1834@swordfish>
References: <1453896061-14102-1-git-send-email-byungchul.park@lge.com>
	<20160128014253.GC1538@X58A-UD3R>
	<20160128023750.GB1834@swordfish>
	<000301d15985$7f416690$7dc433b0$@lge.com>
In-Reply-To: <000301d15985$7f416690$7dc433b0$@lge.com>

On (01/28/16 13:36), byungchul.park wrote:
[..]
> > the thing is, it's really-really hard to lockup in console_trylock()...
> >
> > int down_trylock(struct semaphore *sem)
> > {
> > 	unsigned long flags;
> > 	int count;
> >
> > 	raw_spin_lock_irqsave(&sem->lock, flags);   <<<<<< um...
>
> I also think it's hard, but a backtrace said the lockup happened here.

what was the state of `struct semaphore *sem' and especially of
`sem->lock'? what was the lock->owner_cpu doing? (addr2line of its pc
register, for example).
> > 	count = sem->count - 1;
> > 	if (likely(count >= 0))
> > 		sem->count = count;
> > 	raw_spin_unlock_irqrestore(&sem->lock, flags);
> >
> > 	return (count < 0);
> > }
> >
> > was not able to dereference sem->count? `*sem' was corrupted? or because
> > sem->lock was corrupted? in any of those cases you solve the problem from
> > the wrong side. if you have a memory corruption then it can corrupt
>
> What I solved here is to make it possible to print what the problem is, by
> the spinlock debug code instead of system lockup while printing a debug
> message. I think it's not wrong.

none of the CPUs will print anything anymore. it's done. your CPUa - CPUx
are spinning in down_trylock()

	vprintk_emit()
	 down_trylock()
	  raw_spin_lock_irqsave()   << spin here

and they are spinning because CPUz is keeping the sem->lock and is _not_
going to release it. and this is the root cause, not spin_dump().

CPUz was expected to do a quick thing in down_trylock()

	raw_spin_lock_irqsave(&sem->lock, flags);
	count = sem->count - 1;
	if (likely(count >= 0))
		sem->count = count;
	raw_spin_unlock_irqrestore(&sem->lock, flags);

or down()/down_common()

	raw_spin_lock_irqsave(&sem->lock, flags);
	if (likely(sem->count > 0))
		sem->count--;
	else
		down_common()
		{
			...
			for (;;) {
				if (signal_pending_state(state, task))
					goto interrupted;
				if (unlikely(timeout <= 0))
					goto timed_out;
				__set_task_state(task, state);
				raw_spin_unlock_irq(&sem->lock);
				timeout = schedule_timeout(timeout);
				raw_spin_lock_irq(&sem->lock);
				if (waiter.up)
					return 0;
			}
			...
		}
	raw_spin_unlock_irqrestore(&sem->lock, flags);

I can't see how it's even possible to keep that spin_lock locked longer
than `loops_per_jiffy * HZ'. and the fact that you saw N recursive
printk()->down_trylock()->spin_lock()->spin_dump()->printk()->...
sounds like good proof of the fact that your CPUz was either dead, or
gone crazy, and took the spin_lock with it. but this is not
spinlock_debug's business.
console_flush_on_panic() _probably_ would help in this particular case --
it does not care about console_sem state and goes to console_unlock()
directly -- but it still takes the logbuf_lock. so if CPUz died with
logbuf_lock locked, then nothing will save your day.

do you have a reproducer, or have you seen it only once?

	-ss