Subject: [BUG -rt] scheduling in atomic.
From: Steven Rostedt
To: Ingo Molnar
Cc: LKML
Date: Thu, 14 Dec 2006 22:47:43 -0500
Message-Id: <1166154463.19210.7.camel@localhost.localdomain>

Ingo,

I've hit this. I compiled the kernel with CONFIG_PREEMPT and turned off
running IRQs as threads.

BUG: scheduling while atomic: swapper/0x00000001/1, CPU#3

Call Trace:
 [] dump_trace+0xaa/0x404
 [] show_trace+0x3c/0x52
 [] dump_stack+0x15/0x17
 [] __sched_text_start+0x8a/0xbb7
 [] schedule+0xd3/0xf3
 [] flush_cpu_workqueue+0x72/0xa4
 [] flush_workqueue+0x6d/0x95
 [] schedule_on_each_cpu+0xe8/0xff
 [] filevec_add_drain_all+0x12/0x14
 [] remove_proc_entry+0xaf/0x258
 [] unregister_handler_proc+0x23/0x48
 [] free_irq+0xda/0x114
 [] i8042_probe+0x338/0x75c
 [] platform_drv_probe+0x12/0x14
 [] really_probe+0x54/0xee
 [] driver_probe_device+0xae/0xba
 [] __device_attach+0x9/0xb
 [] bus_for_each_drv+0x47/0x7d
 [] device_attach+0x65/0x79
 [] bus_attach_device+0x24/0x4c
 [] device_add+0x38f/0x505
 [] platform_device_add+0x11a/0x152
 [] i8042_init+0x2b0/0x30d
 [] init+0x182/0x344
 [] child_rip+0xa/0x12

It seems that we have this in remove_proc_entry:

	spin_lock(&proc_subdir_lock);
	for (p = &parent->subdir; *p; p = &(*p)->next) {
		[...]
		proc_kill_inodes(de);
		[...]
	}
	spin_unlock(&proc_subdir_lock);

And in proc_kill_inodes:

	static void proc_kill_inodes(struct proc_dir_entry *de)
	{
		struct file *filp;
		struct super_block *sb = proc_mnt->mnt_sb;

		/*
		 * Actually it's a partial revoke().
		 */
		filevec_add_drain_all();
		[...]
	}

and in filevec_add_drain_all:

	int filevec_add_drain_all(void)
	{
		return schedule_on_each_cpu(filevec_add_drain_per_cpu, NULL);
	}

And schedule_on_each_cpu can easily sleep (the trace shows it going through
flush_workqueue and into schedule). So we end up scheduling while holding a
spin lock.

I don't know this code very well, and don't have time to look too deeply
into it, but I figured I would report it.

-- Steve
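[Editor's note: one conceivable shape of a fix, sketched from the snippets
above and not tested. The idea is the usual one for this kind of bug: do the
list manipulation under proc_subdir_lock, but call the sleeping
proc_kill_inodes() only after the lock is dropped. The unlink-then-call
structure and the use of proc_match() here are assumptions, not the actual
patch.]

	/*
	 * Sketch only, untested: unlink the matching entry while holding
	 * the spinlock, then do the sleeping work with no lock held.
	 */
	de = NULL;
	spin_lock(&proc_subdir_lock);
	for (p = &parent->subdir; *p; p = &(*p)->next) {
		if (!proc_match(len, fn, *p))	/* assumed match helper */
			continue;
		de = *p;
		*p = de->next;			/* unlink under the lock */
		de->next = NULL;
		break;
	}
	spin_unlock(&proc_subdir_lock);

	if (de)
		proc_kill_inodes(de);	/* may sleep: no spinlock held now */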