From: Matthew Garrett <mjg59@srcf.ucam.org>
To: linux-kernel@vger.kernel.org
Cc: pavel@ucw.cz
Date: Thu, 03 Mar 2005 00:56:44 +0000
Message-Id: <1109811404.5918.80.camel@tyrosine>
Subject: Scheduling while atomic errors on swsusp resume

Using the current Ubuntu development kernel (2.6.10 with ACPI and swsusp
code backported from 2.6.11), a user is getting the following trace on
resume. Passing "noapic nolapic" removes the APIC error, but the rest of
the trace is identical. This is reproducible, but only seems to happen on
this machine. Does anyone have any idea what's going on before I head off
to try to reproduce it with a stock kernel?

Stopping tasks: =========================================================|
Freeing memory...
-\|/-\|/-\|/-\|/-\|/-\|/-\|/-\|/-\|/-\|/-\|/-\|/-\|/-\|done (21866 pages freed)
.........................................swsusp: Need to copy 19917 pages
.swsusp: Restoring Highmem
APIC error on CPU0: 00(00)
ACPI: PCI interrupt 0000:00:11.1[A]: no GSI
scheduling while atomic: hibernate.sh/0x00000002/6955
 [] schedule+0x52d/0x540
 [] task_no_data_intr+0x0/0xa0 [ide_core]
 [] wait_for_completion+0x78/0xd0
 [] default_wake_function+0x0/0x20
 [] default_wake_function+0x0/0x20
 [] __elv_add_request+0x78/0xc0
 [] ide_do_drive_cmd+0xf9/0x170 [ide_core]
 [] generic_ide_resume+0x93/0xc0 [ide_core]
 [] resume_device+0x27/0x30
 [] dpm_resume+0xa4/0xb0
 [] device_resume+0x11/0x20
 [] finish+0x12/0x60
 [] pm_suspend_disk+0x41/0x80
 [] enter_state+0x6c/0x70
 [] state_store+0xa0/0xa8
 [] subsys_attr_store+0x3a/0x40
 [] flush_write_buffer+0x3e/0x50
 [] sysfs_write_file+0x6f/0x80
 [] sysfs_write_file+0x0/0x80
 [] vfs_write+0xbf/0x150
 [] sys_write+0x51/0x80
 [] sysenter_past_esp+0x52/0x75
Restarting tasks...<3>scheduling while atomic: hibernate.sh/0x00000002/6955
 [] schedule+0x52d/0x540
 [] wake_up_process+0x1e/0x20
 [] thaw_processes+0xa4/0xe0
 [] finish+0x20/0x60
 [] pm_suspend_disk+0x41/0x80
 [] enter_state+0x6c/0x70
 [] state_store+0xa0/0xa8
 [] subsys_attr_store+0x3a/0x40
 [] flush_write_buffer+0x3e/0x50
 [] sysfs_write_file+0x6f/0x80
 [] sysfs_write_file+0x0/0x80
 [] vfs_write+0xbf/0x150
 [] sys_write+0x51/0x80
 [] sysenter_past_esp+0x52/0x75
done
scheduling while atomic: hibernate.sh/0x00000001/6955
 [] schedule+0x52d/0x540
 [] sys_write+0x51/0x80
 [] work_resched+0x5/0x16
scheduling while atomic: hibernate.sh/0x00000001/6955
 [] schedule+0x52d/0x540
 [] sys_sched_yield+0x53/0x70
 [] coredump_wait+0x38/0xa0
 [] try_to_wake_up+0xa4/0xc0
 [] do_coredump+0xcd/0x1d6
 [] vgacon_scroll+0x144/0x230
 [] free_uid+0x1f/0x80
 [] __dequeue_signal+0xe5/0x1a0
 [] dequeue_signal+0x35/0x90
 [] get_signal_to_deliver+0x20d/0x300
 [] do_signal+0x9d/0x130
 [] __kernel_text_address+0x2e/0x40
 [] __kernel_text_address+0x2e/0x40
 [] recalc_task_prio+0x8f/0x190
 [] schedule+0x2f1/0x540
 [] do_page_fault+0x0/0x5c7
 [] do_notify_resume+0x37/0x3c
 [] work_notifysig+0x13/0x15
note: hibernate.sh[6955] exited with preempt_count 1

-- 
Matthew Garrett | mjg59@srcf.ucam.org