public inbox for linux-bluetooth@vger.kernel.org
From: Thomas Gleixner <tglx@linutronix.de>
To: Maciej Rutecki <maciej.rutecki@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-bluetooth@vger.kernel.org, marcel@holtmann.org
Subject: Re: [2.6.29-git12] BUG: scheduling while atomic: swapper/0/0x10000100
Date: Fri, 10 Apr 2009 20:25:34 +0200 (CEST)	[thread overview]
Message-ID: <alpine.LFD.2.00.0904102019220.21577@localhost.localdomain> (raw)
In-Reply-To: <8db1092f0904101057x56000b73w9de3227b2a3a9abb@mail.gmail.com>

Maciej,

On Fri, 10 Apr 2009, Maciej Rutecki wrote:
> 2009/4/10 Thomas Gleixner <tglx@linutronix.de>:
> 
> [...]
> > Please upload trace.bz2 file to some place.
> 
> http://unixy.pl/maciek/download/kernel/2.6.30-rc1-git3/laptop/trace-2.6.30-rc1-git3.bz2

Ok. That gives us the necessary information:

  <idle>-0       1d... 76597263us : do_softirq <-irq_exit
  <idle>-0       1d... 76597263us : __do_softirq <-do_softirq
  <idle>-0       1..s. 76597264us : tasklet_action <-__do_softirq
  <idle>-0       1..s. 76597265us : hci_rx_task <-tasklet_action
  <idle>-0       1..s. 76597266us : _read_lock <-hci_rx_task
  <idle>-0       1..s. 76597267us : skb_dequeue <-hci_rx_task
  <idle>-0       1..s. 76597268us : _spin_lock_irqsave <-skb_dequeue
  <idle>-0       1d.s. 76597269us : _spin_unlock_irqrestore <-skb_dequeue
  <idle>-0       1..s. 76597270us : hci_send_to_sock <-hci_rx_task
  <idle>-0       1..s. 76597270us+: _read_lock <-hci_send_to_sock
  <idle>-0       1..s. 76597272us : skb_clone <-hci_send_to_sock
  <idle>-0       1..s. 76597273us : kmem_cache_alloc <-skb_clone
  <idle>-0       1..s. 76597274us : __skb_clone <-skb_clone
  <idle>-0       1..s. 76597275us : __copy_skb_header <-__skb_clone
  <idle>-0       1..s. 76597276us : skb_push <-hci_send_to_sock
  <idle>-0       1..s. 76597277us : sock_queue_rcv_skb <-hci_send_to_sock
  <idle>-0       1..s. 76597278us : sk_filter <-sock_queue_rcv_skb
  <idle>-0       1..s. 76597279us : security_sock_rcv_skb <-sk_filter
  <idle>-0       1..s. 76597280us : cap_socket_sock_rcv_skb <-security_sock_rcv_skb
  <idle>-0       1..s. 76597280us : local_bh_disable <-sk_filter
  <idle>-0       1..s. 76597281us : local_bh_enable <-sk_filter
  <idle>-0       1..s. 76597282us : skb_queue_tail <-sock_queue_rcv_skb
  <idle>-0       1..s. 76597283us : _spin_lock_irqsave <-skb_queue_tail
  <idle>-0       1d.s. 76597284us : _spin_unlock_irqrestore <-skb_queue_tail
  <idle>-0       1..s. 76597285us : sock_def_readable <-sock_queue_rcv_skb
  <idle>-0       1..s. 76597286us : _read_lock <-sock_def_readable
  <idle>-0       1..s. 76597286us : __wake_up_sync_key <-sock_def_readable
  <idle>-0       1..s. 76597287us : _spin_lock_irqsave <-__wake_up_sync_key
  <idle>-0       1d.s. 76597288us : __wake_up_common <-__wake_up_sync_key
  <idle>-0       1d.s. 76597289us : pollwake <-__wake_up_common
  <idle>-0       1d.s. 76597290us : default_wake_function <-pollwake
  <idle>-0       1d.s. 76597290us : try_to_wake_up <-default_wake_function
  <idle>-0       1d.s. 76597292us : update_shares <-try_to_wake_up
  <idle>-0       1d.s. 76597293us : walk_tg_tree <-update_shares
  <idle>-0       1d.s. 76597294us : tg_nop <-walk_tg_tree
  <idle>-0       1d.s. 76597294us : tg_nop <-walk_tg_tree
  <idle>-0       1d.s. 76597295us : tg_shares_up <-walk_tg_tree
  <idle>-0       1d.s. 76597297us : tg_nop <-walk_tg_tree
  <idle>-0       1d.s. 76597297us : tg_shares_up <-walk_tg_tree
  <idle>-0       1d.s. 76597298us : tg_nop <-walk_tg_tree
  <idle>-0       1d.s. 76597299us : tg_shares_up <-walk_tg_tree
  <idle>-0       1d.s. 76597300us : tg_nop <-walk_tg_tree
  <idle>-0       1d.s. 76597301us : tg_shares_up <-walk_tg_tree
  <idle>-0       1d.s. 76597302us : tg_nop <-walk_tg_tree
  <idle>-0       1d.s. 76597303us : tg_shares_up <-walk_tg_tree
  <idle>-0       1d.s. 76597304us : tg_shares_up <-walk_tg_tree
  <idle>-0       1d.s. 76597305us+: task_rq_lock <-try_to_wake_up
  <idle>-0       1d.s. 76597314us : _spin_lock <-task_rq_lock
  <idle>-0       1d.s. 76597315us : select_task_rq_fair <-try_to_wake_up
  <idle>-0       1d.s. 76597316us : activate_task <-try_to_wake_up
  <idle>-0       1d.s. 76597317us : enqueue_task_fair <-activate_task
  <idle>-0       1d.s. 76597317us : enqueue_entity <-enqueue_task_fair
  <idle>-0       1d.s. 76597318us : update_curr <-enqueue_entity
  <idle>-0       1d.s. 76597319us : place_entity <-enqueue_entity
  <idle>-0       1d.s. 76597320us : enqueue_entity <-enqueue_task_fair
  <idle>-0       1d.s. 76597321us : update_curr <-enqueue_entity
  <idle>-0       1d.s. 76597321us : place_entity <-enqueue_entity
  <idle>-0       1d.s. 76597322us : check_preempt_curr_idle <-try_to_wake_up
  <idle>-0       1d.s. 76597323us : resched_task <-check_preempt_curr_idle
  <idle>-0       1dNs. 76597324us : _spin_unlock_irqrestore <-try_to_wake_up
  <idle>-0       1dNs. 76597325us+: _spin_unlock_irqrestore <-__wake_up_sync_key
  <idle>-0       1.Ns. 76597326us : hci_event_packet <-hci_rx_task
  <idle>-0       1.Ns. 76597327us : skb_pull <-hci_event_packet
  <idle>-0       1.Ns. 76597328us : _spin_lock <-hci_event_packet
  <idle>-0       1.Ns. 76597330us : del_timer <-hci_event_packet
  <idle>-0       1.Ns. 76597331us : hci_conn_add_sysfs <-hci_event_packet
  <idle>-0       1.Ns. 76597332us+: dev_set_name <-hci_conn_add_sysfs

  Here we call dev_set_name() from tasklet context, which in turn calls kmalloc().

  <idle>-0       1.Ns. 76597334us+: strnlen <-string
  <idle>-0       1.Ns. 76597336us : __kmalloc <-kvasprintf
  <idle>-0       1.Ns. 76597337us : get_slab <-__kmalloc
  <idle>-0       1.Ns. 76597338us : _cond_resched <-__kmalloc

  We woke up a task above, so the need-resched flag (TIF_NEED_RESCHED) is set
  and we schedule out from the softirq context. Bad idea :)

  <idle>-0       1.Ns. 76597338us : __cond_resched <-_cond_resched
  <idle>-0       1.Ns. 76597339us : schedule <-__cond_resched
  <idle>-0       1.Ns. 76597340us : __schedule <-schedule
  <idle>-0       1.Ns. 76597341us : rcu_qsctr_inc <-__schedule
  <idle>-0       1.Ns. 76597342us : __schedule_bug <-__schedule

Marcel, can you please have a look at that?
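
FWIW, the usual way out of this class of bug is to defer the sysfs work to
process context. A rough C-flavored pseudocode sketch of the workqueue
pattern (hypothetical member and function names, untested, not a patch):

```c
/* Sketch only: defer the sysfs registration out of the tasklet into
 * process context, where GFP_KERNEL allocations may sleep.
 * The 'work' member is assumed, not existing code. */

struct hci_conn {
	/* ... existing members ... */
	struct work_struct work;	/* hypothetical new member */
};

static void hci_conn_add_sysfs_work(struct work_struct *work)
{
	struct hci_conn *conn = container_of(work, struct hci_conn, work);

	hci_conn_add_sysfs(conn);	/* now in process context: may sleep */
}

/* In hci_event_packet() (tasklet/atomic context), only queue it: */
INIT_WORK(&conn->work, hci_conn_add_sysfs_work);
schedule_work(&conn->work);
```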

Thanks,

	tglx


Thread overview: 14+ messages
2009-04-07  7:37 [2.6.29-git12] BUG: scheduling while atomic: swapper/0/0x10000100 Maciej Rutecki
2009-04-10  4:20 ` Andrew Morton
2009-04-10 10:40   ` Maciej Rutecki
2009-04-10 11:41     ` Thomas Gleixner
2009-04-10 14:47       ` Maciej Rutecki
2009-04-10 16:26         ` Thomas Gleixner
2009-04-10 16:52           ` Maciej Rutecki
2009-04-10 17:16             ` Thomas Gleixner
2009-04-10 17:57               ` Maciej Rutecki
2009-04-10 18:25                 ` Thomas Gleixner [this message]
2009-04-18 13:04                   ` Dave Young
2009-04-11 19:52                     ` Marcel Holtmann
2009-05-16 14:31                       ` Maciej Rutecki
2009-04-10 16:57           ` Maciej Rutecki
