From: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
To: kernel test robot <ying.huang@linux.intel.com>
Cc: lkp@01.org, LKML <linux-kernel@vger.kernel.org>,
Sudip Mukherjee <sudip@vectorindia.org>,
Mark Brown <broonie@kernel.org>
Subject: Re: [lkp] [spi] 2baed30cb3: BUG: scheduling while atomic: systemd-udevd/134/0x00000002
Date: Wed, 20 Jan 2016 10:12:47 +0530 [thread overview]
Message-ID: <20160120044247.GA3238@sudip-pc> (raw)
In-Reply-To: <87bn8h5bbe.fsf@yhuang-dev.intel.com>
On Wed, Jan 20, 2016 at 08:44:37AM +0800, kernel test robot wrote:
> FYI, we noticed the below changes on
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> commit 2baed30cb30727b2637d26eac5a8887875a13420 ("spi: lm70llp: use new parport device model")
>
>
> +----------------+------------+------------+
> | | 74bdced4b4 | 2baed30cb3 |
> +----------------+------------+------------+
> | boot_successes | 0 | 0 |
> +----------------+------------+------------+
>
>
>
> [ 6.358390] i6300esb: Intel 6300ESB WatchDog Timer Driver v0.05
> [ 6.358540] i6300esb: cannot register miscdev on minor=130 (err=-16)
> [ 6.358555] i6300ESB timer: probe of 0000:00:06.0 failed with error -16
> [ 6.363357] BUG: scheduling while atomic: systemd-udevd/134/0x00000002
> [ 6.363366] Modules linked in: crc32c_intel pcspkr evdev i6300esb ide_cd_mod cdrom intel_agp intel_gtt i2c_piix4 i2c_core virtio_pci virtio virtio_ring agpgart rtc_cmos(+) parport_pc(+) autofs4
> [ 6.363369] CPU: 1 PID: 134 Comm: systemd-udevd Not tainted 4.4.0-rc1-00006-g2baed30 #1
> [ 6.363370] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
Can you please let me know how I can reproduce this on qemu? What command
line did you use?
> [ 6.363372] 0000000000012880 ffff88007f8bb880 ffffffff878a5e4d ffff880078712880
> [ 6.363374] ffff88007f8bb890 ffffffff876a64d6 ffff88007f8bb8d0 ffffffff87b05f69
> [ 6.363375] ffff88005d04e340 ffff88007f8bc000 000000000000007f ffffffff879a6260
> [ 6.363375] Call Trace:
> [ 6.363385] [<ffffffff878a5e4d>] dump_stack+0x4b/0x6e
> [ 6.363391] [<ffffffff876a64d6>] __schedule_bug+0x46/0x60
> [ 6.363394] [<ffffffff87b05f69>] __schedule+0x549/0x780
> [ 6.363398] [<ffffffff879a6260>] ? dead_read+0x10/0x10
dead_read() is used only when a port has been removed and the driver has
not registered with parport_register_driver(). But spi/spi-lm70llp.c does
register a detach callback, so whenever a port is removed, detach should
be executed instead of the dead ops being substituted.
regards
sudip
Thread overview: 9+ messages
2016-01-20 0:44 [lkp] [spi] 2baed30cb3: BUG: scheduling while atomic: systemd-udevd/134/0x00000002 kernel test robot
2016-01-20 4:42 ` Sudip Mukherjee [this message]
2016-01-20 5:00 ` [LKP] " Huang, Ying
2016-01-21 5:33 ` Sudip Mukherjee
2016-01-21 5:47 ` Huang, Ying
2016-01-21 6:06 ` Sudip Mukherjee
2016-01-23 6:39 ` Sudip Mukherjee
2016-01-25 2:15 ` Huang, Ying
2016-01-25 4:40 ` Sudip Mukherjee