From: Tobias Geiger <tobias.geiger@vido.info>
To: xen-devel@lists.xen.org
Subject: Re: Regression in kernel 3.5 as Dom0 regarding PCI Passthrough?!
Date: Wed, 25 Jul 2012 16:32:41 +0200
Message-ID: <c1a31744388dda3a239ef1d8a95333b9@vido.info>
In-Reply-To: <e66167cc9126ff5a6388a9b281901d68@vido.info>
It will take some time for me to re-test with "dom0_mem=4096M" (i.e.
without a "max" range), because I forgot a "panic=X" option on the Dom0
cmdline, so right now the machine is waiting for me to press the reset
button ... :(
I'll post my results ASAP.
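For reference, current vs. planned Xen command line (the planned line
is exactly the re-test described above, not yet verified on this box):

  # current (ballooning range; Dom0 panics on DomU shutdown):
  dom0_mem=4096M,max:7680M

  # planned re-test (fixed allocation, no "max" range):
  dom0_mem=4096M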
Greetings
On 25.07.2012 16:20, Tobias Geiger wrote:
> On 25.07.2012 15:43, Konrad Rzeszutek Wilk wrote:
>> On Wed, Jul 25, 2012 at 02:30:00PM +0200, Tobias Geiger wrote:
>>> Hi!
>>>
>>> I'm seeing a serious regression with 3.5 as the Dom0 kernel (3.4 was
>>> rock stable):
>>>
>>> 1st: only the GPU PCI passthrough works; the PCI USB controller is
>>> not recognized within the DomU (HVM Win7 64).
>>> Dom0 cmdline is:
>>> ro root=LABEL=dom0root
>>> xen-pciback.hide=(08:00.0)(08:00.1)(00:1d.0)(00:1d.1)(00:1d.2)(00:1d.7)
>>> security=apparmor noirqdebug nouveau.msi=1
>>>
>>> Only 08:00.0 and 08:00.1 get passed through without problems; none
>>> of the USB controller IDs are passed through correctly, and they get
>>> an exclamation mark in the Win7 device manager ("could not be
>>> started").
>>
>> Ok, but they do get passed in, though? As in, QEMU sees them.
>> If you boot a live Ubuntu/Fedora CD within the guest with the PCI
>> devices passed in, do you see them? Meaning, does lspci show them?
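>> (For example, from within the guest - standard lspci flags, nothing
>> Xen-specific:
>>
>>   lspci -nn
>>   lspci -vvv -s 00:1d.0
>>
>> and compare the BAR/IRQ lines against dom0's view of the device.)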
>>
>
> Yes, they get passed through:
>
> pc:~# xl pci-list win
> Vdev Device
> 05.0 0000:08:00.0
> 06.0 0000:08:00.1
> 07.0 0000:00:1d.0
> 08.0 0000:00:1d.1
> 09.0 0000:00:1d.2
> 0a.0 0000:00:1d.7
>
> but *:1d.* gets an exclamation mark within Win7...
>
> sorry, I have no Linux HVM at hand right now to do an lspci.
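> For completeness, the pci section of my domU config that produces the
> vdev mapping above looks roughly like this (quoting from memory, the
> exact syntax may differ):
>
>   pci = [ '08:00.0', '08:00.1', '00:1d.0', '00:1d.1', '00:1d.2', '00:1d.7' ]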
>
>>
>> Is the lspci -vvv output in dom0 different between 3.4 and 3.5?
>>
>>>
>>>
>>> 2nd: After DomU shutdown, Dom0 panics (100% reproducible). Sorry
>>> that I have no full stacktrace; all I have is a "screenshot", which
>>> I uploaded here:
>>> http://imageshack.us/photo/my-images/52/img20120724235921.jpg/
>>
>> Ugh, that looks like somebody removed a large chunk of a pagetable.
>>
>> Hmm. Are you using the dom0_mem=max parameter? If not, can you try
>> that, and also disable ballooning in the xm/xl config file, please?
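>> (i.e. something along these lines - the values are illustrative
>> only:
>>
>>   dom0_mem=max:4096M    # Xen cmdline: cap dom0 at a fixed size
>>
>> and in the guest config, identical memory/maxmem so xl has no reason
>> to balloon:
>>
>>   memory = 2048
>>   maxmem = 2048
>> )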
>
> I already have/had:
> xen_commandline : watchdog dom0_mem=4096M,max:7680M
> dom0_vcpus_pin
>
> but autoballooning was on in xl.conf; I disabled it:
>
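> Concretely - assuming the stock option name for this xl version - my
> /etc/xen/xl.conf now contains:
>
>   autoballoon=0
>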
> but I still get a panic as soon as the domU is shut down:
> (luckily I happened to press "enter" on the dmesg command at exactly
> the right time to get the full stacktrace just before my SSH
> connection died...)
>
> pc:~# dmesg
> [ 206.553547] xen-blkback:backend/vbd/1/832: prepare for reconnect
> [ 207.421690] xen-blkback:backend/vbd/1/768: prepare for reconnect
> [ 208.248271] vif vif-1-0: 2 reading script
> [ 208.252882] br0: port 3(vif1.0) entered disabled state
> [ 208.253584] br0: port 3(vif1.0) entered disabled state
> [ 213.115052] ------------[ cut here ]------------
> [ 213.115071] kernel BUG at drivers/xen/balloon.c:359!
> [ 213.115079] invalid opcode: 0000 [#1] PREEMPT SMP
> [ 213.115091] CPU 4
> [ 213.115094] Modules linked in: uvcvideo snd_seq_midi snd_usb_audio
> snd_usbmidi_lib snd_hwdep snd_rawmidi videobuf2_vmalloc
> videobuf2_memops videobuf2_core videodev joydev hid_generic gpio_ich
> [last unloaded: scsi_wait_scan]
> [ 213.115124]
> [ 213.115126] Pid: 1191, comm: kworker/4:1 Not tainted 3.5.0 #2 /DX58SO
> [ 213.115135] RIP: e030:[<ffffffff81448105>] [<ffffffff81448105>] balloon_process+0x385/0x3a0
> [ 213.115146] RSP: e02b:ffff88012e7f7dc0 EFLAGS: 00010213
> [ 213.115150] RAX: 0000000220be8000 RBX: 0000000000000000 RCX: 0000000000000008
> [ 213.115158] RDX: ffff88010bb02000 RSI: 00000000000001cb RDI: 000000000020efcb
> [ 213.115164] RBP: ffff88012e7f7e20 R08: ffff88014068e140 R09: 0000000000000001
> [ 213.115169] R10: 0000000000000001 R11: 0000000000000000 R12: 0000160000000000
> [ 213.115175] R13: 0000000000000001 R14: 000000000020efcb R15: ffffea00083bf2c0
> [ 213.115183] FS: 00007f31ea7f7700(0000) GS:ffff880140680000(0000) knlGS:0000000000000000
> [ 213.115189] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [ 213.115193] CR2: 00007f31ea193986 CR3: 0000000001e0c000 CR4: 0000000000002660
> [ 213.115199] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 213.115204] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [ 213.115210] Process kworker/4:1 (pid: 1191, threadinfo ffff88012e7f6000, task ffff88012ec65b00)
> [ 213.115216] Stack:
> [ 213.115218] 000000000008a6ba 0000000000000001 ffffffff8200ea80 0000000000000001
> [ 213.115331] 0000000000000000 0000000000007ff0 ffff88012e7f7e00 ffff8801312fb100
> [ 213.115341] ffff880140697000 ffff88014068e140 0000000000000000 ffffffff81e587c0
> [ 213.115350] Call Trace:
> [ 213.115356] [<ffffffff8106752b>] process_one_work+0x12b/0x450
> [ 213.115362] [<ffffffff81447d80>] ? decrease_reservation+0x320/0x320
> [ 213.115368] [<ffffffff810688ae>] worker_thread+0x12e/0x2d0
> [ 213.115374] [<ffffffff81068780>] ? manage_workers.isra.26+0x1f0/0x1f0
> [ 213.115380] [<ffffffff8106db6e>] kthread+0x8e/0xa0
> [ 213.115386] [<ffffffff8184e324>] kernel_thread_helper+0x4/0x10
> [ 213.115394] [<ffffffff8184c7bc>] ? retint_restore_args+0x5/0x6
> [ 213.115400] [<ffffffff8184e320>] ? gs_change+0x13/0x13
> [ 213.115406] Code: 01 15 80 69 bc 00 48 29 d0 48 89 05 7e 69 bc 00 e9 31 fd ff ff 0f 0b 0f 0b 4c 89 f7 e8 35 33 bc ff 48 83 f8 ff 0f 84 2b fe ff ff <0f> 0b 66 0f 1f 84 00 00 00 00 00 48 83 c1 01 e9 c2 fd ff ff 0f
> [ 213.115509] RIP [<ffffffff81448105>] balloon_process+0x385/0x3a0
> [ 213.115521] RSP <ffff88012e7f7dc0>
> [ 213.126036] ---[ end trace 38b78364333593e7 ]---
> [ 213.126061] BUG: unable to handle kernel paging request at fffffffffffffff8
> [ 213.126072] IP: [<ffffffff8106e07c>] kthread_data+0xc/0x20
> [ 213.126079] PGD 1e0e067 PUD 1e0f067 PMD 0
> [ 213.126087] Oops: 0000 [#2] PREEMPT SMP
> [ 213.126094] CPU 4
> [ 213.126097] Modules linked in: uvcvideo snd_seq_midi snd_usb_audio
> snd_usbmidi_lib snd_hwdep snd_rawmidi videobuf2_vmalloc
> videobuf2_memops videobuf2_core videodev joydev hid_generic gpio_ich
> [last unloaded: scsi_wait_scan]
> [ 213.126151]
> [ 213.126154] Pid: 1191, comm: kworker/4:1 Tainted: G D 3.5.0 #2 /DX58SO
> [ 213.126175] RIP: e030:[<ffffffff8106e07c>] [<ffffffff8106e07c>] kthread_data+0xc/0x20
> [ 213.126192] RSP: e02b:ffff88012e7f7a90 EFLAGS: 00010092
> [ 213.126203] RAX: 0000000000000000 RBX: 0000000000000004 RCX: 0000000000000004
> [ 213.126212] RDX: ffffffff81fcba40 RSI: 0000000000000004 RDI: ffff88012ec65b00
> [ 213.126225] RBP: ffff88012e7f7aa8 R08: 0000000000989680 R09: ffffffff81fcba40
> [ 213.126239] R10: ffffffff813b0d60 R11: 0000000000000000 R12: ffff8801406936c0
> [ 213.126254] R13: 0000000000000004 R14: ffff88012ec65af0 R15: ffff88012ec65b00
> [ 213.126270] FS: 00007f31ea7f7700(0000) GS:ffff880140680000(0000) knlGS:0000000000000000
> [ 213.126284] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [ 213.126296] CR2: fffffffffffffff8 CR3: 0000000001e0c000 CR4: 0000000000002660
> [ 213.126310] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 213.126325] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [ 213.126337] Process kworker/4:1 (pid: 1191, threadinfo ffff88012e7f6000, task ffff88012ec65b00)
> [ 213.126354] Stack:
> [ 213.126360] ffffffff810698d0 ffff88012e7f7aa8 ffff88012ec65ed8 ffff88012e7f7b18
> [ 213.126381] ffffffff8184ad32 ffff88012e7f7fd8 ffff88012ec65b00 ffff88012e7f7fd8
> [ 213.126403] ffff88012e7f7fd8 ffff8801312f94e0 ffff88012ec65b00 ffff88012ec660f0
> [ 213.126422] Call Trace:
> [ 213.126427] [<ffffffff810698d0>] ? wq_worker_sleeping+0x10/0xa0
> [ 213.126435] [<ffffffff8184ad32>] __schedule+0x592/0x7d0
> [ 213.126443] [<ffffffff8184b094>] schedule+0x24/0x70
> [ 213.126449] [<ffffffff81051582>] do_exit+0x5b2/0x910
> [ 213.126457] [<ffffffff8183e941>] ? printk+0x48/0x4a
> [ 213.126464] [<ffffffff8100ad02>] ? check_events+0x12/0x20
> [ 213.126472] [<ffffffff810175a1>] oops_end+0x71/0xa0
> [ 213.126478] [<ffffffff81017713>] die+0x53/0x80
> [ 213.126484] [<ffffffff81014418>] do_trap+0xb8/0x160
> [ 213.126490] [<ffffffff81014713>] do_invalid_op+0xa3/0xb0
> [ 213.126499] [<ffffffff81448105>] ? balloon_process+0x385/0x3a0
> [ 213.127254] [<ffffffff81085f52>] ? load_balance+0xd2/0x800
> [ 213.127940] [<ffffffff8108116d>] ? cpuacct_charge+0x6d/0xb0
> [ 213.128621] [<ffffffff8184e19b>] invalid_op+0x1b/0x20
> [ 213.129304] [<ffffffff81448105>] ? balloon_process+0x385/0x3a0
> [ 213.129962] [<ffffffff8106752b>] process_one_work+0x12b/0x450
> [ 213.130590] [<ffffffff81447d80>] ? decrease_reservation+0x320/0x320
> [ 213.131226] [<ffffffff810688ae>] worker_thread+0x12e/0x2d0
> [ 213.131856] [<ffffffff81068780>] ? manage_workers.isra.26+0x1f0/0x1f0
> [ 213.132482] [<ffffffff8106db6e>] kthread+0x8e/0xa0
> [ 213.133099] [<ffffffff8184e324>] kernel_thread_helper+0x4/0x10
> [ 213.133718] [<ffffffff8184c7bc>] ? retint_restore_args+0x5/0x6
> [ 213.134338] [<ffffffff8184e320>] ? gs_change+0x13/0x13
> [ 213.134954] Code: e0 ff ff 01 48 8b 80 38 e0 ff ff a8 08 0f 84 3d ff ff ff e8 97 cf 7d 00 e9 33 ff ff ff 66 90 48 8b 87 80 03 00 00 55 48 89 e5 5d <48> 8b 40 f8 c3 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55
> [ 213.135647] RIP [<ffffffff8106e07c>] kthread_data+0xc/0x20
> [ 213.136320] RSP <ffff88012e7f7a90>
> [ 213.136967] CR2: fffffffffffffff8
> [ 213.137610] ---[ end trace 38b78364333593e8 ]---
> [ 213.137611] Fixing recursive fault but reboot is needed!
>
> seems like a ballooning thing - I will try with only a "max" setting,
> not a range ... stay tuned ;)
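> (i.e. something like "dom0_mem=max:7680M" on the Xen command line -
> untested as of this writing.)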
>
>
>>
>>>
>>>
>>> With 3.4, neither issue was there - everything worked perfectly.
>>> Tell me which debugging info you need; I may be able to re-install
>>> my netconsole to get the full stacktrace (but I've not had much luck
>>> with netconsole regarding kernel panics - rarely does this info get
>>> sent before the "panic"...)
>>>
>>> Greetings
>>> Tobias
>>>
>
>