From: konrad wilk <konrad.wilk@oracle.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Xen 4.3 + tmem = Xen BUG at domain_page.c:143
Date: Tue, 11 Jun 2013 09:45:45 -0400
Message-ID: <51B72A09.8080709@oracle.com>
This is a fairly simple test (just creating a guest on a tmem-enabled host and letting it boot), and it does work with Xen 4.2; with Xen 4.3 it hits the BUG below.
# xl info
host                   : tst035.dumpdata.com
release                : 3.10.0-rc5upstream-00438-g335262d-dirty
version 3f00:17bae3ff:00000000:00000001:00000000
virt_caps              : hvm hvm_directio
total_memory           : 8016
free_memory            : 5852
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 3
xen_extra              : -unstable
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Mon Jun 10 14:42:51 2013 +0200 git:44434f3-dirty
xen_commandline        : com1=115200,8n1 tmem=1 dom0_mem=max:2G cpufreq=xen:performance,verbose noreboot console=com1,vga loglvl=all guest_loglvl=all
cc_compiler            : gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)
cc_compile_by          : konrad
cc_compile_domain      : (none)
cc_compile_date        : Mon Jun 10 17:01:43 EDT 2013
xend_config_format     : 4
#
[ 133.475684] xen-blkback:(backend_changed:585) .
Jun 11 13:39:58 tst035 logger: /etc/xen/scripts/block: add
XENBUS_PATH=backend/vbd/1/51712
[ 133.477018] xen-blkback:(xen_vbd_create:421) Successful creation of
handle=ca00 (dom=1)
[ 133.477018] .
[ 133.479632] xen-blkback:(frontend_changed:665) Initialising.
mapping kernel into physical memory
about to get started...
Jun 11 13:39:59 tst035 logger: /etc/xen/scripts/vif-bridge: online
type_if=vif XENBUS_PATH=backend/vif/1/0
[ 133.635819] device vif1.0 entered promiscuous mode
[ 133.639363] IPv6: ADDRCONF(NETDEV_UP): vif1.0: link is not ready
Jun 11 13:39:59 tst035 logger: /etc/xen/scripts/vif-bridge: Successful
vif-bridge online for vif1.0, bridge switch.
Jun 11 13:39:59 tst035 logger: /etc/xen/scripts/vif-bridge: Writing
backend/vif/1/0/hotplug-status connected to xenstore.
[ 135.864732] IPv6: ADDRCONF(NETDEV_CHANGE): vif1.0: link becomes ready
[ 135.865760] switch: port 2(vif1.0) entered forward
[ 135.965777] xen-blkback:(frontend_changed:665) Initialised.
[ 135.966711] xen-blkback:(connect_ring:820) /local/domain/1/d
persistent grants
[ 135.968942] xen-blkback:(connect:734) /local/domain/1/device/vbd/51712.
[ 135.981089] xen-blkback:(frontend_changed:665) Connected.
... snip..
[ 140.441073] xen-blkback: grant 38 added to the tree of persistent
grants, using 28/1056
[ 140.441640] xen-blkback: grant 39 added to the tree of persistent
grants, using 29/1056
[ 140.442284] xen-blkback: grant 40 added to the tree of persistent
grants, using 30/1056
[ 140.442840] xen-blkback: grant 41 added to the tree of persistent
grants, using 31/1056
[ 140.443389] xen-blkback: grant 42 added to the tree of persistent
grants, using 32/1056
[ 140.443920] xen-blkback: grant 43 added to the tree of persistent
grants, using 33/1056
[ 140.444449] xen-blkback: grant 44 added to the tree of persistent
grants, using 34/1056
(XEN) tmem: initializing tmem capability for domid=1...<G><2>ok
(XEN) tmem: allocating persistent-private tmem pool for
domid=1...<G><2>pool_id=0
[ 150.879132] switch: port 2(vif1.0) entered forwarding state
(XEN) Xen BUG at domain_page.c:143
(XEN) ----[ Xen-4.3-unstable x86_64 debug=y Not tainted ]----
(XEN) CPU: 0
(XEN) RIP: e008:[<ffff82c4c0160461>] map_domain_page+0x450/0x514
(XEN) RFLAGS: 0000000000010046 CONTEXT: hypervisor
(XEN) rax: 0000000000000020 rbx: ffff8300c68f9000 rcx: 0000000000000000
(XEN) rdx: 0000000000000020 rsi: 0000000000000020 rdi: 0000000000000000
(XEN) rbp: ffff82c4c02c7cc8 rsp: ffff82c4c02c7c88 r8: ffff820060001000
(XEN) r9: 00000000ffffffff r10: ffff820060006000 r11: 0000000000000000
(XEN) r12: ffff83022e1bb000 r13: 00000000001ebcdc r14: 0000000000000020
(XEN) r15: 0000000000000004 cr0: 0000000080050033 cr4: 00000000000426f0
(XEN) cr3: 0000000209541000 cr2: ffff88002b683fd0
(XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: e010 cs: e008
(XEN) Xen stack trace from rsp=ffff82c4c02c7c88:
(XEN) ffff83022e1bb2d8 0000000000000286 ffff82c4c012760a ffff83022e1bb000
(XEN) ffff82e003d79b80 ffff82c4c02c7d60 00000000001ebcdc 0000000000000000
(XEN) ffff82c4c02c7d38 ffff82c4c01373de ffff82c4c0127b6b ffffffffffffffff
(XEN) 00000000c02c7d38 ffff82c4c02c7d58 ffff83022e1bb2d8 0000000000000286
(XEN) 0000000000000027 0000000000000000 0000000000001000 0000000000000000
(XEN) 0000000000000000 00000000001ebcdc ffff82c4c02c7d98 ffff82c4c01377c4
(XEN) 0000000000000000 ffff820040014000 ffff82e003d79b80 00000000001ebcdc
(XEN) ffff82c4c02c7d98 ffff830210ecf390 00000000fffffff4 ffff820040010010
(XEN) ffff82004001cf50 ffff83022e1bcc90 ffff82c4c02c7e18 ffff82c4c0135929
(XEN) ffff82c4c02c7db8 ffff82004001cf50 0000000000000000 00000000001ebcdc
(XEN) 0000000000000000 0000000000000000 0000e8a200000000 ffff82c4c02c7e00
(XEN) ffff82c4c02c7e18 ffff83022e1bcc90 ffff830210ecf390 0000000000000000
(XEN) 0000000000000001 000000000000009a ffff82c4c02c7ef8 ffff82c4c0136510
(XEN) 0000002700001000 0000000000000000 ffff82c4c02c7e90 97c4284effffffc2
(XEN) ffff82c4c02c7e68 ffff82c4c015719d ffff82c4c0127b09 0000000000000000
(XEN) ffff82c4c02c7e88 ffff82c4c018c13c ffff82c4c0319100 ffff82c4c02c7f18
(XEN) 0000000000000004 0000000000000001 0000000000000000 0000000000000000
(XEN) 000000000000e8a2 0000000000000000 00000000001ebcdc 000000000000e030
(XEN) 0000000000000246 ffff8300c68f9000 0000000000000000 0000000000000000
(XEN) 0000000000000001 0000000000000000 00007d3b3fd380c7 ffff82c4c02236db
(XEN) Xen call trace:
(XEN) [<ffff82c4c0160461>] map_domain_page+0x450/0x514
(XEN) [<ffff82c4c01373de>] cli_get_page+0x15e/0x17b
(XEN) [<ffff82c4c01377c4>] tmh_copy_from_client+0x150/0x284
(XEN) [<ffff82c4c0135929>] do_tmem_put+0x323/0x5c4
(XEN) [<ffff82c4c0136510>] do_tmem_op+0x5a0/0xbd0
(XEN) [<ffff82c4c02236db>] syscall_enter+0xeb/0x145
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at domain_page.c:143
(XEN) ****************************************
(XEN)
(XEN) Manual reset required ('noreboot' specified)
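
For anyone reading along who does not know the tmem put path, here is a rough sketch of what the call trace above boils down to. The struct, the bitmask and the function signatures below are simplified stand-ins I wrote for illustration, not the actual Xen code, but the shape matches the trace: each 4k put from the guest goes through cli_get_page() / tmh_copy_from_client(), which map the client's page via map_domain_page(), and map_domain_page() BUGs (roughly what domain_page.c:143 checks, as far as I can tell) when it cannot find a free per-vCPU mapping slot.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE        4096
#define MAPCACHE_ENTRIES 32   /* per-vCPU mapping slots; illustrative number */

#define BUG_ON(cond) do { if (cond) { fprintf(stderr, "Xen BUG at %s:%d\n", __FILE__, __LINE__); abort(); } } while (0)

struct mapcache {
    uint32_t inuse;                               /* one bit per mapping slot */
    uint8_t  slots[MAPCACHE_ENTRIES][PAGE_SIZE];  /* stands in for the mapcache VA range */
};

/* map_domain_page(): hand out a temporary mapping of an MFN to the current vCPU. */
static void *map_domain_page(struct mapcache *mc, unsigned long mfn)
{
    unsigned int idx;

    for (idx = 0; idx < MAPCACHE_ENTRIES; idx++)
        if (!(mc->inuse & (1u << idx)))
            break;

    /*
     * Roughly the condition behind "Xen BUG at domain_page.c:143": after
     * looking for (or trying to reclaim) a free slot, none is available,
     * i.e. the per-vCPU mapcache has been exhausted.
     */
    BUG_ON(idx >= MAPCACHE_ENTRIES);

    mc->inuse |= 1u << idx;
    (void)mfn;            /* the real code installs a page-table entry for mfn here */
    return mc->slots[idx];
}

static void unmap_domain_page(struct mapcache *mc, const void *va)
{
    unsigned int idx = ((const uint8_t *)va - &mc->slots[0][0]) / PAGE_SIZE;

    mc->inuse &= ~(1u << idx);
}

/*
 * do_tmem_put() -> tmh_copy_from_client() -> cli_get_page(): every 4k put
 * from the guest looks up the client's page and maps it through the
 * mapcache so the hypervisor can copy it into a tmem-owned page.
 */
static void tmem_copy_from_client(struct mapcache *mc, unsigned long client_mfn,
                                  void *tmem_page)
{
    void *cli_va = map_domain_page(mc, client_mfn);

    memcpy(tmem_page, cli_va, PAGE_SIZE);
    unmap_domain_page(mc, cli_va);    /* if a path skipped this, slots would leak
                                         and the BUG_ON above would eventually fire */
}

int main(void)
{
    static struct mapcache mc;
    static uint8_t tmem_page[PAGE_SIZE];
    unsigned long mfn;

    /* A stream of puts, like the guest issues once its tmem frontend is up. */
    for (mfn = 0x1000; mfn < 0x1400; mfn++)
        tmem_copy_from_client(&mc, mfn, tmem_page);

    puts("mapcache never ran out of slots");
    return 0;
}

IIRC the real mapcache is per-domain with per-vCPU state and a small hash of recently used entries rather than a single bitmask, but the failure mode is the same: either something on this path does not release its mappings, or it needs more simultaneous mappings than the mapcache can hand out. Either way, the same test does not trigger this on 4.2.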