* Re: PoD issue
@ 2010-01-31 17:48 Jan Beulich
2010-02-03 18:42 ` George Dunlap
0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2010-01-31 17:48 UTC (permalink / raw)
To: george.dunlap; +Cc: xen-devel
>>> George Dunlap 01/29/10 7:30 PM >>>
>PoD is not critical to balloon out guest memory. You can boot with mem
>== maxmem and then balloon down afterwards just as you could before,
>without involving PoD. (Or at least, you should be able to; if you
>can't then it's a bug.) It's just that with PoD you can do something
>you've always wanted to do but never knew it: boot with 1GiB with the
>option of expanding up to 2GiB later. :-)
Oh, no, that's not what I meant. What I really wanted to say is that
with PoD, a properly functioning balloon driver in the guest is crucial
for it to stay alive long enough.
>With the 54 megabyte difference: It's not like a GiB vs GB thing, is
>it? (i.e., 2^30 vs 10^9?) The difference between 1GiB (2^30) and 1 GB
>(10^9) is about 74 megs, or 18,000 pages.
No, that's not the problem. As I understand it now, the problem is
that totalram_pages (which the balloon driver bases its calculations
on) reflects all memory available after all bootmem allocations were
done (i.e. includes neither the static kernel image nor any memory
allocated before or from the bootmem allocator).
>I guess that is a weakness of PoD in general: we can't control the guest
>balloon driver, but we rely on it to have the same model of how to
>translate "target" into # pages in the balloon as the PoD code.
I think this isn't a weakness of PoD, but a design issue in the balloon
driver's xenstore interface: While a target value shown in or obtained
from the /proc and /sys interfaces naturally can be based on (and
reflect) any internal kernel state, the xenstore interface should only
use numbers expressed in terms of the full amount of memory given to
the guest. Hence a target value read from the memory/target node should
be adjusted before being put in relation to totalram_pages. And I think this
is a general misconception in the current implementation (i.e. it
should be corrected not only for the HVM case, but for the pv one
as well).
The bad aspect of this is that it will require a fixed balloon driver
in any HVM guest that has maxmem>mem when the underlying Xen
gets updated to a version that supports PoD. I cannot, however,
see an OS and OS-version independent alternative (i.e. something
to be done in the PoD code or the tools).
Jan
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: PoD issue
2010-01-31 17:48 PoD issue Jan Beulich
@ 2010-02-03 18:42 ` George Dunlap
2010-02-04 8:17 ` Jan Beulich
0 siblings, 1 reply; 13+ messages in thread
From: George Dunlap @ 2010-02-03 18:42 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel@lists.xensource.com
So did you track down where the math error is? Do we have a plan to fix
this going forward?
-George
Jan Beulich wrote:
>>>> George Dunlap 01/29/10 7:30 PM >>>
>>>>
>> PoD is not critical to balloon out guest memory. You can boot with mem
>> == maxmem and then balloon down afterwards just as you could before,
>> without involving PoD. (Or at least, you should be able to; if you
>> can't then it's a bug.) It's just that with PoD you can do something
>> you've always wanted to do but never knew it: boot with 1GiB with the
>> option of expanding up to 2GiB later. :-)
>>
>
> Oh, no, that's not what I meant. What I really wanted to say is that
> with PoD, a properly functioning balloon driver in the guest is crucial
> for it to stay alive long enough.
>
>
>> With the 54 megabyte difference: It's not like a GiB vs GB thing, is
>> it? (i.e., 2^30 vs 10^9?) The difference between 1GiB (2^30) and 1 GB
>> (10^9) is about 74 megs, or 18,000 pages.
>>
>
> No, that's not the problem. As I understand it now, the problem is
> that totalram_pages (which the balloon driver bases its calculations
> on) reflects all memory available after all bootmem allocations were
> done (i.e. includes neither the static kernel image nor any memory
> allocated before or from the bootmem allocator).
>
>
>> I guess that is a weakness of PoD in general: we can't control the guest
>> balloon driver, but we rely on it to have the same model of how to
>> translate "target" into # pages in the balloon as the PoD code.
>>
>
> I think this isn't a weakness of PoD, but a design issue in the balloon
> driver's xenstore interface: While a target value shown in or obtained
> from the /proc and /sys interfaces naturally can be based on (and
> reflect) any internal kernel state, the xenstore interface should only
> use numbers in terms of full memory amount given to the guest.
> Hence a target value read from the memory/target node should be
> adjusted before put in relation to totalram_pages. And I think this
> is a general misconception in the current implementation (i.e. it
> should be corrected not only for the HVM case, but for the pv one
> as well).
>
> The bad aspect of this is that it will require a fixed balloon driver
> in any HVM guest that has maxmem>mem when the underlying Xen
> gets updated to a version that supports PoD. I cannot, however,
> see an OS and OS-version independent alternative (i.e. something
> to be done in the PoD code or the tools).
>
> Jan
>
>
* Re: PoD issue
2010-02-03 18:42 ` George Dunlap
@ 2010-02-04 8:17 ` Jan Beulich
2010-02-04 19:12 ` George Dunlap
0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2010-02-04 8:17 UTC (permalink / raw)
To: George Dunlap; +Cc: xen-devel@lists.xensource.com
It was in the balloon driver's interaction with xenstore - see 2.6.18 c/s
989.
I have to admit that I cannot see how this issue could have escaped
attention when the PoD code was introduced - any guest with PoD in use
and an unfixed balloon driver is bound to crash sooner or later (with
the unfortunate consequence of requiring an update of the pv drivers
in HVM guests when upgrading Xen from a PoD-incapable to a PoD-capable
version).
Jan
>>> George Dunlap <george.dunlap@eu.citrix.com> 03.02.10 19:42 >>>
So did you track down where the math error is? Do we have a plan to fix
this going forward?
-George
Jan Beulich wrote:
>>>> George Dunlap 01/29/10 7:30 PM >>>
>>>>
>> PoD is not critical to balloon out guest memory. You can boot with mem
>> == maxmem and then balloon down afterwards just as you could before,
>> without involving PoD. (Or at least, you should be able to; if you
>> can't then it's a bug.) It's just that with PoD you can do something
>> you've always wanted to do but never knew it: boot with 1GiB with the
>> option of expanding up to 2GiB later. :-)
>>
>
> Oh, no, that's not what I meant. What I really wanted to say is that
> with PoD, a properly functioning balloon driver in the guest is crucial
> for it to stay alive long enough.
>
>
>> With the 54 megabyte difference: It's not like a GiB vs GB thing, is
>> it? (i.e., 2^30 vs 10^9?) The difference between 1GiB (2^30) and 1 GB
>> (10^9) is about 74 megs, or 18,000 pages.
>>
>
> No, that's not the problem. As I understand it now, the problem is
> that totalram_pages (which the balloon driver bases its calculations
> on) reflects all memory available after all bootmem allocations were
> done (i.e. includes neither the static kernel image nor any memory
> allocated before or from the bootmem allocator).
>
>
>> I guess that is a weakness of PoD in general: we can't control the guest
>> balloon driver, but we rely on it to have the same model of how to
>> translate "target" into # pages in the balloon as the PoD code.
>>
>
> I think this isn't a weakness of PoD, but a design issue in the balloon
> driver's xenstore interface: While a target value shown in or obtained
> from the /proc and /sys interfaces naturally can be based on (and
> reflect) any internal kernel state, the xenstore interface should only
> use numbers in terms of full memory amount given to the guest.
> Hence a target value read from the memory/target node should be
> adjusted before put in relation to totalram_pages. And I think this
> is a general misconception in the current implementation (i.e. it
> should be corrected not only for the HVM case, but for the pv one
> as well).
>
> The bad aspect of this is that it will require a fixed balloon driver
> in any HVM guest that has maxmem>mem when the underlying Xen
> gets updated to a version that supports PoD. I cannot, however,
> see an OS and OS-version independent alternative (i.e. something
> to be done in the PoD code or the tools).
>
> Jan
>
>
* Re: Re: PoD issue
2010-02-04 8:17 ` Jan Beulich
@ 2010-02-04 19:12 ` George Dunlap
2010-02-19 0:03 ` Keith Coleman
0 siblings, 1 reply; 13+ messages in thread
From: George Dunlap @ 2010-02-04 19:12 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel@lists.xensource.com
Yeah, the OSS tree doesn't get the kind of regression testing it
really needs at the moment. I was using the OSS balloon drivers when
I implemented and submitted the PoD code last year. I didn't have any
trouble then, and I was definitely using up all of the memory. But I
haven't done any testing on OSS since then, basically.
-George
On Thu, Feb 4, 2010 at 12:17 AM, Jan Beulich <JBeulich@novell.com> wrote:
> It was in the balloon driver's interaction with xenstore - see 2.6.18 c/s
> 989.
>
> I have to admit that I cannot see how this issue could slip attention
> when the PoD code was introduced - any guest with PoD in use and
> an unfixed balloon driver is set to crash sooner or later (implying the
> unfortunate effect of requiring an update of the pv drivers in HVM
> guests when upgrading Xen from a PoD-incapable to a PoD-capable
> version).
>
> Jan
>
>>>> George Dunlap <george.dunlap@eu.citrix.com> 03.02.10 19:42 >>>
> So did you track down where the math error is? Do we have a plan to fix
> this going forward?
> -George
>
> Jan Beulich wrote:
>>>>> George Dunlap 01/29/10 7:30 PM >>>
>>>>>
>>> PoD is not critical to balloon out guest memory. You can boot with mem
>>> == maxmem and then balloon down afterwards just as you could before,
>>> without involving PoD. (Or at least, you should be able to; if you
>>> can't then it's a bug.) It's just that with PoD you can do something
>>> you've always wanted to do but never knew it: boot with 1GiB with the
>>> option of expanding up to 2GiB later. :-)
>>>
>>
>> Oh, no, that's not what I meant. What I really wanted to say is that
>> with PoD, a properly functioning balloon driver in the guest is crucial
>> for it to stay alive long enough.
>>
>>
>>> With the 54 megabyte difference: It's not like a GiB vs GB thing, is
>>> it? (i.e., 2^30 vs 10^9?) The difference between 1GiB (2^30) and 1 GB
>>> (10^9) is about 74 megs, or 18,000 pages.
>>>
>>
>> No, that's not the problem. As I understand it now, the problem is
>> that totalram_pages (which the balloon driver bases its calculations
>> on) reflects all memory available after all bootmem allocations were
>> done (i.e. includes neither the static kernel image nor any memory
>> allocated before or from the bootmem allocator).
>>
>>
>>> I guess that is a weakness of PoD in general: we can't control the guest
>>> balloon driver, but we rely on it to have the same model of how to
>>> translate "target" into # pages in the balloon as the PoD code.
>>>
>>
>> I think this isn't a weakness of PoD, but a design issue in the balloon
>> driver's xenstore interface: While a target value shown in or obtained
>> from the /proc and /sys interfaces naturally can be based on (and
>> reflect) any internal kernel state, the xenstore interface should only
>> use numbers in terms of full memory amount given to the guest.
>> Hence a target value read from the memory/target node should be
>> adjusted before put in relation to totalram_pages. And I think this
>> is a general misconception in the current implementation (i.e. it
>> should be corrected not only for the HVM case, but for the pv one
>> as well).
>>
>> The bad aspect of this is that it will require a fixed balloon driver
>> in any HVM guest that has maxmem>mem when the underlying Xen
>> gets updated to a version that supports PoD. I cannot, however,
>> see an OS and OS-version independent alternative (i.e. something
>> to be done in the PoD code or the tools).
>>
>> Jan
>>
>>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
>
* Re: Re: PoD issue
2010-02-04 19:12 ` George Dunlap
@ 2010-02-19 0:03 ` Keith Coleman
2010-02-19 6:53 ` Ian Pratt
2010-02-19 8:19 ` Jan Beulich
0 siblings, 2 replies; 13+ messages in thread
From: Keith Coleman @ 2010-02-19 0:03 UTC (permalink / raw)
To: George Dunlap; +Cc: xen-devel@lists.xensource.com, Keir Fraser, Jan Beulich
On Thu, Feb 4, 2010 at 2:12 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> Yeah, the OSS tree doesn't get the kind of regression testing it
> really needs at the moment. I was using the OSS balloon drivers when
> I implemented and submitted the PoD code last year. I didn't have any
> trouble then, and I was definitely using up all of the memory. But I
> haven't done any testing on OSS since then, basically.
>
Is it expected that booting HVM guests with maxmem > memory is
unstable? In testing 3.4.3-rc2 (kernel 2.6.18 c/s 993) I can easily
crash the guest and occasionally the entire server.
Keith Coleman
* RE: Re: PoD issue
2010-02-19 0:03 ` Keith Coleman
@ 2010-02-19 6:53 ` Ian Pratt
2010-02-19 21:28 ` Keith Coleman
2010-02-19 8:19 ` Jan Beulich
1 sibling, 1 reply; 13+ messages in thread
From: Ian Pratt @ 2010-02-19 6:53 UTC (permalink / raw)
To: Keith Coleman, George Dunlap
Cc: Jan Beulich, Keir Fraser, xen-devel@lists.xensource.com
> On Thu, Feb 4, 2010 at 2:12 PM, George Dunlap
> <George.Dunlap@eu.citrix.com> wrote:
> > Yeah, the OSS tree doesn't get the kind of regression testing it
> > really needs at the moment. I was using the OSS balloon drivers when
> > I implemented and submitted the PoD code last year. I didn't have any
> > trouble then, and I was definitely using up all of the memory. But I
> > haven't done any testing on OSS since then, basically.
> >
>
> Is it expected that booting HVM guests with maxmem > memory is
> unstable? In testing 3.4.3-rc2 (kernel 2.6.18 c/s 993) I can easily
> crash the guest and occasionally the entire server.
Obviously the platform should never crash, and that's very concerning.
Are you running a balloon driver in the guest? It's essential that you do, because it needs to get in fairly early in the guest boot and allocate the difference between maxmem and target memory. The populate-on-demand code exists just to cope with things like the memory scrubber running ahead of the balloon driver. If you're not running a balloon driver the guest is doomed to crash as soon as it tries using more than target memory.
All of this requires coordination between the tool stack, PoD code, and PV drivers so that sufficient memory gets ballooned out. I expect the combination that has had most testing is the XCP toolstack and Citrix PV windows drivers.
Ian
* Re: Re: PoD issue
2010-02-19 6:53 ` Ian Pratt
@ 2010-02-19 21:28 ` Keith Coleman
0 siblings, 0 replies; 13+ messages in thread
From: Keith Coleman @ 2010-02-19 21:28 UTC (permalink / raw)
To: Ian Pratt
Cc: George Dunlap, xen-devel@lists.xensource.com, Keir Fraser,
Jan Beulich
On Fri, Feb 19, 2010 at 1:53 AM, Ian Pratt <Ian.Pratt@eu.citrix.com> wrote:
>> On Thu, Feb 4, 2010 at 2:12 PM, George Dunlap
>> <George.Dunlap@eu.citrix.com> wrote:
>> > Yeah, the OSS tree doesn't get the kind of regression testing it
>> > really needs at the moment. I was using the OSS balloon drivers when
>> > I implemented and submitted the PoD code last year. I didn't have any
>> > trouble then, and I was definitely using up all of the memory. But I
>> > haven't done any testing on OSS since then, basically.
>> >
>>
>> Is it expected that booting HVM guests with maxmem > memory is
>> unstable? In testing 3.4.3-rc2 (kernel 2.6.18 c/s 993) I can easily
>> crash the guest and occasionally the entire server.
>
> Obviously the platform should never crash, and that's very concerning.
>
> Are you running a balloon driver in the guest? It's essential that you do, because it needs to get in fairly early in the guest boot and allocate the difference between maxmem and target memory. The populate-on-demand code exists just to cope with things like the memory scrubber running ahead of the balloon driver. If you're not running a balloon driver the guest is doomed to crash as soon as it tries using more than target memory.
>
> All of this requires coordination between the tool stack, PoD code, and PV drivers so that sufficient memory gets ballooned out. I expect the combination that has had most testing is the XCP toolstack and Citrix PV windows drivers.
>
Initially I was using the XCP 0.1.1 WinPV drivers (Windows Server 2003
SP2), and the guest crashed when I tried to install software via the
emulated CD-ROM. Nothing about the crash was reported in the qemu log
file, and xend.log wasn't very helpful either, but here's the relevant
portion:
[2010-02-17 20:42:49 4253] DEBUG (DevController:139) Waiting for devices vtpm.
[2010-02-17 20:42:49 4253] INFO (XendDomain:1182) Domain win2 (30) unpaused.
[2010-02-17 20:48:05 4253] WARNING (XendDomainInfo:1888) Domain has
crashed: name=win2 id=30.
[2010-02-17 20:48:06 4253] DEBUG (XendDomainInfo:2734)
XendDomainInfo.destroy: domid=30
[2010-02-17 20:48:06 4253] DEBUG (XendDomainInfo:2209) Destroying device model
I unsuccessfully attempted the install several more times, then tried
copying files from the emulated CD, which also crashed the guest each
time. I wasn't even thinking about the fact that I had set maxmem/PoD,
so I blamed the XCP WinPV drivers and switched to GPLPV (0.10.0.138).
Same crashes with GPLPV. At this point I hadn't checked 'xm dmesg',
which was the only place the PoD/p2m error is reported, so I changed
to pure HVM mode and tried to copy the files from the emulated CD.
That's when the real trouble started.
The RDP and VNC connections to the guest froze, as did the SSH session
to the dom0. This server was also hosting 7 Linux PV guests. I could
ping the guests and partially load some of their websites, but couldn't
log in via SSH. I suspected that the HDDs were overloaded, causing disk
I/O to block the guests. I was on site, so I went to check the server
and was shocked to find no disk activity. The monitor output was blank
and I couldn't wake it up. The USB keyboard may have failed to
enumerate, because I couldn't even toggle Num Lock, etc., after several
reconnections.
I power-cycled the host and checked the logs, but there was no evidence
of a crash other than one of the software RAID devices being unclean
on startup. Perhaps there was interesting data logged to 'xm dmesg' or
waiting to be written to disk at the time of the crash. I'm afraid
this server/motherboard is incapable of logging data to the serial
port; I've attempted to do so several times, both before and after
this crash.
Of course, the simple fix is to remove maxmem from the domU config file
for the time being. But eventually people will use PoD on production
systems, and relying on the guest to have a solid balloon driver is
unacceptable: a guest could accidentally (or otherwise) remove the PV
drivers and bring down an entire host.
When I can free up a server with serial logging for testing I will try
to reproduce this crash.
Keith Coleman
* Re: Re: PoD issue
2010-02-19 0:03 ` Keith Coleman
2010-02-19 6:53 ` Ian Pratt
@ 2010-02-19 8:19 ` Jan Beulich
2010-06-04 15:03 ` Pasi Kärkkäinen
1 sibling, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2010-02-19 8:19 UTC (permalink / raw)
To: Keith Coleman; +Cc: George Dunlap, xen-devel@lists.xensource.com, Keir Fraser
>>> Keith Coleman <list.keith@scaltro.com> 19.02.10 01:03 >>>
>On Thu, Feb 4, 2010 at 2:12 PM, George Dunlap
><George.Dunlap@eu.citrix.com> wrote:
>> Yeah, the OSS tree doesn't get the kind of regression testing it
>> really needs at the moment. I was using the OSS balloon drivers when
>> I implemented and submitted the PoD code last year. I didn't have any
>> trouble then, and I was definitely using up all of the memory. But I
>> haven't done any testing on OSS since then, basically.
>>
>
>Is it expected that booting HVM guests with maxmem > memory is
>unstable? In testing 3.4.3-rc2 (kernel 2.6.18 c/s 993) I can easily
>crash the guest and occasionally the entire server.
Crashing the guest is expected if the guest doesn't have a fixed
balloon driver (i.e. the mentioned c/s would need to be in the
sources the pv drivers for the guest were built from).
Crashing the host is certainly unacceptable - please provide logs
thereof.
Jan
* Re: Re: PoD issue
2010-02-19 8:19 ` Jan Beulich
@ 2010-06-04 15:03 ` Pasi Kärkkäinen
0 siblings, 0 replies; 13+ messages in thread
From: Pasi Kärkkäinen @ 2010-06-04 15:03 UTC (permalink / raw)
To: Jan Beulich
Cc: George Dunlap, xen-devel@lists.xensource.com, Keir Fraser,
Keith Coleman
On Fri, Feb 19, 2010 at 08:19:15AM +0000, Jan Beulich wrote:
> >>> Keith Coleman <list.keith@scaltro.com> 19.02.10 01:03 >>>
> >On Thu, Feb 4, 2010 at 2:12 PM, George Dunlap
> ><George.Dunlap@eu.citrix.com> wrote:
> >> Yeah, the OSS tree doesn't get the kind of regression testing it
> >> really needs at the moment. I was using the OSS balloon drivers when
> >> I implemented and submitted the PoD code last year. I didn't have any
> >> trouble then, and I was definitely using up all of the memory. But I
> >> haven't done any testing on OSS since then, basically.
> >>
> >
> >Is it expected that booting HVM guests with maxmem > memory is
> >unstable? In testing 3.4.3-rc2 (kernel 2.6.18 c/s 993) I can easily
> >crash the guest and occasionally the entire server.
>
> Crashing the guest is expected if the guest doesn't have a fixed
> balloon driver (i.e. the mentioned c/s would need to be in the
> sources the pv drivers for the guest were built from).
>
> Crashing the host is certainly unacceptable - please provide logs
> thereof.
>
Was this resolved? Someone was complaining recently that maxmem != memory
crashes his Xen host.
-- Pasi
* PoD issue
@ 2010-01-29 15:27 Jan Beulich
2010-01-29 16:01 ` George Dunlap
0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2010-01-29 15:27 UTC (permalink / raw)
To: George Dunlap; +Cc: xen-devel
[-- Attachment #1: Type: text/plain, Size: 1448 bytes --]
George,
before diving deeply into the PoD code, I hope you have some idea that
might ease the debugging that's apparently going to be needed.
Following the comment immediately before p2m_pod_set_mem_target(),
there's an apparent inconsistency with the accounting: While the guest
in question properly balloons down to its intended setting (1G, with a
maxmem setting of 2G), the combination of the equations
d->arch.p2m->pod.entry_count == B - P
d->tot_pages == P + d->arch.p2m->pod.count
doesn't hold (provided I interpreted the meaning of B correctly - I
took this from the guest balloon driver's "Current allocation" report,
converted to pages); there's a difference of over 13000 pages.
Obviously, as soon as the guest uses up enough of its memory, it
will get crashed by the PoD code.
In two runs I did, the difference (and hence the number of entries
reported in the eventual crash message) was identical, implying to
me that this is not a simple race, but rather a systematical problem.
Even on the initial dump taken (when the guest was sitting at the
boot manager screen), there already appears to be a difference of
800 pages (it's my understanding that at this point the difference
between entries and cache should equal the difference between
maxmem and mem).
Does this ring any bells? Any hints how to debug this? In any case
I'm attaching the full log in case you want to look at it.
Jan
[-- Attachment #2: xen.log.1 --]
[-- Type: application/octet-stream, Size: 68236 bytes --]
(XEN) Xen version 4.0.0-rc3-pre (jbeulich@dus.novell.com) (gcc version 4.1.2 20070115 (SUSE Linux)) Wed Jan 27 15:50:45 CET 2010
(XEN) Latest ChangeSet: 20858-01
(XEN) Command line: console=vga,com1 com1=115200 conswitch=qx vga=mode-0x31a dom0_mem=-256M loglvl=all guest_loglvl=all cpufreq=xen:threshold=20 noreboot
(XEN) Video information:
(XEN) VGA is graphics mode 1280x1024, 16 bpp
(XEN) VBE/DDC methods: V2; EDID transfer time: 2 seconds
(XEN) Disc information:
(XEN) Found 2 MBR signatures
(XEN) Found 2 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN) 0000000000000000 - 000000000009d800 (usable)
(XEN) 000000000009d800 - 00000000000a0000 (reserved)
(XEN) 00000000000ce000 - 0000000000100000 (reserved)
(XEN) 0000000000100000 - 00000000cfeb0000 (usable)
(XEN) 00000000cfeb0000 - 00000000cfec5000 (ACPI data)
(XEN) 00000000cfec5000 - 00000000cfed1000 (ACPI NVS)
(XEN) 00000000cfed1000 - 00000000cff7f000 (reserved)
(XEN) 00000000cff80000 - 00000000d0000000 (reserved)
(XEN) 00000000fec00000 - 00000000fec03000 (reserved)
(XEN) 00000000fee00000 - 00000000fee01000 (reserved)
(XEN) 00000000fff80000 - 0000000100000000 (reserved)
(XEN) 0000000100000000 - 0000000130000000 (usable)
(XEN) ACPI: RSDP 000F7FE0, 0024 (r2 PTLTD )
(XEN) ACPI: XSDT CFEBE59B, 008C (r1 BRCM Anaheim 6040000 PTL 2000001)
(XEN) ACPI: FACP CFEBE69B, 00F4 (r3 BRCM EXPLOSN 6040000 MSFT 2000001)
(XEN) ACPI Warning (tbfadt-0444): Optional field "Pm2ControlBlock" has zero address or length: 0000000000000000/C [20070126]
(XEN) ACPI: DSDT CFEBE78F, 4777 (r2 AMD Anaheim 6040000 MSFT 2000002)
(XEN) ACPI: FACS CFED0FC0, 0040
(XEN) ACPI: TCPA CFEC2F06, 0032 (r1 BRCM Anaheim 6040000 PTL 20000001)
(XEN) ACPI: EINJ CFEC2F38, 0210 (r1 PTL WHEAPTL 6040000 PTL 1)
(XEN) ACPI: HEST CFEC3148, 03D0 (r1 PTL WHEAPTL 6040000 PTL 1)
(XEN) ACPI: BERT CFEC3518, 0030 (r1 PTL WHEAPTL 6040000 PTL 1)
(XEN) ACPI: SSDT CFEC3548, 00E1 (r1 wheaos wheaosc 6040000 INTL 20050624)
(XEN) ACPI: ERST CFEC3629, 02B0 (r1 PTL WHEAPTL 6040000 PTL 1)
(XEN) ACPI: SRAT CFEC38D9, 0150 (r1 AMD HAMMER 6040000 AMD 1)
(XEN) ACPI: SSDT CFEC3A29, 143C (r1 AMD POWERNOW 6040000 AMD 1)
(XEN) ACPI: HPET CFEC4E65, 0038 (r1 BRCM Anaheim 6040000 BRCM 2000001)
(XEN) ACPI: SSDT CFEC4E9D, 0049 (r1 BRCM PRT0 6040000 BRCM 2000001)
(XEN) ACPI: SPCR CFEC4EE6, 0050 (r1 PTLTD $UCRTBL$ 6040000 PTL 1)
(XEN) ACPI: APIC CFEC4F36, 00CA (r1 BRCM Anaheim 6040000 PTL 2000001)
(XEN) System RAM: 4081MB (4179812kB)
(XEN) SRAT: PXM 0 -> APIC 0 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 1 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 2 -> Node 0
(XEN) SRAT: PXM 0 -> APIC 3 -> Node 0
(XEN) SRAT: PXM 1 -> APIC 4 -> Node 1
(XEN) SRAT: PXM 1 -> APIC 5 -> Node 1
(XEN) SRAT: PXM 1 -> APIC 6 -> Node 1
(XEN) SRAT: PXM 1 -> APIC 7 -> Node 1
(XEN) SRAT: Node 0 PXM 0 0-a0000
(XEN) SRAT: Node 0 PXM 0 100000-80000000
(XEN) SRAT: Node 1 PXM 1 80000000-d0000000
(XEN) SRAT: Node 1 PXM 1 100000000-130000000
(XEN) NUMA: Allocated memnodemap from 12fdfd000 - 12fdff000
(XEN) NUMA: Using 8 for the hash shift.
(XEN) Domain heap initialised DMA width 29 bits
(XEN) vesafb: framebuffer at 0xd0000000, mapped to 0xffff82c000000000, using 4096k, total 32768k
(XEN) vesafb: mode is 1280x1024x16, linelength=2560, font 8x16
(XEN) vesafb: Truecolor: size=0:5:6:5, shift=0:11:5:0
(XEN) found SMP MP-table at 000f8010
(XEN) DMI present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x508
(XEN) ACPI: ACPI SLEEP INFO: pm1x_cnt[544,504], pm1x_evt[500,540]
(XEN) ACPI: wakeup_vec[cfed0fcc], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:2 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:2 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:2 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:2 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:2 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:2 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] enabled)
(XEN) Processor #6 0:2 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] enabled)
(XEN) Processor #7 0:2 APIC version 16
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 8, version 17, address 0xfec00000, GSI 0-15
(XEN) ACPI: IOAPIC (id[0x09] address[0xfec01000] gsi_base[16])
(XEN) IOAPIC[1]: apic_id 9, version 17, address 0xfec01000, GSI 16-31
(XEN) ACPI: IOAPIC (id[0x0a] address[0xfec02000] gsi_base[32])
(XEN) IOAPIC[2]: apic_id 10, version 17, address 0xfec02000, GSI 32-47
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) Enabling APIC mode: Flat. Using 3 I/O APICs
(XEN) ACPI: HPET id: 0x1166a201 base: 0xfed00000
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Initializing CPU#0
(XEN) Detected 1995.080 MHz processor.
(XEN) Initing memory sharing.
(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
(XEN) CPU: L2 Cache: 512K (64 bytes/line)
(XEN) CPU 0(4) -> Core 0
(XEN) HVM: ASIDs enabled.
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging detected.
(XEN) CPU0: AMD Family10h machine check reporting enabled
(XEN) AMD-Vi: IOMMU not found!
(XEN) I/O virtualisation disabled
(XEN) CPU0: AMD Quad-Core AMD Opteron(tm) Processor 2350 stepping 03
(XEN) Booting processor 1/1 eip 8c000
(XEN) Initializing CPU#1
(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
(XEN) CPU: L2 Cache: 512K (64 bytes/line)
(XEN) CPU 1(4) -> Core 1
(XEN) HVM: ASIDs enabled.
(XEN) CPU1: AMD Family10h machine check reporting enabled
(XEN) CPU1: AMD Quad-Core AMD Opteron(tm) Processor 2350 stepping 03
(XEN) Booting processor 2/2 eip 8c000
(XEN) Initializing CPU#2
(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
(XEN) CPU: L2 Cache: 512K (64 bytes/line)
(XEN) CPU 2(4) -> Core 2
(XEN) HVM: ASIDs enabled.
(XEN) CPU2: AMD Family10h machine check reporting enabled
(XEN) CPU2: AMD Quad-Core AMD Opteron(tm) Processor 2350 stepping 03
(XEN) Booting processor 3/3 eip 8c000
(XEN) Initializing CPU#3
(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
(XEN) CPU: L2 Cache: 512K (64 bytes/line)
(XEN) CPU 3(4) -> Core 3
(XEN) HVM: ASIDs enabled.
(XEN) CPU3: AMD Family10h machine check reporting enabled
(XEN) CPU3: AMD Quad-Core AMD Opteron(tm) Processor 2350 stepping 03
(XEN) Booting processor 4/4 eip 8c000
(XEN) Initializing CPU#4
(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
(XEN) CPU: L2 Cache: 512K (64 bytes/line)
(XEN) CPU 4(4) -> Core 0
(XEN) HVM: ASIDs enabled.
(XEN) CPU4: AMD Family10h machine check reporting enabled
(XEN) CPU4: AMD Quad-Core AMD Opteron(tm) Processor 2350 stepping 03
(XEN) Booting processor 5/5 eip 8c000
(XEN) Initializing CPU#5
(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
(XEN) CPU: L2 Cache: 512K (64 bytes/line)
(XEN) CPU 5(4) -> Core 1
(XEN) HVM: ASIDs enabled.
(XEN) CPU5: AMD Family10h machine check reporting enabled
(XEN) CPU5: AMD Quad-Core AMD Opteron(tm) Processor 2350 stepping 03
(XEN) Booting processor 6/6 eip 8c000
(XEN) Initializing CPU#6
(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
(XEN) CPU: L2 Cache: 512K (64 bytes/line)
(XEN) CPU 6(4) -> Core 2
(XEN) HVM: ASIDs enabled.
(XEN) CPU6: AMD Family10h machine check reporting enabled
(XEN) CPU6: AMD Quad-Core AMD Opteron(tm) Processor 2350 stepping 03
(XEN) Booting processor 7/7 eip 8c000
(XEN) Initializing CPU#7
(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
(XEN) CPU: L2 Cache: 512K (64 bytes/line)
(XEN) CPU 7(4) -> Core 3
(XEN) HVM: ASIDs enabled.
(XEN) CPU7: AMD Family10h machine check reporting enabled
(XEN) CPU7: AMD Quad-Core AMD Opteron(tm) Processor 2350 stepping 03
(XEN) Total of 8 processors activated.
(XEN) ENABLING IO-APIC IRQs
(XEN) -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC is reliable, synchronization unnecessary
(XEN) Platform timer is 14.318MHz HPET
(XEN) CPU 0 APIC 0 -> Node 0
(XEN) CPU 1 APIC 1 -> Node 0
(XEN) microcode.c:73:d32767 microcode: CPU1 resumed
(XEN) CPU 2 APIC 2 -> Node 0
(XEN) microcode.c:73:d32767 microcode: CPU2 resumed
(XEN) CPU 3 APIC 3 -> Node 0
(XEN) microcode.c:73:d32767 microcode: CPU3 resumed
(XEN) microcode.c:73:d32767 microcode: CPU4 resumed
(XEN) CPU 4 APIC 4 -> Node 1
(XEN) CPU 5 APIC 5 -> Node 1
(XEN) microcode.c:73:d32767 microcode: CPU5 resumed
(XEN) microcode.c:73:d32767 microcode: CPU6 resumed
(XEN) CPU 6 APIC 6 -> Node 1
(XEN) CPU 7 APIC 7 -> Node 1
(XEN) microcode.c:73:d32767 microcode: CPU7 resumed
(XEN) Brought up 8 CPUs
(XEN) HPET: 3 timers in total, 0 timers will be used for broadcast
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Xen kernel: 64-bit, lsb, compat32
(XEN) Dom0 kernel: 64-bit, lsb, paddr 0x2000 -> 0x5a0000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN) Dom0 alloc.: 000000007e000000->000000007f000000 (965213 pages to be allocated)
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN) Loaded kernel: ffffffff80002000->ffffffff805a0000
(XEN) Init. ramdisk: ffffffff805a0000->ffffffff808853b3
(XEN) Phys-Mach map: ffffea0000000000->ffffea00007652e8
(XEN) Start info: ffffffff80886000->ffffffff808864b4
(XEN) Page tables: ffffffff80887000->ffffffff80890000
(XEN) Boot stack: ffffffff80890000->ffffffff80891000
(XEN) TOTAL: ffffffff80000000->ffffffff80c00000
(XEN) ENTRY ADDRESS: ffffffff80002000
(XEN) Dom0 has maximum 8 VCPUs
(XEN) Scrubbing Free RAM: ..done.
(XEN) Xen trace buffers: disabled
(XEN) tmem: initialized comp=0 global-lock=0
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> Xen (type 'CTRL-q' three times to switch input to DOM0)
(XEN) Freed 156kB init memory.
Linux version 2.6.32.6-2010-01-27-xen0 (jbeulich@dus-dev-sles10b) (gcc version 4.1.2 20070115 (SUSE Linux)) #1 SMP Wed Jan 27 16:11:22 CET 2010
Command line: root=/dev/sda2 ro noresume
KERNEL supported cpus:
Intel GenuineIntel
AMD AuthenticAMD
Centaur CentaurHauls
Xen-provided machine memory map:
BIOS: 0000000000000000 - 000000000009d800 (usable)
BIOS: 000000000009d800 - 00000000000a0000 (reserved)
BIOS: 00000000000ce000 - 0000000000100000 (reserved)
BIOS: 0000000000100000 - 00000000cfeb0000 (usable)
BIOS: 00000000cfeb0000 - 00000000cfec5000 (ACPI data)
BIOS: 00000000cfec5000 - 00000000cfed1000 (ACPI NVS)
BIOS: 00000000cfed1000 - 00000000cff7f000 (reserved)
BIOS: 00000000cff80000 - 00000000d0000000 (reserved)
BIOS: 00000000fec00000 - 00000000fec03000 (reserved)
BIOS: 00000000fee00000 - 00000000fee01000 (reserved)
BIOS: 00000000fff80000 - 0000000100000000 (reserved)
BIOS: 0000000100000000 - 0000000130000000 (usable)
Xen-provided physical RAM map:
Xen: 0000000000000000 - 00000000ed25d000 (usable)
DMI present.
last_pfn = 0xed25d max_arch_pfn = 0x80000000
init_memory_mapping: 0000000000000000-00000000ed25d000
RAMDISK: 005a0000 - 008853b3
ACPI: RSDP 00000000000f7fe0 00024 (v02 PTLTD )
ACPI: XSDT 00000000cfebe59b 0008C (v01 BRCM Anaheim 06040000 PTL 02000001)
ACPI: FACP 00000000cfebe69b 000F4 (v03 BRCM EXPLOSN 06040000 MSFT 02000001)
ACPI Warning: Optional field Pm2ControlBlock has zero address or length: 0000000000000000/C (20090903/tbfadt-557)
ACPI: DSDT 00000000cfebe78f 04777 (v02 AMD Anaheim 06040000 MSFT 02000002)
ACPI: FACS 00000000cfed0fc0 00040
ACPI: TCPA 00000000cfec2f06 00032 (v01 BRCM Anaheim 06040000 PTL 20000001)
ACPI: EINJ 00000000cfec2f38 00210 (v01 PTL WHEAPTL 06040000 PTL 00000001)
ACPI: HEST 00000000cfec3148 003D0 (v01 PTL WHEAPTL 06040000 PTL 00000001)
ACPI: BERT 00000000cfec3518 00030 (v01 PTL WHEAPTL 06040000 PTL 00000001)
ACPI: SSDT 00000000cfec3548 000E1 (v01 wheaos wheaosc 06040000 INTL 20050624)
ACPI: ERST 00000000cfec3629 002B0 (v01 PTL WHEAPTL 06040000 PTL 00000001)
ACPI: SRAT 00000000cfec38d9 00150 (v01 AMD HAMMER 06040000 AMD 00000001)
ACPI: SSDT 00000000cfec3a29 0143C (v01 AMD POWERNOW 06040000 AMD 00000001)
ACPI: HPET 00000000cfec4e65 00038 (v01 BRCM Anaheim 06040000 BRCM 02000001)
ACPI: SSDT 00000000cfec4e9d 00049 (v01 BRCM PRT0 06040000 BRCM 02000001)
ACPI: SPCR 00000000cfec4ee6 00050 (v01 PTLTD $UCRTBL$ 06040000 PTL 00000001)
ACPI: APIC 00000000cfec4f36 000CA (v01 BRCM Anaheim 06040000 PTL 02000001)
(5 early reservations) ==> bootmem [0000000000 - 00eca5d000]
#0 [00005a0000 - 0000890000] Xen provided ==> [00005a0000 - 0000890000]
#1 [0000002000 - 000057ffb8] TEXT DATA BSS ==> [0000002000 - 000057ffb8]
#2 [0001000000 - 0001769000] INITP2M ==> [0001000000 - 0001769000]
#3 [0000580000 - 00005801b4] BRK ==> [0000580000 - 00005801b4]
#4 [0000890000 - 0000fff000] PGTABLE ==> [0000890000 - 0000fff000]
found SMP MP-table at [ffffffffff5f4010] 000f8010
Zone PFN ranges:
DMA 0x00000000 -> 0x00001000
DMA32 0x00001000 -> 0x00100000
Normal 0x00100000 -> 0x00100000
Movable zone start PFN for each node
early_node_map[2] active PFN ranges
0: 0x00000000 -> 0x000eca5d
0: 0x000ed25d -> 0x000ed25d
ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] enabled)
ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] enabled)
ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
IOAPIC[0]: apic_id 8, version 17, address 0xfec00000, GSI 0-15
ACPI: IOAPIC (id[0x09] address[0xfec01000] gsi_base[16])
IOAPIC[1]: apic_id 9, version 17, address 0xfec01000, GSI 16-31
ACPI: IOAPIC (id[0x0a] address[0xfec02000] gsi_base[32])
IOAPIC[2]: apic_id 10, version 17, address 0xfec02000, GSI 32-47
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Using ACPI (MADT) for SMP configuration information
Allocating PCI resources starting at d0000000 (gap: d0000000:2ec00000)
NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
PERCPU: Embedded 18 pages/cpu @ffff880001008000 s44184 r8192 d21352 u73728
pcpu-alloc: s44184 r8192 d21352 u73728 alloc=18*4096
pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5 [0] 6 [0] 7
Swapping MFNs for PFN 487 and 100f (MFN 7e487 and 7f60d)
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 956028
Kernel command line: root=/dev/sda2 ro noresume
PID hash table entries: 4096 (order: 3, 32768 bytes)
Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
Software IO TLB enabled:
Aperture: 64 megabytes
Address size: 27 bits
Kernel range: ffff8800052d5000 - ffff8800092d5000
PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Memory: 3726984k/3885428k available (2749k kernel code, 8192k absent, 149836k reserved, 1846k data, 348k init)
Hierarchical RCU implementation.
NR_IRQS:4480 nr_irqs:1008
Extended CMOS year: 2000
Xen reported: 1995.080 MHz processor.
Console: colour dummy device 80x25
console [tty0] enabled
console [xvc-1] enabled
Calibrating delay using timer specific routine.. 3991.99 BogoMIPS (lpj=7983980)
Security Framework initialized
Mount-cache hash table entries: 256
mce: CPU supports 6 MCE banks
Freeing SMP alternatives: 20k freed
ACPI: Core revision 20090903
(XEN) io_apic.c:2293:
(XEN) ioapic_guest_write: apic=0, pin=2, irq=0
(XEN) ioapic_guest_write: new_entry=00000900
(XEN) ioapic_guest_write: Attempt to modify IO-APIC pin for in-use IRQ!
(XEN) io_apic.c:2293:
(XEN) ioapic_guest_write: apic=0, pin=4, irq=4
(XEN) ioapic_guest_write: new_entry=00000904
(XEN) ioapic_guest_write: Attempt to modify IO-APIC pin for in-use IRQ!
Brought up 8 CPUs
NET: Registered protocol family 16
TOM: 00000000d0000000 aka 3328M
TOM2: 0000000130000000 aka 4864M
ACPI: bus type pci registered
PCI: Using configuration type 1 for base access
PCI: Using configuration type 1 for extended access
bio: create slab <bio-0> at 0
ACPI: Interpreter enabled
ACPI: (supports S0 S5)
ACPI: Using IOAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (0000:00)
pci 0000:00:01.0: Enabling HT MSI Mapping
pci 0000:00:03.2: PME# supported from D0 D1 D2 D3hot
pci 0000:00:03.2: PME# disabled
pci 0000:00:06.0: PME# supported from D0 D3hot D3cold
pci 0000:00:06.0: PME# disabled
pci 0000:00:07.0: PME# supported from D0 D3hot D3cold
pci 0000:00:07.0: PME# disabled
pci 0000:00:08.0: PME# supported from D0 D3hot D3cold
pci 0000:00:08.0: PME# disabled
pci 0000:00:09.0: PME# supported from D0 D3hot D3cold
pci 0000:00:09.0: PME# disabled
pci 0000:00:0a.0: PME# supported from D0 D3hot D3cold
pci 0000:00:0a.0: PME# disabled
pci 0000:07:00.0: PME# supported from D0 D3hot D3cold
pci 0000:07:00.0: PME# disabled
pci 0000:08:04.0: PME# supported from D3hot D3cold
pci 0000:08:04.0: PME# disabled
pci 0000:08:04.1: PME# supported from D3hot D3cold
pci 0000:08:04.1: PME# disabled
ACPI: PCI Interrupt Link [LNKU] (IRQs *10 11)
ACPI: PCI Interrupt Link [LNKW] (IRQs 10 11) *0, disabled.
ACPI: PCI Interrupt Link [LNKS] (IRQs 5 *7 11)
ACPI: PCI Interrupt Link [LN00] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN01] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN02] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN03] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN04] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN05] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN06] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN07] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN08] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN09] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN0A] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN0B] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN0C] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN0D] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN0E] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN0F] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN10] (IRQs 3 4 5 7 *11 12 14 15)
ACPI: PCI Interrupt Link [LN11] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN12] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN13] (IRQs 3 4 *5 7 11 12 14 15)
ACPI: PCI Interrupt Link [LN14] (IRQs 3 4 5 7 *11 12 14 15)
ACPI: PCI Interrupt Link [LN15] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN16] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN17] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN18] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN19] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN1A] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN1B] (IRQs 3 4 *5 7 11 12 14 15)
ACPI: PCI Interrupt Link [LN1C] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN1D] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN1E] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LN1F] (IRQs 3 4 5 7 11 12 14 15) *0, disabled.
xen_mem: Initialising balloon driver.
vgaarb: device added: PCI:0000:00:04.0,decodes=io+mem,owns=io+mem,locks=none
vgaarb: loaded
PCI: Using ACPI for IRQ routing
Switching to clocksource xen
pnp: PnP ACPI init
ACPI: bus type pnp registered
(XEN) io_apic.c:2293:
(XEN) ioapic_guest_write: apic=0, pin=4, irq=4
(XEN) ioapic_guest_write: new_entry=00000904
(XEN) ioapic_guest_write: Attempt to modify IO-APIC pin for in-use IRQ!
pnp: PnP ACPI: found 16 devices
ACPI: ACPI bus type pnp unregistered
system 00:00: iomem range 0xfed08000-0xfed08007 has been reserved
system 00:01: iomem range 0xe0000000-0xefffffff has been reserved
system 00:0b: ioport range 0x40b-0x40b has been reserved
system 00:0b: ioport range 0x4d0-0x4d1 has been reserved
system 00:0b: ioport range 0x4d6-0x4d6 has been reserved
system 00:0b: ioport range 0x500-0x560 has been reserved
system 00:0b: ioport range 0x558-0x55b has been reserved
system 00:0b: ioport range 0x580-0x58f has been reserved
system 00:0b: ioport range 0x590-0x593 has been reserved
system 00:0b: ioport range 0x600-0x61f has been reserved
system 00:0b: ioport range 0x620-0x623 has been reserved
system 00:0b: ioport range 0x700-0x703 has been reserved
system 00:0b: ioport range 0xc00-0xc01 has been reserved
system 00:0b: ioport range 0xc06-0xc08 has been reserved
system 00:0b: ioport range 0xc14-0xc14 has been reserved
system 00:0b: ioport range 0xc49-0xc4a has been reserved
system 00:0b: ioport range 0xc50-0xc53 has been reserved
system 00:0b: ioport range 0xc6c-0xc6c has been reserved
system 00:0b: ioport range 0xc6f-0xc6f has been reserved
system 00:0b: ioport range 0xcd6-0xcd7 has been reserved
system 00:0b: ioport range 0xcf9-0xcf9 could not be reserved
system 00:0b: ioport range 0xf50-0xf58 has been reserved
pci 0000:01:0d.0: PCI bridge, secondary bus 0000:02
pci 0000:01:0d.0: IO window: disabled
pci 0000:01:0d.0: MEM window: disabled
pci 0000:01:0d.0: PREFETCH window: disabled
pci 0000:00:01.0: PCI bridge, secondary bus 0000:01
pci 0000:00:01.0: IO window: 0x6000-0x6fff
pci 0000:00:01.0: MEM window: 0xd8300000-0xd83fffff
pci 0000:00:01.0: PREFETCH window: 0xd8000000-0xd80fffff
pci 0000:00:06.0: PCI bridge, secondary bus 0000:03
pci 0000:00:06.0: IO window: disabled
pci 0000:00:06.0: MEM window: disabled
pci 0000:00:06.0: PREFETCH window: disabled
pci 0000:00:07.0: PCI bridge, secondary bus 0000:04
pci 0000:00:07.0: IO window: disabled
pci 0000:00:07.0: MEM window: disabled
pci 0000:00:07.0: PREFETCH window: disabled
pci 0000:00:08.0: PCI bridge, secondary bus 0000:05
pci 0000:00:08.0: IO window: disabled
pci 0000:00:08.0: MEM window: disabled
pci 0000:00:08.0: PREFETCH window: disabled
pci 0000:00:09.0: PCI bridge, secondary bus 0000:06
pci 0000:00:09.0: IO window: 0x7000-0x7fff
pci 0000:00:09.0: MEM window: 0xd8200000-0xd82fffff
pci 0000:00:09.0: PREFETCH window: 0xd8100000-0xd81fffff
pci 0000:07:00.0: PCI bridge, secondary bus 0000:08
pci 0000:07:00.0: IO window: disabled
pci 0000:07:00.0: MEM window: 0xd8400000-0xd84fffff
pci 0000:07:00.0: PREFETCH window: disabled
pci 0000:00:0a.0: PCI bridge, secondary bus 0000:07
pci 0000:00:0a.0: IO window: disabled
pci 0000:00:0a.0: MEM window: 0xd8400000-0xd84fffff
pci 0000:00:0a.0: PREFETCH window: disabled
(XEN) allocated vector for irq:32
pci 0000:00:06.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
pci 0000:00:07.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
pci 0000:00:08.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
pci 0000:00:09.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
pci 0000:00:0a.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
NET: Registered protocol family 2
IP route cache hash table entries: 131072 (order: 8, 1048576 bytes)
TCP established hash table entries: 262144 (order: 10, 4194304 bytes)
TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
TCP: Hash tables configured (established 262144 bind 65536)
TCP reno registered
NET: Registered protocol family 1
pci 0000:00:02.0: disabled boot interrupts on device [1166:0205]
Unpacking initramfs...
Freeing initrd memory: 2964k freed
VFS: Disk quotas dquot_6.5.2
Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
msgmni has been set to 1894
io scheduler noop registered
io scheduler cfq registered (default)
(XEN) PCI add device 00:01.0
(XEN) PCI add device 00:06.0
(XEN) PCI add device 00:07.0
(XEN) PCI add device 00:08.0
(XEN) PCI add device 00:09.0
(XEN) PCI add device 00:0a.0
(XEN) PCI add device 01:0d.0
(XEN) PCI add device 07:00.0
vesafb: framebuffer at 0xd0000000, mapped to 0xffffc90000080000, using 5120k, total 32768k
vesafb: mode is 1280x1024x16, linelength=2560, pages=0
vesafb: scrolling: redraw
vesafb: Truecolor: size=0:5:6:5, shift=0:11:5:0
Console: switching to colour frame buffer device 160x64
fb0: VESA VGA frame buffer device
Real Time Clock Driver v1.12b
Xen virtual console successfully installed as xvc0
Event-channel device installed.
PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUE] at 0x60,0x64 irq 1,12
serio: i8042 KBD port at 0x60,0x64 irq 1
serio: i8042 AUX port at 0x60,0x64 irq 12
mice: PS/2 mouse device common for all mice
input: PC Speaker as /class/input/input0
TCP cubic registered
PCI IO multiplexer device installed.
Freeing unused kernel memory: 348k freed
Write protecting the kernel read-only data: 4340k
input: AT Translated Set 2 keyboard as /class/input/input1
input: ImExPS/2 Generic Explorer Mouse as /class/input/input2
SCSI subsystem initialized
ACPI: No dock devices found.
(XEN) PCI add device 01:0e.0
ACPI: PCI Interrupt Link [LNKS] enabled at IRQ 11
sata_svw 0000:01:0e.0: PCI INT A -> Link[LNKS] -> GSI 11 (level, low) -> IRQ 11
scsi0 : sata_svw
scsi1 : sata_svw
scsi2 : sata_svw
scsi3 : sata_svw
ata1: SATA max UDMA/133 mmio m8192@0xd8300000 port 0xd8300000 irq 11
ata2: SATA max UDMA/133 mmio m8192@0xd8300000 port 0xd8300100 irq 11
ata3: SATA max UDMA/133 mmio m8192@0xd8300000 port 0xd8300200 irq 11
ata4: SATA max UDMA/133 mmio m8192@0xd8300000 port 0xd8300300 irq 11
(XEN) PCI add device 01:0e.1
sata_svw 0000:01:0e.1: PCI INT A -> Link[LNKS] -> GSI 11 (level, low) -> IRQ 11
ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata1.00: ATA-7: ST3160815AS, 3.AAC, max UDMA/133
ata1.00: 312581808 sectors, multi 16: LBA48 NCQ (depth 0/32)
ata1.00: configured for UDMA/133
scsi 0:0:0:0: Direct-Access ATA ST3160815AS 3.AA PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 312581808 512-byte logical blocks: (160 GB/149 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 >
sd 0:0:0:0: [sda] Attached SCSI disk
ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata2.00: ATA-6: ST3120026AS, 3.00, max UDMA/133
ata2.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 0/32)
ata2.00: configured for UDMA/133
scsi 1:0:0:0: Direct-Access ATA ST3120026AS 3.00 PQ: 0 ANSI: 5
sd 1:0:0:0: [sdb] 234441648 512-byte logical blocks: (120 GB/111 GiB)
sd 1:0:0:0: [sdb] Write Protect is off
sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1
sd 1:0:0:0: [sdb] Attached SCSI disk
ata3: SATA link down (SStatus 4 SControl 300)
ata4: SATA link down (SStatus 4 SControl 300)
No available Cx info for cpu 0
(XEN) Set CPU acpi_id(0) cpuid(0) Px State info:
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=3221291106
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=0
(XEN) _PSS: state_count=5
(XEN) State0: 2000MHz 25530mW 19us 19us 0x0 0x0
(XEN) State1: 1700MHz 23115mW 19us 19us 0x1 0x1
(XEN) State2: 1400MHz 20815mW 19us 19us 0x2 0x2
(XEN) State3: 1200MHz 19320mW 19us 19us 0x3 0x3
(XEN) State4: 1000MHz 17710mW 19us 19us 0x4 0x4
(XEN) _PSD: num_entries=5 rev=0 domain=0 coord_type=253 num_processors=1
(XEN) _PPC: 0
(XEN) CPU 0 initialization completed
processor LNXCPU:00: registered as cooling_device0
No available Cx info for cpu 1
(XEN) Set CPU acpi_id(1) cpuid(1) Px State info:
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=3221291106
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=0
(XEN) _PSS: state_count=5
(XEN) State0: 2000MHz 25530mW 19us 19us 0x0 0x0
(XEN) State1: 1700MHz 23115mW 19us 19us 0x1 0x1
(XEN) State2: 1400MHz 20815mW 19us 19us 0x2 0x2
(XEN) State3: 1200MHz 19320mW 19us 19us 0x3 0x3
(XEN) State4: 1000MHz 17710mW 19us 19us 0x4 0x4
(XEN) _PSD: num_entries=5 rev=0 domain=1 coord_type=253 num_processors=1
(XEN) _PPC: 0
(XEN) CPU 1 initialization completed
processor LNXCPU:01: registered as cooling_device1
No available Cx info for cpu 2
(XEN) Set CPU acpi_id(2) cpuid(2) Px State info:
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=3221291106
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=0
(XEN) _PSS: state_count=5
(XEN) State0: 2000MHz 25530mW 19us 19us 0x0 0x0
(XEN) State1: 1700MHz 23115mW 19us 19us 0x1 0x1
(XEN) State2: 1400MHz 20815mW 19us 19us 0x2 0x2
(XEN) State3: 1200MHz 19320mW 19us 19us 0x3 0x3
(XEN) State4: 1000MHz 17710mW 19us 19us 0x4 0x4
(XEN) _PSD: num_entries=5 rev=0 domain=2 coord_type=253 num_processors=1
(XEN) _PPC: 0
(XEN) CPU 2 initialization completed
processor LNXCPU:02: registered as cooling_device2
No available Cx info for cpu 3
(XEN) Set CPU acpi_id(3) cpuid(3) Px State info:
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=3221291106
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=0
(XEN) _PSS: state_count=5
(XEN) State0: 2000MHz 25530mW 19us 19us 0x0 0x0
(XEN) State1: 1700MHz 23115mW 19us 19us 0x1 0x1
(XEN) State2: 1400MHz 20815mW 19us 19us 0x2 0x2
(XEN) State3: 1200MHz 19320mW 19us 19us 0x3 0x3
(XEN) State4: 1000MHz 17710mW 19us 19us 0x4 0x4
(XEN) _PSD: num_entries=5 rev=0 domain=3 coord_type=253 num_processors=1
(XEN) _PPC: 0
(XEN) CPU 3 initialization completed
processor LNXCPU:03: registered as cooling_device3
No available Cx info for cpu 4
(XEN) Set CPU acpi_id(4) cpuid(4) Px State info:
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=3221291106
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=0
(XEN) _PSS: state_count=5
(XEN) State0: 2000MHz 25530mW 19us 19us 0x0 0x0
(XEN) State1: 1700MHz 23115mW 19us 19us 0x1 0x1
(XEN) State2: 1400MHz 20815mW 19us 19us 0x2 0x2
(XEN) State3: 1200MHz 19320mW 19us 19us 0x3 0x3
(XEN) State4: 1000MHz 17710mW 19us 19us 0x4 0x4
(XEN) _PSD: num_entries=5 rev=0 domain=4 coord_type=253 num_processors=1
(XEN) _PPC: 0
(XEN) CPU 4 initialization completed
processor LNXCPU:04: registered as cooling_device4
No available Cx info for cpu 5
(XEN) Set CPU acpi_id(5) cpuid(5) Px State info:
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=3221291106
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=0
(XEN) _PSS: state_count=5
(XEN) State0: 2000MHz 25530mW 19us 19us 0x0 0x0
(XEN) State1: 1700MHz 23115mW 19us 19us 0x1 0x1
(XEN) State2: 1400MHz 20815mW 19us 19us 0x2 0x2
(XEN) State3: 1200MHz 19320mW 19us 19us 0x3 0x3
(XEN) State4: 1000MHz 17710mW 19us 19us 0x4 0x4
(XEN) _PSD: num_entries=5 rev=0 domain=5 coord_type=253 num_processors=1
(XEN) _PPC: 0
(XEN) CPU 5 initialization completed
processor LNXCPU:05: registered as cooling_device5
No available Cx info for cpu 6
(XEN) Set CPU acpi_id(6) cpuid(6) Px State info:
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=3221291106
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=0
(XEN) _PSS: state_count=5
(XEN) State0: 2000MHz 25530mW 19us 19us 0x0 0x0
(XEN) State1: 1700MHz 23115mW 19us 19us 0x1 0x1
(XEN) State2: 1400MHz 20815mW 19us 19us 0x2 0x2
(XEN) State3: 1200MHz 19320mW 19us 19us 0x3 0x3
(XEN) State4: 1000MHz 17710mW 19us 19us 0x4 0x4
(XEN) _PSD: num_entries=5 rev=0 domain=6 coord_type=253 num_processors=1
(XEN) _PPC: 0
(XEN) CPU 6 initialization completed
processor LNXCPU:06: registered as cooling_device6
No available Cx info for cpu 7
(XEN) Set CPU acpi_id(7) cpuid(7) Px State info:
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=3221291106
(XEN) _PCT: descriptor=130, length=12, space_id=127, bit_width=64, bit_offset=0, reserved=0, address=0
(XEN) _PSS: state_count=5
(XEN) State0: 2000MHz 25530mW 19us 19us 0x0 0x0
(XEN) State1: 1700MHz 23115mW 19us 19us 0x1 0x1
(XEN) State2: 1400MHz 20815mW 19us 19us 0x2 0x2
(XEN) State3: 1200MHz 19320mW 19us 19us 0x3 0x3
(XEN) State4: 1000MHz 17710mW 19us 19us 0x4 0x4
(XEN) _PSD: num_entries=5 rev=0 domain=7 coord_type=253 num_processors=1
(XEN) _PPC: 0
(XEN) CPU 7 initialization completed
processor LNXCPU:07: registered as cooling_device7
BIOS EDD facility v0.16 2004-Jun-25, 2 devices found
(XEN) PCI add device 00:02.1
scsi4 : pata_serverworks
scsi5 : pata_serverworks
ata5: PATA max UDMA/66 cmd 0x1f0 ctl 0x3f6 bmdma 0x4800 irq 14
ata6: PATA max UDMA/66 cmd 0x170 ctl 0x376 bmdma 0x4808 irq 15
ata5.00: ATAPI: TEAC DV-516G, F4S7, max UDMA/33
ata5.00: configured for UDMA/33
scsi 4:0:0:0: CD-ROM TEAC DV-516G F4S7 PQ: 0 ANSI: 5
sr0: scsi3-mmc drive: 4x/48x cd/rw xa/form2 cdda tray
Uniform CD-ROM driver Revision: 3.20
REISERFS (device sda2): found reiserfs format "3.6" with standard journal
REISERFS (device sda2): using ordered data mode
reiserfs: using flush barriers
REISERFS (device sda2): journal params: device sda2, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
REISERFS (device sda2): checking transaction log (sda2)
REISERFS (device sda2): Using r5 hash to sort names
(XEN) tmem: initializing tmem capability for domid=0...ok
(XEN) tmem: allocating ephemeral-private tmem pool for domid=0...pool_id=0
reiserfs: enabling write barrier flush mode
Adding 8393952k swap on /dev/sda3. Priority:-1 extents:1 across:8393952k
(XEN) tmem: allocating persistent-private tmem pool for domid=0...pool_id=1
Floppy drive(s): fd0 is 1.44M
floppy0: Unable to grab DMA2 for the floppy driver
FDC 0 is a National Semiconductor PC87306
(XEN) PCI add device 00:02.0
piix4_smbus 0000:00:02.0: SMBus Host Controller at 0x580, revision 0
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
Fusion MPT base driver 4.22.00.00
Copyright (c) 1999-2008 LSI Corporation
00:0d: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
(XEN) PCI add device 00:01.0
(XEN) PCI add device 01:0d.0
(XEN) PCI add device 07:00.0
(XEN) PCI add device 06:00.0
(XEN) allocated vector for irq:35
shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
(XEN) PCI add device 00:03.2
tpm_inf_pnp 00:0f: Found TPM with ID IFX0102
(XEN) PCI add device 00:03.0
tpm_inf_pnp 00:0f: TPM found: config base 0x4e, data base 0x4700, chip version 0x000b, vendor id 0x15d1 (Infineon), product id 0x000b (SLB 9635 TT 1.2)
(XEN) PCI add device 00:03.1
Fusion MPT SAS Host driver 4.22.00.00
mptsas 0000:06:00.0: PCI INT A -> GSI 35 (level, low) -> IRQ 35
mptbase: ioc0: 32 BIT PCI BUS DMA ADDRESSING SUPPORTED, total memory = 3880552 kB
mptbase: ioc0: Initiating bringup
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
ACPI: PCI Interrupt Link [LNKU] enabled at IRQ 10
ehci_hcd 0000:00:03.2: PCI INT A -> Link[LNKU] -> GSI 10 (level, low) -> IRQ 10
ehci_hcd 0000:00:03.2: EHCI Host Controller
ehci_hcd 0000:00:03.2: new USB bus registered, assigned bus number 1
ehci_hcd 0000:00:03.2: irq 10, io mem 0xd8512000
ehci_hcd 0000:00:03.2: USB 2.0 started, EHCI 1.00
usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
usb usb1: Product: EHCI Host Controller
usb usb1: Manufacturer: Linux 2.6.32.6-2010-01-27-xen0 ehci_hcd
usb usb1: SerialNumber: 0000:00:03.2
usb usb1: configuration #1 chosen from 1 choice
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 4 ports detected
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
ohci_hcd 0000:00:03.0: PCI INT A -> Link[LNKU] -> GSI 10 (level, low) -> IRQ 10
ohci_hcd 0000:00:03.0: OHCI Host Controller
ohci_hcd 0000:00:03.0: new USB bus registered, assigned bus number 2
ohci_hcd 0000:00:03.0: irq 10, io mem 0xd8510000
usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
usb usb2: Product: OHCI Host Controller
usb usb2: Manufacturer: Linux 2.6.32.6-2010-01-27-xen0 ohci_hcd
usb usb2: SerialNumber: 0000:00:03.0
usb usb2: configuration #1 chosen from 1 choice
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 2 ports detected
ohci_hcd 0000:00:03.1: PCI INT A -> Link[LNKU] -> GSI 10 (level, low) -> IRQ 10
ohci_hcd 0000:00:03.1: OHCI Host Controller
ohci_hcd 0000:00:03.1: new USB bus registered, assigned bus number 3
ohci_hcd 0000:00:03.1: irq 10, io mem 0xd8511000
usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
usb usb3: Product: OHCI Host Controller
usb usb3: Manufacturer: Linux 2.6.32.6-2010-01-27-xen0 ohci_hcd
usb usb3: SerialNumber: 0000:00:03.1
usb usb3: configuration #1 chosen from 1 choice
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 2 ports detected
ioc0: LSISAS1064E B1: Capabilities={Initiator}
mptbase: ioc0: PCI-MSI enabled
(XEN) PCI add device 08:04.0
tg3.c:v3.106 (January 12, 2010)
(XEN) allocated vector for irq:36
tg3 0000:08:04.0: PCI INT A -> GSI 36 (level, low) -> IRQ 36
eth0: Tigon3 [partno(BCM95715) rev 9001] (PCIX:133MHz:64-bit) MAC address 00:e0:81:80:cc:4a
eth0: attached PHY is 5714 (10/100/1000Base-T Ethernet) (WireSpeed[1])
eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
eth0: dma_rwctrl[76148000] dma_mask[40-bit]
(XEN) PCI add device 08:04.1
tg3 0000:08:04.1: PCI INT B -> GSI 36 (level, low) -> IRQ 36
eth1: Tigon3 [partno(BCM95715) rev 9001] (PCIX:133MHz:64-bit) MAC address 00:e0:81:80:cc:4b
eth1: attached PHY is 5714 (10/100/1000Base-T Ethernet) (WireSpeed[1])
eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
eth1: dma_rwctrl[76148000] dma_mask[40-bit]
mptbase: ioc0: LogInfo(0x30030101): Originator={IOP}, Code={Invalid Page}, SubCode(0x0101)
scsi6 : ioc0: LSISAS1064E B1, FwRev=010a0000h, Ports=1, MaxQ=511, IRQ=874
Fusion MPT misc device (ioctl) driver 4.22.00.00
mptctl: Registered with Fusion MPT base driver
mptctl: /dev/mptctl @ (major,minor=10,220)
device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-devel@redhat.com
loop: module loaded
REISERFS (device sda5): found reiserfs format "3.6" with standard journal
(XEN) tmem: allocating ephemeral-private tmem pool for domid=0...pool_id=2
REISERFS (device sda5): using ordered data mode
reiserfs: using flush barriers
(XEN) tmem: allocating ephemeral-private tmem pool for domid=0...pool_id=3
REISERFS (device sda5): journal params: device sda5, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
REISERFS (device sda5): checking transaction log (sda5)
REISERFS (device sda5): Using r5 hash to sort names
(XEN) tmem: allocating ephemeral-private tmem pool for domid=0...pool_id=4
REISERFS (device sda6): found reiserfs format "3.6" with standard journal
REISERFS (device sda6): using ordered data mode
reiserfs: using flush barriers
REISERFS (device sda6): journal params: device sda6, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
REISERFS (device sda6): checking transaction log (sda6)
REISERFS (device sda6): Using r5 hash to sort names
REISERFS (device sdb1): found reiserfs format "3.6" with standard journal
REISERFS (device sdb1): using ordered data mode
reiserfs: using flush barriers
REISERFS (device sdb1): journal params: device sdb1, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
REISERFS (device sdb1): checking transaction log (sdb1)
REISERFS (device sdb1): Using r5 hash to sort names
input: Power Button as /class/input/input3
ACPI: Power Button [PWRB]
input: Power Button as /class/input/input4
ACPI: Power Button [PWRF]
input: Sleep Button as /class/input/input5
ACPI: Sleep Button [SLPF]
(XEN) tmem: all pools frozen for all domains
(XEN) tmem: all pools thawed for all domains
(XEN) tmem: all pools frozen for all domains
(XEN) tmem: all pools thawed for all domains
(cdrom_add_media_watch() file=/home/jbeulich/cpp/kernel/sle11sp1-2010-01-27/drivers/xen/blkback/cdrom.c, line=108) nodename:backend/vbd/1/5632
(cdrom_is_type() file=/home/jbeulich/cpp/kernel/sle11sp1-2010-01-27/drivers/xen/blkback/cdrom.c, line=95) type:1
(cdrom_add_media_watch() file=/home/jbeulich/cpp/kernel/sle11sp1-2010-01-27/drivers/xen/blkback/cdrom.c, line=110) is a cdrom
(cdrom_add_media_watch() file=/home/jbeulich/cpp/kernel/sle11sp1-2010-01-27/drivers/xen/blkback/cdrom.c, line=112) xenstore wrote OK
(cdrom_is_type() file=/home/jbeulich/cpp/kernel/sle11sp1-2010-01-27/drivers/xen/blkback/cdrom.c, line=95) type:1
(cdrom_add_media_watch() file=/home/jbeulich/cpp/kernel/sle11sp1-2010-01-27/drivers/xen/blkback/cdrom.c, line=108) nodename:backend/vbd/1/768
(cdrom_is_type() file=/home/jbeulich/cpp/kernel/sle11sp1-2010-01-27/drivers/xen/blkback/cdrom.c, line=95) type:0
(XEN) HVM1: HVM Loader
(XEN) HVM1: Detected Xen v4.0.0-rc3-pre
(XEN) HVM1: CPU speed is 1995 MHz
(XEN) irq.c:243: Dom1 PCI link 0 changed 0 -> 5
(XEN) HVM1: PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:243: Dom1 PCI link 1 changed 0 -> 10
(XEN) HVM1: PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:243: Dom1 PCI link 2 changed 0 -> 11
(XEN) HVM1: PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:243: Dom1 PCI link 3 changed 0 -> 5
(XEN) HVM1: PCI-ISA link 3 routed to IRQ5
(XEN) HVM1: pci dev 01:3 INTA->IRQ10
(XEN) HVM1: pci dev 03:0 INTA->IRQ5
(XEN) HVM1: pci dev 02:0 bar 10 size 02000000: f0000008
(XEN) HVM1: pci dev 03:0 bar 14 size 01000000: f2000008
(XEN) HVM1: pci dev 02:0 bar 14 size 00001000: f3000000
(XEN) HVM1: pci dev 03:0 bar 10 size 00000100: 0000c001
(XEN) HVM1: pci dev 01:1 bar 20 size 00000010: 0000c101
(XEN) HVM1: Multiprocessor initialisation:
(XEN) HVM1: - CPU0 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1: - CPU1 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1: - CPU2 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1: - CPU3 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(XEN) HVM1: Writing SMBIOS tables ...
(XEN) HVM1: Loading ROMBIOS ...
(XEN) HVM1: 10300 bytes of ROMBIOS high-memory extensions:
(XEN) HVM1: Relocating to 0xfc000000-0xfc00283c ... done
(XEN) HVM1: Creating MP tables ...
(XEN) HVM1: Loading Cirrus VGABIOS ...
(XEN) HVM1: Loading ACPI ...
(XEN) HVM1: - Lo data: 000ea020-000ea04f
(XEN) HVM1: - Hi data: fc002c00-fc018c7f
(XEN) HVM1: vm86 TSS at fc019000
(XEN) HVM1: BIOS map:
(XEN) HVM1: c0000-c8fff: VGA BIOS
(XEN) HVM1: eb000-eb1bd: SMBIOS tables
(XEN) HVM1: f0000-fffff: Main BIOS
(XEN) HVM1: Invoking ROMBIOS ...
(XEN) HVM1: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(XEN) stdvga.c:147:d1 entering stdvga and caching modes
(XEN) HVM1: VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
(XEN) HVM1: Bochs BIOS - build: 06/23/99
(XEN) HVM1: $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(XEN) HVM1: Options: apmbios pcibios eltorito PMM
(XEN) HVM1:
(XEN) HVM1: ata0-0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63
(XEN) HVM1: ata0 master: QEMU HARDDISK ATA-7 Hard-Disk (10240 MBytes)
(XEN) HVM1: IDE time out
(XEN) HVM1: ata1 master: QEMU DVD-ROM ATAPI-4 CD-Rom/DVD-Rom
(XEN) HVM1: IDE time out
(XEN) HVM1:
(XEN) HVM1:
(XEN) HVM1:
(XEN) HVM1: Press F12 for boot menu.
(XEN) HVM1:
(XEN) HVM1: Booting from Hard Disk...
(XEN) HVM1: Booting from 0000:7c00
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=81
(XEN) HVM1: int13_harddisk: function 08, unmapped device for ELDL=81
(XEN) HVM1: *** int 15h function AX=00c0, BX=0000 not yet supported!
(XEN) stdvga.c:151:d1 leaving stdvga
(XEN) 'q' pressed -> dumping domain info (now=0x20:5C8E3F0F)
(XEN) General information for domain 0:
(XEN) refcnt=4 dying=0 nr_pages=761088 xenheap_pages=5 dirty_cpus={1,3-7} max_pages=4294967295
(XEN) handle=00000000-0000-0000-0000-000000000000 vm_assist=00000004
(XEN) Rangesets belonging to domain 0:
(XEN) Interrupts { 0-303 }
(XEN) I/O Memory { 0-febff, fec03-fedff, fee01-ffffffffffffffff }
(XEN) I/O Ports { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-507, 50c-cfb, d00-ffff }
(XEN) Memory pages belonging to domain 0:
(XEN) DomPage list too long to display
(XEN) XenPage 000000000007fea5: caf=c000000000000002, taf=7400000000000002
(XEN) XenPage 000000000007fea4: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fea3: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fea2: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fe9e: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN) VCPU0: CPU6 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={6} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU1: CPU5 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU2: CPU1 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU3: CPU7 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={7} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU4: CPU5 [has=T] flags=0 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={5} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU5: CPU3 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU6: CPU4 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={4} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU7: CPU0 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 1:
(XEN) refcnt=3 dying=0 nr_pages=262366 xenheap_pages=5 dirty_cpus={2} max_pages=525312
(XEN) handle=084ab9fc-c60a-bcee-cda5-885aa08cd6b8 vm_assist=00000000
(XEN) paging assistance: hap refcounts log_dirty translate external
(XEN) Rangesets belonging to domain 1:
(XEN) Interrupts { }
(XEN) I/O Memory { }
(XEN) I/O Ports { }
(XEN) Memory pages belonging to domain 1:
(XEN) DomPage list too long to display
(XEN) PoD entries=523264 cachesize=260320
(XEN) XenPage 00000000000cfc94: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc96: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc98: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc9a: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfca2: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 1:
(XEN) VCPU0: CPU2 [has=T] flags=0 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-3}
(XEN) paging assistance: hap, 1 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU1: CPU0 [has=F] flags=2 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-3}
(XEN) paging assistance: hap, 1 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU2: CPU1 [has=F] flags=2 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-3}
(XEN) paging assistance: hap, 1 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU3: CPU2 [has=F] flags=2 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-3}
(XEN) paging assistance: hap, 1 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) stdvga.c:147:d1 entering stdvga and caching modes
(XEN) paging.c:170: paging_free_log_dirty_bitmap: used 1 pages for domain 1 dirty logging
(XEN) HVM1: *** int 15h function AX=ec00, BX=0002 not yet supported!
(XEN) HVM1: KBD: unsupported int 16h function 03
(XEN) HVM1: *** int 15h function AX=e980, BX=0000 not yet supported!
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=81
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=81
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=82
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=82
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=83
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=83
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=84
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=84
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=85
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=85
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=86
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=86
(XEN) HVM1: int13_harddisk: function 41, unmapped device for ELDL=87
(XEN) HVM1: int13_harddisk: function 02, unmapped device for ELDL=87
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 88
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 88
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 89
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 89
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8a
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8a
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8b
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8b
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8c
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8c
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8d
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8d
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8e
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8e
(XEN) HVM1: int13_harddisk: function 41, ELDL out of range 8f
(XEN) HVM1: int13_harddisk: function 02, ELDL out of range 8f
(XEN) stdvga.c:151:d1 leaving stdvga
(XEN) vlapic.c:704:d1 Local APIC Write to read-only register 0x30
(XEN) vlapic.c:704:d1 Local APIC Write to read-only register 0x20
(XEN) vlapic.c:704:d1 Local APIC Write to read-only register 0x20
(XEN) irq.c:243: Dom1 PCI link 0 changed 5 -> 0
(XEN) irq.c:243: Dom1 PCI link 1 changed 10 -> 0
(XEN) irq.c:243: Dom1 PCI link 2 changed 11 -> 0
(XEN) irq.c:243: Dom1 PCI link 3 changed 5 -> 0
(XEN) irq.c:306: Dom1 callback via changed to PCI INTx Dev 0x03 IntA
(XEN) 'q' pressed -> dumping domain info (now=0x50:AC933C05)
(XEN) General information for domain 0:
(XEN) refcnt=4 dying=0 nr_pages=761088 xenheap_pages=5 dirty_cpus={3-7} max_pages=4294967295
(XEN) handle=00000000-0000-0000-0000-000000000000 vm_assist=00000004
(XEN) Rangesets belonging to domain 0:
(XEN) Interrupts { 0-303 }
(XEN) I/O Memory { 0-febff, fec03-fedff, fee01-ffffffffffffffff }
(XEN) I/O Ports { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-507, 50c-cfb, d00-ffff }
(XEN) Memory pages belonging to domain 0:
(XEN) DomPage list too long to display
(XEN) XenPage 000000000007fea5: caf=c000000000000002, taf=7400000000000002
(XEN) XenPage 000000000007fea4: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fea3: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fea2: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fe9e: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN) VCPU0: CPU3 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU1: CPU6 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={6} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU2: CPU4 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={4} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU3: CPU0 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU4: CPU5 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={5} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU5: CPU7 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={7} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU6: CPU2 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU7: CPU1 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 1:
(XEN) refcnt=3 dying=0 nr_pages=262365 xenheap_pages=5 dirty_cpus={0-2} max_pages=525312
(XEN) handle=084ab9fc-c60a-bcee-cda5-885aa08cd6b8 vm_assist=00000000
(XEN) paging assistance: hap refcounts log_dirty translate external
(XEN) Rangesets belonging to domain 1:
(XEN) Interrupts { }
(XEN) I/O Memory { }
(XEN) I/O Ports { }
(XEN) Memory pages belonging to domain 1:
(XEN) DomPage list too long to display
(XEN) PoD entries=218112 cachesize=204559
(XEN) XenPage 00000000000cfc94: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc96: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc98: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc9a: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfca2: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 1:
(XEN) VCPU0: CPU0 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={0} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU1: CPU2 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU2: CPU1 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU3: CPU2 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) 'q' pressed -> dumping domain info (now=0x5F:3DFE1032)
(XEN) General information for domain 0:
(XEN) refcnt=4 dying=0 nr_pages=761088 xenheap_pages=5 dirty_cpus={2,4-7} max_pages=4294967295
(XEN) handle=00000000-0000-0000-0000-000000000000 vm_assist=00000004
(XEN) Rangesets belonging to domain 0:
(XEN) Interrupts { 0-303 }
(XEN) I/O Memory { 0-febff, fec03-fedff, fee01-ffffffffffffffff }
(XEN) I/O Ports { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-507, 50c-cfb, d00-ffff }
(XEN) Memory pages belonging to domain 0:
(XEN) DomPage list too long to display
(XEN) XenPage 000000000007fea5: caf=c000000000000002, taf=7400000000000002
(XEN) XenPage 000000000007fea4: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fea3: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fea2: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fe9e: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN) VCPU0: CPU4 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={4} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU1: CPU2 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU2: CPU3 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU3: CPU6 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={6} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU4: CPU5 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={5} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU5: CPU0 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU6: CPU7 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={7} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU7: CPU1 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 1:
(XEN) refcnt=3 dying=0 nr_pages=262365 xenheap_pages=5 dirty_cpus={0-1,3} max_pages=525312
(XEN) handle=084ab9fc-c60a-bcee-cda5-885aa08cd6b8 vm_assist=00000000
(XEN) paging assistance: hap refcounts log_dirty translate external
(XEN) Rangesets belonging to domain 1:
(XEN) Interrupts { }
(XEN) I/O Memory { }
(XEN) I/O Ports { }
(XEN) Memory pages belonging to domain 1:
(XEN) DomPage list too long to display
(XEN) PoD entries=166912 cachesize=153359
(XEN) XenPage 00000000000cfc94: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc96: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc98: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc9a: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfca2: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 1:
(XEN) VCPU0: CPU1 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU1: CPU0 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU2: CPU0 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={0} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU3: CPU3 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) 'q' pressed -> dumping domain info (now=0x9E:CE2AEC6D)
(XEN) General information for domain 0:
(XEN) refcnt=4 dying=0 nr_pages=761088 xenheap_pages=5 dirty_cpus={4-7} max_pages=4294967295
(XEN) handle=00000000-0000-0000-0000-000000000000 vm_assist=00000004
(XEN) Rangesets belonging to domain 0:
(XEN) Interrupts { 0-303 }
(XEN) I/O Memory { 0-febff, fec03-fedff, fee01-ffffffffffffffff }
(XEN) I/O Ports { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-507, 50c-cfb, d00-ffff }
(XEN) Memory pages belonging to domain 0:
(XEN) DomPage list too long to display
(XEN) XenPage 000000000007fea5: caf=c000000000000002, taf=7400000000000002
(XEN) XenPage 000000000007fea4: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fea3: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fea2: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 000000000007fe9e: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN) VCPU0: CPU4 [has=T] flags=0 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={4} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU1: CPU7 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={7} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU2: CPU6 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={6} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU3: CPU0 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU4: CPU3 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU5: CPU1 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU6: CPU2 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU7: CPU5 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={5} cpu_affinity={0-63}
(XEN) 250 Hz periodic timer (period 4 ms)
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 1:
(XEN) refcnt=3 dying=0 nr_pages=262365 xenheap_pages=5 dirty_cpus={0-3} max_pages=525312
(XEN) handle=084ab9fc-c60a-bcee-cda5-885aa08cd6b8 vm_assist=00000000
(XEN) paging assistance: hap refcounts log_dirty translate external
(XEN) Rangesets belonging to domain 1:
(XEN) Interrupts { }
(XEN) I/O Memory { }
(XEN) I/O Ports { }
(XEN) Memory pages belonging to domain 1:
(XEN) DomPage list too long to display
(XEN) PoD entries=43268 cachesize=29715
(XEN) XenPage 00000000000cfc94: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc96: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc98: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfc9a: caf=c000000000000001, taf=7400000000000001
(XEN) XenPage 00000000000cfca2: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 1:
(XEN) VCPU0: CPU0 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={0} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU1: CPU2 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={2} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU2: CPU3 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) VCPU3: CPU1 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-3}
(XEN) paging assistance: hap, 4 levels
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 262365 pod_entries 13553
(XEN) domain_crash called from p2m.c:1082
(XEN) Domain 1 reported crashed by domain 0 on cpu#4:
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) paging.c:170: paging_free_log_dirty_bitmap: used 19 pages for domain 1 dirty logging
(XEN) destroying ephemeral-private tmem pool domid=0 pool_id=3
(XEN) destroying ephemeral-private tmem pool domid=0 pool_id=2
(XEN) irq.c:1524: dom0: forcing unbind of pirq 298
Restarting system.
(XEN) Domain 0 shutdown: rebooting machine.
[-- Attachment #3: Type: text/plain, Size: 138 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
* Re: PoD issue
2010-01-29 15:27 Jan Beulich
@ 2010-01-29 16:01 ` George Dunlap
2010-01-29 16:59 ` Jan Beulich
0 siblings, 1 reply; 13+ messages in thread
From: George Dunlap @ 2010-01-29 16:01 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel@lists.xensource.com
What seems likely to me is that Xen (setting the PoD target) and the
balloon driver (allocating memory) have a different way of calculating
the amount of guest memory. So the balloon driver thinks it's done
handing memory back to Xen when there are still more outstanding PoD
entries than there are entries in the PoD memory pool. What balloon
driver are you using? Can you let me know max_mem, target, and what the
balloon driver has reached before calling it quits? (Although 13,000
pages is an awful lot to be off by: 54 MB...)
Re what "B" means, below is a rather long-winded explanation that will,
hopefully, be clear. :-)
Hmm, I'm not sure what the guest balloon driver's "Current allocation"
means either. :-) Does it mean, "Size of the current balloon" (i.e.,
starts at 0 and grows as the balloon driver allocates guest pages and
hands them back to Xen)? Or does it mean, "Amount of memory guest
currently has allocated to it" (i.e., starts at static_max and goes down
as the balloon driver allocates guest pages and hands them back to Xen)?
In the comment, B does *not* mean "the size of the balloon" (i.e., the
number of pages allocated from the guest OS by the balloon driver).
Rather, B means "Amount of memory the guest currently thinks it has
allocated to it." B starts at M at boot. The balloon driver will try
to make B=T by inflating the size of the balloon to M-T. Clear as mud?
Let's work through a concrete example. Say static max is 409,600K
(100,000 pages).
M=100,000 and doesn't change. Let's say that T is 50,000.
At boot:
B == M == 100,000.
P == 0
tot_pages = pod.count == 50,000
entry_count == 100,000
Thus the following invariants hold:
* 0 <= P (0) <= T (50,000) <= B (100,000) <= M (100,000)
* entry_count (100,000) == B (100,000) - P (0)
* tot_pages (50,000) == P (0) + pod.count (50,000)
As the guest boots, pages will be populated from the cache; P increases,
but entry_count and pod.count decrease. Let's say that 25,000 pages get
allocated just before the balloon driver runs:
* 0 <= P (25,000) <= T (50,000) <= B(100,000) <= M (100,000)
* entry_count (75,000) == B (100,000) - P (25,000)
* tot_pages (50,000) == P (25,000) + pod.count (25,000)
Then the balloon driver runs. It should try to allocate 50,000 pages
total (M - T). For simplicity, let's say that the balloon driver only
allocates un-allocated pages. When it's halfway there, having allocated
25,000 pages, things look like this:
* 0 <= P (25,000) <= T (50,000) <= B (75,000) <= M (100,000)
* entry_count (50,000) == B (75,000) - P (25,000)
* tot_pages (50,000) == P (25,000) + pod.count (25,000)
Eventually the balloon driver should reach its new target of 50,000,
having allocated 50,000 pages:
* 0 <= P (25,000) <= T (50,000) <= B (50,000) <= M(100,000)
* entry_count(25,000) == B(50,000) - P (25,000)
* tot_pages (50,000) == P(25,000) + pod.count(25,000)
The reason for the logic is so that we can do the Right Thing if, after
the balloon driver has ballooned half way (to 75,000 pages), the target
is changed. If you're not changing the target before the balloon driver
has reached its target, this subtlety shouldn't come into play.
-George
Jan Beulich wrote:
> George,
>
> before diving deeply into the PoD code, I hope you have some idea that
> might ease the debugging that's apparently going to be needed.
>
> Following the comment immediately before p2m_pod_set_mem_target(),
> there's an apparent inconsistency with the accounting: While the guest
> in question properly balloons down to its intended setting (1G, with a
> maxmem setting of 2G), the combination of the equations
>
> d->arch.p2m->pod.entry_count == B - P
> d->tot_pages == P + d->arch.p2m->pod.count
>
> doesn't hold (provided I interpreted the meaning of B correctly - I
> took this from the guest balloon driver's "Current allocation" report,
> converted to pages); there's a difference of over 13000 pages.
> Obviously, as soon as the guest uses up enough of its memory, it
> will get crashed by the PoD code.
>
> In two runs I did, the difference (and hence the number of entries
> reported in the eventual crash message) was identical, implying to
> me that this is not a simple race, but rather a systematical problem.
>
> Even on the initial dump taken (when the guest was sitting at the
> boot manager screen), there already appears to be a difference of
> 800 pages (it's my understanding that at this point the difference
> between entries and cache should equal the difference between
> maxmem and mem).
>
> Does this ring any bells? Any hints how to debug this? In any case
> I'm attaching the full log in case you want to look at it.
>
> Jan
>
* Re: PoD issue
2010-01-29 16:01 ` George Dunlap
@ 2010-01-29 16:59 ` Jan Beulich
2010-01-29 18:30 ` George Dunlap
0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2010-01-29 16:59 UTC (permalink / raw)
To: George Dunlap; +Cc: xen-devel@lists.xensource.com
>>> George Dunlap <george.dunlap@eu.citrix.com> 29.01.10 17:01 >>>
>What seems likely to me is that Xen (setting the PoD target) and the
>balloon driver (allocating memory) have a different way of calculating
>the amount of guest memory. So the balloon driver thinks it's done
>handing memory back to Xen when there are still more outstanding PoD
>entries than there are entries in the PoD memory pool. What balloon
>driver are you using?
The one from our forward-ported 2.6.32.x tree. I would suppose there
are no significant differences here from the one in 2.6.18, but I wonder
how precise the totalram_pages value is that the driver (also in 2.6.18)
uses to initialize bs.current_pages. Given that with PoD it is now crucial
for the guest to balloon out enough memory, using an imprecise start
value is not acceptable anymore. The question, however, is what more
reliable data source one could use (given that any non-exported
kernel object is out of the question). And I wonder how this works reliably
for others...
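A rough sketch of why totalram_pages undercounts, with made-up numbers (the helper functions and their arguments are illustrative, not the driver's actual code): totalram_pages reflects only the memory left after the kernel image and all bootmem allocations, so a balloon driver seeding bs.current_pages from it starts below what Xen actually gave the guest.

```c
#include <assert.h>

/* What totalram_pages ends up being inside the guest: the pages Xen
 * handed over, minus the kernel image and bootmem allocations that
 * were consumed before the page allocator took over. */
static long apparent_current_pages(long pages_from_xen,
                                   long kernel_and_bootmem_pages)
{
    return pages_from_xen - kernel_and_bootmem_pages;
}

/* How many pages the driver will consequently fail to balloon out
 * when asked to go from maxmem down to a lower target. */
static long balloon_shortfall(long pages_from_xen,
                              long kernel_and_bootmem_pages)
{
    return pages_from_xen
         - apparent_current_pages(pages_from_xen, kernel_and_bootmem_pages);
}
```

For a 2G guest (524,288 pages) whose kernel image plus bootmem allocations consume ~13,000 pages, the driver undercounts by exactly those ~13,000 pages, matching the difference seen in the crash messages.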
>Can you let me know max_mem, target, and what the
>balloon driver has reached before calling it quits? (Although 13,000
>pages is an awful lot to be off by: 54 MB...)
The balloon driver reports the expected state: target and allocation
are 1G. But yes - how did I not pay attention to this - the balloon is
*far* from being 1G in size (and in fact the difference probably
matches those 54M quite closely).
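For reference, the arithmetic behind that figure (4 KiB pages, decimal megabytes):

```c
#include <assert.h>

/* Convert a count of 4 KiB pages to whole decimal megabytes. */
static long pages_to_mb(long pages)
{
    return pages * 4096 / 1000000;
}
```

A shortfall of "over 13,000 pages" is a little over 53 MB, which is how it lines up with the ~54 MB gap mentioned above.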
Thanks a lot!
Jan
* Re: PoD issue
2010-01-29 16:59 ` Jan Beulich
@ 2010-01-29 18:30 ` George Dunlap
0 siblings, 0 replies; 13+ messages in thread
From: George Dunlap @ 2010-01-29 18:30 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel@lists.xensource.com
PoD is not critical to balloon out guest memory. You can boot with mem
== maxmem and then balloon down afterwards just as you could before,
without involving PoD. (Or at least, you should be able to; if you
can't then it's a bug.) It's just that with PoD you can do something
you've always wanted to do but never knew it: boot with 1GiB with the
option of expanding up to 2GiB later. :-)
With the 54 megabyte difference: It's not like a GiB vs GB thing, is
it? (i.e., 2^30 vs 10^9?) The difference between 1GiB (2^30) and 1 GB
(10^9) is about 74 megs, or 18,000 pages.
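As a quick sanity check, that difference works out as follows:

```c
#include <assert.h>

/* Byte difference between 1 GiB (2^30) and 1 GB (10^9),
 * expressed in 4 KiB pages. */
static long gib_gb_delta_pages(void)
{
    long gib = 1L << 30;       /* 1,073,741,824 bytes */
    long gb  = 1000000000L;    /* 1,000,000,000 bytes */
    return (gib - gb) / 4096;  /* ~74 MB, ~18,000 pages */
}
```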
I guess that is a weakness of PoD in general: we can't control the guest
balloon driver, but we rely on it to have the same model of how to
translate "target" into # pages in the balloon as the PoD code.
-George
Jan Beulich wrote:
>>>> George Dunlap <george.dunlap@eu.citrix.com> 29.01.10 17:01 >>>
>>>>
>> What seems likely to me is that Xen (setting the PoD target) and the
>> balloon driver (allocating memory) have a different way of calculating
>> the amount of guest memory. So the balloon driver thinks it's done
>> handing memory back to Xen when there are still more outstanding PoD
>> entries than there are entries in the PoD memory pool. What balloon
>> driver are you using?
>>
>
> The one from our forward-ported 2.6.32.x tree. I would suppose there
> are no significant differences here from the one in 2.6.18, but I wonder
> how precise the totalram_pages value is that the driver (also in 2.6.18)
> uses to initialize bs.current_pages. Given that with PoD it is now crucial
> for the guest to balloon out enough memory, using an imprecise start
> value is not acceptable anymore. The question, however, is what more
> reliable data source one could use (given that any non-exported
> kernel object is out of the question). And I wonder how this works reliably
> for others...
>
>
>> Can you let me know max_mem, target, and what the
>> balloon driver has reached before calling it quits? (Although 13,000
>> pages is an awful lot to be off by: 54 MB...)
>>
>
> The balloon driver reports the expected state: target and allocation
> are 1G. But yes - how did I not pay attention to this - the balloon is
> *far* from being 1G in size (and in fact the difference probably
> matches those 54M quite closely).
>
> Thanks a lot!
>
> Jan
>
>
Thread overview: 13+ messages
2010-01-31 17:48 PoD issue Jan Beulich
2010-02-03 18:42 ` George Dunlap
2010-02-04 8:17 ` Jan Beulich
2010-02-04 19:12 ` George Dunlap
2010-02-19 0:03 ` Keith Coleman
2010-02-19 6:53 ` Ian Pratt
2010-02-19 21:28 ` Keith Coleman
2010-02-19 8:19 ` Jan Beulich
2010-06-04 15:03 ` Pasi Kärkkäinen
-- strict thread matches above, loose matches on Subject: below --
2010-01-29 15:27 Jan Beulich
2010-01-29 16:01 ` George Dunlap
2010-01-29 16:59 ` Jan Beulich
2010-01-29 18:30 ` George Dunlap