virtualization.lists.linux-foundation.org archive mirror
* State of Xen in upstream Linux
@ 2008-07-31  0:51 Jeremy Fitzhardinge
  2008-07-31  9:08 ` [Xen-devel] " Daniel P. Berrange
  0 siblings, 1 reply; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-31  0:51 UTC (permalink / raw)
  To: Xen-devel, xen-users, Virtualization Mailing List

Well, the mainline kernel just hit 2.6.27-rc1, so it's time for an
update about what's new with Xen.  I'm trying to aim this at both the
user and developer audiences, so bear with me if I seem to be waffling
about something irrelevant.


2.6.26 was mostly a bugfix update compared with 2.6.25, with a few small
issues fixed up.  Feature-wise, it supports 32-bit domU with the core
devices needed to make it work (netfront, blockfront, console).  It also
has xen-pvfb support, which means you can run the standard X server
without needing to set up Xvnc.

I don't know of any bugs in 2.6.26, so I'd recommend you try it out for
all your 32-bit domU needs.  It has had fairly wide exposure in Fedora
kernels, so I'd rank its stability as fairly high.  If you're migrating
from 2.6.18-xen, then there'll be a few things you need to pay attention
to.  http://wiki.xensource.com/xenwiki/XenParavirtOps should help, but
if it doesn't, please fix it and/or ask!


2.6.27 will be a much more interesting release.  It has two major
feature additions: save/restore/migrate (including checkpoint and live
migration), and x86-64 support.  In keeping with the overall unification
of i386 and x86-64 code in the kernel, the 32- and 64-bit Xen code is
largely shared, so they have feature parity.

The Xen support seems fairly stable in linux-2.6.git, but the kernel is
still at -rc1, so lots of other things will tend to break.  I encourage
you to try it out if you're comfortable with what's still a fairly high
rate of change.

My current patch stack is pretty much empty - everything has been merged
into linux-2.6.git - so it makes a good base for any changes you may have.


Now that Xen can directly boot a bzImage format kernel, distros have a
lot of flexibility in how they package Xen.  A single grub.conf entry can
be used to boot either a native kernel (via normal grub), or a
paravirtualized Xen kernel (via pygrub), without modification.
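
Concretely, a single stanza along these lines can serve both cases (a
hedged sketch - the kernel version and paths here are invented for
illustration):

```
# Illustrative grub.conf stanza; version numbers and paths are made up.
# Booting natively, grub loads the bzImage directly; booting as a PV
# guest, pygrub reads this same stanza from the guest's disk and hands
# the bzImage to the Xen domain builder.
title Linux (2.6.27)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.27 ro root=/dev/sda1
        initrd /boot/initrd-2.6.27.img
```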

Fedora 9's kernel-xen package has been based on the mainline kernel from
the outset, but it is still packaged as a separate kernel.  kernel-xen
has been dropped from rawhide (what will become Fedora 10), and all Xen
support - both 32 and 64 bit - has been rolled into the main kernel
package.


So, what's next?

The obvious big piece of missing functionality is dom0 support.  That
will be my focus in this next kernel development window, and I hope
we'll have it merged into 2.6.28.  Some roadblock may appear which
prevents this (kernel development is always a bit uncertain), but that's
the current plan.

We're planning on setting up a xen.git on xen.org somewhere.  We still
need to work out the precise details, but my expectation is that will
become the place where dom0 work continues, and I also hope that other
Xen developers will start using it as the base for their own Xen work. 
Expect to see some more concrete details over the next week or so.


What can I do?

I'm glad you asked.  Here's my current TODO list.  These are mostly
fairly small-scale projects which just need some attention.  I'd love
people to adopt things from this list.

x86-64: SMP broken with CONFIG_PREEMPT

    It crashes early after bringing up a second CPU when preempt is
    enabled.  I think it's failing to set up the CPU topology properly,
    and leaving something uninitialized.  The desired topology is the
    simplest possible - one core per package, no SMT/HT, no multicore,
    no shared caches.  It should be simple to set up.

irq balancing causes lockups

    Using irq balancing causes the kernel to lock up after a while.  It
    looks like it's losing interrupts.  It's probably dropping
    interrupts if you migrate an irq between vcpus while an event is
    pending.  Shouldn't be too hard to fix.  (In the meantime, the
    workaround is to make sure that you don't enable in-kernel irq
    balancing, and you don't run irqbalanced.)

block device hotplug

    Hotplugging devices should work already, but I haven't really tested
    it.  Need to make sure that both the in-kernel driver stuff works
    properly, and that udev events are raised properly, scripts run,
    device nodes added - and conversely for unplug.  Also, a modular
    xen-blockfront.ko should be unloadable.

net device hotplug

    Similar to block devices, but with a slight extra complication.  If
    the driver has outstanding granted pages, then the module can't be
    immediately unloaded, because you can't free the pages if dom0 has a
    reference to them.  My thought is to add a simple kernel thread
    which takes ownership of unwanted granted pages: it would
    periodically try to ungrant them, and if successful, free the page. 
    That means that netfront could hand ownership of those pages over to
    that thread, and unload immediately.

Performance measurement and tuning

    By design, the paravirt-ops-based Xen implementation should have
    high performance.  It uses batching wherever possible, late
    pin/early unpin, and all the other performance tricks available to a
    Xen kernel.  However, my emphasis has been on correctness and
    features, so I have not extensively benchmarked or performance-tuned
    the code.  There's plenty of scope for running both synthetic and
    real-world benchmarks (ideally, applications you really care about)
    and working out how things can be tuned.

    One thing that has already come to light is a general regression in
    context switch time compared to 2.6.18.8-xen.  It's unclear where
    it's coming from; a close look at the actual context switch code
    itself shows that it should perform the same as 2.6.18-xen (same
    number of hypercalls performed, for example).

    This would be an excellent opportunity to become familiar with Xen's
    tracing and performance measurement tools...

Balloon driver

    The current in-kernel balloon driver only supports shrinking and
    regrowing a domain up to its original size.  There's no support for
    growing a domain beyond that.

    My plan is to use hotplug memory to add new memory to the system.  I
    have some prototype code to do this, which works OK, but the hotplug
    memory subsystem needs some modifications to really deal with the
    kinds of incremental memory increases that we need for ballooning
    (it assumes that you're actually plugging in physical DIMMs).

    The other area which needs attention is some sanity checking when
    deflating a domain, to prevent killing the domain by stealing too
    much memory.  2.6.18-xen uses a simple static minimum memory
    heuristic based on the original size of the domain.  This helps, but
    doesn't really prevent over-shrinking a domain which is already
    under memory pressure.  A better approach might be to register a
    shrinker callback, which would let the balloon driver gauge how much
    memory pressure the system is under from the feedback it receives.

    A more advanced project is to modify the kernel VM subsystem to
    measure refault distance: how long a page stays evicted before being
    faulted back in again.  That measurement can tell you how much more
    memory you need to add to a domain to get the fault rate below a
    given threshold.

gdb gives bad info in a 64-bit domain

    For some reason, gdb doesn't work properly.  If you set a
    breakpoint, the program will stop as expected, but the register
    state will be wrong.  Other users of the ptrace syscall, such as
    strace, seem to get good results, so I'm not sure what's going on
    here.  It might be a simple fix, or symptomatic of a more serious
    problem.  But it needs investigation first.

My Pet Project

    What's missing?  What do you depend on?  What's needed before you
    can use mainline Xen as your sole Xen kernel?

Thanks,
    J

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Xen-devel] State of Xen in upstream Linux
  2008-07-31  0:51 State of Xen in upstream Linux Jeremy Fitzhardinge
@ 2008-07-31  9:08 ` Daniel P. Berrange
  2008-07-31 17:54   ` Grant McWilliams
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel P. Berrange @ 2008-07-31  9:08 UTC (permalink / raw)
  To: Jeremy Fitzhardinge; +Cc: Virtualization Mailing List, Xen-devel, xen-users

On Wed, Jul 30, 2008 at 05:51:37PM -0700, Jeremy Fitzhardinge wrote:
> Now that Xen can directly boot a bzImage format kernel, distros have a
> lot of flexibilty in how they package Xen.  A single grub.conf entry can
> be used to boot either a native kernel (via normal grub), or a
> paravirtualized Xen kernel (via pygrub), without modification.
> 
> Fedora 9's kernel-xen package has been based on the mainline kernel from
> the outset, but it is still packaged as a separate kernel.  kernel-xen
> has been dropped from rawhide (what will become Fedora 10), and all Xen
> support - both 32 and 64 bit - has been rolled into the main kernel
> package.

An important thing to note is that support in the Xen userspace for
booting from a bzImage is fairly new - so if you have any existing
Xen-based products/distros, you should check that they have bzImage
support if you want to be guaranteed the ability to boot mainline
kernels.  We're pushing updates to the existing Fedora/RHEL Xen
userspace RPMs to enable bzImage support.

IIRC the primary changeset you'll need from xen-unstable is this one:

  changeset:   17332:db943e8d1051
  user:        Keir Fraser <keir.fraser@citrix.com>
  date:        Tue Apr 01 10:09:33 2008 +0100
  files:       tools/libxc/Makefile tools/libxc/xc_dom_bzimageloader.c tools/libxc/xc_dom_elfloader.c
  description:
  x86: Support loading Linux bzImage v2.08 and up.

  The latest -mm kernel (2.6.25-rc3-mm1) contains v2.08 of the Linux
  bzImage format which embeds an ELF file in place of the raw payload
  allowing it to be extracted and used by the Xen domain builder.

  It is expected that this functionality will be put forward for 2.6.26.

  Signed-off-by : Ian Campbell <ijc@hellion.org.uk>


Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|


* Re: Re: [Xen-devel] State of Xen in upstream Linux
  2008-07-31  9:08 ` [Xen-devel] " Daniel P. Berrange
@ 2008-07-31 17:54   ` Grant McWilliams
  2008-07-31 18:08     ` [Xen-users] " Jeremy Fitzhardinge
  2008-08-01  8:34     ` Re: [Xen-devel] " Daniel P. Berrange
  0 siblings, 2 replies; 13+ messages in thread
From: Grant McWilliams @ 2008-07-31 17:54 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Virtualization Mailing List, Jeremy Fitzhardinge, Xen-devel,
	xen-users



>
>
> > Fedora 9's kernel-xen package has been based on the mainline kernel from
> > the outset, but it is still packaged as a separate kernel.  kernel-xen
> > has been dropped from rawhide (what will become Fedora 10), and all Xen
> > support - both 32 and 64 bit - has been rolled into the main kernel
> > package.
>

Does this mean that in the future all Fedora kernels will be Xen
kernels?  Is this wise?  If I try to run VirtualBox on a Xen kernel the
machine will reboot.  If the Vbox module is loaded at runtime it will
reboot forever.  Yes, I know it's a Vbox issue, but what about KVM?
Can we run KVM on a Xen kernel?

Or am I reading this completely wrong?

Grant


_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users


* Re: [Xen-users] Re: State of Xen in upstream Linux
  2008-07-31 17:54   ` Grant McWilliams
@ 2008-07-31 18:08     ` Jeremy Fitzhardinge
  2008-07-31 18:19       ` [Xen-users] Re: [Xen-devel] " Alexey Eremenko
  2008-07-31 18:36       ` [Xen-users] Re: [Xen-devel] " Dr. David Alan Gilbert
  2008-08-01  8:34     ` Re: [Xen-devel] " Daniel P. Berrange
  1 sibling, 2 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-31 18:08 UTC (permalink / raw)
  To: Grant McWilliams
  Cc: Virtualization Mailing List, Xen-devel, Daniel P. Berrange,
	xen-users

Grant McWilliams wrote:
>
>
>     > Fedora 9's kernel-xen package has been based on the mainline
>     kernel from
>     > the outset, but it is still packaged as a separate kernel.
>      kernel-xen
>     > has been dropped from rawhide (what will become Fedora 10), and
>     all Xen
>     > support - both 32 and 64 bit - has been rolled into the main kernel
>     > package.
>
>
> Does this mean in the future all Fedora kernels will be Xen kernels?
> Is this wise? If I try to run VirtualBox on a Xen kernel the machine
> will reboot. If the Vbox module is loaded at runtime it will reboot
> forever. Yes, I know it's a Vbox issue but what about KVM. Can we run
> KVM on a Xen kernel?
>
> Or am I reading this completely wrong?

If you boot the kernel on bare hardware, the Xen parts of the kernel
will basically be switched off, and play no part in runtime.  All
hardware features and device drivers should be available as normal.  I
run kvm on pvops/xen kernels all the time.

If you boot the kernel under Xen, then VT/SVM will not be available, and
any kernel module which tries to use them - like kvm or (I presume)
virtualbox - should quietly get out of the way.  If they cause the
domain to crash, then that's a bug in those modules.

If by "the machine will reboot" you mean that the whole system crashes,
then that's definitely a bug in Xen which should be fixed.  It should
never be possible for a domU to crash the system.

    J


* Re: [Xen-users] Re: [Xen-devel] State of Xen in upstream Linux
  2008-07-31 18:08     ` [Xen-users] " Jeremy Fitzhardinge
@ 2008-07-31 18:19       ` Alexey Eremenko
  2008-07-31 18:28         ` [Xen-users] " Jeremy Fitzhardinge
  2008-07-31 18:36       ` [Xen-users] Re: [Xen-devel] " Dr. David Alan Gilbert
  1 sibling, 1 reply; 13+ messages in thread
From: Alexey Eremenko @ 2008-07-31 18:19 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Virtualization Mailing List, Grant McWilliams, Daniel P. Berrange,
	Xen-devel, xen-users

On Thu, Jul 31, 2008 at 8:08 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> Grant McWilliams wrote:
>>
>>
>>     > Fedora 9's kernel-xen package has been based on the mainline
>>     kernel from
>>     > the outset, but it is still packaged as a separate kernel.
>>      kernel-xen
>>     > has been dropped from rawhide (what will become Fedora 10), and
>>     all Xen
>>     > support - both 32 and 64 bit - has been rolled into the main kernel
>>     > package.
>>
>>
>> Does this mean in the future all Fedora kernels will be Xen kernels?
>> Is this wise? If I try to run VirtualBox on a Xen kernel the machine
>> will reboot. If the Vbox module is loaded at runtime it will reboot
>> forever. Yes, I know it's a Vbox issue but what about KVM. Can we run
>> KVM on a Xen kernel?
>>
>> Or am I reading this completely wrong?
>
> If you boot the kernel on bare hardware, the Xen parts of the kernel
> will basically be switched off, and play no part in runtime.  All
> hardware features and device drivers should be available as normal.  I
> run kvm on pvops/xen kernels all the time.
>
> If you boot the kernel under Xen, then VT/SVM will not be available, and
> any kernel module which tries to use them - like kvm or (I presume)
> virtualbox - should quietly get out of the way.  If they cause the
> domain to crash, then that's a bug in those modules.
>
> If by "the machine will reboot" you mean that the whole system crashes,
> then that's definitely a bug in Xen which should be fixed.  It should
> never be possible for a domU to crash the system.
>
>    J

Jeremy: Mr. Grant is speaking about the future, where the mainline
Linux kernel (2.6.28?) will support Xen Dom0 - not kernel 2.6.27,
which only improves Xen DomU support.

And I agree with Grant that if mainline Linux includes Dom0 support,
it may cause problems for all kinds of drivers.

As for 2.6.27, it shouldn't cause any new problems.

-- 
-Alexey Eromenko "Technologov"


* Re: [Xen-users] Re: State of Xen in upstream Linux
  2008-07-31 18:19       ` [Xen-users] Re: [Xen-devel] " Alexey Eremenko
@ 2008-07-31 18:28         ` Jeremy Fitzhardinge
  2008-07-31 18:48           ` Re: [Xen-devel] " Grant McWilliams
  0 siblings, 1 reply; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-31 18:28 UTC (permalink / raw)
  To: Alexey Eremenko
  Cc: Virtualization Mailing List, Grant McWilliams, Daniel P. Berrange,
	Xen-devel, xen-users

Alexey Eremenko wrote:
> Jeremy: Mr. Grant speaking about the future where Linux (2.6.28?)
> mainline kernel will support Xen Dom0, not kernel 2.6.27, which only
> improves Xen DomU.
>
> And I agree with Grant, that if Linux mainline will have Dom0
> included, that may cause problems for all kinds of drivers.

The intention is that a single kernel will be equally functional in all
modes of operation.  If the kernel has dom0 capabilities, then they will
only come into play when actually running under Xen; when booting
natively, they will have no effect on anything else.  Naturally, running
under Xen is likely incompatible with any other in-kernel virtualization
system, so we need to make sure that they don't get in the way.  That's
easy to arrange for kvm, but I'm not sure about VirtualBox as it is
out-of-tree (though full source is available, right?).

    J


* Re: [Xen-users] Re: [Xen-devel] State of Xen in upstream Linux
  2008-07-31 18:08     ` [Xen-users] " Jeremy Fitzhardinge
  2008-07-31 18:19       ` [Xen-users] Re: [Xen-devel] " Alexey Eremenko
@ 2008-07-31 18:36       ` Dr. David Alan Gilbert
  2008-07-31 18:45         ` [Xen-users] " Jeremy Fitzhardinge
  1 sibling, 1 reply; 13+ messages in thread
From: Dr. David Alan Gilbert @ 2008-07-31 18:36 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Virtualization Mailing List, Grant McWilliams, Daniel P. Berrange,
	Xen-devel, xen-users

* Jeremy Fitzhardinge (jeremy@goop.org) wrote:
> Grant McWilliams wrote:

<snip>

> > Does this mean in the future all Fedora kernels will be Xen kernels?
> > Is this wise? If I try to run VirtualBox on a Xen kernel the machine
> > will reboot. If the Vbox module is loaded at runtime it will reboot
> > forever. Yes, I know it's a Vbox issue but what about KVM. Can we run
> > KVM on a Xen kernel?
> >
> > Or am I reading this completely wrong?
> 
> If you boot the kernel on bare hardware, the Xen parts of the kernel
> will basically be switched off, and play no part in runtime.  All
> hardware features and device drivers should be available as normal.  I
> run kvm on pvops/xen kernels all the time.

Does that include things like /proc/cpuinfo?  Current Dom0's seem
to lose some of the physical IDs information from there.

Dave
-- 
 -----Open up your eyes, open up your mind, open up your code -------   
/ Dr. David Alan Gilbert    | Running GNU/Linux on Alpha,68K| Happy  \ 
\ gro.gilbert @ treblig.org | MIPS,x86,ARM,SPARC,PPC & HPPA | In Hex /
 \ _________________________|_____ http://www.treblig.org   |_______/


* Re: [Xen-users] Re: State of Xen in upstream Linux
  2008-07-31 18:36       ` [Xen-users] Re: [Xen-devel] " Dr. David Alan Gilbert
@ 2008-07-31 18:45         ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-31 18:45 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Virtualization Mailing List, Grant McWilliams, Daniel P. Berrange,
	Xen-devel, xen-users

Dr. David Alan Gilbert wrote:
> Does that include things like /proc/cpuinfo?  Current Dom0's seem
> to lose some of the physical IDs information from there.

Dom0's vcpus don't necessarily have any fixed relationship to the
underlying physical cpus, so there's no real way they could.  Dom0 might
not even have as many vcpus as there are pcpus.  So I think that's expected.

    J


* Re: Re: [Xen-devel] State of Xen in upstream Linux
  2008-07-31 18:28         ` [Xen-users] " Jeremy Fitzhardinge
@ 2008-07-31 18:48           ` Grant McWilliams
  2008-07-31 18:58             ` [Xen-users] " Jeremy Fitzhardinge
  0 siblings, 1 reply; 13+ messages in thread
From: Grant McWilliams @ 2008-07-31 18:48 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Alexey Eremenko, Virtualization Mailing List, Xen-devel,
	Daniel P. Berrange, xen-users



On Thu, Jul 31, 2008 at 11:28 AM, Jeremy Fitzhardinge <jeremy@goop.org>wrote:

> Alexey Eremenko wrote:
> > Jeremy: Mr. Grant speaking about the future where Linux (2.6.28?)
> > mainline kernel will support Xen Dom0, not kernel 2.6.27, which only
> > improves Xen DomU.
> >
> > And I agree with Grant, that if Linux mainline will have Dom0
> > included, that may cause problems for all kinds of drivers.
>
> The intention is that a single kernel will be equally functional in all
> modes of operation.  If the kernel has dom0 capabilities, then they will
> only come into play when actually running under Xen; when booting
> natively, they will have no effect on anything else.  Naturally, running
> under Xen is likely incompatible with any other in-kernel virtualization
> system, so we need to make sure that they don't get in the way.  That's
> easy to arrange for kvm, but I'm not sure about VirtualBox as it is
> out-of-tree (though full source is available, right?).
>
>    J
>

Vbox OSE has full source available.  I was working on a contract for a
corporation and was at the stage of deciding which virtualization
platform to run, when I installed Vbox on a Xen system (running Xen).
The installer loaded the vbox driver (as well as setting it up to load
automatically) and sent the server into a continuous reboot.
Unfortunately this was in a datacenter that I had no physical access
to.  The Vbox developers jumped all over me when I suggested that it
was a bug.  When running under Xen, the VMware interface will come up
and the driver will load, but the VMs just don't start.  That seems
like a much better outcome than crashing the entire machine.  Seems
like a Vbox problem.  It was run in dom0.

Grant



* Re: [Xen-users] Re: State of Xen in upstream Linux
  2008-07-31 18:48           ` Re: [Xen-devel] " Grant McWilliams
@ 2008-07-31 18:58             ` Jeremy Fitzhardinge
  2008-07-31 20:07               ` Re: [Xen-devel] " Grant McWilliams
  0 siblings, 1 reply; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-31 18:58 UTC (permalink / raw)
  To: Grant McWilliams
  Cc: Alexey Eremenko, Virtualization Mailing List, Xen-devel,
	Daniel P. Berrange, xen-users

Grant McWilliams wrote:
> Vbox OSE has full source available. I was working on a contract for a
> corporation and was in the stage
> of deciding which virtualization platform to run and installed Vbox in
> a Xen system (running Xen). The installer
> loaded the vbox driver (As well as set it up to load automatically)
> and sent the server into a continuous reboot. Unfortunately this was
> in a Datacenter that I had no physical access to. The Vbox developers
> jumped all over me when I suggested that it was a bug.
> When running Xen the VMware interface will come up and the driver will
> load but the VMs just don't start. It seems like
> that would be a much better situation than causing the entire machine
> to crash. Seems like a Vbox problem. It was run in Dom0.

Yes, that sounds like an inherently unstable configuration.  I don't
know anything about how VBox operates internally, but whatever pagetable
management scheme they're using will quite likely not work under Xen
without a lot of care.  The best we can do in that case is try to make
sure that VBox doesn't attempt to come up.

If they don't rely on VT/SVM, then it may work from within a Xen hvm
domain, but I don't know what benefit that would have.  What were you
trying to achieve by running VBox in dom0?

    J


* Re: Re: [Xen-devel] State of Xen in upstream Linux
  2008-07-31 18:58             ` [Xen-users] " Jeremy Fitzhardinge
@ 2008-07-31 20:07               ` Grant McWilliams
  2008-07-31 20:14                 ` [Xen-users] " Jeremy Fitzhardinge
  0 siblings, 1 reply; 13+ messages in thread
From: Grant McWilliams @ 2008-07-31 20:07 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Alexey Eremenko, Virtualization Mailing List, Xen-devel,
	Daniel P. Berrange, xen-users



On Thu, Jul 31, 2008 at 11:58 AM, Jeremy Fitzhardinge <jeremy@goop.org>wrote:

> Grant McWilliams wrote:
> > Vbox OSE has full source available. I was working on a contract for a
> > corporation and was in the stage
> > of deciding which virtualization platform to run and installed Vbox in
> > a Xen system (running Xen). The installer
> > loaded the vbox driver (As well as set it up to load automatically)
> > and sent the server into a continuous reboot. Unfortunately this was
> > in a Datacenter that I had no physical access to. The Vbox developers
> > jumped all over me when I suggested that it was a bug.
> > When running Xen the VMware interface will come up and the driver will
> > load but the VMs just don't start. It seems like
> > that would be a much better situation than causing the entire machine
> > to crash. Seems like a Vbox problem. It was run in Dom0.
>
> Yes, that sounds like an inherently unstable configuration.  I don't
> know anything about how VBox operates internally, but whatever pagetable
> management scheme they're using will quite likely not work under Xen
> without a lot of care.  The best we can do in that case is try to make
> sure that VBox doesn't attempt to come up.
>
> If they don't rely on VT/SVM, then it may work from within a Xen hvm
> domain, but I don't know what benefit that would have.  What were you
> trying to achieve by running VBox in dom0?
>
>    J
>

I wasn't really trying to run VBox in the dom0.  I'd been running the
Xen kernel so long that I'd forgotten which kernel I was on.  Like I
said, when you start VMware the guest OS just doesn't run, which gives
you a second to ponder why.  VBox uses VT/SVM if you check a box in the
config.  It was not checked when it crashed the server.

Depending on the application, I use VBox or Xen for virtualization
projects.  VBox acts like an HVM most of the time, so it's great for
those times I'm virtualizing a product that I have no control over.
Xen is best for environments that I do have control over.

Grant McWilliams



* Re: [Xen-users] Re: State of Xen in upstream Linux
  2008-07-31 20:07               ` Re: [Xen-devel] " Grant McWilliams
@ 2008-07-31 20:14                 ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-31 20:14 UTC (permalink / raw)
  To: Grant McWilliams
  Cc: Alexey Eremenko, Virtualization Mailing List, Xen-devel,
	Daniel P. Berrange, xen-users

Grant McWilliams wrote:
> I wasn't really trying to run VBox in the dom0. I'd been running the
> Xen kernel so long that I'd forgotten which
> kernel I was on. Like I said when you start VMware the guest OS just
> doesn't run which gives you a second
> to ponder why. VBox uses VT/SVM if you check a box in the config. It
> was not checked when it
> crashed the server.

Yes.  It looks to me like the VBox kernel code is constructing its own
pagetable entries without using the normal kernel API to do so, and is
therefore not performing the pfn-to-mfn translation that must be done
when constructing pagetable entries.  The upshot is that it will use
invalid pagetables, which are bound to crash the dom0 kernel.

There are two possible fixes: 1) make VBox Xen- (or pvops-) aware by
making sure it always uses the normal kernel APIs for constructing
pagetables; 2a) refuse to build with CONFIG_PARAVIRT enabled; or 2b)
detect at runtime that we're running on a non-native platform and
refuse to run.  2[ab] are probably the easiest to implement.

> Depending on the application I use VBox or Xen for virtualization
> projects. VBox acts as an HVM most of the time
> so it's great for those times I'm virtualizing a product that I have
> no control over. Xen is best for environments
> that I do have control over.

Yes, I use kvm in a similar way.

    J


* Re: Re: [Xen-devel] State of Xen in upstream Linux
  2008-07-31 17:54   ` Grant McWilliams
  2008-07-31 18:08     ` [Xen-users] " Jeremy Fitzhardinge
@ 2008-08-01  8:34     ` Daniel P. Berrange
  1 sibling, 0 replies; 13+ messages in thread
From: Daniel P. Berrange @ 2008-08-01  8:34 UTC (permalink / raw)
  To: Grant McWilliams
  Cc: Virtualization Mailing List, Jeremy Fitzhardinge, Xen-devel,
	xen-users

On Thu, Jul 31, 2008 at 10:54:38AM -0700, Grant McWilliams wrote:
> >
> >
> > > Fedora 9's kernel-xen package has been based on the mainline kernel from
> > > the outset, but it is still packaged as a separate kernel.  kernel-xen
> > > has been dropped from rawhide (what will become Fedora 10), and all Xen
> > > support - both 32 and 64 bit - has been rolled into the main kernel
> > > package.
> >
> 
> Does this mean in the future all Fedora kernels will be Xen kernels? Is this
> wise? If I try to run VirtualBox on a Xen kernel the machine will reboot. If
> the Vbox module is loaded at runtime it will reboot forever. Yes, I know
> it's a Vbox issue but what about KVM. Can we run KVM on a Xen kernel?

The way paravirt_ops works is that a single kernel image has built-in
support for a number of hypervisors, as well as bare metal.  When it
boots, one of the first things the kernel does is probe to find out
what it's running on.  It then switches in the various Xen, KVM or
VMware specific functions as required for that platform, or continues
running on bare metal.  The Fedora kernels have Xen, KVM and VMI
support enabled by default as of Fedora 10 (VMI is 32-bit only).
Basically it should 'just work' and do the right thing out of the box.

VirtualBox uses out-of-tree kernel modules, so all bets are off as to
whether it works on any particular kernel.  If VirtualBox is important
to you, then encourage its developers to get the functionality they
need into the mainline kernel, because that's the only way it'll be
supportable long term.  We can't guarantee that it won't break in a
future Fedora kernel update as long as it is out of tree.

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

