xen-devel.lists.xenproject.org archive mirror
* Notes from Xen BoF at Debconf15
@ 2015-09-08  9:24 Ian Campbell
  2015-09-08  9:41 ` On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15) Ian Campbell
                   ` (4 more replies)
  0 siblings, 5 replies; 22+ messages in thread
From: Ian Campbell @ 2015-09-08  9:24 UTC (permalink / raw)
  To: xen-devel, Debian Xen Team

Xen upstream BoF
================

We had a discussion around Xen and packaging at Debian's annual developer
conference (Debconf) a few weeks back:
https://summit.debconf.org/debconf15/meeting/279/xen-upstream-bof/

These are my notes; I think there is probably stuff of interest to most
distro people, not just Debian folks.

The session was scheduled in a small, out of the way, room. Around 2
dozen people attended including:

  Ian Jackson
  Bastian Blank ("waldi", current Xen package maintainer in Debian)
  Guido Trotter (previous Xen package maintainer, who still takes care
                 of stable security updates)
  Axel Beckert  (maintainer of the xen-tools helpers)
  Various users
  Myself

The majority of the conversation was between Ian and me on one side and
Bastian and Guido on the other; some users raised issues towards the end.

Embedding in xen.git
====================

We are much better about providing ways to use system-supplied
components these days (since 4.4) and Debian uses them.

Waldi noted that iPXE did not have such an option. Since iPXE is only
used by qemu-trad (for qemu-upstream the iPXE comes from the PCI ROM
BAR) and Debian has disabled qemu-trad, it should be pretty quick to
patch the build system to disable iPXE.

Secondly it was noted that SeaBIOS is still built into hvmloader,
which makes package updates harder (a binNMU of the Xen package is
needed to pick up a new SeaBIOS). Since this is in guest context it is
not an issue for the security team (if it were, it would be higher
priority), and there are medium-term plans to perhaps make it possible
to load the BIOS via the toolstack instead.

Note that OVMF would also be built in but is non-free and hence
disabled.

Lowlevel library API stability
==============================

The majority of the current Debian patch queue is devoted to moving
unstable libraries (mainly libxc) from $libdir to $libexecdir/xen-X.Y,
adding -rpath where needed and removing the SONAME from such
libraries, as well as moving the related binaries.

Waldi expressed hope that the hypervisor interfaces could become
stable, but we think this is unlikely. Having the hypervisor provide a
compat layer for older interfaces was ruled out as the wrong place
from a security PoV. Second choice would be to have the tools provide
a compat layer for older hypervisors, which would be possible but
perhaps tricky to achieve.

This is also a problem for some third-party packages, e.g.
qemu-upstream and kexec-tools, which require a binNMU to build against
a new Xen package. This is painful for maintainers.

We explained our plan was to move some sections of the unstable
library out into small stable libraries for specific purposes, with
stuff needed for qemu-upstream, kexec-tools and other external
packages being a priority in the short term. After this we plan to
reexamine what is left and consider next steps.

In the meantime it should be much easier these days to provide
upstream configure options to provide the changes currently patched in
by Debian.

Midlevel library stability
==========================

libxenlight is API stable but not ABI stable. This is a pain in
particular for libvirt, which needs a binNMU for each new Xen package.

We would like to eventually offer ABI stability for this library, but
we are not there yet.

Stubdomains
===========

Hard to do in a packaging environment (it is really its own partial
architecture). Rump kernels are no different in this regard.

No clever ideas were put forward.

initscripts
===========

Debian has its own initscripts and does not use the upstream ones.

Waldi stated this was because the upstream ones were not properly LSB
and were too "cross-distro".

We would like to try and have these in xen.git. Perhaps a yak shave,
but the closer to upstream the better. Not a very high priority though.

grub-xen
========

Needs much better docs.

ACTION: I agreed to move the text of my blog post somewhere more
obvious.

Release cycle
=============

Waldi commented that the stable release cycle was too long. Would like
to see a release after any large security update.

We asked if the RCs for stable releases were valuable, the answer was
"not so much".

Waldi would prefer to avoid cherry-picking security fixes if possible.

We asked if we thought Xen stable releases could be added to Debian
point releases. Waldi thought they likely could be, citing the
inclusion of Linux stable releases in point releases.

Our stable releases follow a similar set of rules to Linux, we think
we implement them more faithfully (less feature or feature-like
backports)

ACTION: Talk to Jan about making changes to stable release process.

Security updates
================

Guido asked if security updates could go back further.

Currently we go to 4.2, but Debian Wheezy has Xen 4.1.

The security team don't currently have the bandwidth to go further,
but have recently introduced a private discussion list where
predisclosure members are encouraged to exchange their own backports.

Guido is not on global team@security.debian. We suggested he discuss
with the Debian security team switching to a Xen-specific alias
comprising team@ plus the relevant package maintainers.

Release schedule vs. migration N=>N+1 support
=============================================

Philip Hahn (a user) asked what happens to migration if the release
cycle shortens.

We answered that this N=>N+1 policy would need rethinking into an N=>N+M.

We agreed that it would be useful if M were greater than the Debian
release cycle (~2 years).

The recent rewrite of migration support has made changing this policy
far more plausible.

It was suggested as an aside that using -backports more would be
useful.

Remus
=====

A user (Kai ???) was interested in Remus support.

We briefly discussed the status; we think it should be reasonably easy
to integrate xl remus with one of the HA systems in Debian (e.g.
Pacemaker, linux-ha?).

Ferenc Wagner pointed out that integrating VMs (non-HA style) into a
Pacemaker system was quite easy.

Testing Xen in KVM
==================

A user (anon) asked if we tested this, because it sometimes breaks.

We only test Xen in Xen; we pointed to the wiki page.

OpenStack/xapi in Debian
========================

I spoke to a couple of users during the week who were asking about
the future of xapi in Debian, because they wanted OpenStack.

I was able to point them to the libvirt+XenProject stuff, explained
about the CI loop etc., and they went away happy.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-08  9:24 Notes from Xen BoF at Debconf15 Ian Campbell
@ 2015-09-08  9:41 ` Ian Campbell
  2015-09-08 15:03   ` Antti Kantee
  2015-09-08  9:47 ` Notes from Xen BoF at Debconf15 Jan Beulich
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: Ian Campbell @ 2015-09-08  9:41 UTC (permalink / raw)
  To: xen-devel, Debian Xen Team; +Cc: Wei Liu, George.Dunlap

On Tue, 2015-09-08 at 10:24 +0100, Ian Campbell wrote:
> Stubdomains

When I passed these notes around internally for sanity checking we
ended up discussing this issue; since we decided it would be better to
move it to xen-devel, I'm quoting the thread below with permission.

It was a bit tricky to flatten the thread into a single mail, but it
wasn't too "branchy", so I think this will be ok. Hopefully I've not
misrepresented what anyone said by how I've arranged it here...

> ===========
> 
> Hard to do in a packaging environment (it is really its own partial
> architecture). Rump kernels are no different in this regard.
>
> No clever ideas were put forward.

-----------------------------------------------------------------------

George Dunlap:

> Meaning, hard to reuse binaries from existing qemu package rather than
> using the Xen build system to download and build bespoke qemu images.
> 
> Are the grub packagers doing anything with grub-xen?
> 
> If people want to use the existing qemu package, it does seem like
> making a qemu-xen-stubdom package would be the most sensible thing to do.

-----------------------------------------------------------------------

Me in reply to George:

> From the PoV of a distro a qemu-xen-stubdom package duplicates a load
> of source code (e.g. qemu and all the libraries which it uses, as well
> as the stubdom libc and kernel etc.), which is disliked by distros, who
> wish to use their existing packages for things.
> 
> But you can't just use the existing source packages for all those libraries
> rebuilt for "arch=stubdom" since "stubdom" is not an architecture which the
> distro understands. (An arch to a distro is a processor architecture + libc
> + calling convention ABI). arch=stubdom could never be a full arch to the
> distro (hence "partial architecture").
> 
> Security teams also don't like things which contain duplicates of source
> code.
> 
> grub-xen is completely self-contained; it doesn't rely on anything
> outside of grub.git (or .tar). That's not because they've imported a
> load of 3rd party libraries, it really is just that grub does PV by
> itself in a completely self-contained way, and grub is simple enough
> to get away with this.

-----------------------------------------------------------------------

In a short diversion on the final ("grub-xen is completely...")
paragraph, George said:

> Do you mean that it's easier to just toss grub-xen into the grub
> package, because it's not very big and doesn't require the maintainer to
> know anything at all about Xen (hence the point about importing 3rd
> party libraries)?

To which I replied:

> Size is not the issue. The fact that grub-xen (or indeed grub-*) has no
> build dependencies at all is what matters.

-----------------------------------------------------------------------

The main thread of the conversation (actually the same mails as above)
was in reply to the "From the PoV of a distro" paragraph, where George
said:

> Sorry, this comment came from an RPM perspective, where a single .spec
> file will build a suite of separate rpms (with the same prefix).  The
> CentOS xen.spec will build xen-[version], xen-hypervisor-[version],
> xen-runtime-[version], xen-libs-[version], xen-ocaml-[version], &c.
> 
> CentOS is a bit of a special case, but I presume that in Fedora, if they
> wanted to, they could build a qemu-xenpv package as part of the normal
> qemu RPM build process that would generate a runnable PV image as part
> of the normal rpmbuild of the qemu.spec.
> 
> Is this not possible with debian packages?  Aren't there at least
> separate ${lib} and ${lib}-devel packages?

-----------------------------------------------------------------------

Then I:

> I think we are talking at cross purposes. In Debian a single source package
> does indeed build $lib and $lib-dev etc.
> 
> But, those packages are for arch={i386,amd64,armhf}. They are not for
> arch={stubdom-i386,stubdom-amd64} and there is no such arch in Debian (nor
> CentOS nor Fedora I expect).
> 
> For the qemu source package to be able to build a runnable qemu-xenpv
> it would need to get all the libFOO and libBAR which qemu needs from
> somewhere, and they would need to be built for the stubdom-{i386,amd64}
> arches, not for i386 or amd64. All the builddeps for qemu-pv need to come
> from somewhere and the existing i386 or amd64 libraries do not satisfy the
> build dep for a pv stubdom.
> 
> (To be clear, I'm talking about mini-os/rump kernel stub doms here, not
> Linux ones of any sort)

-----------------------------------------------------------------------

To which George said:

> I think I see what you're saying -- if building a qemu stubdom *only*
> involved the qemu codebase itself, then it would be really easy for the
> qemu package maintainers to add qemu-xenpv (or something) which would
> just build qemu to run in PV mode.  But in fact, building qemu stubdoms
> involves building all of the other libraries upon which qemu depends for
> stubdoms as well; which again, we don't want duplicated inside the qemu
> package, and so would mean re-building all those other packages for
> stubdoms as well.
> 
> Interestingly, if we cast a vision for a world where unikernels are the
> norm, then it might actually make sense for a distro like Debian to go
> through and re-build core libraries against rump kernels, so that then
> applications like qemu (or apache, or whatever) could link against them
> and build a bootable image, perhaps capable of running either on Xen or
> KVM.
> 
> That's obviously not going to happen overnight -- but it would be
> interesting for Someone to give a try, just to see how difficult it is.
>  If it's not that difficult to build libraries against rumpkernels and
> package them up so they can be linked to from applications, then the
> road to Debian / Fedora-built unikernel applications (including qemu)
> may not actually be that long.

-----------------------------------------------------------------------

Wei and I both replied. Wei said:

> The vision is that rump kernel toolchains will be packaged for distros,
> so that people can build rump kernels with all the necessary source
> tarballs. That is not exactly the same as what you said; what you said
> is probably the next step down the road.
> 
> > That's obviously not going to happen overnight -- but it would be
> > interesting for Someone to give a try, just to see how difficult it is.
> 
> Most of the faff will be fighting with the build systems of individual
> packages. Also some software has less-tested configurations that don't
> actually work at the source code level. For example, I'm well aware of
> some QEMU bugs that prevent it from building with a rump kernel.
> 
> All those fixes to build systems and source code can only happen as
> developers realise rump kernel is a thing that they care about. It's a
> bit unrealistic to expect a single team to fix every single piece of
> software.

and I said:

> The first hurdle would be how much of the distro is cross-compilable, since
> rump kernels necessarily imply that.
> 
> That in turn requires the rumpkernel toolchain to look like any other
> cross toolchain, i.e. you call <arch>-gcc, otherwise every package
> would need tweaking to use the rumpkernel build system.
> 
> Probably that then implies a load of config.{sub,guess} updates for every
> package etc but those are normal for crossing any new architecture and I
> think are reasonably well understood these days.

-----------------------------------------------------------------------

At this point we decided this should go on list, and here we are...

* Re: Notes from Xen BoF at Debconf15
  2015-09-08  9:24 Notes from Xen BoF at Debconf15 Ian Campbell
  2015-09-08  9:41 ` On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15) Ian Campbell
@ 2015-09-08  9:47 ` Jan Beulich
  2015-09-08 10:15   ` Lars Kurth
  2015-09-08 10:15   ` Ian Campbell
  2015-09-08 14:49 ` Doug Goldstein
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 22+ messages in thread
From: Jan Beulich @ 2015-09-08  9:47 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Debian Xen Team, xen-devel

>>> On 08.09.15 at 11:24, <ian.campbell@citrix.com> wrote:
> Release cycle
> =============
> 
> Waldi commented that the stable release cycle was too long. Would like
> to see a release after any large security update.
> 
> We asked if the RCs for stable releases were valuable, the answer was
> "not so much".
> 
> Waldi would prefer to avoid cherry-picking security fixes if possible.
> 
> We asked if we thought Xen stable releases could be added to Debian
> point releases. Waldi thought they likely could be, citing the
> inclusion of Linux stable releases in point releases.
> 
> Our stable releases follow a similar set of rules to Linux, we think
> we implement them more faithfully (less feature or feature-like
> backports)
> 
> ACTION: Talk to Jan about making changes to stable release process.

That's kind of the opposite of what we quite recently changed to
(a [hopefully] more predictable four month cycle). Apart from the
question what "large" is, doing a release after any large security
update seems unreasonable to me (not only because of giving up
the predictability, but also because of the overhead involved,
which is there even if we ditched the RCs). I have to admit that I
fail to see why Debian would be different than other distros, all
cherry picking security fixes until a new stable release becomes
available. If otoh other major distros voiced similar desires, I
think we'd have to once again re-think our stable release cadence.

Jan

* Re: Notes from Xen BoF at Debconf15
  2015-09-08  9:47 ` Notes from Xen BoF at Debconf15 Jan Beulich
@ 2015-09-08 10:15   ` Lars Kurth
  2015-09-08 10:15   ` Ian Campbell
  1 sibling, 0 replies; 22+ messages in thread
From: Lars Kurth @ 2015-09-08 10:15 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Debian Xen Team, Ian Campbell, xen-devel


> On 8 Sep 2015, at 10:47, Jan Beulich <jbeulich@suse.com> wrote:
> 
>>>> On 08.09.15 at 11:24, <ian.campbell@citrix.com> wrote:
>> Release cycle
>> =============
>> 
>> Waldi commented that the stable release cycle was too long. Would like
>> to see a release after any large security update.
>> 
>> We asked if the RCs for stable releases were valuable, the answer was
>> "not so much".
>> 
>> Waldi would prefer to avoid cherry-picking security fixes if possible.
>> 
>> We asked if we thought Xen stable releases could be added to Debian
>> point releases. Waldi thought they likely could be, citing the
>> inclusion of Linux stable releases in point releases.
>> 
>> Our stable releases follow a similar set of rules to Linux, we think
>> we implement them more faithfully (less feature or feature-like
>> backports)
>> 
>> ACTION: Talk to Jan about making changes to stable release process.
> 
> That's kind of the opposite of what we quite recently changed to
> (a [hopefully] more predictable four month cycle). Apart from the
> question what "large" is, doing a release after any large security
> update seems unreasonable to me (not only because of giving up
> the predictability, but also because of the overhead involved,
> which is there even if we ditched the RCs).

I suppose the question is what the real problem for distros is: 

If it is the process of cherry-picking and merging, then we could
create a tag on a stable branch more frequently (e.g. every month);
that would make it easier for distros to consume a batch of security
fixes without all the overhead of creating a release.

Whether such a tagged stable branch is an RC (without a tarball) or
not would be a different question.
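Such a lighter-weight snapshot could be as simple as tagging the branch
(version and tag names below are invented for illustration):

```shell
# Simulate a stable branch accumulating a security fix, then tag a
# snapshot without doing a full release (no tarball, no RCs).
git init -q stable-demo
GIT="git -C stable-demo -c user.name=demo -c user.email=demo@example.org"
$GIT commit -q --allow-empty -m "x86: fix for XSA-NNN"
$GIT tag -a 4.5.1-pre-2015-09 -m "monthly snapshot of stable branch"
git -C stable-demo describe --tags    # -> 4.5.1-pre-2015-09
```

Distros could then package the tag directly instead of cherry-picking
individual fixes.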

> I have to admit that I
> fail to see why Debian would be different than other distros, all
> cherry picking security fixes until a new stable release becomes
> available. If otoh other major distros voiced similar desires, I
> think we'd have to once again re-think our stable release cadence.

A fully tested maintenance release created more frequently would be a
lot more challenging and require a lot more effort. I do agree with
Jan that we ought to approach other distros and then re-think.

Lars

* Re: Notes from Xen BoF at Debconf15
  2015-09-08  9:47 ` Notes from Xen BoF at Debconf15 Jan Beulich
  2015-09-08 10:15   ` Lars Kurth
@ 2015-09-08 10:15   ` Ian Campbell
  2015-09-08 10:39     ` Jan Beulich
  1 sibling, 1 reply; 22+ messages in thread
From: Ian Campbell @ 2015-09-08 10:15 UTC (permalink / raw)
  To: Jan Beulich, Ian.Jackson; +Cc: Debian Xen Team, xen-devel

On Tue, 2015-09-08 at 03:47 -0600, Jan Beulich wrote:
> > > > On 08.09.15 at 11:24, <ian.campbell@citrix.com> wrote:
> > Release cycle
> > =============
> > 
> > Waldi commented that the stable release cycle was too long. Would like
> > to see a release after any large security update.
> > 
> > We asked if the RCs for stable releases were valuable, the answer was
> > "not so much".
> > 
> > Waldi would prefer to avoid cherry-picking security fixes if possible.
> > 
> > We asked if we thought Xen stable releases could be added to Debian
> > point releases. Waldi thought they likely could be, citing the
> > inclusion of Linux stable releases in point releases.
> > 
> > Our stable releases follow a similar set of rules to Linux, we think
> > we implement them more faithfully (less feature or feature-like
> > backports)
> > 
> > ACTION: Talk to Jan about making changes to stable release process.
> 
> That's kind of the opposite of what we quite recently changed to
> (a [hopefully] more predictable four month cycle). Apart from the
> question what "large" is, doing a release after any large security
> update seems unreasonable to me (not only because of giving up
> the predictability, but also because of the overhead involved,
> which is there even if we ditched the RCs). I have to admit that I
> fail to see why Debian would be different than other distros, all
> cherry picking security fixes until a new stable release becomes
> available. If otoh other major distros voiced similar desires, I
> think we'd have to once again re-think our stable release cadence.

IIRC (hopefully Ian or someone else will correct me if not) the main
proposal to discuss with you was WRT the usefulness of the RCs for stable
releases, since it was felt they didn't provide much benefit to
downstreams.

IOW perhaps it would be just as useful to downstreams and less work for us
(mainly you I suppose) to do 0 or only 1 rc for a point release, based on
whatever is in the branch at the appropriate time. It might also avoid
various delays which the stable release process can currently suffer from
waiting for a push for each rc instead of just once (or maybe twice), which
in turn might help the release timing to be even more predictable.

Ian.

* Re: Notes from Xen BoF at Debconf15
  2015-09-08 10:15   ` Ian Campbell
@ 2015-09-08 10:39     ` Jan Beulich
  2015-09-08 10:49       ` Ian Jackson
  0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2015-09-08 10:39 UTC (permalink / raw)
  To: Ian Campbell, Ian.Jackson; +Cc: Debian Xen Team, xen-devel

>>> On 08.09.15 at 12:15, <ian.campbell@citrix.com> wrote:
> On Tue, 2015-09-08 at 03:47 -0600, Jan Beulich wrote:
>> > > > On 08.09.15 at 11:24, <ian.campbell@citrix.com> wrote:
>> > Release cycle
>> > =============
>> > 
>> > Waldi commented that the stable release cycle was too long. Would like
>> > to see a release after any large security update.
>> > 
>> > We asked if the RCs for stable releases were valuable, the answer was
>> > "not so much".
>> > 
>> > Waldi would prefer to avoid cherry-picking security fixes if possible.
>> > 
>> > We asked if we thought Xen stable releases could be added to Debian
>> > point releases. Waldi thought they likely could be, citing the
>> > inclusion of Linux stable releases in point releases.
>> > 
>> > Our stable releases follow a similar set of rules to Linux, we think
>> > we implement them more faithfully (less feature or feature-like
>> > backports)
>> > 
>> > ACTION: Talk to Jan about making changes to stable release process.
>> 
>> That's kind of the opposite of what we quite recently changed to
>> (a [hopefully] more predictable four month cycle). Apart from the
>> question what "large" is, doing a release after any large security
>> update seems unreasonable to me (not only because of giving up
>> the predictability, but also because of the overhead involved,
>> which is there even if we ditched the RCs). I have to admit that I
>> fail to see why Debian would be different than other distros, all
>> cherry picking security fixes until a new stable release becomes
>> available. If otoh other major distros voiced similar desires, I
>> think we'd have to once again re-think our stable release cadence.
> 
> IIRC (hopefully Ian or someone else will correct me if not) the main
> proposal to discuss with you was WRT the usefulness of the RCs for stable
> releases, since it was felt they didn't provide much benefit to
> downstreams.
> 
> IOW perhaps it would be just as useful to downstreams and less work for us
> (mainly you I suppose) to do 0 or only 1 rc for a point release, based on
> whatever is in the branch at the appropriate time. It might also avoid
> various delays which the stable release process can currently suffer from
> waiting for a push for each rc instead of just once (or maybe twice), which
> in turn might help the release timing to be even more predictable.

Right - 4.4.3 already was released with just one RC, and indeed I
meant to stay with that model considering the little (if any) feedback
we get on these RCs. I personally could live without doing any RCs,
but thought so far that doing at least one kind of publicly indicates
the intention of doing a release soon.

Jan

* Re: Notes from Xen BoF at Debconf15
  2015-09-08 10:39     ` Jan Beulich
@ 2015-09-08 10:49       ` Ian Jackson
  2015-09-08 10:55         ` Jan Beulich
  0 siblings, 1 reply; 22+ messages in thread
From: Ian Jackson @ 2015-09-08 10:49 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Debian Xen Team, Ian Campbell, xen-devel

Jan Beulich writes ("Re: [Xen-devel] Notes from Xen BoF at Debconf15"):
> Right - 4.4.3 already was released with just one RC, and indeed I
> meant to stay with that model considering the little (if any) feedback
> we get on these RCs. I personally could live without doing any RCs,
> but [...]

Having had the chance to reflect I am convinced that we should stop
doing RCs for point releases.

> [I] thought so far that doing at least one kind of publicly indicates
> the intention of doing a release soon.

You might hope that it would have that effect, but I don't think it is
working.  Also, if it does work, all it does is generate more patches,
making the `rc' more of a `prod to get your stuff shoveled in'.

Ian.

* Re: Notes from Xen BoF at Debconf15
  2015-09-08 10:49       ` Ian Jackson
@ 2015-09-08 10:55         ` Jan Beulich
  2015-09-08 14:52           ` Doug Goldstein
  0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2015-09-08 10:55 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Debian Xen Team, Ian Campbell, xen-devel

>>> On 08.09.15 at 12:49, <Ian.Jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("Re: [Xen-devel] Notes from Xen BoF at Debconf15"):
>> Right - 4.4.3 already was released with just one RC, and indeed I
>> meant to stay with that model considering the little (if any) feedback
>> we get on these RCs. I personally could live without doing any RCs,
>> but [...]
> 
> Having had the chance to reflect I am convinced that we should stop
> doing RCs for point releases.
> 
>> [I] thought so far that doing at least one kind of publicly indicates
>> the intention of doing a release soon.
> 
> You might hope that it would have that effect, but I don't think it is
> working.  Also, if it does work, all it does is generate more patches,
> making the `rc' more of a `prod to get your stuff shoveled in'.

Well, okay - no RCs anymore then unless someone offers a
convincing reason to have them.

Jan

* Re: Notes from Xen BoF at Debconf15
  2015-09-08  9:24 Notes from Xen BoF at Debconf15 Ian Campbell
  2015-09-08  9:41 ` On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15) Ian Campbell
  2015-09-08  9:47 ` Notes from Xen BoF at Debconf15 Jan Beulich
@ 2015-09-08 14:49 ` Doug Goldstein
  2015-09-08 15:24   ` Ian Campbell
  2015-09-17 10:06   ` George Dunlap
  2015-09-16 13:20 ` Ian Campbell
  2015-10-05 15:37 ` Ian Campbell
  4 siblings, 2 replies; 22+ messages in thread
From: Doug Goldstein @ 2015-09-08 14:49 UTC (permalink / raw)
  To: xen-devel


On 9/8/15 4:24 AM, Ian Campbell wrote:
> Xen upstream BoF
> ================
> 
> We had a discussion around Xen and packaging at Debian's annual developer
> conference (Debconf) a few weeks back:
> https://summit.debconf.org/debconf15/meeting/279/xen-upstream-bof/

While I'm not a Debian guy and was not at debconf, I hope I can share
some comments on packaging Xen from a different PoV. I've been a
Gentoo dev for over a decade and I've played around with improving
Xen's packaging in Gentoo. I have also lately been updating Yocto's
Xen packages to try to make those work.

> 
> These are my notes, I think there is probably stuff of interest to most
> distro people, not just Debian folks.
> 
> The session was scheduled in a small, out of the way, room. Around 2
> dozen people attended including:
> 
>   Ian Jackson
>   Bastian Blank ("waldi", current Xen package maintainer in Debian)
>   Guido Trotter (previous Xen package maintainer, who still takes care
>                  of stable security updates)
>   Axel Beckert  (maintainer of the xen-tools helpers)
>   Various users
>   Myself
> 
> The majority of the conversation was between Ian & I and Bastian and
> Guido, some users raised some issues towards the end.
> 
> Embedding in xen.git
> ====================
> 
> We are much better about providing ways to use system-supplied
> components these days (since 4.4) and Debian uses them.

100% agreed here. Most distros hate this since it's against their
policy and fight real hard to break this apart. I'm hoping to throw my
efforts behind raisin and get Xen upstream to embrace raisin for
building rather than embedding bits.

> 
> Waldi noted that iPXE did not have such an option. Since iPXE is only
> used by qemu-trad (for qemu-upstream the iPXE comes from the PCI ROM
> BAR) and Debian has disabled qemu-trad, it should be pretty quick to
> patch the build system to disable iPXE.

This would be a welcome addition.

> 
> Secondly it was noted that SeaBIOS is still built into hvmloader,
> which makes package updates harder (needs a binNMU of the Xen package
> to pickup a new SeaBIOS). Since this is in guest context it is not an
> issue for the security team (which would make it higher priority) and
> there are medium term plans to perhaps make it possible to load the
> BIOS via the toolstack instead.
> 
> Note that OVMF would also be built in but is non-free and hence
> disabled.
> 
> Lowlevel library API stability
> ==============================
> 
> The majority of the current Debian patch queue is devoted to moving
> unstable libraries (mainly libxc) from $libdir to $libexecdir/xen-X.Y,
> adding -rpath where needed and removing the SONAME from such
> libraries, as well as moving the related binaries.

This would be good to upstream. If the libraries are not meant to
have API stability then they should likely be relocated there.

> 
> Waldi expressed hope that the hypervisor interfaces could become
> stable, but we think this is unlikely. Having the hypervisor provide a
> compat layer for older interfaces was ruled out as the wrong place
> from a security PoV. Second choice would be to have the tools provide
> a compat layer for older hypervisors, which would be possible but
> perhaps tricky to achieve.
> 
> This is also a problem for some third-party packages, e.g.
> qemu-upstream and kexec-tools, which require a binNMU to build against
> a new Xen package. This is painful for maintainers.
> 
> We explained our plan was to move some sections of the unstable
> library out into small stable libraries for specific purposes, with
> stuff needed for qemu-upstream, kexec-tools and other external
> packages being a priority in the short term. After this we plan to
> reexamine what is left and consider next steps.
> 
> In the meantime it should be much easier these days to provide
> upstream configure options to provide the changes currently patched in
> by Debian.
> 
> Midlevel library stability
> ==========================
> 
> libxenlight is only API not ABI stable. This is a pain in particular
> for libvirt, which needs a binNMU for each new Xen package.
> 
> We would like to eventually offer ABI stability for this library, but
> we are not there yet.

What about doing symbol versioning like libvirt does to start offering
ABI stability? You could then at some point in the future do an ABI break
to remove the bits that were deprecated when you do the "1.0" release.
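For reference, a minimal sketch of what GNU symbol versioning looks like.
The library, symbols, and DEMO_1.0 version tag below are invented for
illustration; libvirt's real setup is considerably more involved:

```shell
# Sketch: export one versioned symbol from a shared library via a GNU ld
# version script. Library, symbol, and version names are illustrative.
cat > vers.map <<'EOF'
DEMO_1.0 {
    global: demo_fn;
    local:  *;    /* everything else stays out of the ABI */
};
EOF
cat > vdemo.c <<'EOF'
int demo_fn(void) { return 1; }
int demo_private(void) { return 2; }  /* hidden by the version script */
EOF
gcc -shared -fPIC -Wl,--version-script=vers.map -o libver.so vdemo.c
# demo_fn is exported tagged DEMO_1.0; demo_private is not exported at all:
readelf --dyn-syms libver.so | grep demo_
```

Consumers then bind to the tagged symbol, and a later incompatible change
can add a DEMO_2.0 tag while the old one keeps already-linked binaries
working.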


> 
> Stubdomains
> ===========
> 
> Hard to do in a packaging environment (is really its own partial
> architecture). Rump kernels are no different in this regard.
> 
> No clever ideas were put forward.

Honestly, what about moving these more out of tree? Now with mini-os
being out of tree and the stubdoms needing mini-os, it's an absolute mess
to build from a distro standpoint since mini-os is git fetched. Making
this work upstream using raisin would be a great improvement here.


> 
> initscripts
> ===========
> 
> Debian has its own initscripts, does not use the upstream ones.
> 
> Waldi stated this was because the upstream ones were not properly LSB
> and were too "cross-distro".

It would likely be useful to stick a README in with the init scripts
that are in the tree to identify how they should be used together.
Ubuntu currently (I assume Debian too) uses one LSB script and the rest
are systemd units, which doesn't always do the right thing (at least on
my machine), so I would say the situation as a whole needs to be improved.

> 
> We would like to try and have these in xen.git. Perhaps a yak to
> shave, but the closer to upstream the better. Not a very high priority
> though.
> 
> grub-xen
> ========
> 
> Needs much better docs.
> 
> ACTION: I agreed to move the text of my blog post somewhere more
> obvious.
> 
> Release cycle
> =============
> 
> Waldi commented that the stable release cycle was too long. Would like
> to see a release after any large security update.
> 
> We asked if the RCs for stable releases were valuable, the answer was
> "not so much".
> 
> Waldi would prefer to avoid cherry-picking security fixes if possible.
> 
> We asked whether Xen stable releases could be added to Debian
> point releases. Waldi thought they likely could be, citing the
> inclusion of Linux stable releases in point releases.
> 
> Our stable releases follow a similar set of rules to Linux; we think
> we implement them more faithfully (fewer feature or feature-like
> backports).
> 
> ACTION: Talk to Jan about making changes to stable release process.
> 
> Security updates
> ================
> 
> Guido asked if security updates could go back further.
> 
> Currently we go to 4.2, but Debian Wheezy has Xen 4.1.
> 
> The security team doesn't currently have the capacity to go further, but
> have recently introduced a private discussion list where predisclosure
> members are encouraged to exchange their own backports.
> 
> Guido is not on global team@security.debian. We suggested he discuss
> with the Debian security team switching to a xen specific alias
> including team@ + relevant package maintainers.
> 
> Release schedule vs. migration N=>N+1 support
> =============================================
> 
> Philip Hahn (a user) asked what happens to migration if the release
> cycle shortens.
> 
> We answered that this N=>N+1 policy would need rethinking into an N=>N+M.
> 
> We agreed that it would be useful if M was > Debian release cycle (~2
> years).
> 
> Recent rewrite of migration support has made changing this policy far
> more plausible.
> 
> It was suggested as an aside that using -backports more would be
> useful.
> 
> Remus
> =====
> 
> A user (Kai ???) was interested in Remus support.
> 
> We briefly discussed status; we think it should be reasonably easy to
> combine xl remus into one of the HA systems in Debian (e.g. Pacemaker,
> linux-ha?)
> 
> Ferenc Wagner pointed out that integrating VMs (non-HA style) into a
> Pacemaker system was quite easy.
> 
> Testing Xen in KVM
> ==================
> 
> A user (anon) asked if we tested this, because it sometimes broke.
> 
> We only test Xen in Xen, pointed to the wiki page.
> 
> OpenStack/xapi in Debian
> ========================
> 
> I spoke to a couple of users through the week who were asking after
> the future of xapi in Debian, because they wanted OpenStack.
> 
> I was able to point them to the libvirt+XenProject stuff and explained
> about the CI loop etc and they went away happy.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


-- 
Doug Goldstein



* Re: Notes from Xen BoF at Debconf15
  2015-09-08 10:55         ` Jan Beulich
@ 2015-09-08 14:52           ` Doug Goldstein
  0 siblings, 0 replies; 22+ messages in thread
From: Doug Goldstein @ 2015-09-08 14:52 UTC (permalink / raw)
  To: xen-devel



On 9/8/15 5:55 AM, Jan Beulich wrote:
>>>> On 08.09.15 at 12:49, <Ian.Jackson@eu.citrix.com> wrote:
>> Jan Beulich writes ("Re: [Xen-devel] Notes from Xen BoF at Debconf15"):
>>> Right - 4.4.3 already was released with just one RC, and indeed I
>>> meant to stay with that model considering the little (if any) feedback
>>> we get on these RCs. I personally could live without doing any RCs,
>>> but [...]
>>
>> Having had the chance to reflect I am convinced that we should stop
>> doing RCs for point releases.
>>
>>> [I] thought so far that doing at least one kind of publicly indicates
>>> the intention of doing a release soon.
>>
>> You might hope that it would have that effect, but I don't think it is
>> working.  Also, if it does work, all it does is generate more patches,
>> making the `rc' more of a `prod to get your stuff shoveled in'.
> 
> Well, okay - no RCs anymore then unless someone offers a
> convincing reason to have them.
> 
> Jan
> 

So I'll just toss in a late thumbs up here. For libvirt we maintain
stable branches that are mostly just security fixes and other important
fixes. We tag these without RCs, and all the downstream distros (and
myself as the downstream maintainer for 3 distros) are happy with that
situation.

-- 
Doug Goldstein



* Re: On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-08  9:41 ` On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15) Ian Campbell
@ 2015-09-08 15:03   ` Antti Kantee
  2015-09-08 16:15     ` Ian Campbell
  0 siblings, 1 reply; 22+ messages in thread
From: Antti Kantee @ 2015-09-08 15:03 UTC (permalink / raw)
  To: Ian Campbell, xen-devel, Debian Xen Team
  Cc: rumpkernel-users, Wei Liu, George.Dunlap

Hi,

Wei Liu hinted that I should "chime in and / or provide corrections" 
(his words).  I'll attempt to do exactly that by not really replying to 
anything specific.  For the record, when I say "we" in this mail, I mean 
"people who have contributed to the rump kernel project" (as also 
indicated by the email-hat).

First of all, there's a difference between a rump kernel (driver bundle 
built out of unmodified kernel components) and any unikernel you 
construct out of rump kernels ... sort of like how there's a difference 
between Linux and GNU/Linux.

For unikernels, the rump kernel project provides Rumprun, which can 
provide you with a near-full POSIX'y interface.  Rumprun also provides 
toolchain wrappers so that you can compile existing programs as Rumprun 
unikernels.  Rumprun also recently regrew the ability to run without the 
POSIX'y bits; some people found it important to be able to make a 
tradeoff between running POSIX'y applications and more compact "kernel 
plane" unikernels such as routers and firewalls.  But, for brevity and 
simplicity, I'll assume the POSIX'y mode for the rest of this email, 
since that's what the QEMU stubdom will no doubt use.

If the above didn't explain the grand scheme of things clearly, have a 
look at http://wiki.rumpkernel.org/Repo and especially the picture.  If 
things are still not clear after that, please point out matters of 
confusion and I will try to improve the explanations.

Also for simplicity, I'll be talking about rump kernels constructed from 
the NetBSD kernel, and the userspace environment of Rumprun being 
NetBSD-derived.  Conceptually, there's nothing stopping someone from 
plugging a GNU layer on top of NetBSD-derived rump kernels (a bit like 
Debian kXBSD?) or constructing rump kernels out of Linux.  But for now, 
let's talk about the only working implementation.

As far as I know, the API/ABI of the application environment provided by 
Rumprun is the same as the one provided by standard NetBSD.  Granted, I 
didn't perform the necessary experiments to verify that, so take the 
following with a few pinches of salt.  In theory, you could take 
application objects built for NetBSD and link them against Rumprun libs.
However, since a) nobody (else) ships applications as relocatable
static objects b) Rumprun does not support shared libraries, I don't 
know how helpful the fact of ABI compatibility is.  IMO, adding shared 
library support would be a backwards way to go: increasing runtime 
processing and memory requirements to solve a build problem sounds plain 
weird.  So, I don't think you can leverage anything existing.

We do have most of the Rumprun cross-toolchain figured out at this 
point.  First, we don't ship any backend toolchain(s), but rather bolt 
wrappers and specs on top of any toolchain (*) you provide.  That way we 
don't have to figure out where to get a toolchain which produces binary 
for every target that everyone might want.  Also, it makes bootstrapping 
Rumprun convenient, since you just say "hey give me the components and 
application wrappers for CC=foocc" and off you go.

*) as long as it's gcc-derived, for now (IIRC gcc 4.8 - 5.1 are known to 
work well, older than that at least C++ won't work).  clang doesn't 
support specs files at least AFAIK, so someone would have to figure out 
how to move the contents of the specs into the wrappers, or whatever 
equivalent clang uses.  (patches welcome ;)

The produced wrappers look exactly like a normal cross-toolchain.  The 
tuple is the same as what NetBSD uses, except with rumprun added in the 
middle, so e.g. x86_64-rumprun-netbsd or arm-rumprun-netbsdelf-eabihf. 
That naming scheme means that most GNU-using software compiles nicely 
for Rumprun just by running configure as ./configure 
--host=x86_64-rumprun-netbsd followed by "make".  Sometimes you 
additionally need things like --disable-shared, but all in all 
everything works pretty well.  See 
http://repo.rumpkernel.org/rumprun-packages for a bunch of "case 
studies", not limited to just GNU autotools.

After "make", before launch we have an additional step called "bake", 
which links the specific kernel components onto the binary.  So for 
example, you can make the compiled binary run on Xen or KVM depending on 
which kernel components you bake onto it.  As a crude analogy, it's like 
scp'ing a binary to a Xen or KVM or bare metal system, but since the 
Rumprun unikernel is missing exec, we use the linker to perform "system 
selection".
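As a rough sketch, the configure/make/bake flow described above looks
something like the following. This assumes the Rumprun toolchain wrappers
are already on $PATH; the bake tool name (rumprun-bake) and the xen_pv
configuration name are my assumptions and may differ from the current tree,
so treat this as illustrative rather than a recipe:

```sh
# Illustrative only: requires the Rumprun wrappers, so this will not run
# on a stock host. Cross-compile an autotools package for Rumprun:
./configure --host=x86_64-rumprun-netbsd   # sometimes also --disable-shared
make

# "Bake" the platform-specific kernel components onto the resulting
# binary; choosing a different configuration targets Xen vs. KVM etc.
# (tool and configuration names here are assumptions):
rumprun-bake xen_pv myapp.bin myapp
```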

So for shipping, one option is to ship the binary after "make", but then 
you also need to ship the toolchain.  The other option is to ship the 
baked binary, but then you lose some of your possibilities on how to 
target the binary.  I'm not sure either option is right for all cases.

We're still trying to figure out the exact form and figure of 
bake+launch.  In the original implementation we assumed that at 
launch-time we could cheaply control all of the details of the backend 
(a la xl.conf).  That assumption proved to be bad not only for example 
for embedded systems (which we should've foreseen), but also for cases 
like Amazon EC2 (where creating something launchable and launching 
something are seriously separate steps).  I'm not going to go into 
details in this thread, but just saying that we still have a few things 
to figure out in the full source-to-execution chain.  If someone wants 
to ship something, please please check with the rump kernel community 
first so that we know that you want to depend on some syntax/semantics 
not changing anymore.

I don't really have good solutions for the packaging problem.  Building 
a "full distro" around rump kernels certainly sounds interesting, and 
we're sort of trying to experiment with that with rumprun-packages. 
However, for packages sooner or later we need to assimilate a real 
packaging system which properly manages dependencies, licenses, 
vulnerabilities, etc.  I'm unsure we'd want to assimilate every existing 
packaging system -- Rumprun works the same on every platform, after all.
Hard to say for now; I hadn't actually considered the case where a
distro might want to natively ship binary Rumprun images, as opposed to 
just the toolchain and components.

If I were you folks, I'd start by getting qemu out of the door, and 
worry about generalized solutions when someone wants to ship the second 
unikernel (or third or any small N>1).  If you can't sell to distros 
something that solves a problem, it's unlikely you'll be able to sell a 
framework for solving the same one problem (though, granted, it might be 
easier to sell a framework to computer folk -- "nevermind the solution, 
here's abstraction!").

If you haven't tried rumprun-packages, I recommend to pick something 
from there and see how it works.  Doing so might give some more 
perspective on how easy or difficult it would be to package QEMU.

Whoops, ended up being a bit longer than what I hoped for, but with any 
luck at least some new information was communicated.

   - antti


* Re: Notes from Xen BoF at Debconf15
  2015-09-08 14:49 ` Doug Goldstein
@ 2015-09-08 15:24   ` Ian Campbell
  2015-09-17 10:06   ` George Dunlap
  1 sibling, 0 replies; 22+ messages in thread
From: Ian Campbell @ 2015-09-08 15:24 UTC (permalink / raw)
  To: Doug Goldstein, xen-devel

On Tue, 2015-09-08 at 09:49 -0500, Doug Goldstein wrote:
> > Midlevel library stability
> > ==========================
> > 
> > libxenlight is only API not ABI stable. This is a pain in particular
> > for libvirt, which needs a binNMU for each new Xen package.
> > 
> > We would like to eventually offer ABI stability for this library, but
> > we are not there yet.
> 
> What about doing symbol versioning like libvirt does to start offering
> ABI stability? You could then at some point in the future do an ABI break
> to remove the bits that were deprecated when you do the "1.0"
> release.

I think symbol versioning will indeed be one of the tools we will need to
deploy when we are ready to declare ABI stability for the library.

However symbol versioning only really helps if the ABI is already "almost
stable", if you try to use it to make a library which is undergoing a great
deal of ABI churn appear stable then you have to hand craft an enormous
amount of compatibility code to massage data structures between the
different versions.

I think we are getting towards being "almost stable" enough, maybe in a
release or two.

> > Stubdomains
> > ===========
> > 
> > Hard to do in a packaging environment (is really its own partial
> > architecture). Rump kernels are no different in this regard.
> > 
> > No clever ideas were put forward.
> 
> Honestly, what about moving these more out of tree? Now with mini-os
> being out of tree and the stubdoms needing mini-os, it's an absolute mess
> to build from a distro standpoint since mini-os is git fetched. Making
> this work upstream using raisin would be a great improvement here.

IMHO where the code lives is not the hardest thing with packaging
stubdomains from a distro PoV, see the "On distro packaging of stub
domains" subthread.

That's not to say where the code lives couldn't be improved, and raisin is
certainly a path to improving that, partly by exposing devs to the distros
pain.

Ian.


* Re: On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-08 15:03   ` Antti Kantee
@ 2015-09-08 16:15     ` Ian Campbell
  2015-09-08 16:26       ` Samuel Thibault
  2015-09-08 18:38       ` Antti Kantee
  0 siblings, 2 replies; 22+ messages in thread
From: Ian Campbell @ 2015-09-08 16:15 UTC (permalink / raw)
  To: Antti Kantee, xen-devel, Debian Xen Team
  Cc: rumpkernel-users, Wei Liu, George.Dunlap

On Tue, 2015-09-08 at 15:03 +0000, Antti Kantee wrote:

> For unikernels, the rump kernel project provides Rumprun, which can 
> provide you with a near-full POSIX'y interface.

I'm not 100% clear: Does rumprun _build_ or _run_ the application? It
sounds like it builds but the name suggests otherwise.

>   Rumprun also provides 
> toolchain wrappers so that you can compile existing programs as Rumprun 
> unikernels.

Do these wrappers make a rump kernel build target look just like any other
cross build target? (I've just got to the end and found my answer, which was
yes. I've left this next section in since I think it's a nice summary of
why it matters that the answer is yes.)

e.g. I have aarch64-linux-gnu-{gcc,as,ld,ar,etc} which I can use to build
aarch64 binaries on my x86_64 host, including picking up aarch64 libraries
and headers from the correct arch-specific path.

Do these rumprun-provided wrappers provide
x86_64-rumpkernel-{gcc,as,ld,ar,etc}?

Appearing as a regular cross-compilation target is, I think, going to be
important to being able to create rumpkernel based versions of distro
packages.

I think that package maintainers ideally won't want to have to include a
bunch of rumpkernel specific code in their package, they just want to
leverage the existing cross-compilability of their package.

This may not be super critical for a standalone package with a tiny build
dependency chain. Consider building a QEMU rumpkernel to be shipped in a
Debian package. The existing QEMU source package in Debian has ~30 library
dependencies, many of which I expect have their own dependencies etc. For
another metric consider for the regular x86 binary:

$ ldd /usr/bin/qemu-system-x86_64  | wc -l
87
$

So without a normal looking cross env we are talking about asking maybe
60-90 different library maintainers to do something special for rumpkernels.

(Surely some of those 60-90 are things you would omit from a rump build for
various reasons, but still...)

> If the above didn't explain the grand scheme of things clearly, have a 
> look at http://wiki.rumpkernel.org/Repo and especially the picture.  If 
> things are still not clear after that, please point out matters of 
> confusion and I will try to improve the explanations.

I think that wiki page is clear, but I think it's orthogonal to the issue
with distro packaging of rump kernels.

>   However, since a) nobody (else) ships applications as relocatable 
> static objects b) Rumprun does not support shared libraries, I don't 
> know how helpful the fact of ABI compatibility is.  IMO, adding shared 
> library support would be a backwards way to go: increasing runtime 
> processing and memory requirements to solve a build problem sounds plain 
> weird.  So, I don't think you can leverage anything existing.

This is an interesting point, since not building a shared library
already requires packaging changes which are going to be at least a
little bit rumpkernel specific.

Is it at all possible (even theoretically) to take a shared library (which
is relocatable as required) and to do a compile time static linking pass on
it? i.e. use libfoo.so but still do static linking?

> We do have most of the Rumprun cross-toolchain figured out at this 
> point.  First, we don't ship any backend toolchain(s), but rather bolt 
> wrappers and specs on top of any toolchain (*) you provide.  That way we 
> don't have to figure out where to get a toolchain which produces binary 
> for every target that everyone might want.  Also, it makes bootstrapping 
> Rumprun convenient, since you just say "hey give me the components and 
> application wrappers for CC=foocc" and off you go.

> The produced wrappers look exactly like a normal cross-toolchain.  The 
> tuple is the same as what NetBSD uses, except with rumprun added in the 
> middle, so e.g. x86_64-rumprun-netbsd or arm-rumprun-netbsdelf-eabihf. 

Great, I think this solves a large problem (whether it's a wrapper or a
"proper cross compiler" is, I think, of limited importance as long as it
behaves enough like the latter).

> I don't really have good solutions for the packaging problem.  Building 
> a "full distro" around rump kernels certainly sounds interesting,

FWIW I don't think we need a full distro, just sufficient build
dependencies for the actual useful things (maybe that converges on to a
full distro though).

>   Hard to say for now, I hadn't actually considered the case where a 
> distro might want to natively ship binary Rumprun images, as opposed to 
> just the toolchain and components.

One concrete example of what I want is a Debian package in the Debian
archive which the xen-tools.deb can depend on which provides a qemu
rumpkernel binary such that users can use stubdomain functionality for
their HVM guests.

Debian are (most likely) not going to accept a second copy of the QEMU
source in the archive and likewise they wouldn't want a big source package
which was "qemu + all its build dependencies" or anything like that,
especially when "all its build dependencies" is duplicating the source of
dozens of libraries already in Debian.

> If I were you folks, I'd start by getting qemu out of the door

That's certainly the highest priority, at the moment I don't think we
actually have a QEMU Xen dm based on rumpkernels which anyone could package
anyway, irrespective of how hard that might be.

> , and 
> worry about generalized solutions when someone wants to ship the second 
> unikernel (or third or any small N>1).

Unfortunately I think the N==1 case is tricky already from a distro
acceptance PoV. (At least for Binary distros, it's probably trivial in a
Source based distro like Gentoo)

Ian.


* Re: On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-08 16:15     ` Ian Campbell
@ 2015-09-08 16:26       ` Samuel Thibault
  2015-09-08 16:37         ` Ian Campbell
  2015-09-08 18:38       ` Antti Kantee
  1 sibling, 1 reply; 22+ messages in thread
From: Samuel Thibault @ 2015-09-08 16:26 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Wei Liu, Debian Xen Team, George.Dunlap, Antti Kantee,
	rumpkernel-users, xen-devel

Ian Campbell, on Tue 08 Sep 2015 17:15:40 +0100, wrote:
> Is it at all possible (even theoretically) to take a shared library (which
> is relocatable as required) and to do a compile time static linking pass on
> it? i.e. use libfoo.so but still do static linking?

$ gcc test.c -o libtest.so -shared -Wl,--relocatable
/usr/bin/ld.bfd.real: -r and -shared may not be used together

Samuel


* Re: On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-08 16:26       ` Samuel Thibault
@ 2015-09-08 16:37         ` Ian Campbell
  2015-09-08 17:09           ` Samuel Thibault
  0 siblings, 1 reply; 22+ messages in thread
From: Ian Campbell @ 2015-09-08 16:37 UTC (permalink / raw)
  To: Samuel Thibault
  Cc: Wei Liu, Debian Xen Team, George.Dunlap, Antti Kantee,
	rumpkernel-users, xen-devel

On Tue, 2015-09-08 at 18:26 +0200, Samuel Thibault wrote:
> Ian Campbell, on Tue 08 Sep 2015 17:15:40 +0100, wrote:
> > Is it at all possible (even theoretically) to take a shared library
> > (which
> > is relocatable as required) and to do a compile time static linking
> > pass on
> > it? i.e. use libfoo.so but still do static linking?
> 
> $ gcc test.c -o libtest.so -shared -Wl,--relocatable
> /usr/bin/ld.bfd.real: -r and -shared may not be used together

Sorry, my suggestion was a bit garbled, to say the least... I meant more
"link an application against it statically even though it is a shared
library":

$ gcc main.c -o myapp.elf -static libfoo.so 

Where myapp.elf would be statically linked and include the libfoo code
directly.

Ian.


* Re: On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-08 16:37         ` Ian Campbell
@ 2015-09-08 17:09           ` Samuel Thibault
  0 siblings, 0 replies; 22+ messages in thread
From: Samuel Thibault @ 2015-09-08 17:09 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Wei Liu, Debian Xen Team, George.Dunlap, Antti Kantee,
	rumpkernel-users, xen-devel

Ian Campbell, on Tue 08 Sep 2015 17:37:21 +0100, wrote:
> On Tue, 2015-09-08 at 18:26 +0200, Samuel Thibault wrote:
> > Ian Campbell, on Tue 08 Sep 2015 17:15:40 +0100, wrote:
> > > Is it at all possible (even theoretically) to take a shared library
> > > (which
> > > is relocatable as required) and to do a compile time static linking
> > > pass on
> > > it? i.e. use libfoo.so but still do static linking?
> > 
> > $ gcc test.c -o libtest.so -shared -Wl,--relocatable
> > /usr/bin/ld.bfd.real: -r and -shared may not be used together
> 
> Sorry, my suggestion was a bit garbled, to say the least... I meant more
> "link an application against it statically even though it is a shared
> library":
> 
> $ gcc main.c -o myapp.elf -static libfoo.so 

Yes, that's what I understood, but the answer is the same: AIUI, once
the library is linked, you cannot link against it again, because the
code has already been specialized.

Samuel


* Re: On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-08 16:15     ` Ian Campbell
  2015-09-08 16:26       ` Samuel Thibault
@ 2015-09-08 18:38       ` Antti Kantee
  2015-09-09 11:15         ` Ian Campbell
  1 sibling, 1 reply; 22+ messages in thread
From: Antti Kantee @ 2015-09-08 18:38 UTC (permalink / raw)
  To: Ian Campbell, xen-devel, Debian Xen Team
  Cc: rumpkernel-users, Wei Liu, George.Dunlap

On 08/09/15 16:15, Ian Campbell wrote:
> On Tue, 2015-09-08 at 15:03 +0000, Antti Kantee wrote:
>
>> For unikernels, the rump kernel project provides Rumprun, which can
>> provide you with a near-full POSIX'y interface.
>
> I'm not 100% clear: Does rumprun _build_ or _run_ the application? It
> sounds like it builds but the name suggests otherwise.

For all practical purposes, Rumprun is an OS, except that you always 
cross-compile for it.  So, I'd say "yes", but it depends on how you want 
to interpret the situation.  We could spend days writing emails back and 
forth, but there's really no substitute for an hour of hands-on 
experimentation.

(nb. the launch tool for launching Rumprun instances is currently called 
rumprun.  It's on my todo list to propose changing the name of the tool 
to e.g. rumprunner or runrump or something which is distinct from the OS 
name, since similarity causes some confusion)

> Do these wrappers make a rump kernel build target look just like any other
> cross build target? (I've just got to the end and found my answer, which was
> yes. I've left this next section in since I think it's a nice summary of
> why it matters that the answer is yes.)
>
> e.g. I have aarch64-linux-gnu-{gcc,as,ld,ar,etc} which I can use to build
> aarch64 binaries on my x86_64 host, including picking up aarch64 libraries
> and headers from the correct arch-specific path.
>
> Do these rumprun-provided wrappers provide
> x86_64-rumpkernel-{gcc,as,ld,ar,etc}?

No; as I said (and as you discovered later), it's
x86_64-rumprun-netbsd-{gcc,as,ld,ar,etc}. aarch64 would be
aarch64-rumprun-netbsd-{...}.

> Appearing as a regular cross-compilation target is, I think, going to be
> important to being able to create rumpkernel based versions of distro
> packages.
>
> I think that package maintainers ideally won't want to have to include a
> bunch of rumpkernel specific code in their package, they just want to
> leverage the existing cross-compilability of their package.

Yes, that is critical.  We bled to achieve that goal.  It looks obvious 
now, but I can assure you it wasn't obvious a year ago.

> $ ldd /usr/bin/qemu-system-x86_64  | wc -l
> 87
> $

Heh, that's quite a lot.

>> If the above didn't explain the grand scheme of things clearly, have a
>> look at http://wiki.rumpkernel.org/Repo and especially the picture.  If
>> things are still not clear after that, please point out matters of
>> confusion and I will try to improve the explanations.
>
> I think that wiki page is clear, but I think it's orthogonal to the issue
> with distro packaging of rump kernels.

Sure, but I wanted to get the concepts right.  And they're still not 
right.  We're talking about packaging for *Rumprun*, not rump kernels in 
general.

>>    However, since a) nobody (else) ships applications as relocatable
>> static objects b) Rumprun does not support shared libraries, I don't
>> know how helpful the fact of ABI compatibility is.  IMO, adding shared
>> library support would be a backwards way to go: increasing runtime
>> processing and memory requirements to solve a build problem sounds plain
>> weird.  So, I don't think you can leverage anything existing.
>
> This is an interesting point, since not building a shared library is
> already therefore requiring packaging changes which are going to be at
> least a little bit rumpkernel specific.
>
> Is it at all possible (even theoretically) to take a shared library (which
> is relocatable as required) and to do a compile time static linking pass on
> it? i.e. use libfoo.so but still do static linking?

But shared libraries aren't "relocatable", that's the whole point of 
shared libraries! ;) ;)

I guess you could theoretically link shared libs with a different ld, 
and I don't think it would be very different from prelinking shared 
libs, but as Samuel demonstrated, it won't work at least with an 
out-of-the-box ld.

I think it's easier to blame Solaris for the world going bonkers with 
shared libs, bite the bullet, and start adding static linking back where 
it's been ripped out from.  Shared libs make zero sense for unikernels 
since you don't have anyone to share them with, so you're just paying 
extra for PIC for absolutely no return.  (dynamically loadable code is a 
separate issue, if you even want to go there ... I wouldn't)

>> I don't really have good solutions for the packaging problem.  Building
>> a "full distro" around rump kernels certainly sounds interesting,
>
> FWIW I don't think we need a full distro, just sufficient build
> dependencies for the actual useful things (maybe that converges on to a
> full distro though).

By "full distro" I meant "enough to get a majority of the useful 
services going".  Seems like once qemu works, we're 99% there ;)

> Debian are (most likely) not going to accept a second copy of the QEMU
> source in the archive and likewise they wouldn't want a big source package
> which was "qemu + all its build dependencies" or anything like that,
> especially when "all its build dependencies" is duplicating the source of
> dozens of libraries already in Debian.

Why do you need a second copy of the sources?  Or are sources always 
strictly associated with one package without any chances of pulling from 
a master package?  You are going to need two copies of the binaries 
anyway, so it doesn't seem like a particularly big deal to me, not that 
I'm questioning your statement.

>> If I were you folks, I'd start by getting qemu out of the door
>
> That's certainly the highest priority, at the moment I don't think we
> actually have a QEMU Xen dm based on rumpkernels which anyone could package
> anyway, irrespective of how hard that might be.
>
>> , and
>> worry about generalized solutions when someone wants to ship the second
>> unikernel (or third or any small N>1).
>
> Unfortunately I think the N==1 case is tricky already from a distro
> acceptance PoV. (At least for Binary distros, it's probably trivial in a
> Source based distro like Gentoo)

Ok.  I'll help where I can, but I don't think I can be the primus motor 
for solving the distro acceptance problem for Xen stubdomains.

If you can say to the packaging system "build with this cross toolchain 
but disable shared" you're already quite far along, and it seems like 
something that shouldn't be too difficult to get reasonable packaging 
systems to support.  But, details, details.  One major detail is that 
your target is quite wide, and not everyone along that target can be 
assumed to be reasonable :/
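Concretely, "build with this cross toolchain but disable shared" amounts to something like the following configure invocation (the triplet is the one discussed in this thread; the package being built is hypothetical):

```shell
# Sketch only: an autoconf-style library build targeting Rumprun with
# shared libraries disabled.
./configure --host=x86_64-rumprun-netbsd --disable-shared
make && make install DESTDIR=/tmp/rumprun-staging
```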

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-08 18:38       ` Antti Kantee
@ 2015-09-09 11:15         ` Ian Campbell
  2015-09-09 13:42           ` Antti Kantee
  0 siblings, 1 reply; 22+ messages in thread
From: Ian Campbell @ 2015-09-09 11:15 UTC (permalink / raw)
  To: Antti Kantee, xen-devel, Debian Xen Team
  Cc: rumpkernel-users, Wei Liu, George.Dunlap

On Tue, 2015-09-08 at 18:38 +0000, Antti Kantee wrote:
> On 08/09/15 16:15, Ian Campbell wrote:
> > On Tue, 2015-09-08 at 15:03 +0000, Antti Kantee wrote:
> > 
> > > For unikernels, the rump kernel project provides Rumprun, which can
> > > provide you with a near-full POSIX'y interface.
> > 
> > I'm not 100% clear: Does rumprun _build_ or _run_ the application? It
> > sounds like it builds but the name suggests otherwise.
> 
> For all practical purposes, Rumprun is an OS, except that you always 
> cross-compile for it.  So, I'd say "yes", but it depends on how you want 
> to interpret the situation.  We could spend days writing emails back and 
> forth, but there's really no substitute for an hour of hands-on 
> experimentation.
> 
> (nb. the launch tool for launching Rumprun instances is currently called 
> rumprun.  It's on my todo list to propose changing the name of the tool 
> to e.g. rumprunner or runrump or something which is distinct from the OS 
> name, since similarity causes some confusion)

Thanks, I think I get it...


> > Do these wrappers make a rump kernel build target look just like any
> > other cross build target? (I've just got to the end and found my
> > answer, which was yes. I've left this next section in since I think
> > it's a nice summary of why it matters that the answer is yes)
> > 
> > e.g. I have aarch64-linux-gnu-{gcc,as,ld,ar,etc} which I can use to
> > build aarch64 binaries on my x86_64 host, including picking up
> > aarch64 libraries and headers from the correct arch-specific path.
> > 
> > Do these rumprun-provided wrappers provide
> > x86_64-rumpkernel-{gcc,as,ld,ar,etc} ?
> 
> No, like I said and which you discovered later, 
> x86_64-rumprun-netbsd-{gcc,as,ld,ar,etc}.  aarch64 would be 
> aarch64-rumprun-netbsd-{...}.

Sorry, I used an explicit example when really I just meant "some triplet"
without saying "such as" or "e.g.".

So the answer to the question I wanted to ask (rather than the one I did)
is "yes", which is good!

> > > If the above didn't explain the grand scheme of things clearly, have a
> > > look at http://wiki.rumpkernel.org/Repo and especially the picture.  If
> > > things are still not clear after that, please point out matters of
> > > confusion and I will try to improve the explanations.
> > 
> > I think that wiki page is clear, but I think it's orthogonal to the issue
> > with distro packaging of rump kernels.
> 
> Sure, but I wanted to get the concepts right.  And they're still not 
> right.  We're talking about packaging for *Rumprun*, not rump kernels in 
> general.

Right.

> > >    However, since a) nobody (else) ships applications as relocatable
> > > static objects b) Rumprun does not support shared libraries, I don't
> > > know how helpful the fact of ABI compatibility is.  IMO, adding
> > > shared library support would be a backwards way to go: increasing
> > > runtime processing and memory requirements to solve a build problem
> > > sounds plain weird.  So, I don't think you can leverage anything
> > > existing.
> > 
> > This is an interesting point, since not building a shared library is
> > already therefore requiring packaging changes which are going to be at
> > least a little bit rumpkernel specific.
> > 
> > Is it at all possible (even theoretically) to take a shared library
> > (which is relocatable as required) and to do a compile time static
> > linking pass on it? i.e. use libfoo.so but still do static linking?
> 
> But shared libraries aren't "relocatable", that's the whole point of 
> shared libraries! ;) ;)

Hrm, perhaps I'm confusing PIC with relocatable but AIUI a shared library
can be loaded at any address (subject to some constraints) in a process and
may be loaded at different addresses in different processes, which is what
you actually need to do...

> I guess you could theoretically link shared libs with a different ld, 

...this. (assuming you meant "link an app against shared libs")

> and I don't think it would be very different from prelinking shared 
> libs,

Indeed.

>  but as Samuel demonstrated, it won't work at least with an 
> out-of-the-box ld.

Right, I thought it probably wasn't, which is why I said "even
theoretically".

> I think it's easier to blame Solaris for the world going bonkers with 
> shared libs, bite the bullet, and start adding static linking back where 
> it's been ripped out from.  Shared libs make zero sense for unikernels 
> since you don't have anyone to share them with, so you're just paying 
> extra for PIC for absolutely no return.  (dynamically loadable code is a 
> separate issue, if you even want to go there ... I wouldn't)

The issue, and the reason I mentioned it, is that distros (at least Linux
distros) have, for better or worse, gone in heavily for the use of shared
libraries in their application packaging norms.

Actually distros might be (e.g. Debian is) quite good at always providing a
.a as well as the .so when packaging libraries; the issue is in the
application packaging, which would need modifying to provide a mode where
it is linked against those .a files instead of the .so files.

Since it is only for the final application, maybe that's a more tractable
problem in terms of having to modify distro packaging, since you don't need
to follow the build-dep chain at all.
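For illustration, linking the final application against the static archives
rather than the shared objects requires no change to the library packages
themselves (the library name and paths here are hypothetical):

```shell
# Two common ways to force the static variant at final link time.
# Option 1: name the archive explicitly instead of using -l.
gcc -o myapp main.o /usr/lib/libfoo.a
# Option 2: toggle the linker's search preference around -lfoo.
gcc -o myapp main.o -Wl,-Bstatic -lfoo -Wl,-Bdynamic
```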

> > Debian are (most likely) not going to accept a second copy of the QEMU
> > source in the archive and likewise they wouldn't want a big source
> > package which was "qemu + all its build dependencies" or anything like
> > that, especially when "all its build dependencies" is duplicating the
> > source of dozens of libraries already in Debian.
> 
> Why do you need a second copy of the sources?  Or are sources always 
> strictly associated with one package without any chances of pulling from 
> a master package?

A given source package (e.g. "qemu.dsc", which incorporates some upstream
qemu.orig.tar.gz and the Debian packaging changes) might build many
different binary packages (.deb files), perhaps including building multiple
times in different configurations, but the upstream source should only
exist once in the archive as part of that source package.
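As a sketch, that one-source-to-many-binaries relationship looks like this
in a debian/control file (fields abbreviated and contents illustrative, not
Debian's actual qemu packaging):

```
Source: qemu
Build-Depends: debhelper (>= 9), libglib2.0-dev, zlib1g-dev

Package: qemu-system-x86
Architecture: amd64 i386
Description: QEMU full system emulation binaries (x86)

Package: qemu-utils
Architecture: any
Description: QEMU utilities
```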

IOW it would not be allowed to create qemu-rumpkernel.dsc which
incorporates another copy of the upstream qemu source (even a different
version). (There are some situations, e.g. multiple incompatible versions
of libraries, where this might be tolerated, but they do not apply to the
situation here).

When a package is being built it can depend on the binary outputs of other
builds, but it cannot depend on the _source_ of another package.

There are a small number of packages where foo.dsc builds a foo-source.deb
file which contains a tarball of the source, but those are quite
specialised uses, e.g. the Linux package does so to facilitate people
building their own kernel and the binutils package does so to facilitate
people building cross toolchains for architectures not in Debian. It would
not be possible, in general, to apply this to e.g. the library dependency
chain of QEMU.

Also note that the artifacts built from that source are generally local
builds of something, not another package contained in the Debian archive.
I'm not sure what the formal policy status of uploading such a thing is,
but it would certainly be considered exceptional.

If another package ("bar") was to build-depend on foo-source.deb and use
the source in there and then foo-source was updated then "bar" would need
rebuilding and re-uploading to rebuild against it.

>   You are going to need two copies of the binaries 
> anyway, so it doesn't seem like a particularly big deal to me, not that 
> I'm questioning your statement.

Hopefully the above has clarified a bit why it is a big deal to (binary)
distros?

> 
> > > If I were you folks, I'd start by getting qemu out of the door
> > 
> > That's certainly the highest priority, at the moment I don't think we
> > actually have a QEMU Xen dm based on rumpkernels which anyone could
> > package
> > anyway, irrespective of how hard that might be.
> > 
> > > , and
> > > worry about generalized solutions when someone wants to ship the
> > > second
> > > unikernel (or third or any small N>1).
> > 
> > Unfortunately I think the N==1 case is tricky already from a distro
> > acceptance PoV. (At least for Binary distros, it's probably trivial
> > in a Source based distro like Gentoo)
> 
> Ok.  I'll help where I can, but I don't think I can be the primus motor 
> for solving the distro acceptance problem for Xen stubdomains.

Sure, I wasn't expecting you to be (sorry if that wasn't clear!)

> If you can say to the packaging system "build with this cross toolchain 
> but disable shared" you're already quite far along,

Sadly I think "but disable shared" is one of those things which doesn't
exist today for most binary distros and which would therefore be a
potential problem.

Considering libraries first, as I mention above, at least in Debian the
norm is to build both .a and .so versions of the library. Is the
x86_64-rumprun-netbsd toolchain capable of building a (perhaps pointless)
.so file? If so then we can just ignore the "but disable shared" and do
the normal package build to get a .a (which we want) and a .so (which is
useless).

Then considering applications: is it necessary to explicitly disable
shared when using the x86_64-rumprun-netbsd toolchain, or will
    x86_64-rumprun-netbsd-gcc -o myapp main.o -lfoo
do the right thing and use libfoo.a instead of libfoo.so without needing
e.g. -static?

If both of those behave in the way I hope then I think we are actually
pretty darn close to being able to try something.
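Both hopes could be checked with a quick experiment (this assumes a
Rumprun toolchain is installed; libfoo and the source files are
hypothetical):

```shell
# Sketch: probe the toolchain's shared-library and link behaviour.
# 1. Can it produce a .so at all?
x86_64-rumprun-netbsd-gcc -shared -fPIC -o libfoo.so foo.c
# 2. With both libfoo.a and libfoo.so in the search path, which one
#    does a plain link pick up?
x86_64-rumprun-netbsd-gcc -o myapp main.o -L. -lfoo
```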

>  and it seems like 
> something that shouldn't be too difficult to get reasonable packaging 
> systems to support.  But, details, details.  One major detail is that 
> your target is quite wide, and not everyone along that target can be 
> assumed to be reasonable :/


* Re: On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15)
  2015-09-09 11:15         ` Ian Campbell
@ 2015-09-09 13:42           ` Antti Kantee
  0 siblings, 0 replies; 22+ messages in thread
From: Antti Kantee @ 2015-09-09 13:42 UTC (permalink / raw)
  To: Ian Campbell, xen-devel, Debian Xen Team
  Cc: rumpkernel-users, Wei Liu, George.Dunlap

Ian,

Thank you for the explanations.

Hmm.  (not replying to anything specific)

My guess is that shared libs won't be the biggest problem.  I'd find it 
extremely surprising if you could take a Linux (or any other non-NetBSD) 
packaging system and discover that the dozens of dependencies of QEMU 
contain no "'isms" which do not apply when building for what is 
essentially a NetBSD target.

I don't know how Wei Liu built his QEMU, but I assume it was by farming 
a lot of --disable-stuff.  That's what I'd do and somehow find a way to 
ship the result.  The requirements for stubdom QEMU and /usr/bin/qemu 
are IMO too dissimilar to be stuffed into the same box.  By my judgement 
(which may be wrong), almost none of the dynamic dependencies of QEMU on 
a regular Linux system apply for the Xen stub domain use.  If the goal 
is to get a reduced footprint qemu, bundling unnecessary clutter is in 
direct conflict with the goal.  Besides, trying to get e.g. libsystemd 
to build or work with [a NetBSD-API'd] Rumprun will most likely earn you 
a ticket to the loony bin.
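For illustration, "farming a lot of --disable-stuff" would look roughly
like the following; the flag names follow QEMU's configure conventions,
but the exact set needed for a stubdom build is a guess:

```shell
# Sketch of a heavily cut-down QEMU configuration for a stub domain.
./configure --target-list=i386-softmmu --enable-xen \
    --disable-sdl --disable-gtk --disable-vnc \
    --disable-curses --disable-tools
make
```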

So while I think I understand your predicament (you need to sell the Xen 
improvement to distros which means playing by their rules) and the 
temptation of reusing existing packages for the job, I seriously doubt 
the approach will lead to a sensible result.  That, of course, shouldn't 
stop you from trying.  If the result of your experimentation matches 
your hypothesis and shared libs is the main problem, I'll figure out a 
way to make it work on the Rumprun side of things.

If you truly decide you need to use existing Linux infra, I'd start down 
that track by bolting a Linux userspace environment on top of the 
"kernel only" Rumprun stack (which could/should more or less work thanks 
to syscall emulation).  Of course, you'd need to do the same for FreeBSD 
and every other system you want to support, so it's not a free ticket 
either, but by my guess at least a cheaper one.

Anyway, I only have guesses.

   - antti


* Re: Notes from Xen BoF at Debconf15
  2015-09-08  9:24 Notes from Xen BoF at Debconf15 Ian Campbell
                   ` (2 preceding siblings ...)
  2015-09-08 14:49 ` Doug Goldstein
@ 2015-09-16 13:20 ` Ian Campbell
  2015-10-05 15:37 ` Ian Campbell
  4 siblings, 0 replies; 22+ messages in thread
From: Ian Campbell @ 2015-09-16 13:20 UTC (permalink / raw)
  To: xen-devel, Debian Xen Team

On Tue, 2015-09-08 at 10:24 +0100, Ian Campbell wrote:
> Embedding in xen.git
> ====================
> 
> We are much better about providing ways to use system-supplied
> components these days (since 4.4) and Debian uses them.
> 
> Waldi noted that iPXE did not have such an option. Since that iPXE is
> only used by qemu-trad (for qemu-upstream the iPXE comes from the PCI
> ROM BAR) and Debian disabled qemu-trad it should be pretty quick to
> patch the build system to disable iPXE.

I went to do this, and found that ipxe is already only built if ROMBIOS is
enabled and ROMBIOS is disabled by default if qemu-trad is not enabled
(which it is not in the Debian packaging).

So I think this is already addressed! I don't think it is worth at this
point allowing to control ipxe separately from ROMBIOS (or at least such a
task is nowhere near the top of my todo list).

Ian.


* Re: Notes from Xen BoF at Debconf15
  2015-09-08 14:49 ` Doug Goldstein
  2015-09-08 15:24   ` Ian Campbell
@ 2015-09-17 10:06   ` George Dunlap
  1 sibling, 0 replies; 22+ messages in thread
From: George Dunlap @ 2015-09-17 10:06 UTC (permalink / raw)
  To: Doug Goldstein; +Cc: xen-devel@lists.xen.org

On Tue, Sep 8, 2015 at 3:49 PM, Doug Goldstein <cardoe@cardoe.com> wrote:
>> Stubdomains
>> ===========
>>
>> Hard to do in a packaging environment (is really its own partial
>> architecture). Rump kernels are no different in this regard.
>>
>> No clever ideas were put forward.
>
> Honestly what about moving these more out of tree? Now with mini-os
> being out of tree and the stubdoms needing mini-os its an absolute mess
> to build from a distro standpoint since mini-os is git fetched. To make
> it work upstream using raisin would be a great improvement here.

The real question with stubdomains is about how to build them at all
so that they're available from within a Debian/Linux distribution,
given that what you want is a binary that consists of code from dozens
of Debian packages (e.g., qemu + all its dependencies) re-compiled for
a different environment (minios or rump kernels).  See the "fork" of
the thread we had on this subject.

(And of course the same if you s/Debian/$SOME_OTHER_DISTRO/;)

 -George


* Re: Notes from Xen BoF at Debconf15
  2015-09-08  9:24 Notes from Xen BoF at Debconf15 Ian Campbell
                   ` (3 preceding siblings ...)
  2015-09-16 13:20 ` Ian Campbell
@ 2015-10-05 15:37 ` Ian Campbell
  4 siblings, 0 replies; 22+ messages in thread
From: Ian Campbell @ 2015-10-05 15:37 UTC (permalink / raw)
  To: xen-devel, Debian Xen Team

On Tue, 2015-09-08 at 10:24 +0100, Ian Campbell wrote:
> grub-xen
> ========
> 
> Needs much better docs.
> 
> ACTION: I agreed to move the text of my blog post somewhere more
> obvious.

I did this on the last doc day: http://wiki.xen.org/wiki/PvGrub2

Ian.


end of thread, other threads:[~2015-10-05 15:37 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-08  9:24 Notes from Xen BoF at Debconf15 Ian Campbell
2015-09-08  9:41 ` On distro packaging of stub domains (Re: Notes from Xen BoF at Debconf15) Ian Campbell
2015-09-08 15:03   ` Antti Kantee
2015-09-08 16:15     ` Ian Campbell
2015-09-08 16:26       ` Samuel Thibault
2015-09-08 16:37         ` Ian Campbell
2015-09-08 17:09           ` Samuel Thibault
2015-09-08 18:38       ` Antti Kantee
2015-09-09 11:15         ` Ian Campbell
2015-09-09 13:42           ` Antti Kantee
2015-09-08  9:47 ` Notes from Xen BoF at Debconf15 Jan Beulich
2015-09-08 10:15   ` Lars Kurth
2015-09-08 10:15   ` Ian Campbell
2015-09-08 10:39     ` Jan Beulich
2015-09-08 10:49       ` Ian Jackson
2015-09-08 10:55         ` Jan Beulich
2015-09-08 14:52           ` Doug Goldstein
2015-09-08 14:49 ` Doug Goldstein
2015-09-08 15:24   ` Ian Campbell
2015-09-17 10:06   ` George Dunlap
2015-09-16 13:20 ` Ian Campbell
2015-10-05 15:37 ` Ian Campbell
