From: Wei Liu <liuw@liuw.name>
To: Ian Campbell <Ian.Campbell@eu.citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [PATCH] libxl: enabling upstream qemu as pure pv backend.
Date: Thu, 09 Jun 2011 17:49:16 +0800
Message-ID: <1307612957.31235.53.camel@limbo>
In-Reply-To: <1307609530.775.776.camel@zakaz.uk.xensource.com>

On Thu, 2011-06-09 at 09:52 +0100, Ian Campbell wrote:
> On Thu, 2011-06-09 at 08:07 +0100, Wei Liu wrote:
> > The dm creation logic is as follows:
> > 
> > if hvm
> >   libxl__create_device_model
> >     if stubdom
> >       libxl__create_stubdom -> libxl__create_xenpv_qemu
> >                                 |
> >                                 --> libxl__create_device_model
> >     else
> >       spawn and exec qemu
> > else /* pv */
> >   if need_qemu
> >     libxl__create_xenpv_qemu
> >      |
> >      --> libxl__create_device_model
> > 
> > *
> > I think adding device_model_args_{pv,fv} is a good idea.
> 
> Agreed, although you will also need _old and _new variants, making four
> functions. It's not clear how much they will have in common, but please
> consider making pv vs fv a parameter to the _old/_new functions, i.e. try
> to keep it down to just two functions (of course, if they have nothing in
> common, four functions will be fine).
> 

Well, I'm referring to configuration variables in the config file, not
functions in libxl...

These config variables become parameters of
libxl__build_device_model_args_{old,new} (see the patch below).

However, the pv/fv configuration variables relate to the guest machine
type rather than to the qemu type. If necessary, two more variables,
device_model_args_{old,new}, could be added and tied to the qemu type.

However, so many configuration variables -- five in total, pv/fv/old/new
plus the original one -- may confuse users. Too bad.
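
To make the idea concrete, a hypothetical sketch only -- extra_pv and
extra_fv are assumed field names for the proposed variables (they do not
exist yet); only dm_info's existing extra list (device_model_args) is real:

    /* Inside libxl__build_device_model_args_{old,new}: pick the
     * trap-door args matching the guest type, falling back to the
     * original device_model_args list if no per-type list was given.
     * extra_pv/extra_fv are assumed names, not existing fields. */
    static char **dm_extra_args(const libxl_device_model_info *info,
                                int is_hvm)
    {
        char **args = is_hvm ? info->extra_fv : info->extra_pv;
        return args ? args : info->extra;
    }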

> > *
> > Since libxl__create_stubdom receives a dm_info structure, I think it is
> > ok for libxl__create_xenpv_qemu to receive one, too. This dm_info is the
> > key structure indicating the xenpv qemu's type (traditional or upstream).
> > But once we enter libxl__create_xenpv_qemu, we have lost knowledge of
> > whether we are creating a stubdom/qemu-xen or an upstream qemu. So the
> > caller should be responsible for filling in a new dm_info for
> > libxl__create_xenpv_qemu.
> 
> I'm wondering if all these functions shouldn't take a
> libxl_domain_config (which contains libxl_dm_info); after all, there is
> no fundamental reason that creating the DM shouldn't be at liberty to
> base its behaviour on any aspect of the domain's config.
> 

I don't think so. DM creation has nothing to do with domain_config;
that is what I see in xl_cmdimpl.c:parse_config_data.
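
For reference, a rough, abridged sketch of the structure Ian mentions --
the field list is simplified for illustration, not the authoritative
definition:

    typedef struct {
        libxl_domain_create_info c_info;
        libxl_domain_build_info b_info;
        /* ... disks, nics, vkbs, etc. elided ... */
        libxl_device_vfb *vfbs;
        int num_vfbs;
        libxl_device_model_info dm_info;  /* what Ian suggests passing down */
    } libxl_domain_config;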

> > 
> > *
> > vfb is derived from d_config (libxl_domain_config), which is a domain
> > property.
> 
> Aha, an example of what I meant above, convenient ;-)
> 
> > Currently, the existing code either uses
> > libxl__vfb_and_vkb_from_device_model_info (if we are using a stubdom) or
> > direct parsing (if we are using xenpv_qemu). Though the code is
> > redundant, the parsing is essentially the same. What's the point of
> > moving vnc and sdl out of vfb?
> 
> I'm not entirely sure. In a world with multiple VFBs (note: we don't
> currently support this) you could, I suppose, have one on SDL and one on
> VNC. I suppose you might even want a single VFB exposed over both; that
> doesn't seem unreasonable (maybe this works today?)
> 

Currently I have no idea how to do this.

> > *
> > Configure two DMs for one domain? I haven't thought about that. I doubt
> > it is really necessary if we are moving towards a unified DM --
> > upstream qemu -- wouldn't that be sufficient in the long run?
> 
> There will always be a need for at least two DMs, namely in the stubdom
> case (one DM in the stubdom, the other in dom0); however, they should
> both be the same version of the DM, i.e. both upstream or both traditional.
> 

I understand. The DM in a stubdom is statically packed into
ioemu-stubdom.gz. Upstream qemu is not supported there for now (AFAIK).
In that case, we are not configuring two DMs, just one.

> In the future it's also possible that we would want to have the option
> of multiple qemus, e.g. one per qdisk backend, for isolation and
> robustness.

> > *
> > To my understanding, a stubdom is minios + qemu-xen. If I (the user) am
> > using a stubdom and specify device model args, these args should go to
> > the xenpv qemu, not the xenfv one in the stubdom, right?
> 
> The device_model_args are basically a trap door to allow users to do
> things which libxl didn't anticipate (i.e. as a stopgap until libxl can
> be updated with that feature). As such, extra args could be needed for
> either DM. The distinction only really matters in the stubdom case.
> 

Can you tell me how to pass parameters to the qemu running inside a
stubdom? What I see in the code is that libxl passes args to the xenpv
qemu running in dom0 and leaves the qemu running inside the stubdom
untouched.

> > What I see in the code is that
> > we only need a few args (e.g. disks, vifs) to start a stubdom. The
> > internal setup to connect to the domU is done within the stubdom.
> > 
> > To summarize, here is a second prototype of my patch. Please review.
> > 
> > libxl__build_xenpv_qemu_args handles options common to both xenpv qemu
> > and upstream qemu, while libxl__build_device_model_args distinguishes
> > between old and new qemus and builds args for each.
> > 
> > libxl__create_xenpv_qemu no longer allocates a struct
> > libxl_device_model_info, because at this point it doesn't know
> > whether it is building a stubdom/qemu-xen (traditional type) or an
> > upstream qemu. Allocating and filling it becomes the caller's
> > responsibility.
> > 
> > This patch has been tested with pv guest creation, fv guest creation
> > and fv-stubdom guest creation.
> > 
> > -----------8<------------------
> [...]
> > @@ -506,8 +507,15 @@ static int do_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
> >          libxl__device_console_add(gc, domid, &console, &state);
> >          libxl_device_console_destroy(&console);
> > 
> > +        /* only copy those useful configs */
> > +        memset((void*)&xenpv_dm_info, 0x00, sizeof(libxl_device_model_info));
> > +        xenpv_dm_info.device_model_version =
> > +            d_config->dm_info.device_model_version;
> > +        xenpv_dm_info.type = d_config->dm_info.type;
> > +        xenpv_dm_info.device_model = d_config->dm_info.device_model;
> >          if (need_qemu)
> > -            libxl__create_xenpv_qemu(gc, domid, d_config->vfbs, &dm_starting);
> > +            libxl__create_xenpv_qemu(gc, domid, &xenpv_dm_info,
> > +                                     d_config->vfbs, &dm_starting);
> >      }
> > 
> >      if (dm_starting) {
> 
> This is what I was thinking of when I said you might be able to just
> pass the same dm_info to both -- since you only ever copy fields
> verbatim, and libxl__create_xenpv_qemu (presumably) only looks at that
> subset of fields, why not just pass the whole lot through?
> 

I'm afraid this kind of verbatim copying is necessary. Luckily, this
kind of operation is hidden from the user.

In the original code, libxl__create_xenpv_qemu allocates its own
dm_info, which is zeroed out by libxl__build_xenpv_qemu_args before
use. This matters because libxl__create_xenpv_qemu eventually calls
libxl__create_device_model; if there are rubbish contents in dm_info,
the creation is likely to fail.
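
To make the point concrete, a rough sketch of that original flow --
signatures are abridged and approximate, for illustration only:

    int libxl__create_xenpv_qemu(libxl__gc *gc, uint32_t domid,
                                 libxl_device_vfb *vfb,
                                 libxl__device_model_starting **starting_r)
    {
        libxl_device_model_info dm_info;  /* stack garbage until zeroed */

        /* zeroes dm_info (memset) before filling in the fields the
         * xenpv qemu needs */
        libxl__build_xenpv_qemu_args(gc, domid, vfb, &dm_info);

        /* would see rubbish here if dm_info had not been zeroed above */
        return libxl__create_device_model(gc, &dm_info, NULL, 0, NULL, 0,
                                          starting_r);
    }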

> The rest looked ok, although I didn't review in detail yet.
> 
> Ian.
> 
