From: Anthony Liguori <anthony@codemonkey.ws>
To: Avi Kivity <avi@redhat.com>
Cc: qemu-devel@nongnu.org, quintela@redhat.com
Subject: Re: [Qemu-devel] Re: [PATCH] Split machine creation from the main loop
Date: Sun, 27 Feb 2011 22:01:33 -0600
Message-ID: <4D6B1E1D.2050801@codemonkey.ws>
In-Reply-To: <4D6A3679.1010009@redhat.com>

On 02/27/2011 05:33 AM, Avi Kivity wrote:
> On 02/24/2011 07:25 PM, Anthony Liguori wrote:
>>> Is it really necessary?  What's blocking us from initializing 
>>> chardevs early?
>>
>>
>> Well....
>>
>> We initialize all chardevs at once right now, and which set of 
>> chardevs exists depends on the machine (by way of how defaults are 
>> applied).  You could initialize chardevs in two stages, although that 
>> requires quite a bit of additional complexity.
>
> We could initialize chardevs on demand - that should resolve any 
> dependencies?

I think that potentially screws up the way -global works.  There's some 
deep black magic involved in how -global, defaults, and device 
initialization interact.

>>>
>>> It would be a pity to divorce the monitor from chardevs, they're 
>>> really flexible.
>>
>> Couple considerations:
>>
>> 1) chardevs don't support multiple simultaneous connections.  I view 
>> this as a blocker for QMP.
>
> What do you mean by that?  Something like ,server which keeps on 
> listening after a connection is established?

,server won't allow multiple simultaneous connections.  CharDriverStates 
simply don't have a notion of connections; only one thing can be 
connected to a given CharDriverState at a time.  This is why we don't 
use CharDriverState for VNC.
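
To make that concrete, the whole consumer-facing interface is (roughly, 
from memory) a single set of handlers; calling it a second time simply 
replaces the first consumer rather than adding a second connection:

/* qemu-char.h, approximately: one consumer, one set of handlers.
 * A second call to qemu_chr_add_handlers() replaces the previous
 * handlers, so two clients can never be multiplexed over the same
 * CharDriverState. */
void qemu_chr_add_handlers(CharDriverState *s,
                           IOCanReadHandler *fd_can_read,
                           IOReadHandler *fd_read,
                           IOEventHandler *fd_event,
                           void *opaque);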

We should have a separate abstraction for connection-based backends.  
I'll take a stab at this when I'm ready to try to get those patches in.
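
Something along these lines is what I have in mind.  This is only a 
sketch; none of these types or names exist in QEMU today:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical connection-oriented backend (all names made up).
 * Unlike CharDriverState, a listener can have any number of live
 * connections, each with its own life cycle. */
typedef struct ServerConnection ServerConnection;

typedef struct ServerSocket {
    int listen_fd;
    /* invoked once per accepted client; many may be live at once */
    void (*connect_cb)(ServerConnection *conn, void *opaque);
    void *opaque;
} ServerSocket;

struct ServerConnection {
    ServerSocket *server;
    int fd;
    void (*read_cb)(ServerConnection *conn, const uint8_t *buf, size_t len);
    void (*close_cb)(ServerConnection *conn);  /* per-connection cleanup */
};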

Just to be clear though, there is a CharDriverState version of the new 
QMP server.  This would be a second option for creating a QMP server, 
and it takes a different command-line syntax.
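
For reference, today's chardev-based QMP monitor is set up along these 
lines (from memory, so treat the exact options as approximate):

  -chardev socket,id=qmp0,host=localhost,port=4444,server,nowait \
  -mon chardev=qmp0,mode=control

The new server would be configured differently; the point is just that 
the chardev flavour remains available as a second option.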

>> 2) Because chardevs don't support multiple connections, we can't 
>> reasonably hook on things like connect/disconnect, which means that 
>> fds sent via SCM_RIGHTS have to be handled in a very special way.  
>> By going outside of the chardev layer, we can let fds sent via 
>> SCM_RIGHTS queue up naturally and have getfd/closefd refer to the fd 
>> at the top of the queue.  It makes it quite a bit easier to work with 
>> (I believe Daniel had actually requested this a while ago).
>
> I really don't follow... what's the connection between SCM_RIGHTS and 
> multiple connections?

Monitors have a single fd.  That fd is associated with the monitor and 
outlives any particular connection to the monitor (recall that chardevs 
don't have a notion of a connection life cycle).  This means that if a 
management tool forgets to do a closefd on an unused fd, there's no easy 
way for QEMU to automatically clean it up.  IOW, a crashed management 
tool == an fd leak in QEMU.
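
With per-connection state, the cleanup falls out naturally.  A sketch of 
what I mean (none of these names exist today; this is purely 
illustrative):

#include <unistd.h>

/* Hypothetical per-connection queue of fds received via SCM_RIGHTS. */
typedef struct QmpConnection {
    int fds[16];   /* passed fds, oldest first */
    int nfds;
} QmpConnection;

static void qmp_connection_queue_fd(QmpConnection *conn, int fd)
{
    if (conn->nfds < 16) {
        conn->fds[conn->nfds++] = fd;
    } else {
        close(fd);  /* refuse to track more than the queue can hold */
    }
}

/* getfd would hand out conn->fds[0]; if the client crashes or
 * disconnects, everything it left behind is closed along with the
 * connection, so nothing leaks into long-lived QEMU state. */
static void qmp_connection_close(QmpConnection *conn)
{
    int i;
    for (i = 0; i < conn->nfds; i++) {
        close(conn->fds[i]);
    }
    conn->nfds = 0;
}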

>>>> (6) can be started right now.  (1) comes with the QAPI merge.  (2) 
>>>> is pretty easy to do after applying this patch.  (3) is probably 
>>>> something that can be done shortly after (1).  (4) and (5) really 
>>>> require everything but (6) to be in place before we can 
>>>> meaningfully do them.
>>>>
>>>> I think we can lay out much of the groundwork for this in 0.15, and 
>>>> I think we can realistically have a total conversion for 0.16.  
>>>> That means that by EOY, we could invoke QEMU with no options and do 
>>>> everything through QMP.
>>>
>>> It's something that I've agitated for a long while, but when I see 
>>> all the work needed, I'm not sure it's cost effective.
>>
>> There are a lot of secondary benefits that come from doing this.  QMP 
>> becomes a much stronger interface.  A lot of operations that right 
>> now are only specifiable by the command line become dynamic, which 
>> mitigates reboots in the long term.
>
> Only the hot-pluggable ones.

Yup, but it forces us to treat options that cannot change at runtime as 
special cases, which I think is a nice plus.  Customers don't like 
having their guests rebooted during scheduled downtime, so we really 
ought to try to make as many things as possible tunable at runtime.

Regards,

Anthony Liguori

Thread overview: 17+ messages
2011-02-23 21:38 [Qemu-devel] [PATCH] Split machine creation from the main loop Anthony Liguori
2011-02-23 23:00 ` [Qemu-devel] " Juan Quintela
2011-02-23 23:12   ` Anthony Liguori
2011-02-23 23:38     ` Juan Quintela
2011-02-24  0:36       ` Anthony Liguori
2011-02-24 10:19         ` Stefan Hajnoczi
2011-02-24 14:47           ` Anthony Liguori
2011-02-24 16:01     ` Avi Kivity
2011-02-24 17:25       ` Anthony Liguori
2011-02-27 11:33         ` Avi Kivity
2011-02-28  4:01           ` Anthony Liguori [this message]
2011-02-28  8:20             ` Avi Kivity
2011-02-28  8:57               ` Paolo Bonzini
2011-02-28  9:13                 ` Avi Kivity
2011-02-28 10:08                   ` Paolo Bonzini
2011-02-28 12:08                   ` Anthony Liguori
2011-02-25 17:02 ` [Qemu-devel] " Blue Swirl
