From: Michael Roth <mdroth@linux.vnet.ibm.com>
To: Barak Azulay <bazulay@redhat.com>
Cc: Gal Hammer <ghammer@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"arch@ovirt.org" <arch@ovirt.org>, Alexander Graf <agraf@suse.de>,
"vdsm-devel@lists.fedorahosted.org"
<vdsm-devel@lists.fedorahosted.org>
Subject: Re: [Qemu-devel] converging around a single guest agent
Date: Wed, 16 Nov 2011 09:28:16 -0600
Message-ID: <4EC3D690.2020609@linux.vnet.ibm.com>
In-Reply-To: <201111161413.21026.bazulay@redhat.com>
On 11/16/2011 06:13 AM, Barak Azulay wrote:
> On Wednesday 16 November 2011 10:16:57 Alexander Graf wrote:
>> On 16.11.2011, at 08:05, Barak Azulay<bazulay@redhat.com> wrote:
>>> On Wednesday 16 November 2011 02:42:30 Alexander Graf wrote:
>>>> On 16.11.2011, at 00:01, Michael Roth wrote:
>>>>> But practically-speaking, it's unavoidable that qemu-specific
>>>>> management tooling will need to communicate with qemu (via
>>>>> QMP/libqmp/HMP/etc, or by proxy via libvirt). It's through those same
>>>>> channels that the qemu-ga interfaces will ultimately be exposed, so
>>>>> the problem of qemu-ga vs. ovirt-guest-agent isn't really any
>>>>> different than the problem of QMP's system_powerdown/info_balloon/etc
>>>>> vs. ovirt-guest-agent's
>>>>> Shutdown/Available_Ram/etc: it's a policy decision rather than argument
>>>>> for choosing one project over another.
>>>>
>>>> I don't see why we shouldn't be able to just proxy whatever
>>>> communication happens between the guest agent and the management tool
>>>> through qemu. At that point qemu could talk to the guest agent just as
>>>> well as the management tool and everyone's happy.
>>>
>>> I'm not sure proxying all the requests to the guest through qemu is
>>> desirable; other than providing a single point of management, most of
>>> the calls will be pass-through and of no interest to qemu (MITM?).
>>>
>>> There is a big advantage to direct communication (VDSM <-> agent): that
>>> way features can be added to the ovirt stack without the need to add
>>> them to qemu.
>>
>> If we keep the protocol well-defined, we can get that for free. Just have
>> every command carry its own size and a request id which the reply also
>> contains, and suddenly you get asynchronous proxyable communication.
>>
>
>
> Sure, we can keep commands synchronized in various ways; the question is
> whether we want that. There are a few downsides:
> 1 - VDSM will have to pass through 2 proxies (libvirt & qemu) in order to
> deliver a message to the guest. This by itself is not such a big
> disadvantage, but it will force us to handle many more corner cases.
Can't rule out the possibility of corner cases resulting from this, but
the practical way to look at it is that VDSM will need to handle the
libvirt/QMP protocols well regardless. The implementation of the proxying
mechanism is where the extra challenge comes into play, but this should be
transparent to the protocols VDSM speaks.
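
To illustrate Alex's point about a self-describing protocol, here's a
minimal sketch of a length-prefixed, request-id-tagged frame. The field
names and the encoding are just an assumption for illustration, not an
existing wire format:

import json
import struct

def recv_exact(sock, n):
    # Loop until exactly n bytes arrive; recv() may return short reads.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('connection closed')
        buf += chunk
    return buf

def frame_request(request_id, command, arguments):
    # Each command carries its own size (4-byte length header) and an id
    # the reply echoes back, so a proxy can forward frames verbatim
    # without understanding them.
    payload = json.dumps({'id': request_id,
                          'execute': command,
                          'arguments': arguments}).encode('utf-8')
    return struct.pack('>I', len(payload)) + payload

def read_frame(sock):
    # Replies can arrive in any order; the caller pairs them with
    # outstanding requests via the echoed 'id' field.
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length).decode('utf-8'))

A proxy only ever needs the length header to know how many bytes to pass
through, which is what keeps it transparent to the protocols on either
side.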
Implementation-wise, just to give you an idea of the work involved if we
took this route:

1) ovirt-guest-agent would need to convert request/response payloads
from/to QMP payloads on the guest side; these are JSON and should,
theoretically, mesh well with a python-based agent.

2) You'd also need a schema, similar to qemu.git/qapi-schema-guest.json,
to describe the calls you're proxying (sketched below). The existing
infrastructure in QEMU will handle all the work of
marshalling/unmarshalling responses back to the QMP client on the host
side.
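
To make 2) a bit more concrete: a hypothetical schema entry in the style
of qapi-schema-guest.json, plus the matching guest-side dispatch, might
look roughly like this (the command name, fields, and helper are invented
for illustration):

import json

# A hypothetical schema entry, in the style of
# qemu.git/qapi-schema-guest.json (invented command):
#
#   { 'command': 'guest-get-available-ram', 'returns': 'int' }

def handle_request(line, handlers):
    # Dispatch one QMP-style JSON request to an agent handler and wrap
    # the result in a QMP-style response; errors use the same shape QMP
    # clients already know how to parse.
    request = json.loads(line)
    handler = handlers.get(request.get('execute'))
    if handler is None:
        return json.dumps({'error': {'class': 'CommandNotFound',
                                     'desc': 'unknown command'}})
    result = handler(**request.get('arguments', {}))
    return json.dumps({'return': result})

# e.g. handlers = {'guest-get-available-ram': get_available_ram}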
It's a bit of extra work, but the benefit is unifying the
qemu/guest-level management interface into a single place that's easy
for QMP/libvirt to consume.
> 2 - Looking at the qemu-ga functionality (read & write ...), do we really
> want to let a big chunk of data pass through both qemu & libvirt rather
> than send it directly to the consumer (VDSM)?
VDSM isn't the only consumer, however; HMP/QMP and libvirt are consumers
in their own right.
> 3 - When events are fired from the guest agent, the delay of passing them
> through a double proxy will have its latency penalty (as we have
> experienced with the client-disconnect spice event).
>
Getting them out of the guest is probably the biggest factor; delivering
them between processes on the host is likely a small hit in comparison.
>
>>> I envision the agent having 2 separate ports to listen on: one to
>>> communicate with qemu and one for VDSM.
>>
>> Ugh, no, I'd much prefer a single 'bus' everyone attaches to.
>
> why?
>
> I'm thinking of situations where we'll need to prioritize commands arriving
> from qemu over "standard management commands" & info gathering; sure, there
> are a number of mechanisms to do that, but it seems to me that a separation
> is the best way.
>
> e.g. I think we need to prioritize a quiesce command from qemu over any
> other info/command from VDSM.
Do you mean prioritize in terms of order of delivery? The best way to do
that is a single protocol with state-tracking; otherwise we're just racing.
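
For example (a sketch only, assuming each request carries an id plus an
invented 'priority' field), state-tracking over one connection could let
a quiesce jump the queue without needing a second port:

import heapq
import itertools

class AgentChannel(object):
    # Track in-flight requests on a single connection; ordering is
    # explicit, so there's no race between two competing channels.
    def __init__(self):
        self._queue = []               # heap of (priority, seq, request)
        self._seq = itertools.count()  # tie-breaker keeps FIFO per priority
        self._pending = {}             # request id -> request awaiting reply

    def submit(self, request, priority=1):
        # Lower number = more urgent; e.g. a quiesce might use 0 so it
        # is sent ahead of queued info-gathering commands.
        heapq.heappush(self._queue, (priority, next(self._seq), request))

    def next_to_send(self):
        _, _, request = heapq.heappop(self._queue)
        self._pending[request['id']] = request
        return request

    def on_reply(self, reply):
        # Matching replies by id is the state-tracking part: delivery
        # order is observable rather than left to chance.
        return self._pending.pop(reply['id'])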
>
>
>
>>
>> Alex
>>
>>> Barak
>>>
>>>> Alex
>