From: Anthony Liguori
Date: Tue, 18 Jan 2011 08:13:01 -0600
Subject: Re: [Qemu-devel] [RFC][PATCH v6 00/23] virtagent: host/guest RPC communication agent
Message-ID: <4D359FED.6010209@linux.vnet.ibm.com>
In-Reply-To: <4D359D8A.3000908@redhat.com>
References: <1295270117-24760-1-git-send-email-mdroth@linux.vnet.ibm.com> <4D3449C5.1030006@redhat.com> <4D3457E2.8000603@linux.vnet.ibm.com> <4D359D8A.3000908@redhat.com>
To: Gerd Hoffmann
Cc: agl@linux.vnet.ibm.com, stefanha@linux.vnet.ibm.com, markus_mueller@de.ibm.com, marcel.mittelstaedt@de.ibm.com, qemu-devel@nongnu.org, Jes.Sorensen@redhat.com, Michael Roth, ryanh@us.ibm.com, abeekhof@redhat.com

On 01/18/2011 08:02 AM, Gerd Hoffmann wrote:
> On 01/17/11 15:53, Michael Roth wrote:
>> On 01/17/2011 07:53 AM, Gerd Hoffmann wrote:
>>> What is your plan to handle system-level queries+actions (such as
>>> reboot) vs. per-user stuff (such as cut+paste)?
>>
>> This is an area that hasn't been well-defined yet and is definitely
>> open for suggestions.
>
> One option would be to have two virtio-serial channels, one for the
> system and one for the user stuff.  gdm could grant the desktop user
> access to the user channel like it does with sound devices and
> similar stuff, so the user agent has access to it.
>
> Another option is to have some socket where the user agent can talk
> to the system agent and have it relay the requests.

I think this is the best approach.

One requirement we've been working with is that all actions from guest
agents are logged.  This is to give an administrator confidence that
the hypervisor isn't doing anything stupid.  If you route all of the
user traffic through a privileged daemon, you can log everything to
syslog or an appropriate log file.
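As a rough illustration of that relay-and-log arrangement (this is not
code from the virtagent series; the socket path, channel name, and the
one-blob-per-connection framing are all made up for the sketch), a
privileged system agent could accept requests from the per-user agent
on a unix socket, write each one to syslog, and only then forward it to
the host over the virtio-serial channel:

/*
 * Rough sketch only, not virtagent code.  Assumed layout: the per-user
 * agent connects to the privileged system agent over a unix socket
 * (path below is hypothetical), each request arrives as a single blob,
 * and the system agent logs the request via syslog before forwarding
 * it to the host over the virtio-serial channel (device path also
 * hypothetical).
 */
#include <fcntl.h>
#include <string.h>
#include <syslog.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define USER_SOCK_PATH  "/var/run/guest-agent.sock"            /* hypothetical */
#define SERIAL_DEV_PATH "/dev/virtio-ports/org.example.agent"  /* hypothetical */

static int listen_on_user_socket(void)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0) {
        return -1;
    }
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, USER_SOCK_PATH, sizeof(addr.sun_path) - 1);
    unlink(USER_SOCK_PATH);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 1) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Log the user agent's request, then pass it on to the host. */
static void relay_user_request(int user_fd, int serial_fd)
{
    char buf[4096];
    ssize_t len = read(user_fd, buf, sizeof(buf) - 1);

    if (len <= 0) {
        return;
    }
    buf[len] = '\0';
    /* This is the audit trail the administrator can inspect later. */
    syslog(LOG_INFO, "user agent request: %s", buf);
    write(serial_fd, buf, len);
}

int main(void)
{
    int listen_fd, serial_fd;

    openlog("guest-agent", LOG_PID, LOG_DAEMON);
    serial_fd = open(SERIAL_DEV_PATH, O_RDWR);
    listen_fd = listen_on_user_socket();
    if (serial_fd < 0 || listen_fd < 0) {
        return 1;
    }
    for (;;) {
        int user_fd = accept(listen_fd, NULL, NULL);

        if (user_fd < 0) {
            continue;
        }
        relay_user_request(user_fd, serial_fd);
        close(user_fd);
    }
    return 0;
}

The details don't matter much; the point is only that the unprivileged
agent never talks to the host channel directly, so every request
necessarily passes through the syslog() call above.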
> Maybe it is also possible to use dbus for communication between the
> system agent and user agent (and maybe other components).  Maybe it
> even makes sense to run the dbus protocol over the virtio-serial
> line?  Disclaimer: I know next to nothing about dbus details.

The way I'd prefer to think about it is that the transport and protocol
used are separate layers that may have multiple implementations over
time.  For instance, we currently support virtio-serial and isa-serial,
and supporting another protocol wouldn't be a big deal.  The part that
needs to remain consistent is the API supported by the
transport/protocol combinations.

>> For host->guest RPCs the current plan is to always have the RPC
>> executed at the system level, but for situations involving a specific
>> user we fork and drop privileges with the RPC, and report back the
>> status of the fork/exec.  The forked process would then do what it
>> needs to do, then, if needed, communicate status back to the
>> system-level daemon via some IPC mechanism (most likely a socket we
>> listen to in addition to the serial channel) that can be used to send
>> an event.  The system-level daemon then communicates these events
>> back to the host with a guest->host RPC.
>
> Hmm.  A bit heavy to fork+exec on every RPC.  Might be ok for rare
> events though.
>
>> Processes created independently of the system-level daemon could
>> report events in the same manner, via this socket.  I think this
>> might suit the vdagent client model for Spice as well,
>
> Yes, vdagent works this way, except that *all* communication goes
> through that socket, i.e. events/requests coming from the host for
> the user-level agent are routed through that socket too.
>
> It is the only sane way to handle clipboard support IMHO as there is
> quite some message ping-pong involved to get a clipboard transaction
> done.
>
> How does xmlrpc transmit binary blobs btw?

XML-RPC has a base64 encoding that's part of the standard for encoding
binary data.  It also supports UTF-8 encoded strings.

Regards,

Anthony Liguori

> cheers,
>   Gerd
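For reference, the base64 encoding Anthony refers to is the standard
<base64> element from the XML-RPC specification.  The fragment below is
hand-written for illustration (it is not something virtagent actually
sends); it shows how a binary parameter appears on the wire:

/*
 * Illustrative only: an XML-RPC parameter carrying binary data uses
 * the standard <base64> element.  "SGVsbG8gd29ybGQ=" decodes to the
 * 11-byte string "Hello world".
 */
static const char example_binary_param[] =
    "<param>\n"
    "  <value><base64>SGVsbG8gd29ybGQ=</base64></value>\n"
    "</param>\n";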