From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4D922B01.2000106@linux.vnet.ibm.com>
Date: Tue, 29 Mar 2011 13:54:57 -0500
From: Michael Roth
To: Anthony Liguori
Cc: Jes.Sorensen@redhat.com, agl@linux.vnet.ibm.com, qemu-devel@nongnu.org
Subject: [Qemu-devel] Re: [RFC][PATCH v1 08/12] qemu-char: add qmp_proxy chardev
In-Reply-To: <4D90C94A.7030007@us.ibm.com>
References: <1301082479-4058-1-git-send-email-mdroth@linux.vnet.ibm.com>
 <1301082479-4058-9-git-send-email-mdroth@linux.vnet.ibm.com>
 <4D8D091D.6030102@us.ibm.com> <4D8D12FD.6000906@linux.vnet.ibm.com>
 <4D90C94A.7030007@us.ibm.com>

On 03/28/2011 12:45 PM, Anthony Liguori wrote:
> On 03/25/2011 05:11 PM, Michael Roth wrote:
>>> Why are these options required?
>>
>> The qmp_proxy_new() constructor expects a path to a socket it can
>> connect() to. Not sure about telnet, but the other options are
>> required for this. Well... server=on at least; wait=off needs to be
>> set as well, because that's the only way to have the socket chardev
>> set the listening socket to non-blocking, since qmp_proxy_new() calls
>> connect() on it before we return to the main I/O loop.
>>
>> Although, we probably shouldn't just silently switch out explicitly
>> set options; an error would probably be more appropriate here.
>
> You ought to make qmp_proxy_new() return a CharDriverState such that
> you can do:
>
> qemu -device virtio-serial,chardev=foo -chardev guest-agent,id=foo
>

That's what the current invocation looks like. But with regard to not
relying on an intermediate socket between QmpProxy and individual QMP
sessions, I think the main issue is:

- We expose the proxy via a call such as qmp_proxy_send_request(QDict request).
- This request can be arbitrarily large (not at the moment, but in the
  future, with RPCs such as qmp_send_file(), they may potentially be
  large).

So if we do this directly via a new CharDriverState rather than an
intermediate socket, we'd either:

1) Send the request in full via, say,
   qmp_proxy_send_request()->qemu_chr_read()->virtio_serial_write(),
   and assume the port/device can actually buffer it all in one shot.

Or,

2) Send it in smaller chunks, via something that amounts to:

     can_read_bytes = virtio_serial_guest_ready(port);
     virtio_serial_write(port, request, can_read_bytes);
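
Fleshed out a bit, a purely illustrative sketch of what 2) might look
like is below. The QmpProxy fields (port, outbuf, outbuf_pos,
outbuf_size) and the qmp_proxy_write_some() name are made up for the
sake of the example; virtio_serial_guest_ready()/virtio_serial_write()
are the existing virtio-serial port APIs (hw/virtio-serial.h), and
MIN() comes from qemu-common.h:

    /* Illustrative only: write as much of the pending request as the
     * guest can currently accept, and record how far we got. */
    static void qmp_proxy_write_some(QmpProxy *p)
    {
        size_t avail = virtio_serial_guest_ready(p->port);
        size_t chunk = MIN(avail, p->outbuf_size - p->outbuf_pos);
        ssize_t ret;

        if (!chunk) {
            return; /* guest can't take anything right now */
        }

        ret = virtio_serial_write(p->port, p->outbuf + p->outbuf_pos, chunk);
        if (ret > 0) {
            p->outbuf_pos += ret;
        }
    }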

If anything is left over, we need to add something to QEMU's main loop
to handle the remaining bytes in the next main loop iteration. (With
the intermediate socket, we achieve this by registering a write handler
that selects on the intermediate socket and writes in small chunks
until the buffer is empty; the socket chardev does all the work of
checking whether the backend device is ready and not writing more than
it should.)

So if we do 2), which I think is the only "correct" approach of the
two, then to take the intermediate socket fd out of the equation we
either need to add a custom handler to QEMU's main loop, or register
deferred work via something like qemu_bh_schedule_idle().

So I think that's the trade-off... we can:

- do what the patches currently do and complicate the plumbing by
  adding an intermediate socket (or perhaps even just a pipe), and
  simplify the logic of handling backlogged work via a select()-driven
  write handler, or

- remove the intermediate socket to simplify the plumbing, but
  complicate the back-end logic involved in sending requests to the
  guest by using a BH to clear our host->guest TX buffer.

If using a BH to handle this work seems reasonable, then I'd be fine
with attempting to implement it. If using a BH seems nasty, then I
think the current intermediate socket/fd approach is the best
alternative.
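
To make the BH option a bit more concrete, a rough sketch of what I
have in mind follows. Again, the QmpProxy fields, the
qmp_proxy_write_some() helper from the earlier sketch, and
qmp_proxy_queue_request() are hypothetical names used only for
illustration; qemu_bh_new() and qemu_bh_schedule_idle() are the
existing bottom-half APIs:

    /* Hypothetical BH callback: drain the host->guest TX buffer and
     * reschedule ourselves while a backlog remains. */
    static void qmp_proxy_tx_bh(void *opaque)
    {
        QmpProxy *p = opaque;

        qmp_proxy_write_some(p);
        if (p->outbuf_pos < p->outbuf_size) {
            /* still backlogged: retry on a later main loop iteration */
            qemu_bh_schedule_idle(p->tx_bh);
        }
    }

    /* Hypothetical entry point: stash the serialized request and kick
     * off the drain. */
    static void qmp_proxy_queue_request(QmpProxy *p, const uint8_t *buf,
                                        size_t len)
    {
        p->outbuf = buf;
        p->outbuf_size = len;
        p->outbuf_pos = 0;
        if (!p->tx_bh) {
            p->tx_bh = qemu_bh_new(qmp_proxy_tx_bh, p);
        }
        qemu_bh_schedule_idle(p->tx_bh);
    }

The downside is exactly the extra back-end complexity mentioned above:
the request has to stay buffered in QmpProxy until the BH has drained
it, instead of letting the socket chardev's write handler take care of
that for us.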