From: Matthew Rosewarne
Subject: [Qemu-devel] Using a simple SHM system for host/guest communication
Date: Sat, 16 Dec 2006 07:58:34 -0000
To: qemu-devel@nongnu.org
Message-Id: <200612160258.17333.mukidohime@case.edu>

One problem I have with the (excellent) QEMU is that there is no way to get
any data between hosts and guests short of file transfer.  Perhaps a solution
to this issue would be a very thin and generic shared memory facility in the
emulated hardware.
With this method, "drivers" could be written for the guest OSes and handler
daemons for the hosts, using a chunk of shared memory to communicate.  This
would solve some of the problems present in other methods:

    1. Pipes/FIFOs are unidirectional, require the data to be serialised
       before transmission, and are fairly slow.
    2. Network-based approaches require not only serialisation but also a
       network layer, and as such considerable overhead and more
       configuration.
    3. File-based approaches are worse than either, requiring serialisation,
       writing to disk, a network layer (for SMB, TFTP, etc.), polling, and
       locking, and thus are VERY slow.

An SHM approach would:

    A) be fast
    B) not require serialisation to files/sockets/pipes
    C) not depend on networking or incur its overhead
    D) not need to be explicitly configured by the user
    E) not add any guest platform/OS-dependent code to QEMU

A hypothetical example in five acts:

ACT 1
    1. A guest driver is written that requests a chunk of shared memory
       named "clip_buffer" from the SHM virtual device in the VM.
    2. QEMU asks the host kernel to allocate a chunk of SHM and creates a
       device node for it on the host, perhaps at
       /tmp/qemu-${PID}/shm/clip_buffer (the example assumes a Unix host;
       this part would differ on other hosts).
    3. A host daemon is written that opens "/tmp/qemu-*/shm/clip_buffer"
       (again, different on other hosts).

ACT 2
    1. The user copies some data to the clipboard in the guest OS.
    2. The guest driver scrapes the guest's clipboard for copied data and
       puts it in "clip_buffer".
    3. The host daemon listens for new data in the "clip_buffer"[s]
       (including those from other QEMU instances) and puts it in the
       host's clipboard.

ACT 3
    1. The user copies some data to the clipboard in the host OS.
    2. The host daemon scrapes the host's clipboard for copied data and
       puts it into "clip_buffer".
    3. 
The guest driver (in each QEMU instance) looks for new data in
       its "clip_buffer" and puts it into the guest's clipboard.

ACT 4
    1. A second guest is started that does not have a driver for
       "clip_buffer".
    2. The second guest runs correctly (just as before the SHM facility
       existed), unaware of the "clip_buffer"[s] on the host or in other
       guests.

ACT 5
    1. The driver in the first guest is unloaded, releasing the SHM chunk
       "clip_buffer".
    2. QEMU deallocates the "clip_buffer" SHM and removes the node at
       /tmp/qemu-${PID}/shm/clip_buffer.
    3. If the guest driver breaks and does not release its SHM chunks,
       QEMU forcefully destroys them after the guest powers off.
    4. Noticing that the SHM device is gone, the host daemon terminates.

FIN

This facility could be used for much more than copy and paste, but I think
the example is a good demonstration of a simple use case.  The only aspect
of the process I left out intentionally was the starting of the daemons on
the host.  They could be started by QEMU, which could have a file listing
which daemons to start when certain chunk names are requested, or started
manually by the user (maybe in a script).  I'm not sure what the best
option would be here.

Unfortunately I am probably not a good enough hacker to implement this
myself, but I hope that somebody might get some ideas from this proposal.
Should I put this info (or perhaps a more terse version of it) on the
Savannah BTS?  If anyone has any questions, comments, mockery, or abuse,
I'd very much like to hear them.

~Matt

PS: Big thanks to all the QEMU contributors for such outstanding software!