From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43) id 1IrkFO-0002Cz-Go for qemu-devel@nongnu.org; Mon, 12 Nov 2007 20:07:22 -0500
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43) id 1IrkFN-0002Bw-Fx for qemu-devel@nongnu.org; Mon, 12 Nov 2007 20:07:22 -0500
Received: from [199.232.76.173] (helo=monty-python.gnu.org) by lists.gnu.org with esmtp (Exim 4.43) id 1IrkFN-0002Bt-Da for qemu-devel@nongnu.org; Mon, 12 Nov 2007 20:07:21 -0500
Received: from sark4.cc.gatech.edu ([130.207.7.19]) by monty-python.gnu.org with esmtp (Exim 4.60) (envelope-from ) id 1IrkFM-0006LK-V3 for qemu-devel@nongnu.org; Mon, 12 Nov 2007 20:07:21 -0500
Received: from sark3.cc.gatech.edu (sark3.cc.gatech.edu [130.207.7.22]) by sark4.cc.gatech.edu (8.13.6/8.12.8) with ESMTP id lAD17J93017237 for ; Mon, 12 Nov 2007 20:07:19 -0500 (EST)
Received: from [143.215.129.51] (goa.cc.gt.atl.ga.us [143.215.129.51]) (authenticated bits=0) by sark3.cc.gatech.edu (8.12.10/8.12.10) with ESMTP id lAD17IXt006629 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT) for ; Mon, 12 Nov 2007 20:07:18 -0500 (EST)
Message-ID: <4738F90D.4020402@cc.gatech.edu>
Date: Mon, 12 Nov 2007 20:08:29 -0500
From: kaushikb
MIME-Version: 1.0
Subject: Re: [Qemu-devel] Remote guest VBD
References: <4737A711.4010806@cc.gatech.edu>
In-Reply-To: <4737A711.4010806@cc.gatech.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: qemu-devel@nongnu.org

Kaushik Bhandankar wrote:
> Hello,
>
> I am trying to implement remote VBD functionality for HVM guests in
> fully-virtualized Xen 3.0-unstable.
>
> Consider two machines, machine 1 and machine 2, both Intel-VT-capable
> machines hosting fully-virtualized Xen 3.0-unstable.
>
> Let's say the HVM guest sitting on machine 1 has its VBD located on
> machine 2 (on the same or a different LAN).
>
> For this, I have established a 9P communication channel (similar to
> socket communication) between the Dom0s of machine 1 and machine 2,
> with the 9P server thread running on machine 2 and the 9P client
> thread running on machine 1.
>
> Whenever the HVM guest tries to access its VBD, my 9P client needs
> to intercept this invocation and send it to the 9P server, which does
> the necessary work on the VBD and sends the response back to the 9P
> client.
>
> But I am not sure about the following details:
> 1) The code path followed when a guest tries to access its VBD before
> it lands in tools/ioemu/hw/ide.c:ide_ioport_read() or
> tools/ioemu/hw/ide.c:ide_ioport_write()
> 2) Where exactly should the 9P client intercept the guest's VBD
> access?
> 3) When the 9P server sends the response back to the 9P client, how
> should this response be forwarded by the client?
>
> Any help with this will be appreciated.
>
> -Kaushik
>

I found out the following about how an I/O request by a DomU is
handled in Dom0 (all methods are in drivers/xen/blkback/blkback.c):

1) blkif_schedule() invokes do_block_io_op()
2) do_block_io_op() copies the request from the corresponding
'blk_rings' (based on blkif->blk_protocol) and, if the request is for
BLKIF_OP_READ or BLKIF_OP_WRITE, invokes dispatch_rw_block_io()
3) dispatch_rw_block_io() makes a hypercall to "map the referred page
into Dom0's address space", does a whole bunch of other work, and sets
end_block_io_op() as the callback routine for when this block I/O
finishes.
4) end_block_io_op() invokes __end_block_io_op(), which invokes
make_response()
5) make_response() creates a blkif_response_t structure (filling in
the appropriate id and status values from the pending request), copies
this response onto the 'response ring' of the appropriate DomU, and
notifies the front-end (FE) using
RING_PUSH_RESPONSES_AND_CHECK_NOTIFY()
6) The FE takes the appropriate action.

Now, for the remote VBD implementation, I feel that when the guest
domain makes a block I/O request and we land in
dispatch_rw_block_io(), the 9P client thread should be invoked to send
the block I/O request to the 9P server; the 9P server should send the
response back to the 9P client, which should then invoke
make_response() to enqueue the response on the appropriate DomU's
shared ring.

Can somebody please verify whether this will work?

--
Kaushik Bhandankar
Graduate Student
College of Computing
Georgia Tech
206.94.4189
www.cc.gatech.edu/~kaushikb