qemu-devel.nongnu.org archive mirror
* [Qemu-devel] Remote guest VBD
@ 2007-11-12  1:06 Kaushik Bhandankar
  2007-11-13  1:08 ` kaushikb
  0 siblings, 1 reply; 2+ messages in thread
From: Kaushik Bhandankar @ 2007-11-12  1:06 UTC (permalink / raw)
  To: qemu-devel

Hello,

I am trying to implement remote VBD (virtual block device) functionality for HVM guests in 
fully-virtualized Xen 3.0-unstable.

Consider 2 machines: machine 1 and machine 2, both Intel-VT capable 
machines hosting fully-virtualized Xen 3.0-unstable.

Let's say the HVM guest sitting on machine 1 has its VBD located on 
machine 2 (on the same or a different LAN).

For this, I have established a 9P communication channel (similar to 
socket communication) between the Dom0's of machine 1 and machine 2 with 
the 9P server thread running on machine 2 and 9P client thread running 
on machine 1.
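For reference, a block read carried over that channel would be framed as a 9P2000 Tread message (size[4] type[1] tag[2] fid[4] offset[8] count[4], all little-endian). A minimal sketch of building one; the helper names here are mine, not from any existing 9P library:

```c
#include <stddef.h>
#include <stdint.h>

/* 9P2000 message type codes for read (from the Plan 9 manual, intro(5)) */
enum { P9_TREAD = 116, P9_RREAD = 117 };

/* Serialize an n-byte little-endian integer into buf */
static void put_le(uint8_t *buf, uint64_t v, int n) {
    for (int i = 0; i < n; i++)
        buf[i] = (uint8_t)(v >> (8 * i));
}

/* Build a Tread message: size[4] type[1] tag[2] fid[4] offset[8] count[4].
 * Returns the total message length (which is also the value of size[4]). */
static size_t p9_build_tread(uint8_t *buf, uint16_t tag,
                             uint32_t fid, uint64_t offset, uint32_t count) {
    size_t len = 4 + 1 + 2 + 4 + 8 + 4;   /* 23 bytes total */
    put_le(buf, len, 4);                   /* size[4], includes itself */
    buf[4] = P9_TREAD;                     /* type[1] */
    put_le(buf + 5,  tag,    2);           /* tag[2] */
    put_le(buf + 7,  fid,    4);           /* fid[4]: the open VBD image */
    put_le(buf + 11, offset, 8);           /* offset[8]: sector * 512 */
    put_le(buf + 19, count,  4);           /* count[4]: bytes requested */
    return len;
}
```

So a guest read of one 512-byte sector at sector 10 would become a Tread with offset 5120 and count 512, and the server would answer with an Rread carrying the data.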

Whenever the HVM guest tries to access its VBD, my 9P client needs 
to intercept this invocation and send it to the 9P server, which 
performs the actual I/O on the VBD and sends the response back to the 9P client.

But I am not sure about the following details:
1) The code path followed when a guest tries to access its VBD, before 
execution lands in tools/ioemu/hw/ide.c:ide_ioport_read() or 
tools/ioemu/hw/ide.c:ide_ioport_write()
2) Where exactly should the 9P client intercept the guest's access to 
its VBD?
3) When the 9P server sends the response back to the 9P client, how 
should the client forward this response?

Any help with this will be appreciated.

-Kaushik


* Re: [Qemu-devel] Remote guest VBD
  2007-11-12  1:06 [Qemu-devel] Remote guest VBD Kaushik Bhandankar
@ 2007-11-13  1:08 ` kaushikb
  0 siblings, 0 replies; 2+ messages in thread
From: kaushikb @ 2007-11-13  1:08 UTC (permalink / raw)
  To: qemu-devel

Kaushik Bhandankar wrote:
> [...]
>
I found out the following about how an I/O request by a DomU is 
handled in Dom0 (all methods are in drivers/xen/blkback/blkback.c):
1) blkif_schedule() invokes do_block_io_op()
2) do_block_io_op() copies the request from the corresponding 'blk_rings' 
(based on blkif->blk_protocol) and, if the request is a BLKIF_OP_READ or 
BLKIF_OP_WRITE, invokes dispatch_rw_block_io()
3) dispatch_rw_block_io() makes a hypercall to "map the referred page 
into the Dom0's address space", does a 'whole bunch of other stuff' and 
sets end_block_io_op() as the callback routine when this block I/O finishes.
4) end_block_io_op() invokes __end_block_io_op() which invokes 
make_response()
5) make_response() creates a blkif_response_t structure (filling in 
appropriate id, status values from pending request) and copies this 
response on the 'response ring' for the appropriate DomU and notifies 
the front-end (FE) using RING_PUSH_RESPONSES_AND_CHECK_NOTIFY()
6) The FE takes appropriate action.
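The six steps above can be modeled with a heavily simplified, self-contained sketch. This is not the real blkback code: the actual structures in Xen's blkif interface carry many more fields, the real path is asynchronous with grant-table page mapping, and fake_disk / resp_ring here are stand-ins of mine just to show the request-in, response-out flow:

```c
#include <stdint.h>
#include <string.h>

enum { BLKIF_OP_READ = 0, BLKIF_OP_WRITE = 1 };
enum { BLKIF_RSP_OKAY = 0, BLKIF_RSP_ERROR = -1 };

#define SECTOR_SIZE 512
#define NUM_SECTORS 8

typedef struct { uint64_t id; uint8_t operation; uint64_t sector; void *buf; } blk_request_t;
typedef struct { uint64_t id; int16_t status; } blk_response_t;

static uint8_t fake_disk[NUM_SECTORS * SECTOR_SIZE]; /* stands in for the VBD */

static blk_response_t resp_ring[16];  /* stands in for the shared response ring */
static int resp_prod;

/* Step 5: make_response() — fill in id/status and push onto the ring.
 * The real code then does RING_PUSH_RESPONSES_AND_CHECK_NOTIFY() and
 * kicks the event channel so the front-end wakes up (step 6). */
static void make_response(uint64_t id, int16_t status) {
    resp_ring[resp_prod].id = id;
    resp_ring[resp_prod].status = status;
    resp_prod++;
}

/* Steps 3-4, collapsed into one synchronous call: perform the I/O
 * against the (already "mapped") buffer, then take the
 * end_block_io_op() -> __end_block_io_op() -> make_response() path. */
static void dispatch_rw_block_io(const blk_request_t *req) {
    if (req->sector >= NUM_SECTORS) {
        make_response(req->id, BLKIF_RSP_ERROR);
        return;
    }
    uint8_t *disk = fake_disk + req->sector * SECTOR_SIZE;
    if (req->operation == BLKIF_OP_READ)
        memcpy(req->buf, disk, SECTOR_SIZE);
    else
        memcpy(disk, req->buf, SECTOR_SIZE);
    make_response(req->id, BLKIF_RSP_OKAY);
}

/* Steps 1-2: do_block_io_op() — take a request (pulled off the shared
 * request ring in the real code) and dispatch reads/writes. */
static void do_block_io_op(const blk_request_t *req) {
    if (req->operation == BLKIF_OP_READ || req->operation == BLKIF_OP_WRITE)
        dispatch_rw_block_io(req);
}
```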

Now, for the remote VBD implementation, I feel that when the guest domain 
makes a block I/O request and we land in dispatch_rw_block_io(), the 9P 
client thread should be invoked to send the block I/O request to the 9P 
server; the 9P server should send the response back to the 9P client, 
which should invoke make_response() to enqueue the response on the 
appropriate DomU's shared ring.
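In other words, the proposal replaces the local disk I/O inside dispatch_rw_block_io() with a 9P round trip. A minimal sketch of that control flow, with the 9P transport stubbed out so it can be exercised locally (all names here are mine, not Xen's):

```c
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 512
static uint8_t remote_disk[4 * SECTOR_SIZE];  /* stands in for the VBD on machine 2 */

/* Stub for the 9P client: in the real design this would marshal the
 * request into a Tread/Twrite, send it over the Dom0-to-Dom0 channel,
 * and block until the matching Rread/Rwrite arrives from machine 2.
 * Here it just copies from a local buffer. Returns 0 on success. */
static int p9_remote_read(uint64_t sector, void *buf) {
    memcpy(buf, remote_disk + sector * SECTOR_SIZE, SECTOR_SIZE);
    return 0;
}

/* Stand-in for make_response(): record the status that would be pushed
 * onto the DomU's shared response ring. */
static int last_status = 99;
static void make_response_stub(int16_t status) { last_status = status; }

/* Proposed interception point: instead of mapping the granted page and
 * submitting local block I/O, hand the request to the 9P client and
 * enqueue the response once the round trip completes. */
static void dispatch_rw_block_io_remote(uint64_t sector, void *guest_buf) {
    int err = p9_remote_read(sector, guest_buf);  /* synchronous 9P round trip */
    make_response_stub(err == 0 ? 0 : -1);        /* then make_response() */
}
```

One thing to watch with this shape: doing the round trip synchronously in blkif_schedule()'s context stalls the ring for the full network latency of each request, so the real implementation may want to queue the request and have the 9P client's completion path call make_response() asynchronously instead.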

Can somebody please verify whether this will work?

-- 
Kaushik Bhandankar

Graduate Student
College of Computing
Georgia Tech
206.94.4189
www.cc.gatech.edu/~kaushikb

