* Hypervisor to dom0 communication
@ 2008-07-10 12:26 Matthew Donovan
2008-07-11 14:47 ` Mark Williamson
0 siblings, 1 reply; 13+ messages in thread
From: Matthew Donovan @ 2008-07-10 12:26 UTC (permalink / raw)
To: xen-devel
I am working on a security tool that monitors various components (IDT, SSDT,
etc.) of a domU using VM introspection. Currently, we're using a polling
method to monitor these in-core structures. We would like to be able to use
a blocking method instead, i.e., specify "interesting" memory ranges and
then wait until they are modified.
How can I get the hypervisor to alert a kernel module loaded in dom0 that
something has happened? Can the alert include extra information such as the
address that was modified?
Thanks
-matthew
* Re: Hypervisor to dom0 communication
2008-07-10 12:26 Matthew Donovan
@ 2008-07-11 14:47 ` Mark Williamson
0 siblings, 0 replies; 13+ messages in thread
From: Mark Williamson @ 2008-07-11 14:47 UTC (permalink / raw)
To: xen-devel; +Cc: Matthew Donovan
> I am working on a security tool that monitors various components (IDT,
> SSDT, etc.) of a domU using VM introspection. Currently, we're using a
> polling method to monitor these in-core structures. We would like to be
> able to use a blocking method instead, i.e., specify "interesting" memory
> ranges and then wait until they are modified.
Sounds sensible.
> How can I get the hypervisor to alert a kernel module loaded in dom0 that
> something has happened? Can the alert include extra information such as
> the address that was modified?
Use a VIRQ to notify the dom0 kernel (search for VIRQ_* in
xen/include/public/xen.h). That's just an event notification, so you need to
include some other means of getting the data. At this point you could just
do a hypercall - which I assume is how you're currently polling, so it might
be the most backwards-compatible way.
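For the dom0 kernel side, something along these lines ought to do it (an
untested sketch: VIRQ_INTROSPECT stands in for a new VIRQ you'd add to
public/xen.h, and the exact bind_virq_to_irqhandler() prototype varies a
little between xenlinux kernel versions):
    /* Hypothetical dom0 module: bind the new VIRQ, then fetch the details
     * from Xen in the handler, e.g. via the hypercall you already poll with. */
    #include <linux/module.h>
    #include <linux/interrupt.h>
    #include <xen/events.h>

    static int introspect_irq = -1;

    static irqreturn_t introspect_interrupt(int irq, void *dev_id)
    {
            /* The VIRQ fired: ask Xen which address was modified. */
            return IRQ_HANDLED;
    }

    static int __init introspect_init(void)
    {
            introspect_irq = bind_virq_to_irqhandler(VIRQ_INTROSPECT, 0,
                                                     introspect_interrupt, 0,
                                                     "introspect", NULL);
            return introspect_irq < 0 ? introspect_irq : 0;
    }

    static void __exit introspect_exit(void)
    {
            if (introspect_irq >= 0)
                    unbind_from_irqhandler(introspect_irq, NULL);
    }

    module_init(introspect_init);
    module_exit(introspect_exit);
    MODULE_LICENSE("GPL");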
Another way of doing things would be to set up a shared memory region for your
communication channel and stuff information in there at the same time as
sending the VIRQ to dom0. You could also, if it suited your purposes, do the
VIRQ and shared memory interactions directly from dom0's userspace and avoid
the need for a kernel module altogether. See xen/common/trace.c and
tools/xentrace/* for an example of this being done.
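The userspace flavour of that, going by what xentrace and xenbaked do, looks
roughly like the sketch below (from memory; the libxc handle types changed
between releases, VIRQ_INTROSPECT is again a placeholder, and shared_mfn
stands in for however you choose to advertise the page's location, much as
XEN_SYSCTL_tbuf_op does for the trace buffers):
    /* Map the Xen-owned page and block on the VIRQ from plain userspace. */
    #include <xenctrl.h>
    #include <poll.h>
    #include <sys/mman.h>

    int main(void)
    {
        int xch = xc_interface_open();      /* pre-4.1 style int handles   */
        int xce = xc_evtchn_open();         /* /dev/xen/evtchn             */
        unsigned long shared_mfn = 0;       /* placeholder: advertised by
                                             * your own hypercall/sysctl   */

        xc_evtchn_bind_virq(xce, VIRQ_INTROSPECT);
        void *shared = xc_map_foreign_range(xch, DOMID_XEN, XC_PAGE_SIZE,
                                            PROT_READ, shared_mfn);

        for (;;) {
            struct pollfd pfd = { .fd = xc_evtchn_fd(xce), .events = POLLIN };
            poll(&pfd, 1, -1);              /* sleep until the VIRQ fires  */
            evtchn_port_t port = xc_evtchn_pending(xce);
            /* ... pull the event records out of 'shared' ... */
            xc_evtchn_unmask(xce, port);
        }
    }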
Yet another alternative would be to use the trace buffer itself and convey
information using trace events. The trace buffer currently doesn't guarantee
not to drop messages so you'd need to either modify it to support lossless
semantics somehow or work around this in your code.
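For example, at the point in the hypervisor where you detect the modification
you'd emit something like the line below, with TRC_INTROSPECT being a made-up
trace class of your own and addr the modified address (trace record payloads
are 32-bit words, hence the split):
    /* Hypervisor side: log the modified address as a trace record. */
    TRACE_2D(TRC_INTROSPECT, (uint32_t)addr, (uint32_t)(addr >> 32));
The records then get picked up in dom0 by the existing xentrace tooling.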
Cheers,
Mark
--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
* Hypervisor to dom0 communication
@ 2012-11-12 21:12 Razvan Cojocaru
2012-11-13 9:59 ` Ian Campbell
0 siblings, 1 reply; 13+ messages in thread
From: Razvan Cojocaru @ 2012-11-12 21:12 UTC (permalink / raw)
To: xen-devel
Hello,
I'm interested in establishing a communication channel between the Xen
hypervisor and a dom0 userspace application. Ideally this would be a
2-way channel, with the hypervisor asynchronously notifying this
application, and then (for certain types of messages) waiting for a reply
from the userspace consumer.
To this end, I've been reading xen-devel posts [1], read the
xentrace-related Xen source code [2], and studied tools such as Ether
[3]. I've also read as much as time permitted of "The Definitive Guide
to the Xen Hypervisor" [4] book. However, I have very limited experience
as a kernel developer, and none working on the Xen hypervisor, so I'm
still digesting the information.
A particularly good choice would be riding on xentrace, since that's already
been tested, has its own tools, and is the fastest way to get data
moving in one direction. However (putting aside the fact that it's not
bidirectional communication), the problem is that the messages I'd like
to pass to my application are longer than 28 bytes, so to get
what I need I'd have to either increase the default trace record size or
send each message in several chunks. Neither of these solutions looks
elegant to me.
So I'm thinking about writing a custom channel from scratch, and I'd
like to know what the best way to proceed is. I'll likely need to add a
new VIRQ like the TRACE VIRQ and use that for notifications. However,
allocating and sharing a new page is trickier. I've read about grant
tables and HYPERVISOR_grant_table_op, but trace.c simply calls
share_xen_page_with_privileged_guests(). I've read about ring buffers
and DEFINE_RING_TYPES, but trace.{h,c} makes no use of that
macro. Is "The Definitive Guide to the Xen Hypervisor" still
relevant?
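To make the question more concrete, the hypervisor-side shape I've pieced
together from reading trace.c looks roughly like this (untested pseudocode:
the function names are just what I found in the 4.x source, VIRQ_MYCHANNEL is
something I'd have to define myself, and I may well have misread how the
sharing works):
    /* Allocate one page, share it with dom0, notify dom0 via a new VIRQ. */
    void *buf = alloc_xenheap_pages(0, 0);
    share_xen_page_with_privileged_guests(virt_to_page(buf),
                                          XENSHARE_writable);

    /* ... later, after writing an event record into buf ... */
    send_guest_global_virq(dom0, VIRQ_MYCHANNEL);
    /* (or send_global_virq(VIRQ_MYCHANNEL), depending on the tree) */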
What would you recommend for my case? Where (if anywhere) might I be
able to find a clear, concise example of allocating and sharing memory
pages containing a ring buffer used for hypervisor <-> dom0 userspace
communication (hopefully even simpler than trace.c)?
What resources would you recommend to a developer new to the Xen hypervisor
to accelerate learning and facilitate Xen enlightenment?
Thank you,
Razvan
[1]
http://old-list-archives.xen.org/archives/html/xen-devel/2008-07/msg00589.html
[2] http://code.metager.de/source/xref/xen/xen/common/trace.c
[3] http://ether.gtisc.gatech.edu/source.html
[4] http://www.amazon.com/The-Definitive-Guide-Xen-Hypervisor/dp/013234971X
* Re: Hypervisor to dom0 communication
2012-11-12 21:12 Hypervisor to dom0 communication Razvan Cojocaru
@ 2012-11-13 9:59 ` Ian Campbell
2012-11-13 10:26 ` Razvan Cojocaru
0 siblings, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2012-11-13 9:59 UTC (permalink / raw)
To: Razvan Cojocaru; +Cc: xen-devel@lists.xen.org
On Mon, 2012-11-12 at 21:12 +0000, Razvan Cojocaru wrote:
> Hello,
>
> I'm interested in establishing a communication channel between the Xen
> hypervisor and a dom0 userspace application. Ideally this would be a
> 2-way channel, with the hypervisor asynchronously notifying this
> application, and then (for certain types of messages) waiting for a reply
> from the userspace consumer.
Sounds a lot like the model used by the IOREQs, which is the mechanism
for dispatching MMIO emulation to the qemu process for HVM domains.
> What would you recommend for my case? Where (if anywhere) might I be
> able to find a clear, concise example of allocating and sharing memory
> pages containing a ring buffer used for hypervisor <-> dom0 userspace
> communication (hopefully even simpler than trace.c)?
I think I'd look at the ioreq stuff in preference to the trace stuff.
You probably want to use a normal evtchn rather than VIRQ. The code
which handles HVM_PARAM_BUFIOREQ_PFN and HVM_PARAM_BUFIOREQ_EVTCHN
should give a reasonable starting point (I think; I'm not actually that
familiar with this bit of Xen).
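From memory, the userspace side of that dance goes something like the sketch
below (do double-check it against the qemu and libxc sources; domid is the
target domain and all error handling is omitted):
    #include <xenctrl.h>
    #include <xen/hvm/params.h>
    #include <sys/mman.h>

    /* Map the buffered-ioreq page of 'domid' and bind its event channel,
     * the way qemu does for HVM emulation. */
    static void *map_bufioreq(domid_t domid, xc_evtchn **xce_out)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        unsigned long pfn, remote_port;

        xc_get_hvm_param(xch, domid, HVM_PARAM_BUFIOREQ_PFN, &pfn);
        xc_get_hvm_param(xch, domid, HVM_PARAM_BUFIOREQ_EVTCHN, &remote_port);

        void *shared = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                            PROT_READ | PROT_WRITE, pfn);

        *xce_out = xc_evtchn_open(NULL, 0);
        xc_evtchn_bind_interdomain(*xce_out, domid, remote_port);

        /* Caller then polls xc_evtchn_fd(*xce_out) and reads 'shared'. */
        return shared;
    }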
It might also be useful if you describe your actual end-goal -- i.e.
what you are ultimately trying to achieve. It may be that there are
better approaches or even existing solutions or things which are already
close to your needs.
Ian.
* Re: Hypervisor to dom0 communication
2012-11-13 9:59 ` Ian Campbell
@ 2012-11-13 10:26 ` Razvan Cojocaru
2012-11-13 10:36 ` Ian Campbell
0 siblings, 1 reply; 13+ messages in thread
From: Razvan Cojocaru @ 2012-11-13 10:26 UTC (permalink / raw)
To: xen-devel@lists.xen.org
> I think I'd look at the ioreq stuff in preference to the trace stuff.
> You probably want to use a normal evtchn rather than VIRQ. The code
> which handles HVM_PARAM_BUFIOREQ_PFN and HVM_PARAM_BUFIOREQ_EVTCHN
> should give a reasonable starting point (I think; I'm not actually that
> familiar with this bit of Xen).
Thanks, I'll look that up.
> It might also be useful if you describe your actual end-goal -- i.e.
> what you are ultimately trying to achieve. It may be that there are
> better approaches or even existing solutions or things which are already
> close to your needs.
OK, my immediate end-goal is real-time logging of hypervisor events via
a dom0 userspace application. These events are always about a currently
running virtual machine, and said virtual machine is paused at the time.
The userspace tool should be immediately notified, so no polling.
Ideally, in the future, based on information received from the
hypervisor, the dom0 application would control other virtual machines
(restart them, shut them down, and so on). This, I'm thinking, could
be done either by replying to the hypervisor itself via the
communication channel and doing some work there, or, since the application
is in dom0, by using libxl or some other available userspace tool.
However, the paused virtual machine should not be allowed to resume
unless the userspace application replies that it is OK to resume.
Hope that made sense,
Razvan
* Re: Hypervisor to dom0 communication
2012-11-13 10:26 ` Razvan Cojocaru
@ 2012-11-13 10:36 ` Ian Campbell
2012-11-13 10:49 ` Razvan Cojocaru
0 siblings, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2012-11-13 10:36 UTC (permalink / raw)
To: Razvan Cojocaru; +Cc: xen-devel@lists.xen.org
On Tue, 2012-11-13 at 10:26 +0000, Razvan Cojocaru wrote:
> > I think I'd look at the ioreq stuff in preference to the trace stuff.
> > You probably want to use a normal evtchn rather than VIRQ. The code
> > which handles HVM_PARAM_BUFIOREQ_PFN and HVM_PARAM_BUFIOREQ_EVTCHN
> > should give a reasonable starting point (I think; I'm not actually that
> > familiar with this bit of Xen).
>
> Thanks, I'll look that up.
>
> > It might also be useful if you describe your actual end-goal -- i.e.
> > what you are ultimately trying to achieve. It may be that there are
> > better approaches or even existing solutions or things which are already
> > close to your needs.
>
> OK, my immediate end-goal is real-time logging of hypervisor events via
> a dom0 userspace application. These events are always about a currently
> running virtual machine, and said virtual machine is paused at the time.
> The userspace tool should be immediately notified, so no polling.
This is very like the ioreq model, where the domain (or maybe just the
vcpu, I'm not sure) is paused while qemu does its thing.
What sort of events are we talking about here?
> Ideally, in the future, based on information received from the
> hypervisor, the dom0 application would control other virtual machines
> (restart them, shut them down, and so on). This, I'm thinking, could
> be done either by replying to the hypervisor itself via the
> communication channel and doing some work there, or, since the application
> is in dom0, by using libxl or some other available userspace tool.
The right thing to do here is for the userspace tool to communicate with
the toolstack (by whatever means) rather than the hypervisor in order to
control the domains.
> However, the paused virtual machine should not be allowed to resume
> unless the userspace application replies that it is OK to resume.
You might also find some inspiration for this sort of model in the
xenpaging and memshare code.
Ian.
* Re: Hypervisor to dom0 communication
2012-11-13 10:36 ` Ian Campbell
@ 2012-11-13 10:49 ` Razvan Cojocaru
2012-11-13 11:12 ` Ian Campbell
2012-11-15 12:10 ` Tim Deegan
0 siblings, 2 replies; 13+ messages in thread
From: Razvan Cojocaru @ 2012-11-13 10:49 UTC (permalink / raw)
To: xen-devel@lists.xen.org
>> OK, my immediate end-goal is real-time logging of hypervisor events via
>> a dom0 userspace application. These events are always about a currently
>> running virtual machine, and said virtual machine is paused at the time.
>> The userspace tool should be immediately notified, so no polling.
>
> This is very like the ioreq model, where the domain (or maybe just the
> vcpu, I'm not sure) is paused while qemu does its thing.
>
> What sort of events are we talking about here?
A list of interesting registers that changed, that a page fault occurred,
things like that, occasionally containing some small string messages
with extra information. Should be around 64 bytes or so.
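For what it's worth, the kind of record I have in mind is roughly the
following (just a strawman layout, padded to 64 bytes):
    #include <stdint.h>

    struct introspect_event {
        uint32_t domain_id;
        uint32_t vcpu_id;
        uint32_t type;        /* register write, page fault, ... */
        uint32_t flags;
        uint64_t gla;         /* e.g. faulting address           */
        uint64_t old_value;   /* register value before the write */
        uint64_t new_value;   /* register value after the write  */
        char     msg[24];     /* short free-form message         */
    };                        /* 64 bytes total                  */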
> The right thing to do here is for the userspace tool to communicate with
> the toolstack (by whatever means) rather than the hypervisor in order to
> control the domains.
That's what I was thinking too.
> You might also find some inspiration for this sort of model in the
> xenpaging and memshare code.
Will look those things up, I appreciate the replies.
Thanks,
Razvan
* Re: Hypervisor to dom0 communication
2012-11-13 10:49 ` Razvan Cojocaru
@ 2012-11-13 11:12 ` Ian Campbell
2012-11-13 11:24 ` Razvan Cojocaru
2012-11-15 12:10 ` Tim Deegan
1 sibling, 1 reply; 13+ messages in thread
From: Ian Campbell @ 2012-11-13 11:12 UTC (permalink / raw)
To: Razvan Cojocaru; +Cc: xen-devel@lists.xen.org
On Tue, 2012-11-13 at 10:49 +0000, Razvan Cojocaru wrote:
> >> OK, my immediate end-goal is real-time logging of hypervisor events via
> >> a dom0 userspace application. These events are always about a currently
> >> running virtual machine, and said virtual machine is paused at the time.
> >> The userspace tool should be immediately notified, so no polling.
> >
> > This is very like the ioreq model, where the domain (or maybe just the
> > vcpu, I'm not sure) is paused while qemu does its thing.
> >
> > What sort of events are we talking about here?
>
> A list of interesting registers that changed, that a page fault occurred,
> things like that, occasionally containing some small string messages
> with extra information. Should be around 64 bytes or so.
Perhaps it would be useful to build on the vmitools (previously
xenaccess)? http://code.google.com/p/vmitools/
I don't know what level of support it has for trapping on certain events,
but a model where the guest is stopped by an event, the tools are
signalled, and VMI is used to figure out what changed might be an
interesting one.
In general we prefer not to put stuff in the hypervisor unless
absolutely required.
Ian.
* Re: Hypervisor to dom0 communication
2012-11-13 11:12 ` Ian Campbell
@ 2012-11-13 11:24 ` Razvan Cojocaru
2012-11-15 14:26 ` Steven Maresca
0 siblings, 1 reply; 13+ messages in thread
From: Razvan Cojocaru @ 2012-11-13 11:24 UTC (permalink / raw)
To: xen-devel@lists.xen.org
> Perhaps it would be useful to build on the vmitools (previously
> xenaccess)? http://code.google.com/p/vmitools/
Indeed, but LibVMI does not offer event-based access, and polling is not
really a solution.
> In general we prefer not to put stuff in the hypervisor unless
> absolutely required.
Me too :)
Thanks,
Razvan
* Re: Hypervisor to dom0 communication
2012-11-13 10:49 ` Razvan Cojocaru
2012-11-13 11:12 ` Ian Campbell
@ 2012-11-15 12:10 ` Tim Deegan
1 sibling, 0 replies; 13+ messages in thread
From: Tim Deegan @ 2012-11-15 12:10 UTC (permalink / raw)
To: Razvan Cojocaru; +Cc: xen-devel@lists.xen.org
At 12:49 +0200 on 13 Nov (1352810986), Razvan Cojocaru wrote:
> >>OK, my immediate end-goal is real-time logging of hypervisor events via
> >>a dom0 userspace application. These events are always about a currently
> >>running virtual machine, and said virtual machine is paused at the time.
> >>The userspace tool should be immediately notified, so no polling.
> >
> >This is very like the ioreq model, where the domain (or maybe just the
> >vcpu, I'm not sure) is paused while qemu does its thing.
> >
> >What sort of events are we talking about here?
>
> A list of interesting registers that changed, that a page fault occurred,
> things like that, occasionally containing some small string messages
> with extra information. Should be around 64 bytes or so.
>
> >The right thing to do here is for the userspace tool to communicate with
> >the toolstack (by whatever means) rather than the hypervisor in order to
> >control the domains.
>
> That's what I was thinking too.
>
> >You might also find some inspiration for this sort of model in the
> >xenpaging and memshare code.
Yes, the mem-event API that they use is pretty well suited to this -- it
sends a series of events to userspace, with the option of pausing the
guest VCPU until userspace acknowledges the event. I seem to recall the
mem-access API already allows for certain kinds of register info (CR3
writes) to be sent on that channel.
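(For reference, a consumer of that ring has roughly the shape below, modelled
on tools/xenpaging/xenpaging.c; the setup of the shared ring page and the
event channel is omitted, so treat it as a sketch rather than working code.)
    #include <string.h>
    #include <xenctrl.h>
    #include <xen/mem_event.h>
    #include <xen/io/ring.h>

    /* back_ring has already been set up with BACK_RING_INIT() over the
     * shared ring page obtained when mem-events were enabled. */
    static void consume_events(mem_event_back_ring_t *back_ring)
    {
        while (RING_HAS_UNCONSUMED_REQUESTS(back_ring)) {
            mem_event_request_t req;
            mem_event_response_t rsp;

            memcpy(&req, RING_GET_REQUEST(back_ring, back_ring->req_cons),
                   sizeof(req));
            back_ring->req_cons++;

            /* ... log the event, decide whether the vcpu may continue ... */

            memset(&rsp, 0, sizeof(rsp));
            rsp.vcpu_id = req.vcpu_id;
            rsp.flags = req.flags;    /* e.g. MEM_EVENT_FLAG_VCPU_PAUSED */
            memcpy(RING_GET_RESPONSE(back_ring, back_ring->rsp_prod_pvt),
                   &rsp, sizeof(rsp));
            back_ring->rsp_prod_pvt++;
            RING_PUSH_RESPONSES(back_ring);
            /* ... then kick the event channel so Xen unpauses the vcpu. */
        }
    }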
Cheers,
Tim.
* Re: Hypervisor to dom0 communication
2012-11-13 11:24 ` Razvan Cojocaru
@ 2012-11-15 14:26 ` Steven Maresca
2012-11-15 14:37 ` Razvan Cojocaru
2012-11-16 16:51 ` Razvan Cojocaru
0 siblings, 2 replies; 13+ messages in thread
From: Steven Maresca @ 2012-11-15 14:26 UTC (permalink / raw)
To: Razvan Cojocaru; +Cc: xen-devel@lists.xen.org
On Tue, Nov 13, 2012 at 6:24 AM, Razvan Cojocaru <rzvncj@gmail.com> wrote:
>> Perhaps it would be useful to build on the vmitools (previously
>> xenaccess)? http://code.google.com/p/vmitools/
>
>
> Indeed, but LibVMI does not offer event-based access, and polling is not
> really a solution.
>
>
>> In general we prefer not to put stuff in the hypervisor unless
>> absolutely required.
>
>
> Me too :)
>
> Thanks,
> Razvan
FYI, vmitools (LibVMI) does have support for memory events, just not
in the main branch. The events branch works properly for 4.1.x; I have
updated it to support 4.2, but due to other more pressing matters, I
have not had time to commit it to the main branch.
If you're interested in using the memory event facilities abstracted
via LibVMI, please let me know.
Steve
* Re: Hypervisor to dom0 communication
2012-11-15 14:26 ` Steven Maresca
@ 2012-11-15 14:37 ` Razvan Cojocaru
2012-11-16 16:51 ` Razvan Cojocaru
1 sibling, 0 replies; 13+ messages in thread
From: Razvan Cojocaru @ 2012-11-15 14:37 UTC (permalink / raw)
To: Steven Maresca; +Cc: xen-devel@lists.xen.org
> FYI, vmitools (LibVMI) does have support for memory events, just not
> in the main branch. The events branch works properly for 4.1.x; I have
> updated it to support 4.2, but due to other more pressing matters, I
> have not had time to commit it to the main branch.
>
> If you're interested in using the memory event facilities abstracted
> via LibVMI, please let me know.
I am. I've already pulled the events branch from your git server to try
out (on an up-to-date 64-bit Arch Linux machine), but it didn't compile:
make[3]: Entering directory `vmitools/libvmi'
/bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I.
-I.. -I.. -fvisibility=hidden -I/usr/include/glib-2.0
-I/usr/lib/glib-2.0/include -g -O2 -MT driver/libvmi_la-interface.lo
-MD -MP -MF driver/.deps/libvmi_la-interface.Tpo -c -o
driver/libvmi_la-interface.lo `test -f 'driver/interface.c' || echo
'./'`driver/interface.c
[...]
In file included from ./driver/xen.h:29:0,
from driver/interface.c:29:
./driver/xen_events.h:66:5: error: unknown type name
'mem_event_shared_page_t'
Maybe I missed some ./configure flag?
Thanks,
Razvan
* Re: Hypervisor to dom0 communication
2012-11-15 14:26 ` Steven Maresca
2012-11-15 14:37 ` Razvan Cojocaru
@ 2012-11-16 16:51 ` Razvan Cojocaru
1 sibling, 0 replies; 13+ messages in thread
From: Razvan Cojocaru @ 2012-11-16 16:51 UTC (permalink / raw)
To: Steven Maresca; +Cc: xen-devel@lists.xen.org
> FYI, vmitools (LibVMI) does have support for memory events, just not
> in the main branch. The events branch works properly for 4.1.x; I have
> updated it to support 4.2, but due to other more pressing matters, I
> have not had time to commit it to the main branch.
It compiled fine on my Gentoo box. I think the problem I reported before
(compilation errors on an up-to-date Arch Linux box) was due to the
fact that the default Xen version on Arch Linux is 4.2, whereas I've got Xen
4.1 installed on my home Gentoo machine.
Please let us know when event support for Xen 4.2 is added; I'm sure
there are quite a few of us interested in that.
Thanks,
Razvan