public inbox for kvm@vger.kernel.org
From: "Mihai Donțu" <mdontu@bitdefender.com>
To: Jan Kiszka <jan.kiszka@web.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Stephen Pape <srpape@gmail.com>,
	kvm@vger.kernel.org
Subject: Re: Introspection API development
Date: Thu, 4 Aug 2016 16:57:23 +0300
Message-ID: <20160804165723.4b199de2@bitdefender.com> (raw)
In-Reply-To: <5435bb72-028a-4c5b-9d96-276b7fca18af@web.de>


On Thursday 04 August 2016 14:44:10 Jan Kiszka wrote:
> On 2016-08-04 13:18, Mihai Donțu wrote:
> > On Thu, 4 Aug 2016 10:50:30 +0200 Paolo Bonzini wrote:  
> > > On 04/08/2016 05:25, Stephen Pape wrote:  
> > > > My approach involves modifying the kernel driver to export a
> > > > /dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
> > > > as well.
> > > >
> > > > My (relatively minor) patch allows processes besides the launching
> > > > process to do things like map guest memory and read VCPU states for a
> > > > VM. Soon, I'll be looking into adding support for handling events (cr3
> > > > writes, int3 traps, etc.). Eventually, an event should come in, a
> > > > program will handle it (while able to read memory/registers), and then
> > > > resume the VCPU.
> > >
> > > I think the interface should be implemented entirely in userspace and it
> > > should be *mostly* socket-based; I say mostly because I understand that
> > > reading memory directly can be useful.  
> > 
> > We are working on something similar, but we're looking into making it
> > entirely in kernel and possibly leveraging VirtIO, due to performance
> > considerations (mostly caused by the overhead of hw virtualization).
> > 
> > The model we're aiming for is: on a KVM host, out of the N running VM-s, one
> > has special privileges allowing it to manipulate the memory and vCPU
> > state of the others. We call that special VM an SVA (Security Virtual
> > Appliance) and it uses a channel (much like the one found on Xen -
> > evtchn) and a set of specific VMCALL-s to:
> > 
> >   * receive notifications from the host when a new VM is
> >     created/destroyed
> >   * manipulate the EPT of a specific VM
> >   * manipulate the vCPU state of a specific VM (GPRs)
> >   * manipulate the memory of a specific VM (insert code)
> > 
> > We don't have much code in place at the moment, but we plan to post an
> > RFC series in the near future.
> > 
> > Obviously we've tried the userspace / qemu approach since it would have
> > made development _much_ easier, but it's simply not "performant" enough.  
> 
> What was the bottleneck? VCPU state monitoring/manipulation, VM memory
> access, or GPA-to-HPA (i.e. EPT on Intel) manipulations? I suppose that
> information will be essential when you want to convince the maintainers
> to add another kernel interface (at a time when they are rather being
> reduced).

OK, a bit of background on my observations: I initially started with a
POC in which an introspecting tool resides entirely in the host kernel.
That tool tracks the VM from creation to destruction and begins by
hooking some MSR-s (to determine when the guest kernel has reached a
certain load stage) and then marking a range of pages as RO in the EPT.
Anyway, the tool worked OK and the VM behaved decently (considering my
hooks into the KVM MMU were not the best), but it gave me a sense of
the performance obtainable. Then I factored in the idea that the tool
should run in userspace (host or guest VM) to prevent it from bringing
down the entire host if it were to crash.

Right now, I'm not really trying to convince anyone of anything, as I'm
not truly convinced of my approach either. I need to make more progress
but the feeling that I will need to bypass qemu is pretty strong. :-)

> > This whole KVM work is actually a "glue" to an introspection technology
> > we have developed and which uses extensive hooking (via EPT) to monitor
> > execution of the kernel and user-mode processes, all the while aiming
> > to shave at most 20% out of the performance of each VM (in a 100-VM
> > setup).
> >   
> > > So this is a lot like a mix of two interfaces:
> > >
> > > - a debugger interface which lets you read/write registers and set events
> > >
> > > - the vhost-user interface which lets you pass the memory map (a mapping
> > > between guest physical addresses and offsets in a file descriptor) from
> > > QEMU to another process.
> > >
> > > The gdb stub protocol seems limited for the kind of event you want to
> > > trap, but there was already a GSoC project a few years ago that looked
> > > at gdb protocol extensions.  Jan, what was the outcome?
> > >
> > > In any case, I think there should be a separation between the ioctl KVM
> > > API and the socket userspace API.  By the way most of the KVM API is
> > > already there---e.g. reading/writing registers, breakpoints,
> > > etc.---though you'll want to add events such as cr3 or idtr writes.
> > >  
> > > > My question is, is this anything the KVM group would be interested in
> > > > bringing upstream? I'd definitely be willing to change my approach if
> > > > necessary. If there's no interest, I'll just have to maintain my own
> > > > patches.

-- 
Mihai DONȚU



Thread overview: 10+ messages
2016-08-04  3:25 Introspection API development Stephen Pape
2016-08-04  8:50 ` Paolo Bonzini
2016-08-04  8:55   ` Jan Kiszka
2016-08-04 11:18   ` Mihai Donțu
2016-08-04 12:44     ` Jan Kiszka
2016-08-04 13:57       ` Mihai Donțu [this message]
2016-08-04 12:56     ` Paolo Bonzini
2016-08-04 13:41       ` Mihai Donțu
2016-08-04 12:48 ` Stefan Hajnoczi
2016-08-04 15:08   ` Stephen Pape
