From: Jan Kiszka <jan.kiszka@siemens.com>
To: luis.henrix@gmail.com
Cc: linux-rt-users@vger.kernel.org, frank.rowand@am.sony.com
Subject: Re: Linux, RT and virtualisation
Date: Tue, 22 Jun 2010 20:29:03 +0200	[thread overview]
Message-ID: <4C2100EF.7030905@siemens.com> (raw)
In-Reply-To: <20100622172414.GA13372@hades>

luis.henrix@gmail.com wrote:
> Hi,
> 
> I have the following scenario: a legacy application with RT constraints
> that needs to be replicated.  Basically, I need to run several instances
> of this application on a single multi-core box.  However, this is not as
> simple as it sounds because the application assumes several things such
> as exclusive access to HW, etc.
> 
> So, instead of re-designing the application to co-exist with different
> instances, I was wondering whether this could be done using a lazy
> approach: running each instance within a virtual machine.
> 
> I have enough cores available so that I can actually dedicate 1 or more
> cores to each VM, but the problem is: will the application still be able
> to meet its RT requirements?

What are those RT requirements (order of magnitude, hard/soft, i.e. what
may happen if some deadline is missed)?
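
If you want to put a number on that first: cyclictest inside the guest is
the usual tool, but even a minimal sleep-and-measure loop like the sketch
below tells you the order of magnitude. The interval and loop count are
placeholders, and I left out the mlockall()/SCHED_FIFO setup for brevity,
so treat it as a rough probe, not a benchmark:

/*
 * Minimal wake-up latency probe, just to put a number on the
 * requirements. Interval and loop count are placeholders; a real
 * measurement would additionally use mlockall() and SCHED_FIFO
 * (which is what cyclictest does). Older glibc needs -lrt.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define INTERVAL_NS     1000000         /* 1 ms period, adjust as needed */
#define LOOPS           10000

int main(void)
{
        struct timespec next, now;
        int64_t lat, max = 0;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (i = 0; i < LOOPS; i++) {
                next.tv_nsec += INTERVAL_NS;
                while (next.tv_nsec >= 1000000000L) {
                        next.tv_nsec -= 1000000000L;
                        next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                clock_gettime(CLOCK_MONOTONIC, &now);
                /* how late did we actually wake up? */
                lat = (int64_t)(now.tv_sec - next.tv_sec) * 1000000000LL +
                      (now.tv_nsec - next.tv_nsec);
                if (lat > max)
                        max = lat;
        }
        printf("max wake-up latency: %lld us\n", (long long)(max / 1000));
        return 0;
}

Comparing the worst case on bare metal against the same loop inside the
VM already tells you a lot.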

>  I guess that, if two VMs share the same
> core(s), meeting the deadlines will not be possible without having a
> special scheduler in the VM manager.  But what about if all the VMs have
> their own cores?
> 
> Of course there is still the issue with the shared access to the HW,
> but since this HW (Ethernet NICs) also has support for virtualisation,
> I could create virtual NICs for each of the VM instances.

For the tests Frank cited, I tried to avoid device emulation as far as
possible because it can be a bottleneck in QEMU (and thus in KVM as
well), specifically if you go below the millisecond range and there is
other guest I/O running in parallel. Still, whether that may hurt you
depends on your RT requirements.
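
As a rough way to check whether the virtual NIC path fits your budget,
you can bounce small UDP packets off an echo service on the peer and
look at the worst round trip while other guest I/O is running. Peer
address, port and packet count below are placeholders and error handling
is omitted; it is only a sketch:

/*
 * Rough RTT probe through the (virtual) NIC: send small UDP packets
 * to an echo service on the peer and report the worst round trip.
 * PEER_IP/PEER_PORT/COUNT are placeholders, no error handling.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define PEER_IP         "192.168.0.2"   /* placeholder: the peer VM/host */
#define PEER_PORT       7               /* standard echo port, if enabled */
#define COUNT           1000

int main(void)
{
        struct sockaddr_in peer = { .sin_family = AF_INET,
                                    .sin_port = htons(PEER_PORT) };
        struct timespec t0, t1;
        char buf[64] = "ping";
        int64_t rtt, max = 0;
        int s, i;

        inet_pton(AF_INET, PEER_IP, &peer.sin_addr);
        s = socket(AF_INET, SOCK_DGRAM, 0);
        connect(s, (struct sockaddr *)&peer, sizeof(peer));

        for (i = 0; i < COUNT; i++) {
                clock_gettime(CLOCK_MONOTONIC, &t0);
                send(s, buf, sizeof(buf), 0);
                recv(s, buf, sizeof(buf), 0);   /* blocks for the echo */
                clock_gettime(CLOCK_MONOTONIC, &t1);
                rtt = (int64_t)(t1.tv_sec - t0.tv_sec) * 1000000000LL +
                      (t1.tv_nsec - t0.tv_nsec);
                if (rtt > max)
                        max = rtt;
        }
        printf("worst UDP round trip: %lld us\n", (long long)(max / 1000));
        close(s);
        return 0;
}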

> 
> Any experiences/thoughts/links?  Would preemptrt+Xen be able to do this?

Xen uses QEMU (a variant of it) in Dom0 for device emulation. Moreover,
you would have to merge Xen's Dom0 patches with Preempt-RT patches -
well, challenging, I bet.

> preemptrt+kvm? Other options?

Preempt-RT + KVM will at least allow you to tweak a lot, benefit from
ongoing optimizations in both projects, and maybe even apply some "dirty
tricks" to the hypervisor. IMO, a good starting point unless your
requirements are way off.
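
To give an idea of what I mean by tweaking: one of the first knobs is to
pin each QEMU vCPU thread to its own (ideally otherwise idle) core and
raise it to SCHED_FIFO. taskset(1) and chrt(1) do the same from the
shell; the sketch below does it programmatically. Core number and
priority are placeholders, and you have to look up the vCPU TIDs
yourself (e.g. under /proc/<qemu-pid>/task/):

/*
 * Pin a QEMU/KVM vCPU thread to a dedicated core and give it an RT
 * priority. Core and priority defaults are placeholders; the TID has
 * to be determined by hand. Run as root.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sched.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
        struct sched_param sp = { .sched_priority = 50 };  /* placeholder */
        cpu_set_t set;
        pid_t tid;
        int core = 3;                                      /* placeholder */

        if (argc < 2) {
                fprintf(stderr, "usage: %s <vcpu-tid> [core]\n", argv[0]);
                return 1;
        }
        tid = atoi(argv[1]);
        if (argc > 2)
                core = atoi(argv[2]);

        CPU_ZERO(&set);
        CPU_SET(core, &set);
        if (sched_setaffinity(tid, sizeof(set), &set)) {
                perror("sched_setaffinity");
                return 1;
        }
        if (sched_setscheduler(tid, SCHED_FIFO, &sp)) {
                perror("sched_setscheduler");
                return 1;
        }
        printf("vCPU thread %d pinned to core %d, SCHED_FIFO prio %d\n",
               (int)tid, core, sp.sched_priority);
        return 0;
}

With isolcpus= on the host kernel command line for those cores, the vCPU
threads do not even have to compete with other host tasks.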

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux

Thread overview: 7+ messages
2010-06-22 17:24 Linux, RT and virtualisation luis.henrix
2010-06-22 18:07 ` Frank Rowand
2010-06-22 18:27   ` Nicholas Mc Guire
2010-06-22 18:49   ` Luis Henriques
2010-06-22 20:11     ` Sven-Thorsten Dietrich
2010-06-22 18:29 ` Jan Kiszka [this message]
2010-06-22 19:04   ` Luis Henriques
