linux-embedded.vger.kernel.org archive mirror
From: Wolfgang Grandegger <wg@grandegger.com>
To: Ben Nizette <bn@niasdigital.com>
Cc: Nicholas Mc Guire <hofrat@hofr.at>,
	linux-embedded <linux-embedded@vger.kernel.org>
Subject: Re: UIO - interrupt performance
Date: Tue, 21 Oct 2008 08:57:41 +0200
Message-ID: <48FD7D65.1070409@grandegger.com>
In-Reply-To: <1224540757.3954.59.camel@moss.renham>

Ben Nizette wrote:
> On Mon, 2008-10-20 at 03:06 -0800, Nicholas Mc Guire wrote:
>>> On Mon, 2008-10-20 at 10:55 +0100, Douglas, Jim (Jim) wrote:
>>>> We are contemplating porting a large number of device drivers to Linux.
>>>> The pragmatic solution is to keep them in user mode (using the UIO
>>>> framework) where possible ... they are written in C++ for a start.
>>>>
>>>> The obvious disadvantages of user mode device drivers are security /
>>>> isolation.  The main benefit is ease of development.
>>>>
>>>> Do you know what the *technical* disadvantages of this approach might
>>>> be? I am most concerned about possible impact on interrupt handling.
>>>>
>>>> For example, I assume the context switching overhead is higher, and that
>>>> interrupt latency is more difficult to predict?
>>> Userspace drivers certainly aren't first-class citizens; UIO and
>>> kernel-mode drivers generally aren't really interchangeable.
>>>
>>> The technical disadvantages of userspace drivers are that you don't have
>>> access to kernel subsystems, and you can't run any userspace code in IRQ
>>> context, so everything needs to be scheduled before it can be dealt with.
>>> A UIO driver still needs a kernel component to acknowledge the
>>> interrupt.  As such, when you say "interrupt latency" you need to define
>>> the end point.  A UIO driver will have its in-kernel handler called
>>> just as quickly as any other driver, but the userspace app will need to
>>> be scheduled before it receives notification that the IRQ has fired.
>>>
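
The in-kernel half Ben describes can be very small.  The sketch below is
not from this thread; it registers a UIO device for a hypothetical
memory-mapped card called "mycard", and the register offsets and resource
handling are assumptions made purely for illustration.  The handler only
checks that the interrupt really came from the card and silences it; on
IRQ_HANDLED the UIO core bumps the event count and wakes any process
blocked on the corresponding /dev/uioX node.

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/uio_driver.h>
#include <linux/io.h>

/* Hypothetical register offsets of the imaginary "mycard" device. */
#define MYCARD_IRQ_STATUS	0x10
#define MYCARD_IRQ_ACK		0x14

static irqreturn_t mycard_irq(int irq, struct uio_info *info)
{
	void __iomem *regs = info->mem[0].internal_addr;

	/* Not our interrupt; let other handlers on a shared line run. */
	if (!(ioread32(regs + MYCARD_IRQ_STATUS) & 0x1))
		return IRQ_NONE;

	/* Silence the card.  Returning IRQ_HANDLED makes the UIO core
	 * increment the event counter and wake up userspace readers. */
	iowrite32(0x1, regs + MYCARD_IRQ_ACK);
	return IRQ_HANDLED;
}

static struct uio_info mycard_uio = {
	.name	 = "mycard",
	.version = "0.1",
	.handler = mycard_irq,
};

static int mycard_probe(struct platform_device *pdev)
{
	struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	int irq = platform_get_irq(pdev, 0);

	if (!res || irq < 0)
		return -ENODEV;

	/* Export the register window as UIO map 0 so the userspace half
	 * can mmap() it directly. */
	mycard_uio.mem[0].memtype       = UIO_MEM_PHYS;
	mycard_uio.mem[0].addr          = res->start;
	mycard_uio.mem[0].size          = resource_size(res);
	mycard_uio.mem[0].internal_addr =
		devm_ioremap(&pdev->dev, res->start, resource_size(res));
	if (!mycard_uio.mem[0].internal_addr)
		return -ENOMEM;

	mycard_uio.irq = irq;
	return uio_register_device(&pdev->dev, &mycard_uio);
}

static int mycard_remove(struct platform_device *pdev)
{
	uio_unregister_device(&mycard_uio);
	return 0;
}

static struct platform_driver mycard_driver = {
	.probe  = mycard_probe,
	.remove = mycard_remove,
	.driver = { .name = "mycard" },
};
module_platform_driver(mycard_driver);
MODULE_LICENSE("GPL");
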
>>> The technical advantage of a UIO driver is that devices which only need
>>> to shift data don't have to double-handle it.  For example, an ADC card
>>> doesn't need to move its results from hardware to kernel and then from
>>> kernel to userspace; it's just one fluid movement.
>>>
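
To make that "one fluid movement" concrete: the userspace half simply
mmap()s the memory region the kernel half exported and reads the results
in place.  A rough sketch follows; the /dev/uio0 node, the 4 KiB map size
and the result-register offset are assumptions for illustration, not taken
from any real driver.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/uio0", O_RDWR);	/* assumed device node */
	if (fd < 0) {
		perror("open /dev/uio0");
		return 1;
	}

	/* Map region 0 exported by the kernel half (mmap offset N*PAGE_SIZE
	 * selects UIO map N).  Results are read straight out of device
	 * memory; there is no copy from hardware to kernel to userspace. */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	volatile uint32_t *regs = p;

	printf("sample = %u\n", (unsigned int)regs[4]);	/* hypothetical register */

	munmap(p, 4096);
	close(fd);
	return 0;
}
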
>>> What kind of device drivers are you talking about?  They have to be of a
>>> fairly specific flavour to fit into the UIO model.  Linux isn't a
>>> microkernel; userspace drivers are quite restricted in their power.
>>>
>> Are these claims based on benchmarks of a specific driver?  I only know
>> of a single UIO driver for a Hilscher CIF card and one for an SMX
>> Cryptengine (I guess that's yours anyway), but none for an AD/DIO card.
>> If you know of such a driver, I would be interested in seeing its
>> performance.
> 
> When UIO was being discussed for inclusion, the example case being
> thrown around was for such an ADC card.  They claimed to have seen
> significant improvements in speed by avoiding the double-handling of
> data.  Come to think of it, I can't see that this specific driver has
> shown up...
> 
> But what kind of benchmarks do you want?  When I say "restricted in
> their power" I mean it more in terms of feature set than raw speed.
> Userspace drivers can't plug into kernel subsystems, so they can't, for
> example, be SPI hosts, terminal devices, network hardware, or anything
> else which sits in the middle of a standard stack.  All they can do is
> be notified of an interrupt and have direct access to a lump of memory.
> 
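
For the record, the "notified of an interrupt" part looks like this from
userspace: a blocking read() (or poll()/select()) on the UIO device node
returns once the in-kernel handler has signalled an event, and the value
read is the total interrupt count.  Again a sketch, assuming a /dev/uio0
node whose kernel half acknowledges the IRQ.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/uio0", O_RDONLY);	/* assumed device node */
	if (fd < 0) {
		perror("open /dev/uio0");
		return 1;
	}

	for (;;) {
		uint32_t count;

		/* Blocks until the kernel half returns IRQ_HANDLED.  The
		 * IRQ itself was already acknowledged in the kernel; the
		 * scheduling delay before this read() returns is the extra
		 * latency discussed above. */
		if (read(fd, &count, sizeof(count)) != sizeof(count)) {
			perror("read");
			break;
		}
		printf("interrupt #%u\n", (unsigned int)count);
	}

	close(fd);
	return 0;
}
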
> As I asked before, what's your use case?  It tends to be fairly obvious
> whether the hardware is suitable for a UIO-based driver or whether it's
> going to have to live in the kernel.
> 
>> Also, if you know of any simple UIO sample drivers, that would help too.
> 
> As in examples of the userspace half?  Unfortunately uio-smx isn't ready
> to fly thanks to some significant production delays, but the userspace
> half of the Hilscher CIF driver can be found at
> http://www.osadl.org/projects/downloads/UIO/user/

As I see it, it is mainly the license conditions that attract people to
UIO.  Performance is not that important.

Wolfgang.

Thread overview: 18+ messages
2008-10-20  9:55 UIO - interrupt performance Douglas, Jim (Jim)
2008-10-20 10:28 ` Ben Nizette
     [not found]   ` <Pine.LNX.4.58.0810200258210.2562@vlab.hofr.at>
2008-10-20 22:12     ` Ben Nizette
2008-10-21  6:57       ` Wolfgang Grandegger [this message]
2008-10-21  9:32         ` Ben Nizette
2008-10-20 10:30 ` Christian SCHWARZ
2008-10-20 11:55 ` Marco Stornelli
2008-10-20 13:20   ` Paul Mundt
2008-10-20 16:13   ` Bill Gatliff
2008-10-21  8:36     ` Marco Stornelli
2008-10-21  9:01       ` Alessio Igor Bogani
2008-10-21  9:30         ` Marco Stornelli
2008-10-21  9:37           ` Ben Nizette
2008-10-21 10:24             ` Marco Stornelli
2008-10-21 10:28             ` Wolfgang Grandegger
2008-10-21 11:39               ` Bill Gatliff
2008-10-20 12:48 ` Thomas Petazzoni
2008-10-20 16:25   ` Bill Gatliff
