From: Gleb Natapov <gleb@redhat.com>
To: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Avi Kivity <avi@redhat.com>,
john cooper <john.cooper@third-harmonic.com>,
Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>,
linux-kernel@vger.kernel.org, mingo@elte.hu, mtosatti@redhat.com,
zamsden@redhat.com
Subject: Re: use of setjmp/longjmp in x86 emulator.
Date: Tue, 9 Mar 2010 08:28:03 +0200
Message-ID: <20100309062803.GI16909@redhat.com>
In-Reply-To: <m1aauipfwq.fsf@fess.ebiederm.org>
On Mon, Mar 08, 2010 at 03:11:49PM -0800, Eric W. Biederman wrote:
> Avi Kivity <avi@redhat.com> writes:
>
> > On 03/02/2010 09:28 AM, Gleb Natapov wrote:
> >> On Mon, Mar 01, 2010 at 02:13:32PM -0500, john cooper wrote:
> >>
> >>> Gleb Natapov wrote:
> >>>
> >>>
> >>>> Think about what happens if, in the middle of
> >>>> instruction emulation, some data from a device emulated in userspace
> >>>> is needed. The emulator should be able to tell KVM that an exit to
> >>>> userspace is needed, and restart instruction emulation once the data
> >>>> is available.
> >>>>
> >>> setjmp/longjmp are useful constructs in general but
> >>> IME are better suited for infrequent exceptions vs.
> >>> routine usage.
> >>>
> >> An exception condition during instruction emulation _is_
> >> infrequent.
> >
> > Well, with mmio you'd expect it to happen on every read access.
>
> Of course, if you are hitting that kind of case very often,
> you don't want to do the emulation in the kernel but
> in userspace, so you don't have to take the context-switch
> overhead and everything else.
>
The devices that do mmio most often are already emulated in the
kernel, to avoid an exit to userspace on each access. And mmio may
be the most frequent cause of emulation, but it is not the only one.
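For illustration, a minimal sketch of the exit-and-restart pattern
described above, using the userspace <setjmp.h> interface for
clarity. All names here (emul_ctxt, emul_read, emulate_insn, and so
on) are hypothetical, not the actual KVM emulator API: emulation runs
under a setjmp() anchor, a read that needs data from a userspace
device model longjmp()s out, and the whole instruction is re-executed
from the top once the data has been filled in.

#include <setjmp.h>

enum emul_result { EMUL_DONE, EMUL_NEED_USERSPACE };

struct emul_ctxt {
	jmp_buf restart;          /* longjmp() target for "exit needed"     */
	int data_ready;           /* set once userspace has provided data   */
	unsigned long cached_val; /* value filled in by the userspace model */
};

/* Read helper: bail out to the setjmp() anchor if the value has to
 * come from a device emulated in userspace. */
static unsigned long emul_read(struct emul_ctxt *ctxt, unsigned long addr)
{
	(void)addr;               /* a real emulator would dispatch on addr */
	if (!ctxt->data_ready)
		longjmp(ctxt->restart, 1);  /* unwind out of the emulator */
	return ctxt->cached_val;
}

static enum emul_result emulate_insn(struct emul_ctxt *ctxt)
{
	if (setjmp(ctxt->restart))
		return EMUL_NEED_USERSPACE;  /* caller exits to userspace */

	/* Decode and execute; any emul_read() may longjmp() away.
	 * The guest address below is a made-up example. */
	(void)emul_read(ctxt, 0xfee00000UL);
	return EMUL_DONE;
}

On EMUL_NEED_USERSPACE the caller would exit to userspace, store the
device's reply in cached_val, set data_ready, and call emulate_insn()
again from the beginning, which matches the "restart instruction
emulation" behaviour quoted above. The attraction of longjmp() here is
that the emulator needs no explicit state machine to unwind from
arbitrarily deep call sites; the cost, as this thread notes, is that
it only pays off when the exit really is the infrequent case.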
> I know running emulations in userspace was for dosemu
> the difference between a 16 color ega emulation on X
> that was unusable to one that was good enough to play video
> games like wolfenstein and doom.
>
> Eric
--
Gleb.
Thread overview: 24+ messages
2010-03-01 9:18 use of setjmp/longjmp in x86 emulator Gleb Natapov
2010-03-01 12:45 ` Takuya Yoshikawa
2010-03-01 12:52 ` Gleb Natapov
2010-03-01 13:17 ` Takuya Yoshikawa
2010-03-01 13:26 ` Gleb Natapov
2010-03-01 19:13 ` john cooper
2010-03-02 7:28 ` Gleb Natapov
2010-03-07 9:00 ` Avi Kivity
2010-03-08 23:11 ` Eric W. Biederman
2010-03-09 6:28 ` Gleb Natapov [this message]
2010-03-01 16:13 ` Zachary Amsden
2010-03-01 17:47 ` Gleb Natapov
2010-03-01 18:39 ` Zachary Amsden
2010-03-01 18:47 ` Luca Barbieri
2010-03-01 19:03 ` Gleb Natapov
2010-03-01 19:18 ` Zachary Amsden
2010-03-01 22:31 ` H. Peter Anvin
2010-03-01 22:56 ` H. Peter Anvin
2010-03-01 23:34 ` Zachary Amsden
2010-03-01 23:43 ` H. Peter Anvin
2010-03-02 8:05 ` Gleb Natapov
2010-03-02 8:49 ` Gleb Natapov
2010-03-07 9:04 ` Avi Kivity
2010-03-08 0:08 ` H. Peter Anvin