From: Sasha Levin <levinsasha928@gmail.com>
To: Ingo Molnar <mingo@elte.hu>
Cc: Alexander Graf <agraf@suse.de>,
penberg@kernel.org, kvm@vger.kernel.org, asias.hejun@gmail.com,
gorcunov@gmail.com, prasadjoshi124@gmail.com
Subject: Re: [PATCH] kvm tools: Add MMIO coalescing support
Date: Sat, 04 Jun 2011 19:50:29 +0300
Message-ID: <1307206229.2173.3.camel@lappy>
In-Reply-To: <20110604163409.GA1899@elte.hu>
On Sat, 2011-06-04 at 18:34 +0200, Ingo Molnar wrote:
> * Alexander Graf <agraf@suse.de> wrote:
>
> >
> > On 04.06.2011, at 16:46, Ingo Molnar wrote:
> >
> > >
> > > * Alexander Graf <agraf@suse.de> wrote:
> > >
> > >> So the simple rule is: don't register a coalesced MMIO region for a
> > >> region where latency matters. [...]
> > >
> > > So my first suspicion is confirmed.
> > >
> > > A quick look at Qemu sources shows that lots of drivers are using
> > > coalesced_mmio without being aware of the latency effects and only
> > > one seems to make use of qemu_flush_coalesced_mmio_buffer(). Drivers
> > > like hw/e1000.c sure look latency critical to me.
> >
> > e1000 maps its NVRAM on coalesced mmio - which is completely ok.
>
> Ok!
>
> > > So i maintain my initial opinion: this is a pretty dangerous
> > > 'optimization' that should be used with extreme care: i can tell
> > > you with pretty good authority that latency problems are much
> > > easier to introduce than to find and remove ...
> >
> > Yup, which is why it's very sparsely used in qemu :). Basically,
> > it's only e1000 and vga, both of which are heavily used and tested
> > drivers.
>
> Ok, so this change in:
>
> commit 73389b5ea017288a949ae27536c8cfd298d3e317
> Author: Sasha Levin <levinsasha928@gmail.com>
> Date: Fri Jun 3 22:51:08 2011 +0300
>
> kvm tools: Add MMIO coalescing support
>
> @@ -67,6 +70,16 @@ bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callba
> .kvm_mmio_callback_fn = kvm_mmio_callback_fn,
> };
>
> + zone = (struct kvm_coalesced_mmio_zone) {
> + .addr = phys_addr,
> + .size = phys_addr_len,
> + };
> + ret = ioctl(kvm->vm_fd, KVM_REGISTER_COALESCED_MMIO, &zone);
> + if (ret < 0) {
> + free(mmio);
> + return false;
> + }
>
> Seems completely wrong, because it indiscriminately registers *all*
> mmio regions as coalesced ones.
Yes. I'll add a flag instead of making all of them coalesced.
--
Sasha.
Thread overview: 15+ messages
2011-06-03 19:51 [PATCH] kvm tools: Add MMIO coalescing support Sasha Levin
2011-06-04 9:38 ` Ingo Molnar
2011-06-04 10:14 ` Sasha Levin
2011-06-04 10:17 ` Ingo Molnar
2011-06-04 10:28 ` Sasha Levin
2011-06-04 10:35 ` Ingo Molnar
2011-06-04 10:39 ` Alexander Graf
2011-06-04 10:47 ` Ingo Molnar
2011-06-04 10:54 ` Alexander Graf
2011-06-04 11:27 ` Ingo Molnar
2011-06-04 11:53 ` Alexander Graf
2011-06-04 14:46 ` Ingo Molnar
2011-06-04 15:22 ` Alexander Graf
2011-06-04 16:34 ` Ingo Molnar
2011-06-04 16:50 ` Sasha Levin [this message]