From: Gleb Natapov <gleb@redhat.com>
To: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Avi Kivity <avi@redhat.com>,
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
LKML <linux-kernel@vger.kernel.org>, KVM <kvm@vger.kernel.org>
Subject: Re: [PATCH] KVM: x86: fix vcpu->mmio_fragments overflow
Date: Mon, 22 Oct 2012 14:53:01 +0200
Message-ID: <20121022125301.GS29310@redhat.com>
In-Reply-To: <50853FF1.8010809@siemens.com>
On Mon, Oct 22, 2012 at 02:45:37PM +0200, Jan Kiszka wrote:
> On 2012-10-22 14:18, Avi Kivity wrote:
> > On 10/22/2012 01:45 PM, Jan Kiszka wrote:
> >
> >>> Indeed. git pull, recheck, and the call to kvm_flush_coalesced_mmio_buffer()
> >>> is gone. So this will break new userspace, not old. By global you mean
> >>> shared between devices (or memory regions)?
> >>
> >> Yes. We only have a single ring per VM, so we cannot flush multi-second
> >> VGA access separately from other devices. In theory solvable by
> >> introducing per-region rings that can be driven separately.
> >
> > But in practice unneeded. Real time VMs can disable coalescing and not
> > use planar VGA modes.
>
> A) At least right now, we do not differentiate between the VGA modes to
> decide whether flushing is needed. So that device is generally taboo for
> RT cores of the VM.
> B) We need to disable coalescing in E1000 as well - if we want to use
> that model.
> C) Gleb seems to propose using coalescing far beyond those two use cases.
>
Since a userspace change is needed the idea is dead, but if we could
implement it I do not see how it could hurt latency, as long as it were
the only mechanism using the coalesced mmio buffer. Checking that the
ring buffer is empty is cheap, and if it is not empty it means the kernel
just saved you a lot of exits (one per buffered 8-byte write), so even
after iterating over all the entries you still saved a lot of time.
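
(For reference, a minimal sketch of the drain loop being discussed,
modeled on the coalesced MMIO ring layout from <linux/kvm.h>; the 4 KiB
page-size constant and the handle_mmio_write() helper are illustrative
placeholders, not the actual QEMU code:)

#include <linux/kvm.h>   /* struct kvm_coalesced_mmio{,_ring} */
#include <stdint.h>

/* Ring header and entries share one page; 4 KiB assumed here. */
#define COALESCED_MMIO_MAX \
    ((4096 - sizeof(struct kvm_coalesced_mmio_ring)) / \
     sizeof(struct kvm_coalesced_mmio))

/* Placeholder dispatch: a real VMM would route this to the device model. */
static void handle_mmio_write(uint64_t addr, const void *data, uint32_t len)
{
    (void)addr; (void)data; (void)len;
}

/*
 * Drain the per-VM coalesced MMIO ring (mapped at
 * KVM_COALESCED_MMIO_PAGE_OFFSET of the vcpu mmap area). The empty check
 * is a single comparison; every entry found is an MMIO write for which
 * the kernel already avoided a full exit to userspace.
 */
static void flush_coalesced_mmio(struct kvm_coalesced_mmio_ring *ring)
{
    while (ring->first != ring->last) {
        struct kvm_coalesced_mmio *ent = &ring->coalesced_mmio[ring->first];

        handle_mmio_write(ent->phys_addr, ent->data, ent->len);
        __sync_synchronize();   /* consume entry before recycling the slot */
        ring->first = (ring->first + 1) % COALESCED_MMIO_MAX;
    }
}

Zones opt in per guest-physical range via the KVM_REGISTER_COALESCED_MMIO
ioctl and opt out via KVM_UNREGISTER_COALESCED_MMIO, which is where a VGA
or e1000 model would disable coalescing for the real-time case above.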
--
Gleb.
Thread overview: 27+ messages
2012-10-19 7:37 [PATCH] KVM: x86: fix vcpu->mmio_fragments overflow Xiao Guangrong
2012-10-19 7:39 ` [PATCH] emulator test: add "rep ins" mmio access test Xiao Guangrong
2012-11-01 0:05 ` Marcelo Tosatti
2012-10-22 9:16 ` [PATCH] KVM: x86: fix vcpu->mmio_fragments overflow Gleb Natapov
2012-10-22 11:09 ` Xiao Guangrong
2012-10-22 11:23 ` Gleb Natapov
2012-10-22 11:35 ` Jan Kiszka
2012-10-22 11:43 ` Gleb Natapov
2012-10-22 11:45 ` Jan Kiszka
2012-10-22 12:18 ` Avi Kivity
2012-10-22 12:45 ` Jan Kiszka
2012-10-22 12:53 ` Gleb Natapov [this message]
2012-10-22 12:55 ` Jan Kiszka
2012-10-22 12:58 ` Avi Kivity
2012-10-22 13:05 ` Jan Kiszka
2012-10-22 13:08 ` Gleb Natapov
2012-10-22 13:25 ` Jan Kiszka
2012-10-22 14:00 ` Gleb Natapov
2012-10-22 14:23 ` Jan Kiszka
2012-10-22 15:36 ` Avi Kivity
2012-10-22 12:58 ` Gleb Natapov
2012-10-22 12:55 ` Avi Kivity
2012-10-22 13:01 ` Gleb Natapov
2012-10-22 13:02 ` Avi Kivity
2012-10-22 13:05 ` Gleb Natapov
2012-10-22 12:56 ` Avi Kivity
2012-10-22 13:58 ` Avi Kivity