From: Avi Kivity <avi@redhat.com>
To: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
Gleb Natapov <gleb@redhat.com>,
LKML <linux-kernel@vger.kernel.org>, KVM <kvm@vger.kernel.org>
Subject: Re: [PATCH] KVM: x86: fix vcpu->mmio_fragments overflow
Date: Mon, 22 Oct 2012 15:58:15 +0200
Message-ID: <508550F7.1010605@redhat.com>
In-Reply-To: <5081033C.4060503@linux.vnet.ibm.com>
On 10/19/2012 09:37 AM, Xiao Guangrong wrote:
> After commit b3356bf0dbb349 (KVM: emulator: optimize "rep ins" handling),
> the pieces of I/O data can be collected and written to the guest memory
> or MMIO together.
>
> Unfortunately, kvm splits the mmio access into 8-byte pieces and stores
> them in vcpu->mmio_fragments. If the guest uses "rep ins" to move large
> data, vcpu->mmio_fragments overflows.
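[To see the arithmetic, a minimal user-space sketch of the pre-patch splitting; the names and the MAX_FRAGMENTS value are hypothetical stand-ins, not the kernel code:]

```c
/* Hypothetical stand-ins for the kernel structures; the real code
 * lives in arch/x86/kvm/x86.c and include/linux/kvm_host.h. */
#define MAX_FRAGMENTS 2   /* assumption: a small fixed-size array */

struct mmio_fragment {
	unsigned long gpa;
	unsigned len;
};

/* Split an MMIO access into 8-byte pieces, one fragment per piece,
 * the way the pre-patch code did.  Returns the number of fragments
 * used, or -1 if the fixed array would overflow. */
int split_into_fragments(unsigned long gpa, unsigned len,
			 struct mmio_fragment *frags)
{
	int n = 0;

	while (len) {
		unsigned now = len < 8 ? len : 8;

		if (n >= MAX_FRAGMENTS)
			return -1;	/* the overflow described above */
		frags[n].gpa = gpa;
		frags[n].len = now;
		n++;
		gpa += now;
		len -= now;
	}
	return n;
}
```

[So any "rep ins" moving more than MAX_FRAGMENTS * 8 bytes in one emulated access runs off the end of the array.]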
>
> The bug can be exposed by isapc (-M isapc):
>
> [23154.818733] general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC
> [ ......]
> [23154.858083] Call Trace:
> [23154.859874] [<ffffffffa04f0e17>] kvm_get_cr8+0x1d/0x28 [kvm]
> [23154.861677] [<ffffffffa04fa6d4>] kvm_arch_vcpu_ioctl_run+0xcda/0xe45 [kvm]
> [23154.863604] [<ffffffffa04f5a1a>] ? kvm_arch_vcpu_load+0x17b/0x180 [kvm]
>
>
> Actually, we can use one mmio_fragment to store a large mmio access, since
> the mmio access is always contiguous, then split it when we pass the
> mmio-exit info to userspace.
Note, there are instructions that can access discontinuous areas. We don't emulate them and they're unlikely to be used for mmio.
> After that, we only need two entries to store the mmio info for an
> access that crosses mmio pages.
Patch is good, but is somewhat large for 3.7. Maybe we can make it smaller with the following:
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8b90dd5..41ceb51 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3779,9 +3779,6 @@ static int read_exit_mmio(struct kvm_vcpu *vcpu, gpa_t gpa,
> static int write_exit_mmio(struct kvm_vcpu *vcpu, gpa_t gpa,
> void *val, int bytes)
> {
> - struct kvm_mmio_fragment *frag = &vcpu->mmio_fragments[0];
> -
> - memcpy(vcpu->run->mmio.data, frag->data, frag->len);
> return X86EMUL_CONTINUE;
> }
>
> @@ -3799,6 +3796,64 @@ static const struct read_write_emulator_ops write_emultor = {
> .write = true,
> };
>
> +static bool get_current_mmio_info(struct kvm_vcpu *vcpu, gpa_t *gpa,
> + unsigned *len, void **data)
> +{
> + struct kvm_mmio_fragment *frag;
> + int cur = vcpu->mmio_cur_fragment;
> +
> + if (cur >= vcpu->mmio_nr_fragments)
> + return false;
> +
> + frag = &vcpu->mmio_fragments[cur];
> + if (frag->pos >= frag->len) {
> + if (++vcpu->mmio_cur_fragment >= vcpu->mmio_nr_fragments)
> + return false;
> + frag++;
> + }
Instead of having ->pos, just adjust ->gpa, ->data, and ->len in place. Then get_current_mmio_info would be unneeded, just the min() bit when accessing ->len.
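[A compilable sketch of that suggestion, with hypothetical user-space types rather than a drop-in patch: each pass hands out min(8, len) bytes and advances gpa/data/len in place, so no ->pos field is needed:]

```c
struct mmio_fragment {
	unsigned long gpa;
	void *data;
	unsigned len;
};

/* Hand out the next chunk (at most 8 bytes) of the fragment and
 * advance gpa/data/len in place.  Returns the chunk length, or 0
 * once the fragment is exhausted. */
unsigned consume_chunk(struct mmio_fragment *frag,
		       unsigned long *gpa, void **data)
{
	unsigned now = frag->len < 8 ? frag->len : 8;	/* the min() bit */

	if (!now)
		return 0;
	*gpa = frag->gpa;
	*data = frag->data;
	frag->gpa += now;
	frag->data = (char *)frag->data + now;
	frag->len -= now;
	return now;
}
```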
> +
> + *gpa = frag->gpa + frag->pos;
> + *data = frag->data + frag->pos;
> + *len = min(8u, frag->len - frag->pos);
> + return true;
> +}
> +
> +static void complete_current_mmio(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_mmio_fragment *frag;
> + gpa_t gpa;
> + unsigned len;
> + void *data;
> +
> + get_current_mmio_info(vcpu, &gpa, &len, &data);
> +
> + if (!vcpu->mmio_is_write)
> + memcpy(data, vcpu->run->mmio.data, len);
> +
> + /* Increase frag->pos to switch to the next mmio. */
> + frag = &vcpu->mmio_fragments[vcpu->mmio_cur_fragment];
> + frag->pos += len;
> +}
> +
And this would be unneeded, just adjust the code that does mmio_cur_fragment++:
static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
{
struct kvm_run *run = vcpu->run;
- struct kvm_mmio_fragment *frag;
+ struct kvm_mmio_fragment frag;
BUG_ON(!vcpu->mmio_needed);
/* Complete previous fragment */
- frag = &vcpu->mmio_fragments[vcpu->mmio_cur_fragment++];
+ frag = vcpu->mmio_fragments[vcpu->mmio_cur_fragment];
+ if (frag.len <= 8) {
+ ++vcpu->mmio_cur_fragment;
+ } else {
+ vcpu->mmio_fragments[vcpu->mmio_cur_fragment].len -= 8;
...
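[Spelled out as a compilable sketch, with hypothetical user-space stand-ins for the kernel types, the advance logic would look like this: examine a by-value copy of the current fragment; if it fits in one 8-byte exit, move to the next fragment, otherwise consume 8 bytes of it in place and stay on it:]

```c
struct mmio_fragment {
	unsigned long gpa;
	unsigned len;
};

struct vcpu_state {
	struct mmio_fragment frags[2];
	int cur;
	int nr;
};

/* Complete the previous 8-byte exit: either the fragment is done
 * (len <= 8) and we advance to the next one, or we consume 8 bytes
 * of it in place.  Returns nonzero while more exits are needed. */
int advance_fragment(struct vcpu_state *v)
{
	struct mmio_fragment *frag = &v->frags[v->cur];

	if (frag->len <= 8) {
		v->cur++;
	} else {
		frag->gpa += 8;
		frag->len -= 8;
	}
	return v->cur < v->nr;
}
```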
--
error compiling committee.c: too many arguments to function
Thread overview: 27+ messages
2012-10-19 7:37 [PATCH] KVM: x86: fix vcpu->mmio_fragments overflow Xiao Guangrong
2012-10-19 7:39 ` [PATCH] emulator test: add "rep ins" mmio access test Xiao Guangrong
2012-11-01 0:05 ` Marcelo Tosatti
2012-10-22 9:16 ` [PATCH] KVM: x86: fix vcpu->mmio_fragments overflow Gleb Natapov
2012-10-22 11:09 ` Xiao Guangrong
2012-10-22 11:23 ` Gleb Natapov
2012-10-22 11:35 ` Jan Kiszka
2012-10-22 11:43 ` Gleb Natapov
2012-10-22 11:45 ` Jan Kiszka
2012-10-22 12:18 ` Avi Kivity
2012-10-22 12:45 ` Jan Kiszka
2012-10-22 12:53 ` Gleb Natapov
2012-10-22 12:55 ` Jan Kiszka
2012-10-22 12:58 ` Avi Kivity
2012-10-22 13:05 ` Jan Kiszka
2012-10-22 13:08 ` Gleb Natapov
2012-10-22 13:25 ` Jan Kiszka
2012-10-22 14:00 ` Gleb Natapov
2012-10-22 14:23 ` Jan Kiszka
2012-10-22 15:36 ` Avi Kivity
2012-10-22 12:58 ` Gleb Natapov
2012-10-22 12:55 ` Avi Kivity
2012-10-22 13:01 ` Gleb Natapov
2012-10-22 13:02 ` Avi Kivity
2012-10-22 13:05 ` Gleb Natapov
2012-10-22 12:56 ` Avi Kivity
2012-10-22 13:58 ` Avi Kivity [this message]