From: Avi Kivity <avi@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall
Date: Mon, 26 Mar 2012 11:21:58 +0200 [thread overview]
Message-ID: <4F703536.3040904@redhat.com> (raw)
In-Reply-To: <20120325220518.GA27879@redhat.com>
On 03/26/2012 12:05 AM, Michael S. Tsirkin wrote:
> We face a dilemma: I/O port space is legacy,
> so, for example, PCI Express bridges waste 4K
> of this space for each link, in effect limiting us
> to 16 devices using this space.
>
> MMIO is supposed to replace it, but MMIO
> exits are much slower than PIO because of the need for
> instruction emulation and page walks.
>
> As a solution, this patch adds an MMIO hypercall with
> the guest physical address + data.
>
> I did test that this works but didn't benchmark yet.
>
> TODOs:
> This only implements a 2 byte write, since that is
> the minimum required for virtio, but we'll probably need
> at least 1 byte reads as well (for the ISR read).
> We can support up to 8 byte reads/writes for 64 bit
> guests and up to 4 bytes for 32 bit ones - better to
> limit everyone to 4 bytes for consistency, or support
> the maximum that we can?
Let's support the maximum we can.
>
> static int handle_invd(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9cbfc06..7bc00ae 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4915,7 +4915,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
>
> int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> {
> + struct kvm_run *run = vcpu->run;
> unsigned long nr, a0, a1, a2, a3, ret;
> + gpa_t gpa;
> int r = 1;
>
> if (kvm_hv_hypercall_enabled(vcpu->kvm))
> @@ -4946,12 +4948,24 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> case KVM_HC_VAPIC_POLL_IRQ:
> ret = 0;
> break;
> + case KVM_HC_MMIO_STORE_WORD:
Better named KVM_HC_MEMORY_WRITE.
> + gpa = hc_gpa(vcpu, a1, a2);
> + if (!write_mmio(vcpu, gpa, 2, &a0) && run) {
What's this && run thing?
> + run->exit_reason = KVM_EXIT_MMIO;
> + run->mmio.phys_addr = gpa;
> + memcpy(run->mmio.data, &a0, 2);
> + run->mmio.len = 2;
> + run->mmio.is_write = 1;
> + r = 0;
> + }
> + goto noret;
What if the address is in RAM?
Note the guest can't tell if a piece of memory is direct mapped or
implemented as mmio.
--
error compiling committee.c: too many arguments to function
Thread overview: 10+ messages
2012-03-25 22:05 [PATCH RFC dontapply] kvm_para: add mmio word store hypercall Michael S. Tsirkin
2012-03-25 23:25 ` H. Peter Anvin
2012-03-26 6:31 ` Michael S. Tsirkin
2012-03-26 9:21 ` Avi Kivity [this message]
2012-03-26 10:08 ` Michael S. Tsirkin
2012-03-26 10:16 ` Avi Kivity
2012-03-26 11:30 ` Michael S. Tsirkin
2012-03-26 12:11 ` Avi Kivity
2012-03-26 10:29 ` Gleb Natapov
2012-03-26 11:24 ` Michael S. Tsirkin