From: Paul Durrant <xadimgnik@gmail.com>
To: David Woodhouse <dwmw2@infradead.org>, qemu-devel@nongnu.org
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
	"Joao Martins" <joao.m.martins@oracle.com>,
	"Ankur Arora" <ankur.a.arora@oracle.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Thomas Huth" <thuth@redhat.com>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"Juan Quintela" <quintela@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	"Claudio Fontana" <cfontana@suse.de>
Subject: Re: [RFC PATCH v2 12/22] hw/xen: Add xen_overlay device for emulating shared xenheap pages
Date: Mon, 12 Dec 2022 14:29:35 +0000
Message-ID: <58062a00-dcbe-c42c-3a18-8b55ca61939c@xen.org>
In-Reply-To: <20221209095612.689243-13-dwmw2@infradead.org>

On 09/12/2022 09:56, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> For the shared info page and for grant tables, Xen shares its own pages
> from the "Xen heap" to the guest. The guest requests that a given page
> from a certain address space (XENMAPSPACE_shared_info, etc.) be mapped
> to a given GPA using the XENMEM_add_to_physmap hypercall.
> 
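(For readers unfamiliar with the interface: the hypercall argument is
the xen_add_to_physmap struct from the Xen public headers, abridged
here from memory.h; see the headers imported in patch 01 for the
authoritative definition.)

    struct xen_add_to_physmap {
        domid_t domid;       /* domain whose physmap is being changed */
        uint16_t size;       /* page count, XENMAPSPACE_gmfn_range only */
        unsigned int space;  /* source space, e.g. XENMAPSPACE_shared_info */
        xen_ulong_t idx;     /* index into that space */
        xen_pfn_t gpfn;      /* target guest frame number */
    };
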
> To support that in QEMU when *emulating* Xen, create a memory region
> (migratable) and allow it to be mapped as an overlay when requested.
> 
> Xen theoretically allows the same page to be mapped multiple times
> into the guest, but that's hard to track and reinstate over migration,
> so we automatically *unmap* any previous mapping when creating a new
> one. This approach has been used in production with.... a non-trivial
> number of guests expecting true Xen, and no problems have yet been
> noticed.
> 
> This adds just the shared info page for now. The grant tables will be
> a larger region, and will need to be overlaid one page at a time. I
> think that means I need to create separate aliases for each page of
> the overall grant_frames region, so that they can be mapped individually.
> 

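A sketch of the per-page alias idea, in case it helps (untested; names
like 'grant_frames', 'alias' and 'nr_frames' are illustrative, not
from the patch):

    /* Carve one alias per page out of a single grant_frames region,
     * so that each page can be overlaid at its own GPA.
     * 'grant_frames' and 'alias' are hypothetical names. */
    for (i = 0; i < nr_frames; i++) {
        memory_region_init_alias(&alias[i], OBJECT(dev), "grant-frame",
                                 &grant_frames, i * XEN_PAGE_SIZE,
                                 XEN_PAGE_SIZE);
    }
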
Is the following something you want in the commit log?

> Expecting some heckling at the use of xen_overlay_singleton. What is
> the best way to do that? Using qdev_find_recursive() every time seemed
> a bit wrong. But I suppose mapping it into the *guest* isn't a fast
> path, and if the actual grant table code is allowed to just stash the
> pointer it gets from xen_overlay_page_ptr() for later use then that
> isn't a fast path for device I/O either.
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>   hw/i386/kvm/meson.build   |   1 +
>   hw/i386/kvm/xen_overlay.c | 198 ++++++++++++++++++++++++++++++++++++++
>   hw/i386/kvm/xen_overlay.h |  14 +++
>   hw/i386/pc_piix.c         |   8 ++
>   4 files changed, 221 insertions(+)
>   create mode 100644 hw/i386/kvm/xen_overlay.c
>   create mode 100644 hw/i386/kvm/xen_overlay.h
> 
[snip]
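
(For context, the singleton being heckled presumably boils down to a
file-scope pointer stashed at realize time, so that mapping calls avoid
a QOM tree walk. A hypothetical sketch only, not the patch's actual
code; the XEN_OVERLAY() cast and the realize hook are assumptions:)

    static XenOverlayState *xen_overlay_singleton;

    static void xen_overlay_realize(DeviceState *dev, Error **errp)
    {
        /* Stash the pointer once so later lookups are trivial */
        xen_overlay_singleton = XEN_OVERLAY(dev);
        /* ... memory region setup elided ... */
    }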

> +static int xen_overlay_map_page_locked(uint32_t space, uint64_t idx, uint64_t gpa)
> +{
> +    MemoryRegion *ovl_page;
> +    int err;
> +
> +    if (space != XENMAPSPACE_shared_info || idx != 0) {
> +        return -EINVAL;
> +    }
> +
> +    if (!xen_overlay_singleton) {
> +        return -ENOENT;
> +    }
> +
> +    ovl_page = &xen_overlay_singleton->shinfo_mem;
> +
> +    /*
> +     * Xen allows a guest to map the same page as many times as it
> +     * likes into guest physical frames. We don't, because it would
> +     * be hard to track and restore them all. One mapping of each
> +     * page is perfectly sufficient for all known guests... and we've
> +     * tested that theory on a few now in other implementations. dwmw2.
> +     */
> +    if (memory_region_is_mapped(ovl_page)) {
> +        if (gpa == INVALID_GPA) {
> +            /* If removing shinfo page, turn the kernel magic off first */
> +            if (space == XENMAPSPACE_shared_info) {
> +                err = xen_overlay_set_be_shinfo(INVALID_GFN);
> +                if (err) {
> +                    return err;
> +                }
> +            }
> +            memory_region_del_subregion(get_system_memory(), ovl_page);
> +            goto done;

This seems a little ugly when you could...

> +        } else {
> +            /* Just move it */
> +            memory_region_set_address(ovl_page, gpa);
> +        }
> +    } else if (gpa != INVALID_GPA) {
> +        memory_region_add_subregion_overlap(get_system_memory(), gpa, ovl_page, 0);
> +    }
> +

... just wrap the following line in 'if (gpa != INVALID_GPA)' (see the 
sketch after the quoted function below).

Paul

> +    xen_overlay_set_be_shinfo(gpa >> XEN_PAGE_SHIFT);
> + done:
> +    xen_overlay_singleton->shinfo_gpa = gpa;
> +    return 0;
> +}
> +

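For concreteness, the tail of the function with that suggestion applied
would look something like this (untested sketch; the inner
XENMAPSPACE_shared_info test is dropped because the early -EINVAL check
already guarantees it):

    if (memory_region_is_mapped(ovl_page)) {
        if (gpa == INVALID_GPA) {
            /* Removing the shinfo page: turn the kernel magic off first */
            err = xen_overlay_set_be_shinfo(INVALID_GFN);
            if (err) {
                return err;
            }
            memory_region_del_subregion(get_system_memory(), ovl_page);
        } else {
            /* Just move it */
            memory_region_set_address(ovl_page, gpa);
        }
    } else if (gpa != INVALID_GPA) {
        memory_region_add_subregion_overlap(get_system_memory(), gpa,
                                            ovl_page, 0);
    }

    if (gpa != INVALID_GPA) {
        xen_overlay_set_be_shinfo(gpa >> XEN_PAGE_SHIFT);
    }
    xen_overlay_singleton->shinfo_gpa = gpa;
    return 0;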

