From: David Woodhouse <dwmw2@infradead.org>
To: quintela@redhat.com
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
	qemu-devel@nongnu.org, "Paolo Bonzini" <pbonzini@redhat.com>,
	"Paul Durrant" <paul@xen.org>,
	"Joao Martins" <joao.m.martins@oracle.com>,
	"Ankur Arora" <ankur.a.arora@oracle.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Thomas Huth" <thuth@redhat.com>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	"Claudio Fontana" <cfontana@suse.de>,
	"Julien Grall" <julien@xen.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
	armbru@redhat.com, "Stefano Stabellini" <sstabellini@kernel.org>,
	vikram.garhwal@amd.com
Subject: Re: [RFC PATCH v11bis 00/26] Emulated XenStore and PV backend support
Date: Thu, 16 Feb 2023 14:51:01 +0100	[thread overview]
Message-ID: <ca90895e752c860d4e7251d52bac6ee572b3874c.camel@infradead.org> (raw)
In-Reply-To: <87sff5khqo.fsf@secure.mitica>


On Thu, 2023-02-16 at 11:49 +0100, Juan Quintela wrote:
> David Woodhouse <dwmw2@infradead.org> wrote:
> > The non-RFC patch submission¹ is just the basic platform support for Xen
> > on KVM. This RFC series is phase 2, adding an internal XenStore and
> > hooking up the PV back end drivers to that and the emulated grant tables
> > and event channels.
> > 
> > With this, we can boot a Xen guest with a PV disk, under KVM. Full support
> > for migration isn't there yet because it's actually missing in the PV
> > backend drivers in the first place (perhaps because upstream Xen doesn't
> > yet have guest-transparent live migration support anyway). I'm assuming
> > that when the first round is merged and we drop the [RFC] from this set,
> > this won't be a showstopper for now?
> > 
> > I'd be particularly interested in opinions on the way I implemented
> > serialization for the XenStore, by creating a GByteArray and then dumping
> > it with VMSTATE_VARRAY_UINT32_ALLOC().
> 
> And I was wondering why I was CC'd in the whole series O:-)
> 

Indeed, Philippe M-D added you to Cc when discussing migrations on the
first RFC submission back in December, and you've been included ever
since.


> How big is the xenstore?
> I mean, typical size and maximum size.
> 

Booting a simple instance with a single disk:

$ scripts/analyze-migration.py -f foo | grep impl_state_size
        "impl_state_size": "0x00000634",

So that's about 1.5 KiB. The theoretical maximum is about 1000 nodes at
2 KiB each, so around 2 MiB.

> If it is sufficiently small (i.e. single-digit megabytes), you can
> send it as a normal device at the end of migration.
> 

Right now it's part of the xen_xenstore device. Most of that is fairly
simple and it's just the impl_state that's slightly different.
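
(For reference, a rough sketch of the approach — the helper name
xs_impl_serialize() and the exact field names impl_state/impl_state_size
are assumptions for illustration here, not necessarily what the series
uses. The idea is to dump the tree into a GByteArray in a pre_save hook
and let a VMSTATE_VARRAY_UINT32_ALLOC() field carry the resulting
buffer:)

static int xen_xenstore_pre_save(void *opaque)
{
    XenXenstoreState *s = opaque;
    GByteArray *save = g_byte_array_new();

    /* Flatten the in-memory XenStore tree into a byte array.
     * xs_impl_serialize() is a placeholder for whatever helper
     * produces the serialized form. */
    xs_impl_serialize(s->impl, save);

    s->impl_state_size = save->len;
    /* Keep the data; free only the GByteArray container. */
    s->impl_state = g_byte_array_free(save, false);

    return 0;
}

/* ...and in the VMStateDescription for the xen_xenstore device: */
VMSTATE_VARRAY_UINT32_ALLOC(impl_state, XenXenstoreState,
                            impl_state_size, 0,
                            vmstate_info_uint8, uint8_t),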


> If it is bigger, I think that you are going to have to enter the migration
> iteration stage, and have some kind of dirty bitmap to know which entries
> are already on the target and which are not.
> 

We have COW and transactions, so that isn't impossible; I think we can
avoid that complexity, though.
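
(Purely to illustrate the iterative idea Juan describes — nothing like
this is in the series, and the XsNode layout below is invented for the
sketch — a per-node dirty flag, set on every write and cleared as nodes
are sent, would look roughly like this:)

#include <glib.h>
#include <stdbool.h>

typedef struct XsNode XsNode;
struct XsNode {
    GByteArray *content;
    GHashTable *children;   /* name -> XsNode */
    bool dirty;             /* set on each write, cleared once sent */
};

/* One migration iteration: walk the tree and send only the nodes that
 * were written since the previous pass. */
static void send_dirty_nodes(XsNode *n, void (*send)(XsNode *))
{
    GHashTableIter iter;
    gpointer key, value;

    if (n->dirty) {
        send(n);
        n->dirty = false;
    }

    g_hash_table_iter_init(&iter, n->children);
    while (g_hash_table_iter_next(&iter, &key, &value)) {
        send_dirty_nodes(value, send);
    }
}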




Thread overview: 31+ messages
2023-02-16  9:44 [RFC PATCH v11bis 00/26] Emulated XenStore and PV backend support David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 01/26] hw/xen: Add xenstore wire implementation and implementation stubs David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 02/26] hw/xen: Add basic XenStore tree walk and write/read/directory support David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 03/26] hw/xen: Implement XenStore watches David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 04/26] hw/xen: Implement XenStore transactions David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 05/26] hw/xen: Watches on " David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 06/26] xenstore perms WIP David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 07/26] hw/xen: Implement core serialize/deserialize methods for xenstore_impl David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 08/26] hw/xen: Create initial XenStore nodes David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 09/26] hw/xen: Add evtchn operations to allow redirection to internal emulation David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 10/26] hw/xen: Add gnttab " David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 11/26] hw/xen: Pass grant ref to gnttab unmap operation David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 12/26] hw/xen: Add foreignmem operations to allow redirection to internal emulation David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 13/26] hw/xen: Add xenstore " David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 14/26] hw/xen: Move xenstore_store_pv_console_info to xen_console.c David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 15/26] hw/xen: Use XEN_PAGE_SIZE in PV backend drivers David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 16/26] hw/xen: Rename xen_common.h to xen_native.h David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 17/26] hw/xen: Build PV backend drivers for CONFIG_XEN_BUS David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 18/26] hw/xen: Avoid crash when backend watch fires too early David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 19/26] hw/xen: Only advertise ring-page-order for xen-block if gnttab supports it David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 20/26] hw/xen: Hook up emulated implementation for event channel operations David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 21/26] hw/xen: Add emulated implementation of grant table operations David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 22/26] hw/xen: Add emulated implementation of XenStore operations David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 23/26] hw/xen: Map guest XENSTORE_PFN grant in emulated Xenstore David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 24/26] hw/xen: Implement soft reset for emulated gnttab David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 25/26] hw/xen: Subsume xen_be_register_common() into xen_be_init() David Woodhouse
2023-02-16  9:44 ` [RFC PATCH v11bis 26/26] i386/xen: Initialize Xen backends from pc_basic_device_init() for emulation David Woodhouse
2023-02-16 10:49 ` [RFC PATCH v11bis 00/26] Emulated XenStore and PV backend support Juan Quintela
2023-02-16 13:51   ` David Woodhouse [this message]
2023-02-16 14:02     ` Juan Quintela
2023-02-16 15:33       ` David Woodhouse
