From: Bruno Alvisio <bruno.alvisio@gmail.com>
To: xen-devel@lists.xen.org
Subject: Fwd: VM Live Migration with Local Storage
Date: Sun, 11 Jun 2017 20:16:04 -0700 [thread overview]
Message-ID: <CADNMjED0sWp3-uVddNAbG7Ar2iispsr6qEzBsvUTY9GmW7m3JA@mail.gmail.com> (raw)
In-Reply-To: <CADNMjECBAKRX5muM2mn31RdC1vcPt-vUQ9sqAWt8QSHr+GXj7g@mail.gmail.com>
Hello,
I think it would be beneficial to add a local-disk migration feature for
the 'blkback' backend, since it is one of the most widely used backends. I
would like to start a discussion about the design of the machinery needed
to achieve this feature.
===========================
Objective
Add a feature to migrate VMs that have local storage and use the blkback
interface.
===========================
===========================
User Interface
Add a command-line option to “xl migrate” to specify whether local disks
should be copied to the destination node.
===========================
===========================
Design
1. As part of libxl_domain_suspend, the “disk mirroring machinery”
starts an asynchronous job that copies the disk blocks from the source to
the destination.
2. The protocol for copying the disks should resemble the one used for
memory copy:
   - Do an initial full copy of the disk.
   - Track sectors that have been written since the copy started. For this,
the blkback driver must be aware that a disk migration is in progress and,
in that case, forward each write request to the “migration machinery” so
that a record of dirty blocks is logged.
   - The migration machinery copies “dirty” blocks until convergence.
   - Duplicate all disk writes/reads to the disks on both the source and
destination nodes while the VM is being suspended.
Block Diagram
+-------+
| VM |
+-------+
|
| I/O Write
|
V
+----------+ +-----------+ +-------------+
| blkback | ----> | Source | sectors Stream | Destination |
+----------+ | mirror |------------------>| mirror |
| | machinery | I/O Writes | machinery |
| +-----------+ +-------------+
| |
| |
| To I/O block layer |
| |
V V
+----------+ +-------------+
| disk | | Mirrored |
+----------+ | Disk |
+-------------+
===========================
Initial Questions
1. Is it possible to leverage QEMU's existing drive-mirror design for
Xen?
2. What is the best place to implement this protocol: in Xen or in the
kernel?
3. Is it possible to use the same stream currently used for migrating
the memory to also migrate the disk blocks?
Any guidance/feedback for a more specific design is greatly appreciated.
Thanks,
Bruno
On Wed, Feb 22, 2017 at 5:00 AM, Wei Liu <wei.liu2@citrix.com> wrote:
> Hi Bruno
>
> Thanks for your interest.
>
> On Tue, Feb 21, 2017 at 10:34:45AM -0800, Bruno Alvisio wrote:
> > Hello,
> >
> > I have been doing some research, and as far as I know Xen supports
> > live migration only of VMs that have shared storage (e.g. iSCSI). If
> > the VM has been booted with local storage, it cannot be live migrated.
> > QEMU seems to support live migration with local storage (I have tested
> > using 'virsh migrate' with the '--copy-storage-all' option).
> >
> > I am wondering if this is still true in the latest Xen release. Are
> > there plans to add this functionality in future releases? I would be
> > interested in contributing to the Xen Project by adding this
> > functionality.
> >
>
> No plan at the moment.
>
> Xen supports a wide variety of disk backends. QEMU is one of them. The
> others are blktap (not upstreamed yet) and in-kernel blkback. The latter
> two don't have the capability to copy local storage to the remote end.
>
> That said, I think it would be valuable to have such capability for QEMU
> backed disks. We also need to design the machinery so that other
> backends can be made to do the same thing in the future.
>
> If you want to undertake this project, I suggest you set up a Xen
> system, read the xl / libxl source code under the tools directory, and
> understand how everything is put together. Reading source code can be
> daunting at times, so don't hesitate to ask for pointers. Once you have
> the big picture in mind, we can discuss how to implement the
> functionality on xen-devel.
>
> Does this sound good to you?
>
> Wei.
>
> > Thanks,
> >
> > Bruno
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > https://lists.xen.org/xen-devel
>
>