From: Kashyap Chamarthy <kchamart@redhat.com>
To: Cornelia Huck <cohuck@redhat.com>
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, dgilbert@redhat.com,
vkuznets@redhat.com
Subject: Re: [PATCH v2] docs/virt/kvm: Document running nested guests
Date: Mon, 27 Apr 2020 17:22:49 +0200 [thread overview]
Message-ID: <20200427152249.GB25403@paraplu> (raw)
In-Reply-To: <20200422105618.22260edb.cohuck@redhat.com>
On Wed, Apr 22, 2020 at 10:56:18AM +0200, Cornelia Huck wrote:
> On Mon, 20 Apr 2020 13:17:55 +0200
> Kashyap Chamarthy <kchamart@redhat.com> wrote:
[Just noticed this today ... thanks for the review.]
[...]
> > +A nested guest is the ability to run a guest inside another guest (it
> > +can be KVM-based or a different hypervisor). The straightforward
> > +example is a KVM guest that in turn runs on KVM a guest (the rest of
>
> s/on KVM a guest/on a KVM guest/
Will fix in v3.
[...]
> > +Terminology:
> > +
> > +- L0 – level-0; the bare metal host, running KVM
> > +
> > +- L1 – level-1 guest; a VM running on L0; also called the "guest
> > + hypervisor", as it itself is capable of running KVM.
> > +
> > +- L2 – level-2 guest; a VM running on L1, this is the "nested guest"
> > +
> > +.. note:: The above diagram is modelled after x86 architecture; s390x,
>
> s/x86 architecture/the x86 architecture/
>
> > + ppc64 and other architectures are likely to have different
>
> s/to have/to have a/
Noted (both the above)
> > + design for nesting.
> > +
> > + For example, s390x has an additional layer, called "LPAR
> > + hypervisor" (Logical PARtition) on the baremetal, resulting in
> > + "four levels" in a nested setup — L0 (bare metal, running the
> > + LPAR hypervisor), L1 (host hypervisor), L2 (guest hypervisor),
> > + L3 (nested guest).
>
> What about:
>
> "For example, s390x always has an LPAR (LogicalPARtition) hypervisor
> running on bare metal, adding another layer and resulting in at least
> four levels in a nested setup..."
Yep, reads nicer; thanks.
[...]
> > +1. On the host hypervisor (L0), enable the ``nested`` parameter on
> > + s390x::
> > +
> > + $ rmmod kvm
> > + $ modprobe kvm nested=1
> > +
> > +.. note:: On s390x, the kernel parameter ``hpage`` parameter is mutually
>
> Drop one of the "parameter"?
Will do.
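While respinning, I'm also tempted to give readers a copy-pasteable
sanity check after reloading the module. A rough sketch (the helper
name is mine; the sysfs value reads back as "1" or "Y" depending on
kernel version, and the actual rmmod/modprobe needs root):

```shell
#!/bin/sh
# Normalize the value of /sys/module/kvm/parameters/nested, which
# reads back as "1" or "Y" depending on the kernel version.
nested_state() {
    case "$1" in
        1|Y|y) echo enabled ;;
        *)     echo disabled ;;
    esac
}

# On a real s390x L0 (requires root; kvm must not be in use):
#   rmmod kvm
#   modprobe kvm nested=1 hpage=0   # 'nested' and 'hpage' are exclusive
if [ -r /sys/module/kvm/parameters/nested ]; then
    nested_state "$(cat /sys/module/kvm/parameters/nested)"
fi
```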
> > + exclusive with the ``nested`` paramter; i.e. to have
> > + ``nested`` enabled you _must_ disable the ``hpage`` parameter.
>
> "i.e., in order to be able to enable ``nested``, the ``hpage``
> parameter _must_ be disabled."
>
> ?
Yes :)
>
> > +
> > +2. The guest hypervisor (L1) must be allowed to have ``sie`` CPU
>
> "must be provided with" ?
>
> > + feature — with QEMU, this is possible by using "host passthrough"
>
> s/this is possible by/this can be done by e.g./ ?
>
> > + (via the command-line ``-cpu host``).
> > +
> > +3. Now the KVM module can be enabled in the L1 (guest hypervisor)::
>
> s/enabled/loaded/
Will adjust the above three; thanks.
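Along with those, I could show how to verify from *inside* L1 that the
CPU feature actually made it through. A sketch (the helper function is
mine; on s390x you'd look for 'sie' in the "features" line, on x86 for
'vmx' or 'svm' in "flags"):

```shell
#!/bin/sh
# Inside L1: check that the 'sie' CPU facility was passed through
# (on x86, look for 'vmx' (Intel) or 'svm' (AMD) instead).
has_cpu_flag() {
    # $1 = flag name, $2 = the cpuinfo features/flags line
    case " $2 " in
        *" $1 "*) echo yes ;;
        *)        echo no  ;;
    esac
}

if [ -r /proc/cpuinfo ]; then
    line=$(grep -m1 -E '^(features|flags)' /proc/cpuinfo)
    has_cpu_flag sie "$line"
fi
```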
> > +
> > + $ modprobe kvm
> > +
> > +
> > +Live migration with nested KVM
> > +------------------------------
> > +
> > +The below live migration scenarios should work as of Linux kernel 5.3
> > +and QEMU 4.2.0. In all the below cases, L1 exposes ``/dev/kvm`` in
> > +it, i.e. the L2 guest is a "KVM-accelerated guest", not a "plain
> > +emulated guest" (as done by QEMU's TCG).
>
> The 5.3/4.2 versions likely apply to x86? Should work for s390x as well
> as of these version, but should have worked earlier already :)
Heh, I'll specify the x86-ness of those versions :-)
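For the migration section I could also add one concrete libvirt-based
example; a sketch where the domain name and destination host are
placeholders (the command itself is only echoed here, since it needs
SSH access to a real destination L0):

```shell
#!/bin/sh
# Migrate the L1 guest (with a live L2 inside) to another bare metal
# host, via libvirt.  Domain name and destination URI are placeholders.
SRC_DOMAIN=l1-guest
DEST_URI=qemu+ssh://l0-dest.example.com/system

# The actual command, run on the source L0:
#   virsh migrate --live --persistent "$SRC_DOMAIN" "$DEST_URI"
echo "virsh migrate --live --persistent $SRC_DOMAIN $DEST_URI"
```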
> > +
> > +- Migrating a nested guest (L2) to another L1 guest on the *same* bare
> > + metal host.
> > +
> > +- Migrating a nested guest (L2) to another L1 guest on a *different*
> > + bare metal host.
> > +
> > +- Migrating an L1 guest, with an *offline* nested guest in it, to
> > + another bare metal host.
> > +
> > +- Migrating an L1 guest, with a *live* nested guest in it, to another
> > + bare metal host.
> > +
> > +Limitations on Linux kernel versions older than 5.3
> > +---------------------------------------------------
> > +
> > +On x86 systems-only (as this does *not* apply for s390x):
>
> Add a "x86" marker? Or better yet, group all the x86 stuff in an x86
> section?
Right, forgot here, will do.
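In the x86 subsection I could even include a small pre-flight check
that warns when the L0 kernel predates 5.3. A rough sketch (helper
name is mine; it only compares the major.minor of ``uname -r``):

```shell
#!/bin/sh
# Warn if the L0 kernel predates 5.3, where migrating an L1 with
# nesting enabled (x86) was not safe.
kernel_at_least() {
    # $1 = wanted major, $2 = wanted minor, $3 = "major.minor..." string
    maj=${3%%.*}; rest=${3#*.}; min=${rest%%.*}
    [ "$maj" -gt "$1" ] || { [ "$maj" -eq "$1" ] && [ "$min" -ge "$2" ]; }
}

if kernel_at_least 5 3 "$(uname -r)"; then
    echo "kernel OK for nested live migration (x86)"
else
    echo "kernel older than 5.3: avoid migrating L1 while nesting is on"
fi
```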
[...]
> > +Reporting bugs from "nested" setups
> > +-----------------------------------
> > +
> > +(This is written with x86 terminology in mind, but similar should apply
> > +for other architectures.)
>
> Better to reorder it a bit (see below).
[...]
> > + - Kernel, libvirt, and QEMU version from L0
> > +
> > + - Kernel, libvirt and QEMU version from L1
> > +
> > + - QEMU command-line of L1 -- preferably full log from
> > + ``/var/log/libvirt/qemu/instance.log``
>
> (if you are running libvirt)
>
> > +
> > + - QEMU command-line of L2 -- preferably full log from
> > + ``/var/log/libvirt/qemu/instance.log``
>
> (if you are running libvirt)
Yes, I'll mention that bit. (I'm just too used to reports coming from
libvirt users :-))
> > +
> > + - Full ``dmesg`` output from L0
> > +
> > + - Full ``dmesg`` output from L1
> > +
> > + - Output of: ``x86info -a`` (& ``lscpu``) from L0
> > +
> > + - Output of: ``x86info -a`` (& ``lscpu``) from L1
>
> lscpu makes sense for other architectures as well.
Noted.
> > +
> > + - Output of: ``dmidecode`` from L0
> > +
> > + - Output of: ``dmidecode`` from L1
>
> This looks x86 specific? Maybe have a list of things that make sense
> everywhere, and list architecture-specific stuff in specific
> subsections?
Can do. Do you have any other specific debugging bits to look out for
on s390x or any other arch?
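Meanwhile, to make the report checklist actionable, I could append a
small collection script readers run once on L0 and once inside L1. A
sketch (the output directory name is mine; the x86-only tools and the
libvirt log path simply get skipped if absent):

```shell
#!/bin/sh
# Collect the bug-report artifacts from one level (run on L0, then
# again inside L1).  Tools or paths that don't exist are skipped.
OUT=nested-kvm-report
mkdir -p "$OUT"

uname -a   > "$OUT/uname.txt"     2>&1 || :
dmesg      > "$OUT/dmesg.txt"     2>&1 || :
lscpu      > "$OUT/lscpu.txt"     2>&1 || :
# x86-only extras:
x86info -a > "$OUT/x86info.txt"   2>&1 || :
dmidecode  > "$OUT/dmidecode.txt" 2>&1 || :
# Guest logs, if libvirt is managing the guests:
cp /var/log/libvirt/qemu/*.log "$OUT"/ 2>/dev/null || :

ls "$OUT"
```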
Thanks for the careful review. Much appreciated :-)
--
/kashyap
2020-04-20 11:17 [PATCH v2] docs/virt/kvm: Document running nested guests Kashyap Chamarthy
2020-04-21 10:35 ` Paolo Bonzini
2020-04-27 10:14 ` Kashyap Chamarthy
2020-04-22 8:56 ` Cornelia Huck
2020-04-27 15:22 ` Kashyap Chamarthy [this message]
2020-04-30 10:25 ` Cornelia Huck