From: Alexey G <x1917x@gmail.com>
To: "Zhang, Xiong Y" <xiong.y.zhang@intel.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Bug] Intel RMRR support with upstream Qemu
Date: Fri, 21 Jul 2017 23:56:44 +1000 [thread overview]
Message-ID: <20170721235644.00004553@gmail.com> (raw)
In-Reply-To: <20170721232804.00001af1@gmail.com>
> On Fri, 21 Jul 2017 10:57:55 +0000
> "Zhang, Xiong Y" <xiong.y.zhang@intel.com> wrote:
>
> > On an Intel Skylake machine with upstream QEMU, if I add
> > rdm="strategy=host,policy=strict" to hvm.cfg, a Windows 8.1 DomU
> > can't boot up and continuously reboots.
> >
> > Steps to reproduce this issue:
> >
> > 1) Boot xen with iommu=1 to enable iommu
> > 2) hvm.cfg contain:
> >
> > builder="hvm"
> >
> > memory=xxxx
> >
> > disk=['win8.1 img']
> >
> > device_model_override='qemu-system-i386'
> >
> > device_model_version='qemu-xen'
> >
> > rdm="strategy=host,policy=strict"
> >
> > 3) xl cr hvm.cfg
> >
> > Conditions to reproduce this issue:
> >
> > 1) DomU memory size > the top address of the RMRR. Otherwise, the
> > issue disappears.
> > 2) rdm="strategy=host,policy=strict" must be present
> > 3) Windows DomU. Linux DomU doesn't have such issue.
> > 4) Upstream qemu. Traditional qemu doesn't have such issue.
> >
> > In this situation, hvmloader will relocate some guest RAM from below
> > the RMRR to high memory, and it seems the Windows guest then accesses
> > an invalid address. Could someone give me some suggestions on how to
> > debug this?
>
> You likely have RMRR range(s) below the 2GB boundary.
>
> You may try the following:
>
> 1. Specify a large 'mmio_hole' value in your domain configuration file,
> e.g. mmio_hole=2560
> 2. If that doesn't help, the 'xl dmesg' output might be useful
>
> Right now upstream QEMU still doesn't support relocating parts of
> guest RAM above the 4GB boundary when they are overlapped by MMIO
> ranges. AFAIR, forcing allow_memory_relocate to 1 for hvmloader didn't
> bring anything good for HVM guests.
>
> Setting the mmio_hole size manually allows you to create a "predefined"
> memory/MMIO hole layout for both QEMU (via 'max-ram-below-4g') and
> hvmloader (via a XenStore param), effectively avoiding MMIO/RMRR
> overlaps or RAM relocation in hvmloader, so this might help.
Wrote too soon: "policy=strict" means you wouldn't have been able to
create the DomU at all if the RMRR were below 2GB... so it must actually
be above 2GB. Anyway, try setting the mmio_hole size.
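
For reference, a sketch of the full domain config with the suggested
mmio_hole setting. The memory size and hole size here are illustrative
values, not tested ones; the remaining lines are taken from the report
above:

```
builder="hvm"
memory=4096                 # illustrative; issue needs memory above the RMRR top
disk=['win8.1 img']
device_model_override='qemu-system-i386'
device_model_version='qemu-xen'
rdm="strategy=host,policy=strict"
mmio_hole=2560              # 2.5GB MMIO hole, i.e. RAM below 4GB capped at 1.5GB
```

With a hole this large, hvmloader should find all RMRR ranges already
inside the hole and have no reason to relocate guest RAM, which is the
overlap-avoidance effect described above.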
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel