From: Alexey G <x1917x@gmail.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: "Zhang, Xiong Y" <xiong.y.zhang@intel.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Bug] Intel RMRR support with upstream Qemu
Date: Wed, 26 Jul 2017 02:49:01 +1000
Message-ID: <20170726024901.00004fb8@gmail.com>
In-Reply-To: <9a6687c2-b08d-dbf4-4810-9624d27b75ba@citrix.com>

On Tue, 25 Jul 2017 15:13:17 +0100
Igor Druzhinin <igor.druzhinin@citrix.com> wrote:
> >> The algorithm implemented in hvmloader for that is not complicated and
> >> can be moved to libxl easily. What we can do is to provision a hole big
> >> enough to include all the initially assigned PCI devices. We can also
> >> account for emulated MMIO regions if necessary. But, to be honest, it
> >> doesn't really matter, since if there is not enough space in the lower
> >> MMIO hole for some BARs, they can easily be relocated to the upper
> >> MMIO hole by hvmloader or the guest itself (dynamically).
> >>
> >> Igor
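
For reference, the sizing part of that algorithm really is small. A rough
sketch, illustrative only and not the actual hvmloader code, of accumulating
naturally aligned BAR sizes into a required hole size:

#include <stdint.h>
#include <stdlib.h>

/* Compare for descending order so the largest BARs are placed first. */
static int cmp_desc(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x < y) ? 1 : (x > y) ? -1 : 0;
}

/* bar_sizes[] are power-of-two BAR sizes in bytes. */
static uint64_t mmio_hole_needed(uint64_t *bar_sizes, unsigned int nr)
{
    uint64_t total = 0;
    unsigned int i;

    qsort(bar_sizes, nr, sizeof(bar_sizes[0]), cmp_desc);

    for ( i = 0; i < nr; i++ )
    {
        /* Keep each BAR naturally aligned (sizes are powers of two). */
        total = (total + bar_sizes[i] - 1) & ~(bar_sizes[i] - 1);
        total += bar_sizes[i];
    }

    return total;
}
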
> > [Zhang, Xiong Y] Yes, if we could supply a big enough MMIO hole and
> > not allow hvmloader to relocate BARs, things would be easier. But how
> > could we supply a big enough MMIO hole? (a) statically set the base
> > address of the MMIO hole to 2G/3G; (b) probe all the PCI devices and
> > calculate the required MMIO size, as hvmloader does. But this runs
> > prior to QEMU, so how could we probe the PCI devices?
>
> It's true that we don't know the space occupied by emulated devices
> before QEMU is started. But QEMU needs to be started with some lower
> MMIO hole size statically assigned.
>
> One of the possible solutions is to calculate the hole size required to
> include all the assigned pass-through devices and round it up to the
> nearest GB boundary, but no larger than 2GB total. If that is still not
> enough to also include all the emulated devices, then so be it: some of
> the PCI devices will be relocated to the upper MMIO hole in that case.
Not all devices are BAR64-capable, and even those which are may have Option
ROM BARs (mem32 only). There are also 32-bit guests which will find 64-bit
BARs placed above 4GB unacceptable. The low MMIO hole is a precious resource.
One also needs to consider the implications of PCI device hotplug for the
'static' precalculation approach.
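
To illustrate what such an accounting has to deal with, here is a rough
sketch, with invented structures and names, which counts only the BARs that
cannot be moved above 4GB against the low hole, then rounds up to a GB
boundary and caps at 2GB as proposed above:

#include <stdbool.h>
#include <stdint.h>

#define GB(x)        ((uint64_t)(x) << 30)
#define LOW_HOLE_MAX GB(2)

struct bar {
    uint64_t size;      /* power of two */
    bool mem64;         /* BAR is 64-bit capable */
    bool is_option_rom; /* Option ROM BARs are mem32 only */
};

static uint64_t low_hole_size(const struct bar *bars, unsigned int nr)
{
    uint64_t need = 0;
    unsigned int i;

    for ( i = 0; i < nr; i++ )
    {
        /* Option ROMs and plain 32-bit BARs cannot go above 4GB, so they
         * always count against the low hole; 64-bit-capable BARs are
         * assumed relocatable to the upper hole here. */
        if ( bars[i].is_option_rom || !bars[i].mem64 )
            need += bars[i].size;
    }

    /* Round up to the next GB boundary, but never steal more than 2GB of
     * the guest's low memory. */
    need = (need + GB(1) - 1) & ~(GB(1) - 1);
    if ( need > LOW_HOLE_MAX )
        need = LOW_HOLE_MAX;

    return need;
}

Anything pushed out by the cap has to be a 64-bit BAR relocated above 4GB,
and a 32-bit guest simply loses access to it, which is exactly why relying
on relocation to the upper hole is not a universal answer.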