From: Xiong Zhang <xiong.y.zhang@intel.com>
To: xen-devel@lists.xenproject.org
Cc: Xiong Zhang <xiong.y.zhang@intel.com>,
andrew.cooper3@citrix.com, JBeulich@suse.com
Subject: [PATCH] tools/hvmloader: Use base instead of pci_mem_start for find_next_rmrr()
Date: Tue, 22 Aug 2017 05:53:39 +0800
Message-ID: <1503352419-2851-1-git-send-email-xiong.y.zhang@intel.com>
find_next_rmrr(base) is used to find the lowest RMRR ending above base
but below 4G.  The current code does not cover the following situation:

a. Two RMRRs exist, with only a small gap between them.
b. pci_mem_start and mem_resource.base are below the first RMRR's base.
c. find_next_rmrr(pci_mem_start) therefore returns the first RMRR.
d. After aligning mem_resource.base to the BAR size,
   first_rmrr.end < new_base < second_rmrr.base and
   new_base + bar_sz > second_rmrr.base.

The new BAR therefore overlaps the second RMRR but not the first one.
Since next_rmrr still points to the first RMRR, check_overlap() does
not detect the conflict and the BAR is assigned a wrong address.

Using the aligned new base to look up the next RMRR fixes the above
case and finds every RMRR that overlaps the new base.
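
For illustration only (not part of the patch), below is a minimal
stand-alone sketch of the failing case.  All addresses, the two-entry
RMRR table and the simplified find_next_rmrr() stand-in are made up for
this example; check_overlap() mirrors the logic of the hvmloader helper
of the same name:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical RMRR table: two regions with a small gap between them. */
    static const struct { uint64_t addr, size; } rmrr[] = {
        { 0xe0001000, 0x1000 },   /* first RMRR:  [0xe0001000, 0xe0002000) */
        { 0xe0006000, 0x1000 },   /* second RMRR: [0xe0006000, 0xe0007000) */
    };

    /* Overlap test, same logic as hvmloader's check_overlap(). */
    static int check_overlap(uint64_t start, uint64_t size,
                             uint64_t reserved_start, uint64_t reserved_size)
    {
        return (start + size > reserved_start) &&
               (start < reserved_start + reserved_size);
    }

    /* Lowest RMRR ending above base (simplified stand-in for the real helper). */
    static int find_next_rmrr(uint64_t base)
    {
        for ( unsigned int i = 0; i < 2; i++ )
            if ( rmrr[i].addr + rmrr[i].size > base )
                return i;
        return -1;
    }

    int main(void)
    {
        uint64_t pci_mem_start = 0xe0000000; /* below the first RMRR */
        uint64_t resource_base = 0xe0000800; /* advanced by earlier, smaller BARs */
        uint64_t bar_sz = 0x4000;

        /* Aligning up to bar_sz jumps over the whole first RMRR. */
        uint64_t base = (resource_base + bar_sz - 1) & ~(bar_sz - 1); /* 0xe0004000 */

        int stale = find_next_rmrr(pci_mem_start);  /* old code: still the first RMRR */
        int fresh = find_next_rmrr(base);           /* patched code: the second RMRR */

        printf("old lookup overlap: %d\n",          /* prints 0 -> conflict missed */
               check_overlap(base, bar_sz, rmrr[stale].addr, rmrr[stale].size));
        printf("new lookup overlap: %d\n",          /* prints 1 -> conflict caught */
               check_overlap(base, bar_sz, rmrr[fresh].addr, rmrr[fresh].size));
        return 0;
    }

With the stale index (old code) the conflict with the second RMRR goes
unnoticed; repeating the lookup on the aligned base (this patch)
catches it.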
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
---
tools/firmware/hvmloader/pci.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index f4288a3..16fccbf 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -405,8 +405,6 @@ void pci_setup(void)
     io_resource.base = 0xc000;
     io_resource.max = 0x10000;
 
-    next_rmrr = find_next_rmrr(pci_mem_start);
-
     /* Assign iomem and ioport resources in descending order of size. */
     for ( i = 0; i < nr_bars; i++ )
     {
@@ -464,15 +462,19 @@
         base = (resource->base + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
 
         /* If we're using mem_resource, check for RMRR conflicts. */
-        while ( resource == &mem_resource &&
-                next_rmrr >= 0 &&
-                check_overlap(base, bar_sz,
-                              memory_map.map[next_rmrr].addr,
-                              memory_map.map[next_rmrr].size) )
+        if ( resource == &mem_resource)
         {
-            base = memory_map.map[next_rmrr].addr + memory_map.map[next_rmrr].size;
-            base = (base + bar_sz - 1) & ~(bar_sz - 1);
             next_rmrr = find_next_rmrr(base);
+            while ( next_rmrr >= 0 &&
+                    check_overlap(base, bar_sz,
+                                  memory_map.map[next_rmrr].addr,
+                                  memory_map.map[next_rmrr].size) )
+            {
+                base = memory_map.map[next_rmrr].addr +
+                       memory_map.map[next_rmrr].size;
+                base = (base + bar_sz - 1) & ~(bar_sz - 1);
+                next_rmrr = find_next_rmrr(base);
+            }
         }
 
         bar_data |= (uint32_t)base;
--
2.7.4
Thread overview: 4+ messages
2017-08-21 21:53 Xiong Zhang [this message]
2017-08-25 15:10 ` [PATCH] tools/hvmloader: Use base instead of pci_mem_start for find_next_rmrr() Jan Beulich
2017-08-27 23:09 ` [PATCH v2] " Xiong Zhang
2017-08-28 8:18 ` Jan Beulich