From: Alexander Graf <agraf@suse.de>
To: Peter Maydell <peter.maydell@linaro.org>,
Pavel Fedin <p.fedin@samsung.com>
Cc: QEMU Developers <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [RFC] Virt machine memory map
Date: Mon, 20 Jul 2015 13:23:45 +0200 [thread overview]
Message-ID: <55ACDA41.2080201@suse.de> (raw)
In-Reply-To: <CAFEAcA_6so5w8We_2hgCZYJhrf63p51O=my2H7wb+f0VnWEMmA@mail.gmail.com>
On 07/20/15 11:41, Peter Maydell wrote:
> On 20 July 2015 at 09:55, Pavel Fedin <p.fedin@samsung.com> wrote:
>> Hello!
>>
>> In our project we are working on very fast paravirtualized network I/O drivers based on ivshmem. We
>> successfully got ivshmem working on ARM, though with one hack.
>> Currently we have:
>> --- cut ---
>> [VIRT_PCIE_MMIO] = { 0x10000000, 0x2eff0000 },
>> [VIRT_PCIE_PIO] = { 0x3eff0000, 0x00010000 },
>> [VIRT_PCIE_ECAM] = { 0x3f000000, 0x01000000 },
>> [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
>> --- cut ---
>> The MMIO region is not large enough for us because we want a 1GB mapping for a PCI device. To
>> make it work, we modify the map as follows:
>> --- cut ---
>> [VIRT_PCIE_MMIO] = { 0x10000000, 0x7eff0000 },
>> [VIRT_PCIE_PIO] = { 0x8eff0000, 0x00010000 },
>> [VIRT_PCIE_ECAM] = { 0x8f000000, 0x01000000 },
>> [VIRT_MEM] = { 0x90000000, 30ULL * 1024 * 1024 * 1024 },
>> --- cut ---
>> The question is: how could we upstream this? I believe modifying the 32-bit virt memory map this way
>> is not good. Would it be OK to have a different memory map for 64-bit virt?
> I think the theory we discussed at the time of putting in the PCIe
> device was that if we wanted this we'd add support for the other
> PCIe memory window (which would then live at somewhere above 4GB).
> Alex, can you remember what the idea was?
Yes, pretty much. It would give us an upper bound on the amount of RAM
that we're able to support, but at least we would be able to support big
MMIO regions like the one ivshmem needs.
I'm not really sure where to put it, though. Depending on your kernel
config, Linux supports somewhere between 39 and 48 or so bits of physical
address space. And I'd rather not crawl into the PCI hole rat hole that
we have on x86 ;).
We could of course also put it just above RAM, but then our device tree
becomes really dynamic and heavily dependent on -m.
>
> But to be honest I think we weren't expecting anybody to need
> 1GB of PCI MMIO space unless it was a video card...
Ivshmem was actually the most likely candidate I could think of that
would require big MMIO regions ;).
Alex
Thread overview: 8+ messages
2015-07-20 8:55 [Qemu-devel] [RFC] Virt machine memory map Pavel Fedin
2015-07-20 9:41 ` Peter Maydell
2015-07-20 11:23 ` Alexander Graf [this message]
2015-07-20 13:30 ` Igor Mammedov
2015-07-20 13:44 ` Alexander Graf
2015-07-22 6:52 ` Pavel Fedin
2015-07-22 7:33 ` Alexander Graf
2015-07-22 8:42 ` Pavel Fedin