From: Matt Evans <matt@ozlabs.org>
To: Alexander Graf <agraf@suse.de>
Cc: kvm@vger.kernel.org, kvm-ppc@vger.kernel.org, penberg@kernel.org,
	asias.hejun@gmail.com, levinsasha928@gmail.com,
	gorcunov@gmail.com
Subject: Re: [PATCH V3 2/2] kvm tools: Create arch-specific kvm_cpu__emulate_{mm}io()
Date: Fri, 06 Jan 2012 16:32:17 +1100
Message-ID: <4F068761.7060705@ozlabs.org>
In-Reply-To: <E0DCCD10-0E60-4875-881A-1DBF46007BE7@suse.de>

Hey Alex,

On 24/12/11 00:39, Alexander Graf wrote:
> 
> On 23.12.2011, at 14:26, Matt Evans wrote:
> 
>>
>> On 23/12/2011, at 11:58 PM, Alexander Graf wrote:
>>
>>>
>>> On 13.12.2011, at 07:21, Matt Evans wrote:
>>>
>>>> Different architectures will deal with MMIO exits differently.  For example,
>>>> KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
>>>> into windows in PCI bridges on other architectures.
>>>>
>>>> This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
>>>> from the main runloop's IO and MMIO exit handlers.  For x86, these directly
>>>> call kvm__emulate_io() and kvm__emulate_mmio() but other architectures will
>>>> perform some address munging before passing on the call.
>>>
>>> Why do you need address munging? PIO is simply not there and MMIO always goes to the physical address the CPU sees, so I don't see what you want to munge. The way the memory bus is attached to the CPU should certainly not be modeled differently for PPC and x86.
>>
>> PIO not there?  PIO is used heavily in kvmtool.  So, I made a window in a similar way to how a real PHB has PIO-window-in-MMIO.
>>
>> PCI BARs are currently 32-bit.  I don't want to limit the guest RAM to <4G
>> nor puncture holes in it just to make it look like x86... PCI bus addresses
>> == CPU addresses is a bit of an x86ism.  So, I just used another PHB window
>> to offset 32bit PCI MMIO up somewhere else.  We can then use all 4G of PCI
>> MMIO space without putting that at addr 0 and RAM starting >4G.  (And then,
>> exception vectors where?)
>>
>> The PCI/BARs/MMIO code could really support 64-bit addresses, though that's
>> an orthogonal bit of work.  Why should PPC have an MMIO hole in the
>> middle of RAM?
> 
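
(For concreteness, a minimal sketch of the split described above: the common
runloop calls an arch-specific wrapper, which on x86 passes straight through
to the common emulation code and on powerpc subtracts the PHB window offset
first.  The function names follow the patch; the signatures and the
PHB_MMIO_WIN_* constants below are illustrative, not kvmtool's actual values.)

    #include <stdbool.h>
    #include "kvm/kvm.h"	/* kvmtool's types and kvm__emulate_mmio() */

    /* x86 variant (sketch): MMIO exits pass straight through. */
    static inline bool kvm_cpu__emulate_mmio(struct kvm *kvm, u64 phys_addr,
    					     u8 *data, u32 len, u8 is_write)
    {
    	return kvm__emulate_mmio(kvm, phys_addr, data, len, is_write);
    }

    /* powerpc variant (sketch; would live in its own arch header): the PHB
     * steers a window of CPU physical address space onto 32-bit PCI bus
     * addresses, so the window base is subtracted before the common code
     * sees the address.  The values below are hypothetical. */
    #define PHB_MMIO_WIN_BASE	0x200000000ULL
    #define PHB_MMIO_WIN_SIZE	0x100000000ULL	/* a full 4G of PCI MMIO */

    static inline bool kvm_cpu__emulate_mmio(struct kvm *kvm, u64 phys_addr,
    					     u8 *data, u32 len, u8 is_write)
    {
    	if (phys_addr >= PHB_MMIO_WIN_BASE &&
    	    phys_addr - PHB_MMIO_WIN_BASE < PHB_MMIO_WIN_SIZE)
    		phys_addr -= PHB_MMIO_WIN_BASE;	/* CPU addr -> PCI bus addr */

    	return kvm__emulate_mmio(kvm, phys_addr, data, len, is_write);
    }

(The PIO window mentioned above would work the same way: a powerpc
kvm_cpu__emulate_io() could recognise a second window and steer those
accesses into the common ioport code.)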

Sooo.. call it post-holiday bliss, but I don't understand what you're saying
here. :)

> I fully agree with what you're saying, but the layering seems off. If the CPU
> gets an MMIO request, it gets that on a physical address from the view of the
  ^^^^ produces?

> CPU. Why would you want to have manual munging there to get to whatever window
> you have? Just map the MMIO regions to the higher addresses and expose
> whatever different representation you have to the device, not to the CPU
> layer.

What do you mean here by "map" and "representation"?  The only way I can parse
this is that you're describing PCI devices seeing PCI bus addresses, which CPU
MMIOs are converted to by the window offset, i.e. what already exists,
i.e. what you're disagreeing with :-)

Sorry.. please explain some more.  Is your suggestion to make CPU phys addresses
and PCI bus addresses 1:1?  (Hole in RAM..)
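
(To make the two options concrete, a small standalone sketch of the address
translation under each scheme; every constant here is made up for
illustration.)

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative constants only, not kvmtool's real layout. */
    #define WIN_BASE 0x200000000ULL	/* PHB window base, above guest RAM */

    /* Option A: windowed PHB, PCI bus addr = CPU addr minus the window
     * base, so guest RAM can stay contiguous from address 0. */
    static uint32_t cpu_to_pci_windowed(uint64_t cpu_addr)
    {
    	return (uint32_t)(cpu_addr - WIN_BASE);
    }

    /* Option B: 1:1 mapping, bus addr == CPU addr, which forces 32-bit
     * PCI MMIO (and hence a hole in RAM) to sit below 4G. */
    static uint32_t cpu_to_pci_identity(uint64_t cpu_addr)
    {
    	return (uint32_t)cpu_addr;
    }

    int main(void)
    {
    	uint64_t in_window = WIN_BASE + 0x1000;	/* a BAR inside the window */

    	printf("windowed: CPU 0x%llx -> PCI bus 0x%08x\n",
    	       (unsigned long long)in_window, cpu_to_pci_windowed(in_window));
    	printf("identity: CPU 0x%llx -> PCI bus 0x%08x (must be < 4G)\n",
    	       (unsigned long long)0xc0001000ULL,
    	       cpu_to_pci_identity(0xc0001000ULL));
    	return 0;
    }

(Under the windowed scheme guest RAM stays contiguous from 0, so the exception
vectors stay put; under the 1:1 scheme the 32-bit PCI MMIO region has to sit
below 4G, which is the hole in RAM referred to above.)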


Thanks!


Matt

Thread overview: 10+ messages
2011-12-13  6:21 [PATCH V3 0/2] kvm tools: Prepare kvmtool for another architecture Matt Evans
2011-12-13  6:21 ` [PATCH V3 1/2] kvm tools: Add ability to map guest RAM from hugetlbfs Matt Evans
2011-12-14  0:03   ` David Evensky
2011-12-14  1:45     ` Matt Evans
2011-12-13  6:21 ` [PATCH V3 2/2] kvm tools: Create arch-specific kvm_cpu__emulate_{mm}io() Matt Evans
2011-12-23 12:58   ` Alexander Graf
2011-12-23 13:26     ` Matt Evans
2011-12-23 13:39       ` Alexander Graf
2012-01-06  5:32         ` Matt Evans [this message]
2012-01-09 13:41           ` Alexander Graf
