From: Xiao Guangrong <guangrong.xiao@linux.intel.com>
To: Neo Jia <cjia@nvidia.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	"Kirti Wankhede" <kwankhede@nvidia.com>,
	"Andrea Arcangeli" <aarcange@redhat.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>
Subject: Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed
Date: Tue, 5 Jul 2016 12:02:42 +0800	[thread overview]
Message-ID: <577B3162.3050308@linux.intel.com> (raw)
In-Reply-To: <20160705013555.GA24282@nvidia.com>



On 07/05/2016 09:35 AM, Neo Jia wrote:
> On Tue, Jul 05, 2016 at 09:19:40AM +0800, Xiao Guangrong wrote:
>>
>>
>> On 07/04/2016 11:33 PM, Neo Jia wrote:
>>
>>>>>
>>>>> Sorry, I think I misread the "allocation" as "mapping". We only delay the
>>>>> CPU mapping, not the allocation.
>>>>
>>>> So how should I understand your statement:
>>>> "at that moment nobody has any knowledge about how the physical mmio gets virtualized"?
>>>>
>>>> The resource, the physical MMIO region, has already been allocated, so why do we not
>>>> know the physical address mapped into the VM?
>>>>
>>>
>>> From a device driver point of view, the physical mmio region never gets allocated until
>>> the corresponding resource is requested by clients and granted by the mediated device driver.
>>
>> Hmm... but you told me that you did not delay the allocation. :(
>
> Hi Guangrong,
>
> The allocation here is the allocation of the device resource, and the only way to
> access that kind of device resource is via an mmio region of some pages.
>
> For example, the VM needs resource A, and the only way to access resource A is
> via some kind of device memory at mmio address X.
>
> So we never defer the allocation request at runtime; we just set up the
> CPU mapping later, when the region actually gets accessed.
>
>>
>> So this brings us back to my original question: why not allocate the physical mmio region in mmap()?
>>
>
> Without running anything inside the VM, how do you know how the hw resource gets
> allocated? Therefore we have no knowledge of how the mmio region will be used.

The allocation and mapping can be two independent processes:
- the first process is allocation only. The MMIO region is allocated from the physical
   hardware and is mapped into an arbitrary virtual address in _QEMU_ by mmap().
   At this point, the VM can not actually use this resource.

- the second process is mapping. When the VM enables this region, e.g. when it enables
   the PCI BAR, QEMU maps the virtual address returned by mmap() into the VM's physical
   memory. After that, the VM can access this region.

The second process is handled completely in userspace; that means the mediated
device driver needn't care how the resource is mapped into the VM.
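
Roughly, the userspace half of that second process can be sketched as below. This is
hypothetical code, not the actual QEMU implementation; vm_fd, region_fd, bar_gpa and
slot are made-up placeholders. The idea is: mmap() the region exported by the driver,
then, once the guest programs the BAR, hand the returned virtual address to KVM with
KVM_SET_USER_MEMORY_REGION:

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: vm_fd is the KVM VM fd, region_fd/region_size describe the
 * device MMIO region, bar_gpa is the guest physical address the guest
 * programmed into the BAR, slot is a free KVM memory slot number.
 */
static int map_bar_into_vm(int vm_fd, int region_fd, uint64_t region_size,
			   uint64_t bar_gpa, uint32_t slot)
{
	/* First process: get a QEMU virtual address for the device's MMIO pages. */
	void *hva = mmap(NULL, region_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, region_fd, 0);
	if (hva == MAP_FAILED)
		return -1;

	/*
	 * Second process: once the guest enables the PCI BAR, tell KVM that
	 * this host virtual address backs guest physical address bar_gpa.
	 */
	struct kvm_userspace_memory_region mr;
	memset(&mr, 0, sizeof(mr));
	mr.slot = slot;
	mr.guest_phys_addr = bar_gpa;
	mr.memory_size = region_size;
	mr.userspace_addr = (uint64_t)(uintptr_t)hva;

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mr);
}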

This is how QEMU/VFIO currently works. Could you please tell me how your solution
differs from the current QEMU/VFIO model, and why the current model cannot meet your
requirement, so that we can better understand your scenario?
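
For reference, the deferred CPU mapping described above is the in-kernel half of this
scheme: the vendor driver's mmap() only marks the VMA VM_PFNMAP and installs no PTEs,
and a fault handler inserts the PFN on first access. The sketch below is a minimal
illustration against recent kernel interfaces, not any vendor driver's actual code;
struct mdev_state and its base_pfn field are hypothetical. VMAs of exactly this kind
are what this patch series teaches KVM's MMU to handle.

#include <linux/mm.h>
#include <linux/fs.h>

/* Hypothetical per-device state. */
struct mdev_state {
	unsigned long base_pfn;	/* filled in once the resource is granted */
};

static vm_fault_t mdev_mmio_fault(struct vm_fault *vmf)
{
	struct mdev_state *mdev = vmf->vma->vm_private_data;
	unsigned long pfn = mdev->base_pfn + vmf->pgoff;

	/* Insert the PFN only when the page is first touched. */
	return vmf_insert_pfn(vmf->vma, vmf->address, pfn);
}

static const struct vm_operations_struct mdev_mmio_ops = {
	.fault = mdev_mmio_fault,
};

static int mdev_mmio_mmap(struct file *filp, struct vm_area_struct *vma)
{
	/* No PTEs yet: mark the VMA PFNMAP and defer mapping to fault time. */
	vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
	vma->vm_ops = &mdev_mmio_ops;
	vma->vm_private_data = filp->private_data;
	return 0;
}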


Thread overview: 44+ messages
2016-06-30 13:01 [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed Paolo Bonzini
2016-06-30 13:01 ` [PATCH 1/2] KVM: MMU: prepare to support mapping of VM_IO and VM_PFNMAP frames Paolo Bonzini
2016-06-30 13:01 ` [PATCH 2/2] KVM: MMU: try to fix up page faults before giving up Paolo Bonzini
2016-06-30 21:59 ` [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed Neo Jia
2016-07-04  6:39 ` Xiao Guangrong
2016-07-04  7:03   ` Neo Jia
2016-07-04  7:37     ` Xiao Guangrong
2016-07-04  7:48       ` Paolo Bonzini
2016-07-04  7:59         ` Xiao Guangrong
2016-07-04  8:14           ` Paolo Bonzini
2016-07-04  8:21             ` Xiao Guangrong
2016-07-04  8:48               ` Paolo Bonzini
2016-07-04  7:53       ` Neo Jia
2016-07-04  8:19         ` Xiao Guangrong
2016-07-04  8:41           ` Neo Jia
2016-07-04  8:45             ` Xiao Guangrong
2016-07-04  8:54               ` Xiao Guangrong
2016-07-04  9:16               ` Neo Jia
2016-07-04 10:16                 ` Xiao Guangrong
2016-07-04 15:33                   ` Neo Jia
2016-07-05  1:19                     ` Xiao Guangrong
2016-07-05  1:35                       ` Neo Jia
2016-07-05  4:02                         ` Xiao Guangrong [this message]
2016-07-05  5:16                           ` Neo Jia
2016-07-05  6:26                             ` Xiao Guangrong
2016-07-05  7:30                               ` Neo Jia
2016-07-05  9:02                                 ` Xiao Guangrong
2016-07-05 15:07                                   ` Neo Jia
2016-07-06  2:22                                     ` Xiao Guangrong
2016-07-06  4:01                                       ` Neo Jia
2016-07-04  7:38   ` Paolo Bonzini
2016-07-04  7:40     ` Xiao Guangrong
2016-07-05  5:41 ` Neo Jia
2016-07-05 12:18   ` Paolo Bonzini
2016-07-05 14:02     ` Neo Jia
2016-07-06  2:00     ` Xiao Guangrong
2016-07-06  2:18       ` Neo Jia
2016-07-06  2:35         ` Xiao Guangrong
2016-07-06  2:57           ` Neo Jia
2016-07-06  4:02             ` Xiao Guangrong
2016-07-06 11:48               ` Paolo Bonzini
2016-07-07  2:36                 ` Xiao Guangrong
2016-07-06  6:05       ` Paolo Bonzini
2016-07-06 15:50         ` Alex Williamson
