From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: Paul Mackerras <paulus@samba.org>,
Alex Williamson <alex.williamson@redhat.com>,
qemu-ppc@nongnu.org, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH qemu v14 17/18] vfio/spapr: Use VFIO_SPAPR_TCE_v2_IOMMU
Date: Tue, 29 Mar 2016 16:44:04 +1100
Message-ID: <56FA1624.2060307@ozlabs.ru>
In-Reply-To: <20160329053042.GG15156@voom.fritz.box>

On 03/29/2016 04:30 PM, David Gibson wrote:
> On Thu, Mar 24, 2016 at 08:10:44PM +1100, Alexey Kardashevskiy wrote:
>> On 03/24/2016 11:03 AM, Alexey Kardashevskiy wrote:
>>> On 03/23/2016 05:03 PM, David Gibson wrote:
>>>> On Wed, Mar 23, 2016 at 02:06:36PM +1100, Alexey Kardashevskiy wrote:
>>>>> On 03/23/2016 01:53 PM, David Gibson wrote:
>>>>>> On Wed, Mar 23, 2016 at 01:12:59PM +1100, Alexey Kardashevskiy wrote:
>>>>>>> On 03/23/2016 12:08 PM, David Gibson wrote:
>>>>>>>> On Tue, Mar 22, 2016 at 04:54:07PM +1100, Alexey Kardashevskiy wrote:
>>>>>>>>> On 03/22/2016 04:14 PM, David Gibson wrote:
>>>>>>>>>> On Mon, Mar 21, 2016 at 06:47:05PM +1100, Alexey Kardashevskiy wrote:
>>>>>>>>>>> New VFIO_SPAPR_TCE_v2_IOMMU type supports dynamic DMA window
>>>>>>>>>>> management.
>>>>>>>>>>> This adds the ability for the VFIO common code to dynamically
>>>>>>>>>>> allocate/remove DMA windows in the host kernel when a new VFIO
>>>>>>>>>>> container is added/removed.
>>>>>>>>>>>
>>>>>>>>>>> This adds a VFIO_IOMMU_SPAPR_TCE_CREATE ioctl call to
>>>>>>>>>>> vfio_listener_region_add() and adds the just-created host IOMMU
>>>>>>>>>>> entry to the host IOMMU list; the opposite action is taken in
>>>>>>>>>>> vfio_listener_region_del().
>>>>>>>>>>>
>>>>>>>>>>> When creating a new window, this uses a heuristic to decide on
>>>>>>>>>>> the number of TCE table levels.
>>>>>>>>>>>
>>>>>>>>>>> This should cause no guest visible change in behavior.
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>>>>>>>> ---
>>>>>>>>>>> Changes:
>>>>>>>>>>> v14:
>>>>>>>>>>> * new to the series
>>>>>>>>>>>
>>>>>>>>>>> ---
>>>>>>>>>>> TODO:
>>>>>>>>>>> * export levels to PHB
>>>>>>>>>>> ---
>>>>>>>>>>>  hw/vfio/common.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
>>>>>>>>>>>  trace-events     |   2 ++
>>>>>>>>>>>  2 files changed, 105 insertions(+), 5 deletions(-)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>>>>>>>>>>> index 4e873b7..421d6eb 100644
>>>>>>>>>>> --- a/hw/vfio/common.c
>>>>>>>>>>> +++ b/hw/vfio/common.c
>>>>>>>>>>> @@ -279,6 +279,14 @@ static int vfio_host_iommu_add(VFIOContainer *container,
>>>>>>>>>>>      return 0;
>>>>>>>>>>>  }
>>>>>>>>>>>
>>>>>>>>>>> +static void vfio_host_iommu_del(VFIOContainer *container, hwaddr min_iova)
>>>>>>>>>>> +{
>>>>>>>>>>> +    VFIOHostIOMMU *hiommu = vfio_host_iommu_lookup(container, min_iova, 0x1000);
>>>>>>>>>>
>>>>>>>>>> The hard-coded 0x1000 looks dubious..
>>>>>>>>>
>>>>>>>>> Well, that's the minimal page size...
>>>>>>>>
>>>>>>>> Really? Some BookE CPUs support a 1KiB page size...
>>>>>>>
>>>>>>> Hm. For IOMMU? Ok. s/0x1000/1/ should do then :)
>>>>>>
>>>>>> Uh.. actually I don't think those CPUs generally had an IOMMU. But if
>>>>>> it's been done for the CPU MMU I wouldn't count on it not being done
>>>>>> for the IOMMU.
>>>>>>
>>>>>> 1 is a safer choice.
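For the record, the end result would be just the lookup below; this is a
sketch of the substitution only (vfio_host_iommu_lookup() is the helper
already used in this patch):

    /* Pass 1 rather than a hard-coded 0x1000 so the lookup does not
     * assume any minimum IOMMU page size */
    VFIOHostIOMMU *hiommu = vfio_host_iommu_lookup(container, min_iova, 1);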
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>>>> +    g_assert(hiommu);
>>>>>>>>>>> +    QLIST_REMOVE(hiommu, hiommu_next);
>>>>>>>>>>> +}
>>>>>>>>>>> +
>>>>>>>>>>>  static bool vfio_listener_skipped_section(MemoryRegionSection *section)
>>>>>>>>>>>  {
>>>>>>>>>>>      return (!memory_region_is_ram(section->mr) &&
>>>>>>>>>>> @@ -392,6 +400,61 @@ static void vfio_listener_region_add(MemoryListener *listener,
>>>>>>>>>>>      }
>>>>>>>>>>>      end = int128_get64(llend);
>>>>>>>>>>>
>>>>>>>>>>> +    if (container->iommu_type == VFIO_SPAPR_TCE_v2_IOMMU) {
>>>>>>>>>>
>>>>>>>>>> I think this would be clearer split out into a helper function,
>>>>>>>>>> vfio_create_host_window() or something.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> It would rather be vfio_spapr_create_host_window(), and we have been
>>>>>>>>> avoiding xxx_spapr_xxx so far. I'd cut-and-paste the SPAPR PCI AS
>>>>>>>>> listener into a separate file, but that usually triggers more
>>>>>>>>> discussion and never ends well.
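FWIW, a rough sketch of what such a helper could look like - the name,
signature and the levels heuristic below are illustrative guesses, not the
actual patch; it only wraps the VFIO_IOMMU_SPAPR_TCE_CREATE ioctl that the
hunk below uses:

    static int vfio_spapr_create_host_window(VFIOContainer *container,
                                             uint64_t window_size,
                                             unsigned page_shift,
                                             hwaddr *start_addr)
    {
        struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
        unsigned entries, pages;
        int ret;

        create.window_size = window_size;
        create.page_shift = page_shift;

        /* One possible heuristic: size the TCE table in host pages and
         * add an indirection level roughly per 64-fold growth of it */
        entries = window_size >> page_shift;
        pages = MAX((entries * sizeof(uint64_t)) / getpagesize(), 1);
        create.levels = ctz64(pow2ceil(pages)) / 6 + 1;

        ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_TCE_CREATE, &create);
        if (ret) {
            return -errno;
        }

        /* The host kernel reports where the new window actually starts */
        *start_addr = create.start_addr;
        return 0;
    }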
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>> +        unsigned entries, pages;
>>>>>>>>>>> +        struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
>>>>>>>>>>> +
>>>>>>>>>>> +        g_assert(section->mr->iommu_ops);
>>>>>>>>>>> +        g_assert(memory_region_is_iommu(section->mr));
>>>>>>>>>>
>>>>>>>>>> I don't think you need these asserts. AFAICT the same logic should
>>>>>>>>>> work if a RAM MR was added directly to PCI address space - this would
>>>>>>>>>> create the new host window, then the existing code for adding a RAM MR
>>>>>>>>>> would map that block of RAM statically into the new window.
>>>>>>>>>
>>>>>>>>> In what configuration/machine can we do that on SPAPR?
>>>>>>>>
>>>>>>>> spapr guests won't ever do that. But you can run an x86 guest on a
>>>>>>>> powernv host and this situation could come up.
>>>>>>>
>>>>>>>
>>>>>>> I am pretty sure VFIO won't work in this case anyway.
>>>>>>
>>>>>> I'm not. There's no fundamental reason VFIO shouldn't work with TCG.
>>>>>
>>>>> This is not about TCG (a pseries TCG guest works with VFIO on a powernv
>>>>> host); this is about things like the VFIO_IOMMU_GET_INFO vs.
>>>>> VFIO_IOMMU_SPAPR_TCE_GET_INFO ioctls. But yes, fundamentally, it can work.
>>>>>
>>>>> Should I add such support in this patchset?
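Just to illustrate what the divergence is (not proposed code, only a sketch
with error handling omitted) - the general case would need to branch on the
container type when querying the host IOMMU:

    int ret;

    if (container->iommu_type == VFIO_SPAPR_TCE_v2_IOMMU) {
        struct vfio_iommu_spapr_tce_info info = { .argsz = sizeof(info) };

        /* sPAPR reports a default 32-bit DMA window (start + size) */
        ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info);
        /* -> info.dma32_window_start, info.dma32_window_size */
    } else {
        struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };

        /* Type1 reports a bitmap of supported IOMMU page sizes instead */
        ret = ioctl(container->fd, VFIO_IOMMU_GET_INFO, &info);
        /* -> info.iova_pgsizes */
    }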
>>>>
>>>> Unless adding the generality is really complex, and so far I haven't
>>>> seen a reason for it to be.
>>>
>>> Seriously? :(
>>
>>
>> So, I tried.
>>
>> With the q35 machine (pc-i440fx-2.6 has an even worse memory tree), there
>> are several RAM blocks: 0..0xc0000, then pc.rom, then RAM again up to 2GB,
>> then a gap (PCI MMIO?), then the PC BIOS at 0xfffc0000 (which is RAM), and
>> then after 4GB the rest of the RAM:
>>
>> memory-region: system
>>   0000000000000000-ffffffffffffffff (prio 0, RW): system
>>     0000000000000000-000000007fffffff (prio 0, RW): alias ram-below-4g @pc.ram 0000000000000000-000000007fffffff
>>     0000000000000000-ffffffffffffffff (prio -1, RW): pci
>>       00000000000c0000-00000000000dffff (prio 1, RW): pc.rom
>>       00000000000e0000-00000000000fffff (prio 1, R-): alias isa-bios @pc.bios 0000000000020000-000000000003ffff
>>       00000000febe0000-00000000febeffff (prio 1, RW): 0003:09:00.0 BAR 0
>>         00000000febe0000-00000000febeffff (prio 0, RW): 0003:09:00.0 BAR 0 mmaps[0]
>>       00000000febf0000-00000000febf1fff (prio 1, RW): 0003:09:00.0 BAR 2
>>         00000000febf0000-00000000febf007f (prio 0, RW): msix-table
>>         00000000febf1000-00000000febf1007 (prio 0, RW): msix-pba [disabled]
>>       00000000febf2000-00000000febf2fff (prio 1, RW): ahci
>>       00000000fffc0000-00000000ffffffff (prio 0, R-): pc.bios
>>     [...]
>>     0000000100000000-000000027fffffff (prio 0, RW): alias ram-above-4g @pc.ram 0000000080000000-00000001ffffffff
>>
>>
>> Ok, let's say we filter out "pc.bios", "pc.rom" and BARs (aka "skip dump"
>> regions) and we end up having RAM below 2GB and above 4GB - 2 windows.
>> The problem is that a window needs to be aligned to a power of two.
>
> Window size? That should be ok - you can just round up the requested
> size to a power of two. We don't have to populate the whole host window.
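i.e. something like this at window-creation time - a sketch only, assuming
QEMU's pow2ceil() helper:

    /* Round the requested size up to a power of two; the guest only maps
     * what it needs, so the host window may be larger than the section */
    create.window_size = pow2ceil(int128_get64(section->size));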
>
>> Ok, I change my test from -m8G to -m10G to have 2 aligned regions (2GB and
>> 8GB); next problem - the second window needs to start at 1<<59, not at 4GB
>> or any other random offset.
>
> Yeah, that's harder. With the IODA, is it always the case that the 32-bit
> window is at 0 and the 64-bit window is at 1<<59, or can you have a
> 64-bit window at 0?
I can have a single window at 0; this is why my (hacked) version works at all.

IODA2 allows having 2 windows; each window can be backed by a TCE table with
a variable page size or be a bypass, and the capabilities of the two windows
are exactly the same.
> If it's the second, then this is theoretically possible, but the host
> kernel - on seeing the second window request - would need to realize that,
> instead of adding a new host window, it can extend the first host window
> so that it covers both guest windows.
>
> That does sound reasonably tricky and I don't think we should try to
> implement it any time soon. BUT - we should design our *interfaces* so
> that it's reasonable to add that in the future.
>
>> Ok, I change my test to start with -m1G to have a single window.
>> Now it fails because here is what happens:
>>
>>   region_add: pc.ram: 0 40000000
>>   region_del: pc.ram: 0 40000000
>>   region_add: pc.ram: 0 c0000
>>
>> The second "add" is not a power of two -> fail. And I cannot avoid this -
>> "pc.rom" is still there, it is a separate region, so RAM gets split into
>> smaller chunks. I do not know how to fix this properly.
Still, how do I fix this? Do I need to fix it now? Practically, when would I
create this single huge window?
>>
>>
>> So, in order to make it (x86/tcg on a powernv host) work at all, I had to
>> create a single huge window (as it is a single window, it starts at 0)
>> unconditionally in vfio_connect_container(), not in
>> vfio_listener_region_add(); and I added filtering (skip "pc.bios", "pc.rom",
>> BARs - these things start above 1GB), and then I could boot an x86_64 guest
>> and even pass through a Mellanox ConnectX-3; it would bring 2 interfaces up
>> and dhclient assigned IPs to them (it is quite amusing that it even works).
>>
>>
>> So, I think I will replace the asserts with:
>>
>>     unsigned pagesize = qemu_real_host_page_size;
>>     if (section->mr->iommu_ops) {
>>         pagesize = section->mr->iommu_ops->get_page_sizes(section->mr);
>>     }
>>
>> but there is no practical use for this anyway.
>
> So, I think what you need here is to compute the effective IOMMU page
> size (see the other email for details). That will be getrampagesize() for
> RAM regions, and the minimum of that and the guest IOMMU page size for
> IOMMU regions.
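Ok, so something like this then - a sketch only, assuming getrampagesize()
is made available here and that get_page_sizes() from 11/18 returns the
guest IOMMU page size:

    /* Effective IOMMU page size: the host RAM page size, further
     * limited by the guest IOMMU page size for IOMMU regions */
    uint64_t pagesize = getrampagesize();

    if (memory_region_is_iommu(section->mr)) {
        pagesize = MIN(pagesize,
                       section->mr->iommu_ops->get_page_sizes(section->mr));
    }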
--
Alexey