From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Le Tan <tamlokveer@gmail.com>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	wei.huang2@amd.com, qemu-devel@nongnu.org,
	Luiz Capitulino <lcapitulino@redhat.com>,
	Auger Eric <eric.auger@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Wei Yang <richardw.yang@linux.intel.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH PROTOTYPE 3/6] vfio: Implement support for sparse RAM memory regions
Date: Wed, 18 Nov 2020 17:14:22 +0100
Message-ID: <6141422c-1427-2a8d-b3ff-3c49ab1b59d2@redhat.com>
In-Reply-To: <20201118152311.GB29639@xz-x1>

On 18.11.20 16:23, Peter Xu wrote:
> David,
> 
> On Wed, Nov 18, 2020 at 02:04:00PM +0100, David Hildenbrand wrote:
>> On 20.10.20 22:44, Peter Xu wrote:
>>> On Tue, Oct 20, 2020 at 10:01:12PM +0200, David Hildenbrand wrote:
>>>> Thanks ... but I have an AMD system. Will try to find out how to get
>>>> that running with AMD :)
>>>
>>> You may still want to start with intel-iommu first. :) I think it should work
>>> on amd hosts too.
>>>
>>> Just another FYI - Wei is working on amd-iommu for vfio [1], but it's still
>>> under review.
>>>
>>> [1] https://lore.kernel.org/qemu-devel/20201002145907.1294353-1-wei.huang2@amd.com/
>>>
>>
>> I'm trying to get an iommu setup running (without virtio-mem!),
>> but it's a big mess.
>>
>> Essential parts of my QEMU cmdline are:
>>
>> sudo build/qemu-system-x86_64 \
>>      -accel kvm,kernel-irqchip=split \
>>      ...
>>      -device pcie-pci-bridge,addr=1e.0,id=pci.1 \
>>      -device vfio-pci,host=0c:00.0,x-vga=on,bus=pci.1,addr=1.0,multifunction=on \
>>      -device vfio-pci,host=0c:00.1,bus=pci.1,addr=1.1 \
>>      -device intel-iommu,caching-mode=on,intremap=on \
> 
> The intel-iommu device needs to be created before the rest of the devices.  I
> forgot the reason behind it; it should be related to how the device address
> spaces are created.  This rule should apply to all of the other vIOMMUs, afaiu.
> 
> Libvirt guarantees that ordering when VT-d is enabled, though when using the
> qemu cmdline that's indeed hard to spot at first glance... iirc we tried to
> fix this, but I forgot the details; it's just not trivial.
> 
> I noticed that this ordering constraint was also missing from the qemu wiki
> page on VT-d, so I updated it there too, hopefully..
> 
> https://wiki.qemu.org/Features/VT-d#Command_Line_Example
> 

That did the trick! Thanks!!!

virtio-mem + vfio + iommu seems to work. More testing to be done.
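
FWIW, with the intel-iommu device moved in front of the other devices, the
relevant part of the cmdline now looks roughly like this (same devices as
above, everything else unchanged):

sudo build/qemu-system-x86_64 \
     -accel kvm,kernel-irqchip=split \
     -device intel-iommu,caching-mode=on,intremap=on \
     ...
     -device pcie-pci-bridge,addr=1e.0,id=pci.1 \
     -device vfio-pci,host=0c:00.0,x-vga=on,bus=pci.1,addr=1.0,multifunction=on \
     -device vfio-pci,host=0c:00.1,bus=pci.1,addr=1.1 \
     ...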

However, malicious guests can play nasty tricks like

a) Unplugging plugged virtio-mem blocks while they are mapped via an
    IOMMU

1. Guest: map memory location X located on a virtio-mem device inside a
    plugged block into the IOMMU
    -> QEMU IOMMU notifier: create vfio DMA mapping
    -> VFIO pins the memory of the plugged block (populating memory)
2. Guest: Request to unplug memory location X via virtio-mem device
    -> QEMU virtio-mem: discards the memory.
    -> VFIO still has the memory pinned

We consume more memory than intended. If that virtio-mem memory were to get 
replugged and used, we would have an inconsistency. An IOMMU device reset 
fixes it (whereby all VFIO mappings are removed via the IOMMU notifier).


b) Mapping unplugged virtio-mem blocks via an IOMMU

1. Guest: map memory location X located on a virtio-mem device inside an
    unplugged block
    -> QEMU IOMMU notifier: create vfio DMA mapping
    -> VFIO pins memory of unplugged blocks (populating memory)

Memory that is supposed to be discarded now gets populated and consumes 
host memory. This is similar to a malicious guest simply writing to 
unplugged memory blocks (to be tackled with "protection of unplugged 
memory" in the future); however, here the memory will also get pinned.


To prohibit b) from happening, we would have to disallow creating the 
VFIO mapping (fairly easy).
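
Roughly what I have in mind for b) (just a sketch; the helper names below
are made up for illustration, the prototype would ask the SparseRAMHandler
of the memory region instead):

static bool vfio_dma_mapping_allowed(MemoryRegion *mr, hwaddr offset,
                                     hwaddr size)
{
    if (!memory_region_is_sparse(mr)) {   /* made-up helper */
        /* Ordinary RAM: nothing to check. */
        return true;
    }
    /* Refuse to create VFIO DMA mappings that target unplugged blocks. */
    return virtio_mem_is_range_plugged(mr, offset, size);   /* made-up */
}

The vfio IOMMU MAP notifier would call something like this before
vfio_dma_map() and simply skip/fail the mapping if it returns false.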

To prohibit a), there would have to be some notification to IOMMU 
implementations to unmap/refresh whenever an IOMMU entry still points at 
memory that is getting discarded (and the VM is doing something it's not 
supposed to do).
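
The rough shape of that notification could be something like the following
(again just a sketch, the names are made up): virtio-mem would notify
registered listeners right before discarding, and a vIOMMU implementation
would use that to invalidate any entries that still translate into the
discarded range, which in turn triggers the existing IOMMU UNMAP notifiers
so vfio drops (and unpins) the affected DMA mappings.

/* Sketch only: struct/function names are made up for illustration. */
typedef struct DiscardNotifier DiscardNotifier;
struct DiscardNotifier {
    /*
     * Called by virtio-mem right before it discards
     * [offset, offset + size) of its memory region @mr.
     */
    void (*notify_discard)(DiscardNotifier *dn, MemoryRegion *mr,
                           hwaddr offset, hwaddr size);
};

/*
 * A vIOMMU (e.g., intel-iommu) would register such a notifier and, in
 * notify_discard(), unmap/refresh all IOMMU entries that still point
 * into the discarded range; the resulting IOMMU UNMAP notifications
 * then make vfio remove (and unpin) the corresponding DMA mappings.
 */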


>> As soon as I enable "intel_iommu=on" in my guest kernel, graphics
>> stop working (random mess on graphics output) and I get
>>    vfio-pci 0000:0c:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0023 address=0xff924000 flags=0x0000]
>> in the hypervisor, along with other nice messages.
>>
>> I can spot no vfio DMA mappings coming from an iommu, just as if the
>> guest weren't even trying to set up the iommu.
>>
>> I tried with
>> 1. AMD Radeon RX Vega 56
>> 2. Nvidia GT220
>> resulting in similar issues.
>>
>> I also tried with "-device amd-iommu", running into other issues
>> (the guest won't even boot up). Are my graphics cards missing some support,
>> or is there a fundamental flaw in my setup?
> 
> I guess amd-iommu won't work without Wei Huang's series applied.

Oh, okay - I spotted it in QEMU and thought this was already working :)

-- 
Thanks,

David / dhildenb



