Linux IOMMU Development
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Gurchetan Singh <gurchetansingh@google.com>
Cc: virtio-dev@lists.oasis-open.org,
	Claire Chang <tientzu@google.com>,
	peterz@infradead.org, will@kernel.org,
	Tomasz Figa <tfiga@google.com>,
	iommu@lists.linux-foundation.org, robin.murphy@arm.com,
	hch@lst.de
Subject: Re: virtio-gpu dedicated heap
Date: Thu, 3 Mar 2022 23:56:46 -0500	[thread overview]
Message-ID: <20220303235527-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <CAAfnVBmCUHKRUVA=UouoSUH-eTyTTpNReU6i8TSD94iyYWyQzg@mail.gmail.com>

$ ./scripts/get_maintainer.pl -f ./drivers/gpu/drm/virtio/

David Airlie <airlied@linux.ie> (maintainer:VIRTIO GPU DRIVER)
Gerd Hoffmann <kraxel@redhat.com> (maintainer:VIRTIO GPU DRIVER)
Daniel Vetter <daniel@ffwll.ch> (maintainer:DRM DRIVERS)
dri-devel@lists.freedesktop.org (open list:VIRTIO GPU DRIVER)
virtualization@lists.linux-foundation.org (open list:VIRTIO GPU DRIVER)
linux-kernel@vger.kernel.org (open list)

You might want to CC these people.

On Thu, Mar 03, 2022 at 08:07:03PM -0800, Gurchetan Singh wrote:
> +iommu@lists.linux-foundation.org not iommu-request
> 
> On Thu, Mar 3, 2022 at 8:05 PM Gurchetan Singh <gurchetansingh@chromium.org>
> wrote:
> 
>     Hi everyone,
> 
>     With the current virtio setup, all of guest memory is shared with host
>     devices.  There has been interest in changing this, to improve isolation of
>     guest memory and increase confidentiality.  
> 
>     The recently introduced restricted DMA mechanism makes excellent progress
>     in this area:
> 
>     https://patchwork.kernel.org/project/xen-devel/cover/20210624155526.2775863-1-tientzu@chromium.org/
> 
>     Devices without an IOMMU (traditional virtio devices for example) would
>     allocate from a specially designated region.  Swiotlb bouncing is done for
>     all DMA transfers.  This is controlled by the VIRTIO_F_ACCESS_PLATFORM
>     feature bit.
> 
>     https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/3064198
> 
>     This mechanism works great for the devices it was designed for, such as
>     virtio-net.  However, when trying to adapt it to other devices, there
>     are some limitations.
> 
>     It would be great to have a dedicated heap for virtio-gpu rather than
>     allocating from guest memory.  
> 
>     We would like to use dma_alloc_noncontiguous on the restricted DMA pool,
>     ideally with page-level granularity somehow.  Contiguous buffers are
>     definitely going out of fashion.
> 
>     There are two considerations when using it with the restricted DMA
>     approach:
> 
>     1) No bouncing (aka memcpy)
> 
>     This is expensive for graphics buffers, since guest user space shares
>     these buffers directly with the host.  We plan to use
>     DMA_ATTR_SKIP_CPU_SYNC when doing any DMA transactions with GPU buffers.
> 
>     Bounce buffering will still be used for virtio commands, like the other
>     virtio devices that use the restricted DMA mechanism.
> 
>     2) IO_TLB_SEGSIZE is too small for graphics buffers
> 
>     This issue was hit before here too:
> 
>     https://www.spinics.net/lists/kernel/msg4154086.html
> 
>     The suggestion was to use shared-dma-pool rather than restricted DMA.  But
>     we're not sure a single device can have restricted DMA (for
>     VIRTIO_F_ACCESS_PLATFORM) and shared-dma-pool (for larger buffers) at the
>     same time.  Does anyone know? 
> 
>     If not, it sounds like "splitting the allocation into
>     dma_max_mapping_size() chunks" for restricted DMA is also possible.
>     What is the preferred method?
> 
>     More generally, we would love more feedback on the proposed design or
>     consider alternatives!
> 

