From: Ben Widawsky
Subject: Re: [PATCH] drm/i915: Do not unmap object unless no other VMAs reference it
Date: Thu, 17 Jul 2014 16:48:08 -0700
Message-ID: <20140717234808.GC4545@bwidawsk.net>
References: <1405099208-4085-1-git-send-email-armin.c.reese@intel.com> <20140717234502.GB4545@bwidawsk.net>
In-Reply-To: <20140717234502.GB4545@bwidawsk.net>
To: armin.c.reese@intel.com
Cc: intel-gfx@lists.freedesktop.org
List-Id: intel-gfx@lists.freedesktop.org

On Thu, Jul 17, 2014 at 04:45:02PM -0700, Ben Widawsky wrote:
> On Fri, Jul 11, 2014 at 10:20:07AM -0700, armin.c.reese@intel.com wrote:
> > From: Armin Reese <armin.c.reese@intel.com>
> > 
> > Signed-off-by: Armin Reese <armin.c.reese@intel.com>
> > ---
> >  drivers/gpu/drm/i915/i915_gem.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 87d0aac..676e5f4 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2922,8 +2922,6 @@ int i915_vma_unbind(struct i915_vma *vma)
> >  
> >  	vma->unbind_vma(vma);
> >  
> > -	i915_gem_gtt_finish_object(obj);
> > -
> >  	list_del_init(&vma->mm_list);
> >  	/* Avoid an unnecessary call to unbind on rebind. */
> >  	if (i915_is_ggtt(vma->vm))
> > @@ -2934,8 +2932,10 @@ int i915_vma_unbind(struct i915_vma *vma)
> >  
> >  	/* Since the unbound list is global, only move to that list if
> >  	 * no more VMAs exist.
> >  	 */
> > -	if (list_empty(&obj->vma_list))
> > +	if (list_empty(&obj->vma_list)) {
> > +		i915_gem_gtt_finish_object(obj);
> >  		list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
> > +	}
> >  
> >  	/* And finally now the object is completely decoupled from this vma,
> >  	 * we can drop its hold on the backing storage and allow it to be
> > 
> 
> This patch is a fix, and therefore needs a commit message to explain the
> bug that you're trying to fix. I will help.
> "
> When using an IOMMU, GEM objects are mapped by their DMA address, as the
> physical address is unknown. This depends on the underlying IOMMU
> driver to map and unmap the physical pages properly, as defined in
> intel_iommu.c.
> 
> The current code tells the IOMMU to unmap the GEM BO's pages on the
> destruction of the first VMA that "maps" that BO. This is clearly wrong,
> as there may be other VMAs "mapping" that BO (e.g. via flink). The
> scanout is one such example.
> 
> The patch fixes this issue by only unmapping the DMA mappings when there
> are no more VMAs mapping that object. This is equivalent to the point at
> which the object is considered unbound, as can be seen in the code. On
> the first VMA that again becomes bound, we will remap.
> 
> An alternate solution would be to move the DMA mapping to object
> creation and destruction. I am not sure if this is considered an
> unfriendly thing to do.
> 
> Some notes to backporters:
> The bug can never be hit without enabling the IOMMU. The existing code
> will also do the right thing when the object is shared via dmabuf. The
> failure should be demonstrable with flink. In cases when not using
> intel_iommu_strict, it is likely (likely, as defined by: off the top of
> my head) that current workloads will *not* hit this bug, since we often
> tear down all VMAs for an object shared across multiple VMs. We also
> finish access to that object before the first dma_unmapping.
> intel_iommu_strict with flinked buffers is likely to hit this issue.
> "

Crap.
I left out the important part: the notes to backporters apply to anyone
trying to backport full PPGTT, which is disabled by default on all
kernels to date.

-- 
Ben Widawsky, Intel Open Source Technology Center