From mboxrd@z Thu Jan 1 00:00:00 1970
From: Daniel Vetter
To: Chris Wilson
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 2/2] drm/i915: Do not force non-caching copies for pwrite along shmem path
Date: Fri, 7 Mar 2014 09:39:44 +0100
Message-ID: <20140307083944.GC25837@phenom.ffwll.local>
In-Reply-To: <1394181037-30480-2-git-send-email-chris@chris-wilson.co.uk>
References: <1394181037-30480-1-git-send-email-chris@chris-wilson.co.uk> <1394181037-30480-2-git-send-email-chris@chris-wilson.co.uk>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
List-Id: intel-gfx@lists.freedesktop.org

On Fri, Mar 07, 2014 at 08:30:37AM +0000, Chris Wilson wrote:
> We don't always want to write into main memory with pwrite. The shmem
> fast path in particular is used for memory that is cacheable - under
> such circumstances forcing the cache eviction is undesirable. As we will
> always flush the cache when targeting incoherent buffers, we can rely on
> that second pass to apply the cache coherency rules and so benefit from
> in-cache copies otherwise.
>
> Signed-off-by: Chris Wilson

Do you have some numbers on this? Looks good otherwise.
-Daniel

> ---
>  drivers/gpu/drm/i915/i915_gem.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 877afb2c576d..e0ca6d6be2ae 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -810,9 +810,8 @@ shmem_pwrite_fast(struct page *page, int shmem_page_offset, int page_length,
>  	if (needs_clflush_before)
>  		drm_clflush_virt_range(vaddr + shmem_page_offset,
>  				       page_length);
> -	ret = __copy_from_user_inatomic_nocache(vaddr + shmem_page_offset,
> -						user_data,
> -						page_length);
> +	ret = __copy_from_user_inatomic(vaddr + shmem_page_offset,
> +					user_data, page_length);
>  	if (needs_clflush_after)
>  		drm_clflush_virt_range(vaddr + shmem_page_offset,
>  				       page_length);
> --
> 1.9.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch