public inbox for intel-gfx@lists.freedesktop.org
From: Daniel Vetter <daniel@ffwll.ch>
To: "Volkin, Bradley D" <bradley.d.volkin@intel.com>
Cc: "intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>
Subject: Re: [PATCH 2/2] drm/i915: Do not force non-caching copies for pwrite along shmem path
Date: Sat, 8 Mar 2014 00:03:48 +0100	[thread overview]
Message-ID: <20140307230348.GH25837@phenom.ffwll.local> (raw)
In-Reply-To: <20140307181458.GB7286@bdvolkin-ubuntu-desktop>

On Fri, Mar 07, 2014 at 10:14:58AM -0800, Volkin, Bradley D wrote:
> Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
> 
> On Fri, Mar 07, 2014 at 12:30:37AM -0800, Chris Wilson wrote:
> > We don't always want to write into main memory with pwrite. The shmem
> > fast path in particular is used for memory that is cacheable - under
> > such circumstances, forcing cache eviction is undesirable. As we will
> > always flush the cache when targeting incoherent buffers, we can rely
> > on that second pass to apply the cache coherency rules, and so benefit
> > from in-cache copies otherwise.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

Both patches merged, thanks.
-Daniel

> > ---
> >  drivers/gpu/drm/i915/i915_gem.c | 5 ++---
> >  1 file changed, 2 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 877afb2c576d..e0ca6d6be2ae 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -810,9 +810,8 @@ shmem_pwrite_fast(struct page *page, int shmem_page_offset, int page_length,
> >  	if (needs_clflush_before)
> >  		drm_clflush_virt_range(vaddr + shmem_page_offset,
> >  				       page_length);
> > -	ret = __copy_from_user_inatomic_nocache(vaddr + shmem_page_offset,
> > -						user_data,
> > -						page_length);
> > +	ret = __copy_from_user_inatomic(vaddr + shmem_page_offset,
> > +					user_data, page_length);
> >  	if (needs_clflush_after)
> >  		drm_clflush_virt_range(vaddr + shmem_page_offset,
> >  				       page_length);
> > -- 
> > 1.9.0
> > 
> > _______________________________________________
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > http://lists.freedesktop.org/mailman/listinfo/intel-gfx

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

Thread overview: 7+ messages
2014-03-07  8:30 [PATCH 1/2] drm/i915: Process page flags once rather than per pwrite/pread Chris Wilson
2014-03-07  8:30 ` [PATCH 2/2] drm/i915: Do not force non-caching copies for pwrite along shmem path Chris Wilson
2014-03-07  8:39   ` Daniel Vetter
2014-03-07  9:50     ` Chris Wilson
2014-03-07 18:14   ` Volkin, Bradley D
2014-03-07 23:03     ` Daniel Vetter [this message]
2014-03-07 18:14 ` [PATCH 1/2] drm/i915: Process page flags once rather than per pwrite/pread Volkin, Bradley D
