public inbox for intel-gfx@lists.freedesktop.org
From: Chris Wilson <chris@chris-wilson.co.uk>
To: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 3/4] drm/i915: Insert a full mb() before reading the seqno from the status page
Date: Sat, 19 Jan 2013 12:02:20 +0000	[thread overview]
Message-ID: <84c8a8$7dvdg8@orsmga001.jf.intel.com> (raw)
In-Reply-To: <20121019135249.7037902c@jbarnes-desktop>

On Fri, 19 Oct 2012 13:52:49 -0700, Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
> On Fri, 19 Oct 2012 21:40:17 +0100
> Chris Wilson <chris@chris-wilson.co.uk> wrote:
> 
> > On Thu, 11 Oct 2012 12:46:00 -0700, Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
> > > On Tue,  9 Oct 2012 19:24:39 +0100
> > > Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > 
> > > > Hopefully this will reduce a few of the missed IRQ warnings.
> > > > 
> > > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > > > ---
> > > >  drivers/gpu/drm/i915/intel_ringbuffer.c |    8 +++++++-
> > > >  drivers/gpu/drm/i915/intel_ringbuffer.h |    2 --
> > > >  2 files changed, 7 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> > > > index e069e69..133beb6 100644
> > > > --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> > > > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> > > > @@ -704,14 +704,18 @@ gen6_ring_get_seqno(struct intel_ring_buffer *ring, bool lazy_coherency)
> > > >  	/* Workaround to force correct ordering between irq and seqno writes on
> > > >  	 * ivb (and maybe also on snb) by reading from a CS register (like
> > > >  	 * ACTHD) before reading the status page. */
> > > > -	if (!lazy_coherency)
> > > > +	if (!lazy_coherency) {
> > > >  		intel_ring_get_active_head(ring);
> > > > +		mb();
> > > > +	}
> > > >  	return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
> > > >  }
> > > >  
> > > >  static u32
> > > >  ring_get_seqno(struct intel_ring_buffer *ring, bool lazy_coherency)
> > > >  {
> > > > +	if (!lazy_coherency)
> > > > +		mb();
> > > >  	return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
> > > >  }
> > > >  
> > > > @@ -719,6 +723,8 @@ static u32
> > > >  pc_render_get_seqno(struct intel_ring_buffer *ring, bool lazy_coherency)
> > > >  {
> > > >  	struct pipe_control *pc = ring->private;
> > > > +	if (!lazy_coherency)
> > > > +		mb();
> > > >  	return pc->cpu_page[0];
> > > >  }
> > > >  
> > > > diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> > > > index 2ea7a31..40b252e 100644
> > > > --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> > > > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> > > > @@ -160,8 +160,6 @@ static inline u32
> > > >  intel_read_status_page(struct intel_ring_buffer *ring,
> > > >  		       int reg)
> > > >  {
> > > > -	/* Ensure that the compiler doesn't optimize away the load. */
> > > > -	barrier();
> > > >  	return ring->status_page.page_addr[reg];
> > > >  }
> > > >  
> > > 
> > > This looks a bit more voodoo-y.  Theoretically an mb() on the CPU side
> > > should have nothing to do with what the GPU just wrote to the status
> > > page.  It'll slow down the read a bit but shouldn't affect coherence at
> > > all...  An MMIO read from the GPU otoh should flush any stubborn DMA
> > > buffers.
> > 
> > Absolutely convinced? Aren't we here more worried about the view of the
> > shared cache from any particular core and so need to treat this as an
> > SMP programming problem, in which case we do need to worry about memory
> > barriers around dependent reads and writes between processors?
> > 
> > But it is definitely more voodoo...
> 
> If it's an SMP issue, barriers won't help, we need actual
> synchronization in the form of locks or something.

Glad you agree. How are locks implemented? :)

Irrespective of the contentious patches Daniel thought would be a good
idea, we need the first two to fix the mb() around fences. Poke.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre


Thread overview: 14+ messages
     [not found] <6c3329lntgg@orsmga002.jf.intel.com>
2012-10-09 18:24 ` [PATCH 1/4] drm/i915: Only insert the mb() before updating the fence parameter Chris Wilson
2012-10-09 18:24   ` [PATCH 2/4] drm/i915: Only apply the mb() when flushing the GTT domain during a finish Chris Wilson
2012-10-11 19:43     ` Jesse Barnes
2013-01-19 13:40       ` Daniel Vetter
2012-10-09 18:24   ` [PATCH 3/4] drm/i915: Insert a full mb() before reading the seqno from the status page Chris Wilson
2012-10-11 19:46     ` Jesse Barnes
2012-10-19 20:40       ` Chris Wilson
2012-10-19 20:52         ` Jesse Barnes
2013-01-19 12:02           ` Chris Wilson [this message]
2012-10-09 18:24   ` [PATCH 4/4] drm/i915: Review the memory barriers around CPU access to buffers Chris Wilson
2012-10-11 19:52     ` Jesse Barnes
2012-10-19 20:48       ` Chris Wilson
2012-10-11 20:46     ` Daniel Vetter
2012-10-11 19:41   ` [PATCH 1/4] drm/i915: Only insert the mb() before updating the fence parameter Jesse Barnes
