From mboxrd@z Thu Jan 1 00:00:00 1970
From: bugzilla-daemon@freedesktop.org
Subject: [Bug 101572] glMemoryBarrier is backwards
Date: Sat, 24 Jun 2017 04:54:33 +0000
Errors-To: dri-devel-bounces@lists.freedesktop.org
Sender: "dri-devel"
To: dri-devel@lists.freedesktop.org
List-Id: dri-devel@lists.freedesktop.org
X-Bugzilla-URL: http://bugs.freedesktop.org/
Auto-Submitted: auto-generated
https://bugs.freedesktop.org/show_bug.cgi?id=101572
Bug ID: 101572
Summary: glMemoryBarrier is backwards
Product: Mesa
Version: git
Hardware: All
OS: All
Status: NEW
Severity: major
Priority: medium
Component: Drivers/Gallium/radeonsi
Assignee: dri-devel@lists.freedesktop.org
Reporter: dark_sylinc@yahoo.com.ar
QA Contact: dri-devel@lists.freedesktop.org
This bug may not just be in radeonsi.
I noticed the error after seeing my Compute Shaders, which read from an input
FBO (bound as a texture), produce output with missing draws.
According to the spec, the bits passed to glMemoryBarrier reflect how the
buffer will be *used afterwards*.
So for example, if I have a Compute Shader that writes to an SSBO and this
buffer is later used as an index buffer, I should call:
glMemoryBarrier( GL_ELEMENT_ARRAY_BARRIER_BIT );
because I will be using this buffer later on as an index buffer.
However it appears Mesa expects me to call instead:
glMemoryBarrier( GL_SHADER_STORAGE_BARRIER_BIT );
because I am writing to this buffer as an SSBO before the barrier.
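To make the disagreement concrete, here is a minimal sketch of the call
sequence in question (the dispatch size, index count and buffer handle are
illustrative placeholders):

  // Compute pass writes indices into `ssbo` (bound as an SSBO).
  glDispatchCompute( groupsX, 1, 1 );

  // Spec reading: the bit describes the *next* use of the buffer.
  glMemoryBarrier( GL_ELEMENT_ARRAY_BARRIER_BIT );
  // Mesa only seems to synchronize with the bit for the *previous* use:
  // glMemoryBarrier( GL_SHADER_STORAGE_BARRIER_BIT );

  glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ssbo );
  glDrawElements( GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0 );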
The problem I encountered specifically is that I was drawing to an FBO, and
later on this FBO is used as a regular texture (sampler2D) in a compute
shader.
According to the spec, I should call:
glMemoryBarrier( GL_TEXTURE_FETCH_BARRIER_BIT );
However Mesa does not produce correct output unless I do:
glMemoryBarrier( GL_FRAMEBUFFER_BARRIER_BIT );
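The FBO case, again abridged (fbo, colourTex and the draw/dispatch sizes are
placeholders):

  // Render pass writes to the FBO's colour attachment.
  glBindFramebuffer( GL_FRAMEBUFFER, fbo );
  glDrawArrays( GL_TRIANGLES, 0, vertexCount );
  glBindFramebuffer( GL_FRAMEBUFFER, 0 );

  // Spec reading: the next use is a texture fetch in the compute shader.
  glMemoryBarrier( GL_TEXTURE_FETCH_BARRIER_BIT );
  // What Mesa actually appears to require (the previous use, an FB write):
  // glMemoryBarrier( GL_FRAMEBUFFER_BARRIER_BIT );

  glBindTexture( GL_TEXTURE_2D, colourTex );   // sampled as a sampler2D
  glDispatchCompute( groupsX, groupsY, 1 );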
I had to re-read the spec several times and I was left wondering if I was the
one who got it backwards. After all, the language in which it is written is
very confusing, without a single example; however, I then found:
http://malideveloper.arm.com/sample-code/introduction-compute-shaders-2/
which says:
"It=E2=80=99s important to remember the semantics of glMemoryBarrier(). As =
argument it
takes a bitfield of various buffer types. We specify how we will read data
after the memory barrier. In this case, we are writing the buffer via an SS=
BO,
but we=E2=80=99re reading it when we=E2=80=99re using it as a vertex buffer=
, hence
GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT."
I then consulted OpenGL SuperBible and the Programming Guide, and they both
agree:
"glMemoryBarrier(GL_ATOMIC_COUNTER_BARRIER_BIT);
will ensure that any access to an atomic counter in a buffer object
will reflect updates to that buffer by a shader. You should call
glMemoryBarrier() with the GL_ATOMIC_COUNTER_BARRIER_BIT set when
something has written to a buffer that you want to see reflected in the
values of your atomic counters. If you update the values in a buffer using
an atomic counter and then use that buffer for something else, the bit you
include in the barriers parameter to glMemoryBarrier() should
correspond to what you want that buffer to be used for, which will not
necessarily include GL_ATOMIC_COUNTER_BARRIER_BIT." (from OpenGL SuperBible)
"GL_TEXTURE_FETCH_BARRIER_BIT specifies that any fetch from a
texture issued after the barrier should reflect data written to the texture
by commands issued before the barrier.
(...)
GL_FRAMEBUFFER_BARRIER_BIT specifies that reads or writes through
framebuffer attachments issued after the barrier will reflect data written
to those attachments by shaders executed before the barrier. Further,
writes to framebuffers issued after the barrier will be ordered with
respect to writes performed by shaders before the barrier." (from OpenGL
Programming Manual).
It appears state_tracker/st_cb_texturebarrier.c also contains this bug,
because it ignores GL_TEXTURE_UPDATE_BARRIER_BIT & GL_BUFFER_UPDATE_BARRIER_BIT:
it assumes writes through texture/buffer updates will be done through Mesa
functions, which are always synchronized, instead of synchronizing the reads
and writes that come after the barrier.
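For instance, under the spec's "next use" reading, a sequence like the
following (purely illustrative; `tex`, `pixels` and the dispatch size are
placeholders) would need that bit to actually synchronize something:

  // Compute shader writes to `tex` via imageStore().
  glDispatchCompute( groupsX, groupsY, 1 );

  // Next use is a read of the texture through a GL command, so the spec
  // reading calls for GL_TEXTURE_UPDATE_BARRIER_BIT...
  glMemoryBarrier( GL_TEXTURE_UPDATE_BARRIER_BIT );

  // ...which the state tracker currently ignores.
  glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels );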
This doesn't sound like it has a trivial fix. TBH I have no problem with
supporting a glMemoryBarrierMESA( writesToFlushBeforeBarrier,
readsToFlushAfterBarrier ) which would behave more sanely and the way you'd
want (and I get the fastest path); you could then just work around the standard
glMemoryBarrier by issuing a barrier for all bits.
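If it helps, a rough sketch of what such a hypothetical entry point could look
like (glMemoryBarrierMESA does not exist today; the prototype and the layering
note are only a suggestion):

  /* Hypothetical extension: the caller states both sides explicitly. */
  void glMemoryBarrierMESA( GLbitfield writesToFlushBeforeBarrier,
                            GLbitfield readsToFlushAfterBarrier );

  /* The SSBO-write -> index-buffer-read case from above would become: */
  glMemoryBarrierMESA( GL_SHADER_STORAGE_BARRIER_BIT,
                       GL_ELEMENT_ARRAY_BARRIER_BIT );

  /* The standard glMemoryBarrier could then be layered on top by passing
     GL_ALL_BARRIER_BITS (or the app's mask) on both sides, trading
     precision for correctness. */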
-- 
You are receiving this mail because:
You are the assignee for the bug.