From: Jason Ekstrand
Subject: Re: [rfc] drm sync objects (search for spock)
Date: Tue, 9 May 2017 15:23:02 -0700
To: Christian König
Cc: dri-devel, amd-gfx mailing list


On Wed, Apr 26, 2017 at 7:57 AM, Christian König <deathsimple@vodafone.de> wrote:
> On 26.04.2017 at 11:57, Dave Airlie wrote:
>> On 26 April 2017 at 18:45, Christian König <deathsimple@vodafone.de> wrote:
>>> On 26.04.2017 at 05:28, Dave Airlie wrote:
>>>> Okay I've gone around the sun with these a few times, and
>>>> pretty much implemented what I said last week.
>>>>
>>>> This is pretty much a complete revamp.
>>>>
>>>> 1. sync objects are self contained drm objects, they
>>>> have a file reference so can be passed between processes.
>>>>
>>>> 2. Added a sync object wait interface modelled on the vulkan
>>>> fence waiting API.
>>>>
>>>> 3. sync_file interaction is explicitly different than
>>>> opaque fd passing, you import a sync file state into an
>>>> existing syncobj, or create a new sync_file from an
>>>> existing syncobj. This means no touching the sync file code
>>>> at all. \o/
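
As an illustration of points 1 and 2 above, the userspace flow might look
roughly like the sketch below. It is written against the drmSyncobj* wrapper
names that later landed in libdrm; the exact names, signatures and wait flag
used here are assumptions made for the sketch, not something defined by this
RFC.

/* Hedged sketch of the syncobj flow described in points 1 and 2; the
 * drmSyncobj* wrappers and the wait flag are assumed, error handling
 * trimmed. */
#include <stdint.h>
#include <xf86drm.h>

/* Process A: create a syncobj and turn it into an opaque fd that can be
 * handed to another process (point 1). */
int create_and_export(int drm_fd, uint32_t *handle)
{
    int obj_fd;

    if (drmSyncobjCreate(drm_fd, 0, handle))
        return -1;
    if (drmSyncobjHandleToFD(drm_fd, *handle, &obj_fd))
        return -1;
    return obj_fd;              /* send this fd to the other process */
}

/* Process B: import the opaque fd and wait on it, Vulkan-fence style
 * (point 2). */
int import_and_wait(int drm_fd, int obj_fd, int64_t timeout_ns)
{
    uint32_t handle;

    if (drmSyncobjFDToHandle(drm_fd, obj_fd, &handle))
        return -1;
    return drmSyncobjWait(drm_fd, &handle, 1, timeout_ns,
                          DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL, NULL);
}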

I've done a quick scan over the patches and I like the API. It almost looks
as if Santa read my wish list. :-)

That said, it was a "quick scan" so don't take this as any sort of actual
code review. It'll probably be a little while before I get a chance to sit
down and look at i915 again but things seem reasonable.

>>> Doesn't that break the Vulkan requirement that if a sync_obj is exported to
>>> an fd and imported on the other side we get the same sync_obj again?
>>>
>>> In other words, the fd is exported and imported only once, and the expectation
>>> is that the fence it contains will change on each signaling operation.
>>>
>>> As far as I can see, with the current implementation you get two sync_objects
>>> in the exporting process and one in the importing one.
>> If you are talking about using sync_file interop for vulkan, then yes,
>> that would be wrong.
>>
>> But that isn't how this works; see 1. above: the sync obj has a 1:1
>> mapping with an opaque fd (non-sync_file) that is only used for
>> interprocess passing of the syncobj.
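
As a side note on the "passed between processes" part: the opaque syncobj fd
would travel the usual way, e.g. over a Unix-domain socket with SCM_RIGHTS.
A minimal sketch under that assumption (plain POSIX, nothing syncobj-specific;
sock is assumed to be a connected AF_UNIX socket):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one fd (e.g. the opaque syncobj fd) to the peer; standard
 * SCM_RIGHTS fd passing, error handling trimmed. */
static int send_fd(int sock, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = u.buf,
        .msg_controllen = sizeof(u.buf),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}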

> Ah, ok, that makes some more sense.

>> You can interoperate with sync_files by importing/exporting the
>> syncobj fence into and out of them, but those aren't meant for passing
>> the fds around; they are more for where you get a sync_file fd from
>> somewhere else and you want to use it, and vice-versa.
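
A sketch of that interop direction, again assuming the sync_file helpers that
later landed in libdrm (names and signatures here are illustrative, not part
of this RFC):

#include <stdint.h>
#include <xf86drm.h>

/* Make the syncobj carry the fence from a sync_file fd obtained elsewhere. */
int syncobj_adopt_sync_file(int drm_fd, uint32_t syncobj, int sync_file_fd)
{
    return drmSyncobjImportSyncFile(drm_fd, syncobj, sync_file_fd);
}

/* Wrap the syncobj's current fence in a brand-new sync_file fd for a
 * consumer that only understands sync_files. */
int syncobj_to_sync_file(int drm_fd, uint32_t syncobj, int *sync_file_fd)
{
    return drmSyncobjExportSyncFile(drm_fd, syncobj, sync_file_fd);
}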

> I would prefer dealing only with one type of fd in userspace, but that
> approach should work as well.

>> I'd like to move any drm APIs away from sync_file to avoid the fd wastage,

> That sounds like it makes sense, but I would still rather vote for extending
> the sync_file interface than deprecating it with another one.

>> I'd also like to move the driver specific fences to syncobj where I can.

> I'm pretty sure that is not a good idea.

> Beating the performance of the sequence-based fence values we currently use
> for CPU sync would be pretty hard. E.g. at least on amdgpu we can even check
> those by doing a simple 64-bit read from a memory address, without an IOCTL
> or any other overhead involved.
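
A rough illustration of that kind of sequence-value check is sketched below;
it is not actual amdgpu code, and every name in it is made up.

#include <stdbool.h>
#include <stdint.h>

struct seq_fence {
    uint64_t seqno;                  /* value this fence waits for */
    const volatile uint64_t *wptr;   /* GPU-updated completion counter */
};

/* Signaled once the counter written by the GPU has reached the fence's
 * sequence number: a single 64-bit read, no ioctl. (A real driver would
 * also need memory barriers and wraparound handling.) */
static bool seq_fence_signaled(const struct seq_fence *f)
{
    return *f->wptr >= f->seqno;
}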

> Regards,
> Christian.



>> Dave.

