From: Oded Gabbay <oded.gabbay@gmail.com>
To: "Oded Gabbay" <ogabbay@kernel.org>,
"Linux-Kernel@Vger. Kernel. Org" <linux-kernel@vger.kernel.org>,
"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
"Sumit Semwal" <sumit.semwal@linaro.org>,
"Christian König" <christian.koenig@amd.com>,
"Gal Pressman" <galpress@amazon.com>,
sleybo@amazon.com,
"Maling list - DRI developers" <dri-devel@lists.freedesktop.org>,
"Jason Gunthorpe" <jgg@ziepe.ca>,
linux-rdma <linux-rdma@vger.kernel.org>,
"Linux Media Mailing List" <linux-media@vger.kernel.org>,
"Doug Ledford" <dledford@redhat.com>,
"Dave Airlie" <airlied@gmail.com>,
"Alex Deucher" <alexander.deucher@amd.com>,
"Leon Romanovsky" <leonro@nvidia.com>,
"Christoph Hellwig" <hch@lst.de>,
"amd-gfx list" <amd-gfx@lists.freedesktop.org>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@lists.linaro.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: Re: [PATCH v4 0/2] Add p2p via dmabuf to habanalabs
Date: Tue, 6 Jul 2021 13:03:20 +0300 [thread overview]
Message-ID: <CAFCwf10_rTYL2Fy6tCRVAUCf4-6_TtcWCv5gEEkGnQ0KxqMUBg@mail.gmail.com> (raw)
In-Reply-To: <YOQXBWpo3whVjOyh@phenom.ffwll.local>
On Tue, Jul 6, 2021 at 11:40 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Mon, Jul 05, 2021 at 04:03:12PM +0300, Oded Gabbay wrote:
> > Hi,
> > I'm sending v4 of this patch set following the long email thread.
> > I want to thank Jason for reviewing v3 and pointing out the errors,
> > saving us debugging time later :)
> >
> > I consulted with Christian on how to fix patch 2 (the implementation) and
> > at the end of the day I shamelessly copied the relevant content from
> > amdgpu_vram_mgr_alloc_sgt() and amdgpu_dma_buf_attach(), regarding the
> > usage of dma_map_resource() and pci_p2pdma_distance_many(), respectively.
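(For readers who don't want to dig into patch 2: an exporter of device-local
BAR memory typically uses those two helpers along the lines of the sketch
below. The hl_dmabuf_priv struct, its fields and the hl_* function names are
made up for illustration; this is not the actual patch code.)

    #include <linux/dma-buf.h>
    #include <linux/dma-mapping.h>
    #include <linux/err.h>
    #include <linux/pci-p2pdma.h>
    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    /* Illustrative private data attached to the exported dma-buf */
    struct hl_dmabuf_priv {
            struct pci_dev *pdev;           /* exporting device's PCI device */
            phys_addr_t bar_phys_addr;      /* BAR address of the exported memory */
            size_t size;
    };

    static int hl_dmabuf_attach(struct dma_buf *dmabuf,
                                struct dma_buf_attachment *attach)
    {
            struct hl_dmabuf_priv *priv = dmabuf->priv;

            /* Clear peer2peer if the PCI topology cannot route traffic
             * between the exporter and the importer (attach->dev).
             */
            if (pci_p2pdma_distance_many(priv->pdev, &attach->dev, 1, true) < 0)
                    attach->peer2peer = false;

            return 0;
    }

    static struct sg_table *hl_dmabuf_map(struct dma_buf_attachment *attach,
                                          enum dma_data_direction dir)
    {
            struct hl_dmabuf_priv *priv = attach->dmabuf->priv;
            struct sg_table *sgt;
            dma_addr_t addr;
            int rc;

            sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
            if (!sgt)
                    return ERR_PTR(-ENOMEM);

            rc = sg_alloc_table(sgt, 1, GFP_KERNEL);
            if (rc)
                    goto free_sgt;

            /* The exported memory is a BAR range with no struct pages behind
             * it, so map it with dma_map_resource() for the importer's device.
             */
            addr = dma_map_resource(attach->dev, priv->bar_phys_addr,
                                    priv->size, dir, DMA_ATTR_SKIP_CPU_SYNC);
            rc = dma_mapping_error(attach->dev, addr);
            if (rc)
                    goto free_table;

            sg_set_page(sgt->sgl, NULL, priv->size, 0);
            sg_dma_address(sgt->sgl) = addr;
            sg_dma_len(sgt->sgl) = priv->size;

            return sgt;

    free_table:
            sg_free_table(sgt);
    free_sgt:
            kfree(sgt);
            return ERR_PTR(rc);
    }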
> >
> > I also made a few improvements after looking at the relevant code in amdgpu.
> > The details are in the changelog of patch 2.
> >
> > I took the time to write import code for the driver, allowing me to
> > check real P2P with two Gaudi devices, one as the exporter and the other
> > as the importer. I'm not going to include the import code in the product;
> > it was just for testing purposes (although I can share it if anyone wants).
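(The import side of such a test goes through the standard dma-buf importer
entry points; a rough sketch, again with a made-up hl_* name rather than the
actual test code, looks like this:)

    #include <linux/dma-buf.h>
    #include <linux/dma-mapping.h>
    #include <linux/err.h>
    #include <linux/scatterlist.h>

    /* Illustrative import helper: 'dev' is the importing Gaudi's struct
     * device, 'fd' is the dma-buf FD obtained from the exporting Gaudi.
     */
    static int hl_test_import(struct device *dev, int fd, dma_addr_t *dma_addr)
    {
            struct dma_buf *dmabuf;
            struct dma_buf_attachment *attach;
            struct sg_table *sgt;
            int rc;

            dmabuf = dma_buf_get(fd);
            if (IS_ERR(dmabuf))
                    return PTR_ERR(dmabuf);

            attach = dma_buf_attach(dmabuf, dev);
            if (IS_ERR(attach)) {
                    rc = PTR_ERR(attach);
                    goto put_buf;
            }

            sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
            if (IS_ERR(sgt)) {
                    rc = PTR_ERR(sgt);
                    goto detach;
            }

            /* Bus address to program into the importer's DMA engine / MMU.
             * Teardown after the test is the mirror image:
             * dma_buf_unmap_attachment(), dma_buf_detach(), dma_buf_put().
             */
            *dma_addr = sg_dma_address(sgt->sgl);
            return 0;

    detach:
            dma_buf_detach(dmabuf, attach);
    put_buf:
            dma_buf_put(dmabuf);
            return rc;
    }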
> >
> > I ran it in a bare-metal environment with the IOMMU enabled, on a Skylake
> > CPU with a whitelisted PCIe bridge (to make pci_p2pdma_distance_many() happy).
> >
> > Greg, I hope this will be good enough for you to merge this code.
>
> So we're officially going to use dri-devel for technical details review
> and then Greg for merging so we don't have to deal with other merge
> criteria dri-devel folks have?
I'm glad to receive any help or review, regardless of which subsystem
the reviewer belongs to.
>
> I don't expect anything less by now, but it does make the original claim
> that drivers/misc will not step all over accelerator folks a complete
> farce under the totally-not-a-gpu banner.
>
> This essentially means that for any other accelerator stack that doesn't
> fit the dri-devel merge criteria, even if it's acting like a gpu and uses
> other gpu driver stuff, you can just send it to Greg and it's good to go.
What's wrong with Greg ??? ;)
On a more serious note, yes, I do think the dri-devel merge criteria
are very extreme and effectively drive out many AI accelerator
companies that want to contribute to the kernel but can't/won't open
up their software IP and patents.
I think the expectation that AI startups (who make up 90% of the deep
learning field) will cooperate outside of company boundaries is not
realistic, especially on the user side, where the company's real IP
resides.
Personally, I don't think there is a real justification for that at
this point in time, but if it will make you (and other people here)
happy, I really don't mind creating a non-gpu accelerator subsystem
that will contain all the totally-not-a-gpu accelerators and will
have more relaxed criteria for upstreaming. Something along the lines
of an "rdma-core"-style library looks like the right amount of
user-level open source.
The question is, what will happen later? Will it be sufficient to
"allow" us to use dmabuf, and maybe other gpu stuff (e.g. HMM), in
the future?
If the community and the dri-devel maintainers (you among them) assure
me it is good enough, then I'll happily contribute my work and personal
time to organizing this effort and implementing it.
Thanks,
oded
>
> There's quite a lot of these floating around actually (and many do have
> semi-open runtimes, like habanalabs have now too, just not open enough to
> be actually useful). It's going to be absolutely lovely having to explain
> to these companies in background chats why habanalabs gets away with their
> stack and they don't.
>
> Or maybe we should just merge them all and give up on the idea of having
> open cross-vendor driver stacks for these accelerators.
>
> Thanks, Daniel
>
> >
> > Thanks,
> > Oded
> >
> > Oded Gabbay (1):
> > habanalabs: define uAPI to export FD for DMA-BUF
> >
> > Tomer Tayar (1):
> > habanalabs: add support for dma-buf exporter
> >
> > drivers/misc/habanalabs/Kconfig | 1 +
> > drivers/misc/habanalabs/common/habanalabs.h | 26 ++
> > drivers/misc/habanalabs/common/memory.c | 480 +++++++++++++++++++-
> > drivers/misc/habanalabs/gaudi/gaudi.c | 1 +
> > drivers/misc/habanalabs/goya/goya.c | 1 +
> > include/uapi/misc/habanalabs.h | 28 +-
> > 6 files changed, 532 insertions(+), 5 deletions(-)
> >
> > --
> > 2.25.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
Thread overview: 38+ messages
2021-07-05 13:03 [PATCH v4 0/2] Add p2p via dmabuf to habanalabs Oded Gabbay
2021-07-05 13:03 ` [PATCH v4 1/2] habanalabs: define uAPI to export FD for DMA-BUF Oded Gabbay
2021-07-05 13:03 ` [PATCH v4 2/2] habanalabs: add support for dma-buf exporter Oded Gabbay
2021-07-05 16:52 ` Jason Gunthorpe
2021-07-06 9:44 ` Oded Gabbay
2021-07-06 13:54 ` Jason Gunthorpe
2021-07-06 8:40 ` [PATCH v4 0/2] Add p2p via dmabuf to habanalabs Daniel Vetter
2021-07-06 10:03 ` Oded Gabbay [this message]
2021-07-06 10:36 ` Daniel Vetter
2021-07-06 10:47 ` Daniel Vetter
2021-07-06 12:07 ` Daniel Vetter
2021-07-06 13:44 ` Jason Gunthorpe
2021-07-06 14:09 ` Daniel Vetter
2021-07-06 14:56 ` Jason Gunthorpe
2021-07-06 15:52 ` Daniel Vetter
2021-07-06 12:23 ` Christoph Hellwig
2021-07-06 14:23 ` Jason Gunthorpe
2021-07-06 14:39 ` Daniel Vetter
2021-07-06 15:25 ` Jason Gunthorpe
2021-07-06 15:49 ` Daniel Vetter
2021-07-06 16:07 ` Daniel Vetter
2021-07-06 17:28 ` Jason Gunthorpe
2021-07-06 17:31 ` Christoph Hellwig
2021-07-06 17:59 ` Jason Gunthorpe
2021-07-09 14:47 ` Dennis Dalessandro
2021-07-06 16:29 ` Jason Gunthorpe
2021-07-06 17:35 ` Daniel Vetter
2021-07-06 18:03 ` Daniel Vetter
2021-07-06 18:31 ` Jason Gunthorpe
2021-07-06 19:06 ` Daniel Vetter
2021-07-06 19:09 ` Alex Deucher
2021-07-06 12:21 ` Christoph Hellwig
2021-07-06 12:23 ` [Linaro-mm-sig] " Daniel Vetter
2021-07-06 12:45 ` Oded Gabbay
2021-07-06 13:17 ` Daniel Vetter
2021-07-06 13:45 ` Oded Gabbay
2021-07-07 12:17 ` Christian König
2021-07-07 12:54 ` Daniel Vetter