Date: Fri, 19 Jun 2020 16:59:10 -0400
From: Jerome Glisse
To: Daniel Vetter
Subject: Re: [Linaro-mm-sig] [PATCH 04/18] dma-fence: prime lockdep annotations
Message-ID: <20200619205910.GA14480@redhat.com>
References: <20200619113934.GN6578@ziepe.ca> <20200619151551.GP6578@ziepe.ca> <20200619172308.GQ6578@ziepe.ca> <20200619180935.GA10009@redhat.com> <20200619181849.GR6578@ziepe.ca> <20200619201011.GB13117@redhat.com>
List-Id: Discussion list for AMD gfx
Cc: linux-rdma, Thomas Hellström (Intel), Maarten Lankhorst, LKML, DRI Development, Christian König, "moderated list:DMA BUFFER SHARING FRAMEWORK", Jason Gunthorpe, Thomas Hellstrom, amd-gfx list, Daniel Vetter, Mika Kuoppala, Intel Graphics Development, "open list:DMA BUFFER SHARING FRAMEWORK"
On Fri, Jun 19, 2020 at 10:43:20PM +0200, Daniel Vetter wrote:
> On Fri, Jun 19, 2020 at 10:10 PM Jerome Glisse wrote:
> >
> > On Fri, Jun 19, 2020 at 03:18:49PM -0300, Jason Gunthorpe wrote:
> > > On Fri, Jun 19, 2020 at 02:09:35PM -0400, Jerome Glisse wrote:
> > > > On Fri, Jun 19, 2020 at 02:23:08PM -0300, Jason Gunthorpe wrote:
> > > > > On Fri, Jun 19, 2020 at 06:19:41PM +0200, Daniel Vetter wrote:
> > > > > >
> > > > > > The madness is only that device B's mmu notifier might need to wait
> > > > > > for fence_B so that the dma operation finishes. Which in turn has to
> > > > > > wait for device A to finish first.
> > > > >
> > > > > So, it sounds like, fundamentally, you've got this graph of operations
> > > > > across an unknown set of drivers, and the kernel cannot insert itself
> > > > > in dma_fence hand-offs to re-validate any of the buffers involved?
> > > > > Buffers which by definition cannot be touched by the hardware yet.
> > > > >
> > > > > That really is a pretty horrible place to end up..
> > > > >
> > > > > Pinning really is the right answer for this kind of workflow. I think
> > > > > converting pinning to notifiers should not be done unless notifier
> > > > > invalidation is relatively bounded.
> > > > >
> > > > > I know people like notifiers because they give a bit nicer performance
> > > > > in some happy cases, but this cripples all the bad cases..
> > > > >
> > > > > If pinning doesn't work for some reason maybe we should address that?
> > > >
> > > > Note that the dma-fence issue is only true for userptr buffers, which
> > > > predate any HMM work and thus were using mmu notifiers already. You
> > > > need the mmu notifier there because of fork and other corner cases.
> > >
> > > I wonder if we should try to fix the fork case more directly - RDMA
> > > has this same problem and added MADV_DONTFORK a long time ago as a
> > > hacky way to deal with it.
> > >
> > > Some crazy page pin that resolved COW in a way that always kept the
> > > physical memory with the mm that initiated the pin?
> >
> > There is just no easy way to deal with it. I thought about forcing the
> > anon_vma (page->mapping for an anonymous page) to the anon_vma that
> > belongs to the vma against which the GUP was done, but that would
> > break things if the page is already in another branch of a fork tree.
> > It also forbids fast GUP.
> >
> > Quite frankly, fork was not the main motivating factor. A GPU can pin
> > potentially gigabytes of memory, so we wanted to be able to release
> > it, but since Michal's changes to the reclaim code this is no longer
> > effective.
>
> What, where, how? My patch to annotate reclaim paths with mmu notifier
> possibility just landed in -mm, so if direct reclaim can't reclaim mmu
> notifier'ed stuff anymore we need to know.
>
> Also this would resolve the entire pain we're discussing in this
> thread about dma_fence_wait deadlocking against anything that's not
> GFP_ATOMIC ...

Sorry, my bad: reclaim still works, it is only the OOM killer that skips
it. It was a couple of years ago and I thought that some of the things
discussed back then had made it upstream.

It is probably a good time to also point out what I wanted to do: have
all the mmu notifier callbacks provide some kind of fence (not a
dma-fence) so that we can split the notification into steps:

A- Schedule the notification on all devices/systems and collect the
   fences. This step should minimize lock dependencies and should not
   have to wait for anything; it is also best if you can avoid memory
   allocation, for instance by pre-allocating what you need for the
   notification.
B- The mm can do things like unmap, but cannot map new pages, so it
   writes special swap ptes to the cpu page table.
C- Wait on each fence from A.
... resume old code, ie replace ptes or finish the unmap ...
The idea here is that at step C the core mm can decide to back off if
any fence returned from A would have to wait. This means that every
device invalidated for nothing, but if we get there then it might still
be a good thing, as next time around the kernel might succeed without a
wait. This would allow things like reclaim to make forward progress and
skip over, or limit the wait to, a given timeout.

I also thought about extending this to multi-CPU TLB flushes, so that
devices and CPUs follow the same pattern and we can make progress on
each in parallel.

Getting to such a scheme is a lot of work. My plan was to first get the
fence as part of the notifier user API and hide it from the mm inside
the notifier common code, then update each core mm path to the new
model and see if there is any benefit from it. Reclaim would be the
first candidate.

Cheers,
Jérôme
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx