Date: Thu, 1 Jun 2017 18:35:18 -0400
From: Jerome Glisse <jglisse@redhat.com>
To: Balbir Singh
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Dan Williams, "Kirill A . Shutemov", John Hubbard, Evgeny Baskakov,
	Mark Hairgrove, Sherry Cheung, Subhash Gutti
Subject: Re: [HMM 12/15] mm/migrate: new memory migration helper for use with device memory v4
Message-ID: <20170601223518.GA2780@redhat.com>
References: <20170524172024.30810-1-jglisse@redhat.com> <20170524172024.30810-13-jglisse@redhat.com> <20170531135954.1d67ca31@firefly.ozlabs.ibm.com>
In-Reply-To: <20170531135954.1d67ca31@firefly.ozlabs.ibm.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, May 31, 2017 at 01:59:54PM +1000, Balbir Singh wrote:
> On Wed, 24 May 2017 13:20:21 -0400
> Jérôme Glisse wrote:
> 
> > This patch adds a new memory migration helper, which migrates memory
> > backing a range of virtual addresses of a
> > process to different memory
> > (which can be allocated through a special allocator). It differs from
> > NUMA migration by working on a range of virtual addresses, and thus by
> > doing migration in chunks that can be large enough to use a DMA engine
> > or a special copy-offload engine.
> > 
> > Expected users are anyone with heterogeneous memory where different
> > memories have different characteristics (latency, bandwidth, ...). As
> > an example, IBM platforms with a CAPI bus can make use of this feature
> > to migrate between regular memory and CAPI device memory. New CPU
> > architectures with a pool of high-performance memory not managed as a
> > cache but presented as regular memory (while being faster and with
> > lower latency than DDR) will also be prime users of this patch.
> > 
> > Migration to private device memory will be useful for devices that
> > have a large pool of such memory, like GPUs; NVidia plans to use HMM
> > for that.
> 
> It is helpful; for HMM-CDM, however, we would like to avoid the
> downsides of MIGRATE_SYNC_NOCOPY.

What are the downsides you are referring to?

Cheers,
Jérôme
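
[Editorial note for readers following the thread: the helper discussed above
is driven through a small callback table, so that the driver can batch the
actual copy onto its DMA or copy-offload engine. The sketch below follows the
interface as it was later merged upstream (migrate_vma() with a
migrate_vma_ops table); v4 of the patch may differ in detail, and the
dev_alloc_and_copy()/dev_finalize_and_map() names are hypothetical driver
callbacks, not part of the series.]

```c
/*
 * Hedged sketch of a driver-side caller of the new helper, per the
 * interface as merged (include/linux/migrate.h, kernel 4.14 era).
 * dev_alloc_and_copy()/dev_finalize_and_map() are hypothetical driver
 * functions; error handling is elided.
 */
static const struct migrate_vma_ops dev_migrate_ops = {
	/* Allocate destination pages and copy data, e.g. via a DMA engine. */
	.alloc_and_copy		= dev_alloc_and_copy,
	/* Inspect which entries actually migrated; update device state. */
	.finalize_and_map	= dev_finalize_and_map,
};

static int dev_migrate_range(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end,
			     void *driver_private)
{
	unsigned long src_pfns[64], dst_pfns[64];

	/*
	 * One call covers the whole chunk [start, end), so the copy in
	 * alloc_and_copy() can be large enough to amortize DMA setup cost,
	 * unlike page-at-a-time NUMA migration.
	 */
	return migrate_vma(&dev_migrate_ops, vma, start, end,
			   src_pfns, dst_pfns, driver_private);
}
```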