From: jglisse@redhat.com
To: akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Linus Torvalds <torvalds@linux-foundation.org>,
joro@8bytes.org, Mel Gorman <mgorman@suse.de>,
"H. Peter Anvin" <hpa@zytor.com>,
Peter Zijlstra <peterz@infradead.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Johannes Weiner <jweiner@redhat.com>,
Larry Woodman <lwoodman@redhat.com>,
Rik van Riel <riel@redhat.com>, Dave Airlie <airlied@redhat.com>,
Brendan Conoboy <blc@redhat.com>,
Joe Donohue <jdonohue@redhat.com>,
Duncan Poole <dpoole@nvidia.com>,
Sherry Cheung <SCheung@nvidia.com>,
Subhash Gutti <sgutti@nvidia.com>,
John Hubbard <jhubbard@nvidia.com>,
Mark Hairgrove <mhairgrove@nvidia.com>,
Lucien Dunning <ldunning@nvidia.com>,
Cameron Buschardt <cabuschardt@nvidia.com>,
Arvind Gopalakrishnan <arvindg@nvidia.com>,
Haggai Eran <haggaie@mellanox.com>,
Shachar Raindel <raindel@mellanox.com>,
Liran Liss <liranl@mellanox.com>,
Roland Dreier <roland@purestorage.com>,
Ben Sander <ben.sander@amd.com>,
Greg Stoner <Greg.Stoner@amd.com>,
John Bridgman <John.Bridgman@amd.com>,
Michael Mantor <Michael.Mantor@amd.com>,
Paul Blinzer <Paul.Blinzer@amd.com>,
Laurent Morichetti <Laurent.Morichetti@amd.com>,
Alexander Deucher <Alexander.Deucher@amd.com>,
Oded Gabbay <Oded.Gabbay@amd.com>,
Jerome Glisse <jglisse@redhat.com>,
Jatin Kumar <jakumar@nvidia.com>
Subject: [PATCH 22/36] HMM: add new callback for copying memory from and to device memory.
Date: Thu, 21 May 2015 16:22:58 -0400
Message-ID: <1432239792-5002-3-git-send-email-jglisse@redhat.com>
In-Reply-To: <1432239792-5002-1-git-send-email-jglisse@redhat.com>
From: Jerome Glisse <jglisse@redhat.com>
This patch only adds the new callbacks that a device driver must
implement to copy memory from and to device memory.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Sherry Cheung <SCheung@nvidia.com>
Signed-off-by: Subhash Gutti <sgutti@nvidia.com>
Signed-off-by: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Jatin Kumar <jakumar@nvidia.com>
---
include/linux/hmm.h | 103 ++++++++++++++++++++++++++++++++++++++++++++++++++++
mm/hmm.c            |   2 +
2 files changed, 105 insertions(+)
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index f243eb5..eb30418 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -66,6 +66,8 @@ enum hmm_etype {
HMM_DEVICE_RFAULT,
HMM_DEVICE_WFAULT,
HMM_WRITE_PROTECT,
+ HMM_COPY_FROM_DEVICE,
+ HMM_COPY_TO_DEVICE,
};
/* struct hmm_event - memory event information.
@@ -157,6 +159,107 @@ struct hmm_device_ops {
* Any other return value triggers a warning and is transformed to -EIO.
*/
int (*update)(struct hmm_mirror *mirror, const struct hmm_event *event);
+
+ /* copy_from_device() - copy from device memory to system memory.
+ *
+ * @mirror: The mirror that links the process address space with the device.
+ * @event: The event that triggered the copy.
+ * @dst: Array containing hmm_pte of destination memory.
+ * @start: Start address of the range (sub-range of event) to copy.
+ * @end: End address of the range (sub-range of event) to copy.
+ * Returns: 0 on success, error code otherwise {-ENOMEM, -EIO}.
+ *
+ * Called when migrating memory from device memory to system memory.
+ * The dst array contains the valid DMA addresses, for the device, of
+ * the pages to copy to (or the page pfns if hmm_device.device == NULL).
+ *
+ * If event.etype == HMM_FORK then the device driver only needs to
+ * schedule a copy to the system pages given in the dst hmm_pte array.
+ * Do not update the device page, and do not pause/stop the device
+ * threads that are using this address space. Just copy memory.
+ *
+ * If event.etype == HMM_COPY_FROM_DEVICE then the device driver must
+ * first write protect the range, then schedule the copy, then update
+ * its page table to use the new system memory given in the dst array.
+ * Some devices can perform all of this atomically from the device
+ * point of view. The device driver must also free the device memory
+ * once the copy is done.
+ *
+ * The device driver must not fail lightly; any failure results in the
+ * device process being killed and CPU page table entries set to HWPOISON.
+ *
+ * Note that the device driver must clear the valid bit of any dst
+ * entry it failed to copy.
+ *
+ * On failure the mirror will be killed by HMM, which will do an
+ * HMM_MUNMAP invalidation of all the memory; when this happens the
+ * device driver can free the device memory.
+ *
+ * Note also that there can be holes in the range being copied, ie some
+ * entries of the dst array will not have the valid bit set; the device
+ * driver must simply ignore non valid entries.
+ *
+ * Finally the device driver must set the dirty bit for each page that
+ * was modified since it was copied into device memory. This must be
+ * conservative, ie if the device can not determine that with certainty
+ * then it must set the dirty bit unconditionally.
+ *
+ * Returns 0 on success, an error value otherwise:
+ * -ENOMEM Not enough memory for performing the operation.
+ * -EIO Some input/output error with the device.
+ *
+ * Any other return value triggers a warning and is transformed to -EIO.
+ */
+ int (*copy_from_device)(struct hmm_mirror *mirror,
+ const struct hmm_event *event,
+ dma_addr_t *dst,
+ unsigned long start,
+ unsigned long end);
+
+ /* copy_to_device() - copy to device memory from system memory.
+ *
+ * @mirror: The mirror that links the process address space with the device.
+ * @event: The event that triggered the copy.
+ * @dst: Array to fill with the hmm_pte of the destination device memory.
+ * @start: Start address of the range (sub-range of event) to copy.
+ * @end: End address of the range (sub-range of event) to copy.
+ * Returns: 0 on success, error code otherwise {-ENOMEM, -EIO}.
+ *
+ * Called when migrating memory from system memory to device memory.
+ * The dst array is empty; all of its entries are zero. The device
+ * driver must allocate the device memory and populate each entry using
+ * hmm_pte_from_device_pfn(); only the valid device bit and hardware
+ * specific bits will be preserved (write and dirty will be taken from
+ * the original entry inside the mirror page table). It is advised to
+ * set the device pfn to match the physical address of the device memory
+ * being used. The event.etype will be equal to HMM_COPY_TO_DEVICE.
+ *
+ * Device drivers that can atomically copy a page and update their page
+ * table entry to point to the device memory may do so. Partial
+ * failure is allowed; entries that have not been migrated must have
+ * the HMM_PTE_VALID_DEV bit cleared inside the dst array. HMM will
+ * update the CPU page table of failed entries to point back to the
+ * system pages.
+ *
+ * Note that the device driver is responsible for allocating and
+ * freeing the device memory and for properly updating the dst array
+ * entries with the allocated device memory.
+ *
+ * Returns 0 on success, an error value otherwise:
+ * -ENOMEM Not enough memory for performing the operation.
+ * -EIO Some input/output error with the device.
+ *
+ * Any other return value triggers a warning and is transformed to -EIO.
+ * An error means that the migration is aborted; so in case of partial
+ * failure, if the device does not want to fully abort, it must return 0.
+ * The device driver may update its device page table only once it
+ * knows the copy will not fail.
+ */
+ int (*copy_to_device)(struct hmm_mirror *mirror,
+ const struct hmm_event *event,
+ dma_addr_t *dst,
+ unsigned long start,
+ unsigned long end);
};
diff --git a/mm/hmm.c b/mm/hmm.c
index e4585b7..9dbb1e43 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -87,6 +87,8 @@ static inline int hmm_event_init(struct hmm_event *event,
case HMM_ISDIRTY:
case HMM_DEVICE_RFAULT:
case HMM_DEVICE_WFAULT:
+ case HMM_COPY_TO_DEVICE:
+ case HMM_COPY_FROM_DEVICE:
break;
case HMM_FORK:
case HMM_WRITE_PROTECT:
--
1.9.3