From: "Jérôme Glisse" <jglisse@redhat.com>
To: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
joro@8bytes.org, Mel Gorman <mgorman@suse.de>,
"H. Peter Anvin" <hpa@zytor.com>,
Peter Zijlstra <peterz@infradead.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Johannes Weiner <jweiner@redhat.com>,
Larry Woodman <lwoodman@redhat.com>,
Rik van Riel <riel@redhat.com>, Dave Airlie <airlied@redhat.com>,
Brendan Conoboy <blc@redhat.com>,
Joe Donohue <jdonohue@redhat.com>,
Christophe Harle <charle@nvidia.com>,
Duncan Poole <dpoole@nvidia.com>,
Sherry Cheung <SCheung@nvidia.com>,
Subhash Gutti <sgutti@nvidia.com>,
John Hubbard <jhubbard@nvidia.com>,
Mark Hairgrove <mhairgrove@nvidia.com>,
Lucien Dunning <ldunning@nvidia.com>,
Cameron Buschardt <cabuschardt@nvidia.com>,
Arvind Gopalakrishnan <arvindg@nvidia.com>,
Haggai Eran <haggaie@mellanox.com>,
Shachar Raindel <raindel@mellanox.com>,
Liran Liss <liranl@mellanox.com>,
Roland Dreier <roland@purestorage.com>,
Ben Sander <ben.sander@amd.com>,
Greg Stoner <Greg.Stoner@amd.com>,
John Bridgman <John.Bridgman@amd.com>,
Michael Mantor <Michael.Mantor@amd.com>,
Paul Blinzer <Paul.Blinzer@amd.com>,
Leonid Shamis <Leonid.Shamis@amd.com>,
Laurent Morichetti <Laurent.Morichetti@amd.com>,
Alexander Deucher <Alexander.Deucher@amd.com>,
Jerome Glisse <jglisse@redhat.com>,
Jatin Kumar <jakumar@nvidia.com>
Subject: [PATCH v12 23/29] HMM: new callback for copying memory from and to device memory v2.
Date: Tue, 8 Mar 2016 15:43:16 -0500 [thread overview]
Message-ID: <1457469802-11850-24-git-send-email-jglisse@redhat.com> (raw)
In-Reply-To: <1457469802-11850-1-git-send-email-jglisse@redhat.com>
From: Jerome Glisse <jglisse@redhat.com>
This patch only adds the new callbacks a device driver must implement
in order to copy memory to and from device memory.
Changed since v1:
- Pass down the vma to the copy function.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Sherry Cheung <SCheung@nvidia.com>
Signed-off-by: Subhash Gutti <sgutti@nvidia.com>
Signed-off-by: Mark Hairgrove <mhairgrove@nvidia.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Jatin Kumar <jakumar@nvidia.com>
---
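As a note for reviewers: the per-page loop that a driver's
->copy_from_device() implementation is expected to run over the dst
array can be sketched in plain, self-contained C. The flag bits, page
shift and copy helper below are hypothetical stand-ins for illustration
only, not the real HMM API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real HMM pte flag bits. */
#define HMM_PTE_VALID (1ULL << 0)
#define HMM_PTE_DIRTY (1ULL << 1)

typedef uint64_t dma_addr_t;

/*
 * Sketch of the per-page loop a ->copy_from_device() implementation
 * performs: skip holes (entries without the valid bit), copy each page
 * out of device memory, clear the valid bit of an entry that failed to
 * copy, and conservatively set the dirty bit on entries that copied.
 */
static int copy_from_device_sketch(dma_addr_t *dst, unsigned long npages,
                                   int (*copy_one)(dma_addr_t))
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		if (!(dst[i] & HMM_PTE_VALID))
			continue;                 /* hole in the range: ignore it */
		if (copy_one(dst[i])) {
			dst[i] &= ~HMM_PTE_VALID; /* mark this entry as failed */
			continue;
		}
		/* Cannot prove the page is clean: set dirty unconditionally. */
		dst[i] |= HMM_PTE_DIRTY;
	}
	return 0;
}
```

A real implementation would schedule DMA rather than call a synchronous
copy helper, and would free the device memory once the copy completes.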
include/linux/hmm.h | 105 ++++++++++++++++++++++++++++++++++++++++++++++++++++
mm/hmm.c | 2 +
2 files changed, 107 insertions(+)
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 7c66513..9fbfc07 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -65,6 +65,8 @@ enum hmm_etype {
HMM_DEVICE_RFAULT,
HMM_DEVICE_WFAULT,
HMM_WRITE_PROTECT,
+ HMM_COPY_FROM_DEVICE,
+ HMM_COPY_TO_DEVICE,
};
/* struct hmm_event - memory event information.
@@ -170,6 +172,109 @@ struct hmm_device_ops {
*/
int (*update)(struct hmm_mirror *mirror,
struct hmm_event *event);
+
+ /* copy_from_device() - copy from device memory to system memory.
+ *
+ * @mirror: The mirror that links the process address space with the device.
+ * @event: The event that triggered the copy.
+ * @dst: Array containing hmm_pte of destination memory.
+ * @start: Start address of the range (sub-range of event) to copy.
+ * @end: End address of the range (sub-range of event) to copy.
+ * Returns: 0 on success, error code otherwise {-ENOMEM, -EIO}.
+ *
+ * Called when migrating memory from device memory to system memory.
+ * The dst array contains the valid DMA addresses, for the device, of the
+ * pages to copy to (or the pfns of the pages if hmm_device.device == NULL).
+ *
+ * If event.etype == HMM_FORK then the device driver only needs to
+ * schedule a copy to the system pages given in the dst hmm_pte array.
+ * Do not update the device page table, and do not pause/stop the device
+ * threads that are using this address space. Just copy the memory.
+ *
+ * If event.etype == HMM_COPY_FROM_DEVICE then the device driver must
+ * first write protect the range, then schedule the copy, then update
+ * its page table to use the new system memory given in the dst array.
+ * Some devices can perform all of this atomically from the device
+ * point of view. The device driver must also free the device memory
+ * once the copy is done.
+ *
+ * The device driver must not fail lightly: any failure results in the
+ * process being killed and the CPU page table being set to HWPOISON
+ * entries.
+ *
+ * Note that the device driver must clear the valid bit of any dst
+ * entry it failed to copy.
+ *
+ * On failure the mirror will be killed by HMM, which will do an
+ * HMM_MUNMAP invalidation of all the memory; when this happens the
+ * device driver can free the device memory.
+ *
+ * Note also that there can be holes in the range being copied, i.e.
+ * some entries of the dst array will not have the valid bit set; the
+ * device driver must simply ignore invalid entries.
+ *
+ * Finally, the device driver must set the dirty bit for each page
+ * that was modified while it was inside the device memory. This must
+ * be conservative, i.e. if the device can not determine that with
+ * certainty then it must set the dirty bit unconditionally.
+ *
+ * Return 0 on success, error value otherwise:
+ * -ENOMEM Not enough memory for performing the operation.
+ * -EIO Some input/output error with the device.
+ *
+ * All other return values trigger a warning and are transformed to -EIO.
+ */
+ int (*copy_from_device)(struct hmm_mirror *mirror,
+ const struct hmm_event *event,
+ dma_addr_t *dst,
+ unsigned long start,
+ unsigned long end);
+
+ /* copy_to_device() - copy to device memory from system memory.
+ *
+ * @mirror: The mirror that links the process address space with the device.
+ * @event: The event that triggered the copy.
+ * @vma: The vma corresponding to the range.
+ * @dst: Array containing hmm_pte of destination memory.
+ * @start: Start address of the range (sub-range of event) to copy.
+ * @end: End address of the range (sub-range of event) to copy.
+ * Returns: 0 on success, error code otherwise {-ENOMEM, -EIO}.
+ *
+ * Called when migrating memory from system memory to device memory.
+ * The dst array is empty: all of its entries are equal to zero. The
+ * device driver must allocate the device memory and populate each
+ * entry using hmm_pte_from_device_pfn(); only the valid device bit and
+ * hardware specific bits will be preserved (the write and dirty bits
+ * are taken from the original entry inside the mirror page table). It
+ * is advised to set the device pfn to match the physical address of
+ * the device memory being used. The event.etype will be equal to
+ * HMM_COPY_TO_DEVICE.
+ *
+ * A device driver that can atomically copy a page and update its page
+ * table entry to point to the device memory may do so. Partial
+ * failure is allowed: entries that have not been migrated must have
+ * the HMM_PTE_VALID_DEV bit clear inside the dst array. HMM will
+ * update the CPU page table of failed entries to point back to the
+ * system pages.
+ *
+ * Note that the device driver is responsible for allocating and
+ * freeing the device memory and for properly updating the dst array
+ * entries with the allocated device memory.
+ *
+ * Return 0 on success, error value otherwise:
+ * -ENOMEM Not enough memory for performing the operation.
+ * -EIO Some input/output error with the device.
+ *
+ * All other return values trigger a warning and are transformed to
+ * -EIO. An error means that the migration is aborted, so in case of
+ * partial failure, if the device does not want to fully abort, it
+ * must return 0. The device driver can update its device page table
+ * only if it knows it will not return failure.
+ */
+ int (*copy_to_device)(struct hmm_mirror *mirror,
+ const struct hmm_event *event,
+ struct vm_area_struct *vma,
+ dma_addr_t *dst,
+ unsigned long start,
+ unsigned long end);
};
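To make the ->copy_to_device() contract above concrete: a driver
populates the zeroed dst array page by page, leaving an entry invalid
on allocation failure so that HMM falls back to the system page. The
following self-contained C sketch uses hypothetical stand-ins (flag
bit, page shift, allocator callback), not the real HMM definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real HMM definitions. */
#define HMM_PTE_VALID_DEV (1ULL << 0)
#define DEV_PAGE_SHIFT 12

typedef uint64_t dma_addr_t;

/*
 * Sketch of ->copy_to_device(): dst arrives zeroed; for each page the
 * driver allocates device memory, copies the system page into it, then
 * publishes the entry with the device-valid bit set. Entries left
 * without HMM_PTE_VALID_DEV are treated as failed by HMM, which keeps
 * the CPU mapped to the original system page.
 */
static int copy_to_device_sketch(dma_addr_t *dst, unsigned long npages,
                                 uint64_t (*alloc_dev_pfn)(void))
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		uint64_t pfn = alloc_dev_pfn();

		if (!pfn)  /* partial failure allowed: leave entry invalid */
			continue;
		/* ... schedule the actual system->device copy here ... */
		dst[i] = (pfn << DEV_PAGE_SHIFT) | HMM_PTE_VALID_DEV;
	}
	/*
	 * Partial failure is not an error: returning non-zero would
	 * abort the whole migration, so return 0 and let HMM map the
	 * still-invalid entries back to system memory.
	 */
	return 0;
}
```

This mirrors the rule stated in the comment block: only return an error
when the driver wants the entire migration aborted.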
diff --git a/mm/hmm.c b/mm/hmm.c
index 9455443..d26abe4 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -78,6 +78,8 @@ static inline int hmm_event_init(struct hmm_event *event,
switch (etype) {
case HMM_DEVICE_RFAULT:
case HMM_DEVICE_WFAULT:
+ case HMM_COPY_TO_DEVICE:
+ case HMM_COPY_FROM_DEVICE:
break;
case HMM_FORK:
case HMM_WRITE_PROTECT:
--
2.4.3
--
2016-03-08 20:42 HMM (Heterogeneous Memory Management) Jérôme Glisse
2016-03-08 20:42 ` [PATCH v12 01/29] mmu_notifier: add event information to address invalidation v9 Jérôme Glisse
2016-03-08 20:42 ` [PATCH v12 02/29] mmu_notifier: keep track of active invalidation ranges v5 Jérôme Glisse
2016-03-08 20:42 ` [PATCH v12 03/29] mmu_notifier: pass page pointer to mmu_notifier_invalidate_page() v2 Jérôme Glisse
2016-03-08 20:42 ` [PATCH v12 04/29] mmu_notifier: allow range invalidation to exclude a specific mmu_notifier Jérôme Glisse
2016-03-08 20:42 ` [PATCH v12 05/29] HMM: introduce heterogeneous memory management v5 Jérôme Glisse
2016-03-08 20:42 ` [PATCH v12 06/29] HMM: add HMM page table v4 Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 07/29] HMM: add per mirror " Jérôme Glisse
2016-03-29 22:58 ` John Hubbard
2016-03-08 20:43 ` [PATCH v12 08/29] HMM: add device page fault support v6 Jérôme Glisse
2016-03-23 6:52 ` Aneesh Kumar K.V
2016-03-23 10:09 ` Jerome Glisse
2016-03-23 10:29 ` Aneesh Kumar K.V
2016-03-23 11:25 ` Jerome Glisse
2016-03-08 20:43 ` [PATCH v12 09/29] HMM: add mm page table iterator helpers Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 10/29] HMM: use CPU page table during invalidation Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 11/29] HMM: add discard range helper (to clear and free resources for a range) Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 12/29] HMM: add dirty range helper (toggle dirty bit inside mirror page table) v2 Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 13/29] HMM: DMA map memory on behalf of device driver v2 Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 14/29] HMM: Add support for hugetlb Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 15/29] HMM: add documentation explaining HMM internals and how to use it Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 16/29] fork: pass the dst vma to copy_page_range() and its sub-functions Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 17/29] HMM: add special swap filetype for memory migrated to device v2 Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 18/29] HMM: add new HMM page table flag (valid device memory) Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 19/29] HMM: add new HMM page table flag (select flag) Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 20/29] HMM: handle HMM device page table entry on mirror page table fault and update Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 21/29] HMM: mm add helper to update page table when migrating memory back v2 Jérôme Glisse
2016-03-21 11:27 ` Aneesh Kumar K.V
2016-03-21 12:02 ` Jerome Glisse
2016-03-21 13:48 ` Aneesh Kumar K.V
2016-03-21 14:30 ` Jerome Glisse
2016-03-08 20:43 ` [PATCH v12 22/29] HMM: mm add helper to update page table when migrating memory v3 Jérôme Glisse
2016-03-21 14:24 ` Aneesh Kumar K.V
2016-03-08 20:43 ` Jérôme Glisse [this message]
2016-03-08 20:43 ` [PATCH v12 24/29] HMM: allow to get pointer to spinlock protecting a directory Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 25/29] HMM: split DMA mapping function in two Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 26/29] HMM: add helpers for migration back to system memory v3 Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 27/29] HMM: fork copy migrated memory into system memory for child process Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 28/29] HMM: CPU page fault on migrated memory Jérôme Glisse
2016-03-08 20:43 ` [PATCH v12 29/29] HMM: add mirror fault support for system to device memory migration v3 Jérôme Glisse
2016-03-08 22:02 ` HMM (Heterogeneous Memory Management) John Hubbard