* [PATCH] mm/hmm: fix hmm_range_dma_map()/hmm_range_dma_unmap()
@ 2019-04-09 17:53 jglisse
2019-04-09 21:52 ` Andrew Morton
From: jglisse @ 2019-04-09 17:53 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Jérôme Glisse, Andrew Morton,
Ralph Campbell, John Hubbard
From: Jérôme Glisse <jglisse@redhat.com>
The code was using the wrong field and the wrong enum for read-only
versus read-and-write mappings.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
---
mm/hmm.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 90369fd2307b..ecd16718285e 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1203,7 +1203,7 @@ long hmm_range_dma_map(struct hmm_range *range,
npages = (range->end - range->start) >> PAGE_SHIFT;
for (i = 0, mapped = 0; i < npages; ++i) {
- enum dma_data_direction dir = DMA_FROM_DEVICE;
+ enum dma_data_direction dir = DMA_TO_DEVICE;
struct page *page;
/*
@@ -1227,7 +1227,7 @@ long hmm_range_dma_map(struct hmm_range *range,
}
/* If it is read and write than map bi-directional. */
- if (range->pfns[i] & range->values[HMM_PFN_WRITE])
+ if (range->pfns[i] & range->flags[HMM_PFN_WRITE])
dir = DMA_BIDIRECTIONAL;
daddrs[i] = dma_map_page(device, page, 0, PAGE_SIZE, dir);
@@ -1243,7 +1243,7 @@ long hmm_range_dma_map(struct hmm_range *range,
unmap:
for (npages = i, i = 0; (i < npages) && mapped; ++i) {
- enum dma_data_direction dir = DMA_FROM_DEVICE;
+ enum dma_data_direction dir = DMA_TO_DEVICE;
struct page *page;
page = hmm_device_entry_to_page(range, range->pfns[i]);
@@ -1254,7 +1254,7 @@ long hmm_range_dma_map(struct hmm_range *range,
continue;
/* If it is read and write than map bi-directional. */
- if (range->pfns[i] & range->values[HMM_PFN_WRITE])
+ if (range->pfns[i] & range->flags[HMM_PFN_WRITE])
dir = DMA_BIDIRECTIONAL;
dma_unmap_page(device, daddrs[i], PAGE_SIZE, dir);
@@ -1298,7 +1298,7 @@ long hmm_range_dma_unmap(struct hmm_range *range,
npages = (range->end - range->start) >> PAGE_SHIFT;
for (i = 0; i < npages; ++i) {
- enum dma_data_direction dir = DMA_FROM_DEVICE;
+ enum dma_data_direction dir = DMA_TO_DEVICE;
struct page *page;
page = hmm_device_entry_to_page(range, range->pfns[i]);
@@ -1306,7 +1306,7 @@ long hmm_range_dma_unmap(struct hmm_range *range,
continue;
/* If it is read and write than map bi-directional. */
- if (range->pfns[i] & range->values[HMM_PFN_WRITE]) {
+ if (range->pfns[i] & range->flags[HMM_PFN_WRITE]) {
dir = DMA_BIDIRECTIONAL;
/*
--
2.20.1
* Re: [PATCH] mm/hmm: fix hmm_range_dma_map()/hmm_range_dma_unmap()
2019-04-09 17:53 [PATCH] mm/hmm: fix hmm_range_dma_map()/hmm_range_dma_unmap() jglisse
@ 2019-04-09 21:52 ` Andrew Morton
From: Andrew Morton @ 2019-04-09 21:52 UTC (permalink / raw)
To: jglisse; +Cc: linux-mm, linux-kernel, Ralph Campbell, John Hubbard
On Tue, 9 Apr 2019 13:53:40 -0400 jglisse@redhat.com wrote:
> The code was using the wrong field and the wrong enum for read-only
> versus read-and-write mappings.
For those who were wondering, this fixes
mm-hmm-add-an-helper-function-that-fault-pages-and-map-them-to-a-device-v3.patch,
which is presently queued in -mm.