iommu.lists.linux-foundation.org archive mirror
* problems with iommu and userspace DMA to hugepages, how to add iommu mappings from userspace.
@ 2016-11-10 11:16 Lars Segerlund
From: Lars Segerlund @ 2016-11-10 11:16 UTC (permalink / raw)
  To: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA



 Hi,

 I am getting these errors from a userspace 'device driver':

[599805.585424] DMAR: DRHD: handling fault status reg 202
[599805.585431] DMAR: DMAR:[DMA Read] Request device [03:00.0] fault addr 4c0008000
                DMAR:[fault reason 06] PTE Read access is not set

 Basically, I map a DMA engine on a PCIe card via UIO into a userspace
library, allocate hugepages (contiguous), do a virt-to-phys translation
via /proc/self/...., set up a DMA transfer and start it.
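
Roughly, the allocation and translation step looks like this (a minimal
sketch, assuming a 2 MB hugepage and a lookup through /proc/self/pagemap;
names and sizes are illustrative, not my exact code):

/* sketch: map one 2 MB hugepage and resolve its physical address */
#define _GNU_SOURCE
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HUGEPAGE_SZ (2UL * 1024 * 1024)

/* read /proc/self/pagemap; needs root to see real PFNs on recent kernels */
static uint64_t virt_to_phys(const void *virt)
{
    long psz = sysconf(_SC_PAGESIZE);
    uint64_t entry = 0;
    int fd = open("/proc/self/pagemap", O_RDONLY);

    if (fd < 0)
        return 0;
    /* one 8-byte entry per virtual page, indexed by virtual page number */
    pread(fd, &entry, sizeof(entry),
          ((uintptr_t)virt / psz) * sizeof(entry));
    close(fd);
    if (!(entry & (1ULL << 63)))               /* bit 63: page present */
        return 0;
    return (entry & ((1ULL << 55) - 1)) * psz  /* bits 0-54: PFN */
           + (uintptr_t)virt % psz;
}

int main(void)
{
    void *buf = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED)
        return 1;
    memset(buf, 0, HUGEPAGE_SZ);               /* fault the page in first */
    printf("virt %p -> phys 0x%" PRIx64 "\n", buf, virt_to_phys(buf));
    /* this physical address is what gets programmed into the DMA engine,
     * and that is exactly the access that faults once the IOMMU is on */
    return 0;
}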

The message tells me I lack IOMMU mappings for the memory regions I try to
read/write; so far so good, nothing unexpected.

My problem is that I have to run with the intel_iommu=on iommu=pt kernel
flags due to other hardware/software on the machine, so I can't turn the
IOMMU off completely.

So my question is: is it possible to add IOMMU mappings from userspace,
e.g. through the IOMMU groups in /sys/..?

So far, all applications that do something similar use either VFIO or some
kernel driver callback, and I would prefer not to do that.
(I had high hopes for the DPDK PMD driver, but so far no luck.)

 All hints and help appreciated! :-D

 / regards, Lars Segerlund.


* Re: problems with iommu and userspace DMA to hugepages, how to add iommu mappings from userspace.
@ 2016-11-10 14:04   ` Alex Williamson
From: Alex Williamson @ 2016-11-10 14:04 UTC (permalink / raw)
  To: Lars Segerlund; +Cc: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

On Thu, 10 Nov 2016 12:16:19 +0100
Lars Segerlund <lars.segerlund-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

>  Hi,
> 
>  I am getting these errors from a userspace 'device driver':
> 
> [599805.585424] DMAR: DRHD: handling fault status reg 202
> [599805.585431] DMAR: DMAR:[DMA Read] Request device [03:00.0] fault addr 4c0008000
>                 DMAR:[fault reason 06] PTE Read access is not set
> 
>  Basically, I map a DMA engine on a PCIe card via UIO into a userspace
> library, allocate hugepages (contiguous), do a virt-to-phys translation
> via /proc/self/...., set up a DMA transfer and start it.
> 
> The message tells me I lack IOMMU mappings for the memory regions I try
> to read/write; so far so good, nothing unexpected.
> 
> My problem is that I have to run with the intel_iommu=on iommu=pt kernel
> flags due to other hardware/software on the machine, so I can't turn the
> IOMMU off completely.
> 
> So my question is: is it possible to add IOMMU mappings from userspace,
> e.g. through the IOMMU groups in /sys/..?
> 
> So far, all applications that do something similar use either VFIO or
> some kernel driver callback, and I would prefer not to do that.
> (I had high hopes for the DPDK PMD driver, but so far no luck.)

vfio is the right way to do this.
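
Roughly, the flow looks like this (a minimal sketch, error handling
omitted; the group number and the device address below are only examples,
the address is taken from your fault message and your group number will
differ): bind the device to vfio-pci, open its IOMMU group, attach the
group to a type1 container, map the hugepage buffer at an IOVA of your
choosing with VFIO_IOMMU_MAP_DMA, and program that IOVA into the DMA
engine instead of a physical address.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/vfio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    /* the container holds the IOMMU context, the group holds the device */
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);   /* group number: example */

    struct vfio_group_status status = { .argsz = sizeof(status) };
    ioctl(group, VFIO_GROUP_GET_STATUS, &status);
    if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1;           /* every device in the group must be bound to
                               vfio-pci (or be unbound from its driver) */

    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    void *buf = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)buf,
        .iova  = 0x10000000,  /* the address you program into the DMA engine */
        .size  = 2UL << 20,
    };
    ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

    /* the device fd replaces uio for BAR/register access and interrupts */
    int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:03:00.0");
    (void)device;
    return 0;
}

With the type1 backend the IOMMU driver knows about exactly these
mappings, so the device no longer depends on the iommu=pt identity map.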

