* Can I/OAT DMA engine access PCI MMIO space
@ 2011-04-29 10:13 康剑斌
2011-05-02 6:04 ` Koul, Vinod
0 siblings, 1 reply; 9+ messages in thread
From: 康剑斌 @ 2011-04-29 10:13 UTC (permalink / raw)
To: linux-kernel; +Cc: dan.j.williams
I am trying to use ioatdma to copy data from system memory to PCI MMIO space:
=======================
/* Reconstructed for completeness: the completion context and callback
 * were not shown in the original mail. */
struct memcpy_context {
	atomic_t cnt;
	struct completion cmp;
};

static void done_memcpy(void *arg)
{
	struct memcpy_context *ctx = arg;

	if (atomic_dec_and_test(&ctx->cnt))
		complete(&ctx->cmp);
}

void mmio_copy(void *dst, void *src, size_t len)
{
	struct async_submit_ctl submit;
	struct memcpy_context ctx;

	atomic_set(&ctx.cnt, 1);
	init_completion(&ctx.cmp);
	init_async_submit(&submit, ASYNC_TX_ACK, NULL,
			  done_memcpy, &ctx, NULL);

	while (len) {
		size_t clen = min_t(size_t, len, PAGE_SIZE);
		struct page *p_dst = virt_to_page(dst);
		struct page *p_src = virt_to_page(src);

		atomic_inc(&ctx.cnt);
		ntb_async_memcpy(p_dst, p_src, 0, 0, clen, &submit);
		dst += clen;
		src += clen;
		len -= clen;
	}

	if (atomic_dec_and_test(&ctx.cnt))
		return;
	async_tx_issue_pending_all();
	wait_for_completion(&ctx.cmp);
}
=================
If dst points to system memory, the operation succeeds.
But if dst points to PCI MMIO space, it fails with a kernel oops.
It seems the code:
BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365 causes the oops.
Is there any way to access PCI MMIO space using I/OAT?
The datasheet says that I/OAT supports MMIO access.
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: Can I/OAT DMA engine access PCI MMIO space
2011-04-29 10:13 Can I/OAT DMA engine access PCI MMIO space 康剑斌
@ 2011-05-02 6:04 ` Koul, Vinod
2011-05-03 2:21 ` 康剑斌
0 siblings, 1 reply; 9+ messages in thread
From: Koul, Vinod @ 2011-05-02 6:04 UTC (permalink / raw)
To: 康剑斌; +Cc: linux-kernel, dan.j.williams
On Fri, 2011-04-29 at 18:13 +0800, 康剑斌 wrote:
> I am trying to use ioatdma to copy data from system memory to PCI MMIO space:
> If dst points to system memory, the operation succeeds.
> But if dst points to PCI MMIO space, it fails with a kernel oops.
> It seems the code:
> BUG_ON(is_ioat_bug(chanerr));
> in drivers/dma/ioat/dma_v3.c, line 365 causes the oops.
> Is there any way to access PCI MMIO space using I/OAT?
> The datasheet says that I/OAT supports MMIO access.
Did you map the IO memory in the kernel using ioremap and friends first?
--
~Vinod
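[Editor's note: a minimal sketch of what "ioremap and friends" looks like for a PCI BAR. The device pointer, BAR index, and function name are hypothetical, not from this thread.]

```c
#include <linux/pci.h>
#include <linux/io.h>

/* Hypothetical sketch: map BAR 2 of a PCI device for CPU access.
 * The BAR index and the lack of error handling are assumptions. */
static void __iomem *map_bar(struct pci_dev *pdev)
{
	resource_size_t start = pci_resource_start(pdev, 2);
	resource_size_t len = pci_resource_len(pdev, 2);

	/* ioremap gives the CPU a virtual mapping of the BAR; note that
	 * this mapping is usable only by CPU loads/stores, not by DMA
	 * engines, which is the distinction Dan makes later in the
	 * thread. */
	return ioremap_nocache(start, len);
}
```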
* Re: Can I/OAT DMA engine access PCI MMIO space
2011-05-02 6:04 ` Koul, Vinod
@ 2011-05-03 2:21 ` 康剑斌
2011-05-03 4:12 ` Koul, Vinod
2011-05-03 16:05 ` Dan Williams
0 siblings, 2 replies; 9+ messages in thread
From: 康剑斌 @ 2011-05-03 2:21 UTC (permalink / raw)
To: Koul, Vinod; +Cc: linux-kernel, dan.j.williams
>> I am trying to use ioatdma to copy data from system memory to PCI MMIO space:
>> If dst points to system memory, the operation succeeds.
>> But if dst points to PCI MMIO space, it fails with a kernel oops.
>> It seems the code:
>> BUG_ON(is_ioat_bug(chanerr));
>> in drivers/dma/ioat/dma_v3.c, line 365 causes the oops.
>> Is there any way to access PCI MMIO space using I/OAT?
>> The datasheet says that I/OAT supports MMIO access.
> Did you map the IO memory in the kernel using ioremap and friends first?
>
Yes, I used 'ioremap_nocache' to map the IO memory, and I can use
memcpy to copy data to this region. The async_tx layer should be
correctly configured, since I can use async_memcpy to copy data
between different system memory addresses.
* Re: Can I/OAT DMA engine access PCI MMIO space
2011-05-03 2:21 ` 康剑斌
@ 2011-05-03 4:12 ` Koul, Vinod
2011-05-03 6:31 ` 康剑斌
2011-05-03 16:05 ` Dan Williams
1 sibling, 1 reply; 9+ messages in thread
From: Koul, Vinod @ 2011-05-03 4:12 UTC (permalink / raw)
To: 康剑斌; +Cc: linux-kernel, dan.j.williams
On Tue, 2011-05-03 at 10:21 +0800, 康剑斌 wrote:
> >> I am trying to use ioatdma to copy data from system memory to PCI MMIO space:
> >> If dst points to system memory, the operation succeeds.
> >> But if dst points to PCI MMIO space, it fails with a kernel oops.
> >> It seems the code:
> >> BUG_ON(is_ioat_bug(chanerr));
> >> in drivers/dma/ioat/dma_v3.c, line 365 causes the oops.
> >> Is there any way to access PCI MMIO space using I/OAT?
> >> The datasheet says that I/OAT supports MMIO access.
> > Did you map the IO memory in the kernel using ioremap and friends first?
> >
> Yes, I used 'ioremap_nocache' to map the IO memory, and I can use
> memcpy to copy data to this region. The async_tx layer should be
> correctly configured, since I can use async_memcpy to copy data
> between different system memory addresses.
Then you should be using memcpy_toio() and friends
--
~Vinod
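[Editor's note: a sketch of the CPU-side copy Vinod suggests. Everything here is hypothetical; 'bar' is assumed to be an ioremapped cookie and 'buf' ordinary kernel memory.]

```c
#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical sketch: CPU copy into an ioremapped BAR using the io
 * accessors rather than plain memcpy(). */
static void copy_to_bar(void __iomem *bar, const void *buf, size_t len)
{
	/* memcpy_toio() does the byte copy with I/O-safe accesses;
	 * plain memcpy() to __iomem space is not portable across
	 * architectures, even if it happens to work on x86. */
	memcpy_toio(bar, buf, len);
}
```

Note that this is still a CPU copy, which is why the original poster pushes back in the next mail: it does not offload the transfer.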
* Re: Can I/OAT DMA engine access PCI MMIO space
2011-05-03 4:12 ` Koul, Vinod
@ 2011-05-03 6:31 ` 康剑斌
2011-05-03 15:58 ` Dan Williams
0 siblings, 1 reply; 9+ messages in thread
From: 康剑斌 @ 2011-05-03 6:31 UTC (permalink / raw)
To: Koul, Vinod; +Cc: linux-kernel, dan.j.williams
>> Yes, I used 'ioremap_nocache' to map the IO memory, and I can use
>> memcpy to copy data to this region. The async_tx layer should be
>> correctly configured, since I can use async_memcpy to copy data
>> between different system memory addresses.
> Then you should be using memcpy_toio() and friends
>
Do you mean that if I have mapped the MMIO, I can't use I/OAT DMA to
transfer to this region any more?
I can use memcpy to copy data, but it consumes a lot of CPU, as PCI
access is too slow.
If I could use I/OAT DMA and the async_tx API to do the job, the
performance should be improved.
Thanks
* Re: Can I/OAT DMA engine access PCI MMIO space
2011-05-03 6:31 ` 康剑斌
@ 2011-05-03 15:58 ` Dan Williams
2011-05-05 8:45 ` 康剑斌
0 siblings, 1 reply; 9+ messages in thread
From: Dan Williams @ 2011-05-03 15:58 UTC (permalink / raw)
To: 康剑斌; +Cc: Koul, Vinod, linux-kernel@vger.kernel.org
On 5/2/2011 11:31 PM, 康剑斌 wrote:
>
>>> Yes, I used 'ioremap_nocache' to map the IO memory, and I can use
>>> memcpy to copy data to this region. The async_tx layer should be
>>> correctly configured, since I can use async_memcpy to copy data
>>> between different system memory addresses.
>> Then you should be using memcpy_toio() and friends
>>
> Do you mean that if I have mapped the MMIO, I can't use I/OAT DMA to
> transfer to this region any more?
> I can use memcpy to copy data, but it consumes a lot of CPU, as PCI
> access is too slow.
> If I could use I/OAT DMA and the async_tx API to do the job, the
> performance should be improved.
> Thanks
The async_tx API only supports memory-to-memory transfers. To write to
MMIO space with ioatdma you would need a custom method, like the
dma-slave support in other drivers, to program the descriptors with the
physical MMIO bus address.
--
Dan
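[Editor's note: a sketch of what "programming the descriptors with the physical MMIO bus address" could look like using the raw dmaengine client API of that era (circa 2011, hence DMA_SUCCESS rather than the later DMA_COMPLETE). The function name, BAR index, and omitted error handling are assumptions; this bypasses async_tx entirely, as Dan describes.]

```c
#include <linux/dmaengine.h>
#include <linux/pci.h>

/* Hypothetical sketch: drive a DMA_MEMCPY-capable channel directly,
 * handing it the BAR's bus address as the destination. 'chan' is an
 * already-requested channel and 'src' an already dma_map'ed source
 * buffer. */
static int dma_to_bar(struct dma_chan *chan, struct pci_dev *pdev,
		      dma_addr_t src, size_t len)
{
	/* The physical/bus address of the BAR, not the ioremap cookie. */
	dma_addr_t dst = pci_resource_start(pdev, 2);
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
						  DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	cookie = tx->tx_submit(tx);
	dma_async_issue_pending(chan);
	return dma_sync_wait(chan, cookie) == DMA_SUCCESS ? 0 : -EIO;
}
```

This is essentially what the original poster reports doing successfully two mails later.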
* Re: Can I/OAT DMA engine access PCI MMIO space
2011-05-03 2:21 ` 康剑斌
2011-05-03 4:12 ` Koul, Vinod
@ 2011-05-03 16:05 ` Dan Williams
1 sibling, 0 replies; 9+ messages in thread
From: Dan Williams @ 2011-05-03 16:05 UTC (permalink / raw)
To: 康剑斌; +Cc: Koul, Vinod, linux-kernel@vger.kernel.org
On 5/2/2011 7:21 PM, 康剑斌 wrote:
>
>>> I am trying to use ioatdma to copy data from system memory to PCI MMIO space:
>>> If dst points to system memory, the operation succeeds.
>>> But if dst points to PCI MMIO space, it fails with a kernel oops.
>>> It seems the code:
>>> BUG_ON(is_ioat_bug(chanerr));
>>> in drivers/dma/ioat/dma_v3.c, line 365 causes the oops.
>>> Is there any way to access PCI MMIO space using I/OAT?
>>> The datasheet says that I/OAT supports MMIO access.
>> Did you map the IO memory in the kernel using ioremap and friends first?
>>
> Yes, I used 'ioremap_nocache' to map the IO memory, and I can use
> memcpy to copy data to this region. The async_tx layer should be
> correctly configured, since I can use async_memcpy to copy data
> between different system memory addresses.
>
ioremap maps MMIO space for the CPU, not for bus-mastering devices. You
need to program the physical address into the descriptor.
--
Dan
* Re: Can I/OAT DMA engine access PCI MMIO space
2011-05-03 15:58 ` Dan Williams
@ 2011-05-05 8:45 ` 康剑斌
2011-05-05 15:11 ` Dan Williams
0 siblings, 1 reply; 9+ messages in thread
From: 康剑斌 @ 2011-05-05 8:45 UTC (permalink / raw)
To: Dan Williams; +Cc: Koul, Vinod, linux-kernel@vger.kernel.org
On 2011-05-03 23:58, Dan Williams wrote:
>
>> Do you mean that if I have mapped the MMIO, I can't use I/OAT DMA to
>> transfer to this region any more?
>> I can use memcpy to copy data, but it consumes a lot of CPU, as PCI
>> access is too slow.
>> If I could use I/OAT DMA and the async_tx API to do the job, the
>> performance should be improved.
>> Thanks
>
>
> The async_tx API only supports memory-to-memory transfers. To write
> to MMIO space with ioatdma you would need a custom method, like the
> dma-slave support in other drivers, to program the descriptors with
> the physical MMIO bus address.
>
> --
> Dan
Thanks.
I read the PCI BAR address directly and programmed it into the
descriptors, and ioatdma works.
One problem remains: when a PCI transfer fails (using an NTB connected
to another system, when the remote system powers down),
ioatdma causes a kernel oops.
BUG_ON(is_ioat_bug(chanerr));
in drivers/dma/ioat/dma_v3.c, line 365
It seems that the HW reports an 'IOAT_CHANERR_DEST_ADDR_ERR', and the
driver can't recover from this situation.
What does dma-slave mean? Is it the same as the DMA_SLAVE flag in other
DMA drivers?
* Re: Can I/OAT DMA engine access PCI MMIO space
2011-05-05 8:45 ` 康剑斌
@ 2011-05-05 15:11 ` Dan Williams
0 siblings, 0 replies; 9+ messages in thread
From: Dan Williams @ 2011-05-05 15:11 UTC (permalink / raw)
To: 康剑斌
Cc: Koul, Vinod, linux-kernel@vger.kernel.org, Jiang, Dave
[ adding Dave ]
On 5/5/2011 1:45 AM, 康剑斌 wrote:
> Thanks.
> I read the PCI BAR address directly and programmed it into the
> descriptors, and ioatdma works.
> One problem remains: when a PCI transfer fails (using an NTB connected
> to another system, when the remote system powers down),
> ioatdma causes a kernel oops.
>
> BUG_ON(is_ioat_bug(chanerr));
> in drivers/dma/ioat/dma_v3.c, line 365
>
> It seems that the HW reports an 'IOAT_CHANERR_DEST_ADDR_ERR', and the
> driver can't recover from this situation.
Ah ok, this is expected with the current upstream ioatdma driver. The
driver assumes that all transfers are mem-to-mem (ASYNC_TX_DMA or
NET_DMA) and that a destination address error is a fatal error (similar
to a kernel page fault).
With NTB, where failures are expected, the driver would need to be
modified to expect the error, recover from it, and report it to the
application.
> What does dma-slave mean? Is it the same as the DMA_SLAVE flag in other
> DMA drivers?
Yes, DMA_SLAVE is the generic framework for associating a DMA offload
device with an MMIO peripheral.
--
Dan
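[Editor's note: a sketch of the DMA_SLAVE pattern Dan refers to, using the dmaengine slave-config interface of that era (hence DMA_TO_DEVICE rather than the later DMA_MEM_TO_DEV). ioatdma does not implement this; the address, bus width, and burst values are placeholders.]

```c
#include <linux/dmaengine.h>

/* Hypothetical sketch: a slave-capable client tells the channel the
 * fixed bus address of the peripheral's FIFO/window, so the driver can
 * program that address into its descriptors. */
static int configure_slave(struct dma_chan *chan, dma_addr_t mmio_bus_addr)
{
	struct dma_slave_config cfg = {
		.direction = DMA_TO_DEVICE,       /* mem -> peripheral */
		.dst_addr = mmio_bus_addr,        /* bus address of window */
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		.dst_maxburst = 16,
	};

	return dmaengine_slave_config(chan, &cfg);
}
```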
end of thread, other threads:[~2011-05-05 15:11 UTC | newest]
Thread overview: 9+ messages
2011-04-29 10:13 Can I/OAT DMA engine access PCI MMIO space 康剑斌
2011-05-02 6:04 ` Koul, Vinod
2011-05-03 2:21 ` 康剑斌
2011-05-03 4:12 ` Koul, Vinod
2011-05-03 6:31 ` 康剑斌
2011-05-03 15:58 ` Dan Williams
2011-05-05 8:45 ` 康剑斌
2011-05-05 15:11 ` Dan Williams
2011-05-03 16:05 ` Dan Williams