* Advice on network driver design
From: Felix Radensky @ 2011-02-19 13:37 UTC
To: netdev@vger.kernel.org
Hi,
I'm in the process of designing a network driver for custom
hardware and would like to get some advice from Linux network
gurus.

The host platform is a Freescale P2020. The custom hardware is an
FPGA with several TX FIFOs, a single RX FIFO and a set of registers.
The FPGA is connected to the CPU via PCIe. The host CPU's DMA
controller is used to move packets to and from the FIFOs. Each FIFO
has its own set of events that generate interrupts, and these can be
enabled and disabled. A status register reflects the current state of
the events; a bit in the status register is cleared by the FPGA once
the corresponding event has been handled. Reads and writes to the
status register have no effect on its contents.
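
To make the register interface concrete, here is a rough sketch of the
interrupt handler I have in mind. The register offset, bit layout and
private structure (FPGA_STATUS, FPGA_RX_READY, FPGA_TX_DONE, struct
fpga_priv) are invented for illustration and are not the real FPGA
definitions.

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#define FPGA_STATUS        0x00               /* hypothetical status register offset */
#define FPGA_RX_READY      (1u << 0)          /* hypothetical RX event bit */
#define FPGA_TX_DONE(i)    (1u << (1 + (i)))  /* hypothetical per-TX-FIFO event bits */
#define FPGA_NUM_TX_FIFOS  4                  /* hypothetical number of TX FIFOs */

struct fpga_priv {
        void __iomem *regs;                   /* mapped PCIe register window */
        struct work_struct rx_work;
        struct work_struct tx_work[FPGA_NUM_TX_FIFOS];
};

static irqreturn_t fpga_isr(int irq, void *dev_id)
{
        struct fpga_priv *priv = dev_id;
        u32 status = ioread32(priv->regs + FPGA_STATUS);
        int i;

        if (!status)
                return IRQ_NONE;              /* shared line, not our event */

        if (status & FPGA_RX_READY)
                schedule_work(&priv->rx_work);

        for (i = 0; i < FPGA_NUM_TX_FIFOS; i++)
                if (status & FPGA_TX_DONE(i))
                        schedule_work(&priv->tx_work[i]);

        /* The FPGA clears the status bits itself once the event has been
         * handled, so no write-back to the status register is done here. */
        return IRQ_HANDLED;
}
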
The device driver should support 80 Mbit/s of traffic in each direction.

So far I have the TX side working. I'm using the Linux dmaengine API
to transfer packets to the FIFOs. The DMA completion interrupt is
handled by a per-FIFO work queue.
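
For reference, the submission path boils down to something like the
sketch below, assuming a memcpy-capable dmaengine channel. The function
names are made up, the DMA mapping of the skb is not shown, and error
handling is trimmed; this is not the actual code.

#include <linux/dmaengine.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/workqueue.h>

/* Hypothetical completion callback: runs in the DMA driver's callback
 * context and hands the completed transfer to the per-FIFO work queue. */
static void fpga_tx_dma_done(void *param)
{
        struct work_struct *fifo_work = param;

        schedule_work(fifo_work);
}

static int fpga_tx_submit(struct dma_chan *chan, dma_addr_t fifo_dst,
                          dma_addr_t skb_src, size_t len,
                          struct work_struct *fifo_work)
{
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;

        /* ask the DMA driver to copy the packet into the TX FIFO window */
        tx = chan->device->device_prep_dma_memcpy(chan, fifo_dst, skb_src,
                                                  len, DMA_PREP_INTERRUPT);
        if (!tx)
                return -ENOMEM;

        tx->callback = fpga_tx_dma_done;
        tx->callback_param = fifo_work;

        cookie = tx->tx_submit(tx);
        if (dma_submit_error(cookie))
                return -EIO;

        dma_async_issue_pending(chan);        /* start the transfer */
        return 0;
}

The work item scheduled from the callback is where the skb would be
unmapped and freed.
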
My question is about RX. Would such a design benefit from NAPI?
If my understanding of NAPI is correct, it runs in softirq context,
so I cannot do any DMA work in dev->poll(). If I were to use NAPI,
I should probably disable RX interrupts, do all the DMA work in some
work queue, keep the RX packets on a list and only then schedule
dev->poll() to deliver them. Is that correct?
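
In other words, something along the lines of the sketch below, where
the work queue does the DMA from the RX FIFO and queues completed skbs,
and the NAPI poll routine only delivers them to the stack. All names
(struct fpga_rx, fpga_rx_dma_complete, fpga_poll) are hypothetical.

#include <linux/etherdevice.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct fpga_rx {
        struct net_device *ndev;
        struct napi_struct napi;
        struct sk_buff_head done_queue;       /* skbs whose DMA has completed */
};

/* Called from the RX work queue once the DMA from the RX FIFO into the
 * skb has completed. */
static void fpga_rx_dma_complete(struct fpga_rx *rx, struct sk_buff *skb)
{
        skb_queue_tail(&rx->done_queue, skb);
        napi_schedule(&rx->napi);             /* let NAPI deliver it in softirq */
}

/* NAPI poll: no DMA work here, just hand completed packets to the stack. */
static int fpga_poll(struct napi_struct *napi, int budget)
{
        struct fpga_rx *rx = container_of(napi, struct fpga_rx, napi);
        struct sk_buff *skb;
        int done = 0;

        while (done < budget && (skb = skb_dequeue(&rx->done_queue))) {
                skb->protocol = eth_type_trans(skb, rx->ndev);
                netif_receive_skb(skb);
                done++;
        }

        if (done < budget) {
                napi_complete(napi);
                /* re-enable the RX event interrupt in the FPGA here */
        }
        return done;
}

/* In probe:
 *        skb_queue_head_init(&rx->done_queue);
 *        netif_napi_add(ndev, &rx->napi, fpga_poll, 64);
 */

The sk_buff_head's built-in lock would take care of the synchronization
between the work queue and the poll routine.
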
Any other advice on how to write an efficient driver for this
hardware is most welcome. I can influence the FPGA design to some
degree, so if you think the FPGA should be changed to improve things,
please let me know.
Thanks a lot in advance.
Felix.
* Re: Advice on network driver design
From: Micha Nelissen @ 2011-02-20 18:14 UTC
To: Felix Radensky; +Cc: netdev@vger.kernel.org
Felix Radensky wrote:
> The host platform is a Freescale P2020. The custom hardware is an
> FPGA with several TX FIFOs, a single RX FIFO and a set of registers.
Wouldn't it be easier to use the in-SoC Ethernet controllers?
Micha
* Re: Advice on network driver design
From: arnd @ 2011-02-20 19:13 UTC
To: Felix Radensky; +Cc: netdev@vger.kernel.org
On Saturday 19 February 2011 14:37:43 Felix Radensky wrote:
> Hi,
>
> I'm in the process of designing a network driver for custom
> hardware and would like to get some advice from Linux network
> gurus.
>
> The host platform is a Freescale P2020. The custom hardware is an
> FPGA with several TX FIFOs, a single RX FIFO and a set of registers.
> The FPGA is connected to the CPU via PCIe. The host CPU's DMA
> controller is used to move packets to and from the FIFOs. Each FIFO
> has its own set of events that generate interrupts, and these can be
> enabled and disabled. A status register reflects the current state of
> the events; a bit in the status register is cleared by the FPGA once
> the corresponding event has been handled. Reads and writes to the
> status register have no effect on its contents.
>
> The device driver should support 80 Mbit/s of traffic in each direction.
>
> So far I have the TX side working. I'm using the Linux dmaengine API
> to transfer packets to the FIFOs. The DMA completion interrupt is
> handled by a per-FIFO work queue.
>
> My question is about RX. Would such a design benefit from NAPI?
> If my understanding of NAPI is correct, it runs in softirq context,
> so I cannot do any DMA work in dev->poll(). If I were to use NAPI,
> I should probably disable RX interrupts, do all the DMA work in some
> work queue, keep the RX packets on a list and only then schedule
> dev->poll() to deliver them. Is that correct?
>
> Any other advice on how to write an efficient driver for this
> hardware is most welcome. I can influence the FPGA design to some
> degree, so if you think the FPGA should be changed to improve things,
> please let me know.
There are ongoing discussions about using virtio for this kind of
connection; see http://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg49294.html
for an archive.
When you use virtio as the base, you can use the regular virtio-net
driver or any other virtio high-level driver on top.
Arnd
* Re: Advice on network driver design
From: Felix Radensky @ 2011-02-20 21:01 UTC
To: Micha Nelissen; +Cc: netdev@vger.kernel.org
Hi Micha
On 02/20/2011 08:14 PM, Micha Nelissen wrote:
> Felix Radensky wrote:
>> The host platform is a Freescale P2020. The custom hardware is an
>> FPGA with several TX FIFOs, a single RX FIFO and a set of registers.
>
> Wouldn't it be easier to use the in-SoC Ethernet controllers?
The FPGA speaks a satellite protocol to the outside world, so the
target box is essentially an Ethernet-to-satellite router.
Felix.
* Re: Advice on network driver design
From: Felix Radensky @ 2011-02-21 16:25 UTC
To: arnd; +Cc: netdev@vger.kernel.org
Hi Arnd,
On 02/20/2011 09:13 PM, arnd@arndb.de wrote:
> There are ongoing discussions about using virtio for this kind of
> connection; see http://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg49294.html
> for an archive.
>
> When you use virtio as the base, you can use the regular virtio-net
> driver or any other virtio high-level driver on top.
>
> Arnd
>
Thanks, I'll take a look at virtio. Some people seem to think it's
overkill for this task.
Felix.