netdev.vger.kernel.org archive mirror
From: Marcin Wojtas <mw@semihalf.com>
To: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-kernel@vger.kernel.org,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	netdev@vger.kernel.org,
	"Thomas Petazzoni" <thomas.petazzoni@free-electrons.com>,
	"Andrew Lunn" <andrew@lunn.ch>,
	"Russell King - ARM Linux" <linux@arm.linux.org.uk>,
	"Jason Cooper" <jason@lakedaemon.net>,
	"Yair Mahalalel" <myair@marvell.com>,
	"Grzegorz Jaszczyk" <jaz@semihalf.com>,
	"Simon Guinot" <simon.guinot@sequanux.org>,
	"Evan Wang" <xswang@marvell.com>,
	nadavh@marvell.com, "Lior Amsalem" <alior@marvell.com>,
	"Tomasz Nowicki" <tn@semihalf.com>,
	"Gregory Clément" <gregory.clement@free-electrons.com>,
	nitroshift@yahoo.com, "David S. Miller" <davem@davemloft.net>,
	"Sebastian Hesselbarth" <sebastian.hesselbarth@gmail.com>
Subject: Re: [PATCH 00/13] mvneta Buffer Management and enhancements
Date: Sun, 29 Nov 2015 14:21:35 +0100	[thread overview]
Message-ID: <CAPv3WKc4zSmjtw_FeePCVCaq_aUubbCw_ggtxgxs0-7LwL7fHQ@mail.gmail.com> (raw)
In-Reply-To: <5655FF36.20202@gmail.com>

Hi Florian,

>
> Looking at your patches, it was not entirely clear to me how the buffer
> manager on these Marvell SoCs work, but other networking products have
> something similar, like Broadcom's Cable Modem SoCs (BCM33xx) FPM, and
> maybe Freescale's FMAN/DPAA seems to do something similar.
>
> Does the buffer manager allocation work by giving you a reference/token
> to a buffer as opposed to its address? If that is the case, it would be
> good to design support for such hardware in a way that it can be used by
> more drivers.

It does not operate on references/tokens, but on buffer pointers (physical
addresses). The pool is a ring, so you cannot control which buffer will be
taken at a given moment.

>
> Eric Dumazet suggested a while ago to me that you could get abstract
> such allocation using hardware-assisted buffer allocation by either
> introducing a new mm zone (instead of ZONE_NORMAL/DMA/HIGHMEM etc.), or
> using a different NUMA node id, such that SKB allocation and freeing
> helpers could deal with the specifics, and your networking stack and
> driver would be mostly unaware of the buffer manager underlying
> implementation. The purpose would be to get a 'struct page' reference to
> your buffer pool allocation object, so it becomes mostly transparent to
> other areas of the kernel, and you could further specialize everything
> that needs to be based on this node id or zone.

As this buffer manager is tightly coupled with the NIC (please see
below) and the solution is very platform-specific, I'm not sure such a
generic mechanism, parallel to DMA, wouldn't be over-engineering.

>
> Finally, these hardware-assisted allocation schemes typically work very
> well when there is a forwarding/routing workload involved, because you
> can easily steal packets and SKBs from the network stack, but that does
> not necessarily play nicely with host-terminated/initiated traffic which
> wants to have good feedback on what's happening at the NIC level
> (queueing, buffering, etc.).

Sure, I can imagine applications developed on top of the proposed
patches, but I'm not sure whether things like cutting the network stack
in half should be part of the initial support.

>
>>
>> Known issues:
>> - problems with obtaining all mapped buffers from internal SRAM, when
>> destroying the buffer pointer pool
>> - problems with unmapping chunk of SRAM during driver removal
>> Above do not have an impact on the operation, as they are called during
>> driver removal or in error path.
>
> Humm, what is the reason for using the on-chip SRAM here, is it because
> that's the only storage location the Buffer Manager can allocate from,
> or is it because it is presumably faster or with constant access times
> than DRAM? Would be nice to explain a bit more in details how the buffer
> manager works and its interfacing with the network controllers.

Each pool of pointers is a ring maintained in DRAM (called the buffer
pointers' pool external memory, BPPE). SRAM (called the buffer pointers'
pool internal memory, BPPI) ensures lower latency, but is also the only
way to allocate/fetch buffer pointers from the DRAM ring. Transfers
between the two memories are controlled by the buffer manager itself.

In the beginning, the external pool has to be filled with the desired
number of pointers. The NIC (controlled by the mvneta driver) has to be
told which pools it can use for longer and shorter packets and their
buffer sizes, and the SRAM physical address has to be written to one of
the NETA registers. Moreover, in order to enable direct access between
NETA and the buffer manager's SRAM, special Marvell-specific settings
have to be configured (the so-called opening of an MBUS window).

After ingress is enabled, an incoming packet is automatically placed in
the next-to-be-used buffer from the buffer manager's resources, and the
controller updates the NIC's descriptor with the pool number and buffer
address. Once the packet is processed, a new buffer has to be allocated
and its address written to SRAM; this way the pool of pointers gets
refilled.

>
> Can I use the buffer manager with other peripherals as well? Like if I
> wanted to do zero-copy or hardware-assisted memcpy DMA, would that be a
> suitable scheme?

Other peripherals cannot access the SRAM directly; they only have
DMA-based access to DRAM. If one wanted to access buffers via SRAM from
other drivers, it would have to be done with CPU read/write operations.
Moreover, I see a limitation in the lack of control over the current
buffer index.

Best regards,
Marcin


Thread overview: 52+ messages
2015-11-22  7:53 [PATCH 00/13] mvneta Buffer Management and enhancements Marcin Wojtas
2015-11-22  7:53 ` [PATCH 01/13] net: mvneta: add configuration for MBUS windows access protection Marcin Wojtas
2015-11-25 18:19   ` Gregory CLEMENT
2015-11-22  7:53 ` [PATCH 02/13] net: mvneta: enable IP checksum with jumbo frames for Armada 38x on Port0 Marcin Wojtas
2015-11-22 20:00   ` Arnd Bergmann
2015-11-22 21:04     ` Marcin Wojtas
2015-11-22 21:32       ` Arnd Bergmann
2015-11-22 21:55         ` Marcin Wojtas
2015-11-22  7:53 ` [PATCH 03/13] net: mvneta: fix bit assignment in MVNETA_RXQ_CONFIG_REG Marcin Wojtas
2015-11-25 18:25   ` Gregory CLEMENT
2015-11-22  7:53 ` [PATCH 04/13] net: mvneta: enable suspend/resume support Marcin Wojtas
2015-11-25 18:35   ` Gregory CLEMENT
2015-11-26 17:39     ` Marcin Wojtas
2015-11-22  7:53 ` [PATCH 05/13] net: mvneta: add xmit_more support Marcin Wojtas
2015-11-22  7:53 ` [PATCH 06/13] net: mvneta: enable mixed egress processing using HR timer Marcin Wojtas
2015-11-26 16:45   ` Simon Guinot
2015-11-30 15:57     ` Marcin Wojtas
2015-12-02 10:03       ` Marcin Wojtas
2015-11-22  7:53 ` [PATCH 07/13] bus: mvebu-mbus: provide api for obtaining IO and DRAM window information Marcin Wojtas
2015-11-22 20:02   ` Arnd Bergmann
2015-11-22 21:24     ` Marcin Wojtas
2015-11-23 16:58       ` Arnd Bergmann
2015-11-22  7:53 ` [PATCH 08/13] ARM: mvebu: enable SRAM support in mvebu_v7_defconfig Marcin Wojtas
2016-02-16 10:51   ` Gregory CLEMENT
2015-11-22  7:53 ` [PATCH 09/13] net: mvneta: bm: add support for hardware buffer management Marcin Wojtas
2015-11-22  7:53 ` [PATCH 10/13] ARM: mvebu: add buffer manager nodes to armada-38x.dtsi Marcin Wojtas
2015-11-22  9:41   ` Sergei Shtylyov
2015-11-22  7:53 ` [PATCH 11/13] ARM: mvebu: enable buffer manager support on Armada 38x boards Marcin Wojtas
2015-11-22  7:53 ` [PATCH 12/13] ARM: mvebu: add buffer manager nodes to armada-xp.dtsi Marcin Wojtas
2015-11-22  7:53 ` [PATCH 13/13] ARM: mvebu: enable buffer manager support on Armada XP boards Marcin Wojtas
2015-11-22 20:06 ` [PATCH 00/13] mvneta Buffer Management and enhancements Arnd Bergmann
2015-11-22 21:37   ` Marcin Wojtas
2015-11-24 16:22 ` David Miller
2015-11-24 16:47   ` Marcin Wojtas
2015-11-25 18:34 ` Florian Fainelli
2015-11-29 13:21   ` Marcin Wojtas [this message]
2015-11-30  2:02     ` David Miller
2015-11-30 14:13       ` Marcin Wojtas
2015-11-30 16:25         ` David Miller
2015-12-02  8:26           ` Marcin Wojtas
2015-12-04 20:15             ` Florian Fainelli
2015-12-08 10:56               ` Marcin Wojtas
2015-12-08 16:57                 ` David Miller
2015-11-30 17:16 ` Gregory CLEMENT
2015-11-30 19:53   ` Marcin Wojtas
2015-12-01 13:12     ` Gregory CLEMENT
2015-12-01 21:40       ` Marcin Wojtas
2015-12-01 23:34         ` Marcin Wojtas
2015-12-02 10:40           ` Gregory CLEMENT
2015-12-02 16:21             ` Gregory CLEMENT
2015-12-02 22:15               ` Marcin Wojtas
2015-12-02 22:56                 ` Gregory CLEMENT
