public inbox for linux-kernel@vger.kernel.org
From: Arnd Bergmann <arnd@arndb.de>
To: Ravi Patel <rapatel@apm.com>
Cc: Greg KH <gregkh@linuxfoundation.org>, Loc Ho <lho@apm.com>,
	davem@davemloft.net, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	Jon Masters <jcm@redhat.com>, "patches@apm.com" <patches@apm.com>,
	Keyur Chudgar <kchudgar@apm.com>
Subject: Re: [PATCH V2 0/4] misc: xgene: Add support for APM X-Gene SoC Queue Manager/Traffic Manager
Date: Sun, 12 Jan 2014 22:19:11 +0100	[thread overview]
Message-ID: <201401122219.11593.arnd@arndb.de> (raw)
In-Reply-To: <CAN1v_PuySRzi6r9aT_SJrQRMBh9gj-5yXOOY1QpyZ6V4wXyZCA@mail.gmail.com>

On Friday 10 January 2014, Ravi Patel wrote:

> Do you want any further clarification or document related to QMTM.
> We want to make sure everyone is on same page, understand and
> conclude upon that QMTM is a device and not a bus or a dma
> engine.

I have a much better understanding now, but there are still a few open
questions from my side. Let me try to explain in my own words what I
think is the relevant information (part of this is still guessing).
It took me a while to figure out what it does from your description,
and then some more time to see what it's actually good for (as
opposed to adding complexity).

Please confirm or correct the individual statements in this
description:

The QMTM serves as a relay for short (a few bytes) messages between
the OS software and various slave hardware blocks on the SoC.
The messages are typically, but not always, DMA descriptors that the
slave device uses to start bus master transactions, or to notify
software about the completion of a DMA transaction.

The message format is specific to the slave device and the QMTM
only understands the common header of the message.

OS software sees the messages in cache-coherent memory; it requires
no cache flushes or MMIO accesses for inbound messages, and only a
single posted MMIO write for outbound messages.

The queues are likely designed to be per-thread and don't require
software-side locking.
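If my guess about the lockless per-thread queue model is right, the
outbound path would conceptually look like the sketch below (a
user-space simulation of the idea; the structure layout, names, and
the doorbell variable standing in for the MMIO register are all my
invention, not the actual QMTM programming model):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical 32-byte QMTM message slot; only the concept matters. */
struct qmtm_msg {
	uint32_t hdr;         /* common header understood by the QMTM */
	uint8_t  payload[28]; /* slave-specific descriptor data */
};

#define QDEPTH 16 /* power of two for cheap index wrap-around */

struct qmtm_queue {
	struct qmtm_msg ring[QDEPTH]; /* coherent memory, CPU-visible */
	unsigned int head;            /* producer index, owned by one thread */
	volatile uint32_t doorbell;   /* stand-in for the single posted MMIO write */
};

/* Enqueue one outbound message: fill the next slot in coherent
 * memory, then do a single doorbell write.  No locking and no cache
 * maintenance, matching the description above. */
static void qmtm_enqueue(struct qmtm_queue *q, const struct qmtm_msg *m)
{
	q->ring[q->head & (QDEPTH - 1)] = *m;
	q->head++;
	q->doorbell = q->head; /* on real hardware: writel(head, mmio_reg) */
}
```

Because each queue has a single producer thread, the head index never
needs atomic updates; only the ordering between the slot write and the
doorbell write would matter on real hardware.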

For outbound messages, the QMTM is the bus master of a device-to-device
DMA transaction that gets started once a message is queued and the
device has signaled that it is ready for receiving it. The QMTM needs
to know the bus address of the device as well as a slave ID for
the signal pin.
For inbound messages, the QMTM slave initiates a bus master transaction
and needs to know the bus address of its QMTM port, while the QMTM
needs to know only the slave ID that is associated with the queue.

In addition to those hardware properties, the QMTM driver needs to
set up a memory buffer for the message queue as seen by the CPU,
and needs to tell the QMTM its location as well as some other
properties such as the message length.
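If that is right, the per-queue setup done by the driver would boil
down to handing the hardware a small set of parameters, something like
the following (entirely made-up field names, just to summarize the
properties listed above):

```c
#include <stdint.h>

/* Hypothetical per-queue configuration; none of these names come
 * from the actual driver, they just collect the properties the
 * QMTM apparently has to be told about. */
struct qmtm_qconfig {
	uint64_t ring_base;  /* bus address of the ring in coherent memory */
	uint32_t num_msgs;   /* number of message slots in the ring */
	uint32_t msg_len;    /* slave-specific message length in bytes */
	uint32_t slave_id;   /* identifies the slave's signal pin */
	uint64_t slave_port; /* bus address of the slave's QMTM port */
};

/* Coherent memory the driver would have to allocate for one ring. */
static uint32_t qmtm_ring_bytes(const struct qmtm_qconfig *c)
{
	return c->num_msgs * c->msg_len;
}
```

The slave_id/slave_port pair would cover the outbound case described
above, while inbound queues would presumably need only the slave ID.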

For inbound messages, the QMTM serves a similar purpose as an MSI
controller, ensuring that inbound DMA data has arrived in RAM
before an interrupt is delivered to the CPU and thereby avoiding
the need for an expensive MMIO read to serialize the DMA.

The resources managed by the QMTM are both SoC-global (e.g. bus
bandwidth) and slave-specific (e.g. ethernet bandwidth or buffer space).
Global resource management is performed to prevent one slave
device from monopolizing the system or preventing other slaves
from making forward progress.
Examples of local resource management (I had to think about this
for a long time, but probably some of these are wrong) would be
* balancing between multiple non-busmaster devices connected to
  a dma-engine
* distributing incoming ethernet data to the available CPUs based on
  a flow classifier in the MAC, e.g. by IOV MAC address, VLAN tag
  or even individual TCP connection depending on the NIC's capabilities.
* 802.1p flow control for incoming ethernet data based on the amount
  of data queued up between the MAC and the driver
* interrupt mitigation for both inbound data and outbound completion,
  by delaying the IRQ to the OS until multiple messages have arrived
  or a queue specific amount of time has passed.
* controlling the amount of outbound buffer space per flow to minimize
  buffer-bloat between an ethernet driver and the NIC hardware.
* reordering data from outbound flows based on priority.
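To illustrate the interrupt mitigation point: I would expect a queue
to raise an IRQ only once either a message count or a time threshold
has been crossed, conceptually something like this (a simulation of
the policy I am guessing at, not actual hardware behavior; all names
are invented):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-queue coalescing state: deliver the IRQ once
 * either 'batch' messages have accumulated or 'timeout' ticks have
 * elapsed since the first pending message. */
struct qmtm_coalesce {
	uint32_t batch;   /* message count threshold */
	uint32_t timeout; /* tick threshold */
	uint32_t pending; /* messages queued since the last IRQ */
	uint32_t age;     /* ticks since the first pending message */
};

/* Called on each message arrival, with 'ticks' being the time since
 * the previous event; returns true when the OS should see an IRQ. */
static bool qmtm_msg_arrived(struct qmtm_coalesce *c, uint32_t ticks)
{
	if (c->pending)
		c->age += ticks;
	c->pending++;
	if (c->pending >= c->batch || c->age >= c->timeout) {
		c->pending = 0;
		c->age = 0;
		return true;
	}
	return false;
}
```

A per-queue timeout like this would explain how the hardware can both
batch completions under load and still keep latency bounded for a
mostly idle queue.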

This is basically my current interpretation; I hope I got at least
some of it right this time ;-)

	Arnd


Thread overview: 29+ messages
2013-12-21  2:57 [PATCH V2 0/4] misc: xgene: Add support for APM X-Gene SoC Queue Manager/Traffic Manager Ravi Patel
2013-12-21  2:57 ` [PATCH V2 1/4] Documentation: Add documentation for APM X-Gene SoC Queue Manager/Traffic Manager DTS binding Ravi Patel
2013-12-21 18:52   ` Arnd Bergmann
2013-12-21  2:57 ` [PATCH V2 2/4] misc: xgene: Add base driver for APM X-Gene SoC Queue Manager/Traffic Manager Ravi Patel
2013-12-21 20:04   ` Arnd Bergmann
2013-12-22  1:45     ` Ravi Patel
2013-12-22  6:54       ` Arnd Bergmann
2013-12-21  2:57 ` [PATCH V2 3/4] arm64: boot: dts: Add DTS entries " Ravi Patel
2013-12-21  2:57 ` [PATCH V2 4/4] misc: xgene: Add error handling " Ravi Patel
2013-12-21 20:11 ` [PATCH V2 0/4] misc: xgene: Add support " Arnd Bergmann
2013-12-22  1:00   ` Loc Ho
2013-12-22  7:03     ` Arnd Bergmann
2014-01-04 23:59       ` Ravi Patel
2014-01-05  3:38         ` Greg KH
2014-01-05  5:27           ` Ravi Patel
2014-01-05  5:39           ` Loc Ho
2014-01-05 18:01             ` Greg KH
2014-01-05 20:52               ` Ravi Patel
2014-01-05 18:11         ` Arnd Bergmann
2014-01-05 20:48           ` Ravi Patel
2014-01-10 22:40             ` Ravi Patel
2014-01-12 21:19               ` Arnd Bergmann [this message]
2014-01-13 22:18                 ` Ravi Patel
2014-01-14  6:58                   ` Arnd Bergmann
2014-01-14 15:15                   ` Arnd Bergmann
2014-01-28  0:58                     ` Ravi Patel
2014-01-30 14:35                       ` Arnd Bergmann
2013-12-21 21:06 ` Greg KH
2013-12-21 23:16   ` Ravi Patel
