From: Andrew Lunn <andrew@lunn.ch>
To: stefanc@marvell.com
Cc: netdev@vger.kernel.org, thomas.petazzoni@bootlin.com,
davem@davemloft.net, nadavh@marvell.com, ymarkman@marvell.com,
linux-kernel@vger.kernel.org, kuba@kernel.org,
linux@armlinux.org.uk, mw@semihalf.com,
rmk+kernel@armlinux.org.uk, atenart@kernel.org,
devicetree@vger.kernel.org, robh+dt@kernel.org,
sebastian.hesselbarth@gmail.com, gregory.clement@bootlin.com,
linux-arm-kernel@lists.infradead.org
Subject: Re: [RESEND PATCH v8 net-next 03/15] net: mvpp2: add CM3 SRAM memory map
Date: Sun, 7 Feb 2021 17:41:19 +0100
Message-ID: <YCAYL+jEVijKQqaa@lunn.ch>
In-Reply-To: <1612685964-21890-4-git-send-email-stefanc@marvell.com>
On Sun, Feb 07, 2021 at 10:19:12AM +0200, stefanc@marvell.com wrote:
> From: Stefan Chulski <stefanc@marvell.com>
>
> This patch adds CM3 memory map and CM3 read/write callbacks.
> No functionality changes.
>
> Signed-off-by: Stefan Chulski <stefanc@marvell.com>
> ---
> drivers/net/ethernet/marvell/mvpp2/mvpp2.h | 7 +++
> drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 63 +++++++++++++++++++-
> 2 files changed, 67 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> index 6bd7e40..aec9179 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
> @@ -748,6 +748,9 @@
> #define MVPP2_TX_FIFO_THRESHOLD(kb) \
> ((kb) * 1024 - MVPP2_TX_FIFO_THRESHOLD_MIN)
>
> +/* MSS Flow control */
> +#define MSS_SRAM_SIZE 0x800
> +
> /* RX buffer constants */
> #define MVPP2_SKB_SHINFO_SIZE \
> SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
> @@ -925,6 +928,7 @@ struct mvpp2 {
> /* Shared registers' base addresses */
> void __iomem *lms_base;
> void __iomem *iface_base;
> + void __iomem *cm3_base;
>
> /* On PPv2.2, each "software thread" can access the base
> * register through a separate address space, each 64 KB apart
> @@ -996,6 +1000,9 @@ struct mvpp2 {
>
> /* page_pool allocator */
> struct page_pool *page_pool[MVPP2_PORT_MAX_RXQ];
> +
> + /* CM3 SRAM pool */
> + struct gen_pool *sram_pool;
> };
>
> struct mvpp2_pcpu_stats {
> diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> index a07cf60..307f9fd 100644
> --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
> @@ -25,6 +25,7 @@
> #include <linux/of_net.h>
> #include <linux/of_address.h>
> #include <linux/of_device.h>
> +#include <linux/genalloc.h>
> #include <linux/phy.h>
> #include <linux/phylink.h>
> #include <linux/phy/phy.h>
> @@ -6846,6 +6847,44 @@ static int mvpp2_init(struct platform_device *pdev, struct mvpp2 *priv)
> return 0;
> }
>
> +static int mvpp2_get_sram(struct platform_device *pdev,
> + struct mvpp2 *priv)
> +{
> + struct device_node *dn = pdev->dev.of_node;
> + static bool defer_once;
> + struct resource *res;
> +
> + if (has_acpi_companion(&pdev->dev)) {
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
> + if (!res) {
> + dev_warn(&pdev->dev, "ACPI is too old, Flow control not supported\n");
> + return 0;
> + }
> + priv->cm3_base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(priv->cm3_base))
> + return PTR_ERR(priv->cm3_base);
> + } else {
> + priv->sram_pool = of_gen_pool_get(dn, "cm3-mem", 0);
> + if (!priv->sram_pool) {
> + if (!defer_once) {
> + defer_once = true;
> + /* Try defer once */
> + return -EPROBE_DEFER;
> + }
> + dev_warn(&pdev->dev, "DT is too old, Flow control not supported\n");
> + return -ENOMEM;
> + }
> + /* cm3_base allocated with offset zero into the SRAM since mapping size
> + * is equal to requested size.
> + */
> + priv->cm3_base = (void __iomem *)gen_pool_alloc(priv->sram_pool,
> + MSS_SRAM_SIZE);
> + if (!priv->cm3_base)
> + return -ENOMEM;
> + }
For v2 I asked:
> I'm wondering if using a pool even makes sense. The ACPI case just
> ioremap()s the memory region. Either this memory is dedicated, and
> then there is no need to use a pool, or the memory is shared, and at
> some point the ACPI code is going to run into problems when some
> other driver also wants access.
There was never an answer to this.
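If the memory really is dedicated to flow control, the DT path could
map it directly, just like the ACPI path above. Something along these
lines (only a rough sketch, the helper name is made up, and it assumes
the "cm3-mem" phandle points at a node whose reg covers a region owned
solely by this driver):

static int mvpp2_map_cm3_direct(struct platform_device *pdev,
				struct mvpp2 *priv)
{
	struct device_node *sram_np;
	struct resource res;
	int err;

	/* Old DT without the property: no flow control, not an error */
	sram_np = of_parse_phandle(pdev->dev.of_node, "cm3-mem", 0);
	if (!sram_np)
		return 0;

	err = of_address_to_resource(sram_np, 0, &res);
	of_node_put(sram_np);
	if (err)
		return err;

	/* Map the whole SRAM, mirroring what the ACPI branch does */
	priv->cm3_base = devm_ioremap_resource(&pdev->dev, &res);
	return PTR_ERR_OR_ZERO(priv->cm3_base);
}

If on the other hand the SRAM is shared with something else, the pool
is the right tool, but then the ACPI path needs the same treatment.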
Also, the defer_once stuff is odd. You don't see any other driver do
this; the core decides when to give up probing a device. This is
partially an API problem: of_gen_pool_get() gives you no idea why it
failed. Is the property missing, or has the SRAM simply not probed
yet? If the answer to my question is yes and a pool does make sense,
it would be good to add an of_gen_pool_get_optional() which returns
ERR_PTR(-EPROBE_DEFER) if the property is in DT but the pool is not
yet available, NULL if the property does not exist, and a valid
pointer if everything goes well.
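A rough sketch of what that could look like, built on the existing
of_gen_pool_get() (the name and exact semantics are just a proposal,
not an existing API):

static struct gen_pool *of_gen_pool_get_optional(struct device_node *np,
						 const char *propname,
						 int index)
{
	struct gen_pool *pool;

	/* Property absent: the feature is simply not described in DT */
	if (!of_find_property(np, propname, NULL))
		return NULL;

	pool = of_gen_pool_get(np, propname, index);
	if (!pool)
		/* Property present, but the SRAM provider has not probed yet */
		return ERR_PTR(-EPROBE_DEFER);

	return pool;
}

The driver could then drop the defer_once hack: return the error on
IS_ERR() (including -EPROBE_DEFER), warn on NULL, and carry on
otherwise.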
Andrew