From: Andrew Lunn <andrew@lunn.ch>
To: Michal Simek <michal.simek@xilinx.com>
Cc: Punnaiah Choudary Kalluri <punnaiah.choudary.kalluri@xilinx.com>,
nicolas.ferre@atmel.com, anirudh@xilinx.com, davem@davemloft.net,
harinik@xilinx.com, kpc528@gmail.com,
kalluripunnaiahchoudary@gmail.com, netdev@vger.kernel.org,
Punnaiah Choudary Kalluri <punnaia@xilinx.com>
Subject: Re: [RFC PATCH 0/2] net: macb: Add mdio driver for accessing multiple phy devices
Date: Mon, 20 Jul 2015 18:23:53 +0200
Message-ID: <20150720162353.GF14842@lunn.ch>
In-Reply-To: <55ACF7FC.4020408@xilinx.com>
On Mon, Jul 20, 2015 at 03:30:36PM +0200, Michal Simek wrote:
> Hi Nicolas,
>
> have you had a time to look at this?
>
> Thanks,
> Michal
>
> On 07/13/2015 06:48 AM, Punnaiah Choudary Kalluri wrote:
> > This patch adds support for designs that have multiple Ethernet MAC
> > controllers and a single MDIO bus connected to multiple PHY devices,
> > i.e., the MDIO lines are wired to one of the MAC controllers, and all
> > the PHY devices are accessed through the PHY maintenance interface of
> > that MAC controller.
> >
> >  ______                   _____
> > |      |                 |PHY0 |
> > | MAC0 |-----------------|     |
> > |______|        |        |_____|
> >                 |
> >  ______         |         _____
> > |      |        |        |     |
> > | MAC1 |        |________|PHY1 |
> > |______|                 |_____|
> > So, I came up with two implementations for addressing the above configuration.
> >
> > Implementation 1:
> > Have a separate driver for the MDIO bus.
> > Create a DT node for each of the PHY devices connected to the MDIO bus.
> > This driver will share the register space of the MAC controller to which
> > the MDIO bus is connected.
> >
Hi Michal
The above is what the Marvell, Freescale FEC and probably other drivers
do. It is well defined in
Documentation/devicetree/bindings/net/ethernet.txt that you can have a
phy-handle property containing a phandle to the actual PHY device on
some arbitrary MDIO bus.
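A minimal device-tree sketch of that arrangement for the hardware in the
diagram might look like the following (node names, unit addresses and
compatible strings are illustrative, not taken from an actual board DTS):

```dts
/* MAC0 owns the MDIO bus; both PHYs are child nodes of its mdio node */
macb0: ethernet@e000b000 {
	compatible = "cdns,macb";
	phy-handle = <&phy0>;

	mdio {
		#address-cells = <1>;
		#size-cells = <0>;

		phy0: ethernet-phy@0 {
			reg = <0>;
		};
		phy1: ethernet-phy@1 {
			reg = <1>;
		};
	};
};

/* MAC1 has no MDIO lines of its own; its phy-handle simply points
 * at a PHY node that lives on MAC0's bus. */
macb1: ethernet@e000c000 {
	compatible = "cdns,macb";
	phy-handle = <&phy1>;
};
```

The point is that phy-handle is just a phandle: nothing in the generic
binding requires the referenced PHY to sit on an MDIO bus belonging to
the same MAC.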
Andrew
Thread overview: 13+ messages
2015-07-13 4:48 [RFC PATCH 0/2] net: macb: Add mdio driver for accessing multiple phy devices Punnaiah Choudary Kalluri
2015-07-13 4:48 ` [RFC PATCH 1/2] " Punnaiah Choudary Kalluri
2015-07-13 4:48 ` [RFC PATCH 2/2] net: macb: Add support for single mac managing more than one phy Punnaiah Choudary Kalluri
2015-07-13 18:43 ` [RFC PATCH 0/2] net: macb: Add mdio driver for accessing multiple phy devices Florian Fainelli
2015-07-14 3:02 ` punnaiah choudary kalluri
2015-07-20 13:30 ` Michal Simek
2015-07-20 16:23 ` Andrew Lunn [this message]
2015-07-27 7:37 ` Nicolas Ferre
2015-07-28 3:34 ` Punnaiah Choudary Kalluri
2015-07-31 21:53 ` Nathan Sullivan
2015-08-03 6:01 ` Michal Simek
2015-07-31 21:58 ` Nathan Sullivan