From: Jamie Iles <jamie-wmLquQDDieKakBO8gow8eQ@public.gmane.org>
To: Rob Herring <robherring2-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: devicetree-discuss-uLR06cmDAlY/bJ5BZ2RsiQ@public.gmane.org
Subject: Re: Virtual devices (cpufreq etc) and DT
Date: Thu, 4 Aug 2011 10:54:25 +0100
Message-ID: <20110804095425.GB2899@pulham.picochip.com>
In-Reply-To: <4E397D29.4090500-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
On Wed, Aug 03, 2011 at 11:54:01AM -0500, Rob Herring wrote:
> On 08/03/2011 11:41 AM, Jamie Iles wrote:
> > On Wed, Aug 03, 2011 at 11:29:16AM -0500, Rob Herring wrote:
> >> On 08/03/2011 04:50 AM, Jamie Iles wrote:
> >>> I'm trying to work out how our cpufreq driver fits in with device tree
> >>> bindings. We have a simple driver that just takes a struct clk and
> >>> calls clk_set_rate() on it. Is a node in the device tree the right way
> >>> to do this as it isn't really a physical device? I have the PLL in the
> >>> clocks group of the DT:
> >>
> >> Sounds generically useful...
> >
> > Yes, once I've got it working internally I'll submit this as a generic
> > thing for drivers/cpufreq.
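
To make that concrete: the guts of such a driver really are just a
clk_set_rate() call in the target() callback. A rough sketch, with
illustrative names and the usual kHz (cpufreq) to Hz (clk API)
conversion; none of the naming is final:

    #include <linux/clk.h>
    #include <linux/cpufreq.h>

    static struct clk *cpu_clk;     /* obtained at driver init time */

    static int clk_cpufreq_target(struct cpufreq_policy *policy,
                                  unsigned int target_freq,
                                  unsigned int relation)
    {
            struct cpufreq_freqs freqs;
            /* cpufreq works in kHz, the clk API in Hz. */
            long rate = clk_round_rate(cpu_clk, target_freq * 1000);
            int ret;

            if (rate < 0)
                    return rate;

            freqs.old = clk_get_rate(cpu_clk) / 1000;
            freqs.new = rate / 1000;
            freqs.cpu = policy->cpu;

            cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
            ret = clk_set_rate(cpu_clk, rate);
            cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);

            return ret;
    }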
> >
> >> The OF clock bindings aren't completely finalized, and work on the
> >> OF clk code is basically blocked waiting on the common struct clk
> >> infrastructure.
> >
> > OK, so for the platform I'm working on mainlining at the moment,
> > does that mean I should leave the clock bindings for now, or is that
> > something that can be revised at a later date?
> >
> I'm separating it out for mine and just doing a limited clk
> implementation for now, given the pace of the common struct clk work.
>
> There's a 3rd option. Implement DT clk binding parsing and clk node
> creation within your platform. Perhaps the struct clk details could be
> abstracted out from the binding parsing code so some could still be common.
OK, that sounds like pretty much what I have at the moment. I have a
struct clk and struct clk_ops with separate binding parsers (roughly
the shape sketched below), so it should be fairly easy to port over.
I'll post some patches after the merge window closes.
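
For illustration, the binding-parsing side is roughly the shape below.
The /clocks path, the "clock-frequency" property and the
picoxcell_clk_add() helper are placeholders for this sketch, not a
settled binding:

    #include <linux/init.h>
    #include <linux/of.h>

    /* Walk a /clocks container node and register one clk per child.
     * picoxcell_clk_add() (hypothetical) allocates a struct clk with
     * the given rate and registers it with clkdev. */
    static void __init register_dt_clocks(void)
    {
            struct device_node *clocks, *np;
            const __be32 *rate;

            clocks = of_find_node_by_path("/clocks");
            if (!clocks)
                    return;

            for_each_child_of_node(clocks, np) {
                    rate = of_get_property(np, "clock-frequency", NULL);
                    if (!rate)
                            continue;

                    picoxcell_clk_add(np->name, be32_to_cpup(rate));
            }

            of_node_put(clocks);
    }

The struct clk details behind the helper could then be swapped out once
the common struct clk infrastructure lands.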
Thanks,
Jamie