From: Rob Herring <robh+dt@kernel.org>
To: Frank Rowand <frowand.list@gmail.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	devicetree@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH] of: allocate / free phandle cache outside of the devtree_lock
Date: Tue, 12 Nov 2019 18:48:06 -0600
Message-ID: <CAL_JsqJODons-CL9epGj2QXSyiE17N_a6W-Cyzb=2KKaePT0QQ@mail.gmail.com>
In-Reply-To: <c2c76ab9-9967-3cc3-fff2-5f791598c7e3@gmail.com>

On Tue, Nov 12, 2019 at 5:46 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 11/12/19 9:55 AM, Rob Herring wrote:
> > On Tue, Nov 12, 2019 at 3:10 AM Sebastian Andrzej Siewior
> > <bigeasy@linutronix.de> wrote:
> >>
> >> On 2019-11-11 21:35:35 [-0600], Rob Herring wrote:
> >>>>    28d0e36bf9686 ("OF: Fixup resursive locking code paths")
> >>>>    d6d3c4e656513 ("OF: convert devtree lock from rw_lock to raw spinlock")
> >>>
> >>> So to summarize, we changed mainline to fix RT which then broke RT. :)
> >>
> >> correct, but we were good until v4.17-rc1 :)
> >>
> >>>> I've been looking into making devtree_lock a spinlock_t which would
> >>>> avoid this patch. I haven't seen an issue during boot on arm64 even
> >>>> with hotplug.
> >>>
> >>> Did you look into using RCU reader locks any more?
> >>
> >> A little bit. The writers, which modify the node, would need to replace
> >> the whole node. So this is where things got a little complicated.
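
(To make that concrete: a minimal sketch of the copy-and-publish pattern
being described, using invented struct and helper names rather than the
real of_node layout:)

  #include <linux/rcupdate.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>

  /* Hypothetical: node data reached through one RCU-protected pointer. */
  struct node_data {
          u32 value;
  };

  struct my_node {
          struct node_data __rcu *data;
          spinlock_t update_lock;         /* serializes writers only */
  };

  /* Writer: duplicate, modify the copy, publish, free the old copy. */
  static int node_update_value(struct my_node *n, u32 new_value)
  {
          struct node_data *old, *new;

          new = kmalloc(sizeof(*new), GFP_KERNEL);
          if (!new)
                  return -ENOMEM;

          spin_lock(&n->update_lock);
          old = rcu_dereference_protected(n->data,
                                          lockdep_is_held(&n->update_lock));
          *new = *old;                    /* copy the current data */
          new->value = new_value;         /* update the copy */
          rcu_assign_pointer(n->data, new);   /* publish atomically */
          spin_unlock(&n->update_lock);

          synchronize_rcu();              /* wait out pre-existing readers */
          kfree(old);
          return 0;
  }

  /* Reader: lockless; sees either the old or the new data, never a mix. */
  static u32 node_read_value(struct my_node *n)
  {
          u32 v;

          rcu_read_lock();
          v = rcu_dereference(n->data)->value;
          rcu_read_unlock();
          return v;
  }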
> >
> > Why is that exactly? That's pretty much how node and property updates
> > work anyway, though it may not be strictly enforced. The other updates
> > are atomic flags and ref counts, which I assume are fine. The spinlock
> > protects traversing the tree of nodes and the list of properties.
> > Traversal and updates would need to follow semantics similar to
> > rculist, right? BTW, there have been some thoughts about moving the
> > node and property lists to list_head. We may want to do that rather
> > than trying to roll our own RCU code.
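
(For reference, a rough sketch of what rculist semantics over a
list_head-based property list could look like; the struct and helper
names are invented, not the existing struct property:)

  #include <linux/rculist.h>
  #include <linux/slab.h>
  #include <linux/string.h>

  struct my_property {
          const char *name;
          struct list_head entry;
          struct rcu_head rcu;
  };

  /* Reader: caller must hold rcu_read_lock() across the lookup and any
   * use of the returned entry. */
  static struct my_property *find_prop(struct list_head *props,
                                       const char *name)
  {
          struct my_property *p;

          list_for_each_entry_rcu(p, props, entry)
                  if (!strcmp(p->name, name))
                          return p;
          return NULL;
  }

  /* Writer: list updates happen under a lock; removed entries are freed
   * only after a grace period. */
  static void remove_prop(struct my_property *p, spinlock_t *lock)
  {
          spin_lock(lock);
          list_del_rcu(&p->entry);
          spin_unlock(lock);
          kfree_rcu(p, rcu);
  }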
> >
> >> Frank wasn't a big fan of it back then and he still wasn't a few weeks
> >> back.
> >
> > I guess you spoke at ELCE. RCU seems like the right locking to me, as
> > the workload is almost entirely reads, and I haven't seen another
> > proposal.
> >
> >> If you two agree that RCU is preferable to this patch, then I would
> >> look more into adding RCU to the lookup path. The argument was that
> >> this isn't time critical; I'm just trying to avoid replacing the
> >> locking for nothing.
> >> So, should I come up with an RCU patch?
> >
> > Let's see what Frank says first.
>
> RCU is good for adding list entries, deleting list entries, and updating
> the value(s) of a list element.
>
> RCU is not good for excluding readers of a data structure while multiple
> entries are being added, deleted, and/or modified, such that the readers
> only see the state of the data structure either (1) before any changes
> or (2) after all changes occur.  Overlay application and removal require
> multiple modifications, with readers seeing either the before state or
> the after state.
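
(The usual way to get those before-or-after semantics out of RCU is to
make everything that must change together reachable from one pointer and
swap that pointer once. A sketch with invented names, assuming writers
are serialized by of_mutex; the obvious cost is that the writer has to
build a complete replacement first:)

  /* Hypothetical: all state an overlay touches lives behind one pointer. */
  struct tree_state;                      /* opaque snapshot of the tree */

  static struct tree_state __rcu *live_state;

  /* Writer: construct new_state fully off to the side, then publish it
   * in a single pointer swap. Readers see the old tree or the new one,
   * never an intermediate mix. */
  static void publish_overlay(struct tree_state *new_state)
  {
          struct tree_state *old;

          old = rcu_dereference_protected(live_state,
                                          lockdep_is_held(&of_mutex));
          rcu_assign_pointer(live_state, new_state);
          synchronize_rcu();
          free_tree_state(old);           /* hypothetical teardown */
  }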

Do we ensure that currently? The of_mutex provides what you describe,
but most or all of the reader paths don't take the mutex. Even the
changeset API takes the spinlock one node or property at a time.
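
(Schematically, the window in question; apply_one_entry and the structs
are hypothetical stand-ins for the changeset internals:)

  /* Each entry is applied atomically under devtree_lock, but the
   * changeset as a whole is not: a reader can run between entries and
   * observe a half-applied state. */
  static void changeset_apply(struct changeset *cs)
  {
          struct changeset_entry *ce;
          unsigned long flags;

          list_for_each_entry(ce, &cs->entries, node) {
                  raw_spin_lock_irqsave(&devtree_lock, flags);
                  apply_one_entry(ce);    /* hypothetical helper */
                  raw_spin_unlock_irqrestore(&devtree_lock, flags);
                  /* a reader acquiring devtree_lock here sees some
                   * entries applied and others not */
          }
  }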

Rob
