From: Frank Rowand <frowand.list@gmail.com>
To: Rob Herring <robh+dt@kernel.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: devicetree@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH] of: allocate / free phandle cache outside of the devtree_lock
Date: Tue, 12 Nov 2019 17:46:37 -0600 [thread overview]
Message-ID: <c2c76ab9-9967-3cc3-fff2-5f791598c7e3@gmail.com> (raw)
In-Reply-To: <CAL_Jsq+Rfguea+KO0kLKom=8t_oCnesT8cb833Y0fhsbu_a1Cg@mail.gmail.com>
On 11/12/19 9:55 AM, Rob Herring wrote:
> On Tue, Nov 12, 2019 at 3:10 AM Sebastian Andrzej Siewior
> <bigeasy@linutronix.de> wrote:
>>
>> On 2019-11-11 21:35:35 [-0600], Rob Herring wrote:
>>>> 28d0e36bf9686 ("OF: Fixup resursive locking code paths")
>>>> d6d3c4e656513 ("OF: convert devtree lock from rw_lock to raw spinlock")
>>>
>>> So to summarize, we changed mainline to fix RT which then broke RT. :)
>>
>> correct, but we were good until v4.17-rc1 :)
>>
>>>> I've been looking into making devtree_lock a spinlock_t which would
>>>> avoid this patch. I haven't seen an issue during boot on arm64 even
>>>> with hotplug.
>>>
>>> Did you look into using RCU reader locks any more?
>>
>> A little bit. The writers, which modify the node, would need to replace
>> the whole node. So this is where things got a little complicated.
>
> Why is that exactly? That's pretty much how node and property updates
> work anyways though maybe not strictly enforced. The other updates are
> atomic flags and ref counts which I assume are fine. The spinlock
> protects traversing the tree of nodes and list of properties.
> Traversing and updates would need to follow similar semantics as
> rculist, right? BTW, there's been some thoughts to move node and
> property lists to list_head. We may want to do that rather than trying
> to roll our own RCU code.
>
>> Frank wasn't a big fan of it back then and he still wasn't a few weeks
>> back.
>
> I guess you spoke at ELCE. RCU seems like the right locking to me as
> we're almost entirely reads and I haven't seen another proposal.
>
>> If you two agree to prefer RCU over this patch then I would look more
>> into adding RCU into the lookup path. The argument was that this isn't
>> time critical. I'm just trying to avoid to replace the locking for
>> nothing.
>> So, should I come up with a RCU patch?
>
> Lets see what Frank says first.
RCU is good for adding list entries, deleting list entries, and updating
the value(s) of a list element.
RCU is not good for excluding readers of a data structure while multiple
entries are being added, deleted, and/or modified, such that the readers
only see the state of the data structure either (1) before any changes
or (2) after all changes occur. Overlay application and removal require
multiple modifications, with readers seeing either the before state or
the after state.
> While this patch is a bit of a band-aid, I don't think it complicates
> the situation at all to prevent coming up with a better solution. The
Agreed.
> other option is get rid of the memory allocation altogether. My
> preference for the cache was a simpler solution that was truly a cache
> (i.e. a fixed size that could miss). The performance wasn't quite as
> good though.
The current implementation allows and properly handles misses. But misses
will not occur if the phandle values fall in the range 1..n when there
are n distinct phandle values.
I don't think the patch makes the implementation so bad that the cache
should instead switch to a fixed size to avoid kcalloc().
>
> Rob
>
Thread overview: 12+ messages
2019-11-11 17:21 [PATCH] of: allocate / free phandle cache outside of the devtree_lock Sebastian Andrzej Siewior
2019-11-12 3:35 ` Rob Herring
2019-11-12 9:10 ` Sebastian Andrzej Siewior
2019-11-12 15:55 ` Rob Herring
2019-11-12 23:46 ` Frank Rowand [this message]
2019-11-13 0:48 ` Rob Herring
2019-11-13 16:52 ` Frank Rowand
2019-11-12 22:48 ` Frank Rowand
2019-11-29 13:57 ` Sebastian Andrzej Siewior
2019-11-30 2:48 ` Frank Rowand
2019-11-29 14:04 ` [PATCH v2] " Sebastian Andrzej Siewior
2019-11-30 2:46 ` Frank Rowand