From: Kousik Sanagavarapu <five231003@gmail.com>
To: Nishanth Menon <nm@ti.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Santosh Shilimkar <ssantosh@kernel.org>,
Nathan Chancellor <nathan@kernel.org>,
Julia Lawall <julia.lawall@inria.fr>
Cc: Shuah Khan <skhan@linuxfoundation.org>,
Javier Carrasco <javier.carrasco.cruz@gmail.com>,
linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v3 0/4] Do device node auto cleanup in drivers/soc/ti/
Date: Thu, 18 Jul 2024 03:04:15 +0530
Message-ID: <Zpg41yZRHPv9w0Lg@five231003>
In-Reply-To: <20240707055341.3656-1-five231003@gmail.com>
On Sun, Jul 07, 2024 at 10:44:15AM +0530, Kousik Sanagavarapu wrote:
> Do "struct device_node" auto cleanup in soc/ti/. This patch series takes
> care of all the cases where this is possible.
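>
> As a minimal sketch of the pattern, assuming the scope-based cleanup
> helpers from <linux/cleanup.h> and <linux/of.h> — the "memories" child
> name and the do_setup() helper here are made up for illustration, not
> taken from the drivers:
>
>	/* before: every exit path must remember of_node_put() */
>	struct device_node *np = of_get_child_by_name(dev->of_node, "memories");
>	if (!np)
>		return -ENODEV;
>	ret = do_setup(np);
>	of_node_put(np);
>	return ret;
>
>	/* after: __free(device_node) drops the reference automatically
>	 * when np goes out of scope, so no explicit of_node_put() */
>	struct device_node *np __free(device_node) =
>		of_get_child_by_name(dev->of_node, "memories");
>	if (!np)
>		return -ENODEV;
>	return do_setup(np);
>
> The refactoring below is mostly about declaring such pointers at the
> point of first use, so the scoped cleanup covers exactly the region
> where the reference is held.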
>
> Thanks Jonathan for the review on the previous round.
>
> v2:
>
> https://lore.kernel.org/linux-arm-kernel/20240703065710.13786-1-five231003@gmail.com/
>
> Changes since v2:
> - Split v2 1/3 into v3 1/4 and v3 2/4. The memory setup code is
> separated out of the pruss_probe() function and put into 1/4 and the
> actual conversion to auto cleanup is done in 2/4.
> - Replace dev_err() with dev_err_probe() in the code paths touched.
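>
> A rough sketch of the dev_err_probe() conversion (the clock lookup is
> a made-up example, not a code path from these drivers):
>
>	/* before */
>	clk = devm_clk_get(dev, NULL);
>	if (IS_ERR(clk)) {
>		dev_err(dev, "failed to get clock\n");
>		return PTR_ERR(clk);
>	}
>
>	/* after: also prints the error code, records the deferral reason,
>	 * and stays quiet on -EPROBE_DEFER instead of spamming the log */
>	clk = devm_clk_get(dev, NULL);
>	if (IS_ERR(clk))
>		return dev_err_probe(dev, PTR_ERR(clk), "failed to get clock\n");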
>
> v1:
>
> https://lore.kernel.org/linux-arm-kernel/20240510071432.62913-1-five231003@gmail.com/
>
> Changes since v1:
> - Refactor code so that the scope of the pointers touched is reduced,
> making the code cleaner.
> - The above also has the side-effect of fixing the errors that clang
> emitted (but my local version of gcc didn't) for PATCH 2/3 during v1.
>
> Kousik Sanagavarapu (4):
> soc: ti: pruss: factor out memories setup
> soc: ti: pruss: do device_node auto cleanup
> soc: ti: knav_qmss_queue: do device_node auto cleanup
> soc: ti: pm33xx: do device_node auto cleanup
>
> drivers/soc/ti/knav_qmss_queue.c | 100 +++++++++---------
> drivers/soc/ti/pm33xx.c | 52 ++++-----
> drivers/soc/ti/pruss.c | 176 ++++++++++++++-----------------
> 3 files changed, 155 insertions(+), 173 deletions(-)
Ping
Thread overview: 10+ messages
2024-07-07 5:14 [PATCH v3 0/4] Do device node auto cleanup in drivers/soc/ti/ Kousik Sanagavarapu
2024-07-07 5:14 ` [PATCH v3 1/4] soc: ti: pruss: factor out memories setup Kousik Sanagavarapu
2024-08-24 18:49 ` Nishanth Menon
2024-08-25 6:38 ` Kousik Sanagavarapu
2024-07-07 5:14 ` [PATCH v3 2/4] soc: ti: pruss: do device_node auto cleanup Kousik Sanagavarapu
2024-07-07 5:14 ` [PATCH v3 3/4] soc: ti: knav_qmss_queue: " Kousik Sanagavarapu
2024-07-07 5:14 ` [PATCH v3 4/4] soc: ti: pm33xx: " Kousik Sanagavarapu
2024-07-17 21:34 ` Kousik Sanagavarapu [this message]
2024-07-18 11:21 ` [PATCH v3 0/4] Do device node auto cleanup in drivers/soc/ti/ Nishanth Menon
2024-07-18 14:12 ` Kousik Sanagavarapu