From: Simon Horman <horms@kernel.org>
To: Yu-Chun Lin <eleanor15x@gmail.com>
Cc: isdn@linux-pingi.de, kuba@kernel.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, jserv@ccns.ncku.edu.tw,
visitorckw@gmail.com
Subject: Re: [PATCH] mISDN: hfcsusb: Optimize performance by replacing rw_lock with spinlock
Date: Thu, 27 Mar 2025 17:00:44 +0000 [thread overview]
Message-ID: <20250327170044.GA1883535@horms.kernel.org> (raw)
In-Reply-To: <Z-VumXiqJJkZKNZZ@eleanor-wkdl>
On Thu, Mar 27, 2025 at 11:28:25PM +0800, Yu-Chun Lin wrote:
> On Mon, Mar 24, 2025 at 02:21:15PM +0000, Simon Horman wrote:
> > On Sat, Mar 22, 2025 at 01:20:24AM +0800, Yu-Chun Lin wrote:
> > > The 'HFClock', an rwlock, is only used by writers, making it functionally
> > > equivalent to a spinlock.
> > >
> > > According to Documentation/locking/spinlocks.rst:
> > >
> > > "Reader-writer locks require more atomic memory operations than simple
> > > spinlocks. Unless the reader critical section is long, you are better
> > > off just using spinlocks."
> > >
> > > Since read_lock() is never called, switching to a spinlock reduces
> > > overhead and improves efficiency.
> > >
> > > Signed-off-by: Yu-Chun Lin <eleanor15x@gmail.com>
> > > ---
> > > Build tested only, as I don't have the hardware.
> > > Ensured all rw_lock -> spinlock conversions are complete, and replacing
> > > rw_lock with spinlock should always be safe.
> > >
> > > drivers/isdn/hardware/mISDN/hfcsusb.c | 6 +++---
> > > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > Hi Yu-Chun Lin,
> >
> > Thanks for your patch.
> >
> > Unfortunately I think it would be best to leave this rather old
> > and probably little used driver as-is in this regard unless there
> > is a demonstrable improvement on real hardware.
> >
> > Otherwise the small risk of regression and the overhead of driver
> > changes seem to outweigh the theoretical benefit.
>
> Thank you for your feedback.
>
> I noticed that the MAINTAINERS file lists a maintainer for ISDN, so I
> was wondering if he might have access to the necessary hardware for
> quick testing.
>
> Since I am new to the kernel, I would like to ask if there have been
> any past cases or experiences where similar changes were considered
> unsafe. Additionally, I have seen instances where the crypto maintainer
> accepted similar patches even without hardware testing. [1]
>
> [1]: https://lore.kernel.org/lkml/20240823183856.561166-1-visitorckw@gmail.com/
I think it is a judgement call, and certainly the crypto maintainer is
free to make their own call. But in this case I do lean towards leaving
the code unchanged in the absence of hardware testing.
Thread overview: 4+ messages
2025-03-21 17:20 [PATCH] mISDN: hfcsusb: Optimize performance by replacing rw_lock with spinlock Yu-Chun Lin
2025-03-24 14:21 ` Simon Horman
2025-03-27 15:28 ` Yu-Chun Lin
2025-03-27 17:00 ` Simon Horman [this message]