public inbox for linux-kernel@vger.kernel.org
From: Greg KH <gregkh@linuxfoundation.org>
To: Li Nan <linan666@huaweicloud.com>
Cc: arnd@arndb.de, linux-kernel@vger.kernel.org,
	viro@zeniv.linux.org.uk, wanghai38@huawei.com,
	"yi.zhang@huawei.com" <yi.zhang@huawei.com>,
	"yangerkun@huawei.com" <yangerkun@huawei.com>
Subject: Re: [PATCH v2] char: lp: Fix NULL pointer dereference of cad
Date: Tue, 20 Jan 2026 13:43:13 +0100	[thread overview]
Message-ID: <2026012049-scroll-smog-edf1@gregkh> (raw)
In-Reply-To: <7d2eab46-bad4-3e28-50e7-92a34e22ecd8@huaweicloud.com>

On Tue, Jan 20, 2026 at 07:41:10PM +0800, Li Nan wrote:
> 
> 
> > On 2026/1/20 17:23, Greg KH wrote:
> > On Tue, Jan 20, 2026 at 03:55:47PM +0800, Li Nan wrote:
> > > > And how was this tested?
> > > 
> > > We found this issue during fuzzing in QEMU.
> > > Based on the root cause, I got the following reproducer:
> > 
> > QEMU is not real hardware :)
> > 
> > Do you really still use parallel port hardware in the real world?  Or
> > is this just an academic exercise of a fuzzing tool?
> > 
> 
> Yes, the issue was first found in QEMU.
> 
> We do have some parallel port hardware for testing, as this module was
> used by our customers in the past, though we are not sure if it is still
> in active use today. This work is mainly part of routine maintenance.

Look at the big comment:
	/* !!! LOCKING IS NEEDED HERE */
in parport.c for a hint of what you should perhaps be doing here instead.

Your test example shows that you are allowing multiple users to access
the parport at the same time, which is what triggers this problem.  But
in the "real world" that isn't what happens, as multiple concurrent
parport accesses are not expected to work well, if at all (the same goes
for serial ports).

Also remember that this driver was written in the single-processor days,
so any locking was added much later, and odds are it is insufficient to
handle multiple accesses.  I'd prevent this from happening entirely by
not letting your userspace do this, to make things much simpler overall.

So while I understand that fuzzing this is "fun", attempting to add
proper multi-user locking to the parport subsystem might not be viable
in the end, as I doubt the existing users of it need that sort of thing.
And the real-world users of this hardware are probably living just fine
without it :)

thanks,

greg k-h


Thread overview: 9+ messages
2026-01-08  2:41 [PATCH v2] char: lp: Fix NULL pointer dereference of cad linan666
2026-01-16 14:39 ` Greg KH
2026-01-20  7:55   ` Li Nan
2026-01-20  9:23     ` Greg KH
2026-01-20 11:41       ` Li Nan
2026-01-20 12:43         ` Greg KH [this message]
2026-01-20 13:21           ` Li Nan
2026-01-20 14:07             ` Greg KH
2026-01-21  1:17               ` Li Nan
