From: Tony Krowiak <akrowiak@linux.ibm.com>
To: Halil Pasic <pasic@linux.ibm.com>
Cc: linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org,
borntraeger@de.ibm.com, cohuck@redhat.com,
pasic@linux.vnet.ibm.com, jjherne@linux.ibm.com, jgg@nvidia.com,
alex.williamson@redhat.com, kwankhede@nvidia.com,
frankja@linux.ibm.com, david@redhat.com, imbrenda@linux.ibm.com,
hca@linux.ibm.com
Subject: Re: [PATCH] s390/vfio-ap: do not open code locks for VFIO_GROUP_NOTIFY_SET_KVM notification
Date: Tue, 13 Jul 2021 14:47:00 -0400 [thread overview]
Message-ID: <29f8c92c-2ad9-b407-4063-0e9a4580c971@linux.ibm.com> (raw)
In-Reply-To: <20210713184517.48eacee6.pasic@linux.ibm.com>
On 7/13/21 12:45 PM, Halil Pasic wrote:
> On Tue, 13 Jul 2021 09:48:01 -0400
> Tony Krowiak <akrowiak@linux.ibm.com> wrote:
>
>> On 7/12/21 7:38 PM, Halil Pasic wrote:
>>> On Wed, 7 Jul 2021 11:41:56 -0400
>>> Tony Krowiak <akrowiak@linux.ibm.com> wrote:
>>>
>>>> It was pointed out during an unrelated patch review that locks should not
>>>> be open coded - i.e., writing the algorithm of a standard lock in a
>>>> function instead of using a lock from the standard library. The setting and
>>>> testing of the kvm_busy flag and sleeping on a wait_event is the same thing
>>>> a lock does. Whatever potential deadlock was found and reported via the
>>>> lockdep splat was not magically removed by going to a wait_queue; it just
>>>> removed the lockdep annotations that would identify the issue early.
>>> Did you change your opinion since we last talked about it? This reads to
>>> me like we are deadlocky without this patch, because of the last
>>> sentence.
>> The words are a direct paraphrase of Jason G's responses to my
>> query regarding what he meant by open coding locks. I
>> am choosing to take his word on the subject and remove the
>> open coded locks.
>>
>> Having said that, we do not have a deadlock problem without
>> this patch. If you recall, the lockdep splat occurred ONLY when
>> running a Secure Execution guest in a CI environment. Since
>> AP is not yet supported for SE guests, there is no danger of
>> a lockdep splat occurring in a customer environment. Given
>> Jason's objections to the original solution (i.e., kvm_busy flag
>> and wait queue), I decided to replace the so-called open
>> coded locks.
> I'm in favor of doing that. But if ("s390/vfio-ap: fix
> circular lockdep when setting/clearing crypto masks") ain't buggy,
> then this patch does not qualify for stable. For a complete set of
> rules consult:
> https://github.com/torvalds/linux/blob/master/Documentation/process/stable-kernel-rules.rst
>
> Here the most relevant points:
> * It must fix a real bug that bothers people (not a, "This could be a
> problem..." type thing).
> * It must fix a problem that causes a build error (but not for things
> marked CONFIG_BROKEN), an oops, a hang, data corruption, a real security
> issue, or some "oh, that's not good" issue. In short, something critical.
> * No "theoretical race condition" issues, unless an explanation of how
> the race can be exploited is also provided.
>
> Jason may give it another try to convince us that 0cc00c8d4050 only
> silenced lockdep, but vfio_ap remained prone to deadlocks. To my best
> knowledge using condition variable and a mutex is one of the well known
> ways to implement an rwlock.
>
> In my opinion, you should drop the fixes tag, drop the cc stable, and
> provide a patch description that corresponds to *your* understanding
> of the situation.
I'll drop the fixes and cc stable. Given the patch was created in
response to Jason G's comments - which are paraphrased in
the patch description - the patch description corresponds
directly to my understanding of the situation. It is precisely why
I created the patch.
>
> Neither the Fixes tag nor the stable process is (IMHO) meant for these
> types of (style) issues. And if you don't think the alleged problem is
> real, don't make the description of your patch say it is real.
> Regards,
> Halil
>
>>> Regards,
>>> Halil