From: Waiman Long <longman@redhat.com>
To: Mukesh Ojha <quic_mojha@quicinc.com>,
	Peter Zijlstra <peterz@infradead.org>,
	mingo@redhat.com, will@kernel.org
Cc: linux-kernel@vger.kernel.org, boqun.feng@gmail.com
Subject: Re: locking/rwsem: RT throttling issue due to RT task hogging the cpu
Date: Tue, 27 Sep 2022 11:52:14 -0400
Message-ID: <5f16511c-1cb7-e40f-e9aa-87ee97d5a266@redhat.com>
In-Reply-To: <9ae90959-4d7c-bc3c-4710-5867a0cd4573@quicinc.com>

On 9/27/22 11:30, Mukesh Ojha wrote:
> Hi Waiman,
>
> Thanks for the reply.
>
> On 9/27/2022 8:56 PM, Waiman Long wrote:
>> On 9/27/22 11:25, Waiman Long wrote:
>>>
>>> On 9/20/22 12:19, Mukesh Ojha wrote:
>>>> Hi,
>>>>
> >>>> We are observing an issue where sem->owner is not set and 
> >>>> sem->count=6 [1], which means both the RWSEM_FLAG_WAITERS and 
> >>>> RWSEM_FLAG_HANDOFF bits are set. If we unfold sem->wait_list, 
> >>>> we see the following order of waiting processes [2], where [a] is 
> >>>> waiting for write, [b] and [c] are waiting for read, and [d] is 
> >>>> the RT task for which waiter.handoff_set=true; it is continuously 
> >>>> running on cpu7 and not letting the first write waiter [a] run 
> >>>> on cpu7.
>>>>
>>>> [1]
>>>>
>>>>   sem = 0xFFFFFFD57DDC6680 -> (
>>>>     count = (counter = 6),
>>>>     owner = (counter = 0),
>>>>
>>>> [2]
>>>>
>>>> [a] kworker/7:0 pid: 32516 ==> [b] iptables-restor pid: 18625 ==> 
[c] HwBinder:1544_3  pid: 2024 ==> [d] RenderEngine pid: 2032 cpu: 7 
>>>> prio:97 (RT task)
>>>>
>>>>
> >>>> Some time back, Waiman suggested this, which could help in the RT task
>>>> leaving the cpu.
>>>>
>>>> https://lore.kernel.org/all/8c33f989-8870-08c6-db12-521de634b34e@redhat.com/ 
>>>>
>>>>
>>> Sorry for the late reply. There is now an alternative way of dealing 
>>> with this RT task hogging issue with the commit 48dfb5d2560d 
>>> ("locking/rwsem: Disable preemption while trying for rwsem lock"). 
>>> Could you try it to see if it can address your problem?
>>
>> FYI, this commit is in the tip tree. It is not in the mainline yet.
>
>
> I posted that patch myself, so I am aware of it. In that issue, 
> sem->count was 7, while here it is 6, and the current issue occurs 
> even with 48dfb5d2560d ("locking/rwsem: Disable preemption while 
> trying for rwsem lock") applied.

Thanks for the quick reply. So it doesn't completely fix this RT hogging 
issue. It is harder than I thought. Will look further into this.
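
For reference, assuming the mainline kernel/locking/rwsem.c bit layout
(RWSEM_WRITER_LOCKED = 1UL << 0, RWSEM_FLAG_WAITERS = 1UL << 1,
RWSEM_FLAG_HANDOFF = 1UL << 2), the two counts decode as:

  count = 6 = RWSEM_FLAG_WAITERS | RWSEM_FLAG_HANDOFF
              (waiters queued and a handoff requested, lock itself free)
  count = 7 = RWSEM_WRITER_LOCKED | RWSEM_FLAG_WAITERS | RWSEM_FLAG_HANDOFF
              (same flags, but with a writer still holding the lock)

The idea behind 48dfb5d2560d is roughly the pattern below; this is only
a sketch of the approach, not the actual diff, and the exact call sites
in the slowpath differ:

  preempt_disable();
  /*
   * Don't let an RT task preempt this waiter between observing the
   * handoff bit and actually taking the lock.
   */
  taken = rwsem_try_write_lock(sem, &waiter);
  preempt_enable();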

Cheers,
Longman



Thread overview: 7+ messages
2022-09-20 16:19 locking/rwsem: RT throttling issue due to RT task hogging the cpu Mukesh Ojha
2022-09-26 11:46 ` Mukesh Ojha
2022-09-27 15:03   ` Mukesh Ojha
2022-09-27 15:25 ` Waiman Long
2022-09-27 15:26   ` Waiman Long
2022-09-27 15:30     ` Mukesh Ojha
2022-09-27 15:52       ` Waiman Long [this message]
