public inbox for netdev@vger.kernel.org
From: Jiri Pirko <jiri@resnulli.us>
To: Manish Chopra <manishc@marvell.com>
Cc: "kuba@kernel.org" <kuba@kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Ariel Elior <aelior@marvell.com>, Alok Prasad <palok@marvell.com>,
	Sudarsana Reddy Kalluru <skalluru@marvell.com>,
	David Miller <davem@davemloft.net>
Subject: Re: [EXT] Re: [PATCH v5 net] qede: Fix scheduling while atomic
Date: Mon, 29 May 2023 08:06:22 +0200	[thread overview]
Message-ID: <ZHRA0Ef6l9YwVDfE@nanopsycho> (raw)
In-Reply-To: <BY3PR18MB4612A5906D64C3DBACFAAECDAB469@BY3PR18MB4612.namprd18.prod.outlook.com>

Thu, May 25, 2023 at 05:27:03PM CEST, manishc@marvell.com wrote:
>Hi Jiri,
>
>> -----Original Message-----
>> From: Jiri Pirko <jiri@resnulli.us>
>> Sent: Wednesday, May 24, 2023 5:01 PM
>> To: Manish Chopra <manishc@marvell.com>
>> Cc: kuba@kernel.org; netdev@vger.kernel.org; Ariel Elior
>> <aelior@marvell.com>; Alok Prasad <palok@marvell.com>; Sudarsana Reddy
>> Kalluru <skalluru@marvell.com>; David Miller <davem@davemloft.net>
>> Subject: [EXT] Re: [PATCH v5 net] qede: Fix scheduling while atomic
>> 
>> External Email
>> 
>> ----------------------------------------------------------------------
>> Tue, May 23, 2023 at 04:42:35PM CEST, manishc@marvell.com wrote:
>> >The bonding module collects statistics while holding a spinlock;
>> >beneath that, the qede->qed driver statistics flow gets scheduled out
>> >due to the usleep_range() used in the PTT acquire logic, which results
>> >in the bug and trace below -
>> >
>> >[ 3673.988874] Hardware name: HPE ProLiant DL365 Gen10 Plus/ProLiant DL365 Gen10 Plus, BIOS A42 10/29/2021
>> >[ 3673.988878] Call Trace:
>> >[ 3673.988891]  dump_stack_lvl+0x34/0x44
>> >[ 3673.988908]  __schedule_bug.cold+0x47/0x53
>> >[ 3673.988918]  __schedule+0x3fb/0x560
>> >[ 3673.988929]  schedule+0x43/0xb0
>> >[ 3673.988932]  schedule_hrtimeout_range_clock+0xbf/0x1b0
>> >[ 3673.988937]  ? __hrtimer_init+0xc0/0xc0
>> >[ 3673.988950]  usleep_range+0x5e/0x80
>> >[ 3673.988955]  qed_ptt_acquire+0x2b/0xd0 [qed]
>> >[ 3673.988981]  _qed_get_vport_stats+0x141/0x240 [qed]
>> >[ 3673.989001]  qed_get_vport_stats+0x18/0x80 [qed]
>> >[ 3673.989016]  qede_fill_by_demand_stats+0x37/0x400 [qede]
>> >[ 3673.989028]  qede_get_stats64+0x19/0xe0 [qede]
>> >[ 3673.989034]  dev_get_stats+0x5c/0xc0
>> >[ 3673.989045]  netstat_show.constprop.0+0x52/0xb0
>> >[ 3673.989055]  dev_attr_show+0x19/0x40
>> >[ 3673.989065]  sysfs_kf_seq_show+0x9b/0xf0
>> >[ 3673.989076]  seq_read_iter+0x120/0x4b0
>> >[ 3673.989087]  new_sync_read+0x118/0x1a0
>> >[ 3673.989095]  vfs_read+0xf3/0x180
>> >[ 3673.989099]  ksys_read+0x5f/0xe0
>> >[ 3673.989102]  do_syscall_64+0x3b/0x90
>> >[ 3673.989109]  entry_SYSCALL_64_after_hwframe+0x44/0xae
>> 
>> You mention "bonding module" at the beginning of this description. Where
>> exactly is that shown in the trace?
>> 
>> I guess that the "spinlock" you talk about is "dev_base_lock", isn't it?
>
>The bonding functions somehow were not part of the trace, but this is the flow from the bonding
>module, which calls dev_get_stats() under spin_lock_nested(&bond->stats_lock, nest_level) and results in this issue.

The trace you included is obviously from a sysfs read. Either change the
trace or the description.

>
>> 
>> 
>> >[ 3673.989115] RIP: 0033:0x7f8467d0b082
>> >[ 3673.989119] Code: c0 e9 b2 fe ff ff 50 48 8d 3d ca 05 08 00 e8 35 e7 01 00 0f 1f 44 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 48 83 ec 28 48 89 54 24
>> >[ 3673.989121] RSP: 002b:00007ffffb21fd08 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
>> >[ 3673.989127] RAX: ffffffffffffffda RBX: 000000000100eca0 RCX: 00007f8467d0b082
>> >[ 3673.989128] RDX: 00000000000003ff RSI: 00007ffffb21fdc0 RDI: 0000000000000003
>> >[ 3673.989130] RBP: 00007f8467b96028 R08: 0000000000000010 R09: 00007ffffb21ec00
>> >[ 3673.989132] R10: 00007ffffb27b170 R11: 0000000000000246 R12: 00000000000000f0
>> >[ 3673.989134] R13: 0000000000000003 R14: 00007f8467b92000 R15: 0000000000045a05
>> >[ 3673.989139] CPU: 30 PID: 285188 Comm: read_all Kdump: loaded Tainted: G        W  OE

[...]


Thread overview: 4+ messages
2023-05-23 14:42 [PATCH v5 net] qede: Fix scheduling while atomic Manish Chopra
2023-05-24 11:31 ` Jiri Pirko
2023-05-25 15:27   ` [EXT] " Manish Chopra
2023-05-29  6:06     ` Jiri Pirko [this message]
