From: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
To: linux-wireless@vger.kernel.org
Cc: Daniel Mack <daniel@zonque.org>
Subject: mwifiex: infinite loop in mwifiex_main_process
Date: Tue, 19 Mar 2013 10:52:35 +0100	[thread overview]
Message-ID: <20130319095235.GA22962@blumentopf> (raw)

Hi,

I'm working on this patch and am currently testing it:
http://www.mail-archive.com/linux-mmc@vger.kernel.org/msg17726.html

Within less than three days the module always crashes. What I
observe is that mwifiex is stuck looping in mwifiex_main_process,
never exiting. The loop is always entered from the sdio_irq_thread.

I added printk statements to find out why it's not exiting:

[18017.211513] scan processing 0
[18017.214686] data sent 0
[18017.217269] ps state 0
[18017.219765] cmd sent 0 / curr cmd   (null)
[18017.224134] is_command_pending 0
[18017.227548] wmm list empty 0
[18017.230592] tx_lock_flag 0
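
For context, these flags feed the exit check near the top of the
mwifiex_main_process loop. Simplified and paraphrased from my
reading of the code (not a verbatim quote), it is roughly:

	/* Paraphrased break condition: the loop only exits when there
	 * is no tx work left (scan running, data_sent set, or wmm lists
	 * empty) AND no command work left. */
	if (adapter->scan_processing || adapter->data_sent ||
	    mwifiex_wmm_lists_empty(adapter)) {
		if (adapter->cmd_sent || adapter->curr_cmd ||
		    !is_command_pending(adapter))
			break;
	}

With all of the flags above reading 0 and the wmm lists reported
non-empty, the outer condition is false, so the loop never breaks,
even though the tx path finds nothing to send.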

So it seems the wmm list has packets queued, but they are never
sent out. After adding a few more statements, the problem seems
to be in mwifiex_wmm_get_highest_priolist_ptr:

	for (j = adapter->priv_num - 1; j >= 0; --j) {

		spin_lock_irqsave(&adapter->bss_prio_tbl[j].bss_prio_lock,
				flags);
		is_list_empty = list_empty(&adapter->bss_prio_tbl[j]
				.bss_prio_head);
		spin_unlock_irqrestore(&adapter->bss_prio_tbl[j].bss_prio_lock,
				flags);
		if (is_list_empty)
			continue;

		.... <snip> ...

		do {
			priv_tmp = bssprio_node->priv;
			hqp = &priv_tmp->wmm.highest_queued_prio;

			for (i = atomic_read(hqp); i >= LOW_PRIO_TID; --i) {
			...
			... NEVER REACHED ...
			...


So there are packets queued, but the highest_queued_prio is too
low, so they are never sent out.
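
My best guess at how the hint can drift below the priority of a
queued packet, paraphrasing the enqueue-side update (the dequeue-side
reset in the comment is an assumption on my part, not quoted code):

	/* Enqueue side (mwifiex_wmm_add_buf_txqueue), paraphrased:
	 * raise the hint if the new packet's tid is higher. */
	if (atomic_read(&priv->wmm.highest_queued_prio) <
	    tos_to_tid_inv[tid_down])
		atomic_set(&priv->wmm.highest_queued_prio,
			   tos_to_tid_inv[tid_down]);

	/* Dequeue side (assumed): when a tid list looks empty, the hint
	 * gets dropped, e.g. to NO_PKT_PRIO_TID. If that reset lands
	 * between the read and the set above, the hint ends up below the
	 * tid of the packet that was just queued, and the for loop above
	 * never scans that tid again. */

If that kind of interleaving is possible without a lock held around
both updates, it would explain why the wmm lists stay non-empty while
highest_queued_prio is too low to ever reach them.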

I was never able to crash it without my patches, though I'm trying
harder now. But maybe my patches only trigger the issue more
often.

Is there a known issue with highest_queued_prio getting out of
sync with the number of packets queued?

rgds,
Andi


Thread overview: 36+ messages
2013-03-19  9:52 Andreas Fenkart [this message]
2013-03-19 22:37 ` mwifiex: infinite loop in mwifiex_main_process Bing Zhao
2013-04-02  0:05   ` Andreas Fenkart
2013-04-02  0:08     ` [PATCH 1/6] mwifiex: bug: remove NO_PKT_PRIO_TID Andreas Fenkart
2013-04-02  0:08       ` [PATCH 2/6] mwifiex: bug: wrong list in list_empty check Andreas Fenkart
2013-04-02  0:08       ` [PATCH 3/6] mwifiex: remove unused tid_tbl_lock from mwifiex_tid_tbl Andreas Fenkart
2013-04-02  0:08       ` [PATCH 4/6] mwifiex: replace ra_list_curr by list rotation Andreas Fenkart
2013-04-02  0:08       ` [PATCH 5/6] mwifiex: rework round robin scheduling of bss nodes Andreas Fenkart
2013-04-02  0:08       ` [PATCH 6/6] mwifiex: hold proper locks when accessing ra_list / bss_prio lists Andreas Fenkart
2013-04-03  2:40       ` [PATCH 1/6] mwifiex: bug: remove NO_PKT_PRIO_TID Bing Zhao
2013-04-03 11:35         ` Andreas Fenkart
2013-04-03 18:37           ` Bing Zhao
2013-04-04 20:57             ` Andreas Fenkart
2013-04-04 21:01               ` [PATCH 1/4] mwifiex: bug: wrong list in list_empty check Andreas Fenkart
2013-04-04 21:01                 ` [PATCH 2/4] mwifiex: remove unused tid_tbl_lock from mwifiex_tid_tbl Andreas Fenkart
2013-04-04 22:33                   ` Bing Zhao
2013-04-04 21:01                 ` [PATCH 3/4] mwifiex: bug: remove NO_PKT_PRIO_TID Andreas Fenkart
2013-04-04 22:34                   ` Bing Zhao
2013-04-04 21:01                 ` [PATCH 4/4] mwifiex: bug: hold proper locks when accessing ra_list / bss_prio lists Andreas Fenkart
2013-04-04 22:38                   ` Bing Zhao
2013-04-04 22:29                 ` [PATCH 1/4] mwifiex: bug: wrong list in list_empty check Bing Zhao
2013-04-04 21:08               ` [PATCH 1/2] mwifiex: replace ra_list_curr by list rotation Andreas Fenkart
2013-04-04 21:08                 ` [PATCH 2/2] mwifiex: rework round robin scheduling of bss nodes Andreas Fenkart
2013-04-04 22:56               ` [PATCH 1/6] mwifiex: bug: remove NO_PKT_PRIO_TID Bing Zhao
2013-04-05  8:27                 ` Andreas Fenkart
2013-04-08 18:19                   ` Bing Zhao
2013-04-11 11:51                     ` [PATCH v3 0/2] wmm queues handling simplificatons Andreas Fenkart
2013-04-11 11:51                       ` [PATCH 1/2] mwifiex: replace ra_list_curr by list rotation Andreas Fenkart
2013-04-11 18:42                         ` Bing Zhao
2013-04-11 11:51                       ` [PATCH 2/2] mwifiex: rework round robin scheduling of bss nodes Andreas Fenkart
2013-04-11 18:43                         ` Bing Zhao
2013-04-23 18:33                       ` [PATCH v3 0/2] wmm queues handling simplificatons Bing Zhao
2013-04-23 18:48                         ` John W. Linville
2013-04-23 18:51                           ` Bing Zhao
2013-04-02 18:16     ` mwifiex: infinite loop in mwifiex_main_process Bing Zhao
2013-04-02 19:35       ` Andreas Fenkart
