From: <gregkh@linuxfoundation.org>
To: tiwai@suse.de, dvyukov@google.com, gregkh@linuxfoundation.org
Cc: <stable@vger.kernel.org>, <stable-commits@vger.kernel.org>
Subject: Patch "ALSA: seq: Don't handle loop timeout at snd_seq_pool_done()" has been added to the 4.4-stable tree
Date: Sun, 12 Feb 2017 14:15:12 -0800
Message-ID: <14869377123310@kroah.com>
This is a note to let you know that I've just added the patch titled
ALSA: seq: Don't handle loop timeout at snd_seq_pool_done()
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
alsa-seq-don-t-handle-loop-timeout-at-snd_seq_pool_done.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 37a7ea4a9b81f6a864c10a7cb0b96458df5310a3 Mon Sep 17 00:00:00 2001
From: Takashi Iwai <tiwai@suse.de>
Date: Mon, 6 Feb 2017 15:09:48 +0100
Subject: ALSA: seq: Don't handle loop timeout at snd_seq_pool_done()
From: Takashi Iwai <tiwai@suse.de>
commit 37a7ea4a9b81f6a864c10a7cb0b96458df5310a3 upstream.
snd_seq_pool_done() syncs with the closing of all opened threads, but it
aborts the wait loop after a timeout and proceeds to release the
resources even if not all threads have been closed. The timeout was 5
seconds, and under heavy load it can easily be exceeded, which may
result in an invalid memory access -- this is what syzkaller detected
in a bug report.

As a fix, let the code graduate from its naive approach and simply
remove the loop timeout.
BugLink: http://lkml.kernel.org/r/CACT4Y+YdhDV2H5LLzDTJDVF-qiYHUHhtRaW4rbb4gUhTCQB81w@mail.gmail.com
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
sound/core/seq/seq_memory.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
--- a/sound/core/seq/seq_memory.c
+++ b/sound/core/seq/seq_memory.c
@@ -419,7 +419,6 @@ int snd_seq_pool_done(struct snd_seq_poo
 {
 	unsigned long flags;
 	struct snd_seq_event_cell *ptr;
-	int max_count = 5 * HZ;
 
 	if (snd_BUG_ON(!pool))
 		return -EINVAL;
@@ -432,14 +431,8 @@ int snd_seq_pool_done(struct snd_seq_poo
 	if (waitqueue_active(&pool->output_sleep))
 		wake_up(&pool->output_sleep);
 
-	while (atomic_read(&pool->counter) > 0) {
-		if (max_count == 0) {
-			pr_warn("ALSA: snd_seq_pool_done timeout: %d cells remain\n", atomic_read(&pool->counter));
-			break;
-		}
+	while (atomic_read(&pool->counter) > 0)
 		schedule_timeout_uninterruptible(1);
-		max_count--;
-	}
 
 	/* release all resources */
 	spin_lock_irqsave(&pool->lock, flags);
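For readers who want to see the race outside the kernel, below is a minimal
user-space sketch in C. It is an illustration only, not the kernel code: the
pool structure, the cell_user() thread, and the pool_done*() variants are
hypothetical stand-ins for the sequencer's cell pool. It shows why a wait
loop bounded by max_count can fall through and free memory that a slow
consumer still references, while the unbounded wait from the patch cannot.

/*
 * Minimal user-space sketch (illustration only, NOT the kernel code) of the
 * race described above: a bounded wait may free the pool while a consumer
 * still holds a cell; the unbounded wait frees it only once every cell has
 * been returned.
 *
 * Build: cc -std=c11 -pthread pool_done_sketch.c -o pool_done_sketch
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct pool {
	atomic_int counter;	/* cells currently handed out to consumers */
	int *cells;		/* backing storage released by pool_done() */
};

/* A slow consumer: holds a cell for a while, then returns it. */
static void *cell_user(void *arg)
{
	struct pool *pool = arg;

	sleep(2);				/* simulate a long-running user */
	pool->cells[0] = 0;			/* use-after-free if already freed */
	atomic_fetch_sub(&pool->counter, 1);	/* return the cell */
	return NULL;
}

/* Old behaviour: give up after max_count polling ticks and free anyway. */
static void pool_done_with_timeout(struct pool *pool)
{
	int max_count = 10;			/* roughly 1 second of polling */

	while (atomic_load(&pool->counter) > 0) {
		if (max_count-- == 0) {
			fprintf(stderr, "timeout: %d cells remain\n",
				atomic_load(&pool->counter));
			break;			/* falls through with cells in use */
		}
		usleep(100 * 1000);
	}
	free(pool->cells);			/* may free memory still in use */
	pool->cells = NULL;
}

/* New behaviour: wait until every cell has really been returned. */
static void pool_done(struct pool *pool)
{
	while (atomic_load(&pool->counter) > 0)
		usleep(100 * 1000);
	free(pool->cells);			/* safe: no consumer left */
	pool->cells = NULL;
}

int main(void)
{
	struct pool pool = { .counter = 1, .cells = calloc(16, sizeof(int)) };
	pthread_t t;

	pthread_create(&t, NULL, cell_user, &pool);
	pool_done(&pool);	/* swap in pool_done_with_timeout() to provoke the race */
	pthread_join(t, NULL);
	return 0;
}

In the kernel the unbounded loop still terminates in practice, because the
pool is marked as closing before the wait, so no new cells are handed out
while snd_seq_pool_done() keeps retrying schedule_timeout_uninterruptible(1)
until pool->counter drops to zero.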
Patches currently in stable-queue which might be from tiwai@suse.de are
queue-4.4/alsa-seq-don-t-handle-loop-timeout-at-snd_seq_pool_done.patch
queue-4.4/alsa-seq-fix-race-at-creating-a-queue.patch