From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1753825AbdK3A6d (ORCPT );
        Wed, 29 Nov 2017 19:58:33 -0500
Received: from mail-pf0-f195.google.com ([209.85.192.195]:32851 "EHLO
        mail-pf0-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1753768AbdK3A6b (ORCPT );
        Wed, 29 Nov 2017 19:58:31 -0500
X-Google-Smtp-Source: AGs4zMax5w0tfpvWhaDX4cH2rU2U3FLMdYH2JsM4I8yuGkjvd3S2EjRgDhIT8XW4wKUGeXs0NJgmcg==
Date: Wed, 29 Nov 2017 16:58:28 -0800
From: Omar Sandoval
To: Ingo Molnar
Cc: Linus Torvalds, linux-kernel@vger.kernel.org, Jens Axboe
Subject: add_wait_queue() (unintentional?) behavior change in v4.13
Message-ID: <20171130005828.GA15628@vader>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.9.1 (2017-09-22)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi, Ingo,

Commit 50816c48997a ("sched/wait: Standardize internal naming of
wait-queue entries") changed the behavior of add_wait_queue() from
inserting the wait entry at the head of the wait queue to inserting it
at the tail of the wait queue. This is the relevant hunk:

-void add_wait_queue(wait_queue_head_t *q, wait_queue_entry_t *wait)
+void add_wait_queue(wait_queue_head_t *q, struct wait_queue_entry *wq_entry)
 {
 	unsigned long flags;

-	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
+	wq_entry->flags &= ~WQ_FLAG_EXCLUSIVE;
 	spin_lock_irqsave(&q->lock, flags);
-	__add_wait_queue(q, wait);
+	__add_wait_queue_entry_tail(q, wq_entry);
 	spin_unlock_irqrestore(&q->lock, flags);
 }
 EXPORT_SYMBOL(add_wait_queue);

Note the change from __add_wait_queue() to
__add_wait_queue_entry_tail(). I'm assuming this was a typo, since the
commit message doesn't mention any functional changes.
This patch restores the old behavior:

diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index 98feab7933c7..929ecb7d6b78 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -27,7 +27,7 @@ void add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq
 	wq_entry->flags &= ~WQ_FLAG_EXCLUSIVE;
 	spin_lock_irqsave(&wq_head->lock, flags);
-	__add_wait_queue_entry_tail(wq_head, wq_entry);
+	__add_wait_queue(wq_head, wq_entry);
 	spin_unlock_irqrestore(&wq_head->lock, flags);
 }
 EXPORT_SYMBOL(add_wait_queue);

I didn't go through and audit all callers of add_wait_queue(), but from
a quick code read, the tail insertion means that non-exclusive waiters
will not be woken up if they are queued behind enough exclusive
waiters, and I bet that'll cause some bugs.