Date: Wed, 12 Apr 2023 03:59:05 +0000
In-Reply-To: <20230412023839.2869114-1-jstultz@google.com>
Mime-Version: 1.0
References: <20230412023839.2869114-1-jstultz@google.com>
X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog
Message-ID: <20230412035905.3184199-1-jstultz@google.com>
Subject: [PATCH v2] locking/rwsem: Add __always_inline annotation to __down_read_common()
From: John Stultz
To: LKML
Cc: John Stultz, Minchan Kim, Tim Murray, Peter Zijlstra, Ingo Molnar,
 Will Deacon, Waiman Long, Boqun Feng, kernel-team@android.com,
 stable@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Apparently, despite being marked inline, the compiler may not inline
__down_read_common(). This makes it difficult to identify the cause of
lock contention, as the blocked function will always be listed as
__down_read_common().

So this patch adds the __always_inline annotation to the function to
force it to be inlined, so that the calling function will be listed
instead.
Cc: Minchan Kim
Cc: Tim Murray
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: kernel-team@android.com
Cc: stable@vger.kernel.org
Fixes: c995e638ccbb ("locking/rwsem: Fold __down_{read,write}*()")
Reported-by: Tim Murray
Signed-off-by: John Stultz
---
v2: Reworked to use __always_inline instead of __sched as suggested by
    Waiman Long
---
 kernel/locking/rwsem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acb5a50309a1..e99eef8ea552 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1240,7 +1240,7 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
 /*
  * lock for reading
  */
-static inline int __down_read_common(struct rw_semaphore *sem, int state)
+static __always_inline int __down_read_common(struct rw_semaphore *sem, int state)
 {
 	int ret = 0;
 	long count;
-- 
2.40.0.577.gac1e443424-goog