From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Oleg Nesterov,
	Mathieu Desnoyers, "Peter Zijlstra (Intel)", Chris Metcalf,
	Christoph Lameter, "Eric W. Biederman", Kirill Tkhai,
	Linus Torvalds, Mike Galbraith, "Paul E. McKenney",
	Russell King - ARM Linux admin, Thomas Gleixner, Ingo Molnar,
	Sasha Levin
Subject: [PATCH 5.3 120/148] sched/membarrier: Call sync_core only before usermode for same mm
Date: Thu, 10 Oct 2019 10:36:21 +0200
Message-Id: <20191010083618.331652441@linuxfoundation.org>
In-Reply-To: <20191010083609.660878383@linuxfoundation.org>
References: <20191010083609.660878383@linuxfoundation.org>

From: Mathieu Desnoyers

[ Upstream commit 2840cf02fae627860156737e83326df354ee4ec6 ]

When the prev and next task's mm change, switch_mm() provides the core
serializing guarantees before returning to usermode. The only case
where an explicit core serialization is needed is when the scheduler
keeps the same mm for prev and next.

Suggested-by: Oleg Nesterov
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Peter Zijlstra (Intel)
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Eric W. Biederman
Cc: Kirill Tkhai
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Russell King - ARM Linux admin
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/20190919173705.2181-4-mathieu.desnoyers@efficios.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 include/linux/sched/mm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 4a7944078cc35..8557ec6642130 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -362,6 +362,8 @@ enum {
 static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
 {
+	if (current->mm != mm)
+		return;
 	if (likely(!(atomic_read(&mm->membarrier_state) &
 		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
 		return;
 	sync_core_before_usermode();
-- 
2.20.1
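
[ Editor's note: for readers reviewing the backport, the helper reads as
follows with the patch applied. This is a reconstruction from the hunk
above plus the surrounding upstream context in include/linux/sched/mm.h;
the sync_core_before_usermode() call and the closing brace are
pre-existing context rather than part of this diff, and the comments are
added here for explanation. Treat it as a sketch, not a verbatim copy of
the 5.3 tree. ]

/*
 * membarrier_mm_sync_core_before_usermode(), as it reads after this
 * patch (reconstruction with explanatory comments added).
 */
static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
{
	/*
	 * The scheduler switched to a different mm: switch_mm() already
	 * provided the core serializing guarantees before the return to
	 * usermode, so there is nothing left to do. This early return is
	 * the check added by this patch.
	 */
	if (current->mm != mm)
		return;
	/*
	 * Same mm for prev and next: issue the core serialization only
	 * if this process registered for SYNC_CORE expedited membarrier.
	 */
	if (likely(!(atomic_read(&mm->membarrier_state) &
		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
		return;
	sync_core_before_usermode();
}

[ On x86, for instance, switch_mm() changes the page tables with a CR3
write, which is itself a serializing instruction; that is why the
cross-mm case needs no explicit sync_core and only the same-mm path must
issue one. ]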