From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Oleg Nesterov,
	Mathieu Desnoyers, "Peter Zijlstra (Intel)", Chris Metcalf,
	Christoph Lameter, "Eric W. Biederman", Kirill Tkhai,
	Linus Torvalds, Mike Galbraith, "Paul E. McKenney",
	Russell King - ARM Linux admin, Thomas Gleixner, Ingo Molnar,
	Sasha Levin
Subject: [PATCH 4.19 070/114] sched/membarrier: Call sync_core only before usermode for same mm
Date: Thu, 10 Oct 2019 10:36:17 +0200
Message-Id: <20191010083611.479840071@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191010083544.711104709@linuxfoundation.org>
References: <20191010083544.711104709@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Mathieu Desnoyers

[ Upstream commit 2840cf02fae627860156737e83326df354ee4ec6 ]

When the prev and next task's mm change, switch_mm() provides the core
serializing guarantees before returning to usermode. The only case
where an explicit core serialization is needed is when the scheduler
keeps the same mm for prev and next.

Suggested-by: Oleg Nesterov
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Peter Zijlstra (Intel)
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Eric W. Biederman
Cc: Kirill Tkhai
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Russell King - ARM Linux admin
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/20190919173705.2181-4-mathieu.desnoyers@efficios.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 include/linux/sched/mm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 0d10b7ce0da74..e9d4e389aed93 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -330,6 +330,8 @@ enum {
 
 static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
 {
+	if (current->mm != mm)
+		return;
 	if (likely(!(atomic_read(&mm->membarrier_state) &
 		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
 		return;
-- 
2.20.1
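
[Editor's note] For context on the guarantee being maintained here:
MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE promises callers (typically
JITs rewriting code that other threads may be executing) that every
thread of the process runs a core-serializing instruction before it
next returns to usermode. The helper patched above is the scheduler's
side of that promise. Below is a minimal, hypothetical userspace
sketch of the command pair involved; it is illustrative only and not
part of this patch.

/*
 * Minimal, hypothetical sketch of a userspace caller relying on the
 * sync-core guarantee. Requires membarrier(2) sync-core support
 * (kernel >= 4.16). There is no glibc wrapper, so syscall(2) is used.
 */
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	/* Register once per process before using the expedited command. */
	if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, 0)) {
		perror("membarrier register");
		return 1;
	}

	/* ... modify code that other threads may be executing ... */

	/*
	 * After this returns, every thread of the process is guaranteed
	 * to execute a core-serializing instruction before it next runs
	 * user code, so no thread keeps executing stale instructions.
	 */
	if (membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, 0)) {
		perror("membarrier sync-core");
		return 1;
	}
	return 0;
}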
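
[Editor's note] For the call-site context: in the 4.19 tree this
helper runs from finish_task_switch() in kernel/sched/core.c, when the
scheduler drops an mm it had kept a reference on across the switch.
The sketch below is simplified from memory of the 4.19-era code and is
not part of this patch; unrelated bookkeeping is omitted.

/*
 * Simplified sketch of the 4.19-era caller in kernel/sched/core.c;
 * only the membarrier-relevant fragment is shown.
 */
static struct rq *finish_task_switch(struct task_struct *prev)
{
	struct rq *rq = this_rq();
	struct mm_struct *mm = rq->prev_mm;

	rq->prev_mm = NULL;
	/* ... */
	if (mm) {
		/*
		 * With this patch, the helper returns early when
		 * current->mm != mm: in that case switch_mm() already
		 * core-serialized the CPU on the way in, so only the
		 * same-mm case still needs an explicit
		 * sync_core_before_usermode().
		 */
		membarrier_mm_sync_core_before_usermode(mm);
		mmdrop(mm);
	}
	/* ... */
	return rq;
}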