From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <1494858437.29205.26.camel@redhat.com>
Subject: Re: [PATCH] sched/numa: use down_read_trylock for mmap_sem
From: Rik van Riel <riel@redhat.com>
To: Vlastimil Babka <vbabka@suse.cz>, Ingo Molnar, Peter Zijlstra
Cc: Mel Gorman, linux-kernel@vger.kernel.org
Date: Mon, 15 May 2017 10:27:17 -0400
In-Reply-To: <20170515131316.21909-1-vbabka@suse.cz>
References: <20170515131316.21909-1-vbabka@suse.cz>
Organization: Red Hat, Inc
Content-Type: text/plain; charset="UTF-8"
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit

On Mon, 2017-05-15 at 15:13 +0200, Vlastimil Babka wrote:
> A customer has reported a soft lockup when running a proprietary
> intensive memory stress test, where the trace on multiple CPUs looks
> like this:
>
>  RIP: 0010:[]
>   [] native_queued_spin_lock_slowpath+0x10e/0x190
> ...
>  Call Trace:
>   [] queued_spin_lock_slowpath+0x7/0xa
>   [] change_protection_range+0x3b1/0x930
>   [] change_prot_numa+0x18/0x30
>   [] task_numa_work+0x1fe/0x310
>   [] task_work_run+0x72/0x90
>
> Further investigation showed that the lock contention here is
> pmd_lock().
>
> The task_numa_work() function makes sure that only one thread is
> allowed to perform the work in a single scan period (via cmpxchg),
> but if there's a thread with mmap_sem locked for writing for several
> periods, multiple threads in task_numa_work() can build up a convoy
> waiting for mmap_sem for read and then all get unblocked at once.
>
> This patch changes the down_read() to the trylock version, which
> prevents the build-up. For a workload experiencing mmap_sem
> contention, it's probably better to postpone the NUMA balancing work
> anyway. This seems to have fixed the soft lockups involving
> pmd_lock(), which is in line with the convoy theory.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Acked-by: Rik van Riel <riel@redhat.com>