From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Michael S. Tsirkin"
Subject: Re: [PATCHv3 0/2] vhost: a kernel-level virtio server
Date: Thu, 20 Aug 2009 20:03:38 +0300
Message-ID: <20090820170338.GA9014@redhat.com>
References: <20090813182749.GA6585@redhat.com> <4A8BB4EF.7030403@Voltaire.com> <20090819131048.GD3080@redhat.com> <4A8C01A2.6040207@voltaire.com> <20090819134512.GA3807@redhat.com> <4A8D50EF.6010208@voltaire.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org
To: Or Gerlitz
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:25621 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1754835AbZHTRFI (ORCPT ); Thu, 20 Aug 2009 13:05:08 -0400
Content-Disposition: inline
In-Reply-To: <4A8D50EF.6010208@voltaire.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Thu, Aug 20, 2009 at 04:34:39PM +0300, Or Gerlitz wrote:
> Michael S. Tsirkin wrote:
>> Yes. master
> okay, will get testing this later next week. Any chance you can provide
> some packet-per-second numbers (netperf udp stream with small packets)?
>
> Or.

If you do, maybe you should apply the following patch on top
(seems to save 2 atomics in about 50% of cases for me).

---

mm: reduce atomic use on use_mm fast path

When mm switched to matches that of active mm, we don't need
to increment and then drop the mm count. Making that conditional
reduces contention on that cache line on SMP systems.

Acked-by: Andrea Arcangeli
Signed-off-by: Michael S. Tsirkin

diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 9989c2f..0777654 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -27,13 +27,16 @@ void use_mm(struct mm_struct *mm)
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 EXPORT_SYMBOL_GPL(use_mm);
 
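
For context, this is the calling pattern the fast path helps: a kernel
thread that repeatedly borrows a userspace mm around copy_to/from_user(),
the way the vhost worker does. Below is a rough sketch only, not part of
the patch or of vhost: my_dev, my_worker and d->ubuf are made-up names,
the mm is assumed to have been taken with get_task_mm() at setup time,
and it assumes the mm/mmu_context.c / <linux/mmu_context.h> code that
this patch applies on top of.

/*
 * Illustration only -- not part of the patch.  A kthread worker in the
 * style of vhost's, showing the use_mm()/unuse_mm() pattern whose fast
 * path the change above trims.
 */
#include <linux/kthread.h>
#include <linux/mmu_context.h>
#include <linux/sched.h>
#include <linux/uaccess.h>

struct my_dev {
	struct mm_struct *mm;	/* taken with get_task_mm() when the fd is set up */
	void __user *ubuf;	/* user/guest buffer the worker services */
};

static int my_worker(void *data)
{
	struct my_dev *d = data;
	char tmp[64];

	while (!kthread_should_stop()) {
		/*
		 * Borrow the owner's address space so copy_from_user()
		 * works on d->ubuf.  unuse_mm() leaves active_mm pointing
		 * at d->mm (lazy TLB), so with the patch the
		 * atomic_inc()/mmdrop() pair is skipped on the next
		 * iteration unless this thread switched to some other mm
		 * in between.
		 */
		use_mm(d->mm);
		if (copy_from_user(tmp, d->ubuf, sizeof(tmp))) {
			/* fault handling elided in this sketch */
		}
		unuse_mm(d->mm);

		schedule();
	}
	return 0;
}

That back-to-back use_mm()/unuse_mm() on the same mm is where the
"about 50% of cases" comes from: only when the worker has run something
with a different mm in between does the slow path with the two atomics
still trigger.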