From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 5 Mar 2015 17:00:05 +0100
From: Frederic Weisbecker <fweisbec@gmail.com>
To: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Jason Low, Peter Zijlstra, Ingo Molnar, Linus Torvalds, Andrew Morton, Oleg Nesterov, Mike Galbraith, Rik van Riel, Steven Rostedt, Scott Norton, Aswin Chandramouleeswaran, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] sched, timer: Use atomics for thread_group_cputimer to improve scalability
Message-ID: <20150305160004.GE5074@lerouge>
References: <1425321731.5304.14.camel@j-VirtualBox> <20150305153506.GD5074@lerouge> <20150305155659.GD5773@linux.vnet.ibm.com>
In-Reply-To: <20150305155659.GD5773@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.23 (2014-03-12)
List-ID: <linux-kernel.vger.kernel.org>

On Thu, Mar 05, 2015 at 07:56:59AM -0800, Paul E. McKenney wrote:
> On Thu, Mar 05, 2015 at 04:35:09PM +0100, Frederic Weisbecker wrote:
> > So, in the case where we call that right after setting cputimer->running, I guess we are fine
> > because we just updated cputimer with the freshest values.
> >
> > But if we read this a while later, say several ticks further, there is a chance that
> > we read stale values, since we don't lock anymore.
> >
> > I don't know if it matters or not; I guess it depends on how stale the values can be and how much
> > precision we expect from posix cpu timers. It probably doesn't matter.
> >
> > But just in case, atomic64_read_return(&cputimer->utime, 0) would make sure we get the freshest
> > value, because it performs a full barrier, at the cost of more overhead of course.
>
> Well, if we are running within a guest OS, we might be delayed at any point
> for quite some time. Even with interrupts disabled.

You mean delayed because of the overhead of atomic_add_return(), or because of the stale values of the cputimer fields?